DATE:           September 8, 2011

TO:             Dean Neikirk, Emily Richardson, Michael Becker

FROM:           Josh Schulte, Pritesh Solanki, Jonathan Pham, Mesele Bedasa, Po-Han Wu

SUBJECT:        A design implementation plan that describes the design problem and the project
                solution organized to solve it.


INTRODUCTION
The purpose of this design implementation plan is to present our plan for building a multi-sensor
robotics platform prototype. The project is sponsored by Jacek Stachurski of Texas Instruments (TI). TI
has given our team the goal of building an autonomous robot that can perform an intruder detection
task and play a laser tag game. The robot will be able to move without a human controller while
avoiding obstacles in its path. These tasks will be implemented using multiple sensors, a mobile
platform, and a microcontroller.

Our team consists of Jonathan Pham, Po-Han Wu, Josh Schulte, Pritesh Solanki, and Mesele Bedasa.
Our implementation of the project will help TI showcase the capabilities of their Beagleboard.
Since the platform will use a variety of sensors and will be very software intensive, it will
demonstrate the power and performance capabilities of the board. Dr. Dean Neikirk, our faculty
mentor, specializes in sensors and will assist our team with designing and implementing
sensor-related components on the platform. The project will also be fully open source,
meaning all of the platform’s documentation and code will be available for future teams to modify and
improve.


This document contains a description of the problem background, a description of the design, and an
implementation plan. The problem background defines our problem, which is showcasing the
Beagleboard, and includes a description of the solution, specifications, and deliverables. The
description of design contains a high-level discussion of our solution, broken down into four
subsystems: object recognition, movement, speech recognition, and aim-and-shoot. This section also
covers the individual modules and describes the overall readiness for implementation. The
implementation plan discusses the schedule of activities, division of labor, required materials, and
budget. It includes a project schedule that maps a 14-week time span for producing a prototype robot
and estimates the resources needed to finish the project, as well as a bill of materials containing the
projected cost of materials.

PROBLEM BACKGROUND

The purpose of this project is to create a multi-sensor robotics platform that meets the requirements
specified by our sponsor, Texas Instruments. Our group will need to integrate the TI processor
into a mobile robot platform built from an off-the-shelf robot. This integrated robot will have a
multitude of sensors, including an infrared sensor, a microphone, a camera, and touch sensors. All
of these sensors can be utilized for specific tasks and games. Our goal is for the robot to complete
one game and one task with the capability of speech recognition. Our group will implement a
platform that is fully expandable, allowing users to add and remove different sensors for various
tasks and games that the user defines.

In order for our project to be considered a success, our design must address the following requests
from our customer, Texas Instruments.

         Use a processor from Texas Instruments:
          Our solution is to use TI’s Beagleboard as the main processor. The Beagleboard has a
          600 MHz superscalar ARM Cortex processor with 128 MB of RAM. This should be
          sufficient to perform the task, based on evidence from similar existing projects.

         Perform one task and one game:
          We chose to perform intruder detection as our task and laser tag as our game. We chose
          these two because they have similar requirements, including object recognition and room
          traversal. This overlap will help ensure that we have enough time to complete the project.

         Use an off-the-shelf robot platform:
          We chose the Roomba as our platform because it is circular and has three wheels, which
          make for stable movement and accurate turning. Also, UT's Pharos Lab already has
          existing code and assembled Roombas available for checkout.

         Control with speech recognition:
          We will integrate the speech recognition program supplied by our sponsor, Jacek
          Stachurski, into our robotic system.

         Use multiple sensors as input:
          Based on our task and game, we will need an infrared sensor to avoid objects, a camera
          for object recognition, and a microphone for voice commands. We chose the Kinect as
          our sensor because it can fulfill all of these requirements, and with its SDK we should be
          able to complete the task in time.

Operational Requirements

The robot platform should be able to traverse a terrain using wheels, detect objects through a
camera sensor, recognize key words in human speech using a microphone, and aim-and-shoot at a
target with a laser. These operational requirements will be tested for efficiency, timing, and
accuracy during the robot's basic movement, intruder detection mode, and the laser tag game.

Deliverables

The deliverables for the multi-sensor robot project will include a prototype robot, user manual,
programmer manual, software code, and mechanical design. All these materials will serve the
purpose of helping the user control the robot and further develop it.

The most important deliverable is a vehicle platform that has a microcontroller, camera, infrared
sensors, microphone, and laser emitting device properly attached and interfaced to it. The prototype
should be able to demonstrate the ability to perform basic mechanical functions, play the laser tag
game, and run the intruder detection mode.

The user manual is a quick guide for the general-purpose user. It will contain features and user
controls. First, there will be a list of the implemented features with a simple description of each.
Second, there will be instructions for all the controls on the robot, for example, the on/off button,
volume control, task switching, and voice commands.

The programmer’s manual is for people who are interested in further developing the robot. This
manual will include features, user controls, static test results, limitations, and re-programming
information. The features and user controls will be similar to the user manual, but written in greater
detail. We will explain how and why we implemented each feature, rather than just how to use it.
For the static test results and limitations, we will supply all measured data acquired from the robot.
The maximum speeds at which the robot can move, rotate, and lift its arm, as well as the minimum
angle for the motor, will be recorded. Furthermore, there will be a list of sensors. The maximum range
of each sensor will be measured and reported. The data gathered will help the programmer to better
understand the capabilities of the robot.

This is an open source project; all the code used in the project will be provided to the programmer.
Having the software open source allows users to understand it and modify it for their own
applications. There will be hardware diagrams showing how each motor and sensor is connected to
the robot. A list of parts detailing all of the components used in the platform will also be included.
These diagrams and lists will inform users of how the components are put together.

DESCRIPTION OF DESIGN

Since EE 364D, we have decided to move forward with our final design. In this section, we will discuss
the four main subsystems of this design for our robotics platform: object recognition, movement,
speech recognition, and aim-and-shoot, as seen in Figure 1.0 below. Each subsystem will interact
with different components of the platform to achieve the desired task. The robot will be activated
through the user's voice. From the voice command, the robot will decide which task it is required to
carry out (i.e., laser tag or intruder detection). If the laser tag task is chosen, the robot will
commence movement towards the infrared emitting device that another user will wear. The robot
will utilize the object recognition subsystem to identify the correct target. Once it has reached the
target, the aim-and-shoot subsystem will calculate the angle at which to activate the infrared device
and shoot the target. If the intruder detection task is chosen, the movement subsystem will make the
robot traverse the room. It will then use the object recognition subsystem to recognize potential
collisions and feed data back to the movement subsystem to adjust its path. The object recognition
subsystem will also identify any intruders in the room.

                              Figure 1.0 System Design Block Diagram

Object Recognition

The Object Recognition subsystem will be used to perform two objectives: avoiding barriers and
identifying objects. Barrier avoidance will be used when the robot is roaming an environment. The
subsystem will take input about nearby objects from the Kinect camera and infrared sensors. With
these measurements, the vehicle will initiate its collision avoidance program. In object
identification, the robot will identify a target against the background by using its video camera. We
will classify a target by special characteristics, such as a specific color. The robot will be
programmed to identify these characteristics and will again use the infrared sensor and camera to
calculate distance. This subsystem will have a key role in the entire system. The outputs of the
system will be signals to the motor to move the robot in the correct path or towards the target.
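
As a rough illustration of the approach, the sketch below (in C, the project's chosen language) scans
an RGB frame for pixels matching an assumed target color, computes their centroid, and reads the
depth map at that point to estimate distance. The frame layout, color threshold, and function names
are our own placeholders, not types from the Kinect SDK.

    #include <stdint.h>

    #define W 640                 /* assumed Kinect frame size */
    #define H 480

    /* Crude "mostly red" test; the target's special characteristic
       (a specific color) is an assumption for illustration. */
    static int is_target_color(uint8_t r, uint8_t g, uint8_t b)
    {
        return r > 150 && g < 80 && b < 80;
    }

    /* rgb: W*H*3 bytes (R,G,B per pixel); depth_mm: W*H depth values in
       millimeters. Returns target distance in mm, or -1 if none found. */
    int find_target_distance(const uint8_t *rgb, const uint16_t *depth_mm)
    {
        long sum_x = 0, sum_y = 0, count = 0;
        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                const uint8_t *p = rgb + 3 * (y * W + x);
                if (is_target_color(p[0], p[1], p[2])) {
                    sum_x += x; sum_y += y; count++;
                }
            }
        }
        if (count < 100)          /* too few matching pixels: no target */
            return -1;
        int cx = (int)(sum_x / count), cy = (int)(sum_y / count);
        return depth_mm[cy * W + cx];   /* depth at the color centroid */
    }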

Movement

The movement system’s objective is to move a platform with wheels controlled by motors. The
movements that are implemented will be forward, reverse, turn right, and turn left. The system will
take inputs from the Beagleboard in response to other subsystems. For example, the object
recognition subsystem's outputs will be signals to the motors for path corrective movements. The
robot will move forward and backward at a constant speed. When turning left or right, the platform
will only turn at 30, 60, 90, and 180 degree angles. Once the robot has processed the available
inputs for a given task, it will begin to move around and search for a target. First, the robot will go
straight until it detects a barrier or object blocking its path (by using the object recognition
subsystem). When the barrier is within a certain range, it will turn 90 degrees clockwise and go
straight again. However, if it also detects an object on the right side, it will rotate 180 degrees.
Thereafter, if the robot still detects a barrier, it will turn 90 degrees clockwise. The robot will then
move backwards, checking right and left every 2 feet until it finds a way out. The outputs of this
subsystem will be signals to the motors on the Roomba that will allow the entire platform to move.
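
A minimal sketch of this search-and-avoid rule is given below. The sensor and motor helpers are
hypothetical placeholders for the eventual Roomba and Kinect interfaces, and the trigger distances
are assumed values; only the 2-foot check interval comes from the description above.

    #include <stdbool.h>

    #define CHECK_INTERVAL_CM 61      /* roughly 2 feet */

    /* Hypothetical hardware interface, to be backed by the Roomba
       drivers and the object recognition subsystem. */
    bool blocked_ahead(void);
    bool blocked_left(void);
    bool blocked_right(void);
    void turn_degrees(int deg);       /* positive = clockwise */
    void drive_backward_cm(int cm);

    /* One step of the avoidance logic described above. */
    void avoid_obstacle(void)
    {
        if (!blocked_ahead())
            return;                   /* path clear: keep going straight */

        if (!blocked_right()) {
            turn_degrees(90);         /* barrier ahead, right clear: turn cw */
            return;
        }
        turn_degrees(180);            /* right side blocked too: about-face */
        if (blocked_ahead())
            turn_degrees(90);         /* still blocked: one more 90 cw */

        /* Boxed in: back out, scanning both sides every ~2 feet. */
        while (blocked_ahead() && blocked_left() && blocked_right())
            drive_backward_cm(CHECK_INTERVAL_CM);
    }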

Speech Recognition

Our major user input will be voice commands. These voice commands will be picked up through
the Microsoft Kinect's microphone and processed by the Beagleboard. The robot will execute
instructions based on what was said. Our team plans to incorporate five different voice commands:
“PATROL”, “LASER”, “GO”, “STOP”, and “END”. The robot will initially be in a standby state
when turned on. In this state, the robot will accept the commands “PATROL” and “LASER”. When
“PATROL” is said, the robot will automatically move around the environment searching for an
intruder. When “LASER” is said, the robot will go into laser tag mode, in which it searches for a
target and shoots at it with a laser gun. While executing a task, the robot will only accept the
commands “GO”, “STOP”, and “END”. The commands “GO” and “STOP” will start and stop the
current task, respectively. The last command, “END”, will be accepted at all times; it forces the
robot to stop all movement and return to the initial standby state.
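
The command logic above amounts to a small state machine. Below is a minimal sketch in C,
assuming the speech recognizer delivers one recognized keyword at a time; the type and function
names are illustrative and are not taken from the TIesr API.

    typedef enum { STANDBY, PATROLLING, LASER_TAG } robot_state_t;
    typedef enum { CMD_PATROL, CMD_LASER, CMD_GO, CMD_STOP, CMD_END } command_t;

    static robot_state_t state = STANDBY;
    static int task_running = 0;

    void handle_command(command_t cmd)
    {
        if (cmd == CMD_END) {             /* END is accepted in every state */
            state = STANDBY;
            task_running = 0;
            return;
        }
        if (state == STANDBY) {           /* only PATROL/LASER start a task */
            if (cmd == CMD_PATROL) { state = PATROLLING; task_running = 1; }
            if (cmd == CMD_LASER)  { state = LASER_TAG;  task_running = 1; }
        } else {                          /* during a task, only GO/STOP apply */
            if (cmd == CMD_GO)   task_running = 1;
            if (cmd == CMD_STOP) task_running = 0;
        }
    }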

Aim-and-Shoot

When the robot is in laser tag mode, it will try to shoot the laser gun at the target. After the object
recognition subsystem detects that the target is in range, the robot will adjust its arm level based on
the target's location. The arm will only need to adjust for height along the Z-axis, since the robot
will already be facing the object directly. Once the robot adjusts to the right angle, it will
shoot the target with an infrared device. This subsystem will accept inputs about the target from the
object identification system to determine the location, altitude, distance, etc. The system will then
output to the infrared device, which will send an infrared signal to the receiver of the target.
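
Because the robot will already be facing the target, the arm adjustment reduces to a single pitch
angle. The sketch below shows one way to compute it from the target height and distance reported
by the object recognition subsystem; the arm-height constant and function name are assumptions
for illustration.

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define ARM_HEIGHT_M 0.25   /* assumed height of the arm pivot above the floor */

    /* Returns the pitch angle in degrees above horizontal needed to aim at
       a target target_height_m above the floor and distance_m away. */
    double aim_pitch_degrees(double target_height_m, double distance_m)
    {
        double dz = target_height_m - ARM_HEIGHT_M;
        return atan2(dz, distance_m) * 180.0 / M_PI;
    }

For example, a target receiver 1.25 m above the floor and 2 m away would call for a pitch of about
26.6 degrees.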


User Interface

With the use of multiple sensors, our robotics platform will offer consumers many ways to interact
with the system. Since the robot is semi-autonomous, the only user interfaces needed are voice
communication, a remote control, and push button controls.

The main communication interface with our robot will be voice. By using an array of microphones,
the robot should be able to distinguish a command from ambient noise. Users can use basic
pre-programmed commands to initiate the laser tag game or the intruder detection task. Each
command will begin with a precondition word that the robot recognizes, followed by the command
word itself. In addition, the user will be able to program new commands via a remote. Push buttons
located directly on the robot will allow the user to turn the platform's power on and off and directly
switch the robot into intruder detection or laser tag mode. A simple rocker switch will be interfaced
to control power from the battery.


An important feature that our robotics system will offer the user is its modular design and
versatility. The robot will be designed so that the user can customize its features and capabilities.
This enables the robot to evolve at a rapid pace; there is virtually no limit to what the user can
program the robot to do on the fly. Ideally, our robotic system will be able to select and obey
commands from specific users and have a remote control range of at least 50 feet.

Hardware Description

The hardware for our project will consist entirely of items already produced and available on the
market: a Microsoft Kinect, a Roomba, and a Beagleboard. In addition, we will need an infrared laser
gun, cables, and a few minor parts to aid us in integrating the three main components. Hence, there
is no true hardware production required for our project, thereby mitigating risks and increasing the
probability of success.

Software Description

By contrast, we will need to develop our software almost entirely from scratch. Since our particular
project has not been attempted before, there is limited software available for our specific needs. We
will need to develop software to recognize voice commands, communicate with the hardware to
physically move the robot, capture and process input feeds from the Kinect, decipher collected
information to determine the next logical step for our robot to perform, recognize when to terminate
the requested tasks, and potentially more. Fortunately, we will have access to TI's voice recognition
software that we can incorporate into our product, as well as Microsoft's newly released Kinect
SDK. These two tools should drastically reduce the time and effort devoted to software
development and leave us much more time for integration and testing of the final product. These
software systems will be the means by which we control the physical hardware of the robot to move
as well as capture and process data from the Kinect.

We intend to program our robot entirely in C/C++, primarily because the SDK is written for this
language and because of the low resource requirements of the C programming language. This
should work well, as we have several knowledgeable C/C++ programmers on our team.




Overall readiness

We have not changed any aspect of our design since our solution was proposed last semester. Thus,
our concerns for our design also remain unchanged; namely, precisely how to power all the
hardware components remains uncertain. We have a few ideas of how to proceed, but since this
project has never been done before, some uncertainty remains. We have done research and
postulated the best methods of proceeding, so from this point all we can do is accept the risks.

IMPLEMENTATION PLAN

The multi-sensor robotics project schedule will be divided into five stages: Component Research,
Integration I, Integration II, Final Combination, and Adjustment and Beyond. A Project Flow
Diagram is shown in Appendix B.


Stage 1: Component Research
This is the stage we are currently in; it will last about one and a half weeks longer, placing the
ending date at September 19, 2011. Each team member is currently assigned to understand and
implement one component of the project. One person will investigate the Roomba robot and start to
implement the robot's motion. Two members will share one Kinect and work on the RGB and depth
cameras for object recognition and avoidance. Another person will have the Beagleboard and install
the Linux operating system and Microsoft Software Development Kit (SDK) for the Kinect. The
last member will have one Kinect and work on implementing voice recognition. We will hold a full
group meeting on September 21, 2011, to check that everyone is on track and decide whether we
can proceed to the next stage.


Stage 2: Integration I
This stage will take three weeks and is scheduled to be completed on October 10, 2011. If
everything goes according to schedule, we should have basic implementations for the Roomba’s
motion, object recognition, depth calculation, and operating system. In this stage we will divide into
three groups. The first group will have two team members with the Kinect, Beagleboard, and
Roomba. Their goal is to implement the robot's motion so it can navigate a room without crashing
into any objects. The second group will also have two team members with a Kinect and
Beagleboard. Their task is to combine object recognition and depth calculation to accurately detect
a target's location. The final group will share a Kinect with either of the groups mentioned before
and try to complete the implementation of voice commands. As with the previous stage, there will
be a stage completion meeting on October 12, 2011 to decide if we can move on to the next stage.


Stage 3: Integration II
This stage will take two weeks and will finish on October 24, 2011. In this stage, our team will
separate into two groups. The first group will consist of three team members with the Roomba,
Kinect, and Beagleboard; they will work on implementing the intruder detection task, which is a
combination of moving around an area without crashing and searching for a target. The second
group will have two members, a Kinect, and a Beagleboard; they will work on the main program:
the user interface (voice commands) and the alarm system. As usual, there will be a stage
completion meeting, held on October 26, 2011.


Stage 4: Final Combination
This phase will take three weeks and will be completed on November 14, 2011. Three of the
team members will finalize the intruder detection program, and the other two members will build
the hardware for the laser tag game, which includes a robot arm and a frequency emitting station.
The stage completion meeting will be held on November 16, 2011.


Stage 5: Adjustment and Beyond
For the final stage, our goal is to make final adjustments, prepare the written work, and add as
many extra features as possible. The due date will be the end of the semester, November 28,
2011. In this stage, two members will prepare for the open house, which will be held at the end of
the semester. The other members will continue to work on completing the laser tag game,
specifically the aiming and shooting portion. For a more detailed schedule, please refer to the
Gantt chart in Appendix A.

                                    Table 1.0 Division of Labor

                                   Stage 1: Component Research
   Task                                                   Team Member(s) Assigned
   Investigate Roomba                                     Mesele
   Work on Kinect                                         Jonathan, Josh
   Beagleboard                                            Pritesh
   Voice Recognition                                      Po-han

                                       Stage 2: Integration I
   Task                                                   Team Member(s) Assigned
   Implement Robot's Motion                               Pritesh, Mesele
   Detect a Target's Location                             Jonathan, Josh
   Implement Voice Commands                               Po-han

                                      Stage 3: Integration II
   Task                                                   Team Member(s) Assigned
   Implement Intruder Detection                           Pritesh, Mesele, Po-han
   User Interface/Alarm System                            Jonathan, Josh

                                    Stage 4: Final Combination
   Task                                                   Team Member(s) Assigned
   Finalize Intruder Detection Program                    Jonathan, Josh, Po-han
   Hardware for Laser Tag                                 Pritesh, Mesele

                                 Stage 5: Adjustment and Beyond
   Task                                                   Team Member(s) Assigned
   Prepare for Open House                                 Jonathan, Mesele
   Complete Laser Tag Game                                Jonathan, Josh, Po-han

Project Resources

Our project’s resources can be broken down in terms of the major components of the project. These
components include the Roomba, Beagleboard, Kinect, and speech recognition software. For the
Roomba, we will utilize the 5th floor lab, which already has the equipment we need to interface our
Roomba with the Beagleboard. We will utilize many open source resources on the web when
writing the drivers for Roomba operation. The Beagleboard, as well as many of its related
resources, will come from TI. We plan on using TI’s technical documents that are provided with the
Beagleboard to understand how we can interface it with the other components of our system. The
Kinect will be provided by our faculty mentor, and many of its resources will be open source
content from the internet. For starters, we have begun using the beta Kinect SDK that was released
by Microsoft this summer. We will also use other websites that demonstrate how to use the Kinect,
and other individuals' projects as inspiration for our own. For embedded speech recognition we will
use the TIesr software, which will be provided to us by our TI sponsor. Our team will use a
combination of open source projects and forums, our TI sponsor, who specializes in speech
recognition software, and our faculty mentor to progress through our project.


Project Materials

The following list consists of the seven main components and materials which our group will
require to proceed with this project. Each listed item will consist of the item itself followed by how
we plan to obtain it.

      1. Microsoft Kinect: will be provided by our faculty mentor
      2. Microsoft Kinect SDK: beta version officially released on the Microsoft website
      3. TIesr: embedded speech recognition software provided by TI
      4. Beagleboard: provided by TI
      5. Roomba: provided by 5th floor lab
      6. Infrared Laser Gun: Amazon
      7. Various circuit components: Digikey

Project Costs

The following table lists our project items and their associated costs. These costs were gathered
from various online sources. The travel expense to Dallas and the circuit components are both
variable expenses and have been estimated.

                       Table 2.0 Project Items and Associated Costs

     Quantity   Description                              Unit Cost         Cost
        1       Kinect System [1]                          $150.00      $150.00
        1       Roomba [2]                                 $159.94      $159.94
        1       Beagleboard [3]                            $125.00      $125.00
        2       Mini DIN 7 serial/USB converter             $20.00       $40.00
        1       12 V 1 A regulator                           $1.59        $1.59
        1       Infrared Laser Gun [4]                      $17.56       $17.56
        1       Travel to Dallas to meet TI mentor         $100.00      $100.00
       N/A      Circuit components                          $10.00       $10.00


Significant Threats

The most significant problem that could cause our team to miss these projections would be part
malfunction or destruction. If we burn out a Beagleboard, Roomba, or Kinect by using too much
power or the wrong components, then our costs would increase dramatically. To reduce the risk of
these occurrences, we can test our circuitry with tools such as oscilloscopes, multimeters, and
SPICE. By using these tools we can verify proper power outputs.
We can also reduce potential blowouts by including safety margins in our design. For example, if
equipment has a maximum rating of 50 watts, then we would design our circuitry to output only
40 watts, ensuring a 10-watt buffer. Another way to reduce risk is to verify our designs with our
faculty mentor, who may be able to spot any shortcomings.
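
As a small illustration, the check below encodes this derating rule; the 20% margin corresponds to
the 50-watt-to-40-watt example above, and all names are our own.

    #include <stdbool.h>

    #define SAFETY_MARGIN 0.20    /* fraction of the max rating held in reserve */

    /* Returns true if a planned load leaves the safety buffer intact.
       within_safe_power(40.0, 50.0) -> true; (45.0, 50.0) -> false. */
    bool within_safe_power(double design_watts, double max_rating_watts)
    {
        return design_watts <= max_rating_watts * (1.0 - SAFETY_MARGIN);
    }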

CONCLUSION

This report presents the system design implementation of the multi-sensor robotics platform project;
its purpose is to give the reader a thorough understanding of how we will implement a prototype
based on our research. The goals of this project are to implement an autonomous robotics platform
that is able to roam corridors, detect intruders within a set perimeter, and participate in a laser tag
game. The robot will consist of a Roomba that carries all of the components, a Microsoft Kinect
whose multiple sensors collect inputs, and a Beagleboard that controls the entire system. Since
similar design implementations by other organizations have been successful, the technical
feasibility of our design is high. The problem background addressed the criteria by which our
project will be considered successful. The goals consist of using a TI processor to control a
multi-sensor robotic platform. The user interface section discussed the ways of interacting with the
system: interaction through voice commands is the primary goal, but the interface will also include
push buttons to control the platform. Lastly, the implementation plan breaks our project into five
key stages: Component Research, Integration I, Integration II, Final Combination, and Adjustment
and Beyond. We have little time to complete many objectives, but our resources and research make
the project feasible.


REFERENCES


[1] Amazon.com. (2011). Kinect [Online]. Accessed 4 May 2011. http://www.amazon.com/Kinect-
Sensor-Adventures-Xbox-360/dp/B002BSA298/ref=sr_1_1?ie=UTF8&qid=1304523005&sr=8-1


[2] Amazon.com. (2011). Roomba [Online]. Accessed 4 May 2011.
http://www.amazon.com/iRobot-44001-Roomba-Vacuum-Cleaning-Robot/dp/B002MSO9OQ


[3] Liquidware.com. (2011). BeagleBoard [Online]. Accessed 4 May 2011.
http://www.liquidware.com/shop/show/BB-C4/BeagleBoard+C4


[4] Amazon.com. (2011). Infrared Laser Gun [Online]. Accessed 4 May 2011.
http://www.amazon.com/Wii-Infrared-Laser-Magnum-Gun-Nintendo/dp/B000VSA5Y2




APPENDIX A – GANTT CHART

[Gantt chart image: detailed 14-week schedule of the five project stages described in the
Implementation Plan.]

                         APPENDIX B – Project Flow Diagram



[Project flow diagram, rendered as text:]

1. Identify goals and requirements. Develop a preliminary solution for a robot prototype.

2. Research components and options for implementing the design (e.g., multiple sensors or the
   Kinect), and plan the implementation of the most important parts of the project, checking
   whether work can proceed in parallel.

3. Begin Integration I: build the platform and software, refining software and hardware.
   OK? If not, keep refining; if so, proceed.

4. Begin Integration II: create functions such as intruder detection and the user interface,
   testing and refining.
   OK? If not, keep testing and refining; if so, proceed.

5. Final Combination: implement laser tag, testing and refining.
   OK? If not, keep testing and refining; if so, proceed.

6. Adjustment and beyond: fine tuning and prepping for the demo.

FINAL DEMONSTRATION

				