Ultrasonic Mapper Garment


                   Project done in
    Partial fulfillment of the requirements for
              Course No: ECE 5984
    SS: Wearable and Ubiquitous Computing



                   Spring 2003



             Under the guidance of


                 Dr. Tom Martin
               Assistant Professor
Department of Electrical and Computer Engineering
                  Virginia Tech

                        By

               Madhup Chandra
            Shekhar Agrawal Sharad
              William O. Plymale
Problem Statement
         The goal of this project was to model and create an electronic-textile garment
that maps a building as the user walks through it using ultrasonic emitters and detectors,
and perhaps other sensors. The garment should be capable of being used indoors to
create a map of the areas of a building that a user has passed through. The system
should account for the movement of the user and should not require the user to stand
still. The developed model takes as input a floor plan and a path through that floor plan,
and outputs the garment's map of the area along the path. The model also accounts for
the physical properties of the ultrasonic sensors and addresses three key issues: (1) the
algorithms for mapping and feature detection; (2) the number of sensors, their placement,
and the effect of placement on measurement accuracy; and (3) the information needed
from the user in order to generate the map.

Literature Review

Reviewing the project description of the mapper garment, we determined the garment’s
requirements closely matched those of an autonomous robot navigating, or mapping, an
unknown space. Using Google and digital libraries such as IEEE Xplore and CiteSeer,
we searched the literature on topics such as “mapping robot”, “ultrasonic sensors”, and
“line fitting”. The articles we discovered tended to fall into five categories: robot
navigation and obstacle avoidance, room mapping robots, line fitting and segmentation
algorithms, sensor characteristics, and feature extraction. Over the course of the term,
team members identified the articles that contributed to the mapper garment project.

Gregory Dudek and others at McGill University in Canada describe strategies for
accurately modeling the typical behavior of an ultrasonic sensor used in robot
navigation. Their model accounts for multiple reflections of the sonar signal between
transmission and reception[1]. Mark Legg, a student at the University of York, uses a
multi-receiver ultrasonic sensor array to investigate ways of improving the poor positional
resolution of detected obstacles. His technique involves using arrays of multiple
ultrasonic receivers and transmitters to measure and analyze multiple Time-Of-Flight
(TOF) signals from an obstacle[2]. At the University of Michigan, Cao and Borenstein
discuss the propagation patterns of Polaroid ultrasonic sensors. From detailed
experimentation, they attempt to narrow the bandwidth of this ultrasonic sensor using
phased arrays[3].

Rodney Brooks at MIT’s Artificial Intelligence Lab has developed map making and
navigational algorithms used by mobile robots operating in unstructured domains such
as hotel and office cleaning operations[4]. Cahut, Valavanis, and Delic’s research
involves analysis of a set of range readings collected by a ring of sensors on a mobile
robot. The readings are correlated to acquire a 2-D map of the robot’s environment.
The map is continuously enhanced via matching and update algorithms as new data
points are collected by the robot while in motion[5]. Kleeman and Kuc of Yale University
present a novel design approach for ultrasonic sensor placement intended to minimize
the correspondence problem of associating different receiver echoes from multiple
targets[6]. The research of Thomas Rofer defines an approach to generate consistent
maps in real-time using laser range sensors. The robot in this study performs room
mapping “on-the-fly” allowing the resulting maps to be used for self-localization[7].
Akihisa Ohya and others at the University of Tsukuba study map construction algorithms
using ultrasonic sensing. This paper contains a trade-off description of ultrasonic versus
laser range sensors[8]. Using multiple sensor types (proximity, light, compass, camera),
Phillip Machler analyzes the fusion of sensor inputs to develop mapping algorithms that
a robot uses to determine routes without prior information[9]. The research work
performed by Alberto Elfes of Carnegie-Mellon University had the greatest influence on
our project. Elfes’ system uses sonar range data to build a multileveled description of a
robot’s surroundings. Ultrasonic sensor readings are interpreted using probability
profiles to determine empty and occupied areas. Range measurements from multiple
points of view are integrated into a sensor-level sonar map[10].

Borenstein and others review mobile robot navigation and positioning technologies.
Categories described include odometry, inertial navigation, magnetic compasses, active
beacons, global positioning systems, landmark navigation, and model matching[11].
Edlinger and von Puttkamer describe the exploration components of an autonomous
mobile robot for indoor applications. They developed a method for the autonomous and
systematic construction of a 2D map of the robot’s environment[12]. Gutmann and
Schlegel evaluate different self-localization approaches for robot navigation in indoor
environments[13]. The research of Hoppenot, Colle, and Barat focuses on mobile robot
localization. To solve the localization problem, they segment the collected ultrasonic
image by applying a Hough transform; the resulting segments are matched with prior
room information to determine the robot’s location[14]. Pfister and others introduce a
“weighted” matching algorithm to estimate a robot’s planar displacement by matching
two-dimensional ultrasonic range scans[15]. Pose estimation is a fundamental
requirement of a mobile robot, enabling it to position itself within its environment.
Shaffer, Gonzalez, and Stentz compare two 2D pose estimation algorithms:
feature-based and iconic[16]. Weiß and others use optical range finder scans to position
a robot in a space. The algorithms they developed match and cross-correlate scans
from different locations to find the translational and rotational displacement of the
robot[17].

Pavlin and Braunstingl’s research involves context-based feature extraction with
Polaroid ultrasonic transducers. Sampled raw data are clustered in such a way as to
reduce the sonar’s angular uncertainty, allowing accurate determination of positions
relative to the robot[18].

T. Darrell’s MIT Course 6.801/866 presents concepts of segmentation and line fitting
covering topics such as Hough transform, iterative fitting, and background
subtraction[19]. R. Unger at McGill University discusses a clustering technique that can
be used in defining straight lines (walls) in data points collected by the mapper garment.
This technique is based on the Spheres of Influence Graph[20]. Pfister and others
introduce useful algorithms for creating line-based maps from sets of dense range data
that are collected by a mobile robot from multiple poses[21]. Matthew Ricci’s thesis on
Autonomous Vehicles contains a useful and complete description of data manipulations
used to fit multiple lines to data points returned by a mapping robot[22].




User’s Guide
        At the outset, the mapper garment model simulates the mapping exercise by
generating a map of the room through which the user walks. The user walks from one
marker to another, and at each marker the model obtains the range to all visible objects.
The methodology entails computing a probability over the volume swept by each sensor
placed around the user's body; this probability indicates which parts of the room are
empty and which may be occupied. The readings from all the sensors are merged to
obtain the final probabilities for the empty and probably occupied portions of the room,
and these probabilities are then manipulated to obtain the actual occupied areas of the
room. A sketch of one possible merging rule is given below.
        The simulation is carried out on Windows using Microsoft Visual C++ 6.0 and
MATLAB. The various parameters can be adjusted in the config.h file, including, among
others, the number of sensors used, the room dimensions, the starting point of the user,
the markers (the future positions of the user), and the angle of the path traversed. An
example config.h file is shown in Figure 1.

Tutorial to run the Simulation:
       The following steps need to be followed in order to generate the map of the room
for each user location.

Step 1: Load the workspace <workspace file name> using Visual C++ 6.0.

Step 2: Adjust the values as appropriate in the config.h file.

Step 3: Compile and run the program: <program.exe name>

Step 4: Launch MATLAB and run the < m file name > file to obtain the results for a
        given user position.


 // User Information
 #define NUM_SENSORS 24     // Number of sensors on the waist
 #define ROOM "wall1"       // The room to be used (specify in double quotes)
 #define START_X 120        // The starting X position of the user
 #define START_Y 250        // The starting Y position of the user
 #define THETHA 270         // The orientation of the user

 // Ultrasonic characteristics
 #define ROW     241        // The maximum rows of the canvas
 #define COLOUMN 501        // The maximum columns of the canvas
 #define OMEGA   30         // The main lobe of the ultrasonic beam
 #define EPSILON 3          // The error in the range measurements
 #define RMIN    27         // Minimum range detected by the ultrasonic sensor
 #define RMAX    914        // Maximum range detected by the ultrasonic sensor

                Figure 1. Sample Config.h file for Mapper Garment Model




Mapper Garment Design Issues

In designing and modeling the ultrasonic building mapping garment, our goal was to
investigate and resolve the following design issues:
    • What number of ultrasonic sensors is required to accurately map a space, and
        how should the sensors be positioned on the wearer’s body?
    • In addition to ultrasonic sensors, what other types of sensors are required to
        accurately map a space?
    • What constraints, if any, does the mapping system impose on the wearer?
    • What algorithm(s) should be used to map a room and its features?

In the following sections, we describe the decisions and trade-offs that occurred during
the course of this project.


Sensors & Algorithms used:
        In deciding which sensors to use and which algorithm to implement for the
Mapper Garment, we faced an interesting situation: the algorithms we chose would
determine what kind of sensors we would need, while the choice of sensors would in
turn restrict the algorithms available to us.

       Since the problem was primarily one of ranging, translation, and rotation, we
conducted a comprehensive study of the major types of available sensors to determine
which would suit our purposes. Table 1 lists the sensors that we studied, including the
pros and cons of each.

                Table 1. Sensors applicable for the Mapper Garment

IR sensors (range measurement)
     Pros: cheap, small, lightweight; no moving parts; no vibration in the
     focal plane; no noxious gases; good power efficiency.
     Cons: prey to background and fluorescent light; accuracy depends on IR
     reflectance, which is very poor for our purposes.

Ultrasonic sensors (range measurement)
     Pros: cheap; no moving parts; wide range; negligible pressure loss;
     bi-directional operation.
     Cons: fairly accurate timing circuitry is needed, hence precise tools are
     required to build one; a processor is usually needed; we depend on
     commercial sensors and cannot create our own.

LASER sensors (range measurement)
     Pros: highly directional; highly intense, coherent light; excellent
     reflection characteristics.
     Cons: expensive; requires a lot of power for the range required.

GPS (position and distance)
     Pros: highly accurate measuring solution for position and location.
     Cons: cannot be used inside buildings; requires a supporting
     infrastructure; precision is not good enough (the best GPS gives a
     location in the range of 1 m); expensive.

Digital accelerometer (distance)
     Pros: temperature stability; solid repeatability; CMOS switched-capacitor
     circuit compatibility; low-frequency (DC) acceleration measurement.
     Cons: unwanted sensitivity to motion; cannot sense small displacements;
     measurement is dependent upon the user.

Pedometer (distance)
     Pros: inexpensive, small, lightweight, unobtrusive.
     Cons: measurement is based on the number of steps; the sampling rate is
     not suitable for our purposes.

RF sensors (distance)
     Pros: no additional components such as magnets, coils, or magnetic
     circuits; can be accommodated on a 100 µm square of silicon wafer; high
     SNR values; output is speed independent.
     Cons: high variation over frequency; needs frequency correction for
     reasonable accuracy.

Gyroscope (angle of rotation)
     Pros: seeks the true (geographic) meridian instead of the magnetic
     meridian; can be used near the earth's magnetic poles; not affected by
     surrounding material.
     Cons: requires a constant source of electrical power; sensitive to power
     fluctuations.

Fluid rotation sensor (angle of rotation)
     Pros: rate of change of ...
     Cons: fluid system needs ...

Digital compass (angle of rotation)
     Pros: digital output easily interfaced with other electronic equipment;
     compensated for magnetic deviation and incorrect alignment.
     Cons: less stable than a good-quality magnetic compass; must be corrected
     for magnetic variation.
The ultrasonic sensor was chosen for the following reasons: it offered cost effectiveness
comparable to the infrared sensors but was not affected by fluorescent or diffused light,
and ultrasonic sensors are easy to mount and place on the body. Since they have no
moving parts and can be used bi-directionally, they offered more advantages than the
other sensors in the same category. The digital accelerometer was chosen because it
was stable under varying temperatures, exhibited good repeatability according to the
data sheets, and provided an analog output. The digital compass was chosen because
it was easy to interface with other computation units; in addition, the heading, pitch, and
roll errors could be corrected in software, enabling a wider range of usability.

Thus armed with a digital accelerometer, a digital compass and ultrasonic sensors, we
now faced the task of determining the algorithm that could be chosen for
implementation.


Algorithms and Simulation:
        In course of choosing the algorithms, we came across several algorithms that
could be used for rangeability but our choice was restricted due to the choice of our
sensors. We developed the following algorithm (Algorithm 1) after making a careful study
of the various algorithms using these three sensors.

Algorithm 1:
Step 1: The user moves to a new position Pi =(xi,yi).

Step 2: Obtain the distance traveled from Pi-1, di.

Step 3: For the given position Pi and orientation θi, get the probable rooms in which the
       user may be present, R = {r1, r2, …, rn}, where n < N (the total number of rooms)

Step 4: for each room rj
             for each wall wk in rj
                  get intersection point from Pi and θi if one exists
             end for
       end for

Step 5: Determine the nearest intersection point to Pi for given θi

Step 6: Repeat Steps 3 to 5 for all possible θ at Pi and add the points to the scan set Si

Step 7: Translate the points in the map M by di and add the points of scan set Si

Step 8: Repeat Steps 1-7 for all the positions the user moves to.

Step 9: Curve fit the points to get the probable wall positions in the final map
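Steps 4 and 5 reduce to casting a ray from Pi at heading θ and keeping the nearest wall
hit. The following minimal C++ sketch shows that geometric core for a single wall
segment; the Wall type and the function name are hypothetical stand-ins rather than the
model's actual code, and the nearest hit is obtained by taking the minimum distance over
all walls of the candidate rooms.

// Sketch of the ray/wall intersection used in Steps 4-5 of Algorithm 1.
// Walls are line segments; the user looks along direction theta.
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

struct Wall { double x1, y1, x2, y2; };   // a wall as a 2-D line segment

// Distance along the ray from (px, py) at heading theta to the wall,
// or -1.0 if the ray does not hit the segment.
double rayWallDistance(double px, double py, double theta, const Wall& w) {
    double dx = std::cos(theta), dy = std::sin(theta);    // ray direction
    double ex = w.x2 - w.x1,     ey = w.y2 - w.y1;        // wall direction
    double denom = dx * ey - dy * ex;
    if (std::fabs(denom) < 1e-12) return -1.0;            // ray parallel to wall
    double t = ((w.x1 - px) * ey - (w.y1 - py) * ex) / denom;  // along the ray
    double u = ((w.x1 - px) * dy - (w.y1 - py) * dx) / denom;  // along the wall
    return (t >= 0.0 && u >= 0.0 && u <= 1.0) ? t : -1.0;
}

int main() {
    Wall wall = {0.0, 100.0, 200.0, 100.0};                  // horizontal wall at y = 100
    double d = rayWallDistance(50.0, 0.0, kPi / 2.0, wall);  // looking straight up
    std::printf("range to wall: %.1f\n", d);                 // prints 100.0
    return 0;
}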


Testing of Sensors:
         To make sure that our algorithm would work, we needed to verify the correctness
of our sensors. We started out by testing and calibrating the sensors. The digital
compass was tested first. The experimental setup involved using a reference for the
angles as an angle distribution printed on a paper. The experimental setup is shown in
Figure 2a. The compass was the rotated on this paper both clockwise and anticlockwise
to get the reading. This reading is compared against the actual reference value and the
error is calculated. Figure 2b shows the calibrated readings of the compass.

[Figure 2. Calibration of Compass: (a) experimental setup; (b) reference ("Paper")
versus averaged compass readings over the 0-360° range]
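The correction implied by Figure 2b can be applied at run time by storing the measured
error at each calibration angle and subtracting an interpolated error from every raw
reading. The sketch below assumes a simple piecewise-linear error table; the CalPoint
layout and the calibration values are made-up placeholders, not the actual Figure 2 data.

// Sketch: correcting a raw compass heading using a calibration table of
// (measured heading, error) pairs. The values below are placeholders.
#include <cstdio>
#include <vector>

struct CalPoint { double measured, error; };   // error = measured - true heading

double correctHeading(double raw, const std::vector<CalPoint>& cal) {
    if (raw <= cal.front().measured) return raw - cal.front().error;
    for (size_t i = 0; i + 1 < cal.size(); ++i) {
        const CalPoint &a = cal[i], &b = cal[i + 1];
        if (raw <= b.measured) {               // interpolate the error linearly
            double t = (raw - a.measured) / (b.measured - a.measured);
            return raw - (a.error + t * (b.error - a.error));
        }
    }
    return raw - cal.back().error;
}

int main() {
    std::vector<CalPoint> cal = {{0, 2.0}, {90, 3.5}, {180, 1.0}, {270, -2.0}, {360, 2.0}};
    std::printf("corrected heading = %.1f\n", correctHeading(100.0, cal));  // ~96.8
    return 0;
}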


[Figure 3. Goal of Ultrasonic Testing: an ultrasonic transceiver aims a pulse at a primary
target at a 30° angle of incidence; part of the energy is reflected back to the transceiver
and part is reflected away, producing an artifact]
As Figure 2b shows, the error can be compensated for; hence the values derived from
the digital compass seemed satisfactory for implementing Algorithm 1. The next sensor
that we chose to test was the ultrasonic sensor. We needed to determine whether an
echo is received when an ultrasonic pulse is directed towards an object at an angle θ,
where θ ≠ 90°. The situation is as shown in Figure 3.




[Figure 4. Accelerometer test setup and Ptolemy receiver model: panels (a)-(f)]

For this purpose we used the receiver unit shown in Figure 5b. For the transmitter
functionality we used a function generator to generate the wave, which was transmitted
through an ultrasonic transmitter. The testing showed us that the situation in Figure 3
actually does occur: an echo is received even at an incident angle other than 90°, so the
ultrasonic sensors could be used for the purposes of our algorithm. The next step was
building the Ptolemy model of the accelerometer, which is shown in Figures 4b, 4c,
and 4d.

The next step in our testing procedure was to test the accelerometer. To do so, we hung
a dead weight from the two ends of a drawstring whose mass m_ds was negligible
compared with the mass m_dw of the dead weight (m_ds << m_dw). The accelerometer
was mounted on this dead weight as shown in Figure 4a, and its readings were fed to a
data acquisition system so that measurements could be made and analyzed on the
computer.

                    Figure 5 (a) Accelerometer (b) Ultrasonic Receiver

The result of this setup is shown in Figure 4e. As can be seen from the display, the
accelerometer does not give readings for small displacements (marked by the red circle).
These results show that the accelerometer would be an unsuitable option for our
purposes. Moreover, the accelerometer gives an acceleration value only at each
footstep, and we needed acceleration data at higher frequencies than that. At this point,
Algorithm 1 had to be discarded: without any way of measuring distance, it would be
impossible to obtain the displacement and hence apply the translation. We therefore
needed another algorithm, one that would ideally use only the ultrasonic transceivers
and the compass. Another requirement was that it should be extremely computationally
inexpensive. After searching the literature, we came to the conclusion that there were no
computationally inexpensive algorithms, so we decided to build one of our own. The
result of that thought process was Algorithm 2, whose basic steps are as follows:


Algorithm 2:
Step 1: Walk to a new point. Call that point the origin (0,0)

Step 2: Find the intersection points with the walls for that point as discussed in algorithm
        1. (new set of scan points)




Step 3: Rotate these scan points by the reading given by the compass(θ)

Step 4: Join the scan points together to get a closed polygon

Step 5: Find the centroid of these points

Step 6: The last set of scan points and its centroid were computed at the previous
       sampling time

Step 7: Compare the centroids and calculate the translation.

Step 8: updated map = updated map (rotated and translated to sync to the new origin)

Step 9: Add the new scan points to the updated map and call this updated map.

Step 10: old set of scan points = new set of scan points

Step 11: Once every ten iterations, do curve fitting on the updated map.

After implementing Algorithm 2, we realized that something was not adding up. On
debugging the logic, we found that the centroid of the scan points tends towards
wherever the points are densely clustered, since it is essentially the center of mass of
the polygon. We therefore had to shift from the centroid to a 'center of figure'. We also
found an inherent error in the calculated translation. After discussions with Dr. Lynn
Abbott and Dr. Roger W. Ehrich, it was decided that we needed to add a refinement
step involving cross-correlation in order to get the exact translation. The modified
Algorithm 2, with the refinement step and the center-of-figure concept, is called
Algorithm 2a:


Algorithm 2a:
a) Approximation
Step 1: Walk to a new point. Call that point the origin (0,0)

Step 2: Find the intersection points with the walls for that point as discussed in algorithm
        1. (new set of scan points)

Step 3: Rotate these scan points by the reading given by the compass(θ)

Step 4: Join the scan points together to get a closed polygon

Step 5: Find the center of the polygon

Step 6: The last set of scan points and its center were computed at the previous
       sampling instant

Step 7: Compare the two centers of figure and calculate the approximate
        translation

b) Refinement



Step 1: Use cross-correlation to get the exact translation and apply the translation
       to the map

Step 2: updated map = updated map (rotated and translated to sync to the new origin)

Step 3: Add the new scan points to the updated map and call this updated map.

Step 4: old set of scan points = new set of scan points

Step 5: Every 10 iterations, do curve fitting on the updated map.
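A compact C++ sketch of the approximation and refinement steps follows. The report
does not define "center of figure" precisely, so the sketch assumes the bounding-box
center, and it stands in for the cross-correlation with a brute-force search over small
integer offsets that maximizes scan overlap; both choices are illustrative assumptions
rather than the model's actual implementation.

// Sketch of Algorithm 2a: approximate the translation from the centers
// of figure of two scans, then refine it by searching nearby offsets.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// "Center of figure", assumed here to be the bounding-box center.
Pt centerOfFigure(const std::vector<Pt>& pts) {
    double xmin = pts[0].x, xmax = pts[0].x, ymin = pts[0].y, ymax = pts[0].y;
    for (const Pt& p : pts) {
        xmin = std::min(xmin, p.x); xmax = std::max(xmax, p.x);
        ymin = std::min(ymin, p.y); ymax = std::max(ymax, p.y);
    }
    return {(xmin + xmax) / 2.0, (ymin + ymax) / 2.0};
}

// Count points of `a`, shifted by (dx, dy), that land near a point of `b`.
int overlap(const std::vector<Pt>& a, const std::vector<Pt>& b,
            double dx, double dy, double tol = 1.0) {
    int n = 0;
    for (const Pt& p : a)
        for (const Pt& q : b)
            if (std::fabs(p.x + dx - q.x) < tol &&
                std::fabs(p.y + dy - q.y) < tol) { ++n; break; }
    return n;
}

Pt estimateTranslation(const std::vector<Pt>& prev, const std::vector<Pt>& cur) {
    Pt c0 = centerOfFigure(prev), c1 = centerOfFigure(cur);
    double ax = c0.x - c1.x, ay = c0.y - c1.y;      // a) approximation
    Pt best = {ax, ay};
    int bestScore = -1;
    for (int dx = -3; dx <= 3; ++dx)                // b) refinement
        for (int dy = -3; dy <= 3; ++dy) {
            int s = overlap(cur, prev, ax + dx, ay + dy);
            if (s > bestScore) { bestScore = s; best = {ax + dx, ay + dy}; }
        }
    return best;
}

int main() {
    std::vector<Pt> prev = {{0, 0}, {10, 0}, {10, 10}, {0, 10}};
    std::vector<Pt> cur;                 // the same scan seen 2 units to the left
    for (Pt p : prev) cur.push_back({p.x - 2.0, p.y});
    Pt t = estimateTranslation(prev, cur);
    std::printf("translation = (%.1f, %.1f)\n", t.x, t.y);   // (2.0, 0.0)
    return 0;
}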




[Figure 6. Results for Algorithm 2a: panels (a)-(d)]

The implementation results for Algorithm 2a are shown in Figures 6(a-d), for different
numbers of sensors. Figure 6a is the result for 18 sensors placed with an angular
separation of 20°; 6b is the same room with 24 sensors placed at 15°; 6c is the result
obtained with 36 sensors placed at 10°; and 6d shows the accuracy with 360 sensors
placed at 1° from each other. The time required for each of these configurations to
process a set of 10 samples is shown in Table 2.

        Table 2. Time to process a set of 10 samples for various numbers of sensors

                     Number of Sensors        Processing Time (seconds)
                            18                          3.7
                            24                          4.9
                            36                          7.3
                           360                         71.3

With 18 sensors we require the least processing time, but the resulting points are not
sufficient for a curve fit to recover the walls; conversely, 360 sensors give many more
points than necessary. Hence we had to choose between 24 and 36 sensors. We found
24 sensors to be ideal, as that configuration has a moderate processing time while giving
sufficient readings for a curve fit to obtain the wall positions in the images; our choice is
also supported in the literature. The user walk rate of 0.5 m/s was determined by
conducting experiments in which people walked from one position to another while the
time taken was measured.

After discussions with Dr. Martin, we concluded that we were not taking into account the
actual behavior of the ultrasonic sensors: in our simulation we had assumed that the
ultrasonic waves behave like laser beams. Thus, in order to emulate the actual behavior
of ultrasonic sensors, we implemented the approach described in [10]. This consolidated
algorithm (Algorithm 3) consists of the following steps:

Algorithm 3
Step 1: Make a canvas of ROW*COLOUMN dimensions which will hold the map of the building

Step 2: Walk to a new point. Call that point the origin (0,0)

Step 3: For each sensor
             Find the range measurement
             Find the probability of the empty and the somewhere-occupied regions of the
             volume swept by the sensor.
             The volume not inside the ultrasonic beam is marked as "UNKNOWN"
        end for

Step 4: Merge the empty probabilities of the sensors together.

Step 5: Merge the occupied probabilities, taking into consideration the probability of each pixel
        being empty as computed in the last step.

Step 6: Populate the map. A cell is marked occupied if its occupied probability is higher
        than its empty probability, and vice versa.

Step 7: Move to a new point and get the map again



Step 8: Rotate the new map to align with the old map. Correlate the two maps to get the
        displacement between the maps.

Step 9: Merge the two maps together.

Step 10: Curve fit through the maps once the number of data points exceeds MAX_POINTS.

Step 11: Repeat till done
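Step 3 can be illustrated with the kind of probability profile used in Elfes' sonar
model [10]: cells well inside the measured range are probably empty, cells within
EPSILON of the measured range are probably occupied, and everything outside the
beam or beyond the echo is unknown. The weighting functions in the sketch below are
our illustrative choices, reusing the OMEGA, EPSILON, and RMIN values from config.h;
they are not necessarily the exact functions the model implements.

// Sketch of the per-sensor probability profile (Step 3 of Algorithm 3).
// A cell is described by its range d and angle a from the beam axis;
// r is the sensor's range reading.
#include <cmath>
#include <cstdio>

const double OMEGA   = 30.0;   // main-lobe width in degrees (cf. config.h)
const double EPSILON = 3.0;    // range measurement error (cf. config.h)
const double RMIN    = 27.0;   // minimum detectable range (cf. config.h)

// Angular weight: 1 on the beam axis, falling to 0 at the lobe edge.
double angleWeight(double a) {
    double half = OMEGA / 2.0;
    if (std::fabs(a) > half) return 0.0;           // outside the beam: UNKNOWN
    return 1.0 - (a / half) * (a / half);
}

// P(empty) for a cell strictly inside the measured range.
double pEmpty(double d, double a, double r) {
    if (d < RMIN || d >= r - EPSILON) return 0.0;
    double radial = 1.0 - (d / (r - EPSILON)) * (d / (r - EPSILON));
    return radial * angleWeight(a);
}

// P(occupied) for a cell near the measured range r.
double pOccupied(double d, double a, double r) {
    if (std::fabs(d - r) > EPSILON) return 0.0;
    double radial = 1.0 - ((d - r) / EPSILON) * ((d - r) / EPSILON);
    return radial * angleWeight(a);
}

int main() {
    double r = 100.0;                                      // one range reading
    std::printf("P_empty(d=50, a=0)      = %.2f\n", pEmpty(50.0, 0.0, r));
    std::printf("P_occupied(d=100, a=10) = %.2f\n", pOccupied(100.0, 10.0, r));
    return 0;
}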




[Figure 7. Results of Algorithm 3: panels (a)-(d)]




The results of this algorithm are shown in Figure 7. Figure 7a shows the reading
obtained for one sensor: the region in blue is the probably empty space, whereas the
region in black is the probably occupied space. Figure 7b shows the probably empty
space in the room; the unknown area is represented by the cylinder in the center, and
the region around it is empty.




[Figure 8. Results of Algorithm 3 for another room: (a) the room; (b)-(f) the maps
obtained with 8, 12, 18, 24, and 36 sensors]




To illustrate the effectiveness of this algorithm, we present the results obtained for
different numbers of sensors. Figure 8b shows the room of Figure 8a mapped with 8
sensors, Figure 8c with 12 sensors, and Figures 8d, 8e, and 8f likewise with 18, 24, and
36 sensors. We would like to draw the reader’s attention to Figures 8e and 8f: though
the number of sensors differs by 12, the room mappings closely resemble each other.
Since the points obtained using 24 sensors are enough to curve fit the lines, we believe
that 24 is the optimum number of sensors for the device.

       An important afterthought of this implementation concerned the sampling rate.
We found that we could not sample at 10 samples/sec, since the processing time for
each sample was on the order of seconds. Hence we decided to take a new sample only
once the previous sample had been processed. The algorithm is capable of mapping the
room even if samples are taken at intervals of a few seconds.




Line Fitting

To present a realistic representation of the space the wearer is in, it is desirable to
construct line-based maps from the data points collected by the sensor scans. In this
section, two suggested clustering and line fitting techniques are discussed.

Pfister, Roumeliotis, and Burdick describe this exercise as the weighted line fitting
problem[21]. The steps of their line fitting solution are:
    1. Use Hough Transform to group the raw data points into subsets of collinear
         points.
    2. Define a candidate line, l, for a subset of points.
    3. Minimize the error of the candidate line to the subset of points.
    4. Apply steps 2 and 3 to the remaining point subsets.
    5. Merge the lines to form the features (walls, etc.) of the space.

In Figure 9, plot A represents the raw points, plot B indicates the fitted lines, and plot C
shows the merged lines.




                                          Figure 9
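For each collinear subset, steps 2 and 3 reduce to fitting a line that minimizes the
perpendicular (not vertical) distance to the points, so that vertical walls are handled as
well as horizontal ones. The sketch below is an unweighted simplification of the
weighted fit in Pfister et al. [21]; the types and names are ours.

// Sketch: orthogonal least-squares fit of a line to one cluster of scan
// points. The line is returned in normal form nx*x + ny*y = c.
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt   { double x, y; };
struct Line { double nx, ny, c; };

Line fitLine(const std::vector<Pt>& pts) {
    double mx = 0.0, my = 0.0;                    // centroid of the cluster
    for (const Pt& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size(); my /= pts.size();
    double sxx = 0.0, sxy = 0.0, syy = 0.0;       // scatter about the centroid
    for (const Pt& p : pts) {
        sxx += (p.x - mx) * (p.x - mx);
        sxy += (p.x - mx) * (p.y - my);
        syy += (p.y - my) * (p.y - my);
    }
    // Direction of the best-fit line from the 2x2 scatter matrix; the
    // normal is perpendicular to it.
    double theta = 0.5 * std::atan2(2.0 * sxy, sxx - syy);
    double nx = -std::sin(theta), ny = std::cos(theta);
    return {nx, ny, nx * mx + ny * my};
}

int main() {
    // A noisy, nearly horizontal wall:
    std::vector<Pt> wall = {{0, 0.1}, {1, -0.1}, {2, 0.0}, {3, 0.1}, {4, -0.1}};
    Line l = fitLine(wall);
    std::printf("normal = (%.2f, %.2f), c = %.2f\n", l.nx, l.ny, l.c);
    return 0;
}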




In his thesis on Navigation and Localisation of Autonomous Vehicles, Matthew Ricci
describes a statistical least-squares method used to fit lines to clusters of scan
points[22]. A combination of the variance and the coefficient of determination is used to
identify lines associated with clusters: appropriate values of the variance and coefficient
of determination result in the line being accepted. If not, the line is recursively bisected
using a technique referred to as the corner rule. This bisection continues until it results
either in accepted lines or in the feature being ignored. Ricci’s line fitting flow chart is
shown in Figure 10.




                                          Figure 10
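The accept-or-bisect loop can be sketched in a few lines of C++. For brevity the
acceptance test below compares only the maximum residual against a tolerance, instead
of Ricci's variance and coefficient-of-determination combination, and the split point (the
point farthest from the fitted chord) stands in for the corner rule; both substitutions
are ours.

// Sketch of recursive accept-or-bisect segmentation of an ordered scan.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b.
double distToLine(const Pt& p, const Pt& a, const Pt& b) {
    double ex = b.x - a.x, ey = b.y - a.y;
    double len = std::sqrt(ex * ex + ey * ey);
    return std::fabs(ex * (p.y - a.y) - ey * (p.x - a.x)) / len;
}

// Emit accepted segments as (first, last) index pairs into `out`.
void fitSegments(const std::vector<Pt>& pts, size_t first, size_t last,
                 double tol, std::vector<std::pair<size_t, size_t> >& out) {
    if (last - first < 2) { out.push_back({first, last}); return; }
    size_t corner = first; double worst = 0.0;
    for (size_t i = first + 1; i < last; ++i) {
        double d = distToLine(pts[i], pts[first], pts[last]);
        if (d > worst) { worst = d; corner = i; }
    }
    if (worst <= tol) { out.push_back({first, last}); return; }   // accept the line
    fitSegments(pts, first, corner, tol, out);                    // bisect and recurse
    fitSegments(pts, corner, last, tol, out);
}

int main() {
    // An L-shaped scan: two walls meeting at (5, 0).
    std::vector<Pt> pts = {{0, 0}, {1, 0}, {2, 0}, {3, 0}, {4, 0}, {5, 0},
                           {5, 1}, {5, 2}, {5, 3}, {5, 4}, {5, 5}};
    std::vector<std::pair<size_t, size_t> > segs;
    fitSegments(pts, 0, pts.size() - 1, 0.2, segs);
    for (size_t i = 0; i < segs.size(); ++i)
        std::printf("segment: points %zu..%zu\n", segs[i].first, segs[i].second);
    return 0;
}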




If We Could Do It All Over Again
       The one thing that we should have considered before anything else was a
comprehensive study of the behavior of ultrasonic wave propagation. If we were to redo
this project, we would place the most emphasis on the characteristics of ultrasonic wave
propagation, since this affected our thought process and approaches the most.
       We also assumed that our accelerometer could measure the distance traversed
as a function of the acceleration. This assumption was wrong at the very high sampling
frequency that was needed initially. If we were to go back to the beginning, we would
first test the sensors to make sure that they would actually serve our purpose before
beginning the search for a suitable algorithm.



       Another area we would like to revisit is the study of the reflective nature of
surfaces and the resulting errors in the ultrasonic sensors. We consider this important
since any simulation should incorporate the true nature of the environment in which the
system is to be deployed.

Conclusion and Future Work
        By means of this project, we have been able to answer several questions. What
kind of sensors are needed? Ultrasonic sensors and a digital compass. How many
sensors are needed? After experimenting, we believe the optimum is 24 sensors placed
at a separation of 15° from each other. What kind of algorithm should be used? The
algorithm that gave us the best and most realistic results was the one based on [10],
which we state as Algorithm 3. How often should sampling be carried out? Because the
processing time for Algorithm 3 is high, the next sample is taken only when the previous
position’s data has been processed.

        In the future, we look towards modeling the same algorithm completely in the
Ptolemy environment. We are also looking at ways of prototyping this system and
testing it on campus, and we are examining the various ways of mounting the system on
the user and the best places to do so. We propose to use a belt for mounting the
ultrasonic sensors; other areas of the body are still being explored to determine the best
place to mount the device.




References

1.    Dudek, G., et al., Reflections on Modelling a Sonar Range Sensor. 1996, McGill
      University: Montreal, Quebec, Canada.
2.    Legg, M., Multi-receiver Ultrasonic Receiver Array. 2003, University of York.
3.    Cao, A. and J. Borenstein, Experimental Characterization of Polaroid Ultrasonic
      Sensors in Single and Phased Array Configuration, University of Michigan.
4.    Brooks, R.A., Visual Map Making for a mobile Robot, MIT Artificial Intelligence
      Lab: Cambridge, Mass.
5.    Cahut, L., K. Valavanis, and H. Delic, Sonar Resolution-Based Environment
      Mapping.
6.    Kleeman, L. and R. Kuc, Mobile Robot Sonar for Target Localization and
      Classification, Yale University: New Haven, CT.
7.    Rofer, T., Using Histogram Correlation to Create Consistent Laser Scan Maps.
8.    Ohya, A., Y. Nagashima, and S.i. Yuta, Exploring Unknown Environment and
      Map Construction Using Ultrasonic Sensing of Normal Direction of Walls. 1994,
      Institute of Information Sciences and Electronics: Tennodai, Tsukuba JAPAN.
9.    Machler, P. Looking For Concepts: Unsupervised Map Construction with
      Unknown Sensor Configuration, in IEEE/RSJ Intl. Conference on Intelligent
      Robots and Systems. 1998. Victoria, B.C., Canada.
10.   Elfes, A., Sonar-Based Real-World Mapping and Navigation. IEEE Journal of
      Robotics and Automation, 1987. RA-3(3).
11.   Borenstein, J., et al., Mobile Robot Positioning - Sensors and Techniques. Journal
      of Robotic Systems. 14(4): p. 231-249.
12.   Edlinger, T. and E. von Puttkamer, Exploration of an Indoor-Environment by an
      Autonomous Mobile Robot, University of Kaiserslautern: Kaiserslautern,
      Germany.
13.   Gutmann, J.-S. and C. Schlegel, AMOS: Comparison of Scan Matching
      Approaches for Self-Localization in Indoor Environments. 1996, Research
      Institute for Applied Knowledge Processing: Ulm, Germany.
14.   Hoppenot, P., E. Colle, and C. Barat, Off-Line Localization of a Mobile Robot
      Using Ultrasonic Measures, in Robotica. 2000. p. 315-323.
15.   Pfister, S.T., et al., Weighted Range Sensor Matching Algorithms for Mobile
      Robot Displacement Estimation, California Institute of Technology.
16.   Shaffer, G., J. Gonzalez, and A. Stentz, A Comparison of Two Range-Based
      Pose Estimators for a Mobile Robot, Carnegie Mellon University: Pittsburgh, PA.
17.   Weiß, G., C. Wetzler, and E. von Puttkamer, Keeping Track of Position and
      Orientation of Moving Indoor Systems by Correlation of Range-Finder Scans,
      University of Kaiserslautern: Kaiserslautern, Germany.
18.   Pavlin, G. and R. Braunstingl, Context-Based Feature Extraction with Wide-Angle
      Sonars, Graz University of Technology.
19.   Darrell, T., Segmentation and Line Fitting, in MIT: Course 6.801.
20.   Unger, R., Spheres of Influence Graph, McGill University.
21.   Pfister, S.T., S.I. Roumeliotis, and J.W. Burdick, Weighted Line Fitting Algorithms
      for Mobile Robot Map Building and Efficient Data Representation. 2002,
      California Institute of Technology.
22.   Ricci, M., Navigation and Localisation of Autonomous Vehicles, in Department of
      Mechanical Mechatronic and Aeronautical Engineering. 2002, University of
      Sydney.



