POTENTIAL OBJECTS DETECTION

                               CHAPTER 1


                              INTRODUCTION


         Smart space, in which devices, agents, robots, and everyday objects are all
expected to seamlessly integrate and cooperate in support of human lives, will
soon become part of our daily activities. However, to achieve this goal, numerous
challenges need to be tackled, one of which is the acquisition of information from
the physical world. For example, in order to perform sophisticated communication,
robots need to acquire sufficient information about the objects that humans pay
attention to. Two main problems exist when acquiring information from objects.
The first is how to obtain information that is as extensive as possible. The second
is how to deploy sensors when detecting objects in a region of interest. We attach
sensors to everyday objects to realize smart objects, which help us learn more
clearly and rapidly what is happening in the environment. However, owing to
sensor usage limitations and sensor faults, a smart object may in some
circumstances be undetectable; meanwhile, there are also objects with no sensors
attached. How to detect the existence of such objects and obtain information from
them, which we call the potential object detection problem, is therefore a challenge
in our smart space research. A physical context reasoning system named Sixth-Sense
is used to detect the existence of potential objects and obtain information from
these objects. Here, wireless sensor nodes called MOTEs are attached to several
selected everyday objects in our smart space. The sensory data obtained from each
MOTE is analyzed by the IRR (Inference Rule Reasoning) module, which performs
context reasoning based on the inference rules stored in the IRB (Inference Rule Base).

                               CHAPTER 2


                 POTENTIAL OBJECTS DETECTION


2.1 PROBLEM DESCRIPTION


           How to detect the existence of undetectable objects and obtain
information from them is called the potential object detection problem. Figure 2.1
gives an example of potential object detection. There are three sensor-attached
objects in the figure. Because Book-A is placed on Book-B, the ultrasonic signal
sent by Book-B is blocked by Book-A, so Book-B is undetectable by the network.
How to detect Book-B is thus one type of potential object detection problem.




                    FIG. 2.1: Potential object detection example



2.2 PROBLEM DEFINITIONS


  Definition 1: Potential Object Detection: It can be defined by the function
Y = f(X), where X denotes the sensory data set from the physical objects, Y
denotes the detected potential objects, and f denotes the reasoning function based
on the inference rules.


 Definition 2: 4D Reference Space: The whole problem domain can be described
as (X, Y, Z, T), i.e., 3D space plus time, called the 4D reference space.


Definition 3: Reference Objects and Potential Objects: In our smart space, the
detectable sensor-attached objects are called reference objects and the undetectable
objects are called potential objects. A reference object O is described as
O = <Nid, Sid, S, T>, where Nid and Sid respectively denote the unique ID of O
and the ID of the sensor attached to O, S is the state set, and T is the time of
observation.


Definition 4: State Set: The state set S of a reference object O can be defined as
S = <L, G, M, D, P, A, V, R>, where L denotes the location, G the slope angle,
M the motion state, D the direction of O's motion, P the force analysis attribute,
A the audition attribute, V the light sense attribute, and R the action zone of
the force.


Definition 5: State-Stable and State-Alter: If the state of a reference object
remains the same over a period, the object is called State-Stable; if the state
changes during the period, it is called State-Alter.
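
To make these definitions concrete, the following minimal Python sketch models a
reference object and its state set, and checks the State-Stable property. The class
and field names are illustrative choices, not part of the system's specification.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class StateSet:
        """State set S = <L, G, M, D, P, A, V, R> of a reference object."""
        L: Tuple[float, float, float]  # location in 3D space
        G: float                       # slope angle (radians)
        M: str                         # motion state, e.g. "rest" or "moving"
        D: Tuple[float, float, float]  # direction of motion
        P: float                       # force analysis attribute
        A: float                       # audition attribute
        V: float                       # light sense attribute
        R: float                       # action zone (radius) of the force

    @dataclass
    class ReferenceObject:
        """Reference object O = <Nid, Sid, S, T>."""
        Nid: str        # unique object ID
        Sid: str        # ID of the attached sensor
        S: StateSet     # current state set
        T: float        # time of observation

    def is_state_stable(history: List[StateSet]) -> bool:
        """State-Stable if the state does not change over the observed period."""
        return all(s == history[0] for s in history)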


2.3 SIXTH SENSE ARCHITECTURE




                         FIG 2.2: Architecture of Sixth-Sense


The overall architecture of Sixth-Sense is shown above. The first two layers, the
physical layer and the sensor deployment layer, provide the input X, while the
application layer produces the output Y. The context reasoning layer acts as the
function f.


PHYSICAL LAYER: This layer involves various everyday objects and sensors.
Most of the objects are found in a typical home, such as books, toys, tables, etc.




SENSOR DEPLOYMENT LAYER: Not every object can have a sensor attached
and be directly sensed at all times. Selecting suitable objects to act as reference
objects is this layer's task. Two aspects are considered: first, whether an everyday
object should be selected as a reference object is determined by the specific
application domain; second, the detectability of the object is also very important.


CONTEXT REASONING LAYER: This layer accepts the sensory input data from
the MOTE nodes, which is then used for context reasoning. The context includes
two levels: the first level is the state of the object itself (e.g., what is the
object's motion state); the second level consists of the possible relations between
this object and others (e.g., is the object located on the table). By analyzing the
sensory data from the reference objects, the current contexts about the objects
themselves and the relations between them can be understood. IRR, the core
module of this layer, then reasons over these contexts based on the inference rules
from the IRB. The final reasoning result is sent to the next layer.
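
The interplay of IRR and IRB can be sketched as a simple rule-application loop.
This is an illustrative reconstruction only; the names Rule, IRB, rule, and infer
are assumptions, not the system's actual API.

    from typing import Callable, List, Optional

    # A rule maps the current context (sensor-derived facts) to a detected
    # potential object, or None when its condition is not met.
    Rule = Callable[[dict], Optional[str]]

    IRB: List[Rule] = []  # the Inference Rule Base: registered rules

    def rule(fn: Rule) -> Rule:
        """Register an inference rule in the IRB."""
        IRB.append(fn)
        return fn

    def infer(context: dict) -> List[str]:
        """The IRR module: Y = f(X), applying every rule in the IRB to X."""
        detections = [r(context) for r in IRB]
        return [d for d in detections if d is not None]

Each concrete rule from Section 3.2 can then be registered with the rule decorator
and evaluated uniformly by infer.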


APPLICATION LAYER: This layer is formed by a variety of applications.




                               CHAPTER 3


      INFERENCE RULES BUILDING AND REASONING


3.1 INFERENCE RULES


In Sixth-Sense, the inference rules mainly come from common sense and physics.
According to the kind of knowledge they concern, we classify the inference rules
into four classes, as shown in the table below. The first two kinds of rules are
based on static context and are called 3D Static Rules, while the last two kinds
are derived from dynamic context and are called 4D Dynamic Rules.


                  TYPE                         DESCRIPTION
           Location Relation           Inference based on the specific location
                                       context between reference objects.
           Force Relation              Inference based on the force context.
           Motion State Change         Inference based on the motion state
                                       change context of a reference object.
           Senders and Receivers       Several reference objects form sender-
                                       receiver pairs deployed in a specific
                                       model; inference is based on the dynamic
                                       context changes between them.

                       FIG 3.1: Classification of inference rules


3.2 BUILDING INFERENCE RULES


LOCATION RELATION
         Inference Rule 3D-L1: A relation R(up, A, B, ti) exists between
reference objects A and B at time ti, the slope angle of A is 0 or π/2, and there
is no touch point between A and B; then we infer that an object C is present
between them.
        Inference Rule 3D-L2: A relation R(up, A, B, ti) exists between reference
objects A and B at time ti, and the slope angle of A is within (0, π/2); then
we infer that there is an object C between A and B.




                     FIG 3.2: Inference rule sample for 3D-L2
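
A minimal sketch of how rules 3D-L1 and 3D-L2 might be checked in code is given
below. The function name, the boolean inputs, and the tolerance eps are assumptions
made for illustration.

    import math

    def location_rule_fires(up_relation: bool, touch_point: bool,
                            slope_angle: float, eps: float = 1e-3) -> bool:
        """Infer a potential object C between A and B when either
        3D-L1 or 3D-L2 holds for the relation R(up, A, B, ti)."""
        if not up_relation:
            return False
        # 3D-L1: slope angle of A is 0 or pi/2 and A, B have no touch point.
        flat_or_upright = (abs(slope_angle) < eps or
                           abs(slope_angle - math.pi / 2) < eps)
        if flat_or_upright and not touch_point:
            return True
        # 3D-L2: slope angle of A lies strictly within (0, pi/2).
        return eps < slope_angle < math.pi / 2 - eps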


FORCE RELATION
        Inference Rule 3D-F1: If a reference object A is State-Stable and suffers
downward pressure in scope Sp (the action zone of the force), and no other
reference object x in scope Sp can form the relation R(up, x, A, ti) with A,
then there is an object B on A.



                        FIG 3.3: Inference rule sample for 3D-F1
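
Rule 3D-F1 can be sketched as follows; the representation of the scope Sp as a set
of object IDs, and the parameter names, are illustrative assumptions.

    def rule_3d_f1(a_state_stable: bool, downward_pressure: bool,
                   objects_in_sp: set,
                   objects_with_up_relation_to_a: set) -> bool:
        """Infer a potential object B on A when A is State-Stable, suffers
        downward pressure in its action zone Sp, and no reference object x
        inside Sp forms the relation R(up, x, A, ti) with A."""
        if not (a_state_stable and downward_pressure):
            return False
        # If some reference object in Sp already explains the pressure,
        # no potential object is inferred.
        return not (objects_in_sp & objects_with_up_relation_to_a)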


MOTION STATE CHANGES
       Inference Rule 4D-M1: If there is a State-Alter of reference object A
between times T1 and T2 (from rest to motion), then there is an object B in the
inverse direction of A's movement.




                    FIG 3.4: Inference rule sample for 4D-M1
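
A sketch of rule 4D-M1 follows; the motion-state strings and vector representation
are assumptions for illustration.

    from typing import Optional, Tuple

    Vec = Tuple[float, float, float]

    def rule_4d_m1(motion_t1: str, motion_t2: str,
                   direction_t2: Vec) -> Optional[Vec]:
        """A State-Alter from rest to motion between T1 and T2 implies a
        potential object B in the direction opposite to A's movement
        (the object that knocked A). Returns that direction, or None."""
        if motion_t1 == "rest" and motion_t2 == "moving":
            return tuple(-d for d in direction_t2)  # inverse direction
        return None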
SENDERS AND RECEIVERS
        Inference Rule 4D-SR: Given two reference objects A and B, where B is a
State-Stable light source and A can sense the light from B: if there is a
State-Alter of A's light sense attribute from T1 to T2 (from bright to dark, or
from dark to bright), then there is an object C between A and B.




                    FIG 3.5: Inference rule sample for 4D-SR
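
Rule 4D-SR admits an equally short sketch; treating the light sense attribute as a
two-valued string is an assumption for illustration.

    def rule_4d_sr(b_is_stable_light_source: bool,
                   light_sense_t1: str, light_sense_t2: str) -> bool:
        """With B a State-Stable light source sensed by A, a State-Alter of
        A's light sense attribute between T1 and T2 (bright to dark, or dark
        to bright) implies a potential object C between A and B."""
        if not b_is_stable_light_source:
            return False
        return (light_sense_t1, light_sense_t2) in {
            ("bright", "dark"), ("dark", "bright")
        }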




                                   CHAPTER 4


                      EXPERIMENTS AND RESULTS


4.1 EXPERIMENT 1
   In the first situation, an ultrasonic location sensor and a 2-axis acceleration
sensor were attached to Book B, and two other location sensors to the table. The
target is to detect the existence of Book A. Through the location sensors we can
infer the location context that Book B is on the table. Moreover, by using the
acceleration sensor, we can estimate the slope angle of Book B, since different
slope angles yield different acceleration values. First, the acceleration values
were measured while Book B lay flat. Owing to noise, the measured results were
bounded in two intervals: the X-axis in [0.7g, 1.1g] and the Y-axis in
[0.5g, 0.75g] (the available range is 2g). Thus, if a measured X or Y value did
not fall in these intervals, Book B had to have a slope angle, from which we
obtain the second location context, the slope angle. With these two contexts, the
condition of inference rule 3D-L2 is met, and we can detect the existence of the
potential object Book A and its location (between the table and Book B).




                          FIG 4.1: Experimental set up 1

The result is as follows:




                            FIG 4.2 : Situation 1 result


Here, the black squares make up the variation range of the acceleration values
when Book B lay flat, while the red circles indicate the values when Book B was
deployed at an incline.
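
The interval check described above can be written as a short sketch. The interval
bounds are the calibration figures quoted in the experiment; the function name is
an illustrative choice.

    # Calibrated intervals measured while Book B lay flat (sensor range 2g).
    X_FLAT = (0.7, 1.1)   # g
    Y_FLAT = (0.5, 0.75)  # g

    def book_b_inclined(ax: float, ay: float) -> bool:
        """True when the measured acceleration falls outside the flat-pose
        intervals, i.e. Book B has a slope angle and rule 3D-L2 may fire."""
        in_x = X_FLAT[0] <= ax <= X_FLAT[1]
        in_y = Y_FLAT[0] <= ay <= Y_FLAT[1]
        return not (in_x and in_y)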


4.2 EXPERIMENT 2


    In the second situation, a location sensor was attached to Box, and our
purpose is to detect the existence of Toy Car. At first, Toy Car was moving
towards Box. At some point it knocked into Box and stopped; meanwhile, Box,
pushed by the impulsive force, moved a short distance and then stopped. From the
location sensor we acquired the two locations of Box before and after the
movement, which can be viewed as a displacement context. With this context we can
use inference rule 4D-M1 to infer the existence of Toy Car and its probable
location (in the inverse direction of Box's movement).

                            FIG 4.3: Experimental set up 2


The result is as follows:




                              FIG 4.4: Situation 2 result


The X and Y values change noticeably between times 12 and 14, which indicates
the displacement context of Box.
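
The derivation of the displacement context from the two location readings can be
sketched as below; the noise threshold is an assumed parameter, not a value from
the experiment.

    from typing import Optional, Tuple

    def displacement_context(loc_before: Tuple[float, float],
                             loc_after: Tuple[float, float],
                             noise: float = 0.05) -> Optional[Tuple[float, float]]:
        """Derive Box's displacement from two location readings; a displacement
        above the noise floor lets rule 4D-M1 infer Toy Car in the opposite
        direction. Returns that direction, or None if Box did not move."""
        dx = loc_after[0] - loc_before[0]
        dy = loc_after[1] - loc_before[1]
        if (dx * dx + dy * dy) ** 0.5 <= noise:
            return None               # no significant movement detected
        return (-dx, -dy)             # probable direction of Toy Car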




                                CHAPTER 5


                      SIXTH SENSE PROTOTYPE


We've evolved over millions of years to sense the world around us. When we
encounter something, someone or some place, we use our five natural senses to
perceive information about it; that information helps us make decisions and choose
the right actions to take. But arguably the most useful information that can help
us make the right decision is not naturally perceivable with our five senses,
namely the data, information and knowledge that mankind has accumulated about
everything, which is increasingly available online. Although the miniaturization
of computing devices allows us to carry computers in our pockets, keeping us
continually connected to the digital world, there is no link between our digital
devices and our interactions with the physical world. Traditionally, information
is confined to paper or to a digital screen. Sixth Sense bridges this gap,
bringing intangible digital information out into the tangible world and allowing
us to interact with it via natural hand gestures. 'Sixth Sense' frees information
from its confines by seamlessly integrating it with reality, thus making the
entire world your computer.


                The Sixth Sense prototype comprises a pocket projector, a
mirror and a camera. The hardware components are coupled in a pendant-like
wearable device. Both the projector and the camera are connected to the mobile
computing device in the user's pocket. The projector projects visual information,
enabling surfaces, walls and physical objects around us to be used as interfaces,
while the camera recognizes and tracks the user's hand gestures and physical
objects using computer-vision-based techniques. The software processes the video
stream captured by the camera and tracks the locations of the colored markers at
the tips of the user's fingers using simple computer-vision techniques. The
movements and arrangements of these markers are interpreted as gestures that act
as interaction instructions for the projected application interfaces. The maximum
number of tracked fingers is constrained only by the number of unique fiducials,
so Sixth Sense also supports multi-touch and multi-user interaction.
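
As an illustration of the kind of color-marker tracking described above, the
following OpenCV sketch finds the centroid of one colored marker per video frame.
The HSV thresholds are placeholder assumptions, since the prototype's actual
values and pipeline are not given here.

    import cv2
    import numpy as np

    # Assumed HSV range for one colored fingertip marker (green-ish);
    # these bounds are placeholders, not the prototype's real values.
    LOWER = np.array([50, 100, 100])
    UPPER = np.array([70, 255, 255])

    def track_marker(frame_bgr):
        """Return the (x, y) centroid of the colored marker, or None."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        moments = cv2.moments(mask)
        if moments["m00"] == 0:
            return None  # marker not visible in this frame
        return (int(moments["m10"] / moments["m00"]),
                int(moments["m01"] / moments["m00"]))

Tracking several markers amounts to repeating this threshold-and-centroid step
with one HSV range per fiducial color.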




                                 CHAPTER 6


                              APPLICATIONS

The Sixth Sense prototype implements several applications that demonstrate the
usefulness, viability and flexibility of the system. They are:
    1. The map application lets the user navigate a map displayed on a nearby
       surface using hand gestures, similar to gestures supported by Multi-Touch
       based systems, letting the user zoom in, zoom out or pan using intuitive
       hand movements.
    2. The drawing application lets the user draw on any surface by tracking the
       fingertip movements of the user’s index finger.
    3. The Sixth Sense system implements a gestural camera that takes photos of
       the scene the user is looking at by detecting the framing gesture. The user
       can stop by any surface or wall and flick through the photos he/she has
       taken.
    4. Sixth Sense also lets the user draw icons or symbols in the air using the
       movement of the index finger and recognizes those symbols as interaction
       instructions.
    5. The Sixth Sense system also augments physical objects the user is
       interacting with by projecting more information about those objects
       onto them. For example, a newspaper can show live video news, or
       dynamic information can be provided on a regular piece of paper.
    6. The gesture of drawing a circle on the user’s wrist projects an analog watch.




                                  CHAPTER 7


                                CONCLUSION

The potential objects detection system named Sixth-Sense can be used to obtain
information from objects in the smart space. Its view of the smart space is
object-centered: MOTEs are attached to several selected objects to form the
Sixth-Sense skeleton, through which we can obtain context from these objects. The
context is then analyzed with the inference rules. However, currently we can only
detect the existence and probable location of a potential object. Since the rich
information from the reference objects has not yet been sufficiently used, future
work can be devoted to obtaining more information about the detected object, and
even to estimating what type of object it might be.




                          CHAPTER 8


                       REFERENCES

1. H. Chen, S. Tolia, and C. Sayers, "Creating Context-Aware Software
   Agents", GSFC/JPL Workshop on Radical Agent Concepts, NASA GSFC,
   Greenbelt, Sep. 2001, pp. 26-28.

2. M. Imai, "Human-Robot communication based on sensor information",
   SICE Annual Conference, Okayama, 2005.

3. I. Essa, "Ubiquitous sensing for smart and aware environments", IEEE
   Personal Communications, Oct. 2000.

4. www.pranavmistry.com

5. www.wikipedia.org



