SNU_20091120_Bum-Jae You
Issues for a Cognitive Humanoid Robot
2009. 11. 20.



YOU, BUM-JAE
HEAD, Ph. D.

Center for Cognitive Robotics Research
Korea Institute of Science & Technology
(http://humanoid.kist.re.kr,
 ybj@kist.re.kr)
Paradigm Shift
• Paradigm Shift in Robotics
  – Past :
    Industrial robots in factories for repetitive tasks

  – Present & Future :
    Intelligent robots that live together with humans and provide
    pleasure and useful services to them

• Paradigm Shift in Daily Life
  – Past :
    Living with TV, audio, and internet

  – Present & Future :
    . Bi-directional multimedia society,
    . Information services over wireless networks,
    . Daily life together with IT-based robots
Technology Trend
• From hardware-oriented robotics (control + mechanics, for
  industrialization) to software-oriented robotics (service-oriented
  robots, network robots: intelligence + human interface)
• Existing robot + network:
  – More benefits to customers: improved usability, more key
    applications, more personal services
  – Reduced price/cost of robots: extended from a mobile
    multi-functional robot, lower prices, a variety of technologies

           Fusion of IT (Information Technology)
                with RT (Robot Technology)

         URC (Ubiquitous Robotic Companion)
                for Industrialization
URC - Concept
Ubiquitous
Robotic
Companion

[Diagram: a previous robot carries out sensing, processing, and action
entirely on board]
URC Infra & Robot

[Diagram: in the URC, sensors in environments and a ubiquitous computing
server (the URC infra system: network & server) take over part of the
sensing and processing, while the URC robot itself retains sensing,
processing, and action]
KIST Robot Roadmap
A 1994-2006 progression from motion control, through wheel-based
mobility and model-based robots, to network robots and a cognitive robot
(learning and reasoning of human skills):
• Human-like head-eye robot ‘HECTER’
• Four-legged robot ‘CENTAUR’
• Home service robot ‘ISSAC’
• Rescue robot ‘ROBHAZ’
• Biomimetic robot ‘MIMOT’
• ‘Babybot 1’ and ‘Babybot 2’
• Guide robots ‘JINI’ and ‘BUTLER’
• Network-based humanoids ‘MAHRU’ and ‘MAHRU-3’
• Restaurant guide robot ‘ARO’
• Small-size humanoid ‘URIA’
• Manikin robot
KIST Research Areas
• Cognitive Robotics for Grasping and Manipulation
   – Grasp affordance-based object manipulation
   – Sensory motor model-based hand/arm/mobility control
   – Cognitive architecture for task execution

• Network-based Humanoid Robotics
   –   Biped walking & Autonomous navigation
   –   Human modeling & Human-like motion generation
   –   Vision-based task skill learning & manipulation
   –   Balance control & compliant control in contact environments
   –   Three-dimensional sensor-based attention
   –   Real-time collaboration of multiple humanoid robots

• Core technologies for Service Robotics
   –   Safe manipulation based on passive compliance
   –   Visual navigation using object recognition
   –   Low-cost sound localization
   –   Low-cost stereo camera based on FPGA/ASIC

   – Autonomous outdoor navigation
   – Multi-robot coordination control for collaboration of dog/horse-like
     robots in outdoor environments
    Network-based Humanoid
    Objective: Develop core technologies for humanoid robots and for
    providing physical services in homes.

Name: MAHRU, AHRA
Height: 150 cm
Weight: 67 kg
Degrees of freedom: 35 (leg 12, arm 12, hand 8, neck 2, waist 1)
Sensors: stereo camera 1, force/torque sensor 4, microphone 1,
pose sensor 1
Natural motion:
  • Walking (max. 1.2 km/h)
  • Human-like dancing during walking
  • Manipulating objects in the home
Distributed intelligence:
  • Uses external servers connected through the wireless internet
  • Real-time recognition of faces, voices, gestures, and
    three-dimensional objects
  • Learning of human skills & manipulation
  • Cooperation of multiple humanoids
  • Real-time remote control
        State-of-the-art (Platform)
Spec.              ASIMO          HRP-3     PARTNER      WABIAN-2            HUBO        MAHRU
Organization       Honda          AIST          Toyota   Waseda Univ.        KAIST        KIST
System         Embedded      Embedded      Embedded      Embedded       Embedded     Network
H(cm)/W(Kg)    130/54        160/68        145/40        150/65         125/56       150/67
DOF (Hand)     26 (+8)       30 (+12)      27 (+17)      35 (+6)        31 (+10)     27 (+8)
Servo Motor    DC / BLDC     DC            DC            DC             DC           DC
Joint Servo    Position &    Position &    Position &    Position &     Position &   Position &
Control        velocity      velocity      velocity      velocity       velocity     velocity
               Camera,       3D Camera,    Camera,       3D Pose,       Camera,      3D Camera,
               3D Pose,      3D Pose,      3D Pose,      6-axis F/T,    4-axis F/T   3D Pose,
Sensors        6-axis F/T,   6-axis F/T,   6-axis F/T    Photo                       6-axis F/T,
               Microphone,   LRF,                                                    Microphone
               RFID Reader   Microphone
               -             7 DOF arm     Dexterous     Toe joint,     -            -
Features                                   hand          7 DOF arm &
                                                         leg
      State-of-the-art (Walking)
Spec.               ASIMO            HRP-3            PARTNER       Wabian-2       HUBO           MAHRU
Organization        Honda            AIST              Toyota   Waseda Univ.       KAIST           KIST
Indoor Floor    ○ (2.7Km/h)      ○ (2.5Km/h)      ○ (2.5Km/h)   ○ (1.8Km/h)    ○ (1.5Km/h)    ○ (1.2Km/h)

Indoor Stair    ○                ○                ○             X              △              X

Running         ○ (6.0Km/h)      X                ○ (8.0Km/h)   X              △              X

Outdoor Floor   X                X                X             ○ (1.5Km/h)    X              X

Pattern         Real-time        Real-time        Offline       Offline        Real-time      Real-time
                Omni-pattern     Omni-pattern                                  Omni-pattern   IPZCSA
Autonomous      ○                ○                X             X              X              ○
Walking         (RFID-based      (Localization,                                               (Localization,
                Localization,    Path Planning,                                               Path Planning,
                Path Planning)   Static                                                       Static
                                 Obstacle                                                     Obstacle
                                 Avoidance,                                                   Avoidance)
                                 Obstacle
                                 Detection)
WBC             △                ○ (Offline)      X             X              X              ○ (Online)
Biped Walking
•   Biped walking within a whole-body control framework
•   Real-time variable walking pattern generation by the IPZCSA
    approach, which reduces the preview distance to half that of other
    methods
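Preview-based walking pattern generation rests on the cart-table (linear inverted pendulum) relation between the center of mass and the ZMP; whether IPZCSA uses exactly this model is not stated here, so the sketch below only illustrates the underlying relation p = x - (z_c/g)·x_ddot, with made-up numbers.

```python
import numpy as np

# Cart-table model: ZMP p = x - (z_c / g) * x_ddot.
# CoM height, frequency, and amplitude below are illustrative only.
g, z_c, dt = 9.81, 0.8, 0.01
t = np.arange(0.0, 2.0, dt)

# A smooth sinusoidal CoM sway, as during lateral stepping.
x = 0.05 * np.sin(2 * np.pi * 0.5 * t)        # CoM position [m]
x_ddot = np.gradient(np.gradient(x, dt), dt)  # numerical acceleration

zmp = x - (z_c / g) * x_ddot
# The ZMP swings wider than the CoM: the model amplifies CoM acceleration,
# which is why pattern generators must plan the CoM to keep the ZMP
# inside the support polygon.
print(round(x.max(), 3), round(zmp.max(), 3))
```

For x = A·sin(ωt) the relation predicts a ZMP amplitude of A·(1 + z_c·ω²/g), about 0.090 m here against a 0.05 m CoM sway.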
Biped Walking
Autonomous Walking
Path tracking by biped walking
Localization: StarGazer by Hagisonic Co. Ltd. Korea
Difficulties
    The localization data from the sensors cannot be used directly
    because of the body sway during walking.
    The pose of the robot cannot be controlled immediately.
Proposed Ideas
    Pose estimation by eliminating sway effects
    Online re-planning of the path and foot steps

[Figure: x-y plot of the robot, comparing the robot position estimation
with raw StarGazer data; x position 0 to 0.9 m]




State-of-the-art
    |Localization error| < 5 cm
    |Tracking error| < 1 step
    Static obstacle avoidance

[Videos: straight-walking experiment; obstacle-avoidance experiment;
ㄱ-shaped (L-shaped) walking experiment and trajectory; ㄹ-shaped
(zigzag) walking experiment and trajectory]
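One simple way to realize the "pose estimation by eliminating sway effects" idea, sketched under the assumption that the sway is periodic with the walking cycle: average the localization signal over exactly one cycle so the oscillation cancels. The sensor rate, step period, and amplitudes below are invented for illustration; the actual KIST estimator is not described in the slides.

```python
import numpy as np

# Hypothetical sway removal: a moving average spanning one full walking
# cycle cancels the periodic lateral oscillation in the raw data.
dt = 0.02                        # sensor period [s] (assumed)
step_period = 0.8                # one walking cycle [s] (assumed)
n = int(step_period / dt)        # samples per cycle = 40

t = np.arange(0.0, 8.0, dt)
true_y = 0.05 * t                # slow lateral drift of the robot
sway = 0.03 * np.sin(2 * np.pi * t / step_period)
measured = true_y + sway         # what a StarGazer-like sensor reports

kernel = np.ones(n) / n
filtered = np.convolve(measured, kernel, mode="same")

# In the steady interior (edges excluded) the sway cancels almost exactly.
err = np.abs(filtered[n:-n] - true_y[n:-n]).max()
print(err)
```

Averaging equally spaced samples over one full period of a sinusoid sums to zero, so only a small half-sample lag error remains.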
        State-of-the-art (WBC)
Spec.            ASIMO   HRP-3   PARTNER   WABIAN-2       HUBO    MAHRU
                 Honda   AIST     Toyota   Waseda Univ.   KAIST    KIST
Whole body
                   X       X        X           X          X        ○
control
Offline motion
                  ○       ○         △           X          △        ○
imitation
Online motion
                   X       X        X           X          X        ○
imitation
Adapt to
external          ○       ○         △           X          △        ○
disturbance
Manipulation
                  △       ○         ○           X          X        △
in contacts
Push & carry      ○       △         X           X          X        X
Human guide       ○       ○         X           X          X        △
Whole body Control
 Biped walking alone is not enough to provide services for humans.
 Whole-body control that maintains the balance of the robot is critical
 for executing various tasks with the arms and hands in everyday contact
 environments, and for moving the arms during walking.

[Block diagram: the desired CoM-ZMP feeds a CoM-ZMP controller; MeCoM
merges the real CoM-ZMP feedback and unbalanced desired motions (remote
motion, manipulation, contact, etc.) into balanced motion (walking,
etc.) for the humanoid]




 The MeCoM (Motion-Embedded CoM) Jacobian-based whole-body control
 approach is proposed.
     Real-time whole-body motion control for balancing
     Real-time motion generation for non-real-time motion commands
     (100 Hz non-real-time motion commands; motion error less than
     5 cm; tracking within 20 ms)
Whole body Control
     A compliant whole-body motion generation approach (force-based
     compliant motion generation) with respect to external disturbances
     at the wrist is proposed.
         Position error is less than 2 cm.
         Orientation error is less than 5 degrees.
         Simultaneous motion generation from external disturbances and
         external motion commands is possible.
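The slides do not detail the force-based compliant motion generation, so the sketch below uses a standard admittance law as a stand-in: an external force drives a virtual mass-damper-spring, and the resulting displacement offsets the commanded hand position. All gains are made up, not KIST's values.

```python
# Illustrative admittance control: external wrist force F_ext drives a
# virtual mass-damper-spring whose displacement x is added to the
# commanded hand position. Gains below are invented for illustration.
M, D, K = 2.0, 40.0, 200.0      # virtual inertia, damping, stiffness
dt = 0.002                      # 500 Hz inner loop (assumed)

x = v = 0.0                     # compliant displacement and its rate
F_ext = 10.0                    # constant 10 N push on the hand

for _ in range(5000):           # 10 s of simulated pushing
    a = (F_ext - D * v - K * x) / M   # M*x_dd + D*x_d + K*x = F_ext
    v += a * dt                       # semi-implicit Euler integration
    x += v * dt

# At steady state the hand yields by F/K = 0.05 m while the push lasts.
print(round(x, 4))
```

The gains here give a critically damped response, so the hand yields smoothly and returns without oscillation when the force is removed.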

Strain Control Framework
• External weight: mass measuring (mass, CoM) → embed the mass into the
  hand mass → whole-body control (MeCoM)
• External disturbance: external force on the hand → distribute the
  force to joint torques → convert joint torques into angles
• External motion commands: external joint motion (MOVEN) → stiffness
  control
     Online Motion Imitation
     • A real-time motion transform using a spring-damper model between
       the human body and the real robot is proposed.
     • Human motion is captured by the wearable motion-capture system
       ‘MOVEN’.

[Diagram: a human wearing the MOVEN wearable motion-capture system is
coupled to the real robot through virtual springs & dampers, with a
reflective force applied at each joint limit]

Models used for dual arms and hands with body:
- Virtual spring & damper: hand (6) + shoulder (2)
- Rotational damper: each joint (13)
- Reflective force by joint limit: each joint (13)
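A one-joint toy of the virtual spring-damper coupling with a joint-limit reflective force: the robot joint is pulled toward the captured human angle, and an extra restoring torque switches on past the limit. Gains, inertia, and limits are invented, not MOVEN/MAHRU parameters.

```python
import math

# Spring-damper motion retargeting, one joint: the robot joint tracks
# the captured human angle but is repelled from its joint limit.
K, D, I = 30.0, 8.0, 0.5        # virtual spring, damper, joint inertia
q_limit = 1.2                   # joint limit [rad]
K_wall = 400.0                  # reflective-force stiffness at the limit
dt = 0.005

q = qd = 0.0
q_max = 0.0
for i in range(2000):                    # 10 s of imitation
    t = i * dt
    q_human = 1.5 * math.sin(0.5 * t)    # human swings past the limit
    tau = K * (q_human - q) - D * qd     # virtual spring & damper
    if q > q_limit:                      # reflective force by joint limit
        tau -= K_wall * (q - q_limit)
    qd += (tau / I) * dt
    q += qd * dt
    q_max = max(q_max, q)

# The robot follows the human but never swings far past its joint limit.
print(round(q_max, 3))
```

Because the wall stiffness dominates the coupling spring, the joint settles just past the limit even though the human reaches 1.5 rad.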
   Whole body Control
Real-time Motion Imitation
[Videos: upper-body motion imitation during walking]
Human-like Natural Motion
Motion capture & human body modeling → motion transform & optimization →
dynamic simulation → real robot application
        State-of-the-art (Intelligence)
Spec.           ASIMO   HRP-3   PARTNER   WABIAN-2       HUBO      MAHRU
Organization    Honda   AIST     Toyota   Waseda Univ.   KAIST       KIST
Middleware &      -      CORBA     -           -          -         CORBA
Architecture                                                  (Multi-robot, PnP)
Recog. Face      △       △         X           X          X           ○
Recog. Gesture   △       ○         X           X          X           ○
Recog. Object     X      ○         X           X          X           ○
Recog. 3D Pose    X      ○         X           X          X           ○
Recog. Voice     △       △         X           X          X           ○
Sound Loc.       ○        X        X           X          X           ○
RF-ID            ○        X        X           X          X           X
Visual Servo      X      ○         X           X          X           ○
Attention        △       △         X           X          X           ○
Task Modeling     X      △         X           X          X           ○
    Distributed Control Architecture

[Diagram: humanoids connect through wireless APs to the KIST humanoid
server, which hosts a DB of knowledge shared among robots, a manager for
robots & servers, a resource manager, a robot manager, an NbH manager
for each humanoid, a script-based command & scenario manager with timing
generation, and real-time monitoring; plug-&-play clients on other
servers (voice, vision, interaction, …, client n) attach over the
internet through CORBA middleware; user commands enter via a manual
commander and monitor (URC infra)]
•    CORBA-based modular software framework
•    Multiple robots coordination: 4 robots (max.)
•    Loop frequency: 5-10 Hz
Distributed Control Architecture
•   Middleware (CORBA)-based modular software framework
•   Real-time transmission of massive video/audio/control data (> 15 Hz)
•   Real-time service management and monitoring
•   PnP (dynamic management) of robots
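The plug-and-play management above rests on name-based service lookup: servers register capabilities at runtime and robots resolve them by name instead of hard-wiring connections. The minimal Python registry below mimics the role, not the API, of a CORBA naming service; all names are invented.

```python
# Minimal stand-in for plug-and-play service management: intelligence
# servers register callables under names, can drop out at runtime, and
# robots look services up by name.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        """A server plugs in by registering a callable under a name."""
        self._services[name] = handler

    def unregister(self, name):
        """Dynamic management: a server can unplug at runtime."""
        self._services.pop(name, None)

    def call(self, name, *args):
        if name not in self._services:
            raise LookupError(f"no service registered under {name!r}")
        return self._services[name](*args)

registry = ServiceRegistry()
# Hypothetical services standing in for the vision servers above.
registry.register("face_recognition", lambda img: "unknown_person")
registry.register("object_recognition", lambda img: ["cup", "dish"])

result = registry.call("object_recognition", None)
print(result)
registry.unregister("face_recognition")
```

A real CORBA naming service adds remote invocation and federation on top of this lookup pattern.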



[Diagram: robots connect to the server system, i.e., central control
plus plug-and-play (PnP) server computers providing distributed
intelligence such as face recognition and object recognition]
  Real-time Control Framework
• Operating system: Linux Xenomai
• Control frequency: 200-400 Hz

[Diagram: the IHC core connects hardware, kinematics, and dynamics I/O
modules with IHC applications (local and networked), third-party
simulation engines and control algorithms via callback functions, and
pluggable Mogen (motion generation) modules; robot parameters, robot
definitions, commands, and Mogen modules are registered through XML
files]
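The fixed-rate callback structure can be sketched in plain Python; note that a stock interpreter gives only soft real time, which is exactly why the slides use Linux with Xenomai. The rate and module below are illustrative.

```python
import time

# Soft-real-time sketch of the control-loop structure: a core calls
# registered callback modules at a fixed rate, sleeping off the slack
# against an absolute deadline so timing errors do not accumulate.
RATE_HZ = 200
PERIOD = 1.0 / RATE_HZ

callbacks = []                  # Mogen-style pluggable modules

def register(fn):
    callbacks.append(fn)
    return fn

ticks = {"n": 0}

@register
def count_ticks(now):
    # A trivial stand-in for a motion-generation module.
    ticks["n"] += 1

next_deadline = time.monotonic()
for _ in range(20):                        # run 20 cycles (0.1 s)
    for fn in callbacks:
        fn(time.monotonic())
    next_deadline += PERIOD
    time.sleep(max(0.0, next_deadline - time.monotonic()))

print(ticks["n"])
```

Incrementing an absolute deadline (rather than sleeping a fixed interval) is the standard trick for drift-free periodic loops; Xenomai provides the hard guarantees this sketch cannot.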
Object Tracking
     A robust color modeling algorithm is critical for reliable
     human-robot interaction and task-skill modeling.
     A Bayesian adaptive color modeling approach using depth and color
     information is proposed.
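The Bayesian core of such color modeling is Bayes' rule over color histograms: P(object | color) ∝ P(color | object)·P(object). The depth-based adaptation in the KIST approach is not reproduced here; the histograms and prior below are synthetic.

```python
import numpy as np

# Bayes-rule pixel classification from object/background color
# histograms. Training data and prior are invented for illustration.
bins = 8
rng = np.random.default_rng(0)

# Pretend training pixels: object hues fall in bins 5-7, background 0-4.
obj_pixels = rng.integers(5, 8, size=500)
bg_pixels = rng.integers(0, 5, size=2000)

p_color_obj = np.bincount(obj_pixels, minlength=bins) / len(obj_pixels)
p_color_bg = np.bincount(bg_pixels, minlength=bins) / len(bg_pixels)
p_obj = 0.2                      # prior probability a pixel is object

def posterior_obj(color_bin):
    """P(object | color) via Bayes' rule over the two histograms."""
    num = p_color_obj[color_bin] * p_obj
    den = num + p_color_bg[color_bin] * (1 - p_obj)
    return num / den if den > 0 else 0.0

print(round(posterior_obj(6), 3), round(posterior_obj(2), 3))
```

Adaptivity, in this framing, means re-estimating the histograms online from pixels the tracker currently believes belong to the object.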


[Videos: robust color modeling & object tracking; face recognition;
robust hand tracking]
   3D Object Recognition
           Conventional 2D or affine-invariant object recognition is not
           reliable or applicable enough for humanoid robots.
           New approaches for real-time 3D object recognition are proposed:
               3D object model generation and matching using edge and
               depth information of an object;
               real-time 3D object tracking using particle filters with
               back-projection sampling;
               real-time 3D pose estimation using the Levenberg-Marquardt
               optimization technique.
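The generic particle-filter loop (predict, weight, resample) underlying such a 3D tracker can be shown on a toy problem. This 1-D sketch tracks a drifting scalar "pose"; the back-projection sampling refinement and the 6-DOF state are not reproduced, and all noise parameters are invented.

```python
import numpy as np

# Particle filter tracking a drifting scalar pose from noisy
# observations: predict with a motion model, weight by the observation
# likelihood, then resample in proportion to the weights.
rng = np.random.default_rng(1)
n = 500
particles = rng.normal(0.0, 1.0, n)       # initial pose hypotheses

true_pose = 0.0
for step in range(50):
    true_pose += 0.1                       # object drifts each frame
    z = true_pose + rng.normal(0.0, 0.05)  # noisy observation

    particles += 0.1 + rng.normal(0.0, 0.05, n)       # predict
    w = np.exp(-0.5 * ((z - particles) / 0.05) ** 2)  # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(n, size=n, p=w)]  # resample

estimate = particles.mean()
print(round(estimate, 2), round(true_pose, 2))
```

In the real tracker the state is a 6-DOF object pose and the likelihood compares projected model edges against the image, but the loop structure is the same.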




[Figures: edge points, depth points, and the final model; particle
filtering with back-projection-based sampling; real-time 3D pose
estimation & tracking/matching]
3D Object Recognition
Summary of results
 Group objects.
    Robot task: tidying a table
    (task knowledge modeling + 3D object recognition and pose tracking)
 Open the dish washer.
    Robot task: dishwasher
    (task knowledge modeling + 3D object recognition and pose tracking)
 Open the cooker and …
    Robot task: microwave oven
    (task knowledge modeling + 3D object recognition and pose tracking)
 Open the cooker and …
    Robot task: microwave oven + cup delivery
    (task knowledge modeling + 3D object recognition and pose tracking +
    person detection and hand tracking)
3D Sound Source Localization
  Dual Microphone Arrays
  TDOA Feature Matrix-based Approach

[Figure: dual microphone arrays, a left array L0-L3 and a right array
R0-R3, each with its own coordinate frame with origin at L0/R0 (0,0,0)]
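The building block of any TDOA method is estimating the inter-microphone delay, classically by cross-correlating the two channels. The dual-array feature-matrix lookup of the KIST approach is not reproduced here; the sample rate, signal, and delay below are synthetic.

```python
import numpy as np

# Time-difference-of-arrival estimation by cross-correlation: the lag
# at the correlation peak is the delay between the two microphones.
fs = 16000                       # sample rate [Hz] (assumed)
rng = np.random.default_rng(2)
src = rng.normal(size=2048)      # broadband source signal

true_delay = 7                   # samples: sound reaches mic R later
mic_l = src
mic_r = np.concatenate([np.zeros(true_delay), src])[: len(src)]

corr = np.correlate(mic_r, mic_l, mode="full")
lag = int(np.argmax(corr)) - (len(mic_l) - 1)  # peak offset = delay

tdoa = lag / fs    # seconds; combined with the array geometry, each
print(lag, tdoa)   # pairwise TDOA constrains the source direction
```

With several microphone pairs, the vector of measured TDOAs can be matched against a precomputed table of TDOAs per candidate direction, which is what a TDOA feature matrix provides.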
  Industrialization - 1




                             Restaurant guide robot ‘ARO’
                                              ED Co. Ltd.



Manikin Robots
Sejong Robot Co. Ltd.
Industrialization - 2




 Tele-Education Robot
                ‘VANI’
   Samil CTS Co. Ltd.
  New Directions
• Question
  We say we are developing intelligent robots.
  Are these robots intelligent enough to be used in the real world?
• State of the art
   – Until now, too much time has been required to establish and
     stabilize a robot hardware & software platform.
   – Only a few core technologies for recognition and behavioral
     intelligence have been developed, and they are not robust enough
     to apply to real situations.
   – Autonomous task execution is impossible, in the sense that it is
     almost impossible to adapt to unknown environmental changes such
     as new objects, functions, and knowledge.

• More questions
   – What makes a robot intelligent?
   – What is necessary for the autonomy of a robot?
   – What makes a robot alive, like living animals?
   – What makes sensors more intelligent and robust?
New Directions
• Fusion of Technologies
   – Robotics + Cognitive (Brain) Science + Neuroscience + Biology
   – Develop core technologies for
      •   the adaptation to environmental changes,
      •   intelligence for learning and cognition,
      •   autonomous task execution, and
      •   bio-interface for natural human-robot interaction.

• Fusion of cognitive science with robotics
   – Cognitive models in humans and animals
     → computational models of those cognitive models
     → real-robot applications and evaluation
     → updates to the computational models


  “No intelligence without physical bodies”
Cognitive robotics groups worldwide:
• USA: ACIP (Architectures for Cognitive Information Processing); CMU
  (Carnegie Mellon University) Tekkotsu; MIT AI Lab, Cognitive Robotics
  and Humanoid Robotics Group; Naval Research Laboratory, Human-Robot
  Interaction and Cognitive Robotics
• UK: Foresight Cognitive Systems Project
• Canada: The Cognitive Robotics Group, University of Toronto
• Spain: UC3M, Madrid
• Germany: Technical University of Munich, IDSIA; Bremen University,
  Cognitive Robotics; Institute of Robotics and Mechatronics, German
  Aerospace Center
• Italy: Cognitive Humanoids Lab, IIT (Italian Institute of Technology)
• France: ENSTA, Cognitive Robotics
• EU: SENSOPAC; Cogniron
• South Korea: Cognitive Robotics, KIST; Robotics Research, KAIST;
  INCORL, Hanyang University
• Japan: Brain-Style Computing Group, RIKEN; HRCN (Humanoid Robotics
  Computational Neuroscience), ATR
Cognitive Robotics (Wikipedia)
 Cognitive robotics is concerned with endowing robots with
 mammalian and human-like cognitive capabilities to enable the
 achievement of complex goals in complex environments.

 Cognitive robotics is focused on using animal cognition as a
 starting point for the development of robotic computational
 algorithms, and it involves the application and integration of
 various artificial intelligence disciplines but is primarily inspired
 by psychology and brain science research.

 Robotic cognition embodies the behavior of intelligent agents in
 the physical world. This implies that the robot must also be able
 to act in this real world.

 So, a cognitive robot should exhibit: knowledge, belief,
 preferences, goals, informational attitude, motivational attitudes
 (observing, communicating, revising beliefs, planning), and
 capabilities to move in the physical world, and to interact safely
 with objects in that world, including manipulation of these
 objects.
Core Research Issues
Core topics include
   cognitive sensors and sensing,
   Innovative bio-mimetic mechatronics,
   autonomous knowledge acquisition,
   knowledge representation,
   motivation,
   automated reasoning,
   planning, and
   (target-oriented) learning.

Some fundamental questions:
   How much human programming should, or can, be involved to support
   the learning processes?
   What is an effective way to give robots reward and punishment?
       In humans, when teaching a young child for example, the reward
       might be chocolate or some encouragement, and punishment can
       take many forms.
For a cognitive humanoid
Cognitive architecture
for reasoning and planning
 Cognitive architectures (Newell, 1990)
    Computational theory of human cognition
    Infrastructure for an intelligent system that stays
    constant across domains
    Commitments to specific memories, representation and
    organization of knowledge, and mechanisms that use
    the knowledge


 ICARUS architecture (Langley & Choi, 2006)
    Some features shared with others like Soar and ACT-R
    Specifically designed for physical domains
    Architectural distinction between concepts and skills
    Architectural commitment for hierarchical organization
    Balance between persistence and reactivity
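The ICARUS cycle of perceiving, inferring beliefs from concepts, and selecting a skill whose conditions the beliefs satisfy can be rendered as a toy loop. The concepts and skills below are invented for illustration and greatly simplify the real architecture.

```python
# Toy ICARUS-style cognitive cycle: perceive -> infer beliefs -> select
# and execute a skill. Concepts/skills here are made up for this sketch.

def perceive(world):
    # Perceptual buffer: raw ("on", upper, lower) facts from the world.
    return {("on", upper, lower) for (upper, lower) in world["stacks"]}

def infer(percepts, blocks):
    # Conceptual inference: a block is clear if nothing sits on it.
    covered = {lower for (_, upper, lower) in percepts}
    return set(percepts) | {("clear", b) for b in blocks if b not in covered}

# Skill memory: (condition, action) pairs, checked in order;
# the final True-condition skill gives reactive fallback behavior.
skills = [
    (lambda bel: ("clear", "B") in bel, "pick-up B"),
    (lambda bel: True, "wait"),
]

world = {"stacks": [("B", "A")]}   # B sits on A, so B itself is clear
blocks = {"A", "B"}

beliefs = infer(perceive(world), blocks)
action = next(act for cond, act in skills if cond(beliefs))
print(action)
```

The real architecture adds hierarchical skills, goal memory, problem solving when no skill applies, and learning of new skills from the resulting traces.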

    Problem-solving
Initial state: block 3 on block 2 on block 1 (one stack). Goal state:
blocks 2, 1, and 3 each on the table.

Problem-solving trace: to achieve (clear 1) the solver needs
(unstacked 2 1); that requires (unstackable G-R 2 1), which needs
(clear 2); (clear 2) requires (unstacked 3 2), which in turn requires
(unstackable G-R 3 2).
      Problem-solving and Learning
Same initial state (3 on 2 on 1) and goal state (2, 1, 3 on the table).

4 new skills are learned from the trace, with constants generalized to
variables: (clear ?1) achieved via (unstacked ?2 ?1); (unstacked ?2 ?1)
via (unstackable ?G-R ?2 ?1); (unstackable ?G-R ?2 ?1) via (clear ?2);
and (clear ?2) via (unstackable ?G-R ?3 ?2) and (unstacked ?3 ?2).
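The unstacking plan in the trace can be reproduced by a tiny search. The sketch below uses breadth-first search as a simple stand-in for ICARUS's means-ends problem solving, with a single "unstack the top block onto the table" move; the state encoding is invented for this example.

```python
from collections import deque

# Blocks world matching the trace: from the stack 3-on-2-on-1, reach
# the state where every block sits alone on the table.

def successors(state):
    # state: tuple of stacks, each stack a tuple listed bottom-to-top.
    for i, stack in enumerate(state):
        if len(stack) > 1:                    # something to unstack
            rest, moved = stack[:-1], (stack[-1],)
            new = [s for j, s in enumerate(state) if j != i]
            new.append(rest)
            new.append(moved)                 # top block goes to the table
            yield ("unstack", stack[-1], stack[-2]), tuple(sorted(new))

start = (("1", "2", "3"),)                    # 3 on 2 on 1
goal = tuple(sorted((("1",), ("2",), ("3",))))

frontier = deque([(start, [])])
seen = {start}
plan = None
while frontier:
    state, path = frontier.popleft()
    if state == goal:
        plan = path
        break
    for move, nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append((nxt, path + [move]))

print(plan)
```

The resulting plan, unstack 3 from 2 and then 2 from 1, matches the order in which the trace discharges its (unstacked …) subgoals; ICARUS would additionally store the generalized decomposition as new skills.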
   Cognitive Cycles in ICARUS

[Diagram: perception fills the perceptual buffer; conceptual inference,
drawing on long-term conceptual memory, updates short-term belief
memory; skill retrieval and selection, together with problem solving and
skill learning over long-term skill memory and short-term goal memory,
drives skill execution through the motor buffer into the environment]
Cognitive Grasping
Objective: develop a new cognitive approach to object handling that is
robust to unknown changes in environments and objects.

Issues
    Where and how to grip?
    How to move the arm?
    How to grasp the object using a gripper or multi-finger hand?
    How to learn the way of grasping?
    How to control multiple fingers for balancing and compliance?
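The first issue, "where and how to grip", can be made concrete as grasp-candidate scoring: enumerate candidate grasp poses on the object and rank them. The scoring terms below (approach-vector alignment, distance to the centroid) are assumptions for illustration, not the method of the talk.

```python
# Illustrative sketch: score candidate grasp poses and pick the best.
# Each candidate is (position, unit approach vector); the heuristic
# prefers near-centroid, near-vertical (top) grasps.

import math

def score_grasp(candidate, centroid):
    """Higher is better: close to the centroid, approach close to vertical."""
    position, approach = candidate
    dist = math.dist(position, centroid)
    return approach[2] - 0.5 * dist   # approach[2]: z of the unit approach vector

def best_grasp(candidates, centroid):
    return max(candidates, key=lambda c: score_grasp(c, centroid))

centroid = (0.0, 0.0, 0.1)
candidates = [((0.0, 0.0, 0.2), (0.0, 0.0, 1.0)),   # top grasp above the centre
              ((0.1, 0.0, 0.1), (1.0, 0.0, 0.0))]   # side grasp, off-centre
best = best_grasp(candidates, centroid)
```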
Cutkosky’s Grasp Hierarchy
    Grasp Affordance (Neurophysiology)

[Figure: AIP (Anterior Intraparietal Sulcus) and affordance-related
areas marked on the monkey (macaque) cerebral cortex and the human
cerebral cortex.]

•   Fagg and Arbib (1998), H. Sakata (1997), M. Matelli (1986): the monkey AIP
•   J. Culham (2004), E. Chinellato (2006): the human AIP
•   J. J. Gibson (1986): an affordance is the set of action possibilities
    that an object offers to an organism in an environment.
     Grasp Affordance: The FARS Model

[Figure: the complete FARS model (1998, Fagg-Arbib-Rizzolatti-Sakata).
In the parietal cortex along the "how" (dorsal) stream, the visual
cortex feeds PIP (shape, size, orientation) and VIP (position); AIP
performs the object/grasp transform and supplies affordances to premotor
F5 (grasp type), working alongside F4 (arm goal position), F6 (inferior
premotor cortex), and F2 (abstract stimuli). The "what" (ventral) stream
runs through IT. F5 drives MI (muscle assemblies), which sends motor
commands to the hand; SI (elementary sensory features), SII (sensory
hyperfeatures), and A7 (internal model) return sensory information and
expectations.]
                                                                        46
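The information flow sketched in the FARS figure, object features to AIP affordances to an F5 grasp choice, can be caricatured in a few lines. The feature-to-affordance table and the task-bias selection below are illustrative assumptions, not the model's actual neural dynamics.

```python
# Toy sketch of the FARS flow: PIP-style object features feed AIP, which
# proposes grasp affordances; F5 then selects one grasp type, biased by
# task context (the prefrontal input in the figure).

def aip_affordances(features):
    """Map object shape/size features to candidate grasp affordances."""
    candidates = []
    if features["shape"] == "cylinder":
        candidates.append("power-grasp")
    if features["size"] < 0.03:        # small objects afford precision grips
        candidates.append("precision-pinch")
    return candidates

def f5_select(candidates, task_bias):
    """F5 picks one grasp type, preferring the task-biased candidate."""
    for grasp in candidates:
        if grasp == task_bias:
            return grasp
    return candidates[0] if candidates else None

features = {"shape": "cylinder", "size": 0.05}
grasp = f5_select(aip_affordances(features), task_bias="power-grasp")
```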
Grasp Affordance: Control Schema

[Figure: control schema for grasping, including the ventral stream.]

• E. Oztop and M. Kawato (2009), The Control of Grasping
 CoRoS in Hanyang University

CoRoS: Two Practical Cognitive Robot Systems

•   CASPer (Context-adaptive Action-coupled Synthetic Learning &
    Perception System for Robots)
•   RoKIS (Robot-centered Knowledge Inference System)

[Figure: CASPer's perception-learning flow, organized after the 6 layers
of neo-cortex, yields affordances; RoKIS is built on the Robot-centered
Knowledge Inference Framework (RoKIF), which buffers recognition results
("Sapience") over intervals and performs dependable instantiation of
object and spatial relations.]
                                                                        51
    KIST Strategy - Concept

[Figure: object features (ID, images, 3-D info, features, touch info,
etc.) feed an object-affordance module (neural net, ontology, …) that
outputs grasp/manipulation information; a sensory-motor model, combined
with real-time compliant control, generates motions for grasp and
manipulation.]

• Where can I grasp?
• Which part is good to handle?
• How to learn the affordance?

• Visuo-motor model
• Vision-dual-arm model
• How to learn the model?
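In its simplest form, the neural-net mapping from object features to an affordance could be a single logistic unit trained on labeled examples. A toy sketch, with made-up features (width, weight) and labels; nothing here reflects KIST's actual network or data.

```python
# Minimal stand-in for the "Neural Net" affordance box: one logistic
# unit trained by stochastic gradient descent on toy examples of
# [width (m), weight (kg)] -> graspable with one hand (1) or not (0).

import math

def train(samples, epochs=500, lr=0.5):
    """SGD on a single logistic unit; returns weights and bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                      # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def graspable(x, w, b):
    return w[0] * x[0] + w[1] * x[1] + b > 0.0   # boundary at p = 0.5

samples = [([0.05, 0.2], 1), ([0.06, 0.3], 1),    # small, light objects
           ([0.30, 2.0], 0), ([0.25, 1.5], 0)]    # large, heavy objects
w, b = train(samples)
```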
Learning
Self-learning
Develop a new cognitive approach to object handling that is robust to
unknown changes in environments and objects.
    Walking
    Visual Motor Map
    Knowledge
    …

Task Skill Learning by
   Teaching
   Imitation
   Physical interaction
   Human-robot interaction

Learning Engine & Framework
   Neural Network
   Bayesian Network
   …
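As a tiny illustration of the Bayesian-network item in the list above: a two-node network relating grasp success to a tactile slip observation, used to update the robot's belief after sensing. All probabilities are made-up assumptions for the sketch.

```python
# Two-node Bayesian network: grasp_success -> slip_sensed.
# Sensing slip lowers the belief that the grasp succeeded.

p_success = 0.7                 # prior P(grasp succeeded), assumed
p_slip_given_success = 0.1      # slip rarely sensed on a good grasp
p_slip_given_failure = 0.8      # slip usually sensed on a bad grasp

def posterior_success(slip_sensed):
    """Bayes' rule: P(success | tactile slip observation)."""
    like_s = p_slip_given_success if slip_sensed else 1 - p_slip_given_success
    like_f = p_slip_given_failure if slip_sensed else 1 - p_slip_given_failure
    num = like_s * p_success
    return num / (num + like_f * (1 - p_success))

belief = posterior_success(slip_sensed=True)   # belief drops below the prior
```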
   Task Skill Learning
       An object-oriented human task behavior modeling approach is proposed.
       An object-oriented task representation approach is implemented.

[Figures: demonstrations of "Open and close a drawer" and "Put a cup
into a trash bin".]
Task Skill Learning
  How to use tools and/or home appliances?
  How to pack a box?
  How to open and/or close a valve?
  How to manage a room?
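An object-oriented task representation of the kind described above can be sketched as primitives attached to the objects they manipulate, so one plan template transfers across object instances. The `Primitive` class, the primitive names, and the `put_into` template are assumptions for illustration, not KIST's implementation.

```python
# Sketch of an object-oriented task representation: a task is a sequence
# of primitives, each defined on the object it acts upon.

from dataclasses import dataclass

@dataclass
class Primitive:
    action: str        # e.g. "grasp", "move-over", "release" (assumed names)
    obj: str           # object the primitive is defined on

def put_into(item, container):
    """Object-oriented plan template for 'put <item> into <container>'."""
    return [Primitive("grasp", item),
            Primitive("move-over", container),
            Primitive("release", item)]

# Instantiating the template with the demonstrated task's objects.
plan = put_into("cup", "trash-bin")
```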
For a happy future life,
  living together with
      smart robots



      Thank you!

				