University of Southern California
CSCI 588 Specifications and Design of User Interface Software

              Class Project Final Report

Project Title:

Handy Facebook – Hand gestures to manipulate social networking

Assigned Team Number: ____6______

Team members (4 members):

1). Name: Jaskaran Uppal______
    Last 4 digits of Student ID _____0419____________

    Major: Computer Science______
    Location (campus or off campus) _On Campus_____

2). Name: Sandeep Mallela_____
    Last 4 digits of Student ID ___9769 ___________

    Major: Computer Science_______
    Location (campus or off campus) _On Campus_________

3). Name: Rahul Perhar _______
    Last 4 digits of Student ID _4562______________

    Major: Computer Science________
    Location (campus or off campus) __On Campus ________

4). Name: Darpan Dhamija________________________
    Last 4 digits of Student ID _5550_ ____________

    Major: Computer Science________________________
    Location (campus or off campus) _On Campus________

Date ___12/10/2009_________________
Project status change

1) Is there anything changed from your original proposal and/or
   progress report?

2) If answer “Yes”, clearly state what has been changed.

  Project title: Changed from iRun, an iPhone application to
  Hand Recognition application
  Approach: From iPhone to OpenCV
  Platform: OpenCV, Java

3) Have you finished all the tasks that you targeted in your
   original proposal and/or progress report?
  ___Yes___

Project Report

1. Project Objective:
  The objective of the project is to recognize hand gestures made
  in front of a webcam. Upon successful recognition of a gesture,
  the system performs an action (such as chat, wall post, liking a
  post, or poking a friend) on a social networking website such as
  Facebook.

2. Problem Statement:
  We have many social networking websites these days to bring
  friends together, but our aim is to maximize ease of access by
  letting the user reach a friend's profile without using the
  mouse. We designed a system that recognizes hand gestures to
  manipulate the social networking site.

3. System design and development:
  To create the system we used the OpenCV libraries and Java to
  code the detection of hand gestures. We then used HTML and
  JavaScript to create the user interface.

  Task Analysis:
    0: Recognize the gestures
    1: Use OpenCV libraries and Java classes to create code
    that captures the hand gestures.
    2: Create the gestures
         2.1: Run the program to start the webcam
         2.2: Make the gestures in front of the camera
         2.3: Apply the Canny edge detection algorithm to the image
         2.4: Capture the images and store in .jpg format
    3: Train the system using Neuroph, giving as input the
    captured images.
    4: Upon training, create a training set and include it as a
    plug-in to your program code.
    5: Execute the code with the plug-in and make gestures in
    front of the camera; upon correct recognition, the
    corresponding output is displayed.
    6: Now the UI can be integrated with the code.
    7: A home page displaying all the friends is shown
         7.1 Select any friend by making a gesture
         7.2 Chat with the friend
         7.3 Write on the wall
         7.4 Like a post
         7.5 Poke a Friend
    8: Exit the System
    Plan 0: Perform 1-2-3-4-5-6-7 in that order
    Plan 1: Perform 1-3-2-4-5-6-7, if training of the system
    is done first.
    Plan 7: Perform 7.1, 7.2, 7.3, and 7.4 in any order
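Task 2.3 relies on Canny edge detection, whose first stage computes intensity gradients. In the actual project this was OpenCV's built-in routine; the pure-Java sketch below is only an illustration of the Sobel gradient-magnitude step that underlies it, applied to a grayscale image given as a 2-D array:

```java
// Sketch of the gradient stage underlying Canny edge detection (task 2.3).
// Illustration only -- the project itself used OpenCV's Canny implementation.
public class SobelSketch {
    // 3x3 Sobel kernels for horizontal (GX) and vertical (GY) gradients
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    // Returns the gradient magnitude at each interior pixel of a grayscale image.
    static double[][] gradientMagnitude(int[][] img) {
        int h = img.length, w = img[0].length;
        double[][] mag = new double[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        gx += GX[ky + 1][kx + 1] * img[y + ky][x + kx];
                        gy += GY[ky + 1][kx + 1] * img[y + ky][x + kx];
                    }
                }
                mag[y][x] = Math.hypot(gx, gy);
            }
        }
        return mag;
    }

    public static void main(String[] args) {
        // A tiny image with a vertical edge: left half dark, right half bright.
        int[][] img = {
            {0, 0, 255, 255},
            {0, 0, 255, 255},
            {0, 0, 255, 255},
            {0, 0, 255, 255},
        };
        double[][] mag = gradientMagnitude(img);
        // The edge column responds strongly; flat regions respond weakly.
        System.out.println(mag[1][1] > 0 ? "edge response" : "flat");
    }
}
```

Canny adds further stages on top of this (non-maximum suppression and hysteresis thresholding), which OpenCV handles internally.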

Hand Gestures:

Canny Edged Images:

4. System functionality
The system uses "Neuroph", a neural network tool, to train the
recognizer; captured gestures are matched against the trained network.
  1) Getting Started:
     In this step we specify the images that should be recognized, the
     image size/resolution, and the color mode.
     Image dir: directory which contains the images that should be
     recognized.
     Junk dir: directory with images which will help to avoid (or at
     least reduce) false recognition.

2) Create neural network:
  Network label - The label for the neural network, which is useful
  when you create several neural networks for the same problem, and
  you're comparing them.

  Transfer function - This setting determines which transfer function
  will be used by the neurons. In most cases you can leave the default
  setting 'Sigmoid', but sometimes using 'Tanh' can give better
  results.

  Hidden Layers Neuron Counts - This is the most important setting;
  it determines the number of hidden layers in the network and the
  number of neurons in each hidden layer.
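The difference between the two transfer functions can be seen outside Neuroph. This standalone sketch (our own illustration, not the Neuroph API) shows why the choice matters: sigmoid squashes activations into (0, 1), while tanh squashes them into (-1, 1):

```java
// Illustration of the two transfer functions offered in the network settings.
// Neuroph implements these internally; this sketch only shows their shapes.
public class TransferFunctions {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    static double tanh(double x) { return Math.tanh(x); }

    public static void main(String[] args) {
        for (double x : new double[]{-2, 0, 2}) {
            System.out.printf("x=%5.1f  sigmoid=%.3f  tanh=%.3f%n",
                              x, sigmoid(x), tanh(x));
        }
        // sigmoid(0) = 0.5, tanh(0) = 0: the two functions center the output
        // range differently, which is why one can outperform the other on a
        // given training set.
    }
}
```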

3) Train the system: To train the network, select the training set
  from the list and click the 'Train' button. This will open the dialog
  for setting learning parameters. Use the default learning settings and
  just click the 'Train' button.
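Under the hood, Neuroph trains such networks with backpropagation-style weight updates over the training set. As a miniature, self-contained illustration (not the actual Neuroph code), the sketch below trains a single sigmoid neuron on a toy two-input OR problem by gradient descent:

```java
// Miniature illustration of what the 'Train' button does: repeated passes
// over the training set, nudging weights to reduce the output error.
// A single sigmoid neuron learning logical OR -- not the project's network.
public class TinyTrainer {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // w[0], w[1] are input weights; w[2] is the bias.
    static double[] train(double[][] inputs, double[] targets,
                          double rate, int epochs) {
        double[] w = {0.0, 0.0, 0.0};
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < inputs.length; i++) {
                double out = sigmoid(w[0] * inputs[i][0]
                                   + w[1] * inputs[i][1] + w[2]);
                // Gradient of squared error through the sigmoid
                double delta = (targets[i] - out) * out * (1 - out);
                w[0] += rate * delta * inputs[i][0];
                w[1] += rate * delta * inputs[i][1];
                w[2] += rate * delta;
            }
        }
        return w;
    }

    public static void main(String[] args) {
        double[][] in = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] target = {0, 1, 1, 1};           // logical OR
        double[] w = train(in, target, 0.5, 5000);
        for (double[] x : in) {
            System.out.printf("%s -> %.2f%n", java.util.Arrays.toString(x),
                              sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]));
        }
    }
}
```

The real network has hidden layers and image pixels as inputs, but the training loop follows the same idea at a larger scale.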

User Interface:

This is how the UI will look to the user.

Home page:

This will be displayed as the first page on the screen. The user
has the option of selecting any one of the profile pages by
making one of the gestures.
When a gesture is successfully recognized, a profile page is
shown, as in the next screenshot. For example, if the user
makes a "TWO", then Sandeep's profile page is opened.

Personal profile page:
This shows a Facebook profile page of a friend of the user. It
shows the photo, the wall post text area, the latest wall posts,
information, and friends. The user can now make gestures in
front of the camera to perform any of the following tasks: chat,
wall post, liking a post, or poke. In this case, the user made a
"TWO" and the profile page of Sandeep is opened.

Chat Window:
The chat window pops up when a user makes a "ONE" gesture. It
shows all the friends that are online. Since this is just a
working low-fi prototype, we have not implemented real-time
chat; it just shows the chat window with the friends who are
online.

Write on Wall:
The wall post box is shown when a user makes a "TWO" gesture.
The user can type anything and it will be displayed on the
friend's wall. We have successfully implemented this. Besides
this, a notification email is also sent to the person on whose
wall the post is made.
Like a Post:
The user has the option of liking any post of his friend. The
user has to make a "THREE", and the post is then shown as
"liked" by the user. This was successfully implemented upon
successful recognition of the gesture.

Poke a Friend:
The poke window is shown when a user makes a "FOUR" gesture.
It is a modal window which shows that the user has poked his
friend, and it automatically closes after 3 seconds, just like
in Facebook. We have successfully implemented this. Besides
this, a notification email is also sent to the person who is
being poked.
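The profile-page mapping described above (gestures "ONE" through "FOUR") amounts to a simple dispatcher from the recognized gesture label to a UI action. This is only a sketch; the project's actual UI wiring is in HTML/JavaScript, and the action strings here are illustrative:

```java
// Sketch of routing a recognized gesture label to a profile-page action.
// The labels follow the mapping in the report; the action strings are
// placeholders, not the project's actual UI calls.
public class GestureDispatcher {
    static String dispatch(String gesture) {
        switch (gesture) {
            case "ONE":   return "open chat window";
            case "TWO":   return "open wall post box";
            case "THREE": return "like the latest post";
            case "FOUR":  return "poke the friend";
            default:      return "unrecognized gesture";
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch("FOUR")); // prints "poke the friend"
    }
}
```

Keeping the gesture-to-action mapping in one place also makes it easy to add new gestures later, one of the stated goals for future work.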
Email Notification to the user:

An email is sent to the person whose profile the user visits.
We used Java SMTP to implement this. A notification email is
sent the moment the person has anything written on his wall or
is poked.
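The delivery itself used Java's SMTP support; as an illustrative sketch that stops short of the actual sending step, composing the two notification types might look like the following. The subject and body texts here are our own placeholders, not the project's actual messages:

```java
// Sketch of composing the two notification emails (wall post and poke)
// before handing them to the SMTP layer. Message texts are placeholders.
public class NotificationComposer {
    // Returns {subject, body} for the given event and acting user.
    static String[] compose(String event, String fromUser) {
        switch (event) {
            case "wall_post":
                return new String[]{
                    fromUser + " wrote on your wall",
                    fromUser + " posted on your wall. Log in to see the post."};
            case "poke":
                return new String[]{
                    fromUser + " poked you",
                    "You have been poked by " + fromUser + "."};
            default:
                throw new IllegalArgumentException("unknown event: " + event);
        }
    }

    public static void main(String[] args) {
        String[] msg = compose("poke", "Jaskaran");
        System.out.println("Subject: " + msg[0]);
        System.out.println(msg[1]);
    }
}
```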

The limitations of the system include the following:
    The error rate in gesture recognition is persistent.
    It is a low-fi prototype, so it can be scaled to a hi-fi one.
    Gesture recognition is dependent upon the available light.
    We have implemented this on a white background only.
    More gestures can be added to make things easier for the user.

5. Results and user evaluation
Home page:

Personal profile page:

Chat Window:

Write on Wall:

Like a Post:

Poke a Friend:

Email Notification to the user:

6. Conclusion:

We would like to conclude by saying that this is a concept that
can be used as a hi-fi prototype on a large scale to make
accessibility easier for users. The user can reduce the use
of mouse and keyboard, provided we use gesture recognition for
writing text too.

In this project, I (Jaskaran Uppal) was responsible for
recognition of the hand gestures using the edge detection
algorithm and Java classes, along with Rahul Perhar. Sandeep
Mallela and Darpan Dhamija were responsible for the UI part. I
eventually integrated the two modules.

For future work, we can add a number of gestures to the system
to make things easier. This project was done on a low-fi basis,
so the idea can be scaled to a hi-fi project. Besides this, we
can train the system better so as to reduce the error rate. We
can also implement real-time chat in the system. This
application can be used with Google Maps and other applications
to provide further ease of use.

7. Comments/issues/complaints/suggestions
This course has been a great learning experience for us, as we
were exposed to some new technologies in the field of user
interfaces. The Professor showed some amazing videos that were
an eye-opener, besides providing an insight into the latest
technology. For our project we would like to thank our TA for
giving us the idea as to where we should head with our project.
The Professor and TA must be commended for their support.

