AIR+
An Expansion on AIR: Advancement through Interactive Radio

                 Fabian Kidarsa, Kristin Lee, Remington Furman, Brian Wolfe,
                                      Zachary Wachtveitl
Department of Computer Science and Engineering
University of Washington, Seattle, WA




Abstract. Women in developing nations have a history of being suppressed and isolated
from one another. This isolation reduces their access to education and their effectiveness
in rural life, and the resulting lack of knowledge can lead to further repression of women
by the men of their communities, a self-reinforcing cycle. Allowing women isolated by
geography or society to interact with one another through a technological proxy can
greatly increase their quality of life. By providing a distributed audio forum, the AIR+
project enhances the power of women in developing countries by allowing women in
distant communities to interact and share ideas. The AIR+ device could help revolutionize
women's place in developing societies by allowing them to share their problems and ideas,
build virtual communities that span large geographic areas, and work together to promote
awareness of their ideas.

1 Introduction

Villagers in sub-Saharan Africa are subject to extreme poverty. This poverty causes
many social issues that affect the quality of life of women in these areas. The presence
of HIV/AIDS and famine in communities often isolates women from others because the
duty of caring for the ill usually falls to them. Often, this situation is exacerbated by the
men of these communities suppressing the women. Sterling and Bennett argue that a
significant cause of this suppression is women's continuing lack of education and
information.
   For instance, picture a woman in a small village whose child was sick with some
illness and recovered thanks to a remedy she provided. Now assume a woman in another
small village has a child with the same illness, but she does not know what remedy or
care to provide. These two women do not know each other and may never meet, yet
their villages border the same market.
    Wouldn't it be nice if these women had devices that transferred sound recordings to
one another simply by traveling within the vicinity of another device? In this scenario,
the woman with the known remedy could create a topic, say "Disease and illness
remedies," and record a reply to her own topic describing her child's illness and the
remedy. When she carries the device to the market and moves into the vicinity of the
other woman, who carries the same device and has the sick child, the topic and reply
would be transferred. The other woman could then listen to the reply, realize her child
suffers similar symptoms, and apply the same remedy.
    This scenario is just one of many applications of the AIR+ device, which was derived
as an extension of the Advancement through Interactive Radio (AIR) project developed
in the Department of Computer Science at the University of Colorado by Revi S.
Sterling, John O'Brien, and John K. Bennett. The original project was designed to
provide a medium of communication among women of the villages of sub-Saharan
Africa. AIR was designed to let women share ideas and knowledge across a larger
community without requiring expensive infrastructure.


2 Related Work

Several approaches have been investigated to facilitate the communication of
information among illiterate populations in developing countries.
    The MobilED [2] project has investigated a different approach to this problem. The
project utilizes the existing mobile phone system to create an audio knowledge repository.
The system accepts SMS messages that query the server for a MediaWiki entry. The
server then calls the MobilED user and reads them the MediaWiki article using voice
synthesis. During the call, users can navigate the article using the cell phone's keypad.
They may also add audio annotations to any section of the article. While MobilED
allows its users to annotate a particular section of an article, it does not provide an
interface for adding new topics to the knowledge database. The information available to
MobilED users is also bounded by what is available on the server.
    The AudioWiki [12] project has expanded the idea of providing information in audio
format by allowing its users to create their own content, in addition to annotations, in the
knowledge database. AudioWiki users can start their own topics, record their own
content, and store the recordings on the remote server. To eliminate the dependence on
querying via SMS messages, the group designed the system to include a voice
recognition system for navigation. This approach effectively creates a medium through
which illiterate and computer-illiterate users can access information. However, both
MobilED and AudioWiki store their information on a centralized server, so users
without access to the mobile phone network cannot use these systems.
    Other approaches facilitate the communication of information using a middleman
model. For example, the radio browsing project [9] connects the audience of a community
radio station to the internet via a radio program that broadcasts web articles in response to
the audience's questions. Another widely used model is the internet intermediation
model [5][6], which provides the illiterate and computer-illiterate population access to
information on the internet with help from trained staff at internet kiosks.
    The AIR project [11] developed a device that creates a channel for women in
developing countries to have a voice on community radio. The small, palm-sized
device allows its user to record voice clips and transfers them to the community radio
station via one or more hops through other AIR devices. The radio station then selects the
clips of interest and broadcasts them. The AIR devices were deployed and evaluated in
several communities in Kenya. While the idea was widely welcomed in the study, the AIR
device relies on a centralized system to process the clips: if a clip is not selected for
broadcast at the radio station, the user's opinion will not be heard by other users.
   The device was built around an ARM processor. It contains a USB flash drive for
storage and a USB 802.11 adapter for wireless networking. The device is low-power and
handheld, powered by 4 AA batteries. The user interface consists of a single button, a
microphone, and three status LEDs (green, yellow, and red) that provide feedback on
device status. To record, the user presses the button and speaks into the microphone; her
voice is filtered and compressed for storage on the USB flash drive. Messages are saved
with metadata identifying the originating device and the timestamp of when the message
was created. When devices come in contact with one another, they exchange metadata
and then have the option of pushing or pulling messages between each other. The
decision whether or not to transmit a message is made by a probabilistic adaptive
algorithm whose main goal is to have messages reach the radio station, where a station
employee selects, sorts, and broadcasts clips of interest to the surrounding community.


3 Project Purpose

The AIR+ project is designed to improve upon the ideas of the original AIR project by
reducing the potential for censorship and bias in the radio networks. The current system
allows anyone to get airtime on the radio, but each individual clip still needs to be
approved by the people who run the radio station, which limits discussion of sensitive
topics. The AIR+ device instead relies on users rating the clips that interest them to
decide what gets played, and allows each device to play back clips individually. The
devices can thus serve as distributed audio blogs that are not subject to the censorship of
third-party institutions.


4 Approach

The AIR+ device was implemented using Maemo, a development environment for the
Nokia N800 Linux tablet (see Fig. 1). The software was written entirely in C to run on
the Nokia N800 tablet's Linux kernel. The code is divided into four main areas: sound,
networking, the GUI, and the selection/replacement algorithm. The state of the device is
held in a database, which keeps track of users' clip ratings and the clips currently present
on the device. These areas are explained below by following a single clip through its
entire cycle, from being recorded on one device to being listened to by another user on
another device, as illustrated in Fig. 2.
Fig. 1: Stock Image of a Nokia N800 Internet Tablet. This was the device that served as a prototype
for the AIR+ device.


   Before anything else, a clip must be recorded. Recording and playback are
implemented using the GStreamer sound libraries, which can perform a multitude of
encoding and decoding tasks. We chose the Ogg Speex codec because it is a free codec
that provides a high compression ratio and is designed for recorded speech, such as
telephone conversations. GStreamer is very powerful, arguably too powerful for an
embedded environment. Using such a powerful library, however, allows for efficient
prototyping, because many different formats can be tried with very few code changes,
allowing the recording quality to be balanced against the recording size.
Fig. 2. An illustration of AIR+ usage. A user records an audio clip. After negotiation, the audio clip
is transferred to a different user. This user can then listen to the clip.

   After a clip has been recorded and the file stored on the device, the clip must be
transferred to other devices so that it can be heard by other people. To facilitate file
transfer in the absence of a centralized system, AIR+ uses a store-and-forward model
for network communication. In this model, devices detect each other's presence by
sending and checking for periodic UDP beacon packets. AIR+ will start a TCP connection
with a neighboring device if it has not connected to it recently. After the connection is
established, the devices exchange metadata describing the clips available on disk, then
exchange files based on a heuristic that uses this metadata.
   The metadata collection, storage, and use are handled by the selection and replacement
functionality of the device. Users rate clips either positively or negatively when they listen
to them and this information is stored in a file on the device. By remembering simple
correlations with other users, the device attempts to predict which clips a user would want
to listen to and prioritizes the acquisition of these clips. The current implementation of
this functionality utilizes a database. The schema of this database can easily be changed to
implement more complex functionality and include more information (such as the length
of time that each clip is played) which could provide more accurate predictions of users’
interests.
   The GUI on the AIR+ device simulates an interface of seven physical buttons and three
LEDs. It has been used to test user interfaces in order to find one that would be intuitive
and worthwhile to actually build. It was designed to be easily replaceable with the pin
interrupts that would be used on an actual embedded device.
   Due to limitations in disk space and in the connection time between two devices, AIR+
prioritizes downloads and replaces clips based on users' past listening experience. AIR+
relies on users to indicate explicitly whether they liked the content by giving the clip a
"thumbs up" or a "thumbs down". This rating is stored in the metadata and is the basis
of the selection and replacement algorithm used by AIR+.
   The sound clips are organized into Topics and Replies, as shown in Fig. 3 below. Each
Topic is a folder containing a sound clip file for the Topic itself, and each Reply to that
Topic is stored as a sound clip file in the same folder. The structure was designed so that
users can easily navigate through the sound clips: a user who wishes to hear a specific
Topic navigates to it and listens to the Replies that she or other users have recorded.




Fig. 3. Audio Clip Structure. A depiction of how the audio clips are organized in AIR+ device.
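The on-disk layout implied by Fig. 3 might look like the following sketch (the folder and file names here are illustrative, not taken from the actual implementation):

```
clips/
    topic_001/
        topic.spx        (the Topic's own recording)
        reply_001.spx
        reply_002.spx
    topic_002/
        topic.spx
```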


   Users also have the option of rating a clip "thumbs up" or "thumbs down" to indicate
whether they liked its content. This rating is stored along with the file names and is thus
transferred to other devices along with the metadata, allowing them to make informed
decisions about which clips to request.


5 Implementation

5.1 Sound

Sound recording and playback are implemented with the GStreamer library, which is the
preferred framework for multimedia applications on the Nokia N800. GStreamer provides
an easy and abstract method to implement different methods for recording, compressing,
storing, and playing sound and video files. For this project, sound clips recorded by the
user through the built-in microphone are compressed with the freely available Speex
codec, which is designed for speech compression. The high compression from Speex
allows a large number of recordings to be stored on disk, and also reduces the time to
transmit clips between devices.
   The use of GStreamer provided for rapid prototyping of the sound functionality
because of its abstractions of audio devices and processors. GStreamer processes sound
with functional units called pipelines, which consist of sources and destinations for audio
data with optional filters that process the audio in between. Sources and destinations
include speakers, microphones, and audio files. Filters include codecs for compressing
and decompressing audio, effects, and other processing units.
   A simplified wrapper API for playing and recording compressed Speex files using
GStreamer was created for this project. The routines were designed for the simple
functionality required for the AIR+ device: begin and end recording to a file, play a file,
and stop playback. In addition, the library provides an optional notification when files
complete playback, to allow for automatic playback of a list of files. This simple
functionality allowed for easy implementation of recording and playback from the GUI
code.


5.2 Networking

The AIR+ network is a purely store-and-forward network. After a clip is recorded, it is
stored until another device is discovered; the clip can then be forwarded to that device.
Another option would have been to implement full routing among devices, so that clips
could pass seamlessly from one device to another through an intermediate device without
needing to be fully stored on the intermediate. We chose not to implement this because
we assumed that the devices would be sparsely distributed and that contacts between
devices would be brief and involve few devices, leaving very few situations in which
routing would be useful.
   All of the networking code for the AIR+ is written using POSIX sockets programming
in C and is multithreaded using POSIX threads. There are three main steps for
communication: discovery, negotiation, and transfer. The entire process of connecting and
sharing files is shown in Fig. 4. All the networking is based on ad hoc networks, which
allow the devices to connect directly to each other, but this requires the devices to send
packets to discover the presence of neighbors. For this purpose, UDP broadcast packets
are sent by each device at regular intervals with the source’s IP address and MAC address.
When a device receives this beacon packet from another device, it spawns a new thread
which initiates a TCP connection with the sender of the UDP packet, beginning the
negotiation stage. The device initiating the TCP connection will from here on be called
the client, while the receiver is the host.
   Fig. 4. Block Diagram of the Network Functionality
    1) The host periodically sends out UDP beacon packets.
    2) Upon receiving a UDP packet from a host with which it has not had a recent
        connection, the client spawns a new TCP Client Thread.
    3) This thread initiates a TCP connection, which is accepted by the host's TCP Server
        thread.
    4) The TCP Server thread spawns a new TCP Host Thread, which handles file request
        servicing.
    5) Upon the setup of a new connection, the TCP Host Thread sends the client the
        database of file names and metadata.
    6) The TCP Client Thread requests a file from the file list if it has a good rating and is
        not already present on the device.
    7) The host sends the requested file.
    Steps 6 and 7 are repeated until the connection is broken or there are no more files that
    the client desires, after which the connection is closed.

    After a TCP connection is established, the server sends a file list to the client. In the
current version, this is a database file containing file names and user ratings for each file.
The client compares this list to the list of files it already possesses and runs a selection
algorithm on the metadata; from the results, it requests the desired files one at a time.
These are sent by the server to the client and saved to the client's disk. When the transfer
is finished, the connection is closed and the connection time is remembered so that the
device does not connect to the same device again within the next five minutes, in order to
preserve battery life. Each device can behave as both host and client, so file transfer is
symmetric for all AIR+ devices.
5.3 Graphical User Interface

The graphical user interface uses the GTK+ and Hildon libraries as its development
environment. Both libraries are preferred and supported on the Nokia N800. GTK+ has
the necessary support for developing the user interface prototype, such as button and
image widgets and user interaction via callbacks.
   The initial design decision for AIR+ was to make the user interface as similar as
possible to the original AIR design. This involved adding three additional buttons and
keeping the three LEDs. The added buttons are for playing the previous and next sound
clip, starting a recording, and switching between Topic and Reply. However, this decision
proved unfavorable, since it increased the complexity of the design and did not fit the
form factor of the Nokia N800: the N800 is a handheld device with a horizontal default
orientation, while the original AIR has a vertical default orientation. Thus, AIR+ has
more buttons than the original AIR and a different orientation, in order to conform to the
design of the Nokia N800.
   The current implementation of AIR+ consists of (1) three LEDs, (2) seven buttons, and
(3) an indicator marking the location of the microphone. A depiction of the AIR+ user
interface prototype can be viewed in Fig. 5. The three LEDs indicate whether the device is
recording, or playing either a Topic or a Reply sound clip. The seven buttons are each
about 100 px by 100 px, a size well suited to the 800 px by 480 px touch screen of the
Nokia N800. As a result, the buttons are roughly the size of an adult thumb, so the Nokia
N800 stylus is not necessary when interacting with the user interface. The buttons are
used to (1) choose whether to play a Topic or a Reply, (2) play the next or previous Topic
or Reply, (3) play or stop the sound clip, (4) record a new Topic or Reply, and (5) rate a
Reply clip. All buttons have corresponding graphics, made using GIMP. They are
designed to follow the modalities of a radio, such as play, stop, fast-forward, and rewind.
The other buttons fall outside the modalities of a regular radio, such as choosing to play a
Topic or a Reply, or rating a Reply. Finally, the microphone indicator is an additional
icon that lets users know where the microphone is located on the Nokia N800, helping
them improve the quality of the recorded sound clip.
    The original AIR has its microphone placed near the rest of the user interface, while
AIR+ uses the Nokia N800 touch screen to ease prototyping. In addition, the Nokia N800
is equipped with speakers on both sides of the touch screen. The sound quality of clips
recorded on the Nokia N800 is good enough, although some issues can arise from the
gain during recording; these can be solved by keeping some distance from the
microphone.
Fig. 5. AIR+ GUI Prototype

    As mentioned above, AIR+ has three LEDs. If the device is playing a Topic sound
clip, the Topic LED lights up; the same goes for a Reply sound clip. The Topic and Reply
LEDs light up only one at a time, never simultaneously, to avoid confusion. The record
LED lights up only when the device is recording a sound clip.
    The buttons in AIR+ are as follows: (1) play/stop, (2) record, (3) select/accept, (4)
previous, (5) next, (6) thumbs up, and (7) thumbs down. Depending on the state of the
device, as described in Fig. 6, buttons can become insensitive, i.e., unresponsive to user
interaction. The purpose of this design is to streamline the process of recording a new
sound clip and reduce the chance of pressing the wrong button. For example, when
listening to a Topic sound clip, the thumbs up/down buttons are insensitive, so the user
cannot press them. In a physical version of the AIR+ device, the buttons should have a
backlight to indicate whether they are enabled or disabled. In addition, the AIR+ buttons
are designed to be multimodal, depending on the state of AIR+. For example, if the user
is listening to a Topic sound clip and presses the record button, AIR+ will start recording
a new Topic and guide the user through the recording process.
    The play/stop button plays or stops the playback of a sound clip, and the record button
starts recording one. The select/accept button is used to choose between listening to
Topics or Replies and to accept a newly recorded sound clip. The previous and next
buttons select the Topic or Reply that the user wishes to hear; depending on the state of
the device (indicated by the Topic or Reply LED), they navigate through Topics or
Replies. The thumbs up and thumbs down buttons update the rating of a specific Reply in
the database: if the user thinks a Reply is "good", she can give it a thumbs up, and vice
versa. A more detailed explanation of the purpose of these buttons is given in the System
Core and Utilities section.
    As described in Fig. 6, AIR+ begins in an IDLE state, indicated by both the Topic and
Reply LEDs being lit. The IDLE state is a dummy state to conserve power; no particular
action is carried out by AIR+ in this state. Any button press takes AIR+ out of the IDLE
state into the PLAY_TOPIC state and starts playing the first Topic sound clip. Pressing
the select/accept button then switches to the PLAY_REPLY state and starts playing the
first Reply sound clip. Playback of Reply sound clips differs in one respect: instead of
requiring the user to press next or previous to navigate the Reply list, the clips are played
continuously from one to the next, so the user does not need to press any button while
listening to Replies. This scheme is not used for Topics because a user usually chooses,
or tunes to, the channel she wants to hear. Pressing the record button in either the
PLAY_TOPIC or PLAY_REPLY state starts recording a new Topic or Reply,
respectively. In the RECORD_TOPIC or RECORD_REPLY state, the user can press the
record, play/stop, or select/accept button to stop recording and move to the PLAYBACK
state. In the PLAYBACK state, the newly recorded sound clip is played only once; the
user must press the play/stop button to replay it. If the user is not satisfied with the new
recording, she can press the record button to restart the recording process. If she is
satisfied, she can press the select/accept button and resume listening to either Topics or
Replies, depending on the state from which the recording was initiated. In addition, if the
device stays in the PLAY_TOPIC or PLAY_REPLY state without playing any sound
clip, it immediately returns to the IDLE state. Finally, recording a Topic or Reply is
limited to 30 minutes, to bound the size of the recorded sound clip.




Fig. 6. User Interface State Diagram


5.4 System Core and Utilities

The core system was implemented using the SQLite C library to create a database file
that maintains all the required metadata. In total, the database houses four tables, each
maintaining a different set of data. The decision to use a database came after the
realization that maintaining the metadata any other way would be more complex. The
system core also houses utility functions used by the sound, network, and graphical user
interface components, as well as the request and replacement algorithms.
5.4.1 Request Algorithm

The purpose of the request algorithm is to return to the networking protocol a list of files
(Replies and Topics) to download from the connected device. The algorithm is a very
simple deterministic machine that uses database queries to generate the desired list from
the rating information. The returned list is generated from a snapshot of the connected
device's database, downloaded during the network connection process.
    The first step in the algorithm is to attach the downloaded database as a table within
the current device's database. Once attached, a query comparing the attached table to the
current device's table is performed. The result is a temporary table containing all the files
(Replies and Topics) not in the current device's table, i.e., all the files the current device
would ultimately like to download. However, because of short connection times,
disrupted connections, and other such issues, downloading every file in the table is not
always possible. For this reason, the table is sorted in a specific way using a couple of
sort queries on the database. A flow chart of the sorting algorithm is shown in Fig. 7.
    The overall goal of the sorting structure is to obtain first the files the user will
probably enjoy most. To do this, a method of rating Replies and Topics was necessary.
The method developed to assist the algorithm involves three additional database tables
plus a scheme for rating individual Replies. The three database tables are populated with
the following data:

      Table A: Topics and corresponding rating
      Table B: Device IDs and corresponding correlation
      Table C: List of replies and their ratings from other devices not yet rated by current
         device

    The information contained in Tables A and B is generated from the individual Reply
ratings supplied by the device user. The individual Reply rating scheme is thumbs up or
down, yielding a maximum value of 1 for thumbs up, 0 if the Reply has not yet been
rated, -1 if thumbs down is pressed once, and -2 if thumbs down is pressed twice. The
value of -2 is used in the replacement algorithm discussed later in this section. Once the
user rates a clip by pressing the thumbs up or down button on the GUI, the algorithm's
utility functions update the corresponding fields in Tables A and B. The Topic rating,
stored in Table A, is updated every time a Reply within it is rated: it is incremented once
for a thumbs up and decremented once, and only once, for a thumbs down (no additional
decrement for the -2 value). The device correlation, stored in Table B, is also updated
every time a Reply is rated. This is done using the list of Replies and ratings stored in
Table C, by querying for the Reply and comparing the rating from the other device to the
current user's rating. If the two match, the correlation count is incremented; otherwise it
is decremented.
Fig. 7. Request Algorithm Flow Chart
    Using the known rating information and the temporary table containing all Topics and
Replies not on the current device, the Topics and Replies can be sorted. At this point, a
design decision was made to download all unseen Topics first, on both branches of the
algorithm, to make sorting with database queries easier. If the current device has a
positive correlation (correlation >= 0) with the connected device, then all Replies rated
thumbs up by the connected device's user, sorted by the Topics that user liked most in
descending order, are placed after the unseen Topics in the list. The next set of files are
the unrated Replies, in the same Topic ordering as the thumbs-up files. The final files in
the list are those rated thumbs down by the connected user, again in the same Topic
ordering.
    If instead the current user and the connected user have a negative correlation
(correlation < 0), the sorting is reversed, except for the unseen Topics: following the
unseen Topics come the files rated thumbs down by the connected user, sorted by the
Topics that user liked most in descending order; next come the unrated files; and last
come the files rated thumbs up, in the same Topic ordering. With this sorting structure,
the returned list has a better probability of requesting the files the user will like first, so
connection problems will ideally cause the user to lose only files she might not have
liked anyway.
    This algorithm is a very simple version of what could be a very complex
decision-making algorithm. For the purposes of the prototype, heavy algorithm
development was impractical due to tight time constraints; developing and evolving the
algorithm is discussed as possible future work.

5.4.2 Replacement Algorithm

The purpose of the replacement algorithm is to decide when to remove files and which
files to remove. The "when" portion is implemented using the disk size, current disk
usage, and new file size: if ((current disk usage + new file size) / disk size) >= 90%, the
algorithm is run; otherwise the file is simply downloaded and the process repeats. When
the algorithm runs, files are removed based on the user's dislike for the Replies. The
Replies are sorted by Topic rating in ascending order, with the highest-rated Topic at the
end. Within each Topic, the Replies are then sorted by their thumbs up/down ratings. As
mentioned in the request algorithm section, a Reply rating of -2 means the user wants the
Reply deleted. For this reason the Replies are sorted in the order -2, -1, 0, 1. The returned
list is then compiled by first taking all the -2 clips, maintaining Topic ordering, and
putting them at the front; the process is repeated for the -1, 0, and 1 rated Replies, so that
Replies rated 1 are at the end of the returned delete list. The algorithm then deletes only
as many files as necessary to add the new Reply while keeping the used space below
90%. The flow chart in Fig. 8 demonstrates the algorithm.
Fig. 8. Replacement Algorithm Flow Chart
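The threshold check and delete-list ordering can be sketched as below. The field names are hypothetical and this is only an illustration; the prototype's actual implementation differs in detail.

```python
# Sketch of the replacement check and delete-list ordering described
# above.  Field names are hypothetical; the prototype keeps this
# metadata in SQLite.

THRESHOLD = 0.90   # the algorithm keeps disk usage below 90%

def needs_replacement(used, new_file_size, disk_size):
    """True when adding the new file would push usage to >= 90%."""
    return (used + new_file_size) / disk_size >= THRESHOLD

def delete_order(replies):
    """Sort replies worst-first: the primary key is the reply rating
    (-2 = user asked for deletion, then -1, 0, 1); the secondary key
    is the topic rating in ascending order."""
    return sorted(replies, key=lambda r: (r["rating"], r["topic_rating"]))

def files_to_delete(replies, used, new_file_size, disk_size):
    """Delete only as many files as needed to stay below the threshold."""
    doomed = []
    for reply in delete_order(replies):
        if not needs_replacement(used, new_file_size, disk_size):
            break
        doomed.append(reply)
        used -= reply["size"]
    return doomed

replies = [
    {"name": "keep.ogg",   "rating": 1,  "topic_rating": 3, "size": 20},
    {"name": "delete.ogg", "rating": -2, "topic_rating": 0, "size": 10},
]
# 85 units used + a 10-unit new file on a 100-unit disk is 95% >= 90%,
# so the -2-rated clip is removed first, which frees enough space.
print([r["name"] for r in files_to_delete(replies, 85, 10, 100)])
# -> ['delete.ogg']
```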
6 Evaluation

In our evaluation of the AIR+ prototype, we focused on three areas of the design:
functionality, performance and usability. The following section discusses our plan and the
results of the evaluation.


6.1 Evaluation plan

Functionality. Recording, playing, and exchanging voice clips are the three major
functions of AIR+. As part of the proof of concept, we tested these functions using the
black-box testing technique. First, we recorded a voice clip with the AIR+ prototype;
the record function works properly if the recorded voice clip is stored on disk without
corruption. Next, we recorded six voice clips with three topics: three replies to the
first topic and one reply to the second topic. The playback function works properly if
the prototype plays the clips accordingly. Finally, we set up two prototypes, each with
voice clips on disk, and moved them closer together until they could detect each other's
presence. The clip exchange function works properly if both prototypes receive the new
clips from each other.


Performance. For performance evaluation, we considered the distance, time, and space
requirements for the prototype to work properly. We measured the maximum distance at
which two AIR+ devices can detect each other's presence. We also measured the time it
took to receive a clip from a neighboring device. Finally, we measured the disk space
required to store recorded voice clips.



Usability. To evaluate the usability of the AIR+ prototype, we conducted a user
study with eight volunteers. The participants were three female and five male computer
science students. Participants were asked to carry the AIR+ prototype with them for a
period of five days. During the five days, they recorded voice clips with the prototypes,
collected clips from other devices, and rated the clips as they were received.


   At the end of the five-day period, the participants were asked to fill out a survey.
The survey was structured to collect their experiences with the following aspects:
     - Recording voice clips, as topics and replies.
     - Navigating through voice clips.
     - The design of the user interface.
The survey was designed to find out whether the various features and the user interface
were intuitive for our participants.


6.2 Results


Functionality. All three functions of AIR+ passed the functionality test. However,
there is a stability issue with the GStreamer library, which was used to handle clip
recording and playback. We noticed that the prototype became unresponsive when the
sound component received a high volume of activity. This is related to the fact that the
GStreamer library is heavily threaded. For instance, if we press the "Next" button too
quickly, the GStreamer library does not have enough time to clean up before initializing
a new thread to handle the next request, leaving the rest of the program unresponsive.



Performance. The results of the performance test show that the distance, time, and space
requirements of the AIR+ prototype are reasonable. The results are summarized as follows:


                 The maximum distance allowed between two devices for them to be able
                  to detect each other was found to be approximately 30 feet.

                 The average transfer rate in the AIR+ prototype was found to be 250KB
                  per second. At that rate, a recording of 30 minutes in length, which
                  is approximately 3.5MB in size, transfers between two AIR+ devices in
                  roughly 14 seconds. The effective transfer rate was found to be lower
                  when the clip size was smaller, since each file transfer requires a
                  negotiation phase; the time spent in negotiation becomes insignificant
                  as the clip size increases.

   In our space performance test, we used recordings of relatively high quality. Fig.
9 graphs clip size against clip length. Clip size was found to increase linearly with
clip length; a clip that is five minutes long takes about 550KB of disk space.
Fig. 9. The effect on file size as length of recording varies
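The measured figures can be combined into quick back-of-the-envelope estimates. In the sketch below, the ~110 KB-per-minute storage density and 250 KB/s transfer rate come from the measurements above, while the 2 GB card size is purely an assumed example.

```python
# Quick capacity and transfer estimates from the measured figures:
# ~550 KB per 5-minute clip (linear, so ~110 KB per minute) and an
# average transfer rate of 250 KB/s.  The 2 GB card below is an
# assumed example size, not part of the evaluation.

KB_PER_MINUTE = 550 / 5          # ~110 KB of storage per recorded minute
TRANSFER_RATE_KBPS = 250         # measured average transfer rate

def clip_size_kb(minutes):
    return minutes * KB_PER_MINUTE

def transfer_seconds(minutes):
    return clip_size_kb(minutes) / TRANSFER_RATE_KBPS

def clips_per_card(card_mb, clip_minutes=5, usable_fraction=0.90):
    # The replacement algorithm keeps usage below 90% of the disk.
    return int(card_mb * 1024 * usable_fraction / clip_size_kb(clip_minutes))

print(clip_size_kb(30))          # ~3300 KB, close to the 3.5 MB above
print(transfer_seconds(30))      # ~13 seconds for a 30-minute clip
print(clips_per_card(2048))      # 5-minute clips on an assumed 2 GB card
```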



Usability. The results of our user study show that the participants were generally
positive about our prototype. All eight participants found the prototype to be a
reasonably usable device, and the distributed nature of the prototype was found to be its
most interesting aspect. A summary of the results is as follows:



Table 1. Summary of usability study survey results

                                                          Participant responses
         Category of questions in survey                    Yes            No
         Learning to use the device is easy                  5              3
         Recording clips is easy                             8              0
         Playing clips is easy                               8              0
         Navigating through clips is easy                    4              4

         In the current prototype, the playlist does not resume from the topic that was
played last, which participants found made navigating through clips difficult.
Participants also found the topic/reply organization unintuitive: several participants
recorded both the topic description and its content as a single topic. Participants
further suggested that they would like to be able to delete recordings manually and to
receive feedback from the device while it is receiving voice clips.

7 Societal Implications

Despite the enormous resources spent on promoting advancement through information
and technology in developing regions, sub-Saharan African women have seen their living
conditions and status decline over the past decades. The unequal development of
women stems from their lack of access to information and education, which hinders
them from being involved in their communities. While men in sub-Saharan regions are
able to benefit from technology-mediated opportunities for development, women are often
excluded from these benefits because they are the designated caretakers. For
advancement to be sustainable, development initiatives need to ensure that women can
benefit from these opportunities to the same degree as men.
   With this idea in mind, the primary goal of AIR+ is to empower women in
developing regions by promoting gender equality. Previous work on the original AIR
project partially reversed the typical flow of information in mass communication: instead
of only pushing information to their listeners, community radio stations are now able to
pull opinions and comments from them. By allowing women to express their opinions, the
system makes it more likely that they will be involved in the development of their
communities.
   While the test study conducted by the AIR project will likely demonstrate success in
Kenya, its reliance on a centralized system is a possible hurdle to sustainability. To
address this problem, AIR+ takes a distributed approach: we eliminate the need to rely
on the participation of a radio station by designing a system that allows users to record,
play, and collect voice clips directly from other users. AIR+ also automates the selection
process, which has the potential to reduce bias in mass communication. These improvements
allow AIR+ to promote gender equality and thereby improve women's living
conditions and status in their communities.


8 Conclusion

Future work should focus on decreasing the cost of the device. The current prototype
was built on a Nokia N800 Linux internet tablet for ease of development and proof of
concept; however, the N800 costs nearly twice as much as the original AIR
system. The original idea for the AIR+ project was to port it to a smaller embedded
device, such as the Intel iMote2™, running the Linux kernel. A smaller embedded
device would decrease cost because the purchase would not include unnecessary features
such as the N800's large touch-screen LCD, and its cost would be similar to that of the
original AIR project. This port was not completed due to time constraints on finishing
the project.
Fig. 10. Concept Device Drawing.

   An idea for the concept device would be a small box, approximately 10 cm x 5 cm x 5
cm. The user interface would consist of six buttons, three LEDs, a speaker and a
microphone. A concept drawing is shown in Fig. 10 for reference. A user of this device
would be able to press a button to record a sound clip, listen to sound clips present on the
device and rate these clips during this process. The device would automatically transfer
sound clips between devices when they were within range of each other and remove old
clips.
   Another area for future improvement is more complex request and replacement
algorithms. Currently, the algorithms are fairly basic and do not base their decisions on
many parameters. One reason for this is that the prototype stores only the minimal
metadata needed for a basic decision method. With more metadata and more parameters,
the request algorithm could download only files likely to be enjoyed by the
user, and, with the same additions, the replacement algorithm could manage the deletion
of files in a much more sophisticated way.
   Improvements could also be made to the network connection method. Currently, when
multiple devices are in the same vicinity, a connection is attempted with all
surrounding devices, and files are transferred from whichever connections succeed, as
specified by the request algorithm. Given more metadata about specific devices, a
utility function could query the database and return a list of devices in the order the
network code should connect to them, helping ensure that files are downloaded from
well-liked devices first.
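Such a utility function might, for instance, rank neighbors by the average rating the user has given clips previously received from each device. The sketch below is purely hypothetical; no such metadata exists in the current prototype.

```python
# Hypothetical sketch of the device-ordering utility proposed above.
# Each neighboring device is scored by the average rating (-1, 0, 1)
# the user gave clips previously received from it.  None of this
# metadata is kept by the current prototype.

def connection_order(devices, rating_history):
    """devices: device IDs currently in range; rating_history maps a
    device ID to the list of ratings of clips received from it.
    Devices with no history get a neutral score of 0."""
    def score(dev):
        ratings = rating_history.get(dev, [])
        return sum(ratings) / len(ratings) if ratings else 0.0
    return sorted(devices, key=score, reverse=True)

history = {"dev-a": [1, 1, 0], "dev-b": [-1, 0]}
print(connection_order(["dev-b", "dev-c", "dev-a"], history))
# -> ['dev-a', 'dev-c', 'dev-b']
```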
   Another future improvement, beyond basic device functionality, would be voice
recognition. Though the complexity of making voice recognition work would make the task
time-consuming, it would be interesting to see how well it performs. Voice recognition
could be used to navigate the file structure as well as to enter the commands the user
wishes to perform. This would eliminate the need for the GUI and the buttons, making the
device much simpler to use.
    Based on the results of the evaluation period, the user interface needs to be
improved for a better user experience. One suggested improvement is to make the
interface more intuitive; making such an improvement would require a usability study
using paper and software prototyping. Additional markings can also be added to the
current AIR+ user interface, shown as a frame in Fig. 11 below. The 'Playing' text is
added to notify the user of the device's status. This minimal change gives the user more
information and lets them know what type of sound clip is currently playing. However,
the effectiveness of such changes still needs to be reviewed and tested.




Fig. 11. Possible look of GUI with the suggested future work incorporated

   Aside from areas of improvement for future development, some important lessons were
learned during the development of the current prototype device. The first concerns
designing an intuitive user interface: future interface designs should also account for
different cultural and ethnic backgrounds, so as to be user-friendly and avoid giving
offense.
   Second, a robust, working prototype requires significantly more testing than
expected. In the three-month time frame, most of the time was spent developing and
testing each individual module's code for expected results. Once all the code was
integrated into a complete system, only small-scale tests were performed, as the
development period was almost over. During the user study performed in the last week,
many bugs were discovered, and more would likely have been caught with further
testing. More testing should therefore have been performed.
   Third, it is very important to create a schedule or timeline of milestones that is
realistic but keeps the project moving at a successful pace. Developing such a schedule
takes a well-thought-out project plan and input from each member of the group. The
AIR+ schedule was too optimistic in some places and not detailed enough in others. Two
important things the group could have done to better meet the schedule are increased
communication and accountability. In the last few weeks of development, group
communication broke down and some of the team had no idea of the project's current
state; communication is imperative in the successful development of a prototype device
such as this. Accountability would have helped keep progress moving at a much steadier
pace, since stricter deadlines would have been set and consequences for missing the
scheduled deadlines would have existed.
   Overall, the project was an involved learning experience the group is glad to have
participated in.
References

 1. Annan, K. In Africa, AIDS has a Woman’s Face. New York Times International Herald Tribune. (2002, December 29).
 2. Ford, M. and Botha, A. "MobilED - An Accessible Mobile Learning Platform for Africa." IST Africa 2007 conference,
    May 2007.
 3. Hemphill, T., Thrift, P., "Surfing the Web by Voice." ACM Multimedia 95, November 1995.
 4. Human Development Report 2005, UNDP, New York, NY, 2005.
 5. James, J., “Technological blending in the age of the Internet: A developing country perspective”, Telecommunications
    Policy Volume 29, Issue 4, May 2005, Pages 285-296.
 6. James, J., "The global digital divide in the Internet: developed countries constructs and Third World realities." Journal of
    Information Science, 2005, Pages 114-123.
 7. Jensen, M. The Wireless Toolbox: A Guide to Using Low-Cost Radio Communication Systems for Telecommunication in
    Developing Countries – An African Perspective, Ottawa: IDRC, 1999.
 8. Momo, R., “Expanding Women’s Access to ICTs in Africa,” in Gender and the Information Revolution in Africa, E.
    Rathgeber and E. Adera, Eds. Ottawa: IDRC, 2000.
 9. Pringle, Ian, David, M.J.R. “Rural Community ICT Applications: The Kothmale Model” The Electronic Journal on
    Information Systems in Developing Countries. 8, 4, 1-14. 2002.
 10. Sibanda, J. “Improving Access to Rural Radio by `Hard-to-Reach' Women Audiences,” in Proc. First International
    Workshop on Farm Radio Broadcasting, Rome, 2001.
 11. Sterling, S. Revi, O’Brien, John, Bennett, John K. “AIR: Advancement through Interactive Radio,” Department of
    Computer Science, University of Colorado, September 2008.
 12. Thies, W., Kotkar, P., Amarasinghe, S., "An Audio Wiki for Publishing User-Generated Content in the Developing
    World." HCI for Community and International Development, 2008.
 13. Warthman, F. Delay Tolerant Networks, A Tutorial, Available: http://www.ipnsig.org/reports/DTN_Tutorial11.pdf 2003.
 14. Zabala, E., “Gender equality in education: world fails to meet MDG 3,” Choike, Available:
    http://www.choike.org/nuevo_eng/informes/3207.html 2005.
Appendix: User Instructions


Using the Program


 User interface




         1.   Recording status LED                                        6. Previous clip button
         2.   Microphone                                                  7. Record button
         3.   Mode status LEDs                                            8. Play/stop button
         4.   Thumbs up button                                            9. Mode selection button
         5.   Thumbs down button                                          10. Next clip button

 Selecting modes
   The device’s current mode is indicated by the mode status LEDs on the right hand side of the screen (3). These
indicate whether the device is currently navigating the Topic list or a Reply list, or idle (both LEDs off). Pressing the
mode selection button (9) will change between Topic and Reply list modes. If the device is currently idle, the mode
selection button will put the device in Topic mode.

 Playing available clips


 Topics
  While in topic mode, you can navigate between topic descriptions, using the previous (6) and next (10) clip
buttons. Pressing the mode selection button (9) will set the device in Reply mode, and play the replies to the current
topic description. Pressing the play/stop button (8) will cause the current topic description to be replayed. After five
minutes of inactivity in Topic mode, the device will go into its idle state. If the device contains no topic descriptions
the Topic mode indicator LED will be blinking.

 Replies

   While in reply mode, reply clips will automatically play in chronological order. You can skip forward or
backwards in the list using the previous (6) and next (10) clip buttons. Pressing the play/stop button will stop or
resume automatic playback of reply clips. Pressing the mode selection button (9) will set the device back into Topic
mode, and play the current topic description. If the current topic contains no replies the Reply mode indicator LED
will be blinking.

 Rating clips
   At any time, pressing the thumbs up (4) or thumbs down (5) buttons will rate the currently playing reply clip
accordingly. Ratings determine clip transfer and deletion priorities. Topic descriptions are not directly rated.

 Recording new clips
  The microphone is located on the top edge of the device, just above the microphone indicator on the screen.

 Topics

   While in Topic mode (or the idle state), pressing the record button (7) immediately starts recording a new topic
description. Pressing the Mode selection button (9) will end recording. Topic descriptions should be short and
descriptive, in order to facilitate easy navigation of material. Recordings may not be canceled.

 Replies

   While in Reply mode, pressing the record button (7) immediately starts recording a new reply to the current topic.
Pressing the Mode selection button (9) will end recording. Replies are limited to 30 minutes in length. Recordings
may not be canceled.
Appendix: Development Environment Setup Instructions

Setup
=====

In general, setup for this project is required in two major areas: the
development server which has the cross compilation environment (Scratchbox),
and each Nokia N800. There are no specific hardware requirements for the
server, except that it needs network access to the internet. Each Nokia N800
needs to be setup so that they have the correct libraries and tools
installed.

Setting up Scratchbox
---------------------

1. Install Linux on the server.

It is recommended to use a Debian distribution or derivative for this
project. The Debian APT package support will assist in installing scratchbox
and other essential tools.

Add the following line to /etc/apt/sources.list:
deb http://scratchbox.org/debian/ apophis main

2. Install any desired tools, such as:
- vim/emacs/etc for editing
- gnome for desktop environment
- svn or cvs for version control

Scratchbox can be installed using a script or manually. It is recommended to
use the script for this installation.

3. Download the script at
http://repository.maemo.org/stable/chinook/maemo-scratchbox-install_4.0.sh

4. Execute the following commands:
$ sudo chmod a+x ./maemo-scratchbox-install_4.0.sh
$ sudo ./maemo-scratchbox-install_4.0.sh

Use the '-u <USER>' option to add a specific user to scratchbox. You will
need to restart your terminal to finish the installation.

If the script fails to run to completion, the alternative is a manual
installation:

4b. The following packages need to be installed:
scratchbox-core 1.0.8
scratchbox-libs 1.0.8
scratchbox-devkit-cputransp 1.0.3
scratchbox-devkit-debian 1.0.9
scratchbox-devkit-doctools 1.0.7
scratchbox-devkit-maemo3 1.0.1
scratchbox-devkit-perl 1.0.4
scratchbox-toolchain-cs2005q3.2-glibc2.5-arm 1.0.7.2
scratchbox-toolchain-cs2005q3.2-glibc2.5-i386 1.0.7

Using Debian, the 'apt-get install <package>' command will perform the
installation.

Setting up Maemo
----------------

1. Download the script located at
http://repository.maemo.org/stable/4.0/maemo-sdk-install_4.0.sh

2. Execute the script by running as each user in Scratchbox (NOT as root)
$ sh maemo-sdk-install_4.0.sh

3. Scratchbox should be installed and set up at this stage.

4. To access Scratchbox, type:
$ /scratchbox/login

It should enter a new shell as a development environment for CHINOOK-ARMEL.
To change the target development environment, run 'sb-menu'.

Setting up Nokia N800
---------------------

1. Update the firmware of Nokia N800 to OS2008 via
   https://www.nokiausa.com/A4410958 (An installer should be downloaded and
   run on a PC; the software update will be done by using a USB cable.)

2. Run the installer and follow the step-by-step instruction.

3. Once the update finishes, access the application catalog via
ApplicationManagerMenu -> Tools -> Application Catalog

4. Add the following repositories to the list:

*   Catalog name: Maemo hackers
*   Web address: http://maemo-hackers.org/apt
*   Distribution: bora
*   Components: main

*   Catalog name: Maemo extras
*   Web address: http://repository.maemo.org/extras
*   Distribution: bora
*   Components: free

*   Catalog name: Maemo
*   Web address: http://repository.maemo.org/
*   Distribution: bora
*   Components: free non-free

5. Browse and install openssh (server and client). REMEMBER the root password
that is set up during the installation of openssh.

6. Execute the following commands to install wget and obtain gainroot

$ ssh root@localhost

# apt-get install wget
# wget http://chalbi.cs.washington.edu/gainroot

# mv gainroot /usr/sbin/
# chmod 755 /usr/sbin/gainroot
# exit

$ sudo gainroot    # to verify that root access works

Setting up Sqlite3 (Important)
------------------------------

Sqlite3 is used as the database for the AIR+ project. Both the server and the
Nokia N800 need to be set up with the necessary sqlite3 library.

For Scratchbox

1. Access the root shell of the host server

2. Type: 'apt-get install sqlite3 libsqlite3-dev libsqlite3-0'
to install sqlite3 and its development library

3. To run sqlite3, try to execute 'sqlite3' from the shell

For Nokia N800

1. Access the root shell, either via 'sudo gainroot' or
'ssh root@localhost'

2. Type: 'apt-get install sqlite3' to install the sqlite3 binaries

3. Test the installation by running 'sqlite3' from the shell

Setting up GStreamer (Important)
--------------------------------

GStreamer is used as the audio API and must be installed on both the
server and the Nokia N800.

For Scratchbox:
1. Access the root shell of the host server
2. The following libraries are needed for developing under gstreamer:
gstreamer0.10-esd
gstreamer0.10-plugins-good
libgstreamer0.10-dev
gstreamer0.10-alsa
gstreamer0.10-plugins-base
gstreamer0.10-esd-x
libgstreamer0.10-plugins-base0.10-0
gstreamer0.10-0
gstreamer0.10-plugins-ugly-multiverse
gstreamer0.10-plugins-ugly
gstreamer-tools
Execute: 'apt-get install <package ......>' to install them.

GStreamer and OGG support for Nokia N800 (Important)
----------------------------------------------------

1. In the web browser, go to
http://maemo.org/downloads/product/OS2008/ogg/

2. Download and install Ogg Support v0.7 to enable the ogg support on the
Nokia N800

3. Access the root shell.

4. Type: 'apt-get install gstreamer-tools' to get the tools needed to play a
.ogg file (gst-launch)

Compiling
---------

1. Download the sources for AIR+ binary.

2. Setup a repository on the new server to host AIR+ source code.

3. Go to AIR+ root folder

4. Execute 'make' to create the 'air' binary

5. Transfer the 'air' binary to the target device, a Nokia N800, and run the
binary from the target device.

Development Tips
---------------

It is often necessary to download the cross-compiled binaries from the
server to Nokia N800 for testing. A useful tool to find out the IP
address of the N800 is HomeIP, which displays the current IP
address on the desktop, making it easy to find:
http://maemo.org/downloads/product/OS2008/homeip/

Useful sites
------------
http://www.scratchbox.org/documentation/user/scratchbox-1.0/html/installdoc.html
http://repository.maemo.org/stable/4.0/INSTALL.txt
http://maemo.org/development/documentation/tutorials/maemo_4-0_tutorial.html
https://www.nokiausa.com/A4410958
http://www.cs.washington.edu/education/courses/cse461/08wi/projects/n800intro.html

				