

Component-Based HCI UI Framework and High-Level HCI Library for Java

             Rawlings, Ori

             April 1, 2009

   Support for Java within the NUI Community has fallen by the wayside. Providing a simple, familiar human-computer interaction (HCI) library for Java developers can not only expand the NUI Community, but also introduce a wealth of existing Java applications to multi-touch and other HCI environments.

   I propose the development of a component-based UI framework for Java. This framework will allow Java developers to create interfaces for their applications that support a variety of advanced forms of human-computer interaction (HCI). Developers instantiate UI components and attach component listeners to hook the framework into their application code. The components of the framework will already know how to respond to a variety of HCI actions (e.g. scaling and rotating based on gestures from a multi-touch surface), but the framework will simultaneously expose a component-based system for defining new gestures and HCI actions.
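The intended developer experience might look something like the following minimal sketch. All class and method names here (Component, GestureListener, addGestureListener) are placeholders for illustration, not a finalized API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event and listener types the framework could expose.
interface GestureEvent { String name(); }

interface GestureListener { void gesturePerformed(GestureEvent e); }

// A minimal UI component that dispatches gesture events to its listeners.
class Component {
    private final List<GestureListener> listeners = new ArrayList<GestureListener>();
    public void addGestureListener(GestureListener l) { listeners.add(l); }
    public void fire(GestureEvent e) {
        for (GestureListener l : listeners) l.gesturePerformed(e);
    }
}

public class HookupSketch {
    public static void main(String[] args) {
        Component imageFrame = new Component();
        final StringBuilder log = new StringBuilder();
        // Application code hooks in through a listener, without touching
        // any gesture-processing internals.
        imageFrame.addGestureListener(new GestureListener() {
            public void gesturePerformed(GestureEvent e) {
                log.append("saw " + e.name());
            }
        });
        imageFrame.fire(new GestureEvent() {
            public String name() { return "stretch"; }
        });
        System.out.println(log); // prints "saw stretch"
    }
}
```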

   Due to the scope of this framework, I intend to implement support for multi-touch first. Later work can introduce support for other HCI mediums (e.g. tangibles, voice recognition, eye tracking).
1      About the Author
  My name is Ori Rawlings. I'm 21 years old and currently a third-year undergraduate at Illinois Institute of Technology, earning a degree in Computer Science.

  I live in Chicago, IL (UTC-6, or UTC-5 in summer due to daylight saving time).

   I'm most experienced with Java development, although I have worked with OCaml, Prolog, JavaScript, and Ruby in the past. At school, I've participated in research concerning the Semantic Web, and am currently working on a project to build a semantically-aware news search engine. I've built server and client code in Java before, and am currently developing an RSS/Atom listener to provide our news search engine with articles. I also have experience with web development in Java, and am familiar with HTML and CSS. As you may be able to tell, I am also a fan of LaTeX.

   I'm the Vice President of the local ACM chapter at IIT and am a big proponent of free and open software. I work almost exclusively with open source libraries, tools, and software. I prefer UNIX-style environments. I have yet to contribute to open source software, but have published my own code openly in the past.

   When writing code, I try to follow a Test Driven Development strategy. I
also have interests in Agile Development.

  You can email me at You can also find the latest
version of this project proposal online at

2      Motivation
  Since first seeing Jeff Han's multi-touch demo, I have been captivated by the possibilities of advanced human-computer interaction (HCI). Upon seeing NUI Group as a participating organization in the 2009 Google Summer of Code, one can imagine my excitement. Being primarily a Java developer, though, one can also imagine my disappointment upon discovering the lack of Java activity within the NUI Community.

   I've longed to experiment with HCI, but would prefer to utilize my existing strengths in Java. I realized that a proper, capable Java library for multi-touch would grant me the ability to experiment quickly and easily. Yet I also recognized that even if such a library existed, it would remain difficult to rapidly prototype new multi-touch applications or to enable existing application code with multi-touch support. It dawned on me that a very simple, easy-to-use HCI UI framework could give Java developers a way to quickly build HCI-enabled applications. Component-based UI frameworks are common in Java. Considering the familiarity Java developers have with such frameworks, why not implement one for HCI? HCI-enabled applications could then come from the Java community, without the overhead of learning the complexities of multi-touch, eye tracking, etc.

3    Technical Details
  While I aim to ultimately implement a framework that supports many mediums of HCI, I will focus solely on multi-touch during the course of the summer.

   The framework will not implement any vision-tracking itself. Instead it will
utilize the TUIO protocol to receive input from multi-touch surfaces. This makes
the framework independent of tracking software, and capable of processing input
from a machine located on the network. The TUIO Client library for Java will
be utilized for the raw processing of the network data.
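A sketch of the layer just above the TUIO client follows. The real TUIO Java client delivers cursor updates through TuioListener callbacks; CursorInput below is a simplified stand-in so the sketch stays self-contained, and the only part grounded in the protocol is the coordinate handling (TUIO reports normalized coordinates in [0, 1] that must be mapped to the screen):

```java
// Simplified stand-in for a TUIO cursor update; the real client would
// deliver these through TuioListener callbacks.
class CursorInput {
    final int id;
    final float x, y; // normalized coordinates in [0, 1], as TUIO reports them
    CursorInput(int id, float x, float y) { this.id = id; this.x = x; this.y = y; }
}

// Converts normalized TUIO coordinates to pixels and hit-tests them
// against a component's touchable area (a hypothetical framework duty).
class TouchRouter {
    private final int screenW, screenH;
    TouchRouter(int screenW, int screenH) { this.screenW = screenW; this.screenH = screenH; }

    boolean hits(CursorInput c, int left, int top, int width, int height) {
        float px = c.x * screenW;
        float py = c.y * screenH;
        return px >= left && px < left + width && py >= top && py < top + height;
    }
}

public class TuioSketch {
    public static void main(String[] args) {
        TouchRouter router = new TouchRouter(1024, 768);
        CursorInput cursor = new CursorInput(0, 0.5f, 0.5f); // screen center
        // A 200x200 component at (400, 300) contains the screen center.
        System.out.println(router.hits(cursor, 400, 300, 200, 200)); // prints "true"
    }
}
```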

  On top of the TUIO client, a gesture recognition and processing library will be built. This library will analyze TUIO input and maintain a set of gesture components, each associated with a UI component. As gestures occur, the framework will fire events, triggering reaction code in the UI components. Gestures will be described using a component-based architecture: complex gestures can be described as a sequence and combination of simpler gestures. The developer can use the gesture component framework to combine primitive gestures into new gestures. Developers shouldn't need to interact with the actual gesture-processing code, though it will remain exposed for advanced users who want to utilize its functionality.
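The component-based gesture idea can be sketched as a composite pattern, where a complex gesture matches a sequence in time of simpler ones. All names here (Gesture, Primitive, SequenceGesture) are illustrative, not a committed design:

```java
import java.util.Arrays;
import java.util.List;

// A gesture reports how many input events it consumes from position
// `start`, or -1 if it does not match there.
interface Gesture {
    int match(List<String> events, int start);
}

// A primitive gesture matches a single named input event.
class Primitive implements Gesture {
    private final String event;
    Primitive(String event) { this.event = event; }
    public int match(List<String> events, int start) {
        return start < events.size() && events.get(start).equals(event) ? 1 : -1;
    }
}

// A composite gesture matches its children one after another in time.
class SequenceGesture implements Gesture {
    private final List<Gesture> parts;
    SequenceGesture(Gesture... parts) { this.parts = Arrays.asList(parts); }
    public int match(List<String> events, int start) {
        int pos = start;
        for (Gesture g : parts) {
            int used = g.match(events, pos);
            if (used < 0) return -1;
            pos += used;
        }
        return pos - start;
    }
}

public class GestureSketch {
    public static void main(String[] args) {
        // "double tap" defined purely by combining two primitive taps.
        Gesture doubleTap = new SequenceGesture(new Primitive("tap"), new Primitive("tap"));
        List<String> stream = Arrays.asList("tap", "tap", "drag");
        System.out.println(doubleTap.match(stream, 0)); // prints "2"
    }
}
```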

   In the future, the gesture recognition and processing library will need to deal with more than just TUIO input. Therefore, care must be taken to modularize the interface between the library and the data provided by various HCI devices.

   A component-based HCI UI framework will be built on top of the gesture library/framework. The UI components will provide the gesture library with a context (like the component's touchable area). The gesture library will fire events at the component when gestures supported by the component occur within its context.

  The UI framework will allow the developer to associate gesture listeners with
individual (or groups of) UI components. In this manner, the developer doesn’t
need to understand how the gesture processing occurs, or how the UI compo-
nents communicate with the gesture processor. The developer merely needs to
script behavior to be triggered when the gesture occurs on that UI component.

   The provided UI components will, by default, be associated with certain gesture listeners. For instance, a simple image frame component will listen for stretching and twisting gestures by default. The image frame will be preprogrammed to scale when a stretching gesture is applied and to rotate when a twisting gesture is applied. The developer does not need to implement these reactions. This way, even a beginner user of the framework will be capable of building UIs with rich multi-touch support before learning the intricacies of defining and associating gestures.
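The default image-frame reactions could derive scale and rotation from two touch points roughly as follows. This is assumed math for illustration, not the framework's final implementation:

```java
// Scale comes from the change in distance between two fingers; rotation
// comes from the change in the angle of the line between them.
public class PinchSketch {
    // Scale factor implied by two touch points moving apart or together.
    static double scaleFactor(double ax0, double ay0, double bx0, double by0,
                              double ax1, double ay1, double bx1, double by1) {
        double d0 = Math.hypot(bx0 - ax0, by0 - ay0);
        double d1 = Math.hypot(bx1 - ax1, by1 - ay1);
        return d1 / d0;
    }

    // Rotation (radians) implied by the line between the touches turning.
    static double rotation(double ax0, double ay0, double bx0, double by0,
                           double ax1, double ay1, double bx1, double by1) {
        double a0 = Math.atan2(by0 - ay0, bx0 - ax0);
        double a1 = Math.atan2(by1 - ay1, bx1 - ax1);
        return a1 - a0;
    }

    public static void main(String[] args) {
        // Fingers start 100px apart and end 200px apart: the image doubles.
        System.out.println(scaleFactor(0, 0, 100, 0, 0, 0, 200, 0)); // prints "2.0"
        // The line between the fingers turns from horizontal to vertical: ~pi/2.
        System.out.println(rotation(0, 0, 100, 0, 0, 0, 0, 100));
    }
}
```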

  Rendering of the UI will be achieved through the use of Processing. Processing is built in Java and provides native interaction with Java code. Additionally, Processing supports both software- and hardware-based rendering, allowing optimization based on a user's machine. Processing also supports both 2D and 3D rendering, allowing the framework to support UIs of both flavors.

  I’ve not finalized a list of UI elements that will be included by default, but
some possible examples include:
   • Text labels

   • Text fields
   • Buttons
   • Container frames

   • Scroll bars
   • Image frames
   • Web browser frame
   • etc.

4    Design Philosophies
  With component-based UI frameworks, it is easy to get carried away with
implementing all sorts of complex widgets. I want to avoid this. Instead, I want
to define a minimal set of components that can be reused and combined to build
any conceivable UI.

  The developer should have access to no more than the minimal set of tools/components
to get the job done.

   I want the UI components to be like LEGO blocks (I hope I'm not violating a trademark by referencing these). Each block is minimal, essentially incapable of being broken down into anything simpler. But LEGO blocks can be combined in an infinite set of combinations, each one leading to a complex and advanced structure. I want the developer to be able to construct their interface by combining these minimal components into complex structures, much like a LEGO artist takes a bucket of bricks and builds intricate sculptures and mechanical contraptions. Having any kind of component within the interface that can instead be expressed as a combination of simpler elements is unacceptable. Such components add unnecessary bloat and complexity to the framework, and ultimately rob the developer of creative freedom.

   Even though the framework should contain only minimal components, it may still make sense to provide more common and complex components to the developer. For instance, a multi-touch keyboard is a complex component that can be defined as a combination of lesser components, but such a component is bound to be of common use to many developers. As a convenience, it would make sense to include it in the framework. However, such a component should be implemented as a combination of the minimal components, never from scratch. This both encourages and exemplifies the use of the framework as intended.
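This composition rule can be sketched as follows: a convenience widget (a tiny keypad standing in for the keyboard) assembled entirely from minimal components, never implemented from scratch. All names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal component that can contain other components.
class Widget {
    final String name;
    private final List<Widget> children = new ArrayList<Widget>();
    Widget(String name) { this.name = name; }
    void add(Widget child) { children.add(child); }
    int count() { // total widgets in this subtree, itself included
        int n = 1;
        for (Widget w : children) n += w.count();
        return n;
    }
}

public class CompositeSketch {
    // The keypad is nothing but a container frame holding button components.
    static Widget keypad(String... keys) {
        Widget frame = new Widget("frame");
        for (String k : keys) frame.add(new Widget("button:" + k));
        return frame;
    }

    public static void main(String[] args) {
        Widget pad = keypad("1", "2", "3");
        System.out.println(pad.count()); // prints "4": the frame plus 3 buttons
    }
}
```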

   There may be times when a developer requires behavior more specialized than
any of the existing components provide, and unattainable through combination
of the existing components. The framework should support extension of existing
components in order to achieve such behavior. Because the framework will
be based on an object-oriented paradigm, such extension will be an inherent consequence of the implementation.

  As the developer is capable of building user interfaces by combining components, the developer should also be capable of defining HCI actions in the same way. The API will also support a component-based approach to defining gestures. Gestures can be described as a sequence in time of simpler gestures.
  Like the UI framework, the gesture component framework should adhere to
the same design principles. Gesture components should be minimal; more com-
plex gestures are supported by combination of other gestures. Common and
complex gestures can be provided by the framework, but only if they are imple-
mented in this way. Gestures should also be open to extension by the developer
when more specialized behavior is required.

  The learning curve of the framework should adhere to certain principles. The
functionality of the API should be exposed in layers. Developers should first
be able to simply compose UI components that, by default, react to common
multi-touch gestures. Upon learning how to compose UI components, the de-
veloper should then be exposed to linking gestures with UI components, thus
expanding their range of interaction. The developer should then be introduced
to composing new gestures and linking them to UI components. From there,
the developer can learn how to extend the existing UI components to achieve
functionality not possible by simple combination of minimal UI elements. The
developer can then move on to extending gesture components. Beyond that,
the developer can learn how to interact with the gesture processor directly in
order to achieve even more specialized functionality.

5    Sustainability of the Framework
   The key to the sustainability of the framework is threefold: it needs to be extensible, standards-compliant, and performant. If any of these are not adequately satisfied, the sustainability of the framework is at risk.
  I intend to achieve extensibility both through the design of the component
API, as well as through providing the capability of defining new gestures and
UI components.

   I intend to adhere to as many standards as possible. The key to a good framework like this is interoperability with both existing and future software. That is why, on the hardware/tracker end, it will utilize TUIO for input. TUIO is already a well-adopted protocol and I have no doubt that it will prove the right choice. It might be wise to consider other ways of delivering multi-touch input to the system, but for now I'm going to focus mainly on TUIO.

  Lastly, the system needs to perform well. The project will be very hard to sustain if the framework runs slowly. With this in mind, I will make considerations wherever possible to optimize performance, including multi-threading where appropriate.
6    Related Work
  TherioN, on the NUI Group forums, states that he has been working on a multi-touch Java library of similar description for several months. His work is not currently open source, but he plans to open the code up soon. If he indeed opens the source and his work is compatible with my design philosophy goals, collaboration may prove very beneficial.

  Sparsh UI has a similar architecture to the system I am proposing, though there are several differences. Sparsh UI implements a vision tracker, and lacks support for TUIO. This is a big disadvantage when it comes to interoperability with other existing software systems. Sparsh UI introduces the concept of a gesture server, which processes point data sent over the network and sends the resulting gesture data over the network to the application. This is a good idea, as it introduces the possibility of a distributed multi-touch system. Similar concepts can be applied to my proposed system in the future by simply abstracting the gesture recognition and processing library into a network service.
   I've been made aware of Lux, a NUI multi-language framework. It is currently implemented (well, mostly implemented) in the AS3 Touch API. It is not released yet, and I haven't found documentation specific to the Lux specification. It would most likely be beneficial to support this specification, since developing a framework that won't integrate nicely with other software is a risk to the sustainability of the project. Collaboration is possible here.

7    Future Work
  There are several places where work will continue after the end of the summer.
   First of all, the framework must continue to be expanded to include additional
forms of HCI, beyond just multi-touch. Support for tangibles seems like a logical
next choice due to its relatedness to surface computing. Later on, support for eye
tracking and voice-recognition could be introduced to allow further enhancement
of Java HCI applications.

   Second, the framework could be expanded to include 3D UI component primitives, and support for gestures on 3D objects (e.g. tilts). Assuming that the framework remains close to my design goals, this kind of extension is a natural use of the framework, and shouldn't prove difficult.

  Third, attempts can be made, through collaboration with other projects (like
Lux), to define a standard UI framework interface for HCI. Modifications may
be required in order to comply with the defined standards.

   Fourth, an XML serialization of UI components could be defined. This would allow developers to describe an entire HCI UI without needing to write any code. The framework would be expanded to support both the reading and writing of such XML files. The framework would read in XML files, convert the description into the hierarchical composition of UI elements, then launch the interface and render everything to the screen, allowing the user to begin interacting. Writing to XML would allow Java code describing an HCI UI to be converted to XML for later reading. This would provide the capability for UI builders to be constructed that can save a UI designer's work to disk. The XML would describe not only UI elements, but also gestures. Gestures are described in the framework as a sequence of simpler gestures, which can also be represented in XML using a concept like the Gesture Definition Markup Language (GDML).
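A sketch of what reading such an XML description might look like, using the JDK's built-in DOM parser. The element and attribute names (ui, frame, imageFrame, gesture) are invented for illustration, since no format has been defined yet:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlUiSketch {
    public static void main(String[] args) throws Exception {
        // A hypothetical serialized UI: a container frame holding an image
        // frame (with associated gestures) and a button.
        String xml =
            "<ui>" +
            "  <frame id='root'>" +
            "    <imageFrame id='photo' gesture='stretch,twist'/>" +
            "    <button id='close'/>" +
            "  </frame>" +
            "</ui>";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // Walking the hierarchy: each element would become a UI component,
        // and gesture attributes would become gesture associations.
        Element photo = (Element) doc.getElementsByTagName("imageFrame").item(0);
        System.out.println(photo.getAttribute("gesture")); // prints "stretch,twist"
        int components = doc.getDocumentElement().getElementsByTagName("*").getLength();
        System.out.println(components); // prints "3": the frame plus its two children
    }
}
```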


