Final Document


                          Big Dog’s Kryptonite


James Crosetto (Bachelor of Science, Computer Science and Computer Engineering)

Jeremy Ellison (Bachelor of Science, Computer Science and Computer Engineering)

Seth Schwiethale (Bachelor of Science, Computer Science)




                          Faculty Mentor: Tosh Kakar



                                   05/21/09
Contents
Introduction .................................................................................................................................................. 4

Research Review ........................................................................................................................................... 5

Design............................................................................................................................................................ 8

   Connection between the IP Camera and the User’s PC (video feed) ..................................................... 10

   Connection between the IP Camera and the User’s PC (commands)..................................................... 11

   Hardware: Sending signals produced by software to the car ................................................................. 12

       Theory of hardware operation............................................................................................................ 12

       Hardware Design................................................................................................................................. 14

       Backtracking ........................................................................................................................................ 18

       GUI ...................................................................................................................................................... 19

Implementation .......................................................................................................................................... 23

   Computer to Camera .............................................................................................................................. 23

       Server .................................................................................................................................................. 23

       Client ................................................................................................................................................... 26

   Camera to Microprocessor ..................................................................................................................... 41

       Camera ................................................................................................................................................ 41

       Microprocessor ................................................................................................................................... 57

   Microprocessor to Car ............................................................................................................................ 61

Work Completed (still to be updated) ........................................................................................................ 63

Future Work (still to be updated) ............................................................................................................... 65

Appendix A – Programming the Dragon12 Board....................................................................................... 66

Appendix B – Porting Code from the Dragon12 to the Dragonfly12 .......................................................... 72

Bibliography ................................................................................................................................................ 77

Glossary ....................................................................................................................................................... 77
Figures
Figure 1: Pulse width modulation ............................................................................................................... 13

Figure 2: The complete setup on the RC car ............................................................................................... 17

Figure 3: The microprocessor component is added to the design ............................................................. 18

Figure 4: Initial design of GUI ...................................................................................................................... 20

Figure 5: GUI design for a dropped connection .......................................................................................... 21

Figure 6: Sequence diagram for using the RC car ....................................................................................... 22

Figure 7: Creating the TCP Socket ............................................................................................................... 24

Figure 8: sockaddr_in structure .................................................................................................................. 25

Figure 9: Binding a socket ........................................................................................................................... 25

Figure 10: Listen for a connection .............................................................................................................. 25

Figure 11: Accepting a connection.............................................................................................................. 25

Figure 12: Receiving commands ................................................................................................................. 26

Figure 13: Reading characters from a buffer. ............................................................................................. 26

Figure 14: Receiving an image .................................................................................................................... 28

Figure 15: Repainting image from MJPEG stream ...................................................................................... 28

Figure 16: Finding the JPEG image .............................................................................................................. 29

Figure 17: Opening a socket........................................................................................................................ 31

Figure 18: The GUI ...................................................................................................................................... 32

Figure 19: Speed and steering images ........................................................................................................ 34

Figure 20: Setting steering and speed states .............................................................................................. 34

Figure 21: Sending commands to the server .............................................................................................. 35

Figure 22: Logitech Rumblepad information .............................................................................................. 36

Figure 23: Getting controller events ........................................................................................................... 37

Figure 24: processing a command from the controller .............................................................................. 38

Figure 25: Sending commands to the camera and updating the GUI......................................................... 39

Figure 26: Sending the state to the camera................................................................................................ 39

Figure 27: Controller button ....................................................................................................................... 40

Figure 28: Activating the output using iod.................................................................................................. 42

Figure 29: Inside iod .................................................................................................................................... 44

Figure 30: Example code showing definitions of IO variables .................................................................... 45

Figure 31: Example code for using ioctl().................................................................................................... 46

Figure 32: Test program to figure out how to use ioctl() with the Axis 207W camera .............................. 48

Figure 33: Test program to measure the speed at which ioctl can trigger the camera's output ............... 50

Figure 34: Initial code to trigger the camera's output within the server ................................................... 51

Figure 35: Six bit signal sequence code ...................................................................................................... 54

Figure 36: Timer register initialization ........................................................................................................ 57

Figure 37: Microprocessor code to interpret the signals from the camera's output (initial design) ......... 59

Figure 38: Microprocessor code to interpret the signals from the camera's output (final design) ........... 60

Figure 39: PWM register initialization ........................................................................................................ 61

Figure 40: Microprocessor code to create correct PWM signal for the RC car's control boxes(initial
design) ......................................................................................................................................................... 63

Figure 41: Microprocessor code to create correct PWM signal for the RC car's control boxes (final design)
.................................................................................................................................................................... 65

Figure 42: Timetable ................................................................................................................................... 66

Figure 43: Connecting two Dragon12 boards ............................................................................................. 68

Figure 44: Resetting Dragon12 board in EmbeddedGNU ........................................................................... 69

Figure 45: Choose HCS12 Serial Monitor file .............................................................................................. 70

Figure 46: The Serial Monitor has been successfully loaded ...................................................................... 71
Figure 47: Options to control the monitor.................................................................................................. 72

Figure 48: Creating a new Dragonfly12 project .......................................................................................... 73

Figure 49: Dragonfly12 initial project layout .............................................................................................. 74

Figure 50: Downloading code to Dragonfly12 using EmbeddedGNU ......................................................... 76




Tables
Table 1: Signal conversion between IP Camera output and Steering Box input. ....................................... 15

Table 2: Signal conversion between IP Camera output and Speed Control Box input. .............................. 16

Table 3: Number of signals sent and the corresponding speed and direction of the RC car ..................... 51

Table 4: Steering and Speed signal sequences sent and the corresponding time taken to send the signals
.................................................................................................................................................................... 54




Introduction
        This project involves modifying an R/C car to be controlled from a personal computer
over a wireless network connection. As long as a suitable wireless network connection is
available (refer to the requirements document), the user will be able to run the program,
establish a network connection between the computer and the car, and then control the car
(acceleration and steering) while being shown a live video stream from an IP camera on the car.

        The design involves both software and hardware. The software will run on a user’s
personal computer. It will allow the user to connect to the IP camera on the car using an
existing wireless network, control the steering and acceleration of the car through keyboard
commands, and view a live video stream from the camera on the car. The IP camera supports
video transmission, wireless network connectivity, and the capacity for an embedded server, as
we will see in the design. The signal output on the camera will be connected to a
microprocessor. The microprocessor will have outputs that connect to the control boxes on the
car, thus allowing control signals received by the camera to be relayed to the car. A more
detailed description of the requirements can be found in the requirements document.

Research Review
        We researched several possibilities for implementing hardware on the R/C car that can
both establish a wireless network connection and send signals to the control boxes. Initially,
we wanted to use a wireless card that can connect to the internet using cell phone signals. This
would allow the car to be used anywhere a cell phone signal is present. However, this idea was
quickly revised for several reasons. First, we needed to find a small computer which supported
this type of card. The only computers we found that might work were laptops; but most were
too large for the car and the smaller laptops generally ran Linux which wasn’t supported by the
wireless cards (for example see Verizon Wireless 595 AirCard®).[1] We decided to try to find a
small device that acted like a computer (such as a PDA or blackberry) that could connect to a
cell phone network and run some basic applications. However, most had no output that we
could use to send signals from the device to the R/C car’s control boxes, and a cell phone
network card would still be required for the user’s computer. Also, the cards required a data
plan[1] just to run the car, which we didn’t want. Finally, the details of how cell phone networks
work are proprietary, so we would have had a very hard time configuring our software to work
correctly with them.

        Our second option was to use an IEEE 802.11 compatible wireless network. This way we
could have more control over the network (and we could set up a network ourselves for
testing) and most computers can connect to this kind of wireless network. All we needed was a
computer that we could put on the car. As already mentioned, most laptops were too large for
the car so we decided to look for a small device (such as a PDA) that supported wireless
networking or that had a port where we could plug in a wireless adapter. Once again, we were
unable to find one that could support wireless connectivity and have an output we could use to
send signals to the car’s control boxes. Further research yielded the discovery of IP cameras.[2]

         IP cameras can connect to a network (through either a network cable or wirelessly), thus
allowing remote control of them.[2] We decided that we could use an IP camera as long as it
could connect wirelessly, provided some sort of output for sending signals to the R/C car’s
control boxes, and allowed simple programs to be written for and run on it. The reason we
needed simple programs to run on the camera is that we need to be able to customize our
ability to control the camera so we can use it to control the RC car. The camera that seemed to
fit perfectly with these requirements was the Axis 207W.[3] It is small so it will be easy to mount
onto the R/C car. It uses IEEE 802.11b/g wireless connectivity.[4] It has a video streaming
capability as well as an output for sending signals to a device (such as an alarm).[4] The camera
also has an embedded Linux operating system[4], and Axis provides an in-depth guide[5] to
writing scripts for the camera, which allows for custom control of the camera. There is also an
API for manipulating video, called VAPIX®, that is provided by Axis for use with its cameras. The
camera uses a 4.9-5.1 Volt power source with a max load of 3.5W.[4] Therefore, we could easily
use batteries to supply power to it when it is mounted on the R/C car.

        Our research to find a reliable and flexible means of communication between the
remote computer and the IP Camera has led us to decide on the TCP/IP communications
protocol suite[6] for our design. This connection will be the means of sending driving commands
from the remote computer to the IP Camera, and a solid understanding of TCP/IP sockets has
informed our understanding of how the connection can be maintained.

         We have decided to use a Client/Server Architecture, communicating over TCP/IP
sockets[6]. It makes sense for the server to be on the Camera/Car side, as it will process
requests. To embed the application on the IP Camera, it will have to be written in C. The bare
bones of a socket server involve creating a socket, binding it to a port and hostname, listening
on the port, accepting connections, and handling/sending messages. In C there are functions to
perform these tasks that we plan on building off of, such as:

int socket(int domain, int type, int protocol);
int bind(int s, struct sockaddr *name, int namelen);
int listen(int s, int backlog);
int accept(int s, struct sockaddr *addr, int *addrlen);

These are some of the essential functions we’ll be able to use after including socket.h and
inet.h. With this as the basis of communication between the Client and our Server (on the
Car/Camera side), the rest of the task of writing the Server lies in handling incoming messages
from the socket’s input, which will be discussed in detail in the Design and Implementation
sections.
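
To make the flow concrete, a minimal skeleton assembled from these calls might look like the
following sketch (error handling, the port number, and the message handling are placeholders,
not our final Server):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
      char buf[64];
      int n, client;
      struct sockaddr_in server;
      int s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP); /* create a TCP socket */

      memset(&server, 0, sizeof(server));
      server.sin_family = AF_INET;
      server.sin_addr.s_addr = htonl(INADDR_ANY);        /* any local address */
      server.sin_port = htons(7777);                     /* placeholder port */

      bind(s, (struct sockaddr *) &server, sizeof(server));
      listen(s, 1);                                      /* queue one connection */

      client = accept(s, NULL, NULL);                    /* wait for the client */
      while ((n = recv(client, buf, sizeof(buf) - 1, 0)) > 0) {
            buf[n] = '\0';
            printf("received: %s\n", buf);               /* handle the message */
      }
      close(client);
      return 0;
}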

        Since the server will need to run on the IP Camera’s hardware, the project requires
embedding C application(s) on the camera. We researched the specifications of the camera’s
processor and embedded system to find out what our target environment would be. The Axis
207W has an ARTPEC processor with Linux 2.6.18 running on the armv4tl architecture. Since the
camera did not have a compiler on it and our Server would have to be developed on an Intel
x86 machine, the code would have to be compiled using the proper x86-to-ARM cross-compiler
to ensure the program would run on our target system. Axis provided us with a cross-compiler
to build; Appendix C will discuss the unexpected issues we faced in simply setting this up.

        Even though the Server is to be written in C, since we are using TCP/IP as our means of
communication, the Client does not have to be a C application. We have chosen Java as the
language for the Client, since it is cross-platform, has the AWT and Swing libraries for creating a
GUI, and can work with TCP/IP sockets. At this point in our research there are still some
unknowns about how all of this will work together, but at an abstract level all of the
components are there, so Java seems like the right choice for developing the user-side
software.

         The VAPIX® application programming interface is based on the Real Time Streaming
Protocol (RTSP)[7]. Axis provides it as a means of remotely controlling the media stream from
the IP Camera to the remote computer. We researched scripting techniques for using the API,
but ultimately decided that opening our own HTTP connection to the IP Camera’s MJPEG
stream source was the most straightforward way of handling the video feed, with less overhead
than the VAPIX® API, especially since we have decided to write the Client/GUI in Java. We will
discuss what this means in more detail in the Design and Implementation sections.

        In addition, we needed to find out the specifications of how the R/C car is controlled so
we contacted the manufacturer (Futaba) of the control boxes that are on the car. We found
out that both the steering and motor (speed) controls use a 1.0-2.0 millisecond square pulse
signal with an amplitude of 4-6 volts with respect to ground. The signal repeats every 20
milliseconds. Thus, we need the output signal from the camera to match this in order to control
the car.

        Next we researched how we can control the signal output of the IP camera to produce
the signal required by the R/C car’s control boxes. We contacted Axis and learned that the
camera output has been used for controlling a strobe light so a controlled square pulse seems
possible. The scripting guide provided by Axis also gives some example scripts for controlling
the I/O of the camera, so we should be able to write scripts that can be used to send signals to
the R/C car’s control boxes. However, further research revealed that the camera is unable to
produce the signal required to control the RC car. According to the Axis scripting guide[5] the
camera can activate the output with a maximum frequency of 100Hz. This is inadequate
because the steering and motor speed controls use a Pulse Width Modulated (PWM) signal
with a period of 20ms that is high for between 1 and 2ms. A circuit could possibly be designed to
convert the output signal of the camera to the correct PWM signal, but we decided it would be
easier and allow for greater flexibility if a microprocessor was used. Therefore, we decided to
use a microprocessor to produce the signal output required by the RC car’s steering and speed
control boxes.

       The microprocessor will use the output of the IP camera as input and convert the signal
to what is required by the RC car’s control boxes. We decided to use a Dragon12 development
board to design the program that will change the signal. We will then flash our program onto an
actual microprocessor, the Dragonfly12, which will be used on the RC car. The
Dragon12 development board will be used with the CodeWarrior development suite. A
detailed guide for setting up the Dragon12 board to work with CodeWarrior is given in
Appendix A.

       A lot of research was done on how to use the Dragon12 board for microprocessor
development, most of which involved learning how to use the different features of the
Dragon12 board. An aspect of the research that pertains directly to our
project is the use of a microprocessor to produce a controlled output signal. We found out that
an output signal from the microprocessor can be precisely controlled using the clock frequency
of the microprocessor. We successfully used the microprocessor clock frequency on the
Dragon12 board to blink two LEDs at different rates. One blinked about once every 1/3 of a
second and the second one blinked twice as fast. This was accomplished using the example
projects and tutorials provided by Dr. Frank Wornle of the University of Adelaide[10].

        Further research revealed that the microprocessor contains a PWM signal output. Using
that along with the microprocessor timer interrupts allows for the translation of an input signal
into an output signal. The PWM signal that is generated is based on the input signal received.
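
To make the idea concrete, here is a sketch of what 16-bit PWM initialization might look like on
the HCS12 (our illustration only, assuming a 24MHz bus clock and the register names from
CodeWarrior’s derivative header; the project’s actual initialization appears in Figure 39):

#include <mc9s12dp256.h>    /* CodeWarrior register definitions (assumed) */

/* Sketch: concatenate PWM channels 0 and 1 into one 16-bit channel.
 * With a 24MHz bus clock and clock A = bus/8 = 3MHz, one count is 1/3us,
 * so a 20ms period is 60000 counts and a 1.0-2.0ms high time is
 * 3000-6000 counts. */
void pwm_init(void) {
      PWMCTL  |= 0x10;      /* CON01: 16-bit channel, output on pin PWM1 */
      PWMPRCLK = 0x03;      /* clock A = bus clock / 8 */
      PWMPOL  |= 0x02;      /* pulse starts high */
      PWMPER01 = 60000;     /* 20ms period */
      PWMDTY01 = 4500;      /* 1.5ms high: the neutral position */
      PWME    |= 0x02;      /* enable the channel */
}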


Design
As was introduced in the requirements document, we have the following issues to tackle:

       Establish a connection between the RC car’s Computer and the User’s (or driver’s) PC.

       Establish a connection between the RC car’s Computer and the hardware that controls
        the RC car.

       Get a real-time visual of the car’s position (to allow an interactive driving experience).

       Send commands from User’s PC to the RC car: this will require both software and in the
        case of the RC car’s side, additional hardware.

       Finally, when everything is functional, provide an attractive, user-friendly GUI.

We start with this basic picture to illustrate, in an abstract way, the modules necessary to fulfill
these requirements. Then, we will sequentially build off of this to achieve a design that
addresses full functionality:



Our first objective was finding a way to connect the two sides of the system: the “PC” being the
user side and the rest being on the car side. As described in the research section, we have
decided on an IP camera to sit on the car-side and handle that end’s connection over a wireless
network.




Connection between the IP Camera and the User’s PC (video
feed)
        The camera’s system includes an API called VAPIX®, which is based on the Real Time
Streaming Protocol (RTSP) and is designed to be used by sending predefined commands like SETUP,
PLAY, PAUSE, and STOP to ‘control’ the video feed. These commands are sent as requests that
look like HTTP headers and corresponding responses of similar form are sent back from the
camera.

        We wanted to avoid the overhead of the API and the time spent learning it and figuring
out how it might be incorporated into a Java application. We decided
that we wanted to directly use the MJPEG stream that the camera serves out. This is accessible
from a URL looking something like:

“http://152.117.205.34/axis-cgi/mjpg/video.cgi?resolution=320x240”

By creating our own HTTP connection in our Java application, we can get this as an InputStream.
But now that we have the MJPEG stream, knowing what it looks like is important, so we know
how to go about using it. From VAPIX_3_HTTP_API_3_00.pdf section 5.2.4.4:

Content-Type: image/jpeg

Content-Length: <image size>

<JPEG image data>

--<boundary>

So, the JPEG images are there, but we can’t simply put the MJPEG stream into images blindly. If
we can get the MJPEG stream as a Java InputStream, we are able to continuously find the
beginning of a JPEG and add bytes, starting there, to an Image until we get to the boundary.
Our video feed problem thus reduces to this: continuously find whole JPEG images within the
stream. Each time one is found, display it. Since the IP Camera is dishing out images at,
depending on your preference, around 30 frames per second, we have a considerably solid
video feed. We’ll explain the Java implementation of this in detail in the Implementation
section.



Connection between the IP Camera and the User’s PC
(commands)
From our research on the TCP/IP protocol suite, we realized that the most reliable way to send
commands would be to use the Client/Server Architecture using a connection between TCP/IP
sockets[6]. Naturally, the car side is the Server, as it will accept a connection and process
requests. The Client is on the user’s PC. Additionally, since it will not be necessary in the scope
of this project to ever have more than one ‘controller’ for a car, the Server only allows one
client to connect to it.

The Axis 207W has an embedded Linux 2.6.18 operating system running on the armv4tl
architecture. To embed the Server program on the IP Camera, it must be written in C and
compiled using Axis’ x86-to-ARM cross-compiler to run on the IP Camera’s hardware. In this
way, we will be able to write our own Server that will run on the 207W. To implement the basic
functionality of a Server, we will use the methods touched on in our research section, creating a
socket and listening on it. Our Client can be written in almost any language; we will use Java, as
it is highly compatible with our GUI. With a Server on the camera (with the car) that can accept
requests from the Client on the driver’s computer, we have the communication from the user’s
PC to the IP Camera covered. What is sent from the Client, what is received by the Server, and
how it is handled are the design issues in how the car will be controlled.




Hardware: Sending signals produced by software to the car
        The hardware aspect of this project focuses on programming a microprocessor to take a
signal sent to it by the output of the IP camera and relay a modified signal to the steering and
speed control boxes on the RC car. The details of this process are explained below.

Theory of hardware operation
The hardware design of this project was broken down into several key components:

    1. R/C Engine
    This R/C car is driven by a simple electric motor. As current flows through the motor in the
positive direction, the engine spins in the forward direction. As current flows the opposite
direction, the engine spins in reverse. However, the details of operation for the engine are
unnecessary for this project since there is a controlling unit which determines the appropriate
current to be sent to the engine based on its received pulse signal. The details of the controlling
unit’s function are elaborated upon in part 2 of this section.

    2. Controlling Units of the R/C car
    We found that there were 3 components which controlled the functions of the car. The first
controlling unit is the servo. This servo box currently takes in a radio frequency signal and
divides it into two channels: one which leads to a steering box, the other to the speed control.
Each channel consists of three wires: ground, Vcc, and a pulse width modulated signal. The Vcc
carries a voltage between 4.6-6.0V. The pulse width modulation is what determines the
position of the steering and speed boxes of the car. The pulse is a square wave, repeating
approximately every 20ms. This pulse ranges between 1.0 and 2.0ms with 1.5ms as the normal
center. As the pulse deviates from this norm towards the 1.0ms minimum or the 2.0ms maximum, it changes the
position of the steering or speed (depending on which channel it’s feeding). For example, as the
pulse to the speed control moves towards 2.0ms, the speed of the engine is going to increase in
the forward direction. As the pulse moves towards 1.0ms, the engine speed is going to increase
in the reverse direction.

       The graphs below are a demonstration of the pulse width modulation. As the pulse
changes, it changes the amount of voltage that is delivered. In the examples below, the first
case shows 50% of the pulse is at the Vcc, therefore, 50% of the Vcc is delivered. In the second
case, 25% of the pulse is at the Vcc, therefore, 25% of the Vcc is delivered.
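
Concretely, for the RC car’s control signal this means a 1.5ms high pulse in a 20ms period is a
duty cycle of 1.5/20 = 7.5%, so on average 7.5% of the Vcc is delivered.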




                  Figure 1: Pulse width modulation


    3. IP Camera
    The details of this camera have been outlined above. The camera carries an embedded
Linux operating system which should allow for custom controlled output of the alarm port. This
output will be programmed to respond accordingly to input signals from the network
connection and be in the form of the necessary pulse needed for the speed and steering
controls. The voltage which this camera requires – 4.9-5.1V – will also be within the range that
could be provided from a battery pack.

    4. Microprocessor
    A programmable microprocessor is what we are going to use to replace the servo box and
recreate the pulse width modulation. Since the camera can toggle its output at most once every 10ms, it
would not be adequate for supplying a PWM signal for the steering and speed commands.
Instead, the microprocessor will receive a signal from the IP Camera and translate it into the
appropriate signals for the steering and speed control units. The output signals will vary in
length between 10 and 110ms for speed and 120 and 220ms for steering (see Tables 1 and 2).

        The microprocessor code will be written, debugged, and tested using CodeWarrior and the
Dragon12 Rev E development board. The programming language will consist of a combination
of C and Assembly.

     5. Battery Pack
The standard battery pack on this vehicle is a series of six 1.2V, 1700mAh batteries. Since
equipment is going to be added, a second battery pack was added to supply the camera and
microprocessor with power. Once the IP camera and microprocessor have been implemented,
the appropriate number of batteries will be added. The current run time for the standard car
and battery pack is approximately 20 minutes. The battery pack for the IP camera and
microprocessor should last at least as long.

Hardware Design
        The output of the IP camera can be activated a maximum of 100 times a second (period
of 10ms) according to the Axis scripting guide[5]. As noted above, the RC car requires a 1-2ms
pulse every 20ms so this is much too slow. Therefore, we added a microprocessor which will
take the signal from the IP camera’s output and convert it to the required signal for the control
boxes on the RC car. The output signal from the camera will consist of two alternating signals
representing the acceleration and steering of the car. The signal for the acceleration varies in
length between 10ms and 110ms in increments of 10ms. This signal represents 11 degrees of
acceleration; 5 for moving forward, one representing no movement, and 5 for moving in
reverse. The first 5 increments (10, 20, 30, 40, 50) are used for moving forward, decreasing in
speed as the signal time increases (10 is the fastest speed, 50 is the slowest speed of the car). A
60ms signal represents neither moving forward nor backwards. The largest 5 increments (70,
80, 90, 100, 110) represent moving backwards, increasing in speed as the signal length
increases (70 is the slowest, 110 is the fastest). The second signal, used to control steering,
varies between 120 and 220ms in increments of 10ms also. This signal represents 11 degrees
of freedom; 5 for turning left, one for moving straight, 5 for turning right. The first 5
increments represent turning left (120, 130, 140, 150, 160), decreasing in the degree of
sharpness as the length of the signal increases (120 is the sharpest left and 160 is a slight left
turn). A 170ms signal represents no turn. The last 5 increments represent turning right (180,
190, 200, 210, 220). The degree of sharpness of the turn increases as the signal time lengthens
(180 represents a slight right turn and 220 represents the sharpest right). The reason for
limiting the signal increments to 11 instead of allowing for continuous control is that the
camera can’t output a signal that varies continuously. It can only output signals with period
lengths that increment in 10ms. Thus, the signal must be divided, and allowing for 11 different
signal lengths will provide control that will seem nearly continuous to the user.

        The input to the steering box and speed box on the RC car requires a 20ms pulse-width
modulation signal. The high part of the signal has a variance between 1 and 2ms. In order to
correspond with the eleven different signals that can be sent from the camera to the
microprocessor, the high part of the signal sent to the steering and speed boxes of the RC car
will be divided into eleven different lengths of roughly equal size. The different lengths are 1,
1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9 and 2ms. They correspond with the input signals from
the IP camera to the microprocessor for steering and speed control as shown in Tables 1 and 2
below.

        The microprocessor has a single input, connected to the output of the camera, and two
outputs. One will be connected to the steering control box and one will be connected to
the speed control box on the RC car. The steering control will receive the appropriate pulse-
width modulated signal from the microprocessor when the microprocessor receives a signal
representing a steering command from the IP camera (between 120 and 220ms in length). The
pulse width modulated signal sent to the steering box will vary appropriately as well. For
example, if the microprocessor receives a 120ms signal, which represents a hard left command
from the user, it will send a 20ms signal with 1ms high and 19ms low to the steering box on the
RC car, which represents a hard left. Likewise, if the microprocessor receives a 150ms signal
from the IP camera, it will send a 20ms signal with 1.3ms high and 18.7ms low, which
represents a moderate left turn. The chart below (Table 1) illustrates the conversions of the signal
outputted from the IP camera, which is received by the microprocessor, to the input signal to
the steering box, which is sent by the microprocessor.

Table 1: Signal conversion between IP Camera output and Steering Box input. The time given as the
Microprocessor output is how much of the 20ms pulse width modulated signal is high.

IP Camera Output Signal Period        Steering Box Input Signal Period (High)             Car Action
    (Microprocessor Input)                   (Microprocessor Output)
             120ms                                    1.0ms                                Left Turn
             130ms                                    1.1ms                                Left Turn
             140ms                                    1.2ms                                Left Turn
             150ms                                    1.3ms                                Left Turn
             160ms                                    1.4ms                                Left Turn
             170ms                                    1.5ms                                No Turn
             180ms                                    1.6ms                               Right Turn
             190ms                                    1.7ms                               Right Turn
             200ms                                    1.8ms                               Right Turn
             210ms                                    1.9ms                               Right Turn
             220ms                                    2.0ms                               Right Turn



         The output to the speed control box on the RC car will work similarly. For example, if
the microprocessor receives a 10ms signal from the IP camera, which represents the fastest
forward speed, it will send a 20ms signal with 2ms high and 18ms low to the speed control box of the RC
car. This corresponds to the fastest speed possible for the RC car. The chart below (Table 2)
illustrates the conversions of the signal outputted from the IP camera, which is received by the
microprocessor, to the input signal to the speed control box, which is sent by the
microprocessor.

Table 2: Signal conversion between IP Camera output and Speed Control Box input. The time given as the
Microprocessor output is how much of the 20ms pulse width modulated signal is high.

IP Camera Output Signal Length        Speed Box Input Signal Length (High)               Car Action
    (Microprocessor Input)                 (Microprocessor Output)
              10ms                                    2.0ms                            Move Forward
              20ms                                    1.9ms                            Move Forward
              30ms                                    1.8ms                            Move Forward
              40ms                                    1.7ms                            Move Forward
              50ms                                    1.6ms                            Move Forward
              60ms                                    1.5ms                            No Movement
              70ms                                    1.4ms                            Move Backward
              80ms                                    1.3ms                            Move Backward
              90ms                                    1.2ms                            Move Backward
             100ms                                    1.1ms                            Move Backward
             110ms                                    1.0ms                            Move Backward
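
To make the two tables concrete, a small helper on the microprocessor could implement them
directly. The following C sketch (our own illustration; the function name is hypothetical)
converts a measured camera signal length in ms into the high time of the 20ms output pulse:

/* Illustration of Tables 1 and 2: map the length of the camera's output
 * signal (ms) to the high time of the 20ms PWM pulse (ms).
 * Returns 0.0 for an unrecognized length. */
double pulse_high_ms(int camera_ms) {
      if (camera_ms >= 10 && camera_ms <= 110)      /* speed command */
            return 2.0 - (camera_ms - 10) / 100.0;  /* 10ms -> 2.0ms ... 110ms -> 1.0ms */
      if (camera_ms >= 120 && camera_ms <= 220)     /* steering command */
            return 1.0 + (camera_ms - 120) / 100.0; /* 120ms -> 1.0ms ... 220ms -> 2.0ms */
      return 0.0;
}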
        Once completed, the connection between the IP camera, microprocessor, and RC car
steering and speed control boxes will look something like Figure 2. The microprocessor will
have a single input from the output of the IP camera. It will have two outputs; one will connect
to the speed control box and the other to the steering box. The microprocessor will continually
send signals to the two control boxes based on the input signals received. Since the IP camera
is only able to send one signal at a time, the signals for steering and speed control will alternate.
Thus, every other signal is for steering and the signals in between are for speed. While the
microprocessor is receiving a signal corresponding to steering from the IP camera, it will
continue to send the previous signal for speed to the speed control box. The reverse is true
when the microprocessor receives a signal from the IP camera for speed. This makes it so that
the commands are ‘continuous’; there is no break while the signal for the other kind of control
is being sent. This may produce a little lag, but it shouldn’t be too noticeable as the longest
signal is about 1/5 of a second (220ms). The IP camera, microprocessor, and RC car will all be
powered by batteries.




                               Figure 2: The complete setup on the RC car
         Figure 3, below, represents the addition of the microprocessor into the overall project.
It will communicate between the IP camera and the RC car.




                     Figure 3: The microprocessor component is added to the design

Backtracking
        The final aspect of the communication between the IP camera and the RC car is the
implementation of a backtracking system. In the case that the network connection between
the IP camera and the user’s computer is lost, the RC car will backtrack to its starting location.
While backtracking, it will periodically attempt to reconnect with the user’s computer. If
reconnecting is successful, it will discontinue backtracking and control will return to the user.

         This will be implemented as part of the server on the IP camera. Every time a command
is sent to the IP camera from the user, it will be placed in memory on a stack. Each command
will consist of the type (either speed control or steering) and the duration of the command. The
only difference from the original command is that the direction will be reversed (moving
forward will become backward and vice versa). For example, if the command is move forward
at the fastest speed, then the type of command is speed, and since the direction is reversed the
stored length is 110ms (see Table 2).
As soon as the network connection between the user’s computer and the IP camera is lost, the
commands get popped off of the stack and sent to the microprocessor. Thus the car will return
to its original location. If the connection between the user’s computer and the IP camera is
reestablished, the backtracking stops and commands resume being added to the stack.
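
To illustrate the idea, here is a sketch of the command stack in C (our own illustration; the
names and the fixed-size array are assumptions, and mirroring the steering commands
left-to-right is our addition for retracing turns):

#define STACK_MAX 4096

static int stack[STACK_MAX];    /* reversed signal lengths, in ms */
static int top = 0;

/* Record each command as the user drives, storing the reversed direction
 * so that popping it later drives the car back. Speed commands (10-110ms)
 * mirror around 60ms; steering commands (120-220ms) mirror around 170ms. */
void push_command(int length_ms) {
      if (top == STACK_MAX)
            return;                              /* kept simple: drop when full */
      if (length_ms <= 110)
            stack[top++] = 120 - length_ms;      /* forward <-> backward */
      else
            stack[top++] = 340 - length_ms;      /* left <-> right */
}

/* On a dropped connection, replay the commands in reverse order. */
int pop_command(void) {
      return (top > 0) ? stack[--top] : -1;      /* -1 means the stack is empty */
}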

GUI
The original design was to use the language Processing to write the GUI. Its main purpose is
generating and manipulating images, so we wanted to directly connect with the RTSP server
on the IP Camera and display the received video feed. The GUI should also have an instance of
our Client that will be written in Java (Processing is built on top of Java using its libraries. Behind
the scenes, a Processing application is wrapped in a Java Class). Since our design changed from
initially planning on using the RTSP server and VAPIX® API, to parsing the MJPEG stream
ourselves, we have decided to start out by writing the GUI in Java as well, so we can directly
use some of Java’s libraries that will be necessary for the parsing process. If time allows, we
would like to incorporate Processing into the project as the language for the GUI. That would
allow us to have a user-friendly interface, where you can control the car through our
Client using the arrows on your keyboard and directly view the video feed from the IP camera
mounted on the Car. Below are a couple mockup screen shots of what our GUI might look like.
We will display a monitor of the relative speed, based on the commands being sent to the
server. We would also prefer to have a way to monitor the signal strength of the IP Camera’s
wireless connection. You can also disconnect using the GUI.




     Figure 4: Initial design of GUI




                            Figure 5: GUI design for a dropped connection




Below is a sequence diagram including all of the modules added to fulfill our requirements. Also
refer to and compare with the use cases in the Requirements Document:




Figure 6: Sequence diagram for using the RC car
Implementation
       The implementation covers everything that was done after the initial design. This
includes problems encountered, the design changes made to handle the problems, and the final
outcome. There will be four main sections embodying the four major areas of design for this
project: Computer to Camera, Camera to Microprocessor, Microprocessor to Car, and Powering
the Project. Computer to Camera focuses on the implementation of the user interface, controls,
and communication between the server on the camera and the program on the user’s
computer. Camera to Microprocessor describes how signals are sent from the camera to the
microprocessor and how they are received by the microprocessor. Microprocessor to Car
covers how the microprocessor creates the proper pulse width modulated signals and sends
them to the car. Finally, Powering the Project covers the circuit design and implementation for
powering the microprocessor, car, and camera via batteries.

Computer to Camera
        Here we have two sides of software to implement: the Server, on the camera side,
and the Client, on the User’s PC. The Server is in charge of receiving user commands and then
outputting the appropriate signals via the camera’s output to the microprocessor. The client
provides the GUI, including video feedback from the camera, receives commands, and then
transmits them to the Server.


Server
        We wanted to keep the server as simple as possible. It serves the sole purpose of
receiving messages from the client, and handling them accordingly. There is one C file to handle
this. As noted in the design section, this C file had to be compiled using the appropriate x86-to-
ARM cross-compiler for Linux 2.6.18 on the armv4tl architecture. Setting this up on a computer
so we could develop our server presented a major hang-up. We could not get the cross-
compiler to build on our machine. This strenuous process, as well as our eventual solution, will
be discussed in detail in Appendix C. If you are planning on developing on the Axis Camera, this
may save you a great deal of time and hair.


Getting the code on the camera
The Axis 207W has both an ftp server and telnet on it. This makes it possible to transfer our
compiled code to the camera’s directories and then run the program. First, make an ftp
connection to the camera using the command ftp <camera's IP address>. Enter your
username and password as prompted. The compiled C program needs to be transferred in
Binary mode, so enter the command binary. Now you must change to a writable directory,
either the /tmp folder or the /mnt/flash folder. Enter put <filename>. The compiled
file should now be in the directory you chose.
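
For example, assuming the camera is at 192.168.0.90 and the compiled binary is named server
(both names are illustrative), a session might look like:

ftp 192.168.0.90
(log in with your username and password when prompted)
binary
cd /mnt/flash
put server
quit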

In our project we used telnet to run the Server. While the camera has telnet, it is not enabled
by default. Page 16 of the Axis Scripting Guide[5] describes how to enable telnet on the camera.

How to enable telnet
To enable telnet...

      1. Browse to http://192.168.0.90/admin-bin/editcgi.cgi?file=/etc/inittab

      2. Locate the line # tnet:35:once:/usr/sbin/telnetd

      3. Uncomment the line (i.e. delete the #).

      4. Save the file.

      5. Restart the product.

Now, telnet is enabled.


Use the command telnet <camera's IP address>. Change into the directory where the compiled
Server is and run it.

Our Implementation of the Server
The Server first initializes the socket, as described in the research section. Note: there is a
function, Crash, which simply prints the error encountered along with the string passed.

if ((serversock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) {
            Crash("failed creating socket");
      }

Figure 7: Creating the TCP Socket

Fill in the sockaddr_in structure with the required information. Set a Hostname and Port to be
used:

memset(&echoserver, 0, sizeof(echoserver));                      /* Clear struct */
echoserver.sin_family = AF_INET;                                 /* Internet/IP */
echoserver.sin_addr.s_addr = htonl(INADDR_ANY);               /* Incoming addr */
echoserver.sin_port = htons(atoi(argv[1]));                   /* server port */

Figure 8: sockaddr_in structure

Bind the Socket to the local address:

if (bind(serversock, (struct sockaddr *) &echoserver, sizeof(echoserver)) < 0)
{
            Crash("bind() failed");
      }

Figure 9: Binding a socket

Start listening for connections on the socket:

if (listen(serversock, MAXPENDING) < 0) {
            Crash("listen() failed");
      }

Figure 10: Listen for a connection

Accept a connection and then pass off the connection to a method to handle incoming data:

if ((clientsock = accept(serversock, (struct sockaddr *) &echoclient,
&clientlen)) < 0)
            {
                  Crash("failed accepting client connection");
            }
            fprintf(stdout, "Client connected: %s\n",
inet_ntoa(echoclient.sin_addr));
            HandleClient(clientsock);

Figure 11: Accepting a connection

After a connection has been established on that port, the Server starts handling messages. Due
to the requirements of the Camera to Microprocessor design covered later, the specification is
that the server receives an integer from the client and, based on that integer, sends out the
proper pulse to the microprocessor.

void HandleClient(int sock) {
      char buffer[BUFFSIZE];
      int received = -1;
      int state = 0;

      while (recv_all(sock, buffer)) {
            state = atoi(buffer);
            pulse(state);
            state = 0;
      }…

Figure 12: Receiving commands

The recv_all function is in a header file we have included; it wraps the recv function from
socket.h. It is adapted from an example in the book Hacking: The Art of Exploitation by Jon
Erickson. The reason for this is that our integer system gets into the double digits, so the Client
will be sending strings that might look like “12”. Since recv reads into a buffer of chars, there’s
no assurance that we wouldn’t send off a “1” and then a “2” to pulse. This would clearly not be
a reliable way to drive the car, so we want to ensure that if we send a “12” from the Client, the
server will wait until it gets the whole command before it sends out a pulse. We can simply
send a terminator, in our case “\n”, from the Client, and on the Server side wait until we hit
that terminator before sending out a pulse command. The header file contains the function
recv_all, which loops using recv, adding the chars it receives to the buffer until it gets a “\n”; it
then puts a null terminator, “\0”, at the end of our “string” so the char array is valid and can be
converted to an integer.

…
while (recv(sockfd, ptr, 1, 0) == 1) { // read a single byte
            if (*ptr == END[end_match]) {
                  end_match++;
                  if (end_match == END_SIZE) {
                        *(ptr + 1 - END_SIZE) = '\0';
                        return strlen(buffer);
                  }…

Figure 13: Reading characters from a buffer.
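
For reference, a complete recv_all might look like the sketch below (our reconstruction, not the
exact header file; it assumes END is the terminator string "\n" with END_SIZE = 1 and that
buffer holds at least BUFFSIZE bytes):

int recv_all(int sockfd, char *buffer) {
      char *ptr = buffer;
      int end_match = 0;
      while (recv(sockfd, ptr, 1, 0) == 1) {          /* read one byte at a time */
            if (*ptr == END[end_match]) {             /* matched terminator byte */
                  end_match++;
                  if (end_match == END_SIZE) {        /* full terminator found */
                        *(ptr + 1 - END_SIZE) = '\0'; /* overwrite it with NUL */
                        return strlen(buffer);
                  }
            } else {
                  end_match = 0;                      /* restart terminator match */
            }
            ptr++;
            if (ptr - buffer >= BUFFSIZE - 1)         /* guard against overflow */
                  return 0;
      }
      return 0;                                       /* connection closed or error */
}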

More details on how the integers are interpreted and how exactly the outgoing pulses are
generated in the pulse function will be discussed below.

Client
        The Client side (including all the software on the User’s PC to drive the car) has more
components and is responsible for getting the MJPEG stream, parsing the stream into JPEGs as
they arrive, continuously displaying them in a GUI, taking commands from some interface
(keyboard or gaming controller), and finally sending commands off to the Server. This is all
written in Java and the tasks are split up between three classes, AxisCamera, StreamParser, and
Cockpit.

AxisCamera:


        We can think of this class as a container of the MJPEG stream. The basic functionality of
this class includes:

        opening an HTTP connection to the camera
        getting the MJPEG stream as a Java InputStream
        defining a boundary for the MJPEG stream
        sending the InputStream to StreamParser
        getting back an image and painting it to itself (as it extends a JComponent)

Opening an HTTP connection can be done using an HttpURLConnection in Java. We used
sun.misc.BASE64Encoder in our encode method to produce the base64authorization string
from the username and password. Once the HttpURLConnection is opened, if it requires
authorization, as our Axis Camera’s web server does, we have to set the authorization request
property with our encoded username and password in the form “Basic username:password”;
then we can connect.
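
A minimal sketch of such an encode method (our illustration; note that sun.misc.BASE64Encoder
is a non-public Sun class and is absent from modern JDKs):

// Sketch: build the "Basic <base64>" authorization value from a
// username and password using sun.misc.BASE64Encoder.
private String encode(String username, String password) {
      String userPass = username + ":" + password;
      return "Basic " + new sun.misc.BASE64Encoder().encode(userPass.getBytes());
}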

fullURL = "http://" + hostName + mjpegStream; // hostName and mjpegStream are changeable in the GUI
                  URL u = new URL(fullURL);
                  huc = (HttpURLConnection) u.openConnection();

                  // if authorization is required set up the connection with
the encoded authorization-information
                  if (base64authorization != null) {
                        huc.setDoInput(true);
                        huc.setRequestProperty("Authorization",
base64authorization);
                        huc.connect();
                  }

Making the HTTP connection to get the MJPEG stream from the camera

To synchronously get back image data from StreamParser and paint it to a JComponent we
used the following methodology:

AxisCamera implements Runnable, so it runs as a thread. The run method calls the
StreamParser’s parse method which will loop to get bytes from the MJPEG stream, explained in
the next section.

AxisCamera also implements ChangeListener, using the stateChanged method to get segments
back from the StreamParser and turn them into the image, so the JComponent’s
paintComponent method will continuously paint the updated image:

public void stateChanged(ChangeEvent e) {
            byte[] segment = parser.getSegment();
            if (segment.length > 0) {
                  try {
                        image = ImageIO.read(new
ByteArrayInputStream(segment));
                        EventQueue.invokeLater(updater);
                  } catch (IOException e1) {
                        e1.printStackTrace();
                  }
            }

Figure 14: Receiving an image

The line EventQueue.invokeLater(updater); schedules updater’s run method to be called on
AWT’s event dispatch thread. Updater is a Runnable created within AxisCamera to repaint the
JComponent.

public void connect() {
            try {

                         /**
                          * See the stateChanged() function
                          */
                         updater = new Runnable() {

                                public void run() {
                                      if (!initCompleted) {
                                            initDisplay();
                                      }
                                      repaint();…

Figure 15: Repainting image from MJPEG stream

initDisplay simply sets the dimensions of the JComponent, AxisCamera. When the state is
changed, meaning image data is received from StreamParser, the new data is synchronously
painted to the JComponent. As long as that data is a valid JPEG, we have a video stream.

StreamParser:

         This class handles parsing out the JPEG data from the MJPEG stream. The parse method
is called from AxisCamera. This essentially takes the InputStream and starts appending
everything to the internal buffer. When the boundary is found, the processSegment method is
called, which finds the beginning of a JPEG, if there is any, and then adds everything from that
point on to the segment. Lastly, it informs the listeners that the segment has changed. This
process strips anything in the stream that is not JPEG data.

public void parse() {
            int b;
            try {
            while ((b = in.read()) != -1 && !canceled) {
                  append(b);
                  if (checkBoundary()) {
                        // found the boundary; process the segment to find the JPEG image in it
                        processSegment();
                        // and clear out our internal buffer
                        cur = 0;
                  }
            }…

protected void processSegment() {
      // First, look through the new segment for the start of a JPEG
      boolean found = false;
      int i;
      for (i = 0; i < cur - JPEG_START.length; i++) {
            if (segmentsEqual(buffer, i, JPEG_START, 0,
                        JPEG_START.length)) {
                  found = true;
                  break;
            }
      }
      if (found) {
            int segLength = cur - boundary.length - i;
            segment = new byte[segLength];
            System.arraycopy(buffer, i, segment, 0, segLength);
            tellListeners();
      }
}

Figure 16: Finding the JPEG image

parse() starts appending data from the InputStream until it hits the boundary, as described in
the research section. From VAPIX_3_HTTP_API_3_00.pdf section 5.2.4.4, the boundary of the
MJPEG stream that the 207W serves is: “--<boundary>”. Once the boundary is found, we
have a segment that is ready to be processed, that is, to have any unusable data stripped out.
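
checkBoundary() is not reproduced here; a minimal sketch of how it could work, assuming the
same buffer and cur fields used by parse() and the boundary stored as a byte array (our actual
implementation may differ), is:

// Sketch: returns true when the bytes most recently appended to the
// internal buffer end with the MJPEG boundary string.
protected boolean checkBoundary() {
      if (cur < boundary.length) {
            return false;
      }
      for (int i = 0; i < boundary.length; i++) {
            if (buffer[cur - boundary.length + i] != boundary[i]) {
                  return false;
            }
      }
      return true;
}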

processSegment() finds the beginning of JPEG data, if there is any.

static final byte[] JPEG_START = new byte[] { (byte) 0xFF, (byte) 0xD8 };

As described in the research section, the stream contains data that is not actually JPEG image
data, so by finding the beginning of the JPEG we skip over the junk we don't need. Everything
from the start of the JPEG to the point where we found the boundary is the complete JPEG
image. Now that we have a new segment, we tell the listeners, one of which is AxisCamera.

With StreamParser, we have ensured that AxisCamera will only get back JPEG data, so with the
two combined, we are getting a reliable video stream (by rapidly updating JPEG images) from
the MJPEG stream.
Cockpit:

       With the video stream now covered, we have the tasks of creating a Client to connect to
the server and send commands and providing an interface to put everything together so the
user can easily use all of the components to drive the car. These two tasks are both handled in
Cockpit.java.

Cockpit goes through the process of:

    -  opening a TCP/IP socket to connect to the Server's socket
    -  creating a GUI using Swing and AWT
    -  adding AxisCamera as one of its Components (giving the user a video feed)
    -  listening to the arrow keys
    -  defining the combination of key states as an integer to send to the server
    -  sending off messages to the server as integers based on what combination of speed and
       steering is determined by the arrow keys

The implementation of this should be straightforward when looking at the Cockpit.java file. The
states and the integers sent off to the Server are defined in Table 3 below. Keys are the default
interface for controlling the car. This is done by making Cockpit implement the KeyListener
interface. We used keyPressed and keyReleased along with the keyCodes of the arrows (37, 38,
39, and 40) to listen specifically for when the arrow keys are pressed and released, i.e., we want
to pay attention both to when someone presses the right arrow and to when they release it.

Cockpit.java sets up the GUI by adding all the components (what this looks like can be seen
below). This is all done using standard Swing and AWT tools. After everything is initialized, you
will see the GUI, where you can connect. By default the program will try to connect to a camera
whose address has been hardcoded into AxisCamera. Since we were always using the same IP
camera with a static IP address, we have the program try to initialize with a video feed. If that
fails, that's okay; the GUI will load without an image. The IP address to connect to can be
changed in the Setup tab. The Connect button action calls the openSocket() method. This opens
a connection to the MJPEG stream (if there isn't already one open) and also connects to the
server on the camera.

private Boolean openSocket() {
      ...
      try {
            if (!axPanel.connected) {
                  axPanel = new AxisCamera(hostnameField.getText(),
                              mjpgStreamField.getText(), userField.getText(), passField.getText());
                  feed = new Thread(axPanel);
                  feed.start();
                  feedPane.add(axPanel);
                  feedPane.repaint();
                  Thread.sleep(3000);

            }

            controller = new Socket(axPanel.hostName, port);
            // Socket connection created
            System.out.println("Connected to: " + axPanel.hostName
                        + " -->on port: " + port + "\n'q' to close connection");
            input = new BufferedReader(new InputStreamReader(controller
                        .getInputStream()));
            output = new DataOutputStream(controller.getOutputStream());
            isConnected = true;
            button1.setText("Disconnect");
            logo.setIcon(logos[1]);
            returnee = true;
            ...

}

Figure 17: Opening a socket

Here controller is our Socket. We instantiate a new AxisCamera if necessary, and then use
the camera’s hostname and the port the server is listening on to open a Socket. We use the
Socket to create new input and output streams. The rest is just setting Booleans and text as
well as our little logo, which you’ll see in the screen shot. Now we have covered presenting an
interface and making the appropriate connections.

The current GUI looks like this:

Figure 18: The GUI
Here, all of the images are ImageIcons, which are simply updated as states change. The images
are available at http://code.google.com/p/networkrccar/downloads/list under putwClasses.zip.
These image files need to be in the same directory as the classes. The image paths are first
defined in String arrays; then the icons are created with the createImageIcon method. This is
done at the beginning of the program along with the GUI initialization.

final private String[] SPEEDPATHS = { "GUIspeed0.png", "GUIspeed1.png",
            "GUIspeed2.png", "GUIspeed3.png", "GUIspeed4.png",
            "GUIspeed5.png" };
final private String[] ARROWPATHS = { "GUILeftOff.png", "GUILeftOn.png",
            "GUIRightOff.png", "GUIRightOn.png" };
final private String[] LOGOPATHS = { "GUIlogoOff.png", "GUIlogoOn.png" };
...

protected ImageIcon[] createImageIcon(String[] path, String description) {
      ImageIcon[] icons = new ImageIcon[path.length];
      for (int i = 0; i < path.length; i++) {
            java.net.URL imgURL = getClass().getResource(path[i]);
            if (imgURL != null) {
                  icons[i] = new ImageIcon(imgURL, description);
            } else {
                  System.err.println("Couldn't find file: " + path[i]);
                  return null;
            }
      }// for

      return icons;
}

...
public void keyPressed(KeyEvent e) {

      if (isConnected) {
            key = e.getKeyCode();
            // only pay attention if a turn is not already pressed
            if (key == 37 || key == 39) {
                  if (turnKeyPressed == 0) {
                        turnKeyPressed = key;
                        sendOut();
                        if (key == 37)
                              leftInd.setIcon(directions[1]);
                        else
                              rightInd.setIcon(directions[3]);
                  }
            }

            // change speed and then update gauge
            if (key == 38 && speed != 4) {
                  speed++;
                  sendOut();
                  speedGauge.setIcon(speeds[speed]);
            }
            if (key == 40 && speed != 0) {
                  speed--;
                  sendOut();
                  speedGauge.setIcon(speeds[speed]);
            }…

Figure 19: Speed and steering images
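
The matching keyReleased method is not shown above. A sketch of its likely shape (a
hypothetical simplification of the real Cockpit.java method) is:

// Sketch: releasing the held turn key returns the car to straight;
// speed is latched, so releasing the up/down arrows changes nothing.
public void keyReleased(KeyEvent e) {
      if (isConnected) {
            int key = e.getKeyCode();
            if (key == turnKeyPressed && (key == 37 || key == 39)) {
                  turnKeyPressed = 0; //no turn key is held any more
                  sendOut();          //report the straight state
                  leftInd.setIcon(directions[0]);
                  rightInd.setIcon(directions[2]);
            }
      }
}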

Also, as you can see in these two listener methods, speed and direction are set. We still have the
functional side to take care of. As mentioned above, by implementing KeyListener and using the
methods keyPressed and keyReleased, we can get input from the user. What's left is to define
how the input should be interpreted and then send it off to the Server.

private void setState() {
      if (speed == 0) {
            if (turnKeyPressed == 0)
                  state = 0;
            if (turnKeyPressed == 37)
                  state = 1;
            if (turnKeyPressed == 39)
                  state = 2;
      }

      else if (speed == 1) {
            if (turnKeyPressed == 0)
                  state = 3;
            if (turnKeyPressed == 37)
                  state = 4;
            if (turnKeyPressed == 39)
                  state = 5;
      }

      else if (speed == 2) {
            if (turnKeyPressed == 0)
                  state = 6;
            ...
      }
      ...

      else
            state = 1111;
}

Figure 20: Setting steering and speed states

The method setState determines what integer should be assigned to state according to the
combination of speed and direction; i.e., if the speed is 1 and the left arrow is pressed, the state
is set to 4. When the arrow key is released, state is set back to 3. So both the keyPressed and
keyReleased methods call setState followed by sendOut. sendOut simply writes state as bytes
to the output stream. One thing to note about sending out our commands, or states, is the
format the Server expects: an integer followed by the terminator "\n", so every time we send
out a new state, we need to follow it with the terminator.

private void sendOut() {

                // set state, that is the binary signal corresponding to current
                // speed-direction
                setState();

                // should never happen...
                if (state == 1111) {
                      System.out.println("invalid state!");
                      closeSocket();
                      System.exit(0);
                }

                if (isConnected) {
                      try {
                            output.writeBytes(state + "\n");…


Figure 21: Sending commands to the server

        In addition to using the arrow keys, we wanted to add support for a controller so that
the user could use either the controller or the keyboard to drive the car. We found a Java API
framework for game controllers called JInput. Following the Getting Started with JInput post on
the JInput forums, we were able to get the API installed and set up for use in our application.
(endolf) We then used the sample code given in the forum to write a simple class that looks for
controllers on a computer and displays the name of the controller, the components (e.g.
buttons or sticks), and the component identifiers. The code used was exactly what was given
on the forum, so we won't reproduce it verbatim here. We plugged in a Logitech Rumblepad 2
controller and ran the program. It picked it up along with the keyboard and mice connected to
the computer. Part of the output for the Logitech controller is given in the figure below.

Logitech RumblePad 2 USB
Type: Stick
Component Count: 17
Component 0: Z Rotation
    Identifier: rz
    Component Type: Absolute Analog
Component 1: Z Axis
    Identifier: z
    Component Type: Absolute Analog
Component 2: Y Axis
    Identifier: y
    Component Type: Absolute Analog
Component 3: X Axis
    Identifier: x
    Component Type: Absolute Analog
Component 4: Hat Switch
    Identifier: pov
    Component Type: Absolute Digital
Component 5: Button 0
    Identifier: 0
    Component Type: Absolute Digital
…

Figure 22: Logitech Rumblepad information
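
For reference, a minimal discovery program along the lines of the one from the forum might
look like this (a sketch, not the forum code verbatim):

import net.java.games.input.Component;
import net.java.games.input.Controller;
import net.java.games.input.ControllerEnvironment;

// Sketch: list every controller JInput can see, with its components.
public class ListControllers {
      public static void main(String[] args) {
            Controller[] controllers =
                        ControllerEnvironment.getDefaultEnvironment().getControllers();
            for (Controller c : controllers) {
                  System.out.println(c.getName());
                  System.out.println("Type: " + c.getType());
                  Component[] comps = c.getComponents();
                  System.out.println("Component Count: " + comps.length);
                  for (int i = 0; i < comps.length; i++) {
                        System.out.println("Component " + i + ": " + comps[i].getName());
                        System.out.println("    Identifier: " + comps[i].getIdentifier());
                        System.out.println("    Analog: " + comps[i].isAnalog());
                  }
            }
      }
}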

        The controller has 17 components, 4 of which are analog and 13 of which are digital.
The 4 analog components represent the x and y directions of the 2 analog joysticks on the
controller. We wanted to set up the controller so the user controls the car using the left analog
joystick. Therefore, we had to determine which joystick had which components. This was
accomplished using the program to poll the controllers given in the Getting Started with JInput
forum. It was modified to display the component name and identifier if the component was
analog. Running the program and moving the analog joystick that we wanted to use for
controlling the car showed that the two components were Component 2: Y Axis and
Component 3: X Axis.

        The next step was to implement the controller in our program. First we created a new
class called Control that implements the Runnable interface. This was done so that it could be
added as a thread to the existing program. Within the run method, the array of controllers
connected to the computer is retrieved, and then the program looks for a controller with the
name "Logitech RumblePad 2 USB". If one is found, it is saved as the controller to use;
otherwise no controller is used. Thus the program only works with Logitech RumblePad 2 USB
controllers.

        Next the program enters a while loop that continuously polls the controller for input
until the controller becomes disconnected. The code is shown in the following figure.


while (true) {

      pad.poll();
      EventQueue queue = pad.getEventQueue();

      Event event = new Event();

      while (queue.getNextEvent(event)) {
            Component comp = event.getComponent();
            String id = comp.getIdentifier().toString();
            float value = event.getValue();
            if (comp.isAnalog()
                        && (id.equals("x") || id.equals("y"))) {
                  //System.out.println(value);
                  controllerCommand(id, value);
            }
      }

      try {
            Thread.sleep(10);
      } catch (InterruptedException e) {
            e.printStackTrace();
      }

      if (!hasController)
            return;
      }

}

Figure 23: Getting controller events

       After polling the controller and getting the event queue, the program loops through the
queue and checks if an event is caused by a component that is part of the joystick being used to
control the car. If it is, the command gets sent to the method controllerCommand, which
processes the command. After looping through the queue, the thread's execution is halted for
10 ms and then execution continues. At the very bottom is an if statement that returns from
the run method, thus terminating the thread, if the controller is no longer being used.

       Now we can take a look at the controllerCommand method. The code is shown in the
following figure.

public void controllerCommand(String direction, float value) {
      if (isConnected) {
            //steering
            if (direction.equals("x")) {
                  //full left
                  if (value < -0.75) {
                        steer = 0;
                  }
                  else if (value < -0.5) {
                        steer = 1;
                  }
                  else if (value < -0.25) {
                        steer = 2;
                  }
                  else if (value < 0.25) {
                        steer = 3;
                  }
                  else if (value < 0.5) {
                        steer = 4;
                  }
                  else if (value < 0.75) {
                        steer = 5;
                  }
                  else {
                        steer = 6;
                  }

                  if (steer != prevSteer) {
                        sendOut();
                        prevSteer = steer;
                  }
            }

            //moving forward
            else if (direction.equals("y")) {
                  //stopped
                  if (value > -0.2) {
                        speed = 0;
                  }
                  else if (value > -0.4) {
                        speed = 1;
                  }
                  else if (value > -0.6) {
                        speed = 2;
                  }
                  else if (value > -0.8) {
                        speed = 3;
                  }
                  else {
                        speed = 4;
                  }
                  if (prevSpeed != speed) {
                        prevSpeed = speed;
                        sendOut();
                  }
            }
      }

}

Figure 24: Processing a command from the controller

        First, the method checks if the program is connected to the camera, which is
represented by the isConnected variable. If it is connected, the command is processed and the
steering or speed variables are updated accordingly. The variables prevSpeed and prevSteer
keep track of the previous values of steer and speed. If nothing has changed, then nothing is
sent to the camera. If something has changed, then the sendOut method is called. Initially, the
steer variable had only three possible values because there were only three possible steering
states: left, straight, and right. This made the car very hard to control with the controller,
however, so more states were added. This required a redesign of the hardware as well. The
changes are described in the Camera to Microprocessor and Microprocessor to Car sections
below. One thing to note is that the speed increases as the analog stick moves in the negative y
direction. This is because the negative y direction is actually the up direction of the joystick.

        The sendOut method sends the commands to the camera and updates the GUI. The
code is given below.

private void sendOut() {
      if (steer == 3) {
            leftInd.setIcon(directions[0]);
            rightInd.setIcon(directions[2]);
      }
      else if (steer < 3) {
            leftInd.setIcon(directions[1]);
      }
      else if (steer > 3) {
            rightInd.setIcon(directions[3]);
      }
      speedGauge.setIcon(speeds[speed]);
      sc.sendOut(steer, speed);
}

Figure 25: Sending commands to the camera and updating the GUI

        The method first updates the icons on the GUI to reflect the changed state and then
calls the sendOut method of sc, where sc is an object of the SendCommand class. The
SendCommand class sends the correct value to the camera. The sendOut and setState methods
are given below.

public void sendOut(int st, int sp) {
      speed = sp;
      steer = st;

      setState();

      try {
            System.out.println(state);
            output.writeBytes(state + "\n");
      }
      catch (IOException e) {
            System.out.println("Failed to send command: " + e);
      }
}

private void setState() {
      state = (speed << 3) + steer;
}

Figure 26: Sending the state to the camera

         First, the variable state is calculated from the values of speed and steer. Basically, speed
is bit shifted so that it becomes the highest three bits and steer is the lowest three bits of state.
Next, state is written to DataOutputStream output, which sends the integer to the camera.
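
As a quick worked example of this encoding and the matching decode the server performs (see
Figure 35 below):

// Encoding on the client: high three bits = speed, low three = steer.
int speed = 4, steer = 2;
int state = (speed << 3) + steer;          //100010 binary = 34
// Decoding on the server:
int speedBack = state >> 3;                //100 binary = 4
int steerBack = state - (speedBack << 3);  //010 binary = 2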

       Initially, the commands were sent to the camera from within the Cockpit class, but this
had to be abstracted into the SendCommand class so that a controller could be used. Since the
controller ran in a separate thread, it also had to be able to send commands; doing that directly
would have required a Cockpit object, which would have created a whole new instance of the
program. Creating the SendCommand class eliminated this problem.

       The Cockpit class contains a Control object and a SendCommand object. It has a button
for enabling/disabling the controller. The code for the button is given below.

else if (e.getActionCommand().equals("toggleController")) {
      if (cont != null && cont.isAlive()) { //controller connected
            con.removeController();
            while (cont.isAlive()) {} // wait for the thread to terminate
            useConButton.setText("Use Controller");
      }
      else { //controller not connected
            cont = new Thread(con);
            cont.start();
            useConButton.setText("Remove Controller");
      }
}

Figure 27: Controller button

       The code removes the controller if there is one or enables one if there isn’t a controller.
The variable con is a Control that gets initialized on startup and is used within the controller
threads.

        The Cockpit class implements the ControllerListener interface so that it can detect
when controllers are plugged in or unplugged and act accordingly. If a controller is plugged in,
a new instance of the Control class is created and a new thread is started. If a controller is
disconnected, the thread is terminated. The instance of SendCommand, sc, is initialized
when Cockpit connects with the camera. It gets passed to the Control object, con, which uses it
to send commands to the camera. In this way, both the keyboard and the controller can be
used at the same time to control the car without sending conflicting commands.
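
The two methods that ControllerListener requires might be sketched as follows (a hypothetical
simplification; the constructor shown for Control is assumed, and our real handlers do slightly
more bookkeeping):

// Sketch: react to controllers arriving and leaving. The fields con,
// cont, and sc are the Control object, its thread, and the SendCommand
// instance described above.
public void controllerAdded(ControllerEvent e) {
      con = new Control(sc); //assumed constructor taking a SendCommand
      cont = new Thread(con);
      cont.start();          //begin polling the new controller
}

public void controllerRemoved(ControllerEvent e) {
      con.removeController(); //asks the polling loop in run() to return
}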

Camera to Microprocessor
        This section covers the interaction between the camera and microprocessor. The
camera needs to send signals to the microprocessor using its output. The output is activated in
different sequences based on the commands received from the user’s computer. The
microprocessor then interprets the sequence so it can create the correct PWM signal. This
section will be broken into two parts: Camera and Microprocessor. The Camera section will
focus on how the camera is able to output the correct signal sequence based on the command
received by the user. The Microprocessor section describes how the signals are received and
interpreted before changing the PWM signal that’s sent to the RC car’s control boxes.

Camera
        The camera uses a program called iod to activate the output. This was described in the
Axis scripting guide. It works in the following way: iod is called with the argument “–script”,
followed by the output/input identifier, followed by a colon and a forward slash to activate or a
backslash to deactivate the output or input. In our case, we wanted to activate/deactivate the
output of the camera, which has the identifier of “1”. Thus, the command would look like
“$ ./iod -script 1:/” to activate the output or “$ ./iod -script 1:\” to deactivate the output. The
program is designed to be run from a script on the camera, as can be seen by the “-script”
argument. We needed to activate the output from within our own program, however, so we
ended up using the system command execl() to call the iod function. We implemented the
following code based on the signals generated as specified in Tables 1 and 2 above. The
program was just a test to measure how fast we could activate the output.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>

/*
 * testExecl.c
 *
 * Created on: Apr 1, 2009
 *      Author: bigdog
 */

int main(int argc, char *argv[]) {

      int i;

      pid_t pid;

      struct timeval start, stop;
      for (i = 0; i < 60; i++) {
            gettimeofday(&start, 0);
            pid = vfork();
            if (pid == 0) /* child process: fork succeeded */
            {
                  if (i % 2 == 0) //turn on output
                        execl("/bin/iod", "-script", "iod", "1:/", NULL);
                  else //turn off output
                        execl("/bin/iod", "-script", "iod", "1:\\", NULL);
            }
            else if (pid < 0) /* fork returns -1 on failure */
            {
                  perror("fork"); /* display error message */
            }
            else {
                  wait(NULL);
                  gettimeofday(&stop, 0);
                  long difsec = stop.tv_sec - start.tv_sec;
                  long dif = stop.tv_usec - start.tv_usec;
                  printf("sec:%ld usec:%ld \n", difsec, dif);
            }
      }

      return 1;
}

Figure 28: Activating the output using iod

        As can be seen, we create a for loop that activates and then deactivates the output 30
times each (looping a total of 60 times). First, we grab the current time. Then, a process is
forked using the system call vfork(), which makes a copy of the current process, before calling
execl(). This is because execl() takes over the calling process, so by forking first, only the
forked child is consumed and our loop continues. The if statements are used to perform
different commands based on the process. If the process has a pid of 0, then we know that it is
the forked child and we either activate or deactivate the output. If pid is less than 0, an error
has occurred and an error message is displayed. If neither of those conditions is true, we know
that it must be the main process. In this case, we wait for the forked process to return, get the
current time again, and calculate and display the difference between the start and end times.

       We compiled and ran the program on the camera to see how fast we could activate the
output. The results were very surprising. The camera was not able to output the signals in
increments of 10 ms consistently. Most of the signals generated were significantly longer than
10 ms, varying between 10 and 50 ms. This was before we even started streaming video. With
video streaming from the camera to a computer via the preview webpage of the camera, the
signal generation took even longer and with more variation. We were getting signals varying in
length between 50 ms and 200 ms. This was much slower than anticipated and wouldn't be
satisfactory for controlling the car. The large variation also made the signals impossible to time
using a microprocessor. We needed a way to speed up the signals and get a consistent signal.
This was achieved by examining the contents of the program iod.

       We tried multiple times to contact the camera manufacturer, Axis, to obtain information
about how to trigger the camera’s output without having to call the program iod. They were
very unresponsive, sending us to the FAQ section of the Axis website, which doesn’t contain
any information regarding the output of the camera. We were wasting a lot of time trying to
contact them with no results, so we eventually gave up and decided to try to figure out how the
output works on our own. iod is a compiled program and we didn’t have access to the source
code for it, so we knew that we needed to figure it out using reverse engineering.

        First, we searched the Internet using Google for the program iod to see if we could
obtain any information about how it worked. That didn't provide any answers, so we decided
to see if we could find out how electronic device outputs are activated in Linux. A version of
Linux is what runs on the Axis 207W camera we were using, so we figured that if we found out
how to activate an output for something else, we could apply the concept here. An online search
for how to activate an output provided some useful information. The system call ioctl()
seemed to come up a lot for triggering hardware components, such as outputs. Further
investigation revealed that ioctl() works by taking three arguments: a file descriptor (an
integer used to reference a file) that references a "file" representing the hardware
component to be controlled, an integer representing the command to perform, and a bit
sequence representing what will be written to the file. The way it was used varied quite a bit
between different hardware components, so it was unclear how it could be used for the Axis
207W camera. Further searching didn't reveal much information either. Eventually, we
decided to go to the source and examine the program iod to determine if and how it uses ioctl().
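
In other words, the general shape of such a call is as follows (a generic illustration only; the
device path and request value are placeholders, not the 207W's real ones):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
      /* fd references the device "file" to be controlled */
      int fd = open("/dev/somedevice", O_RDWR);
      if (fd < 0)
            return 1;
      /* the second argument encodes the command; the third carries the
         data (here a bitmask) that the driver interprets */
      ioctl(fd, 0x2, 0x1);
      close(fd);
      return 0;
}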

        Initially, we tried writing and compiling a program that used ioctl() and then doing a hex
dump of the compiled program to find the hexadecimal sequence that represents ioctl(). Next,
we performed a hex dump on the compiled iod program and searched for the ioctl() hex
sequence obtained from the hex dump of our sample program. If we could find the hex
sequence, then maybe we could determine the hex representations of the arguments to ioctl().
The hex representation of the arguments could be passed directly to the ioctl() system call
without worrying what they actually represented. We were able to find part of the hex
sequence, but we weren't able to determine whether it represented the ioctl() function
because it wasn't a complete match. One of the major problems was that the arguments to
ioctl() could be anything because they were operating system dependent. Thus, the example
arguments we used in our sample program could be completely different from the ones used in
iod. The potential effect that different arguments would have on the compiled ioctl() call was
unknown as well. Also, iod could have been compiled using a compiler different from the one
we were using, thus potentially altering the hex representation of ioctl() from the one we
obtained from our own compiled program. Trying to find ioctl() and determine what the
arguments were from the hex dump of iod therefore became extremely difficult. A different
approach was needed.

       We decided to take a look at the compiled iod program itself. Using the Open Office word
processor, we examined the contents of the iod program. Most of the content shown in Open
Office was compiled machine code, so it looked like garbage, but we were able to glean some
information about how the program works from the text within it. The text from strings within
the program doesn't change during compilation, so it showed up directly as written in Open
Office. Figure 29 below shows some of the output obtained from iod. A lot of information about
outputs/inputs, iod usage, and system variables showed up in the text output. As can be seen,
several interesting things are present, including variables named IO_SETGET_INPUT and
IO_SETGET_OUTPUT and references to I/O devices. This seemed like exactly what we
needed for controlling the output.




Figure 29: Inside iod
         What next? Back to Google. A search for "IO_SETGET_OUTPUT axis 207w" resulted in
several example programs that used it as an argument to the ioctl() function. One in particular
was very useful: a file from the Linux Cross Reference section called etraxgpio.h. [1] An
excerpt from the file is shown below. The file contains definitions of variables used for the Etrax
processor. Etrax processors are used in the Axis cameras that support the Cris cross compiler.
This is different from what is used on the Axis 207W, but since it was the same company, it was
likely that the variables would be at least similar, if not the same.
/*
 * For ETRAX FS (ARCH_V32):
 * /dev/gpioa minor 0,  8 bit GPIO, each bit can change direction
 * /dev/gpiob minor 1, 18 bit GPIO, each bit can change direction
 * /dev/gpioc minor 2, 18 bit GPIO, each bit can change direction
 * /dev/gpiod minor 3, 18 bit GPIO, each bit can change direction
 * /dev/gpioe minor 4, 18 bit GPIO, each bit can change direction
 * /dev/leds  minor 5, Access to leds depending on kernelconfig
 */
/* supported ioctl _IOC_NR's */
#define IO_SETBITS       0x2  /* set the bits marked by 1 in the argument */
#define IO_CLRBITS       0x3  /* clear the bits marked by 1 in the argument */
#define IO_SETGET_OUTPUT 0x13 /* bits set in *arg is set to output,
                               * *arg updated with current output pins.
                               */
Figure 30: Example code showing definitions of IO variables

         One of the most interesting things about this file is the mention of the gpio files
located in the /dev directory. An examination of the /dev directory on the Axis 207W
revealed several gpio files. These were the files that the file descriptors passed to ioctl() were
referencing. The only problem was determining which one was used to control the output on
the camera. Other interesting things were the variables IO_SETBITS and IO_CLRBITS, which
potentially could be what ioctl() uses to activate and deactivate the camera output.
Also, IO_SETGET_OUTPUT shows up, and it looks like it is used to determine which pins act as
outputs. Now, how does all of this fit together?

       Searching online for "gpio ioctl" led to the discovery of an example program
that uses ioctl() and gpio to control hardware output and input. (Hodge) The following lines of
code are taken from the program and seem to be a good example of how the camera output
can be controlled. Once again, this is for an Etrax camera, but the commands for our camera
would probably be very similar since both cameras are made by Axis.

if ((fd = open("/dev/gpioa", O_RDWR)) < 0)
{
      perror("open");
      exit(1);
}
else
{
      changeBits(fd, bit, setBit);
}


void changeBits(int fd, int bit, int setBit)
{
      int bitmap = 0;

      bitmap = 1 << bit;

      if (setBit)
      {
            ioctl(fd, _IO(ETRAXGPIO_IOCTYPE, IO_SETBITS), bitmap);
      }
      else
      {
            ioctl(fd, _IO(ETRAXGPIO_IOCTYPE, IO_CLRBITS), bitmap);
      }

}

Figure 31: Example code for using ioctl()

       First, the file "/dev/gpioa" is opened and then the ioctl() function is called. It gets
passed the file descriptor for "/dev/gpioa" and then a system macro called _IO() that takes
two arguments, the last of which looks familiar. The IO_SETBITS and IO_CLRBITS variables were
defined in the header file etraxgpio.h, as shown in Figure 30 above. Finally, there is a bit
sequence that has a one at the location where the bit is supposed to be cleared or set; only
one bit location in the bitmap contains a one. This looks like exactly what we need to
control the output, with some minor modifications to fit our camera. We aren't using an Etrax
camera, so the first argument to _IO(), ETRAXGPIO_IOCTYPE, will be different. At another
location in the program that is partially shown above, GPIO_IOCTYPE was used instead of
ETRAXGPIO_IOCTYPE. This may be what we need. All that is left is to determine which gpio file
controls the output and what the arguments to ioctl() should be.

        A test program was written to determine the gpio file and bitmap used to control the
output on the Axis 207W. It was assumed that _IO(GPIO_IOCTYPE, IO_CLRBITS) was the correct
second argument of ioctl() to deactivate the output and _IO(GPIO_IOCTYPE, IO_SETBITS) was
the correct second argument to activate the output. One problem, however, was that the
variables GPIO_IOCTYPE, IO_SETGET_OUTPUT, IO_SETBITS, and IO_CLRBITS were undefined.
We had no idea what header file we needed to include or where it was located. We found the
Linux program 'beagle', which is used to index files and then search within them for a keyword.
We knew that the header file had to be located in the cross-compiler for the Axis 207W. Using
beagle, the entire cross-compiler folder was indexed and searched for "IO_SETBITS". This took
several hours, but eventually the file "gpiodriver.h" was found (see the last "#include"
statement in Figure 32). It contained the definitions of the four variables shown above. Now
we were ready to test.

        The test program opened all of the gpio files it could and looped through them with
different bit sequences (the third argument to ioctl()) in order to determine the correct
sequence for activating the output of the camera. A few gpio files were scrapped right away
because they couldn't be opened. Others caused a hardware error when trying to use ioctl()
with them. The code is shown below in Figure 32.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>

/*
 * testExecl.c
 *
 * Created on: Apr 1, 2009
 *      Author: bigdog
 */

int main(int argc, char *argv[]) {
      int l = 0;
      int a = 1;
      /* args for IO_SETGET_OUTPUT; updated by ioctl() with the current
         output pins */
      int intb = 0, intc = 0, intg = 0;

      printf("before a");
      int fda = open("/dev/gpioa", O_RDWR);
      printf("before b");
      int fdb = open("/dev/gpiob", O_RDWR);
      printf("before c");
      int fdc = open("/dev/gpioc", O_RDWR);
      printf("before g");
      int fdg = open("/dev/gpiog", O_RDWR);

      if (fda < 0)
            printf("Hello world a\n");
      else {
            printf("inside a\n");
      }

      if (fdb < 0)
            printf("Hello world b\n");
      else {
            printf("inside b\n");
            ioctl(fdb, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intb);
      }

      if (fdc < 0)
            printf("Hello world c");
      else {
            printf("inside c");
            ioctl(fdc, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intc);
      }

      if (fdg < 0)
            printf("Hello world g\n");
      else {
            printf("inside g\n");
            ioctl(fdg, _IO(GPIO_IOCTYPE, IO_SETGET_OUTPUT), &intg);
      }
      while (l < 1000) {

            printf("gpioa %d\n", l);
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            sleep(5);
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            sleep(5);
            printf("gpiob %d\n", l);
            ioctl(fdb, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            sleep(5);
            ioctl(fdb, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            sleep(5);
            printf("gpioc %d\n", l);
            ioctl(fdc, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            sleep(5);
            ioctl(fdc, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            sleep(5);
            printf("gpiog %d\n", l);
            ioctl(fdg, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            sleep(5);
            ioctl(fdg, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            sleep(5);
            a = a << 1;
            l++; /* advance the loop counter */
      }

      close(fda);
      close(fdb);
      close(fdc);
      close(fdg);
      return 0;

}
Figure 32: Test program to figure out how to use ioctl() with the Axis 207W camera

         The program shown above is what eventually ran without causing any errors.
First, gpioa, gpiob, gpioc, and gpiog were opened and given the file descriptors fda, fdb, fdc,
and fdg respectively. If a file failed to open, the message "Hello world *" was printed;
otherwise "inside *" was printed, where * is either a, b, c, or g representing the corresponding
gpio* file. Then, we went into a loop that tried different bitmasks (represented by the
variable 'a') within ioctl() and printed which gpio file was being tested and which loop iteration
was currently being processed. We started with a = 1 and bit shifted 'a' left by 1 each iteration.
Thus the final bit sequence would have the form 1* where * represents an undetermined number of
zeros. The single '1' in the bitmap corresponds with what was used in the example program
found online and shown in Figure 31 above. A five second pause was put between calls to
ioctl() so that we could see exactly when the output got activated. We hooked an oscilloscope
up to the camera output so that we could see when the output became activated. As soon as
the camera output became active, we could look at the output printed on the command line to
see which gpio file and loop iteration the program was on. Using this strategy, and with a little
patience, the camera output eventually became activated! Examining the command line
output, we determined that the file used for controlling the camera output was gpioa and the
bitmap was 1000 (binary), i.e., 1 << 3. We could now control the camera output within our own program.
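
Distilled down, controlling the 207W's output therefore takes only a few lines, as in this
condensed sketch of what the test programs do (gpiodriver.h is the header found with beagle,
described above):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <asm/arch/gpiodriver.h> /* defines GPIO_IOCTYPE, IO_SETBITS, IO_CLRBITS */

int main(void) {
      int fda = open("/dev/gpioa", O_RDWR); /* the gpio file we identified */
      if (fda < 0)
            return 1;
      int a = 1 << 3;                               /* bitmap 1000 binary */
      ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a); /* activate the output */
      sleep(1);
      ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a); /* deactivate the output */
      close(fda);
      return 0;
}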

       We still needed to determine if ioctl() would be fast enough to control the output the
way we needed. To test this, we implemented another program that activated and deactivated
the camera output as fast as possible using ioctl(). The code is shown in Figure 33 below.

/*
 * outputSpeedTest.c
 *
 * Created on: May 9, 2009
 *      Author: bigdog
 */

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
//these are for the alarm output
#include <fcntl.h>
#include <sys/ioctl.h>
#include <asm/arch/gpiodriver.h>
#include <time.h>
#include <sys/time.h>

int main(int argc, char *argv[]) {

      int i, fda;
      struct timeval start, stop;
      //printf("COMMAND RECEIVED: %d\n", state);
      int a = 1 << 3;

      //six signals sent
      //low order bits sent first
      //e.g. if 010110=22 is sent, then the output is 011010
      fda = open("/dev/gpioa", O_RDWR);
      for (i = 0; i < 1000000; i++) {
            gettimeofday(&start, 0);
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            gettimeofday(&stop, 0);
            long difsec = stop.tv_sec - start.tv_sec;
            long dif = stop.tv_usec - start.tv_usec;
            printf("sec:%ld usec:%ld \n", difsec, dif);
      }
      //set the output low at the end
      ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
      return 1;
}

Figure 33: Test program to measure the speed at which ioctl can trigger the camera's output

        The program activates and then deactivates the camera's output a million times,
measuring and displaying the time each iteration takes. We also used an oscilloscope to
measure the period of the signal and look at the waveform created by the output. Although
there was some variation in timing, we determined the output could always be activated and
deactivated in less than one millisecond. Next, we tested it while streaming video to a PC. The
results were identical: streaming video had no effect on the speed at which the output could be
activated using ioctl(). This was much faster than we had originally planned for, thus allowing
us to make some speed improvements.

       The ioctl() commands for controlling the output were added to the server code next.
The server activates the output in a function called pulse(). Figure 34 shows pulse().

void pulse(int state) {
      int i;
      printf("COMMAND RECEIVED: %d\n", state);
      int high, low;
      for (i = 0; i < state; i++) {

            ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            usleep(1);

      }
      //after the right amount of pulses have been sent,
      //sleep for 20 ms so the microprocessor knows the end of the signal
      //printf("%s","END_");
      high = ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
      usleep(20000);
      low = ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
}
Figure 34: Initial code to trigger the camera's output within the server

        The program takes in an integer representing the number of pulses to be sent to the
microprocessor. Next, the program executes a for loop that sends the required number of
signals to the microprocessor. A call to usleep() had to be added after deactivating the output
because the microprocessor wouldn't register the signal as going low if the output was
reactivated too quickly after being deactivated. Thus, every time the signal went low, we
paused before sending the next signal. We only slept for 1 microsecond, but the usleep() system
call takes around 10 ms on average to run because the operating system has to process the call.
If other programs are running at the same time, the delay could be longer. We measured the
time in a test program and it seemed to stay consistently at about 10 ms. The final signal sent
was at least 20 ms in length. It marked the end of the signal sequence and was easily
distinguishable from the other signals because they would never reach 20 ms in length.
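
The usleep() measurement itself used the same gettimeofday() pattern as the earlier tests; a
minimal sketch of such a timing test (a reconstruction, not our exact program) is:

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void) {
      struct timeval start, stop;
      int i;
      /* ask for a 1 us sleep and measure how long the call really takes;
         on the camera this came out to roughly 10 ms per call */
      for (i = 0; i < 100; i++) {
            gettimeofday(&start, 0);
            usleep(1);
            gettimeofday(&stop, 0);
            long usec = (stop.tv_sec - start.tv_sec) * 1000000
                      + (stop.tv_usec - start.tv_usec);
            printf("usleep(1) took %ld usec\n", usec);
      }
      return 0;
}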

         The total length of the signal sequence varied between 20 ms (just the end signal,
representing straight and no speed) and about 160 ms (representing full speed and turning
right). See Table 3 below for a reference of the number of signals vs. RC car speed and direction.
In the steering column, -1 represents turning left, 0 going straight, and 1 turning right. Speed
increases from 0 (stopped) to 4 (full speed).

Table 3: Number of signals sent and the corresponding speed and direction of the RC car

                                 # of signals         Speed          Steering

                                       0                0                0
                                       1                0               -1
                                       2                0                1
                                       3                1                0
                                       4                1               -1
                                       5                1                1
                                       6                2                0
                                       7                2               -1
                                       8                2                1
                                       9                3                0
                                      10                3               -1
                                      11                3                1
                                      12                4                0
                                      13                4               -1
                                      14                4                1


        The signal sequence takes over a tenth of a second at speeds 3 and 4. This is noticeable,
especially when trying to steer the RC car. We thought about reversing the signal sequences so
that the faster speeds were represented by shorter signal sequences; the car would then be
much more responsive at higher speeds, making it easier to steer. We didn't implement that
idea, however. Instead, we changed the scheme so that a sequence of six bits determines the
speed and direction.

       Initially, we wanted to loop through the bits in the sequence number and activate the
camera's output if the current bit was a one or deactivate the output if it was a zero. Each
output would be followed by a 10 ms pause (we'll call it a pulse). The microprocessor would
wait 10 ms and then read whether the output on the camera was active or not. If it was active,
it would count the bit as a one; if not, it would take it as a zero. In this way it would
reconstruct the number representing the command sent to the server by the user. For
example, if the user sent 34 (100010 in binary), the camera's output would be off for one pulse,
on for one, off for three more, and on for the final one (the pulses are sent starting with the
lowest order bits). The microprocessor would determine that 34 was sent from the pulses it
receives from the camera's output. The problem was that we weren't able to get a consistent
10 ms pause. We tried using usleep(), but it wasn't accurate enough. We thought about making
the pause longer (e.g. 20 ms), but that would slow the overall signal sequence time down
considerably if we were to increase it enough that the variation in usleep() didn't matter.
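
For reference, the abandoned bit-by-bit scheme would have looked roughly like the following
sketch (it reuses the fda file descriptor and bitmap a from the server code, and was never part
of the final server):

/* Sketch: walk the six bits of state, lowest order first, driving the
   output high for a one and low for a zero, one 10 ms slot per bit.
   In practice usleep() was too imprecise for the slots to be reliable. */
void pulseBits(int state) {
      int bit;
      for (bit = 0; bit < 6; bit++) {
            if ((state >> bit) & 1)
                  ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            else
                  ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            usleep(10000); /* nominal 10 ms slot */
      }
      ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a); /* idle low afterwards */
}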

        Thus, we decided to split the six bit sequence so that the 3 highest bits determine the
speed and the 3 lowest bits determine the direction. The "sequence" is sent as an integer from
the client to the server, where the server parses the integer to separate the three highest bits
from the three lowest bits. The highest bits are stored as one integer and the lowest bits are
stored as another. For example, if the number 34 (100010 in binary) is sent to the server, it is
parsed as 4 (100 in binary) for speed and 2 (010 in binary) for steering. The code for our
program is shown in Figure 35 below.

void pulse(int state) {
      int i, j;
      printf("COMMAND RECEIVED: %d\n", state);
      int speed, steer;

      speed = state >> 3;           //high order bits
      steer = state - (speed << 3); //low order bits


      for (i = 0; i < steer; i++) { //steering bits
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            for (j = 0; j < 5000; j++) {}  //pause
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            for (j = 0; j < 10000; j++) {} //pause
      }


      ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
      usleep(5000); //sleep for at least 5 ms to mark end of steering sequence
      ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
      for (j = 0; j < 10000; j++) {} //pause


      for (i = 0; i < speed; i++) { //speed bits
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
            for (j = 0; j < 5000; j++) {}  //pause
            ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
            for (j = 0; j < 10000; j++) {} //pause
      }


      ioctl(fda, _IO(GPIO_IOCTYPE, IO_SETBITS), a);
      usleep(20000); //sleep for at least 20 ms to mark end of speed sequence
      ioctl(fda, _IO(GPIO_IOCTYPE, IO_CLRBITS), a);
      for (j = 0; j < 10000; j++) {} //pause
}

Figure 35: Six bit signal sequence code

        First, the values for speed and steering are calculated, where speed is the 3 highest bits
of the variable "state" and steer is the 3 lowest bits of "state". Next, the output is activated
"steer" times. This represents the direction the car will be traveling. For loops were used as
pauses instead of usleep() because the length of a for loop pause can be determined more
precisely. In our case, we only need to pause long enough for the microprocessor to register
that the output has been activated or deactivated. If we use usleep(), we will be waiting about
10 ms minimum, as discussed earlier. If we use a for loop, that time can be cut down
considerably. For example, in the program above, the for loop pause of 10000 iterations takes
less than 2 ms. After sending the appropriate number of signals to represent steering, a pause
of at least 5 ms is inserted to mark the end of the steering sequence. Since the for loop pauses
never reach 5 ms, the microprocessor can distinguish the terminating signal from the rest.
Next, the number of signals representing the speed is sent, followed by a pause of at least
20 ms. The usleep(5000) pause shouldn't take much longer than 15 ms (5 ms for the pause and
10 ms for the time taken to process the usleep() call), so the microprocessor can distinguish
between the end of the first sequence and the end of the second sequence based on the length
of the terminating signal. This provided a considerable drop in signal sequence length
compared to the old method of pausing with usleep() after each signal and having a
terminating signal of 20 ms. It also allows for additional speed and steering states without
incurring much of an increase in the total signal sequence length. See the table below (Table 4)
for the updated signal combinations and lengths.

Table 4: Steering and Speed signal sequences sent and the corresponding time taken to send the signals

                           # of Speed           # of Steering        Approx. Total
                             Signals               Signals          Signal Sequence
                                                                      Length (ms)

                                0                     0                    45
                                0                     1                    47
                                0                     2                    49
                                0                     3                    51
                                0                     4                    53
                                0                     5                    55
                                0                     6                    57
                                1                     0                    47
                                1                     1                    49
                                1                     2                    51
                                1                     3                    53
                                1                     4                    55
                                1                     5                    57
                                1                     6                    59
                                2                     0                    49
                                2                     1                    51
                                2                     2                    53
                                2                     3                    55
                                2                     4                    57
                                2                     5                    59
                                2                     6                    61
                                3                     0                    51
                                3                     1                    53
                                3                     2                    55
                                3                     3                    57
                                3                     4                    59
                                3                     5                    61
                                3                     6                    63
                                4                     0                    53
                                4                     1                    55
                                4                     2                    57
                                4                     3                    59
                                4                     4                    61
                                4                     5                    63
                                4                     6                    65


         As can be seen, there are many more states than in Table 3. The longest duration is only
about 65 ms, however, which is nearly 100 ms shorter than the roughly 160 ms of the longest
signal sequence from Table 3. More speed signals represent faster speeds; thus 0 is no speed
and 4 is full speed. For steering, 0 represents full left, 3 represents straight, and 6 represents
full right.

        This concludes the Camera side of the Camera to Microprocessor connection. Now we
turn to the Microprocessor side.



Microprocessor


         Most of the microprocessor code was originally written for and tested on the Dragon12
development board, but it needed to be ported to the Dragonfly12 microprocessor, which is what
is used on the RC car while it is driving. That was much more difficult than expected; the
procedure in its entirety can be found in Appendix B. The code in this section is what was put
on the Dragonfly12; it has only minor modifications from what was developed on the Dragon12
development board.

      The microprocessor first had to be set up to be able to receive the signals from the
camera. The following code performs this step.

void init_Timer(void) {

  //uses PT0 for input
  TIOS = 0x00;  //input capture on all channels (including PT0)
  TCTL4 = 0x03; //input capture on both rising and falling edges (PT0)
  TCTL3 = 0x00; //clear input control for control logic on other channels
  TIE = 0x01;   //enable interrupt for PT0
  TSCR2 = 0x07; //set prescaler value to 128 (timer freq = 375 KHz)
  TSCR1 = 0x90; //enables timer/fast flag clear

}

Figure 36: Timer register initialization

        It is well commented, so there isn't much need to go into detail about what it does. The
main things to notice are that port PT0 is used to capture the input signals from the camera's
output, and that it captures on both rising and falling edges. Thus, since the interrupt is
enabled for PT0, an interrupt will be triggered both when the camera's output is activated and
when it is deactivated. The last thing to notice is that the prescaler value is set to 128. This
means that the timer frequency is equal to the bus frequency divided by 128. This is one of the
cases where the Dragonfly12 differs slightly from the Dragon12 development board: the
Dragon12 board has a bus frequency of 24 MHz, while the Dragonfly12 has a bus frequency of
48 MHz. We want the timer clock frequency to be 375 KHz, so we divide 48 MHz by 128 to get
375 KHz (divide by 64 when using the Dragon12 development board). The reason for 375 KHz is
that the timer runs a 16 bit counter that increments by one every cycle. We want this counter to
increment as slowly as possible because it is used to time the signals sent by the camera's
output. If it increments too quickly, it may overflow more than once while measuring a signal,
which would make it impossible to determine the signal's length. At 375 KHz the 16 bit counter
overflows about 375,000/65,536 ≈ 6 times a second. The microprocessor has a bit that gets set
on a timer overflow, so we can accurately measure about twice as much time, or up to roughly
1/3 of a second. This is much more time than we need for any of our signals, but the extra
margin for error doesn't hurt.
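
        To make the margin concrete, the arithmetic from this paragraph can be checked with a
few lines of host-side C (the constants come from the discussion above; the program is purely
illustrative):

#include <stdio.h>

int main(void) {
    const double timer_hz = 375000.0;       /* 48 MHz bus / 128 prescaler   */
    const double counts   = 65536.0;        /* 16 bit counter range         */
    double overflow_s = counts / timer_hz;  /* time per overflow, ~0.175 s  */
    /* the overflow flag lets one extra wrap be spanned: ~0.35 s measurable */
    printf("overflows per second: %.1f\n", timer_hz / counts);   /* ~5.7  */
    printf("max measurable time : %.2f s\n", 2.0 * overflow_s);  /* ~0.35 */
    return 0;
}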

        The following code is what was running on the microprocessor to measure the signals
sent to it from the camera's output, based on the initial implementation of the output signals
given in the Camera section above.

#include <mc9s12c32.h>

unsigned int count = 0;
unsigned int length = 0;
unsigned int begin = 0;
byte total = 0;

interrupt 8 void TOC0_Int(void){
  if(PTT_PTT0) { //rising edge
    begin = TC0;
  } else { //falling edge
    //length = time between rising edge and falling edge in milliseconds
    length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
    TFLG2_TOF &= 1; //clear overflow bit

    //terminating signal (length greater than 18 ms)
    if(length > 18) {
      if(count % 3 == 0) {        //not turning
        PWMDTY2 = 15;
      } else if(count % 3 == 1) { //turn left
        PWMDTY2 = 11;
      } else {                    //turn right
        PWMDTY2 = 19;
      }
      PWMDTY1 = count/3 + 153;
      count = 0;
    } else { //not the end of the signal sequence
      count++;
    }

    begin = 0;
  }
}

Figure 37: Microprocessor code to interpret the signals from the camera's output (initial design)

         This method gets called every time there is an interrupt on channel PT0, which is
determined by the “interrupt 8” at the beginning of the method header. First we determine
whether the interrupt was caused by a rising or a falling edge. If it is a rising edge, then PT0
will be high and the variable PTT_PTT0 (representing the status of PT0) will be equal to 1. In
this case we just set the variable “begin” equal to the current value in the capture register TC0.
Since PT0 is set up to capture on both rising and falling edges, TC0 latches the current value of
the timer counter on either transition of PT0. If it is a falling edge, the length of the signal is
calculated in milliseconds. This is done by subtracting the value in “begin” from the current
value in TC0, which was captured when PT0 went low. TFLG2_TOF gets set if the timer counter
overflows; if the timer overflowed, we need to add the maximum timer value (0xFFFF) to TC0.
Finally, we divide by 375: since the counter increments 375,000 times a second (375 KHz), there
are 375 counts per millisecond, so the division puts length in milliseconds. Next, we check
whether the length is greater than 18 ms. If it is, we know that we have a terminating signal
and we change the PWM output signals accordingly (see Microprocessor to Car). Otherwise, we
know that it is just part of the signal sequence, and count (the number of signals received) is
incremented.
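
        As a worked example of the calculation above, suppose the counter wrapped once
between the two captures. The capture values below are hypothetical, chosen only to exercise
the overflow branch:

#include <stdio.h>

int main(void) {
    /* Hypothetical capture values, not taken from a real trace. */
    unsigned long begin = 65000;  /* capture at the rising edge                */
    unsigned long tc0   = 1000;   /* capture at the falling edge (wrapped)     */
    unsigned long tof   = 1;      /* overflow flag was set between the edges   */
    /* (1000 + 0xFFFF*1 - 65000) / 375 = 1535 / 375 ≈ 4 ms */
    unsigned long length = (tc0 + 0xFFFFu * tof - begin) / 375;
    printf("pulse length ~ %lu ms\n", length);
    return 0;
}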

         The following code was written to handle the camera's output as produced by the final
camera-side code given in the Camera section above. It is only slightly different from the code
shown in Figure 37, and the initialization code shown in Figure 36 remained unchanged.

byte count = 0;
unsigned int length = 0;
unsigned int begin = 0;
int total = 0;

interrupt 8 void TOC0_Int(void){
  if(PTT_PTT0) { //rising edge
    begin = TC0;
  } else { //falling edge
    //length = time between rising edge and falling edge in milliseconds
    length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
    TFLG2_TOF &= 1; //clear overflow bit

    if(length > 19) {       //end of speed sequence
      PWMDTY1 = count + 153;
      count = 0;
    } else if(length > 4) { //end of steering sequence
      if(count == 0) {
        PWMDTY2 = 11;         //full left
      } else if(count == 6) {
        PWMDTY2 = 19;         //full right
      } else {
        PWMDTY2 = count + 12; //in between
      }
      count = 0;
    } else { //not the end of a sequence
      count++;
    }

    begin = 0;
  }
}

Figure 38: Microprocessor code to interpret the signals from the camera's output (final design)

        The program is largely the same up to the first if statement involving the length. If the
length is greater than 19 ms, we know that it is a terminating signal and the sequence of signals
just received represents the speed, so the PWM signal for speed is updated accordingly.
Otherwise, if the length is greater than 4 ms but at most 19 ms, we know that the terminating
signal for the steering sequence has just been received, and the PWM signal for steering is
updated accordingly. If neither of those conditions is true, then a signal in a sequence was
received and count is incremented. Every time a terminating signal is received, count is reset for
the next signal sequence.
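
        The decision logic therefore reduces to a three-way classification on the measured
pulse length. A minimal sketch of that rule, using names of our own choosing for the three
cases:

/* Classify a measured pulse length (ms) under the final protocol:
   > 19 ms ends a speed sequence, > 4 ms ends a steering sequence,
   anything shorter is an ordinary counted pulse. */
typedef enum { PULSE_COUNTED, PULSE_STEER_TERM, PULSE_SPEED_TERM } pulse_kind;

pulse_kind classify_pulse(unsigned int length_ms) {
    if (length_ms > 19) return PULSE_SPEED_TERM;
    if (length_ms > 4)  return PULSE_STEER_TERM;
    return PULSE_COUNTED;
}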

       No major problems were encountered during the design of the microprocessor code. The
hardest part was making sure everything was set appropriately at all times: bits were cleared
correctly, timing was correct, the correct registers were being used, and so on. We did
encounter one problem that we couldn't fix directly and instead designed a workaround for,
described below.

        Initially, we used a variable in the microprocessor to try to keep track of whether we
were on the first sequence or the second. That way we could terminate both the steering and
speed signal sequences using a signal of the same duration (e.g. 5 ms), thus potentially speeding
things up. The variable would be 1 if the camera was sending signals representing speed and 0
if the camera was sending the signal sequence for steering. During testing, however, the
variable would become reversed for reasons we never identified: it would be 1 during the
steering sequence and 0 during the speed sequence, so the microprocessor would send the
wrong signals to the car and nothing worked correctly. We ended up adding terminating signals
of different lengths to avoid the problem, as outlined above, so that the microprocessor could
determine whether it had received the steering sequence or the speed sequence based on the
length of the terminating signal alone.
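
        For reference, the abandoned approach looked roughly like the sketch below. This is a
reconstruction from the description above, not the original source; the point is that a single
toggle decides which sequence a terminator closes, so one missed or spurious pulse inverts it
for good:

/* Reconstruction of the abandoned single-terminator design (from the
   description above, not the original source). */
static unsigned char speedPhase = 1;  /* 1 = speed sequence, 0 = steering */
static unsigned char speedDuty = 153; /* stand-in for PWMDTY1 */
static unsigned char steerDuty = 15;  /* stand-in for PWMDTY2 */

void on_terminating_signal(unsigned char count) {
    if (speedPhase)
        speedDuty = count + 153;  /* meant for the speed channel    */
    else
        steerDuty = count + 12;   /* meant for the steering channel */
    speedPhase = !speedPhase;     /* one missed pulse inverts this for good */
}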


Microprocessor to Car
       This section focuses on the generation of the PWM signal that gets sent to the control
boxes on the RC car. It covers how the PWM register of the microprocessor is set up and how
the microprocessor determines the signal to send to the control boxes on the RC car based on
the input received from the camera.

        The microprocessor PWM register is initialized as shown in the following code.

//This uses pin PT1 for speed and pin PT2 for steering
void init_PWM(void){
  //set up channels 0 and 1 for speed and channel 2 for steering
  PWMCTL = 0x10; //concatenate channels 0 and 1 into a 16 bit PWM channel
  MODRR  = 0x06; //set PT1 and PT2 as outputs of PP1 and PP2 respectively

  //channel 1 is the low order bits, channel 0 the high order bits;
  //all options for the 16 bit PWM channel are determined by channel 1 options
  PWME   = 0x06; //enable PWM channels 1 and 2
  PWMPOL = 0x06; //set polarity to start high/end low (channels 1 and 2)
  PWMCLK = 0x06; //clock SA is the source for channel 1 and SB for channel 2

  //set clock B prescaler to 32 (B = E/32): E = 48,000,000 Hz, B = 1,500,000 Hz
  //and clock A prescaler to 16 (A = E/16): A = 3,000,000 Hz
  PWMPRCLK = 0x54;
  PWMCAE   = 0x00; //left align outputs
  PWMSCLA  = 0x0F; //SA = A/(15*2) = 100,000 Hz
  PWMSCLB  = 0x4B; //SB = B/(75*2) = 10,000 Hz

  //The combined periods of channels 0 and 1 give the period of the 16 bit
  //channel (channel 0 is high order, channel 1 low order):
  //period = (period of SA)*2000 = (1/100,000)*2000 = 0.02 seconds (50 Hz)
  PWMPER0 = 0x07; //high order
  PWMPER1 = 0xD0; //low order
  PWMPER2 = 0xC8; //period for channel 2 = (period of SB)*200 = (1/10,000)*200 = 0.02 seconds (50 Hz)

  //PWM frequency for channels 0 and 1 = 48*10^6/(16*30*2000) = 50 Hz
  //duty cycle for the 16 bit channel = (150/2000)*0.02 = 0.0015 seconds
  PWMDTY0 = 0x00; //high order
  PWMDTY1 = 0x96; //low order
  PWMDTY2 = 0x0F; //duty cycle for channel 2 = (15/200)*0.02 = 0.0015 seconds
}

Figure 39: PWM register initialization

        Most of the code can be understood from the comments. The main things to notice are
that channels 0 and 1 of the PWM register are concatenated to create a single 16 bit channel
(versus two 8 bit channels if used separately). This allows for finer control over the PWM signal
generated on channels 0 and 1. When concatenated, only the settings for channel 1 are used
and the settings for channel 0 are ignored. The RC car picks up speed very quickly, so we use
the concatenated channel to control the speed of the car; this allows us to increment the speed
in much smaller amounts than an 8 bit channel would allow. We use channel 2 as the steering
output. We don't need fine control over steering, so we leave it as an 8 bit channel. Setting the
MODRR register equal to 0x06 sets the pins PT1 and PT2 as the outputs of ports PP1 and PP2
respectively. This is another difference between the Dragonfly12 and the Dragon12: the
Dragonfly12 has fewer pins than the Dragon12, and it lacks pins for ports PP1 and PP2. Thus,
pins PT1 and PT2 are linked to ports PP1 and PP2 so that they output the PWM signal.

        Another important part is the clock signal used to generate the PWM signal. It works in
the same way the timer register does to slow down the bus clock: the bus clock of 48 MHz is
divided down based on the values in PWMPRCLK, PWMSCLA, and PWMSCLB. The PWM register
has four clocks it can use for creating signals, called A, B, SA, and SB. A and B are slowed
versions of the bus clock; SA is a further slowed clock A and SB is a further slowed clock B. The
value in PWMPRCLK shown in the code above divides the bus clock by 32 for clock B (1.5 MHz)
and by 16 for clock A (3 MHz). PWMSCLA is then used to slow clock A even more: the value in
PWMSCLA divides clock A by 30, making SA equal to 100 KHz. The value in PWMSCLB divides
clock B by 150, making SB equal to 10 KHz. Clock SA is used for channel 1 and clock SB is used
for channel 2.
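
        The clock chain can be verified with a few lines of arithmetic; the divider values below
are the ones set in Figure 39, and the program is only a host-side check:

#include <stdio.h>

int main(void) {
    unsigned long bus  = 48000000UL;       /* Dragonfly12 bus clock            */
    unsigned long clkA = bus / 16;         /* PWMPRCLK: A prescaler -> 3 MHz   */
    unsigned long clkB = bus / 32;         /* PWMPRCLK: B prescaler -> 1.5 MHz */
    unsigned long sa   = clkA / (2 * 15);  /* PWMSCLA = 15 -> SA = 100 KHz     */
    unsigned long sb   = clkB / (2 * 75);  /* PWMSCLB = 75 -> SB = 10 KHz      */
    printf("SA = %lu Hz, SB = %lu Hz\n", sa, sb);
    return 0;
}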

        Next, the periods of the PWM signals are calculated. Both the steering and speed
controls expect 20 ms PWM signals, so a 20 ms period is used for both. The period is equal to
the clock period multiplied by the value in the PWMPER register for the channel. Channel 1
uses clock SA (100 KHz), so the value in the PWMPER register must be 2000 to create a PWM
signal with a 20 ms period. Since channels 0 and 1 are concatenated, PWMPER0 and PWMPER1
are concatenated as well: PWMPER0 contains the high order bits and PWMPER1 contains the
low order bits. Likewise, channel 2 needs a 20 ms signal, which is set via the PWMPER2 register.
Channel 2 uses clock SB, which operates at 10 KHz, so a value of 200 is put into PWMPER2 to
create a period of 20 ms.

       The last step is setting the duty cycle of each PWM signal, that is, the fraction of the
period where the signal is high. The pulse widths of the PWM signals sent to the RC car's
control boxes vary between 1 and 2 ms. A 1.5 ms pulse represents no speed when sent to the
speed control box and straight when sent to the steering control box, so this is the value
initially given for both PWM signals. Since the duty cycle is given as a fraction of the period,
this means we need a duty cycle of 1.5/20 = 15/200. Since the PWMPER2 register contains 200,
the value 15 is put into the PWMDTY2 register. Likewise, the value 150 is put into the
concatenated PWMDTY0 and PWMDTY1 registers. This finishes the initialization of the PWM
registers.

      The next step is modifying the PWM signal based on the input received from the
camera's output. Once again, we start with the code, as previously given in Figure 37 above,
which was used during the initial implementation that had only three steering states.

#include <mc9s12c32.h>

unsigned int count = 0;
unsigned int length = 0;
unsigned int begin = 0;
byte total = 0;

interrupt 8 void TOC0_Int(void){
  if(PTT_PTT0) { //rising edge
    begin = TC0;
  } else { //falling edge
    //length = time between rising edge and falling edge in milliseconds
    length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
    TFLG2_TOF &= 1; //clear overflow bit

    //terminating signal (length greater than 18 ms)
    if(length > 18) {
      if(count % 3 == 0) {        //not turning
        PWMDTY2 = 15;
      } else if(count % 3 == 1) { //turn left
        PWMDTY2 = 11;
      } else {                    //turn right
        PWMDTY2 = 19;
      }
      PWMDTY1 = count/3 + 153;
      count = 0;
    } else { //not the end of the signal sequence
      count++;
    }

    begin = 0;
  }
}

Figure 40: Microprocessor code to create correct PWM signal for the RC car's control boxes (initial design)

        Referencing Table 4, we can see that the number of signals mod 3 can be used to
determine the direction: if the result is 0, the car should go straight; if the result is 1, the car
should turn left; and if the result is 2, the car should turn right. This is implemented above,
where count is the number of signals. If we detect a terminating signal (length > 18), the duty
cycle for the steering (PWMDTY2) is updated with an appropriate value, as stated in the
comments. It can also be seen from Table 4 that the number of signals divided by 3 equals the
value for the speed. In the code above, we add count/3 onto the value of 153 because the car
doesn't start to move until a duty cycle of at least 154 is reached. The speed picks up very
quickly, however, so the duty cycle is only incremented by one for each increasing speed. Since
a speed duty cycle of 153 represents 1.53 ms, incrementing by one only increases the pulse
width by 0.01 ms, or 10 µs.
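
        Since one count of the 16 bit duty register corresponds to one SA tick, the register value
maps directly onto pulse width. A one-line helper (the name is ours) makes the relationship
explicit:

/* One count of the 16 bit duty register = one SA tick (1/100,000 s = 10 us),
   so duty 150 -> 1.50 ms (neutral) and duty 153 -> 1.53 ms. */
unsigned long duty_to_pulse_us(unsigned int duty16) {
    return duty16 * 10UL;
}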

       The following code was given previously in Figure 38 above and represents the final
version implemented, with up to seven steering states and five speed states. None of the code
used to initialize the PWM register given in Figure 39 had to be changed.

byte count = 0;
unsigned int length = 0;
unsigned int begin = 0;
int total = 0;

interrupt 8 void TOC0_Int(void){
  if(PTT_PTT0) { //rising edge
    begin = TC0;
  } else { //falling edge
    //length = time between rising edge and falling edge in milliseconds
    length = (TC0 + (0xFFFF*(TFLG2_TOF)) - begin)/375;
    TFLG2_TOF &= 1; //clear overflow bit

    if(length > 19) {       //end of speed sequence
      PWMDTY1 = count + 153;
      count = 0;
    } else if(length > 4) { //end of steering sequence
      if(count == 0) {
        PWMDTY2 = 11;         //full left
      } else if(count == 6) {
        PWMDTY2 = 19;         //full right
      } else {
        PWMDTY2 = count + 12; //in between
      }
      count = 0;
    } else { //not the end of a sequence
      count++;
    }

    begin = 0;
  }
}

Figure 41: Microprocessor code to create correct PWM signal for the RC car's control boxes (final design)

        There are only small differences between this code and the code in Figure 40 in terms of
how the duty cycles are changed. The main difference is that separate signal sequences for
steering and speed are received by the microprocessor. If the terminating signal is longer than
19 ms (length > 19), the previous signal sequence was for speed, so PWMDTY1 is updated
accordingly; in this case count is just added to 153. If the length is greater than 4 ms, then the
signal sequence was for steering. Since there are only 7 steering states (0-6), but 9 possible
values for PWMDTY2 (11-19), a couple of values had to be left out. We decided to leave out 12
and 18. This allowed the user to turn full left (11) or full right (19), while having finer control
near the middle (13-17).


Future Work (still to be updated)
•       Finish designing a safety recovery system. This means that the R/C car will retrace its
        movements back to its starting location if the wireless connection between the user's
        computer and the IP camera is unintentionally lost (if the user exits the program or
        terminates the connection, the car shouldn't retrace its movements). This will require
        researching how to monitor the connection between the camera and the program
        running on the user's computer. It will also require researching how to write a script for
        the IP camera that will record the movements of the car and then, in the event that the
        wireless network connection is lost, send the correct signals to the control boxes on the
        car to get it to retrace its movements in reverse order. Also, on the user side, the
        program will let the user know that the connection has been lost and that the car is
        currently retracing its route back to the starting position. It should also continuously
        'look' for the IP camera and reestablish a connection to it if possible. This will be done
        by James, Jeremy, and Seth.

•       Implement the safety recovery system. First, this requires knowledge of how to write a
        script for the IP camera. The script must store the acceleration and steering commands
        sent to it from the user in the camera's memory. Another script must monitor the
        connection between the user's computer and the IP camera. As soon as it fails, the
        camera starts sending signals to the control boxes on the R/C car, causing the car to
        retrace its route. If a connection is reestablished, the car stops retracing its route. On
        the user side, a message must be displayed letting the user know that the connection
        was lost and the car is retracing its route. The program will continuously try to
        reconnect with the IP camera and will let the user know if it successfully reconnects.
        The hardware (car side) will be done by James and Jeremy. The software (user side) will
        be done by Seth.

•       Clean up code: remove testing print statements, add all Javadoc comments, and fix the
        alignment on the setup tab of the GUI.




Appendix A – Programming the Dragon12 Board
        This is a reference for setting up the Dragon12 development board to work with the
CodeWarrior development suite. This procedure erases the default monitor program, DBug-12,
and installs the HCS12 serial monitor on the Dragon12 board; the HCS12 serial monitor is what
CodeWarrior uses to communicate with the Dragon12 board. This procedure is taken directly
from Dr. Frank Wornle's website[10] (see the link “HCS12 Serial Monitor on Dragon-12”), and
the HCS12 serial monitor can be obtained from his website as well. Added comments appear
within [square brackets]. The main difference is that EmbeddedGNU was used instead of
HyperTerminal for reprogramming the Dragon12 development board. EmbeddedGNU can be
obtained from Eric Engler's website[11].

         Both uBug12 as well as the Hi-Ware debugger allow the downloading of user
         applications into RAM and/or the Flash EEPROM of the microcontroller. With
         uBug12, a suitable S-Record has to be generated. Programs destined for Flash
         EEPROM are downloaded using the host command fload. Both S2 records as well
         as the older S1 records (command line option ‘;b’) are supported. Some
         applications might rely on the provision of an interrupt vector table. HCS12 Serial
         Monitor expects this vector table in the address space from 0xFF80 – 0xFFFF. This
         is within the protected area of the Flash EEPROM. The monitor therefore places
         the supplied interrupt vectors in the address space just below the monitor
         program (0xF780 – 0xF800) and maps all original interrupts (0xFF80 – 0xFFFF) to
         this secondary vector table. From a programmer’s point of view, the vector table
         always goes into the space from 0xFF80 – 0xFFFF.




         The source code of the HCS12 Serial Monitor can be obtained from Motorola’s
         website (www.motorola.com, see application note AN2548/D and the supporting
         CodeWarrior project AN2548SW2.zip).

         A few minor modifications are necessary to adapt this project to the hardware of
         the Dragon-12 development board:

                (1) It is proposed to use the on-board switch SW7\1 (EVB, EEPROM) as
                LOAD/RUN switch. The state of this switch is tested by the monitor during
                reset. Placing SW7\1 in position ‘EVB’ (see corresponding LED) forces the
                monitor to become active (LOAD mode). Switching SW7\1 to position
                ‘EEPROM’ diverts program execution to the user code (jump via the
                secondary RESET interrupt vector at 0xF77E – 0xF77F).

                (2) The monitor is configured for a crystal clock frequency of 4 MHz,
                leading to a PLL controlled bus speed of 24 MHz.

                (3) The CodeWarrior project (AN2548SW2.zip) has been modified to
                produce an S1-format S-Record. The frequently problematic S0 header
                has been suppressed and the length of all S-Records has been adjusted to
                32 bytes. This makes it possible to download the monitor program using
                DBug-12 and two Dragon-12 boards connected in BDM mode.

         The protected Flash EEPROM area of the MC9S12DP256C (Dragon-12) can only be
         programmed using a BDM interface. An inexpensive approach is to use two
         Dragon-12 boards connected to each other via a 6-wire BDM cable. The master
         board should run DBug-12 (installed by default) and be connected to the host via
         a serial communication interface (9600 bps, 8 bits, no parity, 1 stop bit, no flow
         control). A standard terminal program (e.g. HyperTerminal) [or EmbeddedGNU]
         can be used to control this board. The slave board is connected to the master
         board through a 6-wire BDM cable. Power only needs to be supplied to either the
         master or the slave board (i.e. one power supply is sufficient, see Figure 1)
         [below].




Figure 42: Connecting two Dragon12 boards

        The BDM interface of DBug-12 allows the target system (slave) to be programmed
        using the commands fbulk (erases the entire Flash EEPROM of the target system)
        and fload (downloads an S-Record into the Flash EEPROM of the target system).
        Unfortunately, DBug-12 requires all S-Records to be of the same length, which the
        S-Records produced by CodeWarrior are not. CodeWarrior has therefore been
        configured to run a small script file (/bin/make_s19.bat) which calls upon Gordon
        Doughman’s S-Record conversion utility SRecCvt.exe. The final output file
        (S12SerMon2r0.s19) is the required S-Record in a downloadable format. [This does
        not need to be done as S12SerMon2r0.s19 is already included in the download
        from Dr. Frank Wornle’s website].

        Click on the green debug button (cf. Figure 5) to build the monitor program and
        convert the output file to a downloadable S-Record file.

        Start HyperTerminal [EmbeddedGNU] and reset the host board. Make sure the
        host board is set to POD mode. You should be presented with a small menu
        (Figure 2) [below]. Select menu item (2) to reset the target system. You should be
        presented with a prompt ‘S>’ indicating that the target system is stopped.

Figure 43: Resetting Dragon12 board in EmbeddedGNU

       Erase the Flash EEPROM of the target system by issuing the command fbulk. After
       a short delay, the prompt should reappear.

       Issue the command fload ;b to download the S-Record file of the HCS12 Serial
       Monitor. Option ‘;b’ indicates that this file is in S1-format (as opposed to S2).
       From the Transfer menu select Send Text File… . Find and select the file
       S12SerMon2r0.s19. [When using EmbeddedGNU choose Download from the
       Build menu.] You will have to switch the displayed file type to ‘All Files (*.*)’ (see
       Figure 3) [below]. Click on Open to start the download.




Figure 44: Choose HCS12 Serial Monitor file




        Once the download is complete, you should be presented with the prompt
        (Figure 4) [below]. The HCS12 Serial Monitor has successfully been written to the
        target board. Disconnect the BDM cable from the target system and close the
        terminal window.




Figure 45: The Serial Monitor has been successfully loaded




        Connect the serial interface SCI0 of the target system to the serial port of the host
        computer and start uBug12 [this can be obtained from Eric Engler’s
        website[11]].

        Ensure that you can connect to the target by entering ‘con 1’ (for serial port
        COM1). You should receive the acknowledgement message ‘CONNECTED’. Issue
        the command ‘help’ to see what options are available to control the monitor.
        Disconnect from the target using command ‘discon’ (Figure 5) [below].

        Note that HCS12 Serial Monitor provides a set of commands very similar to that
        of DBug-12: a user application can be downloaded into the Flash EEPROM of the
        microcontroller using fload (;b), the unprotected sectors of Flash EEPROM can be
        erased using fbulk, etc. Following the download of a user program into the Flash
        EEPROM of the microcontroller, the code can be started by switching SW7\1 to
        RUN (EEPROM) and resetting the board (reset button, SW6).




Figure 46: Options to control the monitor




        This completes the installation and testing of HCS12 Serial Monitor on the
        Dragon-12 development boards.




Appendix B – Porting Code from the Dragon12 to
the Dragonfly12
       This is the procedure for getting code that works with the Dragon12 development board
to work on the Dragonfly12. It requires a Dragon12 development board with D-Bug12 on it, a
BDM cable, and a Dragonfly12 microprocessor. The Dragonfly12 uses the MC9S12C32
processor while the Dragon12 uses the MC9S12DP256 processor. You should familiarize
yourself with the differences by reading the user guide for the MC9S12C32 at
http://www.freescale.com/files/microcontrollers/doc/data_sheet/MC9S12C128V1.pdf and
comparing it against that of the DP256 chip at
http://www.cs.plu.edu/~nelsonj/9s12dp256/9S12DP256BDGV2.pdf. Any differences in terms
of the actual code will have to be taken into account separately from what is described in this
guide. This guide covers porting code developed in CodeWarrior to the Dragonfly12. Mr.
Wayne Chu from Wytec provided a lot of help in figuring out this process.

       The first step is to create a new project. Select New Project from the File menu. You
should be presented with the New Project window shown in Figure 47 below.




Figure 47: Creating a new Dragonfly12 project



Enter a name for the project and click Next. The New Project Wizard should appear; click Next.
On page 2, select MC9S12C32 from the list of derivatives and click Next. On page 3, select the
languages that you will be programming in; for this project, only C was selected. The next few
pages ask whether certain CodeWarrior features should be used, including Processor Expert,
OSEKturbo, and PC-lint; it is safe to answer no to all of them. On page 7, it asks what startup
code should be used: choose minimal startup code. On page 8, it asks for the floating point
format supported: choose None. On page 9 it asks for the memory model to be used: choose
Small. On page 10 you can choose what connections you want; only Full Chip Simulation is
needed. After clicking Finish the basic project should be created (see Figure 48 below). The
green button is used to compile the code.




Figure 48: Dragonfly12 initial project layout

       Whenever the code is compiled, it creates an s19 file and puts it in the bin folder within
the project. If you are using Full Chip Simulation, the file is probably called
“Full_Chip_Simulation.abs.s19”. This file is what will eventually be put onto the Dragonfly12.
First, the s19 file needs to be converted to the correct format. This is done using a program
called SRecCvt.exe. It can be downloaded from
http://www.ece.utep.edu/courses/web3376/Programs.html. Put SRecCvt.exe in the bin folder
containing the s19 file.

        The conversion can be done from the command line, but it is easier to create a bat file
that will do it for you. The following line for converting the s19 file was generously provided by
Mr. Wayne Chu. Open a text editor and put the following line in it:



       sreccvt -m c0000 fffff 32 -of f0000 -o DF12.s19 Full_Chip_Simulation.abs.s19



This assumes that Full_Chip_Simulation.abs.s19 is the name of the s19 file created by
CodeWarrior; if the s19 file generated by CodeWarrior is called something different, then
Full_Chip_Simulation.abs.s19 should be changed to that name. DF12.s19 is the name of the
output s19 file. It can be called anything, just make sure it is different from the name of the
input s19 file. Save the file to the bin folder containing SRecCvt.exe with the extension “.bat”
(e.g. make_s19.bat). Double click on the bat file and the CodeWarrior s19 file should be
converted automatically and saved as the output s19 file named in the bat file (e.g. DF12.s19).
This is the file that will be put onto the Dragonfly12 microprocessor.

        Connect power to a Dragon12 development board that has D-Bug12 on it. Plug one end
of the BDM cable into “BDM in” on the Dragon12 and the other end into the BDM connection
on the Dragonfly12. Make sure the brown wire in the BDM cable lines up with the “1” on both
of the BDM connections. Switch the Dragon12 to POD mode by setting SW7 switch 1 down and
SW7 switch 2 up. See the picture below for the complete setup.



        Start up EmbeddedGNU. Press the reset switch on the Dragon12 board. A menu should
appear similar to the one shown in Figure 49 below (make sure the Terminal tab is selected). If
nothing appears, try clicking in the terminal and pressing a key; sometimes it doesn't display
properly and this usually fixes it. Once the menu appears, choose option 1 and enter 8000 to
set the target speed to 8000 KHz. If the R> prompt appears, type “reset” and hit enter. You
should now be presented with the S> prompt. Enter “fbulk” to erase the flash memory on the
Dragonfly12. After a brief pause, the S> prompt should reappear. Now enter “fload” and press
enter. The terminal will hang. Go to the Build menu and choose Download. Navigate to the bin
folder of the CodeWarrior project and select the converted s19 file (e.g. DF12.s19). Click Open.
A series of stars should appear while the file is downloading. Once it is finished, the S> prompt
should reappear. The program is now on the Dragonfly12.




Figure 49: Downloading code to Dragonfly12 using EmbeddedGNU




Appendix C – Building the proper x86 to ARM cross-compiler for the Axis 207W’s hardware
       We were able to find the file emb-app-arm-R4_40-1.tar, which contained the files to
build the cross-compiler configured for the armv4tl architecture running on the camera's
ARTPEC processor. We originally tried to build this on Ubuntu 8.10 and ran into an error during
the build process referring to a header file.




Bibliography
Axis 207 Network Camera. <http://www.axis.com/products/cam_207/>.

Axis 207/207W/207MW Network Cameras Datasheet.
<http://www.axis.com/files/datasheet/ds_207mw_combo_30820_en_0801_lo.pdf>.

Axis Scripting Guide.
<http://www.axis.com/techsup/cam_servers/dev/files/axis_scripting_guide_2_1_8.pdf>.

Axis. VAPIX version 3 RTSP API. 2008.

Engler, Eric. Embedded Tools. <http://www.geocities.com/englere_geo/>.

Hunt, Craig. TCP/IP Network Administration. Sebastopol, CA: O'Reilly, 2002.

Lethbridge, Timothy C. and Robert Laganiere. Object-Oriented Software Engineering: Practical Software
Development using UML and Java. New York: McGraw Hill, 2001.

Sun Microsystems, Inc. Java™ Platform, Standard Edition 6 API Specification. 2008.
<http://java.sun.com/javase/6/docs/api/>.

Verizon Wireless AirCard® 595 product page. October 2008.
<http://www.verizonwireless.com/b2c/store/controller?item=phoneFirst&action=viewPhoneDetail&sel
ectedPhoneId=2730>.

Wikipedia. IP Camera. 19 Nov 2008. October 2008. <http://en.wikipedia.org/wiki/IP_Camera>.

Wornle, Frank. Wytec Dragon12 Development Board. October 2005.
<http://www.mecheng.adelaide.edu.au/robotics/wpage.php?wpage_id=56>.




Glossary
API (application programming interface): a set of functions, procedures, methods, classes or
protocols that an operating system, library or service provides to support requests made by
computer programs.



Bit: A binary digit that can have the value 0 or 1. Combinations of bits are used by computers
for representing information.

Computer network: a group of interconnected computers.

Clock frequency: The operating frequency of the CPU of a microprocessor. This refers to the
number of times the voltage within the CPU changes from high to low and back again within
one second. A higher clock speed means the CPU can perform more operations per second.

Central processing unit (CPU): A machine, typically within a microprocessor, that can execute
computer programs.

Electric current: Flow of electric charge.

Electric circuit: an interconnection of electrical elements.

Encryption: the process of transforming information (referred to as plaintext) using an
algorithm (called cipher) to make it unreadable to anyone except those possessing special
knowledge, usually referred to as a key.

GUI (graphical user interface): a type of user interface which allows users to interact with a
computer through graphical icons and visual indicators.

HTTP (Hypertext Transfer Protocol): a communications protocol used on the internet for
retrieving inter-linked text documents.

IP camera: a unit that includes a camera, web server, and connectivity board.

Interrupt: Can be either software or hardware. A software interrupt causes a change in
program execution, usually jumping to an interrupt handler. A hardware interrupt causes the
processor to save its state and switch execution to an interrupt handler.

Joystick: an input device consisting of a stick that pivots on a base and reports its angle or
direction to the device it is controlling.

Light-Emitting Diode (LED): A diode, or two terminal device that allows current to flow in a
single direction, which emits light when current flows through it. Different colors can be
created using different semiconducting materials within the diode.

Linux: a Unix-like computer operating system family which uses the Linux kernel.

Memory: integrated circuits used by a computer to store information.


Microprocessor: a silicon chip that performs arithmetic and logic functions in order to control a
device.

Modulation: the process of varying a periodic waveform.

Network Bandwidth: the capacity of a given system to transfer data over a connection, usually
measured in bit/s or multiples thereof (Kbit/s, Mbit/s, etc.).

Pulse Width Modulation: uses a square wave whose pulse width is modulated, resulting in
variation of the average value of the waveform.

R/C (remote controlled) car: a powered model car driven from a distance via a radio control
system.

Register: Stores bits of information that can be read out or written in simultaneously by
computer hardware.

RF (radio frequency): a frequency or rate of oscillation within the range of about 3 Hz to
300 GHz.

RTSP (Real Time Streaming Protocol): A control protocol for media streams delivered by a
media server.

Servo: a mechanism that converts an electrical control signal into mechanical motion, such as a
steering position.

TCP/IP Socket: an end-point of a bidirectional process-to-process communication flow across
an IP based network.

TCP/IP (Internet Protocol Suite): a set of data communication protocols, including Transmission
Control Protocol and the Internet Protocol.

User Interface: means by which a user interacts with a program. It allows input from the user to
modify the program and provides output to the user which denotes the results of the current
program setup.

VAPIX®: RTSP-based application programming interface to Axis cameras.

Video transmission: sending a video signal from a source to a destination.

Voltage: difference in electric potential between two points in an electric circuit.

Wi-Fi: the name of the wireless technology used by many electronic devices for wireless
networking. It covers IEEE 802.11 technologies in particular.

Wireless network: a computer network whose nodes are connected without cables, typically by
radio.



