                 Software Requirements Specification
                        Enhancements to ADI
                   Mission Solutions Engineering




By: Nick Avignone, Spence DiNicolantonio, Chuck Dugenio, Mike Liguori,
                   Jonathan Palka, and Robert Russell

           Course: Software Engineering I (Fall 2009) Team 3
                     Instructor: Dr. Adrian Rusu
                      Date: November 2nd, 2009



                     Table of Contents

Introduction
Executive Summary
Application Context
Functional Requirements
Environmental Requirements
Software Qualities
Other Requirements
Time Schedule
Potential Risks
Future Changes
Acceptance Test Plan
Training
Glossary
References
Prototype



1.0 Introduction

MSE has asked Rowan University to add functionality to the Advanced Display
Infrastructure, known as ADI. ADI is a collection of services that provides system-independent
display infrastructure capabilities. ADI can manage different screens and user interfaces,
organize and synchronize operator actions and events, present tactical graphics in a variety of
geo-referenced displays, and provide application error and data management.

The requested functionality comprises an abstract camera module, a linear algebra and
geometry toolkit, and decision aids. MSE may use different cameras across its programs; the
abstract camera module will let them switch cameras without extensive code changes. These
functions will allow MSE to plug any camera into ADI and use it immediately. The geometry
and linear algebra toolkit will be able to tell whether a point is inside a predefined zone. If a
point is moving, the toolkit can also determine whether, on its current course, it will enter or
exit a zone. The toolkit can likewise determine whether two zones intersect and compute the
area of the intersection. Once a point is inside a zone, the decision aids will help the user make
a decision, based on where the point is in the zone, how far the point is from a zone, and
similar criteria.

The camera module to be implemented in ADI will save MSE time. MSE will no longer have
to modify code in order to use a new camera; they simply extend the camera module, and the
camera will interact with the user through it. The camera module will also reduce coupling
between the camera and ADI. The basic functions the camera module will provide are
field-of-view control, position and movement control, orientation and rotation control, and
focus-locking capabilities.

The geometry and linear algebra toolkit is a standalone API that can be used in ADI. This API
will deal with intersecting lines and shapes, both two-dimensional and three-dimensional. It
will also be used to determine whether a point is inside a zone. Based on a point's current
position and trajectory, the toolkit can also predict which zone, if any, the point will be in if it
continues on its path. The conversion package will allow the user to convert a position
obtained from the Linear Algebra API into WGS-84 and vice versa.

The decision aids will use the information generated by the geometry and linear algebra toolkit
to determine what to do about a point inside or outside a zone. Depending on the zone and what
the point represents, a decision can be made. As a point moves around inside a zone, the
decision can change based on where it is within the zone.

2.0 Executive Summary

MSE created the Advanced Display Infrastructure, known as ADI. ADI is a collection of services
that provides system-independent display infrastructure capabilities. ADI can manage different
screens and user interfaces, organize and synchronize operator actions and events, present
tactical graphics in a variety of geo-referenced displays, and provide application error and data
management. MSE has asked Rowan University to add functionality to ADI.




Our upgrades should include the following key features:

Camera:
   • Implement a camera module that will get and set its orientation angles.
   • Get and set the field of view angle.
   • Get and set the position of the camera module in 3D space, as well as being able to
     change the camera's orientation and position once set.
   • This module should be independent of proprietary rendering software.

API:
   • 2D circles and 3D spheres should be considered for calculations in the API.
   • The API will provide the ability to determine whether a point exists in a given 2D area
     or 3D volume, and whether a point will cross or intersect with another point in a given
     amount of time.
   • The API will be able to determine whether two lines will intersect and where, whether a
     line and a 2D or 3D shape will intersect and where, and whether two 2D or 3D shapes
     will intersect and where.

Conversion:
   • Conversions from WGS-84 to the Cartesian coordinate system will be available.
   • Conversions from the Cartesian coordinate system to WGS-84 will be available.

Tactical Decision Aids:
   • Create a framework for Tactical Decision Aids (TDAs).
   • The framework will connect information from lower-order services to build a Tactical
     Decision Aid suited to the scenario, assisting the user by displaying information in a
     different manner that makes it easier to make a decision or solve a problem.
   • The TDA will also include an optimization feature that will calculate the optimal
     solution to a problem.

The most important risks we should take into consideration are:
   • External Shortfalls: While this project is only a stand-alone API, it will be up to the
     user to develop any required wrappers to provide renderer-specific interfacing with the
     deliverable.
   • Straining Computer Science Capabilities: The outcome of this project will be a success
     provided that the requirements for decision aids do not exceed the abilities of an abstract
     API.



3.0 Application Context

MSE has multiple cameras at their disposal. The current cameras are tightly coupled to the
renderer, which makes it difficult to swap the camera the renderer uses. By implementing a
camera module, MSE can swap in any of the cameras for use with the renderer. This will save
MSE time by allowing the current camera to interact with the user through the new camera
module. MSE will not have to spend as much time rearranging functions to use a different
camera.

The new camera module will make the camera independent of the renderer. No single camera
will be coupled to the renderer; each camera will now go through the camera module. When a
camera is swapped in, it will simply use the functionality of the new camera module.

ADI uses various shapes, vectors, and objects to represent many entities. These entities, such
as buildings and radar devices, are placed in many different positions, which may intersect or
overlap. There are currently no features that can easily determine whether these objects cross or
overlap. The Linear Algebra API will make it easy for MSE to determine whether objects
intersect, or whether they will intersect within a given amount of time. This makes it easier for
the user to see how the current position of one object compares to the placement of another.
The user will be able to convert the output of the Linear Algebra API into WGS-84.

The ability to determine whether the placement of these objects is optimal is vital when
choosing positions. Currently there are no methods that can definitively determine whether an
object is too close to another object, or whether an object is in an adequate position within a
given area. This makes it difficult for the user to conclude whether the position of an object can
be improved. The decision aids will allow the user to easily determine whether the locations of
a set of objects provide optimal coverage inside or outside a given area.

4.0 Functional Requirements

       4.1 Camera Module
             4.1.1 Behaviors
                    4.1.1.1 Euler Orientation: The module will provide a method of getting
                            and setting the camera's orientation in terms of Euler angles
                            relative to the default orientation, in which the camera faces
                            down the negative Z axis with its local Y axis parallel to the
                            world Y axis.
                    4.1.1.2 Axis/Angle Orientation: The module will provide a method of
                            getting and setting the camera's orientation in terms of an axis
                            and angle of rotation relative to the default orientation, in
                            which the camera faces down the negative Z axis with its local Y
                            axis parallel to the world Y axis.
                   4.1.1.3 Field Of View: The module will provide a method of getting and
                           setting the field of view angle.



              4.1.1.4 Location: The module will provide a method of getting and setting
                      the position of the camera in 3D space in terms of Cartesian
                      coordinates.
              4.1.1.5 Focus Lock: The module will provide a method of locking the
                      camera's focus on a particular point in 3D space in terms of
                      Cartesian coordinates.
              4.1.1.6 Relative Movement: The module will provide a method to change
                      the camera's position relative to its current position, using a
                      Cartesian vector.
              4.1.1.7 Relative Euler Rotation: The module will provide a method to
                      change the camera's orientation relative to its current orientation,
                      using Euler angles.
              4.1.1.8 Relative Axis/Angle Rotation: The module will provide a method
                      to change the camera's orientation relative to its current
                      orientation, using axis and angle of rotation.
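The behaviors above can be gathered in a single renderer-independent class. The following is a minimal sketch only; the class and method names are illustrative assumptions, not part of the specification, and focus-lock enforcement is omitted.

```java
// Hypothetical sketch of the abstract camera module (4.1.1.1-4.1.1.8).
// All state is kept independent of any proprietary rendering software.
public class AbstractCamera {
    // Position in Cartesian coordinates (4.1.1.4); default (0, 0, 0).
    private double x = 0, y = 0, z = 0;
    // Euler angles in degrees, applied in y, x, z order (4.1.1.1).
    private double yaw = 0, pitch = 0, roll = 0;
    // Field-of-view angle in degrees (4.1.1.3).
    private double fov = 45.0;
    // Focus-lock point (4.1.1.5); null when unlocked.
    private double[] focusLock = null;

    public void setPosition(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    public double[] getPosition() { return new double[] {x, y, z}; }

    public void setOrientation(double yaw, double pitch, double roll) {
        this.yaw = yaw; this.pitch = pitch; this.roll = roll;
    }
    public double[] getOrientation() { return new double[] {yaw, pitch, roll}; }

    public void setFieldOfView(double degrees) { fov = degrees; }
    public double getFieldOfView() { return fov; }

    // Lock the camera's focus on a point in 3D space (4.1.1.5).
    public void lockFocus(double px, double py, double pz) { focusLock = new double[] {px, py, pz}; }
    public double[] getFocusLock() { return focusLock; }

    // Relative movement (4.1.1.6): translate by a Cartesian vector.
    public void move(double dx, double dy, double dz) { x += dx; y += dy; z += dz; }

    // Relative Euler rotation (4.1.1.7): offsets added to the current angles.
    public void rotate(double dYaw, double dPitch, double dRoll) {
        yaw += dYaw; pitch += dPitch; roll += dRoll;
    }
}
```

It would remain the user's responsibility, per section 7, to push this state into the chosen renderer.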
4.2 Linear Algebra API
       4.2.1 2-D
              4.2.1.1 Point Inside an Area: The API will provide the ability to
                      determine whether a given point exists in a given area.
              4.2.1.2 Vector Crossing a Boundary: The API will provide the ability to
                      determine whether a given vector will cross a given boundary in a
                      given time based on a given trajectory.
               4.2.1.3 Vector Intersecting a Point: The API will provide the ability to
                       determine whether a given vector will intersect another given point
                       in a given amount of time.
              4.2.1.4 Multiple Lines Intersecting: The API will provide the ability to
                      determine whether two given lines will intersect and will return the
                      point of intersection.
              4.2.1.5 Line Intersecting Shape: The API will provide the ability to
                      determine whether a given line will intersect a given 2D shape and
                      will return the point of intersection.
              4.2.1.6 Shape Intersection: The API will provide the ability to determine
                      whether two given 2D shapes will intersect and will return the area
                      of intersection.
              4.2.1.7 Shape Inside Another Shape: The API will provide the ability to
                      determine if a given shape is wholly contained within another
                      given shape.
       4.2.2 3-D
              4.2.2.1 Point Inside a Volume: The API will provide the ability to
                      determine whether a given point exists in a given volume.
               4.2.2.2 Vector Crossing a Boundary: The API will provide the ability to
                       determine whether a given vector will cross a given boundary in a
                       given time based on a given trajectory.



              4.2.2.3 Vector Intersecting a Point: The API will provide the ability to
                      determine whether a given vector will intersect another given point
                      in a given amount of time.
              4.2.2.4 Multiple Lines Intersecting: The API will provide the ability to
                      determine whether two given lines will intersect and will return the
                      point of intersection.
              4.2.2.5 Line Intersecting a Volume: The API will provide the ability to
                      determine whether a given line will intersect a given 3D shape and
                      will return the point of intersection.
              4.2.2.6 Multiple Shapes Intersecting: The API will provide the ability to
                      determine whether two given 3D shapes will intersect and will
                      return the volume of intersection.
               4.2.2.7 Volume Inside Another Volume: The API will provide the ability
                       to determine whether a given volume is wholly contained within
                       another given volume.
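Two of the 2-D capabilities above (4.2.1.1 and 4.2.1.4) reduce to short calculations. The sketch below is illustrative only; the class name and the parametric line representation (point plus direction vector) are assumptions, not the delivered API, and a circle is used because section 7 limits 2-D shapes to circles and squares.

```java
// Illustrative 2-D checks for requirements 4.2.1.1 and 4.2.1.4.
public class Geometry2D {
    // 4.2.1.1: is point (px, py) inside the circle centred at (cx, cy) with radius r?
    public static boolean pointInCircle(double px, double py,
                                        double cx, double cy, double r) {
        double dx = px - cx, dy = py - cy;
        return dx * dx + dy * dy <= r * r;        // compare squared distances
    }

    // 4.2.1.4: intersection of two lines given as p + t*d.
    // Returns {x, y}, or null when the lines are parallel.
    public static double[] lineIntersection(double px1, double py1, double dx1, double dy1,
                                            double px2, double py2, double dx2, double dy2) {
        double det = dx1 * dy2 - dy1 * dx2;
        if (det == 0) return null;                // parallel: no unique intersection
        double t = ((px2 - px1) * dy2 - (py2 - py1) * dx2) / det;
        return new double[] {px1 + t * dx1, py1 + t * dy1};
    }
}
```

With the data from acceptance test 11.2.1.4 (lines from (23, 35) with direction (1, 3) and from (42, 32) with direction (3, -6)), `lineIntersection` yields the point (30, 56).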
4.3 Conversion
       4.3.1 Cartesian to WGS-84: A given Cartesian coordinate will be able to be
              converted into the WGS-84 coordinate system based on a unit conversion
              factor provided by the user.
       4.3.2 WGS-84 to Cartesian: A given WGS-84 coordinate will be able to be
              converted into the Cartesian coordinate system based on a unit conversion
              factor provided by the user.
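The requirements above mandate only a user-supplied unit conversion factor, so the exact conversion is left open. As one possible interpretation of 4.3.2, the standard WGS-84 geodetic-to-Earth-centred Cartesian (ECEF) formula could be used; the class name, method name, and constants below are assumptions for illustration, not the specified design.

```java
// One possible realisation of 4.3.2 (WGS-84 to Cartesian): the standard
// geodetic-to-ECEF conversion on the WGS-84 reference ellipsoid.
public class Wgs84Converter {
    static final double A  = 6378137.0;          // WGS-84 semi-major axis (metres)
    static final double E2 = 6.69437999014e-3;   // first eccentricity squared

    // latDeg/lonDeg in degrees, h in metres above the ellipsoid;
    // returns Earth-centred Cartesian {x, y, z} in metres.
    public static double[] toCartesian(double latDeg, double lonDeg, double h) {
        double lat = Math.toRadians(latDeg), lon = Math.toRadians(lonDeg);
        double sinLat = Math.sin(lat), cosLat = Math.cos(lat);
        // Prime vertical radius of curvature at this latitude.
        double n = A / Math.sqrt(1.0 - E2 * sinLat * sinLat);
        double x = (n + h) * cosLat * Math.cos(lon);
        double y = (n + h) * cosLat * Math.sin(lon);
        double z = (n * (1.0 - E2) + h) * sinLat;
        return new double[] {x, y, z};
    }
}
```

For example, latitude 0, longitude 0, height 0 maps to (6378137, 0, 0), the semi-major axis on the equator.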
4.4 Decision Aids
       4.4.1 2D
              4.4.1.1 2D Line in an Area: The TDA will be able to determine what
                      proportion of a line falls within an area, if the line doesn't lie
                      completely in the area.
              4.4.1.2 2D Shape in an Area: The TDA will be able to determine what
                      proportion of a shape falls within an area, if the shape doesn't lie
                      completely in the area.
       4.4.2 3D
              4.4.2.1 3D Shape in a Volume: The TDA will be able to determine what
                      proportion of a shape falls within a volume, if the shape doesn't lie
                      completely in the volume.
              4.4.2.2 3D Volume in a Volume: The TDA will be able to determine
                      what proportion of a volume falls within another volume, if the
                      volume doesn't lie completely in the other volume.
       4.4.3 Optimization
              4.4.3.1 2D Optimization (Best Case): The TDA optimization feature will
                      be able to determine the best coverage for a set of shapes inside the
                      area.
              4.4.3.2 3D Optimization (Best Case): The TDA optimization feature will
                      be able to determine the best coverage of a set of volumes within
                      another volume.



                      4.4.3.3 2D Optimization (Worst Case): The TDA optimization feature
                              will be able to determine the worst coverage for a set of shapes
                              inside the area.
                      4.4.3.4 3D Optimization (Worst Case): The TDA optimization feature
                              will be able to determine the worst coverage of a set of volumes
                              within another volume.
              4.4.4   General
                      4.4.4.1 Manual Positioning: The TDA will allow the user to modify the
                              positions of the lower level components.
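Requirement 4.4.1.1 (the proportion of a line falling within an area) can be illustrated for a circular area by clipping the segment's parameter range against the circle. This is a hypothetical sketch under that assumption, not the delivered TDA; the class and method names are invented for illustration.

```java
// Hedged sketch of 4.4.1.1: proportion of a 2-D line segment inside a circle.
// The segment is parametrised as (x1,y1) + t*(x2-x1, y2-y1), t in [0, 1].
public class LineInAreaAid {
    // Returns a value in [0, 1]: the fraction of the segment inside the circle.
    public static double proportionInside(double x1, double y1, double x2, double y2,
                                          double cx, double cy, double r) {
        double dx = x2 - x1, dy = y2 - y1;
        double fx = x1 - cx, fy = y1 - cy;
        // |f + t*d|^2 = r^2  =>  a*t^2 + b*t + c = 0
        double a = dx * dx + dy * dy;
        double b = 2 * (fx * dx + fy * dy);
        double c = fx * fx + fy * fy - r * r;
        if (a == 0) return (c <= 0) ? 1.0 : 0.0;   // degenerate (zero-length) segment
        double disc = b * b - 4 * a * c;
        if (disc < 0) return 0.0;                  // the segment's line misses the circle
        double sq = Math.sqrt(disc);
        double t0 = Math.max(0.0, (-b - sq) / (2 * a));  // clamp the chord to [0, 1]
        double t1 = Math.min(1.0, (-b + sq) / (2 * a));
        return Math.max(0.0, t1 - t0);
    }
}
```

For instance, the segment from (-2, 0) to (2, 0) through the unit circle at the origin has half its length inside, so the aid reports 0.5.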

5.0 Environmental Requirements

   • There are no environmental requirements or restrictions for the development of these
     APIs.
   • All modules are standalone.

6.0 Software Qualities

   • Evolvability: In future releases, new camera behaviors will be added along with new
     calculations for the Linear Algebra API. It is important that this system welcomes
     change.
   • Interoperability: Third-party software, such as Intermaphics, will still provide the
     functionality for certain camera behaviors and calculations, but our module will
     encapsulate all software that provides camera functionality and computations. This will
     remove the dependency between ADI and third-party software.
   • Maintainability: It is important that camera behaviors can be changed or calculations
     can be altered.
   • Correctness: The resulting answer of any given calculation must be accurate, as these
     calculations will be used in ADI for collision detection and predicting trajectory in a
     given sector. This information must be valid so the user of ADI is not misinformed.
   • Re-usability: The ADI system is not concerned with how camera behaviors are
     implemented or how computations are derived, so it is critical that code is reused, since
     all camera behaviors will rely on similar logic to be exposed to ADI. The same applies
     to the Linear Algebra API.
   • Understandability: Our enhancements will follow proper Java documentation standards.
     The code will be properly indented and easy to follow.
   • Reliability: The camera functionality must behave normally so the user can navigate a
     2-D or 3-D world.
   • Repair-ability: If errors are found in the way the camera behaves or in the output of
     various computations, the amount of code that needs to be changed should be minimal.
   • Robustness: The Linear Algebra API will check for invalid input before performing the
     given calculation.



   • Performance: The camera module must be developed in such a fashion that it does not
     utilize more system resources than the existing camera module. The user does not want
     to wait for the system to respond.
   • Verifiability: The user of ADI will be able to notice if the camera is not performing the
     right behavior or if the output of computations from the Linear Algebra API cannot be
     verified.
   • User Friendliness: The developer that uses our enhancements should be able to follow
     the documentation.
   • Timeliness: The Linear Algebra API and the camera module will be delivered on time.
   • Visibility: Proper documentation and design standards will allow another developer to
     see how our enhancements were developed.
   • Size: These enhancements will be designed and implemented to ensure they are as small
     as possible.
   • Productivity: Our process and methods for developing these enhancements are sound
     enough that we will be able to deliver these products to the client on time.
   • Portability: Our enhancements are standalone for ADI. ADI is a desktop application and
     is not meant to be available on various platforms. As long as the development is in C++
     or Java, requirements have been met.
   • Safety: Our enhancements are to be used by the ADI system. Our enhancements will not
     store any information.

7.0 Other Requirements

   • All implementation will be done in Java.
   • Documentation that adheres to Sun's Javadoc standards will be provided so developers
     of ADI can understand how these APIs behave in less than five hours.
   • All Euler angles are applied in y, x, z order.
   • The default camera orientation will be facing down the negative Z axis with its local Y
     axis parallel to the world Y axis.
   • The default camera position will be at Cartesian coordinates (0, 0, 0).
   • The camera module will be independent of proprietary rendering software.
   • The camera module will serve as an API to define and maintain a theoretical camera
     object.
   • The only shapes being considered for 2D calculations are circles and squares.
   • The only shapes being considered for 3D calculations are spheres and cubes.
   • It will be up to the user to apply camera data to the chosen renderer as required.
   • All input to the camera module is expected in degrees and Cartesian coordinates;
     however, the provided conversion module can be used in combination to provide input
     as WGS-84 data.
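The "y, x, z order" requirement above can be made concrete: for column vectors, the combined rotation would be R = Ry(yaw) · Rx(pitch) · Rz(roll). Matrix conventions differ between renderers, so the sketch below shows one self-consistent choice (right-handed axes, angles in radians) rather than a mandated implementation; all names are illustrative.

```java
// Illustration of applying Euler angles in y, x, z order: yaw about Y,
// then pitch about X, then roll about Z, composed as R = Ry * Rx * Rz.
public class EulerYXZ {
    static double[][] multiply(double[][] m, double[][] n) {
        double[][] r = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[i][j] += m[i][k] * n[k][j];
        return r;
    }

    // Elementary rotation matrices about each world axis (angle in radians).
    static double[][] ry(double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[][] {{c, 0, s}, {0, 1, 0}, {-s, 0, c}};
    }
    static double[][] rx(double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[][] {{1, 0, 0}, {0, c, -s}, {0, s, c}};
    }
    static double[][] rz(double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[][] {{c, -s, 0}, {s, c, 0}, {0, 0, 1}};
    }

    // Rotate vector v by yaw, pitch, roll applied in y, x, z order.
    public static double[] rotate(double[] v, double yaw, double pitch, double roll) {
        double[][] r = multiply(ry(yaw), multiply(rx(pitch), rz(roll)));
        return new double[] {
            r[0][0] * v[0] + r[0][1] * v[1] + r[0][2] * v[2],
            r[1][0] * v[0] + r[1][1] * v[1] + r[1][2] * v[2],
            r[2][0] * v[0] + r[2][1] * v[1] + r[2][2] * v[2]};
    }
}
```

Under this convention, yawing the default view direction (0, 0, -1) by 90 degrees about Y turns it to (-1, 0, 0).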



8.0 Time Schedule

The following deadlines have been set by the team and by client requests. The deadlines will
ensure that these enhancements are completed and delivered in their entirety, with all
functionality. Our contact, Ms. Kimberly Davis, will be kept informed of the team's progress
so that the client is aware of the project's overall progress and changes can be made if necessary.
These dates are tentative and may be subject to change.
   • Phase 1: Requirements analysis and prototyping are to be completed by Friday, October
     23rd, 11:59 PM.
   • Phase 2: Architectural design and module design are to be completed by Friday,
     November 13th, 11:59 PM.
   • Phase 3: Implementation, testing, and modifications are to be completed by Monday,
     December 7th, 11:59 PM.

9.0 Potential Risks

   • Personnel Shortfalls: Members of the team need to make sure that they are completing
     their parts of the project in a timely manner, and finding time to meet with the group. If
     one person in the group doesn't fully complete their sections of the project or can't make
     a meeting, the rest of the group will be held up, which will lead to deadlines not being
     met and the project not being fully completed.
   • Unrealistic Schedules: Deadlines that are decided upon must be within the range given
     in section 8.0, Time Schedule. If the schedules that we give ourselves or that the client
     gives us extend these dates, this will directly result in deadlines not being met and the
     project not being successfully completed by the end of the semester.
   • Developing the Wrong Software Functions: The team must ensure that all
     requirements are developed to meet the needs of the customer. Failure to include key
     functionality will cause a need for time-consuming and expensive maintenance very late
     in the development process.
   • Developing the Wrong User Interface: Because the project is strictly an API and
     collection of classes, there is no risk associated with user interface design.
   • Gold Plating: Members of the team must ensure that they are not adding functionality
     that is not strictly required. This will use up much needed time and could cause a delay
     in project delivery.
   • Requirements Change: It is very possible that additional camera functionality will be
     added to the list of requirements during the development of this project. The chosen
     design structure for the camera must take these possible changes into consideration to
     provide easy extension.
   • External Shortfalls: While this project is only a stand-alone API, it will be up to the
     user to develop any required wrappers to provide renderer-specific interfacing with the
     deliverable.
   • Real-Time Performance Shortfalls: There are no real-time dependencies that could
     pose a risk to this project.



   • Straining Computer Science Capabilities: The outcome of this project will be a success
     provided that the requirements for decision aids do not exceed the abilities of an abstract
     API.

10.0 Future Changes

All the existing systems will remain unchanged. Any item not specifically defined here is
considered outside the scope of the project.

   • In Scope
        o Camera Module
             - More camera behaviors may be added.
        o Linear Algebra API
             - New calculations may be added.
             - New shapes may be added.
        o Conversion
             - New coordinate systems may be added to ADI.
        o Decision Aids
             - Have the TDA incorporate other outside elements (weather, physics, etc.)
               involved in the decision-making process.
             - Other services, besides optimization, may be added.
   • Out of Scope
        o Multiple users interacting with the same instance of ADI simultaneously.
        o Re-designing and re-implementing existing ADI systems in JOGL.

11.0 Acceptance Test Plan
       11.1   Camera
              11.1.1 Behaviors
                      11.1.1.1 If the user presses 'z', the camera's orientation will be set
                               with Euler pitch, yaw, and roll angles (45.0, 45.0, 0.0). The
                               user will see a red square.
                      11.1.1.2 If the user presses 'x', the camera's orientation will be set
                               with an angle of 62.79 degrees and axis vector (0.281, 0.678,
                               0.678). The user will see a red square.
                      11.1.1.3 If the user presses 'f', the camera's field of view angle will
                               be set to 50.0 degrees. The user will see a green triangle on
                               either side of the screen.
                      11.1.1.4 If the user presses 'c', the camera will be moved to the point
                               (200.0, 100.0, 50.0). The user will see a green and blue square.
                      11.1.1.5 The user will be able to lock focus on the current focus point
                               by pressing 'l'. All subsequent movement will be around this
                               point.
                      11.1.1.6 The user will be able to move the camera in all six axial
                               directions, relative to the current position and orientation of
                               the camera.



                      11.1.1.7 The user will be able to rotate the camera with six degrees of
                               freedom, relative to the current orientation of the camera, by
                               providing Euler angles of rotation.
                      11.1.1.8 The user will be able to rotate the camera with six degrees of
                               freedom, relative to the current orientation of the camera, by
                               providing an axis and angle of rotation.
11.2   Linear Algebra API
       11.2.1 2-D
               11.2.1.1 There are two airplanes flying at the same altitude. Airplane One
                        is currently at (7, 4) and Airplane Two is at (14, 2). Airplane
                        One's trajectory is x = 1, y = 3; Airplane Two's trajectory is
                        x = -2, y = 3. Will these two planes crash into each other?
                        Solving 7 + t = 14 - 2t and 4 + 3t = 2 + 3t: the second equation
                        reduces to 4 = 2, which has no solution, so they will not crash
                        into each other.
               11.2.1.2 There is an enemy territory that is a no-fly zone for us. They
                        have a headquarters at (34, 37) which creates circular radar
                        detection with a radius of 24. One of our jets has gone astray and
                        its current coordinates are (20, 56). Is the jet within the radar's
                        detection? √((34 - 20)² + (37 - 56)²) = 23.6, which is < 24, so the
                        jet is inside the no-fly zone.
               11.2.1.3 We just set up some new land mines around the base in case of
                        intruders. We found that an intruder is currently at (23, 37) with
                        a trajectory of x = 3, y = 9. Our landmine is at point (50, 119).
                        Will the intruder hit the landmine when he passes in the next 9
                        seconds? Given the trajectory, the intruder follows the path
                        y = 3x - 32. Substituting x = 50 gives y = 118, so in 9 seconds
                        the intruder will be at point (50, 118) and will not hit the land
                        mine.
               11.2.1.4 Two of our vehicles are coming home from a night mission. They
                        are both heading in the same direction and might cross paths.
                        Currently, Vehicle 1 is at point (23, 35) heading x = 1, y = 3.
                        Vehicle 2 is at point (42, 32) heading x = 3, y = -6. Will their
                        paths cross? Yes, their paths intersect at point (30, 56) given
                        their current trajectories.
               11.2.1.5 We have located an enemy radar at point (39, 23) which has a
                        circular sweeping radius of 6. One of our airplanes is heading
                        near that radar tower and might be seen. It is currently at
                        position (20, -58) heading x = 1, y = 5. Will our airplane be in
                        their radar view? Yes; given its current trajectory it will enter
                        the radar's view at point (35.26, 18.31) and leave it at
                        (37.35, 28.77).
               11.2.1.6 Our base set up a new radar system which is very close to one we
                        already have. Both have a sweeping radius of 4. Radar 1 is at
                        (20, 25) and the new Radar 2 is at (26, 25). To determine whether
                        our radars are too close to each other, we find the area of
                        intersection. The circles intersect at points (23, 27.65) and
                        (23, 22.35), which bound the intersected area.
               11.2.1.7 Our portable radar jammer is on its way to block the radar of the
                        enemy base. The enemy base is located at (35, 48) and spreads
                        out with a radius of 10 in all directions. Our radar jammer can
                        produce a jamming radius of 16 in all directions from where we
                        put it. We are putting the jammer at (32, 44). Will the jammer
                        cover the base? Yes; the base, with circle equation
                        (x - 35)² + (y - 48)² = 10², is enclosed by the jammer's circle
                        (x - 32)² + (y - 44)² = 16². There are no intersection points.
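Several of the 2-D cases above (e.g. 11.2.1.1) reduce to asking whether a single time t satisfies the motion equations on every axis. A hypothetical sketch of that check, with names invented for illustration rather than taken from the delivered API:

```java
// Sketch of the moving-point meeting test behind acceptance case 11.2.1.1:
// p1 + t*v1 == p2 + t*v2 must hold on every axis for the same t >= 0.
public class CollisionCheck {
    // Returns the meeting time t >= 0, or Double.NaN when the points never
    // meet. (Identical position and velocity also yields NaN in this sketch.)
    public static double meetingTime(double[] p1, double[] v1, double[] p2, double[] v2) {
        double t = Double.NaN;
        for (int i = 0; i < p1.length; i++) {
            double dv = v1[i] - v2[i], dp = p2[i] - p1[i];
            if (dv == 0) {
                if (dp != 0) return Double.NaN;   // this axis can never agree
            } else {
                double ti = dp / dv;
                if (ti < 0) return Double.NaN;    // would have met in the past
                if (Double.isNaN(t)) t = ti;
                else if (Math.abs(t - ti) > 1e-9) return Double.NaN;  // axes disagree
            }
        }
        return t;
    }
}
```

With the data from 11.2.1.1 (points (7, 4) and (14, 2) with velocities (1, 3) and (-2, 3)), the x axis requires t = 7/3 while the y axis can never agree, so no meeting time exists.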
11.2.2 3-D
        11.2.2.1 We just sent out our unmanned aerial vehicle to scout for
                 advancing intruders by land and air. Currently our unmanned
                 aerial vehicle is at point (25, 32, 15) and has a spherical area of
                 vision of radius = 15. The advancing intruder is currently at
                 position (26, 32, 0). Will they be spotted? No, they will not.
                 The spherical radius is only 15, and the intruder's distance =
                 √[(25-26)² + (32-32)² + (15-0)²] ≈ 15.033.
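This is a point-in-sphere test; a minimal sketch (illustrative only):

```python
import math

def in_sphere(center, radius, point):
    """Return (inside?, distance) for a point against a sphere."""
    d = math.dist(center, point)
    return d <= radius, d

# UAV at (25, 32, 15) with spherical vision radius 15; intruder at (26, 32, 0).
spotted, dist = in_sphere((25, 32, 15), 15, (26, 32, 0))   # dist ~ 15.033 > 15
```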
        11.2.2.2 Our tower has an effective viewing range at night of 5 units. The
                 tower is at point (32, 15, 5). An enemy plane is approaching
                 from (27, 10, 15) at a trajectory x=-1, y=-5, z=2. Will the plane
                 cross the boundary? No; along its trajectory the plane only
                 moves farther from the tower. Its closest distance is about
                 12.25 units, well outside the 5-unit range, so it will not be seen.
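Whether the plane ever enters the tower's range is a closest-approach computation on a ray; clamping t to zero reflects that the plane only travels forward from its current position. A sketch (names ours):

```python
import math

def ray_point_min_distance(origin, direction, point):
    """Minimum distance from a point to the ray origin + t*direction, t >= 0."""
    rel = [p - o for p, o in zip(point, origin)]
    dd = sum(d * d for d in direction)
    t = max(0.0, sum(r * d for r, d in zip(rel, direction)) / dd)
    closest = [o + t * d for o, d in zip(origin, direction)]
    return math.dist(closest, point)

# Plane at (27, 10, 15) heading (-1, -5, 2); tower at (32, 15, 5), range 5.
d = ray_point_min_distance((27, 10, 15), (-1, -5, 2), (32, 15, 5))   # ~ 12.25
```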
        11.2.2.3 We have a long range missile heading toward an enemy
                 helicopter that is stationary in the air. The missile was fired
                 from our base at (0, 0, 0). Its trajectory is x=1, y=4, z=2. The
                 helicopter is at point (5, 20, 12). Will the missile hit the
                 stationary helicopter within the 5 seconds before it detonates?
                 In 5 seconds the missile will be at (5, 20, 10), so it will not hit
                 the helicopter in the given time.
        11.2.2.4 Two of our planes are coming home from a night mission. Their
                 lights were both broken in the mission. Currently they are both
                 heading home, but they don't know if they will collide on their
                 current trajectories. Plane 1 is currently at (22, 45, 19) heading
                 toward point (34, 57, 19). Plane 2 is currently at (36, 41, 19)
                 heading toward point (2, 58, 19). If the planes stay on their
                 current trajectories, they will crash into each other at point (24,
                 47, 19).
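Because both planes fly at altitude 19, the crash point is the intersection of their 2-D ground tracks. A sketch using a standard 2x2 solve (function name ours):

```python
def path_intersection(p1, q1, p2, q2):
    """Intersection of the line through p1-q1 with the line through p2-q2,
    or None if the paths are parallel."""
    d1 = (q1[0] - p1[0], q1[1] - p1[1])
    d2 = (q2[0] - p2[0], q2[1] - p2[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if denom == 0:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Plane 1: (22, 45) -> (34, 57); Plane 2: (36, 41) -> (2, 58), both at z = 19.
hit = path_intersection((22, 45), (34, 57), (36, 41), (2, 58))   # ~ (24, 47)
```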
        11.2.2.5 There is a missile heading toward the base. We have a helicopter
                 hovering in the field that may be able to intercept the missile if
                 it comes into range. The helicopter is hovering at point (12, 16, 5)
                 and has an effective range of 6. The missile is currently at point
                 (36, 40, 11) heading at x=-1, y=-1, z=0. The helicopter will be
                 able to shoot the missile down: the missile will come into range
                 at point (12, 16, 11).
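The contact point comes from solving the ray-sphere quadratic; a zero discriminant here means the missile just grazes the edge of the helicopter's range. Sketch (illustrative only):

```python
import math

def ray_sphere_contact(origin, direction, center, radius):
    """Smallest t >= 0 where the ray origin + t*direction touches the sphere,
    or None if it never comes within range."""
    rel = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(r * d for r, d in zip(rel, direction))
    c = sum(r * r for r in rel) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# Missile at (36, 40, 11) heading (-1, -1, 0); helicopter at (12, 16, 5), range 6.
t = ray_sphere_contact((36, 40, 11), (-1, -1, 0), (12, 16, 5), 6)
point = (36 - t, 40 - t, 11)   # ~ (12, 16, 11)
```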
        11.2.2.6 There is an enemy radar jammer at the point (0, 15, 0) that can
                 jam radar signals within 5 units. A friendly radar tower is at the
                 point (5, 15, 0) with an effective range of 15 units. Will this
                 friendly radar tower's coverage be jammed by the enemy radar
                 jammer? Yes. The centers are 5 units apart, so the jammer's
                 5-unit sphere lies entirely inside the tower's 15-unit range:
                 approximately 523.6 cubic units of the tower's coverage volume
                 are jammed, leaving about 13,613.6 of its 14,137.2 cubic units
                 usable.
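A sketch of the volume computation, assuming (as the geometry above implies) that the jammer's sphere is fully contained within the tower's range:

```python
import math

# Enemy jammer: center (0, 15, 0), radius 5; friendly tower: center (5, 15, 0), radius 15.
d = math.dist((0, 15, 0), (5, 15, 0))
fully_contained = d + 5 <= 15              # jammer sphere lies inside tower range

jammed = (4 / 3) * math.pi * 5 ** 3        # ~ 523.6 units cubed
total = (4 / 3) * math.pi * 15 ** 3        # ~ 14137.2 units cubed
remaining = total - jammed                 # ~ 13613.6 units cubed
```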
        11.2.2.7 Our current radar, radar 1, has a detection radius = 12 in all
                 directions and is located at (7, 13, 10). We have a new radar
                 unit, radar 2, located at (9, 15, 20), which has a superior range
                 of radius = 20 and which we are placing near radar 1. We want
                 to know if radar 2 covers all the area of radar 1 so we can get
                 rid of radar 1. Radar 2 does not cover all the area of radar 1:
                 the centers are √108 ≈ 10.39 units apart, and 10.39 + 12 = 22.39
                 exceeds 20.
11.3   Conversion
        11.3.1 Given the Cartesian coordinates (5000000 m, 3500000 m, 3000000 m),
               the corresponding WGS84 coordinates are latitude 26.319°, longitude
               34.992°, and height 426775.1 m.
        11.3.2 Given the WGS84 coordinates latitude 44°, longitude 40°, and height 50 m,
               the corresponding (X, Y, Z) Cartesian coordinates are approximately
               (3520366.6 m, 2953938.3 m, 4408126.4 m).
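The forward conversion can be sketched with the standard WGS-84 closed-form equations (this is a textbook formulation, not ADI's implementation; the ellipsoidal prime-vertical radius N is used, so outputs may differ slightly from tool-specific results):

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                 # semi-major axis (m)
F = 1 / 298.257223563         # flattening
E2 = F * (2 - F)              # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert WGS-84 latitude/longitude/height to Earth-centered (X, Y, Z)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

# Round trip of 11.3.1: should land near (5000000, 3500000, 3000000)
xyz_a = geodetic_to_ecef(26.319, 34.992, 426775.1)
# Forward conversion of 11.3.2
xyz_b = geodetic_to_ecef(44.0, 40.0, 50.0)
```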
11.4   Decision Aids
       11.4.1 2-D
              11.4.1.1 An area with the shape of a square is created with length = 4
                        units. The top left corner of the area will be placed on the point
                        (0, 0) with the top side of the area parallel to the X-axis. A line is
                        created with length of 4 units from (2,-2) to (2, 2). The user calls
                        the coverage method and functional requirement 4.4.1.1 will be
                        called. The TDA will output “2 units”.
              11.4.1.2 An area with the shape of a square is created with length = 4
                        units. The top left corner of the area will be placed on the point
                        (0, 0) with the top side of the area parallel to the X-axis. A
                         square is created with length = 2 units. The top left corner of the
                        square will be placed on the point (-1,-1) with the top side of the
                        square parallel to the X-axis. The user calls the coverage method
                        and functional requirement 4.4.1.2 will be called. The TDA will
                        output “2 units squared”.
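Both 2-D coverage outputs can be checked with axis-aligned rectangle arithmetic. The top-left/side-length convention below (the square extends in +x and -y from its top left corner) is our reading of the scenarios above, not an ADI-specified API:

```python
def rect_from_top_left(top_left, side):
    """Axis-aligned square as (xmin, xmax, ymin, ymax), extending +x and -y."""
    x, y = top_left
    return (x, x + side, y - side, y)

def overlap_area(r1, r2):
    """Area of intersection of two axis-aligned rectangles."""
    w = min(r1[1], r2[1]) - max(r1[0], r2[0])
    h = min(r1[3], r2[3]) - max(r1[2], r2[2])
    return max(0, w) * max(0, h)

def vertical_segment_in_rect(x, y0, y1, r):
    """Length of the vertical segment at x over [y0, y1] lying inside r."""
    if not (r[0] <= x <= r[1]):
        return 0
    return max(0, min(y1, r[3]) - max(y0, r[2]))

area = rect_from_top_left((0, 0), 4)
# 11.4.1.1: line from (2, -2) to (2, 2) -> 2 units inside the area
covered_len = vertical_segment_in_rect(2, -2, 2, area)
# 11.4.1.2: square of side 2 with top left corner at (-1, -1) -> 2 units squared
covered_area = overlap_area(area, rect_from_top_left((-1, -1), 2))
```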
       11.4.2 3-D
              11.4.2.1 A volume with a shape of a cube will be created with length = 4.
                        The origin (starting point) of the volume will be placed on the
                        point (0, 0, 0) with the top side of the volume parallel to the X-
                         axis. A cube is created with length = 2 units. The starting point
                         of the cube will be placed on the point (-1, -1, 2). The user calls
                        the coverage method and functional requirement 4.4.2.1 will be
                        called. The TDA will output “2 units cubed”.
               11.4.2.2 A volume with a shape of a cube will be created with length = 4.
                        The origin (starting point) of the volume will be placed on the
                        point (0, 0, 0) with the top side of the volume parallel to the X-
                        axis. A cube is created with length = 2 units. The origin (starting
                        point) of the cube will be placed on the point (-1, -1, 0). The user
                        calls the coverage method and functional requirement 4.4.2.2
                        will be called. The TDA will output “2 units cubed”.
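Both 3-D outputs are consistent with treating the starting point as the minimum corner of an axis-aligned cube; that convention is assumed (not confirmed by the SRS) in this sketch:

```python
def box(corner, side):
    """Axis-aligned cube from its starting-point corner, extending +x, +y, +z."""
    return [(c, c + side) for c in corner]

def overlap_volume(b1, b2):
    """Volume of intersection of two axis-aligned boxes."""
    v = 1
    for (lo1, hi1), (lo2, hi2) in zip(b1, b2):
        v *= max(0, min(hi1, hi2) - max(lo1, lo2))
    return v

big = box((0, 0, 0), 4)
v1 = overlap_volume(big, box((-1, -1, 2), 2))   # 11.4.2.1 -> 2 units cubed
v2 = overlap_volume(big, box((-1, -1, 0), 2))   # 11.4.2.2 -> 2 units cubed
```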
11.4.3 Optimization
        11.4.3.1 Spotlights are required on the premises for guards to see at night.
                We need to find optimal positions for 2 spotlights. Both the
                lights have a lighting radius = 5 units. We call the optimize
                method which calls functional requirement 4.4.3.1. The optimal
                positions for the two lights are found which gives the most
                lighting.
       11.4.3.2 We need to implement a motion sensor system in a secure
                building. The system will consist of 10 motion sensors. The
                motion sensors have a monitored range of 30 ft. x 30 ft. We need
                to find an optimal layout for the motion detectors to have the
                whole building secured by the motion sensors. We call the
                 optimize method, which calls functional requirement 4.4.3.2. The
                 optimal positions for the ten motion sensors are found, giving
                 the most volume covered by the motion sensors.
       11.4.3.3 We have installed a new radar detection system close to the
                current operational radar system. The current radar system is
                placed at (46, 21) and the new radar system is placed at (42, 18).
                Both radar systems have a coverage radius = 5. We call the
                 optimize method, which calls functional requirement 4.4.3.3.
                 This requirement decides whether the radar systems are
                 functional at their current positions, or whether they are too
                 close in proximity and need to be moved apart.
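The SRS does not state the exact criterion requirement 4.4.3.3 applies; one plausible reading, sketched here for illustration, is that the radars are too close when their coverage circles overlap:

```python
import math

def circles_overlap(c1, r1, c2, r2):
    """True if the two coverage circles overlap (centers closer than r1 + r2)."""
    return math.dist(c1, c2) < r1 + r2

# Current radar at (46, 21), new radar at (42, 18), both with coverage radius 5.
too_close = circles_overlap((46, 21), 5, (42, 18), 5)   # centers are 5 apart
```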
       11.4.3.4 A no fly zone is created with a radius of 10 in all directions from
                a center tower. The no fly zone is centered at (0, 0, 10). Our
                airplane detections systems need to be spread out well in order to
                cover our no fly zone. Each radar has a detection distance of 3 in
                all directions. Currently two radars are placed at (1, 1, 10) and (-
                1,-1, 10). We call the optimize method which calls functional
                requirement 4.4.3.4. The method shows us whether or not the
                radars are placed in proper locations.
11.4.4 General
       11.4.4.1 An area and 3 shapes are created, shape A, shape B, and shape
                C. The user uses the optimize method to find the best coverage
                of the 3 shapes in the area. The optimize method places shape C
                in position (0,3).The user knows the positioning of shape C is
                not possible due to certain scenario circumstances. The user is
                able to move the shape C to position (4, 4) which gives an
                almost optimal coverage.
12.0 Training

       Developers using these APIs will be provided documentation that adheres to Javadoc
        standards.
       The provided documentation will explain how to implement code using these APIs.

13.0 Glossary

2-D (Two-Dimensional): Describes a model in two dimensions, (x, y for instance). This is the
minimum number of coordinates needed to describe objects such as a surface.

3-D (Three-Dimensional): Describes a model in three dimensions, (x, y, z for instance). These
coordinates commonly refer to the length, width, and depth of an object. This is the minimum
number of coordinates needed to describe cylinders, cubes and spherical objects.

ADI: This is the abbreviation for MSE's Advanced Display Infrastructure. ADI is a collection of
services that provides system independent display capabilities.

API (Application Programming Interface): An interface which defines how an application
uses libraries and operating system services.

Cartesian Coordinates: Specifies the position of a point uniquely in 2D or 3D space by means
of numerical coordinates, which are the signed distances from the fixed point of origin, located at
the intersection of all axes.

Cube starting point: By convention, the corner of the cube that is at the top left and closest
to the viewer when looking down on the cube.

Euler Angles: Describe the orientation of an object relative to its position in three-
dimensional space. See Pitch, Roll, and Yaw.

Pitch: The camera pitch describes camera rotation moving up and down (rotating around its
local X-axis).

Roll: The motion describing tilting the camera (rotating around its local Z-axis).

Tactical Decision Aids (TDA): A support system which aids a user in making a tactical or
strategic decision.

WGS-84 (World Geodetic System): The world geodetic system is a standard coordinate system
model for the earth, with its origin at the center of the earth. The system was created in 1984
and last updated in 2004.
Yaw: The angle of rotation when moving the camera left or right (rotating around its local Y-
axis).

14.0 References

      http://www.oc.nps.edu/oc2902w/coord/llhxyz.htm
      http://www.colorado.edu/geography/gcraft/notes/datum/gif/xyzllh.gif

15.0 Prototype

       A prototype demonstrating basic camera functionality will be shown during a
        customer meeting on Tuesday, October 27th, 2009 at 2:00 PM at Rowan University,
        Robinson Hall, 3rd Floor Computer Science Conference Room.

				