Android Application Framework
Christopher Fontaine
03/29/2010
Project Snapshot

(Study into the History of OS X Added and Ongoing)


Part 1: A brief review of the history of the development of the system.

Section 1: Android OS

      Android is the name of a Linux-derived operating system created for use
with mobile devices. Development of the Android OS began back in July 2005,
when Google Inc. purchased Android Inc., a startup company based in Palo
Alto, California. Andrew Rubin, co-founder of Android Inc., was tasked with
leading the team to develop a mobile platform through which Google could
leverage its formidable search capability in an increasingly competitive
smartphone market. Due to its open-source nature, its upgradeability and
flexibility were advertised heavily to mobile phone makers and carriers.

       The Android Operating System was officially unveiled on November 5th,
2007, alongside the creation of the Open Handset Alliance, a group of
companies dedicated to developing and promoting open standards for mobile
devices. The Android Software Development Kit was released a week later on
November 12th. Since then, a number of updates have been released for the
Android OS, beginning with the April 30th, 2009 release of the 1.5 update,
codenamed Cupcake. This was followed by the September 2009 release of the 1.6
update, codenamed Donut. The most recent update, on October 26th, pushed
Android's version number to 2.0. Codenamed Éclair, this update is based on
Linux kernel version 2.6.29, rather than 2.6.27.

Section 2: iPhone OS

      The iPhone OS is a derivation of Mac OS X, and therefore they share
many critical components, such as the OS X kernel itself, BSD sockets for
networking, the Cocoa API, and Objective-C and C/C++ compilers. The most
critical difference between the two is streamlining. The iPhone OS has many
applications and other data, such as drivers, fonts, and screensavers,
stripped out as a space-saving measure. In addition, the iPhone OS uses a
touchscreen-optimized UI in place of the desktop-oriented interface of OS X.

      Mac OS X and the iPhone OS both use the XNU (“X is Not Unix”) operating
system hybrid kernel. XNU was originally developed by a computer company
called NeXT, created by Steve Jobs in 1985, for use in their NeXTSTEP
operating system. After NeXT was acquired by Apple Inc. in late 1996, XNU was
updated using the Mach 3.0 operating system microkernel, parts of the FreeBSD
operating system, and a C++ API for driver development called I/O Kit. This
new operating system would be known, initially, as Rhapsody. From there,
additional APIs, such as Carbon which provided backwards compatibility for
older Mac OS 8 and 9 applications, and support for more programming
languages, such as C, Objective-C and Java, were added. Rhapsody's core
components would be split off into the open-source Darwin operating system,
while Apple's new operating system would be labeled and released as Mac OS X
Server 1.0. One year later, this would be followed by a public beta of Mac OS
X, codenamed Kodiak, for the desktop. One year after that Mac OS X v10.0,
codenamed Cheetah, would be released. Six months later, v10.1, codenamed
Puma, would be unveiled. Versions 10.2 “Jaguar” to version 10.6 “Snow
Leopard” would be released in roughly 1 year intervals from that point
forward. The iPhone OS was originally revealed on January 9th, 2007. Since
then, it has undergone 3 major releases, with a 4th that is currently in
development.

Part 2: A description of the hardware each system is running on

We will take a moment to look at the platforms each OS runs on. Because the
Android OS is available on multiple hardware platforms, we will simply choose
one and use that as a basis for how other hardware platforms might be
developed.

Section 1: Google Nexus One

      The Google Nexus One is a smartphone developed by Google Incorporated
and manufactured by HTC (High Tech Computer) Corporation based in Taiwan. It
runs the Android 2.1, codenamed Éclair, operating system. The Nexus One is an
unlocked device, meaning it can run on a variety of network providers.
Currently T-Mobile and AT&T offer the smartphone, with Verizon and Vodafone,
based in Europe, to follow later this year. The Nexus One was originally
released on January 5th, 2010 with a total of 135,000 units sold during a 74-
day period. This number drastically pales in comparison to the 1st generation
iPhone and Motorola Droid, which sold 1 million and 1.05 million units during
their first 74-day periods respectively. It is suspected that the Nexus One's
lack of advertising in comparison to its two major competitors is what
resulted in its poor sales performance.

      The Nexus One utilizes the Qualcomm QSD8250, codenamed Snapdragon, SoC
(System-on-Chip). The Snapdragon SoC integrates a 600 MHz DSP (Digital Signal
Processor), a 3G modem, 802.11 b/g Wi-Fi, Bluetooth 2.1+EDR, Standalone and
Assisted GPS provided by Qualcomm's gpsOne-branded engine, quad-band mobile
telephony and broadband support that allows access to GSM, GPRS, HSPA and
EDGE networks, and an ATI-developed Adreno 200 GPU which provides up to 720p
display support and various types of hardware-based media decoding. With the
overall features of the Qualcomm SoC outlined, we will now take a more in-
depth look at the Scorpion CPU, which is the heart and brain of the Nexus One
smartphone.

      At the heart of the Qualcomm SoC is a 1 GHz Scorpion Core, created on a
65nm fabrication process. The Scorpion Core incorporates a superscalar, in-
order, dual-issue CPU utilizing the ARM version 7 instruction set. It
resembles a hybrid of a Cortex A8 and A9 processor, which both use the ARM
instruction set as well. Direct memory access is provided by a 32-bit LPDDR1
(Low Power DDR1 or Mobile DDR1) interface. Also included are L1 and L2
caches, a trace cache, and a set of NEON (a market name for an advanced
single instruction, multiple data instruction set that allows for
acceleration of media and signal processing applications) and VFP (Vector
Floating Point) version 3 extensions collectively labeled the VeNum
Multimedia Engine. Besides the Qualcomm SoC, the Nexus One implements several
other pieces of hardware that support the SoC as well as provide additional
functionality.




      The Qualcomm RTR6285 is a radio frequency transceiver chip that
provides multi-band support for UMTS (Universal Mobile Telecommunications
System) and Enhanced GPRS (General Packet Radio Service) networks. This chip
supports the QSD8250 SoC, as the aforementioned HSPA (High Speed Packet
Access) falls under UMTS and GSM, GPRS, and EDGE all fall under Enhanced
GPRS. It also features Receive Diversity, which allows it to combine RF signals
from multiple antennas in order to increase overall signal strength, as well
as the ability to receive GPS signals.

      There are two power management integrated chips. The first is the
Qualcomm PM7540, which provides power management for various onboard devices
and functions such as the vibrator, keypad backlight, the AM-OLED screen, the
onboard camera, charger detection and LED flashbulb. It also acts as a USB
transceiver. The second PMIC is the Texas Instruments TPS65023, which
directly manages the Nexus One's 1400 mAh, 3.7 V lithium-ion battery. Three
power rails provide power to the CPU, peripherals, I/O, and memory.

      Internal storage is provided by a Samsung 943 KA1000015M-AJTT multichip
package that houses 4 gigabits of NAND flash storage as well as 4 gigabits of
Mobile DDR system memory. A MicroSD card slot that is equipped with an
included 4GB card provides additional storage; cards of up to 32GB are
supported.

      The Nexus One uses a 3.7-inch AM-OLED (Active Matrix – Organic Light
Emitting Diode) display developed by Samsung. It has a resolution of 480x800,
resulting in 252 pixels per inch. A capacitive multi-touchscreen developed by
Synaptics is layered on top. A 5.0 megapixel camera and video recorder,
capable of an image capture resolution of 2592x1944 pixels or video recording
at 720x480 pixels at 20+ frames per second, is accompanied by an LED
flashbulb. Other components include a GSM power amplifier, a voice processor
that includes ambient noise cancellation, and a separate low-power 802.11n
and Bluetooth 2.1+EDR transceiver chip.

      The overall BOM (Bill of Materials) cost for the manufacture of a single
Google Nexus One is $174.15.




Section 2: iPhone 3GS

      The iPhone 3GS is the latest version of the iPhone, which was developed
by Apple Inc. and manufactured by the Foxconn Technology Group based in
China. The original iPhone was released June 29th, 2007 and sold 1 million
units in its first 74 days. The 3GS was released June 19th, 2009 and sold over
1 million units in its first 3 days. It originally ran the iPhone OS ver.
3.0, which was launched at the same time as the 3GS, and now runs the latest
iPhone OS ver. 3.1.3. Like previous iPhones, it is exclusive to AT&T. It
features a faster processor, higher resolution camera, and support for higher
speed data transfers compared to its predecessors.

      The iPhone 3GS uses a derivative of the Samsung S5PC100 system-on-chip,
which consists of a Cortex-A8, a dual-issue, 13-stage CPU using the ARM
instruction set and built on a 65nm process, running at 600 MHz. It features a
32KB L1 I-cache, 32KB L1 D-cache, a 256KB L2 cache, and the same NEON
extensions found in the Scorpion Core. Memory access is provided by a 32-bit
interface to 2 gigabits of embedded DRAM. In addition, there is an external
multichip package containing 128 megabits of NOR flash which contains the OS
and 512 megabits of mobile DDR system memory. As well, 16GB of MLC NAND flash
acts as secondary storage for applications and data. The SoC also houses a
USB transceiver.




      The iPhone 3GS, unlike the Nexus One, boasts dedicated video and audio
facilities (though the GPU is still built into the SoC). The PowerVR SGX 535
GPU is built on a 65nm process, uses a fully programmable universal shader
architecture, and operates at 200MHz. It offers over 7 times the performance
of the GPU found in its predecessor, the 3G. It supports OpenGL ES 1.0 and
2.0, OpenCL, DirectX 10.1, and Shader Model 4.1. A Cirrus Logic ultra-low-
power audio codec handles the speaker, headphone port, and mic.

      There are four signal power amplifiers present on the 3GS. Three WCDMA
and HSUPA amplifiers manufactured by Triquint, and one GSM and EDGE amplifier
manufactured by Skyworks. They all feed into an RF transceiver manufactured
by Infineon on a 130nm process. It features quadband GSM and EDGE support,
and tri-band WCDMA (Wideband Code Division Multiple Access) and HSDPA
support. This, in turn, feeds into a digital baseband processor, also
manufactured by Infineon, that is composed of an ARM926 core and an ARM7
core. The 3GS's GPS receiver is also manufactured by Infineon, and feeds into
a Broadcom 4325 Wi-Fi and Bluetooth transceiver that supports the 802.11 b/g
and Bluetooth v2.1+EDR standards, along with support for FM radio. Two power
management integrated chips, one developed by Dialog Semiconductor and the
other by Infineon, manage power for the Samsung SoC and the RF chain just
described, respectively. An AKM Semiconductor 3-axis electronic compass
(which also integrates an ADC (analog-to-digital converter), DAC (digital-to-
analog converter), and temperature sensor) and an STMicroelectronics 3-axis
accelerometer round out the internal hardware for the 3GS.

      The 3GS uses a 3.5-inch IPS (In-Plane Switching) LCD display with a
resolution of 320x480 pixels, resulting in 163 pixels per inch. A capacitive
multi-touch screen is layered on top. Both of these components are
manufactured by Toshiba. A 3.0 megapixel camera and VGA resolution video
recorder is included which possesses facilities for geotagging and automatic
focus, white balance, and exposure.

      The overall BOM cost for the manufacture of a single iPhone 3GS is
estimated to be $178.96.

Part 3: A description of each operating system

Section 1: Android OS

      The Android system architecture is broken up into several layers.
Starting from the bottom to the top they are the Linux 2.6.29 kernel, which
handles device drivers, memory management, process management, power
management and networking; the native Android libraries which handle window
composition, 2D and 3D graphics, media codecs, storage, and web rendering;
the Android runtime, which contains a register-based, slimmed down Java
virtual machine called Dalvik as well as the core Java libraries; the
Application Framework layer, which holds the vital Activity Manager that
manages application life cycles and user navigation; and finally the
Applications layer, where application code resides. As we analyze Android and
its various mechanisms, we will be stepping in and out of these layers as we
go along. A description of application fundamentals, including inter-process
communication and thread management; memory management; networking support;
power management; and the Android SDK will be given.




      All applications in Android run in their own Linux process, with an
associated thread, which is tagged with a Linux user ID, and each process
exists inside its own Dalvik virtual machine. In addition, default
permissions are set so application data is only visible to the user ID
assigned to the application's process, basically to the application itself.
This means that all application code and data exists wholly separate from any
other application. While it is possible to arrange for applications to share
the same user ID, Linux process, and/or virtual machine, normally
applications are required to communicate with each other via message passing
in the form of remote procedure calls. However, before we get into inter-
process communication, it is important to know the various components of an
application, as this will determine how they can communicate.

      Android applications all have the capability of using 'parts' of other
Android applications. For example, let's say application A1 has need of a
certain type of GUI, and that GUI has already been implemented in application
A2. Rather than re-writing the GUI, A1 can simply use the GUI of A2. This is
not an example of incorporating application code or linking. Rather, A1
simply starts that specific 'piece' of A2 and goes from there. In order for
this to be possible, all Android applications exist as a set of 'components'.
This means that, unlike traditional software, there is no set starting point
for the program, such as main(). Rather, when an application is started, the
necessary components are started as needed. There are four types of
application components in Android: Activities, Services, Broadcast
Receivers, and Content Providers. These components, or base
classes, all implement a series of subclasses and methods that perform their
various functions. All components are executed in the main thread of the
application.

      Activities are user interfaces for specific actions within an
application. Using a text messenger as an example; one activity displays a
list of contacts, another activity sends a message to the chosen contact, yet
another activity reviews old messages that have been sent or received
previously. An application can have one activity or several. Typically, a
specific activity is marked to start when the application does, and any
further activities are started from within the current one. Each activity is
usually given one or more windows to act in, either full screen or windowed
on top of others. The actual visual content of the windows are managed by the
View base class, with a view managing a particular area of the window. These
views can be organized within a hierarchy, with parent views containing and
organizing child views. Views located at the bottom of the hierarchy, leaf
views, are where the actual user interaction with the activity takes place.
For example, a user might be presented with an image, and an action is
initiated when the user taps that image. The 'content view' is the view
located at the root of the view hierarchy.
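The parent/child view structure described above can be sketched as a simple tree in plain Java. The names below (ViewNode, leaves()) are invented for this sketch and are not part of Android's real View API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the Android View API): parent views contain and
// organize child views, and the leaf views are where user interaction lands.
class ViewNode {
    final String name;
    final List<ViewNode> children = new ArrayList<>();

    ViewNode(String name) { this.name = name; }

    ViewNode addChild(ViewNode child) { children.add(child); return this; }

    boolean isLeaf() { return children.isEmpty(); }

    // Depth-first walk collecting the leaf views: the interactive elements.
    List<ViewNode> leaves() {
        List<ViewNode> out = new ArrayList<>();
        if (isLeaf()) { out.add(this); return out; }
        for (ViewNode c : children) out.addAll(c.leaves());
        return out;
    }
}
```

Here the root ViewNode plays the role of the content view, and a tap on an image would be handled by the leaf that represents it.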

      Services can be described as activities that do not require a user
interface. They run in the background, possibly indefinitely. Examples of
services might be something that downloads a file over a network, or
calculates a result and makes it available for any activity that needs it. A
media player, for example, might have several activities that allow the user
to look at a list of songs and select one for playback. The actual act of
playing the music is left to a service, which allows the music to continue
playing even if the user moves on to another application. An application can
have one or several services running. It is possible to start a service or
connect to an ongoing one. Communicating with a connected service uses
whatever interface that service exposes. Like all other components, services
run in the main thread, however additional threads can be spawned to handle
more computationally intensive services.
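The pattern just described, a service handing heavy work to a spawned thread so the caller stays responsive, can be sketched in plain Java. The class and method names below are hypothetical stand-ins, not Android's Service API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-in for a background service (not Android's Service
// class): the heavy work runs on a spawned worker thread, so the thread
// that started the service stays responsive.
class BackgroundTask {
    private final AtomicLong result = new AtomicLong(-1);
    private final CountDownLatch done = new CountDownLatch(1);

    // "Start the service": hand the slow computation to a worker thread.
    void start(final long n) {
        Thread worker = new Thread(() -> {
            long sum = 0;                 // stand-in for a download or other slow job
            for (long i = 1; i <= n; i++) sum += i;
            result.set(sum);
            done.countDown();             // signal completion to any waiter
        });
        worker.start();
    }

    // The interface the "service" exposes to clients that want the result.
    long awaitResult() throws InterruptedException {
        done.await();
        return result.get();
    }
}
```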

      Broadcast Receivers, as the name implies, receive and react to
broadcasted announcements. Such announcements usually originate from system
code, such as a change in time zone, a low battery, or some change in user
preference. An application can have any number of broadcast receivers, each
allowing the application to respond to a particular announcement. While a
broadcast receiver usually does not have a user interface, it is capable of
starting an activity that does.

      Content Providers make certain application data available to other
applications. This data can be stored in the file system, a database, or
some other accessible storage medium. Like all the components, the Content
Provider is a base class that implements a set of methods that allow this
data to be accessed by other applications. However, these methods cannot be
directly accessed. Instead, a Content Resolver, which can talk to any Content
Provider, acts as a medium between the provider and applications that need
its data.
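The provider/resolver split can be sketched as follows; the names (SimpleProvider, SimpleResolver, the "authority/key" request form) are invented for this example, not Android's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the provider/resolver split: clients never call a
// provider directly; the resolver routes each request to the provider
// registered for the data's authority.
interface SimpleProvider {
    String query(String key);   // a provider exposes its data through methods like this
}

class SimpleResolver {
    private final Map<String, SimpleProvider> providers = new HashMap<>();

    void register(String authority, SimpleProvider p) { providers.put(authority, p); }

    // A request like "contacts/alice" is split into an authority ("contacts")
    // and a key ("alice"), then forwarded to the matching provider.
    String query(String uri) {
        int slash = uri.indexOf('/');
        SimpleProvider p = providers.get(uri.substring(0, slash));
        return p == null ? null : p.query(uri.substring(slash + 1));
    }
}
```

The point of the indirection is that a client only ever needs the resolver; which provider actually serves the data is decided by registration, not by the caller.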

      As stated before, components of an application are started as
necessary, rather than the application being started from some specific
point. Similarly, they can be stopped as necessary. Content Providers, as
alluded to above, are started by a Content Resolver. The remaining three
components are started by an intent, which belongs to the Intent base class
and is a type of asynchronous message. An intent names the action being
requested and specifies the location of the data to act on. As stated before,
activities are typically launched from other activities, and there are
separate methods available for launching activities with an expectation of
receiving a result or not. Similarly, there are separate methods available
for starting a service and binding to a service that is already started. This
establishes an ongoing connection between the caller and the service. Intents
can also be broadcast, in which case they will be received and acted upon by
all listening Broadcast Receivers. Broadcast Receivers and Content Providers
are only active when responding to a message, and do not need to be turned
off explicitly, unlike the remaining components. There are separate methods
for shutting down activities and services. Activities and services can shut
down themselves or other activities and services.
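The intent mechanism above can be sketched as a small router: an intent names an action and the data to act on, and broadcasting it activates every receiver registered for that action. The class names here (SimpleIntent, IntentRouter) are illustrative, not the Android Intent API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch of intent-style activation: components register for an
// action; delivering an intent for that action activates all of them.
class SimpleIntent {
    final String action;
    final String data;
    SimpleIntent(String action, String data) { this.action = action; this.data = data; }
}

class IntentRouter {
    private final Map<String, List<Consumer<SimpleIntent>>> receivers = new HashMap<>();

    void register(String action, Consumer<SimpleIntent> receiver) {
        receivers.computeIfAbsent(action, a -> new ArrayList<>()).add(receiver);
    }

    // Deliver the intent to every receiver listening for its action;
    // returns how many receivers were activated.
    int broadcast(SimpleIntent intent) {
        List<Consumer<SimpleIntent>> matched = receivers.getOrDefault(intent.action, List.of());
        for (Consumer<SimpleIntent> r : matched) r.accept(intent);
        return matched.size();
    }
}
```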

      As stated before, all components run in the main thread of a process.
This can very easily put a large burden on that thread and cause the
application to be slow or non-responsive. To resolve this, it is possible to
allow components to run on separate processes as well as spawn worker threads
for any process. Each component has an attribute that determines what process
that component runs on. Certain components can run on separate processes, or
even share a single process. Components of different applications are even
capable of running in the same process, provided that their associated
applications share the same Linux user ID. Processes can be shut down by the
system, such as in the event of a low battery or low system memory. In this
case, any components associated with that process are destroyed. However, the
process can be restarted when necessary.

      Additional threads can be created to perform tasks that are time
consuming or computationally intensive, in order to relieve the burden on the
main thread and keep the application responsive. Threads are created using
the Thread base class, which provides a number of sub-classes for managing
threads, such as running a message loop within a thread, processing messages,
or creating a thread within a message loop. A message loop continually
receives, processes, and sends messages to be processed by other threads.
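The message-loop idea can be sketched with a blocking queue: a dedicated thread takes messages in arrival order and exits on a sentinel. This is a conceptual sketch, not Android's Looper/Handler classes:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative message loop: a dedicated thread blocks on a queue, handles
// messages in arrival order, and exits when it sees a sentinel message.
class MessageLoop {
    private static final String QUIT = "__quit__";
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final StringBuilder handled = new StringBuilder();
    private final Thread looper;

    MessageLoop() {
        looper = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();      // block until a message arrives
                    if (msg.equals(QUIT)) return;   // sentinel ends the loop
                    handled.append(msg).append(';');
                }
            } catch (InterruptedException ignored) { }
        });
        looper.start();
    }

    void post(String msg) { queue.add(msg); }       // any thread may post a message

    // Stop the loop and return everything it processed, in order.
    String quitAndJoin() throws InterruptedException {
        queue.add(QUIT);
        looper.join();
        return handled.toString();
    }
}
```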

      Processes communicate by passing messages in the form of remote
procedure calls. This means that a method is called locally, but executed
remotely in another process. The method call and all the associated data are
decomposed then transmitted from the local process and its address space to
the remote process and its address space, where it is rebuilt and executed.
Return values are sent using the same method, but in the opposite direction.
Android automates all of these tasks, so the only thing the writer has to
worry about is actually defining the remote procedure call interface. RPC
interface definitions are created using a special tool called AIDL, or
Android Interface Definition Language. This definition must be made available
to the local and remote process in order for them to communicate using it.
Remote processes are usually directly handled by a service, which has the
ability to inform the system about the state of the process as well as what
other processes are connected to it. Methods that can be called by more than
one thread, such as a Content Provider receiving data requests from multiple
processes, must be written to be thread-safe. This means that the method must
function correctly when executed simultaneously by multiple threads.
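What thread safety means in practice can be shown with a minimal example: a shared counter hit by many threads at once, the way a Content Provider might field simultaneous requests. The class is an invented illustration, not a real provider method:

```java
// Minimal illustration of thread safety: without the synchronized keyword,
// the read-modify-write of count could interleave across threads and lose
// updates; with it, each call is atomic with respect to the others.
class RequestCounter {
    private long count = 0;

    synchronized void record() { count++; }     // safe under concurrent callers

    synchronized long total() { return count; }
}
```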

      In order for any application to run, it needs a certain amount of
memory where it can store its code and associated data. Because mobile
devices, which Android was created to run on, tend to have a very limited
amount of memory in comparison to modern desktops, memory management becomes
very important. In Android, memory management occurs in two places: in the
Dalvik virtual machine at a per-application level and in the Android
kernel at a per-process level. Each application runs in its own Dalvik
virtual machine, with its own process and its own heap. Thus garbage
collection occurs on an independent, per application basis. Dalvik uses a
mark-and-sweep method for getting rid of unused objects. The mark bits are
kept in a separate, dedicated structure that is created ONLY at garbage
collection time. This maximizes the use of limited memory. At the lower
level, processes are terminated in order to free up memory as needed, which
was mentioned before. A process's relative importance to the user determines
whether or not it is selected for termination. For example, a process with no
visible activities will be selected for termination over a process that has
visible activities. This selection process is related to Component
Lifecycles. All components of an application have a particular life cycle.
This life cycle determines when they are active or inactive, or in the case
of activities, visible or invisible to the user. Each component can be in one
of a number of states that are particular to that component, and their entire
life cycle can be decomposed into a hierarchy of lifetimes depending on these
states.
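The mark-and-sweep idea can be sketched on a toy heap of named objects. This is a conceptual illustration, not Dalvik's implementation; mirroring the separate mark-bit structure described above, the marked set here exists only while collect() runs:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy mark-and-sweep collector: marking traces every object reachable from
// the roots, and the sweep discards the rest.
class ToyHeap {
    final Map<String, List<String>> refs = new HashMap<>(); // object -> objects it references

    void alloc(String name, String... pointsTo) { refs.put(name, List.of(pointsTo)); }

    // Returns the live set and removes everything else from the heap.
    Set<String> collect(List<String> roots) {
        Set<String> marked = new HashSet<>();          // mark bits, built only at GC time
        List<String> stack = new ArrayList<>(roots);
        while (!stack.isEmpty()) {                     // mark phase: trace from the roots
            String obj = stack.remove(stack.size() - 1);
            if (marked.add(obj)) stack.addAll(refs.getOrDefault(obj, List.of()));
        }
        refs.keySet().retainAll(marked);               // sweep phase: drop unmarked objects
        return marked;
    }
}
```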

      An activity can be active or running, which means it is in the
foreground of the screen and interacting with the user; paused, which means
it is still visible but the user is not directly interacting with it; and
stopped, which means it is invisible to the user, but still retains state and
member information, at least until its process is killed by the system when
it needs to reclaim memory. The entire lifetime of an activity occurs between
when it is created and destroyed, the visible lifetime of an activity occurs
between when it is started and stopped, and its foreground lifetime occurs
between when it is resumed from a pause and paused.
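The activity states just described can be modeled as a small state machine. The method names echo the transitions in the text; this is a conceptual sketch, not the real Activity callback API:

```java
// Activity lifecycle as a state machine: created -> started (visible) ->
// resumed (foreground), and back down through paused and stopped to destroyed.
class ActivityState {
    enum State { CREATED, STARTED, RESUMED, PAUSED, STOPPED, DESTROYED }

    State state = State.CREATED;

    void start()   { state = State.STARTED; }    // becomes visible
    void resume()  { state = State.RESUMED; }    // foreground, interacting with the user
    void pause()   { state = State.PAUSED; }     // still visible, not in focus
    void stop()    { state = State.STOPPED; }    // invisible, state retained
    void destroy() { state = State.DESTROYED; }  // end of the entire lifetime

    boolean visible() {
        return state == State.STARTED || state == State.RESUMED || state == State.PAUSED;
    }

    boolean foreground() { return state == State.RESUMED; }
}
```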

      A service can be used in two ways, it can be started and allowed to run
until it is stopped, and it can be bound to one or more clients and operated
using a defined interface. The entire lifetime of a service occurs between
when it is created and destroyed, and the active lifetime of a service occurs
between when it is started by an intent and when that intent is satisfied, or
when a service is bound to one or more clients and then unbound from all of
them.

      Broadcast Receivers and Content Providers are similar in that they are
only active in response to something. For a Broadcast Receiver, its lifetime
is when it has received and is reacting to a broadcast. For a Content
Provider, its lifetime is when it is responding to a request from a Content
Resolver.

      Based on the above descriptions, we get a sense of the stages of a
process. When the system is deciding what process to terminate in order to
free memory, it chooses the process that lies at the bottom of a hierarchy of
process states. These states are a foreground process, which means it
contains one or more components that are actively interacting with the user,
bound to a client and/or responding to a broadcast or request; a visible
process, which has components that are visible but not in the foreground,
such as when a component is paused; a service process, which contains a
currently running service; a background process, which has no visible
components; and an empty process, which has no active components. Empty
processes are typically the ones targeted for termination, though they might
be kept around as a cache to improve startup time the next time a component
needs them.
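The kill-selection rule above can be sketched directly: the enum encodes the importance hierarchy from the text, and the process lowest in that hierarchy becomes the termination candidate. The names (ProcessRanker, Proc, victim) are illustrative:

```java
import java.util.List;

// Sketch of kill selection under memory pressure: terminate the process
// lowest in the importance hierarchy first.
class ProcessRanker {
    enum Importance { FOREGROUND, VISIBLE, SERVICE, BACKGROUND, EMPTY }

    static class Proc {
        final String name;
        final Importance importance;
        Proc(String name, Importance importance) { this.name = name; this.importance = importance; }
    }

    // Pick the least important process (highest ordinal) to terminate first.
    static Proc victim(List<Proc> procs) {
        Proc worst = procs.get(0);
        for (Proc p : procs)
            if (p.importance.ordinal() > worst.importance.ordinal()) worst = p;
        return worst;
    }
}
```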

      Obviously, in order for anything to run, there needs to be power
available for the hardware. However, power is typically a limited resource
inside the devices Android runs on, so power must be allocated and used
conservatively. Android has its own power management scheme, which sits in
the Linux kernel layer on top of the standard Linux power management scheme.
The Android power management scheme operates under the principle that the CPU
should not consume power if no applications or services need it. To enforce
this principle, applications and services are required to request CPU
resources using wake locks. Wake locks are accessible through the Android
application framework and native Linux libraries. The CPU is shut down if
there are no active wake locks. Thus, a held wake lock prevents the system
from entering suspend or some other low-power state. The API for this form
of power management, the PowerManager base class, offers several types of
wake locks. ACQUIRE_CAUSES_WAKEUP causes a device to be turned on when
acquired, in contrast to other wake locks, which only ensure that a device
that is already on stays on. FULL_WAKE_LOCK causes the screen, screen
backlight and keyboard backlight to be on and at full brightness.
ON_AFTER_RELEASE is unique in that, when the lock is released, it causes the
screen to stay on for a fixed period of time before turning off.
PARTIAL_WAKE_LOCK ensures that the CPU is active, though the screen may not
be. SCREEN_BRIGHT_WAKE_LOCK ensures that the screen is on and at full
brightness, though the keyboard backlight is allowed to turn off.
SCREEN_DIM_WAKE_LOCK ensures that the screen is on, but allows the keyboard
backlight to turn off and the screen backlight to dim.
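The core wake-lock invariant, that the CPU may power down only while no wake locks are held, can be sketched with a bare counter. This is a simplification of the real PowerManager bookkeeping, which also tracks which hardware (screen, backlights, CPU) each lock type keeps awake:

```java
// Sketch of the wake-lock rule: the system may suspend the CPU only when
// no component currently holds a wake lock.
class WakeLockTracker {
    private int held = 0;

    void acquire() { held++; }                 // a component requests the CPU

    void release() { if (held > 0) held--; }   // the component is done with it

    // With no held locks, the system is free to suspend the CPU.
    boolean cpuMaySuspend() { return held == 0; }
}
```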

      Networking support in Android takes the form of different APIs and
interfaces that are visible to applications and software writers. Bluetooth
and Wi-Fi functionality are accessed through the android.bluetooth and
android.net.wifi packages respectively. Using the Bluetooth APIs, an
application can scan for other Bluetooth devices, query the local Bluetooth
adapter for paired Bluetooth devices, establish RFCOMM (radio frequency
communication) channels, connect to other devices through service discovery,
transfer data to and from other devices, and manage multiple connections. The
Wi-Fi APIs provide access to the lower-level wireless stack that supplies
Wi-Fi network access. Wi-Fi connections can be scanned, added, saved,
terminated and initiated using these APIs. They can also be used to retrieve
device information, such as link speed, IP address, and negotiation state.

GPS functionality can be added using an abstraction interface that is written
in C. A GPS driver contains a shared library that implements this interface.
3G and 4G do not have a public API that is accessible to applications.

The Android SDK uses the Eclipse integrated development environment, which is
available for Windows, Linux, Mac OS X, and Solaris. It is a popular IDE that
is used for a variety of platforms, so many programmers are already familiar
with it. The SDK also requires the Java Development Kit, as well as the
Android Developer Tools plug-in that integrates Eclipse with the SDK. All of
these parts are freely available for download. So getting a development
environment for Android is very simple and cheap. However, because Android
does not use a straight implementation of Java, there is something of a
learning curve inherent to developing applications for it.



Section 2: iPhone OS

      Like the Android OS, the iPhone OS is segmented into a number of
layers. From bottom to top these layers are the Core OS, Core Services,
Media, and Cocoa Touch. Each layer contains a number of levels that provide
certain functions or management. The Core OS layer is divided into the
System, Security, Accessory Support and CFNetwork levels. The System level
contains the critical kernel, which is based on Mach. This level contains
drivers, low-level UNIX interfaces, and provides management for virtual
memory, threads, the file system, networking, and interprocess communication.
It also exposes a number of interfaces that allow for low level control of
POSIX threading, BSD sockets, file-system access, standard I/O, Bonjour and
DNS services, locale information, and memory allocation. The Security level
contains the iPhone OS's security framework, which protects data and provides
interfaces for manipulating certificates, public and private keys, trust
policies and encryption. The Accessory Support level contains the External
Accessory framework, which allows the iPhone OS to communicate with external
devices attached to the iPhone through the 30-pin dock port, or through
Bluetooth. The CFNetwork level contains the CFNetwork framework, which
contains a number of C-based interfaces for working with network protocols.
These interfaces allow an application to use BSD sockets; create encrypted
connections using SSL or TLS; resolve DNS hosts; access HTTP,
authenticated HTTP, HTTPS and FTP servers; and publish, resolve and browse
Bonjour services.




      The Core Services layer contains the XML support level, which allows
for manipulating and retrieving XML content; SQLite level, which allows for
embedding a SQL database into your application without having to run a
separate remote server process; In App Purchase level, which contains the
Store Kit framework and provides support for purchasing content and services
from the iPhone; Foundation Framework level, which provides Objective-C
wrappers for a number of features found in the Core Foundation level, such as
collection data types, string management, threads and run loops, and raw data
block management; Core Location level, which contains the Core Location
framework that uses the available hardware to triangulate the user's position
based on nearby GPS, cell, or Wi-Fi signal information; the Core Foundation
level,
which provides C-based interfaces to basic data management and many of the
same features found in the Foundation Framework level; Core Data level, which
contains the Core Data framework which allows for the creation and management
of data models; and the Address Book level, which provides access to the
contacts stored on an iPhone.
      The Media layer is where the graphics, video and audio codecs and
frameworks are located. This layer contains the Video Technologies level,
which contains the Media Player framework and is responsible for video
playback; the Audio Technologies level, which contains the AV Foundation,
Core Audio, and OpenAL frameworks responsible for audio recording and
playback; and the Graphics Technologies level, which contains the Core
Graphics, Quartz Core, and OpenGL ES frameworks responsible for 2D and 3D
graphics rendering.

      Finally, there is the Cocoa Touch layer, where applications are
actually implemented. It is composed of the UIKit level, which contains the
UIKit framework and provides Objective-C based interfaces for building
applications; the Peer to Peer Support level, which contains the Game Kit
framework that provides peer-to-peer network connectivity and voice for
applications; the Map Kit level which contains the Map Kit framework that
provides a map interface that can be embedded into applications; the In App
Email level, which provides support for composing and queuing email messages;
the Address Book UI level, which contains the Address Book UI framework that
provides an Objective-C interface for creating, editing, and selecting
contacts;
and the Apple Push Notification level, which alerts users of new information
even when the associated application is not running. With these layers and
levels defined, we will be looking at all of the same elements that were
described under the Android OS, beginning with application composition, and
moving on to thread management, memory management, networking support, power
management, and the iPhone SDK.

      There are two types of applications that run on the iPhone. The first
are web applications, which are created using HTML, CSS and JavaScript and
run off of a web server. They are transmitted over a network and run on the
Safari web browser. The second type is native applications, which are
developed within the Cocoa application environment using the UIKit Framework
and the Objective-C programming language. iPhone applications can all be
broken down into a number of parts, the first of which is the Core
Application, which they all share. From there, other elements such as windows,
views, events, graphics, web content, files, multimedia, and devices can be
added to make a complete application. There are several critical parts,
provided by UIKit, within the Core Application that are needed by every
application, as well as for application management. These parts are the main
function, the UIApplicationMain function, the application delegate, and the
main nib file. The main() routine is used minimally, since the bulk of the
work is done inside the UIApplicationMain routine. The main() function does
only three things: it creates an autorelease pool, calls UIApplicationMain,
and releases the autorelease pool. The autorelease pool
is related to memory management, which will be described later. While it is
possible to change this implementation of main(), it is generally considered
to be unwise. The UIApplicationMain() function takes four parameters and
uses them to initialize the application. These arguments are argc and argv,
which are initially passed into main(), and two string parameters that
identify the class of the application object and the class of the application
delegate. If the first string parameter is null, then UIKit uses the
UIApplication class by default. If the second string parameter is null, then
UIKit assumes that the application delegate is one of the objects loaded from
your application's main nib file.

      The application delegate monitors the high-level behavior of an
application, and is usually a custom object that the programmer creates.
Delegation is a mechanism used to avoid subclassing complex UIKit objects,
such as the default UIApplication object. Instead of subclassing and
overriding methods, you use the complex object unmodified and put your custom
code inside the delegate object. As interesting events occur, the complex
object sends messages to your delegate object. You can use these “hooks” to
execute your custom code and implement the behavior you need. The application
delegate object is responsible for handling several critical system messages
and therefore must be present in every iPhone application.

      The main nib file loads at initialization time. Nib files are disk-
based resources that contain a snapshot of one or more objects. The main nib
file of every iPhone application contains a window object, the application
delegate, and one or more key objects for managing the window. Loading a nib
file rebuilds the objects in the nib file, converting each object from its
on-disk representation to an actual in-memory version that can be manipulated
by your application. Objects loaded from nib files are no different from
objects created programmatically. But for user interfaces, it is often more
convenient to create the objects associated with your user interface
graphically and store them in nib files rather than create them
programmatically.

      As on Android, every application begins execution on the main thread of
its process. Also like Android, and most modern operating systems, the iPhone
OS allows worker threads to be spawned for functions that are time-consuming
or computationally intensive, so that the main thread is not blocked. Each
thread has its own execution stack, and all threads are scheduled for
execution separately by the system. However, because they all reside within
the same process, they share the same virtual memory space and the same
access rights as that process. Threads carry a cost in time and
memory, so great care should be taken to manage them properly. The core
structures needed to manage and schedule threads are stored in the kernel
using wired memory, while stack space and per-thread data are stored in
the application's memory space. The core structures tend to take up roughly
1 KB of memory, but this is wired memory, which cannot be paged to disk. The
stack for an iPhone OS main thread is usually about 1 MB, while worker
threads get about half that, 512 KB. Creating a thread takes roughly 90 ms.

      There are several different ways of generating threads in the iPhone
OS; the two most popular are the NSThread class and the POSIX Thread API.
The NSThread class is the main interface for creating threads in Cocoa,
and is also used in all versions of Mac OS X. The POSIX Thread API is C-
based, and while it is not the main method for generating threads, it is a
good choice when writing an application for multiple platforms, since it is
supported on any POSIX-compliant operating system. Threads created by both of
these methods are 'detached', which means that the thread's resources are
automatically reclaimed by the system when the thread terminates. While there
are other methods of creating threads, the only one that has seen any
significant use is Multiprocessing Services, which is built on top of POSIX
threads. This method was used in early versions of the Mac OS. It is not
available on the iPhone OS, but applications that use it can be modified to
use POSIX threads instead.

      Threads can be terminated either by allowing them to run their course
naturally, meaning the thread reaches the end of its main routine, or by
terminating them directly. Terminating a thread directly is generally
discouraged, because the thread is not able to clean up after itself, which
leaves open the possibility of memory leaks. For example, if the thread
allocated memory, opened files, or acquired other resources, the application
may be unable to reclaim those resources, because the thread never had the
chance to release them. If direct termination of threads is considered a
necessity, then threads should be explicitly designed to respond to some sort
of termination or exit message.

      As stated before, all threads within the same process share the same
address space. However, it is still possible for threads in one address
space/process to communicate with threads in another address space/process.
The same mechanism within the object-oriented model that allows objects to
communicate also allows for interprocess communication. The iPhone OS uses a
distributed objects architecture for remote message passing, which allows for
messages between threads of different processes to be treated exactly the
same as messages between threads in the same process. To send a message, an
application establishes a connection to the remote receiver via a proxy
object that exists in the same process. It then communicates with the remote
object through the proxy object. The proxy has no real identity of its own;
it simply assumes the identity of the actual recipient. Messages sent to the
receiver are stored in a queue until the receiver is ready to respond to
them.


      One of the more significant differences between the iPhone OS and
Android lies in memory management. While Objective-C does have garbage
collection capability, it is not available on the iPhone OS. Instead, a
reference counting model is used. When an object is created or copied, its
retain count is set to 1. From that point on, other objects may express an
ownership interest in that object, which increments its retain count. When
that ownership interest is relinquished, the retain count is decremented.
When the retain count reaches 0, the object is destroyed by the runtime,
which calls the dealloc method of that object's class. There are a few rules
to keep in
mind when it comes to ownership interest. First, the application
automatically owns any object it creates or copies. Second, if you are not
the creator of an object but want it to stick around for future use, you
declare an ownership interest in it. Third, if you own an object, either by
creating it or by expressing an ownership interest, you are responsible for
releasing it when it is no longer needed. Fourth, if you are not the creator
of an object and you have not expressed an ownership interest in it, you must
not release it.

      Autorelease pools and their connection to memory management were
mentioned earlier. When an object is autoreleased, it is marked for later
release. This is useful if the programmer wants an object to persist for a
certain amount of time before it is destroyed. Autoreleasing an object places
it in the autorelease pool. When main() releases the autorelease pool, all
the objects within it are destroyed.

      Networking protocols in the iPhone OS are all managed by the CFNetwork
framework. This framework provides APIs that enable a variety of functions,
such as working with BSD sockets, encrypted connections, resolving DNS hosts,
FTP, and many others. The two principal APIs for this framework are the
CFSocket API and the CFStream API. The CFSocket API is used to create
sockets, which act much like telephone jacks, in order to connect and send
data to other sockets. BSD sockets are one example of the type of socket that
can be created using this API. The CFStream API is used to create streams.
Streams are a sequence of bytes transmitted serially over a communications
link. Streams are one-way paths, so to communicate in both directions an
input (read) stream and output (write) stream are necessary. Read and write
streams provide an easy way to exchange data to and from a variety of media
in a device-independent way. You can create streams for data located in
memory, in a file, or on a network (using sockets), and you can use streams
without loading all of the data into memory at once. All the other APIs
available in the CFNetwork framework are built on top of these two core APIs.


      Power management in the iPhone OS is accomplished through hardware
timers. The CPU, the screen, the Wi-Fi and baseband radios, the
accelerometer, and the storage are all attached to timers that power down the
component if it remains unused when the timer expires. While it is possible
to disable these timers for applications that require a component to be
powered on at all times, doing so is not recommended, since the iPhone is a
battery-operated device.

      The iPhone SDK uses the Xcode integrated development environment. This
SDK requires OS X, so a Macbook, Mac Pro, or Hackintosh is needed to develop
applications for it. The SDK is free to download, but requires membership in
the Apple Developer Connection, which is also free. The SDK contains
many of the same developer tools contained in the OS X SDK, so a developer
for one will be comfortable developing applications for the other. As well,
because Objective-C, which the Cocoa Touch framework uses, is a superset of
C, programmers who principally use C will be comfortable developing
applications using this SDK.



Part 4: Compare and Contrast

      Android uses the Eclipse development environment, while the iPhone uses
Xcode. Eclipse is available for a variety of platforms, such as Windows,
Linux, OS X, and Solaris. It is, first and foremost, a Java IDE, with Java
supported out of the box. Plug-ins can be used to provide integration with
other languages. It is built on the Rich Client Platform, which is composed
of a number of parts: Equinox OSGi, which is the standard bundling
framework; the core Eclipse platform, which boots Eclipse and manages plug-
ins; the Standard Widget Toolkit, a portable toolkit for developing GUI
widgets; JFace, which provides viewer classes to bring model view controller
programming to SWT, file buffers, text handling, and text editors; and the
Eclipse Workbench, which provides views, editors, perspectives, and wizards.
Eclipse is free to download and install. It also comes with its own built-in
package manager, which makes downloading updates and installing new features
very easy.

      The iPhone OS uses the Xcode development environment, which is only
available for Mac OS X and is proprietary. It is primarily a
C/Objective-C/C++ IDE, though many other languages have been ported to it. It
also
includes Interface Builder, which is an application used to construct GUIs.
Interface Builder was originally developed at NeXT, whose technology was also
one of the building blocks of Mac OS X. Xcode eventually superseded a
previous IDE developed by Apple
called Project Builder. Xcode possesses a number of interesting features
which makes developing code over a large number of computers, as well as
creating applications for the PowerPC and x86 platforms, easier. The first is
called Shared Workgroup Build, which uses the service discovery protocol
Bonjour, and the distcc compiler tool. These elements work together to allow
distributed compilation of software. An updated version of this feature,
called Dedicated Network Builds, has support for even larger groups of
computers. Xcode has the capability to compile Universal Binaries, which
makes applications compatible with both the PowerPC and x86 platforms of OS
X; it also grants 32-bit and 64-bit compatibility. This same feature also
allows applications developed for ARM processors, which the iPhone uses, to
be built and debugged in Xcode.
      Third-party application support on both platforms has its strengths
and weaknesses. Both Android and the iPhone require a developer fee before
applications can be made available. Android requires a one-time $25 fee,
while the iPhone requires an annual $99 fee. As mentioned before, their
respective SDKs also vary in terms of requirements. Eclipse can be installed
on Windows, OS X, Linux, and Solaris, whereas Xcode requires Mac OS X. There
is much greater freedom in choosing one's development environment with
Eclipse compared to Xcode. The Eclipse SDK is primarily tailored for
Java while Xcode is primarily tailored for Objective-C, C and C++. The
Android framework does not use a true implementation of Java, so something of
a learning curve is required. The iPhone framework uses Objective-C, which is
a strict superset of C, so programmers who are already familiar with C will
have a much easier time of development. Also, because the iPhone SDK contains
many of the same developer tools as Mac OS X, applications writers for one
will be comfortable with the other. Similarly, Eclipse is used as the IDE on
several platforms, including OS X, so programmers who have already used
Eclipse for OS X development will be similarly comfortable with it on
Android.
Their respective app stores are also an important component to consider when
it comes to application development. The iPhone App Store has a strict
application review process that must be passed before an application can be
sold
there. There is also a content rating system in place that allows for
parental controls on the iPhone to block certain applications from being
installed. However, the customer base for the Apple App Store far eclipses
that of the Android Marketplace. This means more potential customers for an
app, but because the number of available apps is also high, this means more
competition. The Android Marketplace does not have a review process or a
content rating system, however applications can still be banned by Google if
the situation warrants. Also, buying and selling applications are treated
separately: people in certain countries are only allowed to buy applications,
not develop and sell them; Australia, New Zealand, and Canada are examples of
this policy.

      The Android operating system uses a Java virtual machine to host all
active processes. However, it does not use a standard Java virtual machine.
Instead,
it uses the Dalvik virtual machine which is register-based, compared to the
standard Java Virtual Machine which is stack-based. Stack-based virtual
machines must use instructions to load data on the stack and manipulate that
data, and therefore require more instructions in general than register-based
machines to implement the same high level code. However, instructions in a
register-based VM must encode the source and destination registers and,
therefore, tend to be larger. The Dalvik VM, compared to the JVM, is slimmed
down to use less space and is optimized for running multiple virtual machines
efficiently. These two aspects are most likely why it was chosen over the
standard Java Virtual Machine. One consequence of using Dalvik, however, is
that standard Java bytecode cannot be interpreted by it, because Dalvik uses
its own custom bytecode. This is one of the reasons why Java on Android is
not compatible with standard Java, and therefore requires adjustment by Java
programmers. Virtualization on the iPhone has been gaining traction for quite
some time, with Sun announcing support for its Java Virtual Machine platform
as well as VMware announcing its Mobile Virtualization Platform. However,
these are meant to be add-ons to the iPhone's functionality. Unlike
Android, applications do not run in a virtual machine by default on the
iPhone.

      Each operating system has its own security framework. The basis of
Android's security framework lies in its use of virtual machines and Linux
user IDs to completely isolate one process from another. This isolation
extends to user and application data, which means that reading or writing
another user's private data, or reading or writing another application's
files, is explicitly forbidden by default. However, as mentioned earlier, it
is possible to arrange for processes to share the same application data and
write into each other's address spaces if necessary.

      The iPhone uses three standards as the basis for its security
architecture: BSD (Berkeley Software Distribution), Mach, and CDSA (Common
Data Security Architecture). BSD and Mach lie together on one level, while
CDSA lies on top. BSD provides the basic file system and networking services
and implements a user and group identification scheme. BSD enforces access
restrictions to files and system resources based on the user and group IDs.
Mach provides memory management, thread control, hardware abstraction, and
interprocess communication. Mach enforces access by controlling which tasks
can send a message to a given Mach port, where a Mach port represents a task
or some other Mach resource. CDSA is an Open Source security architecture
adopted as a technical standard by the Open Group, from which Apple has
developed its own implementation. The core of CDSA is CSSM (Common Security
Services Manager), a set of open source code modules that implement a public
application programming interface called the CSSM API. CSSM provides APIs for
cryptographic services (such as creation of cryptographic keys, encryption
and decryption of data), certificate services (such as creation of digital
certificates, reading and evaluation of digital certificates), secure storage
of data, and other security services. A number of security APIs are available
that use these three standards as their basis. It is commonly believed that
the iPhone OS's lack of multitasking is a security feature: by prohibiting
more than one third-party application from running, you prevent opportunities
for malicious code to do its work. In reality, this was done more as a
power-saving measure, as it was determined that letting additional tasks sit
in the background would eat up too much battery life for no reason. Apple
does plan to implement multitasking in iPhone OS 4.0, however.

      In the end, both the iPhone OS and Android have their strengths and
weaknesses. Android's strengths lie in its open-source philosophy with regard
to app development, as well as the ease of setting up a development
environment. Similarly, as an open-source operating system, it can run on a
variety of hardware platforms, so smartphone makers have the freedom to tweak
it to fit particular hardware. However, this can also be seen as a weakness,
because application developers must devote time and energy to supporting
multiple platforms if they choose to. The iPhone OS, in contrast, runs on
only a limited number of platforms, the 3G and the 3GS, so support is much
easier. However, its development environment requirements are very
restrictive, because it requires an Intel-based Macintosh. Also, the annual
developer fee for the iPhone is much more expensive than Android's, which is
a much smaller one-time fee. Android possesses several elements that do not
conform to standards. For example, its Linux 2.6 kernel has been modified
such that it is no longer a standard Linux kernel. Also, its use of the
Dalvik virtual machine breaks compatibility with standard Java. A programmer
who is used to C will have a much easier time with the iPhone's Objective-C
framework than a programmer who is used to Java. In the end, these
'strengths' and 'weaknesses' are only such depending on how much time and
money you are willing to invest in either platform. Off the bat, Android will
be easier to start with, but it has a steeper learning curve. The iPhone OS
has a higher cost of entry, but it conforms better to known standards and has
a much larger customer base.

								