
                       Lesson 1: Mobile Agents or not Mobile Agents?


   Mobile agents are processes dispatched from a source computer to accomplish a
specified task. Each mobile agent is a computation along with its own data and
execution state. After its submission, the mobile agent proceeds autonomously and
independently of the sending client. When the agent reaches a server, it is delivered to
an agent execution environment. Then, if the agent possesses necessary authentication
credentials, its executable parts are started. To accomplish its task, the mobile agent
can transport itself to another server, spawn new agents, or interact with other agents.
Upon completion, the mobile agent delivers the results to the sending client or to
another server.
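The lifecycle described above (dispatch, credential check, execution, migration, result delivery) can be sketched in a few lines. This is a conceptual toy, not a real agent platform: the servers are plain dictionaries, the credential is a string token, and `pickle` serialization stands in for network migration.

```python
import pickle

class MobileAgent:
    """A computation bundled with its own data and execution state."""
    def __init__(self, task_id, credentials):
        self.task_id = task_id
        self.credentials = credentials
        self.results = []   # data gathered so far
        self.hops = []      # execution state: servers visited

    def run(self, server):
        """Executable part, started once the agent is delivered to a server."""
        self.hops.append(server["name"])
        self.results.append(server["data"])

def dispatch(agent, server):
    """Simulate migration: serialize the agent, 'ship' it, revive it remotely."""
    if agent.credentials not in server["trusted"]:
        raise PermissionError("agent lacks credentials for " + server["name"])
    payload = pickle.dumps(agent)     # the agent travels as bytes...
    revived = pickle.loads(payload)   # ...and the execution environment restores it
    revived.run(server)
    return revived

servers = [
    {"name": "server-a", "data": "price=10", "trusted": {"token-1"}},
    {"name": "server-b", "data": "price=12", "trusted": {"token-1"}},
]
agent = MobileAgent("find-prices", "token-1")
for s in servers:                 # the agent transports itself server to server
    agent = dispatch(agent, s)
print(agent.hops)     # ['server-a', 'server-b']
print(agent.results)  # ['price=10', 'price=12']
```

The essential point survives even in the toy: the agent carries its accumulated results and visit history with it from server to server, independently of the client that launched it.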


A broad definition of a Mobile Agent


   Mobile Agents appeared on the scene in the last decade of the previous millennium.
They have been embraced by researchers and practitioners as a potential technology
that could revolutionize the way we perform computations, develop applications and
systems. They were, and still are, viewed as a unique way to approach mobile and
wireless computing.
   Indeed, this is not an exaggeration; mobile agents have been used in a variety of
applications and computing areas. The driving force motivating mobile agent-based
computation is multifold. First, mobile agents provide an efficient, asynchronous
method for searching for information or services in rapidly evolving networks;
they may be launched into an unstructured network and roam around to gather
information. Second, mobile agents support intermittent connectivity, slow networks,
and light-weight devices. Thus, mobile agents provide many benefits in Internet
systems programming (that is, network programming), where there is a need to
integrate different kinds of information, monitoring and notification,
encapsulation of artificial intelligence techniques, security, and robustness. The
mobile agent paradigm has also demonstrated satisfactory performance when deployed
for distributed access to Internet databases, distributed retrieval and filtering
of information, and minimization of network workload. Finally, mobile agents have
proved very effective in supporting asynchronous execution of client requests,
weak connectivity and disconnected operation, and dynamic adaptation to the
various types of user connectivity common in wireless environments.
     Various mobile agent platforms have been developed. They can be broadly
categorized as Java and non-Java based ones, and they can be further split into
experimental and commercial ones. There is an increasing interest in Java-based
platforms due to the inherent advantages of Java, namely, platform independence
support, highly secure program execution, and small size of compiled code. These
features of Java combined with its simple database connectivity interface (JDBC) that
facilitates application access to multiple relational databases over the Web, make the
Java approaches very attractive. In fact, it is the Java orientation of mobile
agent technology that gave it the boost we have seen so far and the hope (or
illusion?) that this technology can do and offer much more.
     However, while everything is here (the infrastructure, the proofs of concept
in such a variety of application areas in wireless, mobile, or fixed networks), the
big question is: “Why hasn’t the industry embraced this technology as most of us
expected?”
     What is missing, what is actually needed, what is that final touch that could
have transformed this good “painting” into a brilliant Rembrandt? Why haven’t we
seen the convergence of agent technologies (AI agents, mobile agents, moving
objects, etc.) into one powerful, seamless concept that could serve all kinds of
applications (e.g., eCommerce, m-commerce, mobile and wireless applications) at a
level higher than the one seen so far?
Is it:


        The security problem?

        Java and its unwillingness to make a basic agent execution environment
         part of it?

        Is it simply still too early?

        The divergent definitions of what an agent is, and its possible
         incompatibility with what we came to think of as a mobile agent?

        The development effort? Lack of scalability or performance? Lack of a
         standard? Lack of proper education in mobile agents?
   Is it possible to get answers to these, and possibly many other, questions and
maybe figure out how this new technology could in fact deliver what it promised
or, even better, what we came to hope for? What can researchers, practitioners,
and industrialists do or propose to aid understanding of the direction mobile
agent technology should aim for? Or is any effort futile, and assimilation of
this technology not really possible?




        Lesson 2: FUZZY LOGIC - AN INTRODUCTION

INTRODUCTION

   This is the first in a series of six articles intended to share information and
experience in the realm of fuzzy logic (FL) and its application. This article will
introduce FL. Through the course of this article series, a simple implementation will
be explained in detail. Each article will include additional outside resource references
for interested readers.

WHERE DID FUZZY LOGIC COME FROM?

   The concept of Fuzzy Logic (FL) was conceived by Lotfi Zadeh, a professor at the
University of California at Berkeley, and presented not as a control methodology, but
as a way of processing data by allowing partial set membership rather than crisp set
membership or non-membership. This approach to set theory was not applied to
control systems until the 70's due to insufficient small-computer capability prior to
that time. Professor Zadeh reasoned that people do not require precise, numerical
information input, and yet they are capable of highly adaptive control. If feedback
controllers could be programmed to accept noisy, imprecise input, they would be
much more effective and perhaps easier to implement. Unfortunately, U.S.
manufacturers have not been so quick to embrace this technology, while the
Europeans and Japanese have been aggressively building real products around it.

WHAT IS FUZZY LOGIC?
    In this context, FL is a problem-solving control system methodology that lends
itself to implementation in systems ranging from simple, small, embedded micro-
controllers to large, networked, multi-channel PC or workstation-based data
acquisition and control systems. It can be implemented in hardware, software, or a
combination of both. FL provides a simple way to arrive at a definite conclusion
based upon vague, ambiguous, imprecise, noisy, or missing input information. FL's
approach to control problems mimics how a person would make decisions, only much
faster.

HOW IS FL DIFFERENT FROM CONVENTIONAL CONTROL METHODS?

    FL incorporates a simple, rule-based IF X AND Y THEN Z approach to a solving
control problem rather than attempting to model a system mathematically. The FL
model is empirically-based, relying on an operator's experience rather than their
technical understanding of the system. For example, rather than dealing with
temperature control in terms such as "SP = 500F", "T < 1000F", or "210C < TEMP
< 220C", terms like "IF (process is too cool) AND (process is getting colder) THEN
(add heat to the process)" or "IF (process is too hot) AND (process is heating rapidly)
THEN (cool the process quickly)" are used. These terms are imprecise and yet very
descriptive of what must actually happen. Consider what you do in the shower if the
temperature is too cold: you will make the water comfortable very quickly with little
trouble. FL is capable of mimicking this type of behavior, but at a very high rate.

HOW DOES FL WORK?

    FL requires some numerical parameters in order to operate, such as what is
considered a significant error and a significant rate-of-change-of-error, but exact
values of these numbers are usually not critical unless very responsive performance
is required, in which case empirical tuning would determine them. For example, a simple
temperature control system could use a single temperature feedback sensor whose
data is subtracted from the command signal to compute "error" and then time-
differentiated to yield the error slope or rate-of-change-of-error, hereafter called
"error-dot". Error might have units of degs F and a small error considered to be 2F
while a large error is 5F. The "error-dot" might then have units of degs/min with a
small error-dot being 5F/min and a large one being 15F/min. These values don't have
to be symmetrical and can be "tweaked" once the system is operating in order to
optimize performance. Generally, FL is so forgiving that the system will probably
work the first time without any tweaking.
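The scheme above can be condensed into a minimal sketch. The breakpoints (a small error is 2F and a large one 5F; a small error-dot is 5F/min and a large one 15F/min) come from the text, while the two rules, the use of min() for AND, and the 10%/90% heater-power output levels are illustrative assumptions.

```python
def member_small(x, small, large):
    """Degree to which |x| is 'small': 1 below `small`, 0 above `large`."""
    x = abs(x)
    if x <= small:
        return 1.0
    if x >= large:
        return 0.0
    return (large - x) / (large - small)

def member_large(x, small, large):
    return 1.0 - member_small(x, small, large)

def heater_command(error, error_dot):
    """Two rules in the IF X AND Y THEN Z style; AND is min().
    Rule 1: IF (error large) AND (error-dot large) THEN strong correction.
    Rule 2: IF (error small) AND (error-dot small) THEN gentle correction."""
    strong = min(member_large(error, 2.0, 5.0),       # error breakpoints: 2F / 5F
                 member_large(error_dot, 5.0, 15.0))  # error-dot: 5F/min / 15F/min
    gentle = min(member_small(error, 2.0, 5.0),
                 member_small(error_dot, 5.0, 15.0))
    # Defuzzify: weighted average of two crisp power levels (10% and 90%)
    if strong + gentle == 0:
        return 50.0
    return (gentle * 10.0 + strong * 90.0) / (gentle + strong)

print(heater_command(6, 20))   # 90.0 -- both inputs fully 'large'
print(heater_command(1, 2))    # 10.0 -- both inputs fully 'small'
```

Intermediate inputs blend the two rules smoothly, which is exactly the "forgiving" behavior described above: the breakpoints can be tweaked later without restructuring anything.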

SUMMARY

    FL was conceived as a better method for sorting and handling data but has proven
to be an excellent choice for many control system applications since it mimics human
control logic. It can be built into anything from small, hand-held products to large
computerized process control systems. It uses an imprecise but very descriptive
language to deal with input data more like a human operator. It is very robust and
forgiving of operator and data input and often works when first implemented with
little or no tuning.




               Lesson 3: Introduction to Wireless Sensor
                                    Networks

    With the popularity of laptops, cell phones, PDAs, GPS devices, RFID, and
intelligent electronics in the post-PC era, computing devices have become cheaper,
more mobile, more distributed, and more pervasive in daily life. It is now possible to
construct, from commercial off-the-shelf (COTS) components, a wallet size
embedded system with the equivalent capability of a 90's PC. Such embedded systems
can be supported with scaled down Windows or Linux operating systems. From this
perspective, the emergence of wireless sensor networks (WSNs) is essentially the
latest trend of Moore's Law toward the miniaturization and ubiquity of computing
devices.
    Typically, a wireless sensor node (or simply sensor node) consists of sensing,
computing, communication, actuation, and power components. These components are
integrated on a single or multiple boards, and packaged in a few cubic inches. With
state-of-the-art, low-power circuit and networking technologies, a sensor node
powered by 2 AA batteries can last for up to three years with a 1% low duty cycle
working mode. A WSN usually consists of tens to thousands of such nodes that
communicate through wireless channels for information sharing and cooperative
processing. WSNs can be deployed on a global scale for environmental monitoring
and habitat study, over a battle field for military surveillance and reconnaissance, in
emergent environments for search and rescue, in factories for condition based
maintenance, in buildings for infrastructure health monitoring, in homes to realize
smart homes, or even in bodies for patient monitoring.
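A quick back-of-the-envelope check makes the three-year battery claim plausible. All hardware numbers below are illustrative assumptions (a typical AA capacity and representative active/sleep currents), not figures from the text:

```python
# Rough lifetime check for a duty-cycled sensor node.
capacity_mah = 2500.0   # two AA cells in series: same capacity as one cell (assumed)
active_ma = 10.0        # radio/CPU on (assumed)
sleep_ma = 0.005        # deep sleep, 5 microamps (assumed)
duty_cycle = 0.01       # active 1% of the time, as stated in the text

avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
lifetime_years = capacity_mah / avg_ma / 24 / 365
print(round(avg_ma, 3), round(lifetime_years, 1))  # 0.105 2.7
```

Under these assumptions the node averages about a tenth of a milliamp and lasts roughly 2.7 years; slightly better sleep currents or larger cells reach the stated three years. Note that even at a 1% duty cycle the brief active periods still dominate the average draw, which is why low-power circuit and networking technologies matter so much.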
   After the initial deployment (typically ad hoc), sensor nodes are responsible for
self-organizing an appropriate network infrastructure, often with multi-hop
connections between sensor nodes. The onboard sensors then start collecting acoustic,
seismic, infrared or magnetic information about the environment, using either
continuous or event driven working modes. Location and positioning information can
also be obtained through the global positioning system (GPS) or local positioning
algorithms. This information can be gathered from across the network and
appropriately processed to construct a global view of the monitored phenomena or
objects. The basic philosophy behind WSNs is that, while the capability of each
individual sensor node is limited, the aggregate power of the entire network is
sufficient for the required mission.
   In a typical scenario, users can retrieve information of interest from a WSN by
injecting queries and gathering results from the so-called base stations (or sink nodes),
which behave as an interface between users and the network. In this way, a WSN can
be considered a distributed database. It is also envisioned that sensor networks will
ultimately be connected to the Internet, through which global information sharing
becomes feasible.
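The query path just described can be sketched as a toy: the user talks only to the base station, which plays the role of a database front end over the nodes. The node layout and readings below are invented for illustration.

```python
# Toy model of the WSN query path: inject a query at the sink, gather replies.
nodes = {
    "n1": {"region": "north", "temp_f": 71.0},
    "n2": {"region": "north", "temp_f": 73.0},
    "n3": {"region": "south", "temp_f": 80.0},
}

def base_station_query(region, field):
    """The base station disseminates the query and aggregates node replies."""
    replies = [n[field] for n in nodes.values() if n["region"] == region]
    return sum(replies) / len(replies) if replies else None

print(base_station_query("north", "temp_f"))  # 72.0
```

Real systems often push such aggregation into the network itself rather than doing it all at the sink, since every radio transmission costs scarce energy.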
   The era of WSNs is widely anticipated. In September 1999, WSNs were identified
by Business Week as one of the most important and impactful technologies for the
21st century. In January 2003, MIT's Technology Review listed WSNs among the top
ten emerging technologies. It has also been estimated that WSNs generated less
than $150 million in sales in 2004 but would top $7 billion by 2010. In December
2004, a WSN with more than 1000 nodes was launched in Florida by the ExScal team,
the largest deployed WSN to date.


     Lesson 4: Digital Signatures and Public Key Encryption

   This document seeks to provide a brief introduction to digital signatures, in
particular using public key encryption. This is by no means an in-depth analysis of
different digital signature systems.


What is a digital signature?

   A digital signature is the electronic equivalent of a handwritten signature,
verifying the authenticity of electronic documents. In fact, digital signatures provide
even more security than their handwritten counterparts.

   Some banks and package delivery companies use a system for electronically
recording handwritten signatures. Some even go so far as to use biometric analysis to
record the speed with which you write and even how hard you press down, ensuring
the authenticity of the signature. However, this is not what is usually meant by digital
signatures — a great relief to those of us with limited budgets and resources.

   More often than not a digital signature uses a system of public key encryption to
verify that a document has not been altered.


What is public key encryption?

   Public key encryption (PKE) uses a system of two keys:

      a private key, which only you use (and of course protect with a well-chosen,
       carefully protected passphrase); and
      a public key, which other people use. Public keys are often stored on public
       key servers.
    A document that is encrypted with one of these keys can be decrypted only with
the other key in the pair.

    For example, let's say that Alice wants to send a message to Bob using PGP (a
popular public key encryption system). She encrypts the message with Bob's public
key and sends it using her favorite email program. Once the message is encrypted
with Bob's public key, only Bob can decrypt the message using his private key. Even
major governments using supercomputers would have to work for a very long time to
decrypt this message without the private key.
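The key-pair property can be seen in miniature with the classic textbook RSA numbers (p = 61, q = 53). These keys are trivially breakable and serve only to show that what one key locks, only the other opens:

```python
n = 61 * 53        # public modulus (toy-sized; real keys are 2048+ bits)
e = 17             # public exponent:  the public key  is (e, n)
d = 2753           # private exponent: the private key is (d, n)

def apply_key(m, key):
    # Encryption and decryption are the same operation:
    # modular exponentiation with the chosen key.
    return pow(m, key, n)

message = 65
ciphertext = apply_key(message, e)   # locked with the public key
print(ciphertext)                    # 2790
print(apply_key(ciphertext, d))      # 65 -- only the private key unlocks it
```

Systems like PGP use the same mathematics with enormously larger numbers, which is what makes brute-force decryption impractical even for well-funded attackers.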




What does PKE have to do with digital signatures?

    Digital signatures often use a public key encryption system. Consider Alice and
Bob again: how can Bob be sure that it was really Alice who sent the message, and
not the criminally-minded Eve pretending to be Alice?

    This is where digital signatures come in. Before encrypting the message to Bob,
Alice can sign the message using her private key; when Bob decrypts the message, he
can verify the signature using her public key. Here's how it works:

    1. Alice creates a digest of the message — a sort of digital fingerprint. If the
        message changes, so does the digest.
    2. Alice then encrypts the digest with her private key. The encrypted digest is the
        digital signature.
    3. The encrypted digest is sent to Bob along with the message.
    4. When Bob receives the message, he decrypts the digest using Alice's public
        key.
    5. Bob then creates a digest of the message using the same function that Alice
        used.
    6. Bob compares the digest that he created with the one that Alice encrypted. If
        the digests match, then Bob can be confident that the signed message is indeed
        from Alice. If they don't match, then the message has been tampered with or
        isn't from Alice at all.
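The six steps can be sketched with a real hash function and toy RSA arithmetic standing in for Alice's key pair. This is an illustration only: real signatures use 2048-bit or larger keys and padding schemes, and the digest here is shrunk to fit the toy modulus, which no real system would do.

```python
import hashlib

# Toy RSA key pair built from two known primes (far too small for real use).
p, q = 2147483647, 65537
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (requires Python 3.8+)

def digest(message):
    """Steps 1 and 5: a fixed-size fingerprint of the message."""
    h = hashlib.sha256(message.encode()).digest()
    return int.from_bytes(h, "big") % n     # shrunk to fit the toy modulus

def sign(message):                   # steps 1-2: Alice, with her private key
    return pow(digest(message), d, n)

def verify(message, signature):      # steps 4-6: Bob, with Alice's public key
    return pow(signature, e, n) == digest(message)

sig = sign("Meet at noon")                # step 3: message + signature sent
print(verify("Meet at noon", sig))        # True  -- digests match
print(verify("Meet at 1pm",  sig))        # False -- tampering detected
```

Note that step 3, the actual transmission, is just the pair (message, sig): everything Bob needs to check the signature is public.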

    If this sounds complicated, rest assured that the software makes it all very easy.


What if I need to verify a signature from someone I don't know, or be
sure that the key is really theirs?

    That's where digital certificates and certificate authorities come in. Let's start with
how it works in PGP. Say that someone claiming to be Bob's acquaintance Carol
sends a message to Alice. How does Alice know that Carol is who she claims to be?
Carol signed the message with her own private key, which has been digitally signed
by Bob (essentially saying, "I trust that this key is valid and hope that you will, too").
Because Alice knows and trusts Bob's key (and therefore his signature), Alice can
trust that Carol's key is valid — so the person claiming to be Carol almost certainly
really is Carol.

    Furthermore, once Alice trusts Carol's key, she can sign it. Then someone who has
and trusts Alice's key will be able to trust Carol's. This builds a web of trust among
PGP users.

    However, this informal web of trust may not be rigorous enough for business or
government purposes. For these cases, third-party entities known as certificate
authorities validate identities and issue certificates. These certificates, signed with the
CAs' well-known and trusted keys, can be used to verify someone's identity.
                Lesson 5: How Do DSL Modems Work?


   A Digital Subscriber Line (DSL) modem is a device used to connect a computer or
router to a telephone circuit that has Digital Subscriber Line service configured. Like
other modems, it is a type of transceiver. It is also called a DSL Transceiver or ATU-R
(for ADSL Transceiver Unit-Remote). The acronym NTBBA, which stands for
network termination broad band adapter, is also common in some countries.

   Some DSL modems also manage the connection and sharing of the DSL service in
a network; in this case, the unit is termed a DSL router or residential gateway. DSL
routers have a component that performs framing, while other components perform
Asynchronous Transfer Mode Segmentation and Reassembly, IEEE 802.1D bridging
and/or IP routing. Typical user interfaces are Ethernet and Universal Serial Bus
(USB). Although a DSL modem working as a bridge does not need an IP address, it
may have one assigned for management purposes.

Comparison to voice-band modem

   A DSL modem modulates high-frequency tones for transmission to a Digital
Subscriber Line Access Multiplexer (DSLAM), and receives and demodulates them
from the DSLAM. It serves fundamentally the same purpose as the voice-band
modem that was a mainstay in the late 20th century, but differs from it in important
ways.

       Most DSL modems are external to the computer and wired to the computer's
        Ethernet port, or occasionally its USB port. Internal DSL modems with PCI
        interface are rare but available.

       Microsoft Windows and other operating systems do not recognize external
        DSL modems connected by Ethernet, and hence have no Property Sheet or
        other internal method to configure them. This is because the transceiver and
        computer are considered separate nodes in the LAN, rather than the
        transceiver being a device controlled by the computer (such as web-cams,
        mice, keyboards etc.). Routers can be configured manually through a Web
        page that the device serves over its Ethernet interface. Plain DSL modems
        rarely need to be configured, because they operate at the physical layer
        of the network, simply forwarding data from one medium (CAT5 cable) to
        another (the telephone line).

      For external DSL modems connected by USB, Microsoft Windows and other
       operating systems generally recognize these as a Network interface controller.

      For internal DSL modems, Microsoft Windows and other operating systems
       provide interfaces similar to those provided for voice-band modems. This is
       based on the assumption that in the future, as CPU speeds increase, internal
       DSL modems may become more mainstream.

       DSL modems use frequencies from 25 kHz to above 1 MHz (see Asymmetric
        Digital Subscriber Line) in order not to interfere with voice service,
        which occupies roughly 0-4 kHz. Voice-band modems use the same frequency
        spectrum as ordinary telephones and will interfere with voice service; it
        is usually impossible to make a telephone call on a line that is being
        used by a voice-band modem.

      DSL modems vary in data speed from hundreds of kilobits per second to many
       megabits, while voice-band modems are nominally 56K modems and actually
       limited to approximately 50 kbit/s.

      DSL modems exchange data with only the DSLAM to which they are wired,
       which in turn connects them to the Internet, while most voice-band modems
       can dial directly anywhere in the world.

      DSL modems are intended for particular protocols and sometimes won't work
       on another line even from the same company, while most voice-band modems
       use international standards and can "fall back" to find a standard that will
       work.

  Most of these differences are of little interest to consumers, except the greater
speed of DSL and the ability to use the telephone even when the computer is online.
Because a single phone line commonly carries DSL and voice, DSL filters are used to
separate the two uses.

DSL Limitations

  The number of DSL users in your area can also affect performance. When your
DSL modem sends a signal to the provider, it is received by a specialized
aggregation device called a DSLAM. There is no direct connection between your
modem and Internet servers elsewhere in the world. The DSLAM uses one line to
support many connections to the Internet. Although it can handle dozens of
connections at once, speeds can slow down as more users log on.

  DSL modems cannot be used everywhere. Again, there are limitations based on
the distance to the provider's ADSL equipment. If bridge taps on your phone
line stretch the actual length of the connection, the service may still be out
of reach from your home. DSL signals also cannot pass through fiber-optic
segments; if your phone line contains them, you're out of luck.




                          Lesson 6: Grid Computing


   Grid computing is widely regarded as a technology of immense potential in both
industry and academia. The evolution pattern of grid technologies is very similar to
the growth and evolution of Internet technologies that was witnessed in the early
1990s.
   Similar to the Internet, the initial grid computing technologies were also
developed mostly in the universities and research labs to solve unique research
problems and to enable collaboration between researchers across the globe.
Recently, industries with heavy computing demands, such as finance, life sciences,
energy, automotive, and rendering, have shown great interest in the potential of
connecting standalone, siloed clusters into department-wide and sometimes
enterprise-wide grid systems.
   Grid computing is currently in the midst of evolving standards, inheriting and
customizing standards developed by the high-performance and distributed computing
communities and, more recently, the Web services community. Due to the lack of
consistent and widely used
standards, several enterprises are concerned about the implementation of an
enterprise-level grid system, though the potential of such a system is well understood.
Even when the enterprises have considered grid as a solution, several issues have
made them reconsider their decisions. Issues related to application engineering,
manageability, data management, licensing, security, etc. have prevented them from
implementing an enterprise-wide grid solution.
   As a technology, grid computing has potential beyond the high performance
computing industries due to its inherent collaboration, autonomic, and utility based
service behavior. To make this evolution possible all the above-mentioned issues need
to be solved. Some of the issues are technical and some of them have business and
economic overtones like the issue related to licensing. Each of the issues mentioned
above is important and deserves a close look and understanding. In this book we will
solely concentrate on the issue related to grid computing security.
   As an issue, security is perhaps the most important and needs close understanding
as grid computing offers unique security challenges. In this book we look at different
security issues pertaining to the grid system; some of them are of immediate concern
and some are long term issues. We will also look at security issues in other areas of
computer science like networks and operating systems which may affect the design of
future grids. We have categorized the issues pertaining to grid computing security
into three main buckets: architecture-related issues, infrastructure-related
issues, and management-related issues. Architecture-related issues concern the
overall architecture of the grid system, such as information security, user and
resource authorization, and the overall service offered by the grid system.
Infrastructure-related issues concern the underlying infrastructure, which
includes the hosts or machines and the network infrastructure. In addition,
several management systems need to be in place for an all-pervasive,
enterprise-level, secure grid system. There are three main types
of management systems which are important from the grid perspective, namely the
credential management systems, the trust management systems, and the monitoring
systems. All three categories are dealt with in this book, along with existing
solutions and potential concerns.




             Lesson 7: Introduction to Parallel Systems


   The past decade has seen tremendous advances in microprocessor technology.
Clock rates of processors have increased from about 40 MHz to over 1.5 GHz. At the
same time, processors are now capable of executing multiple instructions in the same
cycle. The average number of cycles per instruction (CPI) of high end processors has
improved by roughly an order of magnitude over the past 10 years. All this translates
to an increase in the peak floating point operation execution rate (floating point
operations per second, or FLOPS) of several orders of magnitude. A variety of other
issues have also become important over the same period. Perhaps the most prominent
of these is the ability (or lack thereof) of the memory system to feed data to the
processor at the desired rate. Significant innovations in architecture and software have
addressed the alleviation of bottlenecks posed by the datapath and the memory.
   The role of concurrency in accelerating computing elements has been recognized
for several decades. However, its role in providing multiplicity of datapaths,
increased access to storage elements (both memory and disk), scalable performance,
and lower costs is reflected in the wide variety of applications of parallel computing.
Desktop machines, engineering workstations, and compute servers with two, four, or
even eight processors connected together are becoming common platforms for design
applications. Large scale applications in science and engineering rely on larger
configurations of parallel computers, often comprising hundreds of tightly coupled
processors. Data intensive platforms such as database or web servers and applications
such as transaction processing and data mining often use clusters of workstations that
provide high aggregate disk bandwidth. Applications in graphics and visualization use
multiple rendering pipes and processing elements to compute and render realistic
environments with millions of polygons in real time. Applications requiring
guaranteed (high) availability rely on parallel and distributed platforms for
redundancy. It is therefore extremely important, from the point of view of cost,
performance, and applications, to understand the principles, tools, and techniques for
programming the wide variety of parallel platforms currently available. As we
proceed through this endeavour, we will also show how the principles of efficient
parallel programming help in writing efficient serial code.
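The divide-and-combine pattern underlying all of these platforms can be shown in a few lines. This sketch uses Python threads purely to illustrate the structure; on CPython, true CPU parallelism for a computation like this would require processes or a runtime without a global interpreter lock.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Split the input across workers, then reduce the partial results."""
    chunk = (len(data) + workers - 1) // workers        # ceil-divide the work
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, pieces)                # each worker sums a piece
    return sum(partials)                                # combine partial sums

print(parallel_sum(range(1, 101)))  # 5050
```

The same split/compute/reduce shape reappears at every scale, from two cores on a desktop workstation to hundreds of tightly coupled processors in a scientific cluster.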
   Development of parallel software has traditionally been thought of as time and
effort intensive. This can be largely attributed to the inherent complexity of specifying
and coordinating concurrent tasks, a lack of portable algorithms, standardized
environments, and software development toolkits. When viewed in the context of the
brisk rate of development of microprocessors, one is tempted to question the need for
devoting effort towards exploiting parallelism as a means of accelerating applications.
After all, if it takes two years to develop a parallel application, during which time the
underlying hardware and/or software platform has become obsolete, the development
effort is clearly wasted. However, there are some unmistakable trends in hardware
design which indicate that uniprocessor (or implicitly parallel) architectures may not
be able to sustain the rate of realizable performance increments in the future. This is a
result of lack of implicit parallelism as well as other bottlenecks such as the datapath
and the memory. At the same time, standardized hardware interfaces have reduced the
turnaround time from the development of a microprocessor to a parallel machine
based on the microprocessor. Furthermore, considerable progress has been made in
standardization of programming environments to ensure a longer life-cycle for
parallel applications. All of these present compelling arguments in favor of parallel
computing platforms.




                        Lesson 8: Network Security


   Every organization that uses networked computers and deploys an information
system to perform its activity faces the threat of hacking from individuals both
inside and outside the organization. Employees (and former employees) with malicious
intent can represent a threat to the organization’s information system, its production
system, and its communication networks. At the same time, reported attacks start to
illustrate how pervasive the threats from outside hackers have become. Without
proper and efficient protection, any part of any network can be prone to attacks or
unauthorized activity. Routers, switches, and hosts can all be violated by professional
hackers, company’s competitors, or even internal employees. In fact, according to
various studies, more than half of all network attacks are committed internally.
   One may consider that the most reliable solution to ensure the protection of
organizations’ information systems is to refrain from connecting them to
communication networks and keep them in secured locations. Such a solution could
be an appropriate measure for highly sensitive systems. But it does not seem to be a
very practical solution, since information systems are really useful for the
organization’s activity when they are connected to the network and legitimately
accessed. Moreover, in today’s competitive world, it is essential to do business
electronically and be interconnected with the Internet. Being present on the Internet is
a basic requirement for success.
   Organizations face three types of economic impact as possible results of malicious
attacks targeting them: immediate, short-term, and long-term economic impacts.
The immediate economic impact is the cost of repairing, modifying, or replacing
systems (when needed) and the immediate losses due to disruption of business
operations, transactions, and cash flows. Short-term economic impact is the cost
to an organization of the loss of contractual relationships or existing customers
because of the inability to deliver products or services, as well as the negative
impact on the organization's reputation. Long-term economic impact is
induced by the decline in an organization’s market appraisal.
   During the last decade, enterprises, administrations, and other business structures
have spent billions of dollars on network security, information protection, and losses
of assets due to hackers' attacks, and the rate at which they expend these funds
continues to increase sharply. This requires business structures to plan and build
efficient strategies that address these issues in a cost-effective manner. They also
need to spend substantial amounts on security awareness and employee training.
   Network attacks may cost organizations hours or even days of system downtime
and cause serious violations of data confidentiality, resource integrity, and
client/employee privacy. Depending on the level of the attack and the type of
information that has been compromised, the consequences of network attacks range
from simple annoyance and inconvenience to complete devastation, and the cost of
recovery can range from hundreds to millions of dollars. Various studies, including a
long-running annual survey conducted by the Federal Bureau of Investigation (FBI)
and the Computer Security Institute (CSI), have highlighted some interesting
numbers related to these costs, and the Australian computer crime and security
survey has reported similar findings. These surveys mainly derived their figures from
a large number of responses collected from computer and network security
practitioners in business organizations.




                          Lesson 9: What Is VRML?


   VRML (Virtual Reality Modelling Language, pronounced vermal or by its
initials, originally—before 1995—known as the Virtual Reality Markup Language) is
a standard file format for representing 3-dimensional (3D) interactive vector graphics,
designed particularly with the World Wide Web in mind.
Format

   VRML is a text file format where vertices and edges for a 3D polygon can be
specified along with the surface color, UV mapped textures, shininess, transparency,
and so on. URLs can be associated with graphical components so that a web browser
might fetch a webpage or a new VRML file from the Internet when the user clicks on
the specific graphical component. Animations, sounds, lighting, and other aspects of
the virtual world can interact with the user or may be triggered by external events
such as timers. A special Script Node allows the addition of program code (written in
Java or JavaScript (ECMAScript)) to a VRML file.
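   The node-based text format described above can be illustrated with a minimal
sketch of a VRML97 world. The file content and the URL below are placeholders,
not taken from any real scene; the Anchor node shows how a URL can be attached to
a graphical component so that clicking it loads another resource.

```vrml
#VRML V2.0 utf8
# A translucent blue box with surface material properties.
Shape {
  appearance Appearance {
    material Material { diffuseColor 0.2 0.6 1.0  transparency 0.3 }
  }
  geometry Box { size 2 2 2 }
}
# Clicking the sphere would fetch another world (placeholder URL).
Anchor {
  url "http://example.com/next-world.wrl"
  children [
    Shape { geometry Sphere { radius 0.5 } }
  ]
}
```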

   VRML files are commonly called "worlds" and have the *.wrl extension (for
example island.wrl). Although VRML worlds use a text format, they may often be
compressed using gzip so that they transfer over the Internet more quickly (some
gzip-compressed files use the *.wrz extension). Many 3D modeling programs can
save objects and scenes in VRML format.
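   The gzip compression mentioned above can be sketched with ordinary
command-line tools; the file name and content here are placeholders, not a real
scene.

```shell
# Write a minimal VRML world (placeholder content).
printf '#VRML V2.0 utf8\nShape { geometry Box {} }\n' > island.wrl

# Compress it; gzip-compressed worlds conventionally use the *.wrz extension.
gzip -c island.wrl > island.wrz

# Browsers that support the format decompress such files transparently.
ls -l island.wrl island.wrz
```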

Standardization

   The Web3D Consortium has been formed to further the collective development of
the format. VRML (and its successor, X3D), have been accepted as international
standards by the International Organization for Standardization (ISO).

   The first version of VRML was specified in November 1994. This version was
specified from, and very closely resembled, the API and file format of the Open
Inventor software component, originally developed by SGI. The current and
functionally complete version is VRML97 (ISO/IEC 14772-1:1997). VRML has now
been superseded by X3D (ISO/IEC 19775-1).

Emergence, popularity, and rival technologies

   The term VRML was coined by Dave Raggett in a paper submitted to The First
International Conference on the World-Wide Web in 1994, and first discussed at the
WWW94 VRML BOF established by Tim Berners-Lee, where Mark Pesce presented
the Labyrinth demo he developed with Tony Parisi and Peter Kennard.

   In 1997, a new version of the format was finalized, as VRML97 (also known as
VRML2 or VRML 2.0), and became an ISO standard. VRML97 was used on the
Internet on some personal homepages and sites such as "CyberTown", which offered
3D chat using Blaxxun Software. The format was championed by SGI's Cosmo
Software; when SGI restructured in 1998 the division was sold to Platinum
Technology, which was then taken over by Computer Associates, which did not
develop or distribute the software. To fill the void a variety of proprietary Web 3D
formats emerged over the next few years, including Microsoft Chrome and Adobe
Atmosphere, neither of which is supported today. VRML's capabilities remained
largely the same while realtime 3D graphics kept improving. The VRML Consortium
changed its name to the Web3D Consortium, and began work on the successor to
VRML—X3D.

   SGI ran a web site at vrml.sgi.com on which was hosted a string of regular short
performances of a character called "Floops" who was a VRML character in a VRML
world. Floops was a creation of a company called "Protozoa".

   H-Anim is a standard for animated Humanoids, which is based around VRML,
and later X3D. The initial version 1.0 of the H-Anim standard was scheduled for
submission at the end of March 1998.

   VRML provoked much interest but has never seen much serious widespread use.
One reason for this may have been the lack of available bandwidth. At the time of
VRML's popularity, a majority of users, both business and personal, were using slow
dial-up internet access. This had the unfortunate side effect of having users wait for
extended periods of time only to find a blocky, ill-lit room with distorted text hanging
in seemingly random locations.

   VRML experimentation was primarily in education and research where an open
specification is most valued. It has now been re-engineered as X3D. The MPEG-4
Interactive Profile (ISO/IEC 14496) was based on VRML (now on X3D), and X3D is
largely backward-compatible with it. VRML is also widely used as a file format for
interchange of 3D models, particularly from CAD systems.

   A free cross-platform runtime implementation of VRML is available in
OpenVRML. Its libraries can be used to add both VRML and X3D support to
applications, and a GTK+ plugin is available to render VRML/X3D worlds in web
browsers.




               Lesson 10: Windows Security – Password


   Passwords play an important role in information security and other forms of
authentication, providing a low-tech solution for protecting resources that should
not be readily available to unauthenticated or unauthorized people or services. If we
think about the passwords we have and the type of information they protect, their
importance becomes clear. For instance, what if we were able to register usernames
for social sites such as Twitter, Facebook, and LinkedIn without using passwords?
Without some sort of authentication mechanism, anyone would be able to access
your account data and change information without your approval. Apply the same
thought process to the work environment: what if corporate resources did not
require some sort of strong authentication? Think about some of the most important
information assets stored in your organization and what the impact could be if
casual access were permitted.
   Windows password storage and security are often the last line of defense for
protecting information stored locally on computers and for protecting Windows
domain access to resources. Unfortunately, in some cases the use of passwords is the
only line of defense, which can leave organizations with very little security
protecting their most important assets.
    Before moving directly into the dangers associated with attacks against Windows
passwords and a number of attack scenarios, it makes good sense to review how
Windows systems store passwords and how policies are used to enhance password
security and limit unauthorized access. Learning about the types, storage, and policies
used in the Windows implementation of passwords will help provide a solid
understanding of how attacks against them are possible.
   Windows operating systems offer several different methods of storing password
information. The primary goal of Windows password storage is to provide a secure
method of storing passwords on the operating system or within Active Directory and
offer a mechanism to authenticate users and services. Additionally, password storage
systems allow administrators to define rules and apply policies that ensure
passwords are complex enough to protect systems against unauthorized access.
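   As a rough illustration of the kind of complexity rule such a policy might
enforce, consider the following sketch. The thresholds and character-class rule are
illustrative assumptions, not Microsoft's actual policy, which administrators
configure per system or domain.

```python
import string

def meets_complexity_policy(password: str, min_length: int = 8) -> bool:
    """Illustrative complexity check: a minimum length plus at least three of
    four character classes (upper, lower, digit, symbol). Real Windows
    policies are configured by administrators and may differ."""
    if len(password) < min_length:
        return False
    classes = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3

print(meets_complexity_policy("Tr0ub4dor&3"))  # True
print(meets_complexity_policy("password"))     # False
```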
    The Security Accounts Manager (SAM) is a vital component of how Windows
stores passwords locally on the computer system. Storage of user and account
information in the SAM database provides system users the ability to authenticate to
the local system if an account has been created for them.
    During normal operation of a Windows system, the SAM database cannot be
copied, due to restrictions enforced by the operating system kernel. The SAM
database is stored in two places within Windows: the primary location used for
password storage, and a backup of the main file in case recovery is required during a
repair process. Offline attacks against the contents of the SAM database are
nevertheless possible, because its contents are also stored in memory.
   The LAN Manager hash (LM hash) is a method of storing passwords within the
Windows operating system in hashed form, as an alternative to storing them in clear
text. When a password is 14 characters or shorter, both an LM hash and an NTLM
hash are generated and stored in the local SAM database or in Active Directory.
When a password is longer than 14 characters, an LM hash cannot be created, and
thus none is stored for the password. This process occurs when a new account and
password are created or when a change is made to an existing password.
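   The 14-character limit falls out of how the LM scheme preprocesses a password
before hashing. The sketch below shows only those preprocessing steps (uppercase,
pad to 14 bytes, split into two 7-byte halves); the final DES encryption step that
produces the actual hash is deliberately omitted, so this is not a complete LM hash
implementation.

```python
def lm_preprocess(password: str):
    """Sketch of LM hash preprocessing (the final DES step is omitted).

    LM hashing uppercases the password, requires it to fit in 14 bytes, and
    splits it into two independent 7-byte halves -- which is why the scheme
    cannot represent passwords longer than 14 characters, and why each half
    can be attacked separately.
    """
    if len(password) > 14:
        return None  # no LM hash is generated for longer passwords
    data = password.upper().encode("ascii", errors="replace").ljust(14, b"\x00")
    return data[:7], data[7:]

print(lm_preprocess("Secret1"))
print(lm_preprocess("ThisPasswordIsTooLong"))  # None
```

The independence of the two halves is what makes LM hashes so weak: an attacker
can brute-force each 7-character half on its own.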
   Password and lockout policies are rules an administrator can impose on how the
Windows operating system or Windows domain handles user logon attempts and
password implementations. These rules can be defined on a computer locally or
globally by modifying the domain password policies. Administrators can modify
default password and logon policies to help protect systems and the domain from
password attacks.
   Understanding how to manage these types of policies can help administrators
reduce the chances of a successful password attack. Understanding how Windows
stores passwords and the knowledge of some common attack methodologies can help
attackers identify weaknesses and opportunities for obtaining credentials stored on
Windows operating systems. Several different approaches can be taken to gain
access to Microsoft operating systems, depending on the environment the attacker is
in and the state of the network's existing security.
    Many times attackers are able to gain access to passwords and password hashes
stored on Microsoft operating systems by leveraging vulnerabilities present due to the
lack of a consistent patch management methodology. In organizations where effective
patch management policies are not developed or followed, the likelihood of an attack
resulting in an attacker gaining access to systems and obtaining passwords is
significantly increased. This threat is further increased when operating systems are
missing patches and stable exploit code is readily available to leverage the
vulnerabilities present on operating systems.
   During the fingerprinting phase of network attacks, an attacker will identify target
systems and operating system types to determine what the network landscape looks
like. This information gathering also allows the attacker to determine what types of
attacks may be fruitful during the exploitation phases of an attack. Part of
determining the exploitability of password attacks against Windows operating systems includes
identifying system password policies. These policies determine if an attacker can or
will perform password guessing, dictionary, and brute force attacks against the
operating system.
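   A dictionary attack of the kind mentioned above can be sketched as follows.
SHA-256 stands in here as an illustrative stand-in hash; a real attack on Windows
credentials would target LM or NTLM hashes instead, and the wordlist and target
below are made-up examples.

```python
import hashlib

def dictionary_attack(target_hash: str, wordlist):
    """Try each candidate word; return the first whose SHA-256 digest
    matches the target hash, or None if the wordlist is exhausted."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

wordlist = ["letmein", "password", "hunter2"]
target = hashlib.sha256(b"hunter2").hexdigest()  # hash the attacker captured
print(dictionary_attack(target, wordlist))  # hunter2
```

This is why password policies that force length and complexity matter: they push
real passwords out of the small set of candidates a wordlist can cover.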
   What are the dangers associated with password attacks? Well, it is almost anything
that you can imagine. Remember, passwords are designed to restrict access to
information that only authenticated and authorized people are allowed access to.
Passwords are implemented at many places within corporate networks. For instance,
what if an attacker gained access to the password that protects customer data stored on
a Microsoft SQL Server database? It is feasible that an attacker may be able to copy
entire transaction histories, delete database contents, modify values, and ultimately
cause serious service disruptions.
   As part of an enterprise-wide risk assessment and its identified threat scenarios,
stakeholders must consider the threats facing the organization; this is one of the best
things an organization can do to understand the dangers associated with successful
attacks. Once a password attack succeeds, organizations must consider the
possibility that all confidentiality and integrity have been lost, depending on the
scope of the attack and the access gained. Depending on the contingency plan in
place, the mitigating controls, and the availability of reliable backup data, this
impact can be severe.
   So far we have covered much of the background on how Microsoft implements
passwords and password security and how some types of password attacks may be
conducted against Microsoft Windows targets. In the following scenarios, we will
explore some of the common attacks that attackers perform to gain access to
passwords and password hashes. You will also learn about some of the most
common tools used to conduct these attacks and quickly be able to see how
dangerous they can be.
   Although the tools listed in this section are some of the most popular tools in use
today, it is important to understand many more tools are available. In some cases,
tools are developed for very specific tasks and password attacks depending on the
attackers’ goals. Password cracking tools, logon crackers, and tools used for
enumeration are widely available, and as new protocols and services are developed,
you can be certain more tools will be developed.
