Supercomputers

Overview Supercomputers can be defined as the most advanced and powerful
computers, or array of computers, in existence at the time of their construction.
Supercomputers are used to solve problems that are too complex or too massive for
standard computers, like calculating how individual molecules move in a tornado, or
forecasting detailed weather patterns. Some supercomputers are single computers
consisting of multiple processors; others are clusters of computers that work together.

History Supercomputers were first developed in the mid-1970s, when Seymour Cray
introduced the "Cray-1" supercomputer in 1976. Because microprocessors were not yet
available, its processor consisted of individual integrated circuits. Successive
generations of supercomputers developed by Cray became more powerful with each
version. After the introduction of the Cray-1, other companies such as IBM, NEC, Texas
Instruments and Unisys began to design and manufacture faster and more powerful
computers. You can read more about Seymour Cray and other leading figures in
supercomputer technology at www.computerhalloffame.org/. The history of
supercomputers can be viewed in detail at this Web page, and the Digital Century Web
site offers a general overview of the history of computer development.

Today's fastest supercomputers include IBM's Blue Gene and ASCI Purple, SCC's
Beowulf, and Cray's SV2. These supercomputers are usually designed to carry out
specific tasks. For example, IBM's ASCI Purple is a $250 million supercomputer built
for the Department of Energy (DOE). This computer, with a peak speed of 467
teraflops, is used to simulate aging and the operation of nuclear weapons. Learn all
about this project by linking to this article. Future supercomputer designs might
incorporate entirely new circuit-miniaturization technologies, which could include new
storage devices and data-transfer systems. Scientists at UCLA are currently working on
processor and circuit designs involving series of molecules that behave like transistors.
By incorporating this technology, new designs might include processors 10,000 times
smaller, yet much more powerful, than any current models. A
comprehensive article about this research can be found at
www.applesforhealth.com/supercomp1.html.

Processing Speeds Supercomputer computational power is rated in FLOPS
(Floating Point Operations Per Second). The first commercially available
supercomputers reached speeds of 10 to 100 million FLOPS. The next
generation of supercomputers (some of which are presently in the early stages
of development) is predicted to break the petaflop level. This would represent
computing power more than 1,000 times faster than a teraflop machine. To put
these processing speeds in perspective, a relatively old supercomputer such as
the Cray C90 (built in the mid to late 1990s) has a processing speed of only 8
gigaflops. It can solve a problem that takes a personal computer a few hours in
.002 seconds! The site www.top500.org/ is dedicated to providing information
about the current 500 sites with the fastest supercomputers. Both the list and
the content at the site are updated regularly, providing those interested with a
wealth of information about developments in supercomputing technology.
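
To make these speed ratings concrete, the short sketch below (a hypothetical
back-of-the-envelope calculation, not drawn from any source above) relates gigaflops,
teraflops and petaflops and estimates how long a fixed workload would take at each
rate; the workload size is an arbitrary illustration.

    # Hypothetical illustration of FLOPS units and runtime estimates.
    # The workload size below is arbitrary and not taken from the text above.

    GIGAFLOPS = 1e9    # billion floating-point operations per second
    TERAFLOPS = 1e12   # trillion
    PETAFLOPS = 1e15   # quadrillion

    def runtime_seconds(total_operations, rate_flops):
        """Time to finish a workload at a given sustained rate."""
        return total_operations / rate_flops

    workload = 1e15    # illustrative workload: 10^15 floating-point operations

    for label, rate in [("8-gigaflop machine", 8 * GIGAFLOPS),
                        ("1-teraflop machine", 1 * TERAFLOPS),
                        ("1-petaflop machine", 1 * PETAFLOPS)]:
        print(f"{label}: {runtime_seconds(workload, rate):,.1f} seconds")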

Supercomputer Architecture Supercomputer design varies from model to model.
Generally, there are vector computers and parallel computers. Detailed information
about both kinds of architecture can be found at www.sdsc.edu/discovery/lo/sc.htm.
Vector computers use a very fast data "pipeline" to move data from the computer's
components and memory to a central processor. Parallel computers use multiple
processors, each with its own memory bank, to split up data-intensive tasks.

A good analogy for contrasting vector and parallel computers is that a vector computer
can be represented as a single person solving a series of 20 math problems in
consecutive order, while a parallel computer can be represented as 20 people, each
solving one math problem in the series. Even if the single person (vector) were a master
mathematician, the 20 people would finish the series much more quickly. Other major
differences between vector and parallel processors include how data is handled and how
each machine allocates memory. A vector machine is usually a single super-fast
processor with all the computer's memory allocated to its operation. A parallel machine
has multiple processors, each with its own memory. Vector machines are easier to
program, while parallel machines, with data from multiple processors (in some cases
greater than 10,000 processors), can be tricky to orchestrate. To continue the analogy,
20 people working together (parallel) could have trouble with communication of data
between them, whereas a single person (vector) would entirely avoid these
communication complexities.
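
The analogy can be sketched in a few lines of code. The toy example below is
illustrative only: the "math problems" are stand-in computations and the worker count
is arbitrary. It solves 20 independent tasks first sequentially, as a single fast
"vector" worker would, and then splits them across a pool of worker processes, as a
parallel machine does; the overhead of distributing work and gathering results mirrors
the communication cost described above.

    # Illustrative contrast for the analogy above: one worker solving 20
    # problems in order versus several workers each taking a share.
    # The "problem" here is a stand-in computation, not from the text.
    import time
    from concurrent.futures import ProcessPoolExecutor

    def solve_problem(n):
        """A stand-in 'math problem': sum a modest series."""
        return sum(i * i for i in range(200_000 + n))

    def sequential(problems):
        # A single (very capable) worker solves every problem in consecutive order.
        return [solve_problem(p) for p in problems]

    def parallel(problems, workers=4):
        # The problems are split across several workers; distributing the work
        # and gathering the results is where communication overhead appears.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(solve_problem, problems))

    if __name__ == "__main__":
        problems = list(range(20))
        t0 = time.perf_counter(); sequential(problems); t1 = time.perf_counter()
        parallel(problems);                              t2 = time.perf_counter()
        print(f"sequential: {t1 - t0:.2f} s, parallel: {t2 - t1:.2f} s")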

Recently, parallel vector computers have been developed to take advantage of both
designs. For more information about this design, visit this Netlib.org page.

Uses of Supercomputers Supercomputers are called upon to perform the most
compute-intensive tasks of modern times. As supercomputers have developed in the last
30 years, so have the tasks they typically perform. Modeling of complex real-world
systems such as fluid dynamics, weather patterns, seismic activity prediction, and
nuclear explosion dynamics represents the most modern application of supercomputers.
Other tasks include human genome sequencing, credit card transaction processing, and
the design and testing of modern aircraft.

Manufacturers Although there are numerous companies that manufacture
supercomputers, information about purchasing one is not always easy to find on the
Internet. The price tag for a custom-built supercomputer can range anywhere from about
$500,000 for a Beowulf system, up to millions of dollars for the newest and fastest
supercomputers. Cray provides an informative Web site (www.cray.com/) with product
descriptions, photos, company information, and an index of current developments.

Scyld Computing Corporation (SCC) provides a Web site (www.scyld.com/) with
detailed information about their Beowulf Operating System and the computers
developed to allow multiple systems to operate under one platform.

IBM has produced, and continues to produce, some of the most cutting-edge
supercomputer technology. For information about IBM supercomputers visit
www.ibm.com/. Their “Blue Gene” supercomputer, being constructed in collaboration
with Lawrence Livermore National Labs, is expected to run 15 times faster (at 200
teraflops) than their current supercomputers. Read all about this project by visiting this
link. IBM is also currently working on what they call a "self-aware" supercomputer,
named "Blue Sky", for The National Center for Atmospheric Research (NCAR) in
Boulder, Colorado. The Blue Sky will be used to work on colossal computing problems
such as weather prediction. Additionally, this supercomputer can self-repair, requiring
no human intervention. Read all about Blue Sky in the article found here.

Intel has developed a line of supercomputers known as Intel TFLOPS:
supercomputers that use thousands of Pentium Pro processors in a parallel
configuration to meet the supercomputing demands of their customers. Information
about Intel supercomputers can be found at Intel's Web site (www.intel.com) or by
reading this article.


What is a Supercomputer?

A supercomputer is defined simply as the most powerful class of computers at any
point in time. Supercomputers are used to solve large and complex problems that are
insurmountable by smaller, less powerful computers. Since the pioneering Cray-1®
system arrived in 1976, supercomputers have made a significant contribution to the
advancement of knowledge and the quality of human life. Problems of major economic,
scientific and strategic importance typically are addressed by supercomputers years
before becoming tractable on less-capable systems.

In conjunction with some of the world's most creative scientific and engineering
minds, these formidable tools already have made automobiles safer and more fuel-
efficient; located new deposits of oil and gas; saved lives and property by predicting
severe storms; created new materials and life-saving drugs; powered advances in
electronics and visualization; safeguarded national security; and unraveled mysteries
ranging from protein-folding mechanisms to the shape of the universe.

Capable supercomputers are in short supply.

Today's supercomputer market is replete with "commodity clusters," products
assembled from collections of servers or PCs. Clusters are adept at tackling small
problems and large problems lacking complexity, but are inefficient at the most
demanding, consequential challenges - especially those of industry. Climate research
algorithms, for example, are unable to achieve high levels of performance on these
computers.

The primary "design points" for today's clusters are server and PC markets, not
supercomputing. Christopher Lazou, a high-performance computing consultant,
explains, "Using tens of thousands of commodity chips may provide the capacity
(peak flop rates) but not the capability, because of lack of memory bandwidth to a
very large shared memory." Cray's product portfolio addresses this issue with high-
bandwidth offerings.

High-end Supercomputers

For important classes of applications, there is no substitute for supercomputers
designed specifically not only for performance, but also for high bandwidth and low
latency. Historically, this has been accomplished through vector architectures and,
more recently, multi-threaded architectures. These specialized supercomputers are
built to meet the most challenging computing problems in the world.

Today, new technology and innovation at Cray Inc. have allowed for a new class of
supercomputers that combines the performance characteristics of vector
supercomputers with the scalability of commodity clusters to achieve both high-
efficiency and extreme performance in a scalable system architecture. These
characteristics are embodied in the recently announced Cray X1™ system.

The Future of Supercomputing

Applications promising future competitive and scientific advantage create an
insatiable demand for more supercomputer power - 10 to 1,000 times greater than
anything available today, according to users. Automotive companies are targeting
increased passenger cabin comfort, improved safety and handling. Aerospace firms
envision more efficient planes and space vehicles. The petroleum industry wants to
"see" subsurface phenomena in greater detail. Urban planners hope to ease traffic
congestion. Integrated digital imaging and virtual surgery - including simulated
sense of touch - are high on the wish list in medicine. The sequencing of the human
genome promises to open an era of burgeoning research and commercial enterprise
in the life sciences.

As the demand for supercomputing power increases and the market expands, Cray's
focus remains on providing superior real-world performance. Today's "theoretical
peak performance" and benchmark tests are evolving to match the requirements of
science and industry, and Cray supercomputing systems will provide the tools they
need to solve their most complex computational problems.

    American Super Computer released to Russia
                 a business adventure of
          ROY International Consultancy, Inc.!



May 18, 2000 - Moscow - The first mainframe high-power Super Computer
has been exported from the USA to Russia. The deal is part of a long-term contract
between the Russian oil exploration firm Tatneftgeophysika and ROY
International Consultancy Inc., headed by Dr. Cherian Eapen, who shuttles
between the USA and Russia. Last Christmas Day, the Super Computer - Sun
Microsystems' "Starfire Enterprise10000" - was installed at the specially
prepared site of the client in Bugulma, an interior town of the Tatarstan Republic
of Russia.

The President of Sun Microsystems Corporation, Mr. Scott McNealy (who has
challenged Mr. Bill Gates and Microsoft on technology and legal
competency), congratulated the President of ROY International, Dr. Cherian
Eapen, on the great effort of shuttling between the USA and Russia
and making the impossible possible. "It was a 'Christmas Starfire', a
precious Christmas gift to Russia from America. This is an opening of high-
power computer usage in this geography for peaceful purposes - a new bridge
opened between the two technology Super Powers of the world," he said.

The Starfire Enterprise10000 was purchased for the seismological data
processing center of Tatneftgeophysika, a giant in the geophysical business
across the whole region of the former Soviet Union. In spite of existing
financial and economic problems, Russian geophysical firms are
struggling hard to stand on their own legs by procuring the most modern
technology for the computerization and processing of geological
and geophysical data. By 1999, the majority of Russia's geophysical firms had
achieved the automation of their production centers. The year 2000 opens the
second phase of modernization and reconstruction of geophysical computing
centers, focusing mainly on upgrading the power and speed of data
interpretation.

At present, Russian seismological surveys for oil and gas derive all
their data from two-dimensional (2D) data, which should be replaced with three-
dimensional (3D) data. Without 3D data, accurate and high-quality identification of
hydrocarbon fields is impossible. The 3D procedure increases the cost of the
complicated research work, requiring high-power computers and software, but it
yields a substantial economic advantage, to the tune of 20 to 30%.

In order to become competitive in the already saturated 3D seismic market,
traditional geophysical firms started spending large amounts to modernize
their computing centers. They started inviting companies specialized in this
sphere -- systems integrators -- with given criteria of price, optimum
efficiency, and productivity of the technical solution, taking into consideration all
aspects of the technology for processing geophysical information.

One such experienced systems integrator working in the CIS is ROY
International Consultancy Inc., whose main activity is the project design and
realization of corporate computing, especially of computing centers for the oil
and gas field. Founded in 1988, ROY International is the leading systems
integrator specialized in developing large computer systems for corporate
computing centers. ROY International is the largest supplier of highly reliable
and secure UNIX-based enterprise-wide systems in the CIS. To date,
ROY International has designed and installed 300 projects throughout the CIS
countries to modernize and reconstruct computing centers, installing more
than 2,000 high-power Sun workstations and servers, porting major world-class
software, providing networking, and so on.

Bashkiristan Neftgeophysika, Udmrtneftgeophysika, Khantimansisk
Geophysika, Sakhalinsk Neftgeophysika, Moormansk Neftgeophysika, Central
Geophysical Expedition, VNIIGAS, Sever Gazprom, Orenburg Gazprom,
Lukoil Kogalym, Yukos Moscow, Luk-Arco, Slavneft Tver, etc., to name a few,
are the leading computing centers installed by ROY International in the oil and
gas field.

At present, ROY International is completing the final installation at
Tatneftgeophysika, one of the major geophysical companies of Russia. Within
the framework of this project, ROY International is finalizing the
installation of a Sun supercomputer together with the world's leading
software. This complex is specially designed for 3D geophysical data processing.

The world's leading oil and gas producers love the characteristics of the
Enterprise10000, also called the 'CRAY-Killer'. With 18 new UltraSPARC-II
microprocessors, a terabyte storage array, a terabyte ETL tape library,
more than 100 high-power Sun workstations, and networking, this center
is the most powerful computing center in Russia and all the CIS countries.

Being the Systems Integrator, ROY International, after several negotiations,
selected Paradigm Geophysical software (Israel) for Data Processing and
Schlumberger GeoQuest Software (France) for Data Interpretation. This is in
addition to the existing Data Processing Software, Landmark, of America.
ROY International also has agreements with manufacturers like Storage Tech
(USA), Qualstar Corporation (USA), E.R.Mapping (Australia), M4 Data Ltd.,
(England), Fujitsu (Japan), 3Com (USA), OYO Plotters and Instruments
(Japan), etc., and also with various Russian manufacturers and software
developers.

ROY International's trained specialists and engineers completed the
networking job within two weeks, and the software installation and training have just
been completed. Four of ROY International's specialists hold Ph.D.
qualifications in this field. Processing and interpretation of the data are handled by
more than 400 highly qualified employees of Tatneftgeophysica.

The General Director of Tatneftgeophysika, Mr. Rinat Kharisov, said, "This is
the second time we are entering a new era of major technological
modernization of our computing center, which is being executed by ROY
International. Six years ago, we modernized our computing center with
the help of ROY International. They replaced the Russian ES computers with
the Sun SPARCcenter 2000, which increased our computing power 20-fold. The
present installation increases our power another 70-fold. This
enables us to obtain the results of data interpretation while saving substantial time
and money, and we can compete in the global market."

"The new Super Computer project once more confirms that the trend in Russia
is to set up large scale information centers, for which high end Super
Computers are required", said Dr. Cherian Eapen, President of ROY
International. "It was a difficult task to get licenses from all US Govt.
Departments. The application for the export license, the non-proliferation
compliance letter and other documents were routed through the Russian Ministry of Fuel
and Energy and through the US Embassy in Moscow to the Bureau of Export
Administration in Washington, DC. The procedure to grant permission for the use of the
Supercomputer by a civilian customer in Russia took a long time, as Russia is still on the
list of countries of nuclear-proliferation concern. Since ROY International has a clean
record, has no military deals, and is strict about working only on peaceful production
activities, it had an advantage in the license application. The Departments of
Commerce, State, Defense, Atom Energy and Ecology cleared the license
application for this Super Computer earlier, and finally the Department of
Energy also gave the green light to airlift it to Russia."

This 2-ton Super Computer complex was flown from San Francisco to
Amsterdam by Lufthansa Cargo, and from there to Moscow and on to Nabereshni
Chelni, an airport near the end user in the Tatarstan Republic, by a chartered
flight arranged by ROY International.

Dr. Cherian Eapen said, "With the highest security safeguard procedures,
we were able to deliver the system to the pre-approved and designed site of
the computing center, per the license requirements. Every moment was tense due
to the danger of physical diversion during shipment.
One of our employees from Moscow, Victor, traveled with the freight
forwarders and security crew and informed me every hour of the progress of
loading and offloading and of air and ground transport to the destination. At about
4 AM on December 25, Christmas morning, I got the final call for that day
from Victor, asking me to take a rest as the job was completed, and
requesting permission to celebrate the installation of the Super Computer
by opening a bottle of vodka."


  BiO News


  Posted By: gmax
  Date: 10/06/00 16:48
  Summary: BioInform: IBM Designs Architecture for Blue Gene Supercomputer,
  Collaborates with Academia
  ``Since IBM's announcement last year that it would spend $100 million to build
  a supercomputer called Blue Gene for protein folding research, it has begun
  collaborating with scientists at Indiana University, Columbia University, and the
  University of Pennsylvania on some of the mathematical techniques and
  software needed for the system.

  ``The company has also decided to use a cellular architecture for the machine,
  where it will use simple pieces and replicate them on a large scale. Protein
  folding research requires advances in computational power and molecular
  dynamics techniques - the mathematical method for calculating the movement
  of atoms in the formation of proteins, said Joe Jasinski, IBM's newly appointed
  senior manager of Blue Gene and the computational biology center.

  ```The first problem that we are attacking with Blue Gene is to understand at the
  atomic level the detailed dynamics, the motions involved in protein folding,'
  Jasinski said. `That's a very computationally intensive problem which requires
  at least a petaflop computer and probably something bigger.'

  ``Most of the system software as well as the routines that will drive the
  applications are being developed by IBM's computational biology group, which
  was formed in 1992 and now numbers about 40 scientists and engineers.''




DIFFERENT SUPERCOMPUTERS


Canada's Fastest Computer Simulates
Galaxies In Collision
by Nicolle Wahl
Toronto - Jul 25, 2003
A $900,000 supercomputer at the University of
Toronto -- the fastest computer in Canada -- is
heating up astrophysics research in this country and
burning its way up the list of the world's fastest
computers.

The new computer, part of the Department of Astronomy and Astrophysics and the
Canadian Institute for Theoretical Astrophysics (CITA), was ranked as the fastest
computer in Canada and the 39th fastest in the world in the latest list from
www.top500.org, compiled by the Universities of Mannheim and Tennessee and the
National Energy Research Scientific Computing Center at Lawrence Berkeley
National Laboratory.

[Image caption: Shortened sequence of images showing the detailed interaction of two
galaxies colliding.]

"An essential element of modern astrophysics is the ability to carry out large-
scale simulations of the cosmos, to complement the amazing observations being
undertaken," said Professor Peter Martin, chair of astronomy and astrophysics
and a CITA investigator.

"With the simulations possible on this computer, we have in effect a laboratory
where we can test our understanding of astronomical phenomena ranging from
the development of structure in the universe over 14 billion years to the
development of new planets in star-forming systems today."

When the computer, created by the HPC division of Mynix Technology of
Montreal (now a part of Ciara Technologies), starts its calculations, the 512
individual central processing units can heat up to 65 C, requiring extra ventilation
and air-conditioning to keep the unit functioning.

But with that heat comes the capability of performing more than one trillion
calculations per second, opening the door to more complex and comprehensive
simulations of the universe. It is the only Canadian machine to break the Teraflop
barrier -- one trillion calculations per second -- and it's the fastest computer in the
world devoted to a wide spectrum of astrophysics research.

"This new computer lets us solve a variety of problems with better resolution than
can be achieved with any other supercomputer in Canada," said Chris Loken,
CITA's computing facility manager. "Astrophysics is a science that needs a lot of
computer horsepower and memory and that's what this machine can provide.
The simulations are also enabled by in-house development of sophisticated
parallel numerical codes that fully exploit the computer's capabilities."

The machine, nicknamed McKenzie (after the McKenzie Brothers comedy sketch
on SCTV), with 268 gigabytes of memory and 40 terabytes of disk space,
consists of two master nodes (Bob and Doug), 256 compute nodes, and eight
development nodes. All of these are networked together using a novel gigabit
networking scheme that was developed and implemented at CITA.

Essentially, the two gigabit Ethernet ports on each node are used to create a
"mesh" that connects every machine directly to another machine and to one of 17
inexpensive gigabit switches. It took four people about two days and two
kilometres of cable to connect this network. The unique CITA design drives down
the networking cost in the computer by at least a factor of five and the innovative
system has attracted industry attention.
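
As a rough illustration of why a two-port-per-node design stays inexpensive, the
sketch below counts cables and switch ports under simplified assumptions (one port of
each node used for a direct peer link, the other for an uplink to one of 17 switches).
The actual CITA wiring plan is not described in detail here, so these figures are only
an estimate, not the real numbers.

    # Rough, simplified estimate of cabling in a two-port-per-node cluster.
    # Assumption for illustration only: one gigabit port per node goes to a
    # direct peer link and the other to one of a few inexpensive switches;
    # this is not a description of CITA's actual wiring plan.

    nodes = 2 + 256 + 8          # master + compute + development nodes
    switches = 17

    peer_links = nodes // 2      # each direct node-to-node cable joins two nodes
    uplinks = nodes              # one switch uplink cable per node
    total_cables = peer_links + uplinks
    ports_per_switch = -(-uplinks // switches)   # ceiling division

    print(f"nodes: {nodes}")
    print(f"direct node-to-node cables: {peer_links}")
    print(f"switch uplink cables: {uplinks}")
    print(f"total cables: {total_cables}")
    print(f"about {ports_per_switch} uplink ports needed per switch")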

Professor John Dubinski has used the new computer to examine both the
formation of cosmological structure and the collisions of galaxies by simulating
the gravitational interaction of hundreds of millions of particles representing stars
and the mysterious dark matter. The anticipated collision of the Milky Way
Galaxy with our neighbouring Andromeda galaxy -- an event predicted to take
place in three billion years time -- has been modeled at unprecedented
resolution.


New simulations on the formation of supermassive black holes, again with the
highest resolution to date, have been carried out by his colleagues Professors
Ue-Li Pen and Chris Matzner. They have already uncovered clues which may
explain the mystery of why the black hole at the center of our galaxy is so much
fainter than had been expected theoretically.

The team has even grander plans for the future. "In astrophysics at the University
of Toronto we have continually exploited the latest computing technology to meet
our requirements, always within a modest budget," Martin said. "This is a highly
competitive science and to maintain our lead we are planning a computer some
ten times more powerful."




China To Build World's Most
Powerful Computer
Beijing (Xinhua) Jul 29, 2003
The Dawning Information Industry Co., Ltd., a major
Chinese manufacturer of high-performance
computers, is to build the world's most powerful
computer, capable of performing 10 trillion
calculations per second.

Scheduled to be completed by March next year, the super computer marks China's
first step in the development of a cluster computer system, the type of system with the
highest calculation speed in the world, according to a source with the company.

[Image caption: China's future science and defence needs will require ever more powerful
high performance computer systems.]

Previously, the Dawning Information Industry Co., Ltd. had successfully developed a
super computer capable of performing 4 trillion calculations per second.


Code-named "Shuguang4000A", the planned super computer covers an area
equal to a quarter of a football field, and it will use processors developed by
AMD, a United States computer chip maker.

AMD and the Chinese company have signed a cooperation agreement to develop
China's planned super computer.

The fastest existing cluster computer system in the world is capable of
calculating at a speed of 7.6 trillion calculations per second.




    First Super Computer Developed in China

    China's first super computer which is capable of making 1.027 trillion calculations per second
    showed up in Zhongguancun, known as a "Silicon Valley" in the Chinese capital Beijing,
    Thursday.

    The computer, developed by the Legend Group Corp., China's leading computer manufacturer,
    boasts the same operation speed as the 24th fastest computer in the world's top 500 super
    computers. The 23 faster super computers were all developed in Japan and the United States.

    Legend president Yang Yuanqing said the computer will be installed at the mathematics and
    system science research institute affiliated with the Chinese Academy of Sciences in early
    September. It will be used for hydromechanics calculations, the processing of petroleum and
    earthquake data, climate model calculations, materials science calculations, and DNA and
    protein calculations.

    Yang said it takes the computer only two minutes to complete a simulation of one day of global
    climatic change, compared with 20 hours on other large computers.

    Computers with super calculation speeds used to be the tools of a small number of scientists in
    labs, but they are now widely used in economic and social fields, even in film-making.

    A computer capable of making 85.1 trillion calculations per second, the highest calculation
    speed in the world, was recently developed in Japan.




Shell to use Linux supercomputer for
oil quest
December 12, 2000
Web posted at: 9:00 AM EST (1400 GMT)
LONDON, England (Reuters) -- Linux, the free
computer operating system, is expected to win
another high-profile victory on Tuesday when
Anglo-Dutch oil company Royal Dutch/Shell will
announce it is going to install the world's largest
Linux supercomputer.
Shell's Exploration & Production unit will use the supercomputer, consisting of
1,024 IBM X-Series servers, to run seismic and other geophysical applications in
its search for more oil and gas.
Data collected in Shell exploration surveys will be fed into the computer, which
will then analyze it.


The announcement comes just days after Swedish telecom operator Telia said it
would use a large Linux mainframe to serve all its Internet subscribers, replacing
a collection of Sun Microsystems servers.
"Linux is coming of age," said one source close to the deal.
Linux, developed by the Finn Linus Torvalds and a group of volunteers on the
Web, has been embraced by International Business Machines Corp. as a flexible
alternative to licensed software systems such as Microsoft's Windows or the Unix
platforms.
With Linux companies can quickly add or remove computers without worrying
about licenses for the operating software. Over the past year the software has
been tested and trialled for business critical applications. Major deals have now
started to come through.
Recently Musicland Stores Corp., the U.S. company that owns Sam Goody, said
it would install new Linux and Java-based cash registers. The most recent
announcements indicate that Linux usage is becoming more versatile, with the
operating system moving into many different applications, not just Internet
computers.




World's fastest computer
simulates Earth
Saturday, November 16, 2002 Posted: 2:57 PM EST (1957 GMT)

SAN JOSE, California (AP) -- A Japanese
supercomputer that studies the climate and other
aspects of the Earth maintained its ranking as the
world's fastest computer, according to a study
released Friday.
The Earth Simulator in Yokohama, Japan, performs 35.86 trillion
calculations per second -- more than 4 1/2 times greater than the
next-fastest machine.
Earth Simulator, built by NEC and run by the Japanese government,
first appeared on the list in June. It was the first time a
supercomputer outside the United States topped the list.
Two new machines, called "ASCI Q," debuted in the No. 2 and No. 3
spots. The computers, which each can run 7.73 trillion calculations
per second, were built by Hewlett-Packard Co. for Los Alamos
National Laboratory in New Mexico.

[Image caption: The Earth Simulator consists of 640 supercomputers that are connected by a
high-speed network.]


Clusters of personal computers rank

For the first time, high-performance machines built by clustering
personal computers appeared in the top 10.
A system built by Linux NetworX and Quadrics for Lawrence
Livermore National Laboratory ranked No. 5. A system built by High
Performance Technologies Inc. for the National Oceanic and
Atmospheric Administration's Forecast Systems Laboratory was No. 8.

Hewlett-Packard Co. led with 137 systems on the list, followed by
International Business Machines Corp. with 131 systems. No. 3 Sun
Microsystems Inc. built 88 of the top 500 systems.
The Top 500 list, which has been released twice annually since 1993, is compiled by researchers at University of
Mannheim, Germany; the Department of Energy's National Energy Research Scientific Computing Center in
Berkeley and the University of Tennessee.




G5 SUPERCOMPUTER IN THE WORKS

Interesting rumor. Virginia Tech got bumped to the head of the line with their
order of 1100 new G5 computers, so that they could build a supercomputer and
make Linpack's Top 500 list this year.

Not too surprising that Apple gave them preferential treatment. Wonder if Apple
might be tempted into making a commercial?

Even if they just posted it on-line, it might prove interesting.

Virginia Tech building supercomputer G5 cluster
By Nick dePlume, Publisher and Editor in Chief

August 30, 2003 - Virginia Tech University is building a Power Mac G5 cluster that
will result in a supercomputer estimated to be one of the top five fastest in the
world.

In yesterday's notes article, we reported that Virginia Tech had placed a large order of
dual-2GHz G5s to form a cluster. Since that time, we've received additional information,
allowing us to confirm a number of details.

According to reports, Virginia Tech placed the dual-2GHz G5 order shortly after the G5
was announced. Multiple sources said Virginia Tech has ordered 1100 units; RAM on
each is said to be upgraded to 4GB or 8GB.

The G5s will be clustered using Infiniband to form a 1100-node supercomputer
delivering over 10 Teraflops of performance. Two sources said the cluster is estimated to
be one of the top five fastest supercomputers in the world.
However, Virginia Tech's on a deadline. The university needs to have the cluster
completely set up this fall so that it can be ranked in Linpack's Top 500 Supercomputer
list.

Apple bumped Virginia Tech's order to the front of the line -- even in front of first day
orders -- to get them out the door all at once. Sources originally estimated the G5s will
arrive the last week of August; they're still on track to arrive early, possibly next week.

This information is more-or-less public within the university community but no
announcement has been made. Earlier in the month, Think Secret contacted Virginia
Tech's Associate Vice President for University Relations, who said the report was an
"interesting story" and agreed to see what he could confirm. The university didn't respond
to follow-up requests for comment.



     INDIA's 'PARAM-10,000' SUPER
               COMPUTER
    INDIA'S AMAZING PROGRESS IN THE AREA OF HI-TECH COMPUTING"




India's hi-tech expertise has made Hamare Pyare Bharat (our beloved India) a nation to reckon with.
Following is a short article by Radhakrishna Rao, a freelance writer who contributed
this material to "INDIA - Perspectives" (August 1998), page 20:

"The restrictions imposed by the United States of America on the transfer of know-how
in frontier areas of Technology, and its consistent refusal to make available to India a
range of hardware for its development, have proved to be a blessing in disguise, because
Indian scientists and engineers have now managed to develop, indigenously, most of the
components and hardware required for its rapidly advancing space and nuclear power
programmes.

It was again the refusal of the U.S. administration to clear the shipment to India of a Cray
X-MP super computer, for use by the Indian Institute of Science (IISc), Bangalore, in the
1980s, along with severe restrictions on the sale of computers exceeding 2000 Mega
Theoretical Operations per Second (MTOPS), that led India to build one of the most
powerful super computers in the world. In fact, the unveiling of the "PARAM-10,000"
super-computer, capable of performing one trillion mathematical calculations per second,
stands out as a shining example of how 'restrictions and denials' could be turned into
impressive scientific gains. For the Pune-based Centre for Development of Advanced
Computing (C-DAC), which built this super-computing machine, it was a dream come
true.

In fact, the "PARAM-10,000", based on an open-frame architecture, is considered to be
the most powerful super-computer in Asia, outside Japan. So far, only U.S.A. and Japan
have built up a proven capability to build similar types of super-computers. To be sure,
Europe is yet to build its own super-computer in this category. As it is, "PARAM-
10,000" has catapulted India into the ranks of the elite nations already in the
rarefied world of teraflop computing, which implies a capability to perform one trillion
calculations per second. In this context, a beaming Dr. Vijay P. Bhatkar, Director of C-
DAC, says, "We can now pursue our own mission-critical problems at our own pace and
on our own terms. By developing this, India's esteem in Information Technology (IT) has
been further raised."


As things stand now, "PARAM-10,000" will have applications in areas as diverse as
long-range weather forecasting, drug design, molecular modelling, remote sensing and
medical treatment. According to cyber scientists, many of the complex problems that
India's space and nuclear power programmes may encounter in the future could be solved
with "PARAM-10,000", without going in for actual ground-level physical testing. On
a more practical plane, it could help in the exploration of oil and gas deposits in various
parts of the country. Perhaps the most exciting application of "PARAM-10,000" will be in
storing information on Indian culture and heritage, beginning with Vedic times. "We
want to preserve our timeless heritage in the form of a multimedia digital library on
"PARAM-10,000"," says Dr. Bhatkar. That C-DAC could manage to build a "PARAM-
10,000" machine in less than five years is a splendid tribute to the calibre and dedication
of its scientists and engineers.

No wonder C-DAC has bagged orders for as many as three "PARAM-10,000" machines.
And two of these are from abroad: a Russian academic institute and Singapore
University are keenly awaiting the installation of "PARAM-10,000" machines on their
premises. The third machine will be used by the New Delhi-based National Informatics
Centre (NIC) for setting up a geomatics faculty designed to provide solutions in the area
of remote sensing and image processing. C-DAC is also planning to develop advanced
technologies for the creation of a national information infrastructure. Meanwhile, C-DAC
has proposed the setting up of a full fledged company to commercially exploit the
technologies developed by it. C-DAC was set up in 1988 with the mandate to build
India's own range of super-computers.

Incidently, "PARAM-10,000" is a hundred times more powerful that the first Param
machine built, way back in the early 1990's." --- (Radhakrishna Rao , Author)



IBM plans world's most powerful
Linux supercomputer
IDG News Service 7/30/03



A Japanese national research laboratory has placed an order with IBM Corp. for a
supercomputer cluster that, when completed, is expected to be the most powerful Linux-
based computer in the world.
The order, from Japan's National Institute for Advanced Industrial Science and
Technology (AIST), was announced by the company on Wednesday as it simultaneously
launched the eServer 325 system on which the cluster will be largely based. The eServer
325 is a 1U rack mount system that includes two Advanced Micro Devices Inc. Opteron
processors of either model 240, 242 or 246, said IBM in a statement.
The supercomputer ordered by AIST will be built around 1,058 of these eServer 325
systems, to make a total of 2,116 Opteron 246 processors, and an additional number of
Intel Corp. servers that include a total of 520 of the company's third-generation Itanium 2
processor, also known by its code name Madison.
The Opteron systems will collectively deliver a theoretical peak performance of 8.5
trillion calculations per second while the Itanium 2 systems will add 2.7 trillion
calculations per second to that for a total theoretical peak performance for the entire
cluster of 11.2 trillion calculations per second.
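
The totals quoted above follow from simple arithmetic; the snippet below just
re-derives them from the figures given in the article.

    # Re-deriving the AIST cluster totals quoted above.
    opteron_servers = 1058
    opteron_chips = opteron_servers * 2     # two Opteron 246 chips per eServer 325
    itanium2_chips = 520

    opteron_peak_tflops = 8.5               # trillion calculations per second
    itanium_peak_tflops = 2.7
    total_peak_tflops = opteron_peak_tflops + itanium_peak_tflops

    print(f"Opteron processors: {opteron_chips}")                     # 2116
    print(f"Itanium 2 processors: {itanium2_chips}")                  # 520
    print(f"Combined theoretical peak: {total_peak_tflops:.1f} trillion calc/s")  # 11.2
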
That would rank it just above the current most powerful Linux supercomputer, a cluster
based on Intel's Xeon processor and run by Lawrence Livermore National Laboratory
(LLNL) in the U.S. That machine has a theoretical peak performance of 11.1 trillion
calculations per second, according to the latest version of the Top 500 supercomputer
ranking.
Based on that ranking, the new machine would mean Japan is home to two out of the
three most powerful computers in the world. The current most powerful machine, the
NEC Corp.-built Earth Simulator of the Japan Marine Science and Technology Center,
has a theoretical peak performance of 41.0 trillion calculations per second while that of
the second-fastest machine, Los Alamos National Laboratory's ASCI Q, is 20.5 trillion
calculations per second.
The eServer 325 can run either the Linux or Windows operating systems and the
supercomputer ordered by AIST will run SuSE Linux Enterprise Server 8. IBM said it
expects to deliver the cluster to AIST in March, 2004. AIST will link the machine with
others as part of a supercomputer grid that will be used in research of grid technology,
life sciences bioinformatics and nanotechnology, IBM said.
General availability of the eServer 325 is expected in October this year and IBM said
prices for the computer start at US$2,919. The computers can also be accessed through
IBM's on-demand service where users pay for processing power based on capacity and
duration.
IBM's announcement is the second piece of good news for AMD and its Opteron
processor within the last two weeks. The processor, which can handle both 32-bit and 64-
bit applications, was launched in April this year.


China's Dawning Information Industry Co. Ltd. announced plans last week to build a
supercomputer based on AMD's Opteron processor. The Dawning 4000A will include
more than 2,000 Opteron processors, with a total of 2T bytes of RAM and 30T bytes of
hard-disk space and is expected to deliver performance of around 10 trillion calculations
per second. The Beijing-based company has an order for the machine but has not
disclosed the name of the buyer or when the computer will be put into service.
Opteron processors were also chosen for a supercomputer which is likely to displace the
AIST machine as the most powerful Linux supercomputer. Cray Inc. is currently
constructing a Linux-based supercomputer called Red Storm that is expected to deliver a
peak performance of 40 trillion calculations per second when it is delivered in late 2004.
Linux developer SuSE is also working with Cray on that machine.


Jefferson Team Building COTS
Supercomputer
Newport News - Jul 03, 2003
Scientists and engineers from Jefferson Lab’s Chief
Information Office have created a 'cluster
supercomputer' that, at peak operation, can process
250 billion calculations per second.

Science may be catching up with video gaming.
Physicists are hoping to adapt some of the most
potent computer components developed by
companies to capitalize on growing consumer
demands for realistic simulations that play out
across personal computer screens.

[Photo caption: Chip Watson, head of the High-Performance Computing Group (from left),
watches Walt Akers, computer engineer, and Jie Chen, computer scientist, install a Myrinet
card into a computer node.]


For researchers, that means more power, less cost, and much faster and more
accurate calculations of some of Nature's most basic, if complex, processes.

Jefferson Lab is entering the second phase of a three-year effort to create an off-
the-shelf supercomputer using the next generation of relatively inexpensive,
easily available microprocessors. Thus far, scientists and engineers from JLab's
Chief Information Office have created a "cluster supercomputer" that, at peak
operation, can process 250 billion calculations per second.

Such a 250 "gigaflops" machine -- the term marries the nickname for billion to the
abbreviation for "floating-point operations" -- will be scaled up to 800 gigaflops by
June, just shy of one trillion operations, or one teraflop.

The world's fastest computer, the Earth Simulator in Japan, currently runs at
roughly 35 teraflops; the next four most powerful machines, all in the United
States, operate in the 5.6 to 7.7 teraflops range.

The Lab cluster-supercomputer effort is part of a broader collaboration between
JLab, Brookhaven and Fermi National Laboratories and their university partners,
in a venture known as the Scientific Discovery through Advanced Computing
project, or SciDAC, administered by the Department of Energy's Office of
Science. SciDAC's aim is to routinely make available to scientists terascale
computational capability.

Such powerful machines are essential to "lattice quantum chromodynamics," or
LQCD, a theory that requires physicists to conduct rigorous calculations related
to the description of the strong-force interactions in the atomic nucleus between
quarks, the particles that many scientists believe are one of the basic building
blocks of all matter.

"The big computational initiative at JLab will be the culmination of the lattice work
we're doing now," says Chip Watson, head of the Lab's High-Performance
Computer Group. "We're prototyping these off-the-shelf computer nodes so we
can build a supercomputer. That's setting the stage for both hardware and
software. "

The Lab is also participating in the Particle Physics Data Grid, an application that
will run on a high-speed, high-capacity telecommunications network to be
deployed within the next three years that is 1,000 times faster than current
systems.

Planners intend that the Grid will give researchers across the globe instant
access to large amounts of data routinely shared among far-flung groups of
scientific collaborators.

Computational grids integrate networking, communication, computation and
information to provide a virtual platform for computation and data management in
the same way that the Internet permits users to access a wide variety of
information.

Whether users access the Grid to use one resource such as a single computer or
data archive, or to use several resources in aggregate as a coordinated, virtual
computer, in theory all Grid users will be able to "see" and make use of data in
predictable ways. To that end, software engineers are in the process of
developing a common set of computational, programmatic and
telecommunications standards.

"Data grid technology will tie together major data centers and make them
accessible to the scientific community," Watson says. "That's why we're
optimizing cluster-supercomputer design: a lot of computational clockspeed, a lot
of memory bandwidth and very fast communications."

Computational nodes are key to the success of the Lab's cluster supercomputer
approach: stripped-down versions of the circuit boards found in home computers.
The boards are placed in slim metal boxes, stacked together and interconnected
to form a cluster.

Currently the Lab is operating a 128-node cluster, and is in the process of
procuring a 256-node cluster. As the project develops, new clusters will be added
each year, and in 2005 a single cluster may have as many as 1,024 nodes. The
Lab's goal is to get to several teraflops by 2005, and reach 100 teraflops by 2010
if additional funding is available.
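
A simple way to see how node count translates into aggregate speed is to multiply the
planned cluster sizes by a per-node rate. In the sketch below the roughly
2-gigaflops-per-node figure is an assumption inferred from the 250-gigaflop, 128-node
prototype described above, not a number stated in the article.

    # Hypothetical scaling estimate: aggregate peak = nodes x per-node rate.
    # The per-node figure is an assumption inferred from the 250-gigaflop,
    # 128-node prototype mentioned above (250 / 128 is roughly 2 gigaflops).

    per_node_gflops = 250 / 128            # assumed gigaflops per node
    planned_cluster_sizes = [128, 256, 1024]

    for nodes in planned_cluster_sizes:
        aggregate_gflops = nodes * per_node_gflops
        print(f"{nodes:5d} nodes -> about {aggregate_gflops / 1000:.2f} teraflops (assumed)")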

"[Our cluster supercomputer] is architecturally different from machines built
today," Watson says. "We're wiring all the computer nodes together, to get the
equivalent of three-dimensional computing."

That can happen because of continuing increases in microprocessor power and
decreases in cost. The Lab's approach, Watson explains, is to upgrade
continuously at the lowest cost feasible, replacing the oldest third of the system
each year.


Already, he points out, the Lab's prototype supercomputer is five times cheaper
than a comparable stand-alone machine, and by next year it will be 10 times less
expensive.

Each year as developers innovate, creating more efficient methods of
interconnecting the clusters and creating better software to run LQCD
calculations, the Lab will have at its disposal a less expensive but more capable
supercomputer.

"We're always hungry for more power and speed. The calculations need it,"
Watson says. "We will grow and move on. The physics doesn't stop until we get
to 100 petaflops [100,000 teraflops], maybe by 2020. That's up to one million
times greater than our capability today. Then we can calculate reality at a fine
enough resolution to extract from theory everything we think it could tell us. After
that, who knows what comes next?"


               SUN Genome Center SuperComputing
The Genome Center of Wisconsin in May 2000 opened a new supercomputer facility
built around a Sun Microsystems Enterprise 10000 and several smaller computers. The
E 10000 is designed to provide flexible computational power, using between 1 and 36
processors as needed, configurable on the fly. In addition to its processor power, it has
36 gigabytes of memory and 3 terabytes of disk storage to provide an optimal computing
environment for genomic research. In the future, the E 10000 will be able to expand to
64 processors, 64 gigabytes of RAM and 60 terabytes of online disk storage.


On September 22, 2000, Sun Microsystems announced that Genome Center
SuperComputing was being named a Sun Center of Excellence. Being a Center of
Excellence is a statement that Sun acknowledges we are a quality center of computing
and that there is a continuing partnership between the Genome Center and Sun
Microsystems.




                             Mission


The mission of Genome Center SuperComputing is to provide genomic researchers and their
academic collaborators with access to computing power that would otherwise be outside the scope
of their organizations. In providing access to computing power, storage, local databases and most
of the commonly available Unix-based biological software, we are trying to keep researchers from
working with inadequate resources or supporting unwanted infrastructure.




Cray Supercomputer


Applications for Cray Systems

The Cray Applications Group is committed to making
available the software which is important to our
customers. Cray Inc. works with third-party software
vendors to port codes and to assure that our
customers get the best possible performance.

From bioinformatics to seismic imaging to automotive crash simulations, Cray
systems are used to run applications which solve both large and complex
computational problems.

Cray applications data sheets:

        AMBER and Cray Inc. (pdf)
        Gaussian 98 and Cray Inc. Supercomputers (pdf)
        MSC.Nastran and Cray Inc. Supercomputers (pdf)
        MSC.Nastran Performance Enhancements on Cray SV1
        Supercomputers (pdf)




Cray Professional Services




For more than 25 years, Cray has been at the forefront of high performance computing
(HPC), contributing to the advancement of science, national security, and the quality of
human life. Cray has designed, built, and supported high-performance computing solutions
for customers all around the world. Cray helps ensure the success of supercomputer
implementation by partnering with customers to provide complete solutions for the
most challenging scientific and engineering computational problems. These robust
solutions utilize Cray's deep supercomputing expertise and sterling reputation for
quality.

Cray's understanding of high-performance computing is unrivaled. Our Professional
Services Solutions give you access to some of the most savvy, experienced minds in
computing. Examples of capabilities in this area include software development,
custom hardware, extensions to Cray supercomputing products, and access to the
systems in Cray's world-class data center. We help Cray customers in all aspects of
high-performance computing, from problem analysis to solution implementation.
Cray Professional Services draws on Cray's extensive talent and expertise company-
wide.

Why engage Cray Professional Services?

      Over 25 years of experience in the HPC industry
      World-class technical expertise with access to the best minds, methods, and
       tools in the industry
      Exceptional customer service and dedication to quality

STORAGE SERVICES
Cray Professional Services provides SNIA certified SAN specialists to deliver solutions
related to high performance data storage, including SAN design and implementation.
Storage services include StorNext File System and StorNext Management Suite
implementations, RS200 extensions, custom Cray SANs, and legacy data migrations.
CUSTOM ENGINEERING
Cray has gathered some of the best engineering minds and technologies in the world
to produce its computer systems. To achieve the extreme levels of performance
found in supercomputers requires an enormous breadth and depth of leading-edge
technical talent. This talent is transferable into other high-performance applications
as well in terms of system design, code porting and optimization, system packaging,
system power and cooling technologies, and troubleshooting issues in the design and
manufacturing process.
Cray Custom Engineering also offers custom design enhancements to existing Cray
products and the use of traditional Cray hardware as embedded components in a
variety of other applications and products. The custom engineering offering from
Cray is targeted to assist both traditional and nontraditional Cray customers in
addressing their most extreme technical issues.



CRAY CONSULTING
Cray customers address the most complex, high-performance computing problems.
Whether in support of issues of national security, safety, design simulation, or the
environment, Cray systems have been the favored computational solution for more than
25 years. To produce these state of the art systems, Cray has developed a broad spectrum
of core competencies in the design, implementation, and optimization of high-performance
computing solutions. Cray scientists and engineers are 100% focused on high-performance
problems and solutions - this is our business. Cray now offers this tremendous intellectual
capital to our customers to address your needs.

SUPERCOMPUTING ON DEMAND

Several generations of Cray products are available to support your high-performance
computing needs. "On demand" means these resources can be scheduled for use
whenever and wherever you need them. Whether it's providing compute services to
cover a peak in operational demand, support for application development or code
optimization, or an ASP-based environment, Cray will work with you to make
computational resources available to meet your specific high-performance computing
needs.

CRAY TRAINING

Cray products are designed to fit the highest performance compute needs of our
customers. Our goal is to ensure that our customers make the most of their systems.
Our training options are designed to enable Cray customers to see a quick return on
their compute investment. Classes are available on a wide variety of topics and
platforms, such as system administration, programming and optimization, and
various quick-start packages.

SITE ENGINEERING

Cray has been installing, relocating, and optimizing computing environments for over
25 years. Managing on-site system power and cooling, and interior climate conditions
requires the skills of highly trained personnel to ensure optimal system support and
performance. Site Engineering at Cray takes the requirements and dimensions of a
customer's specific computing environment and translates them into comprehensive
work plans and complete site engineering solutions.




Software for Cray Systems

Powerful hardware systems alone cannot meet the requirements of the most
demanding scientific and engineering organizations. Equally powerful, robust
software is needed to turn supercomputers into indispensable productivity tools for
the sophisticated government, commercial, and academic user communities. In these
demanding environments, where multimillion-dollar projects are at stake, reliability,
resource management, single job performance, complex multijob throughput, and
high-bandwidth data management are critical.

  •  UNICOS®
     The undisputed leader among high-end supercomputer operating systems
  •  UNICOS/mk™
     The UNICOS/mk operating system fully supports the Cray T3E system's
     globally scalable architecture
  •  CF90® Programming Environment
     The CF90 Programming Environment consists of an optimizing Fortran
     compiler, libraries, and tools
  •  Cray C++ Programming Environment
     C++ and C are the computer languages used today for many high-
     performance applications
  •  Message-Passing Toolkit (MPT)
     Provides optimized versions of industry-standard message-passing libraries
     and software (a generic message-passing sketch follows this list)
  •  Network Queuing Environment (NQE)
     Workload management environment that provides batch scheduling and
     interactive load balancing
  •  Distributed Computing Environment (DCE) Distributed File Service (DFS)
     An industry-standard, vendor-neutral set of tools and services providing
     distributed computing capability. DFS is a distributed DCE application
     providing an integrated file system with a unified name space, secure access,
     and file protection.
  •  Data Migration Facility (DMF)
     A low-overhead hierarchical storage management (HSM) solution
  •  Cray/REELlibrarian
     A volume management system that controls libraries of tape volumes
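
As a generic illustration of the message-passing model that MPT provides (this sketch
uses the open-source mpi4py binding to the industry-standard MPI interface; it is not
Cray-specific code):

    # Minimal MPI example: rank 0 sends a message, rank 1 receives it.
    # Run with, e.g.:  mpiexec -n 2 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.send({"msg": "hello from rank 0"}, dest=1, tag=11)
    elif rank == 1:
        data = comm.recv(source=0, tag=11)
        print("rank 1 received:", data)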




Cray Systems at Work

Cray systems provide powerful high performance solutions for the world's most
complex computational problems. The sustained performance obtained from Cray
supercomputers is used by researchers and computer scientists spanning such varied
disciplines as automotive manufacturing, geological sciences, climate prediction,
pharmaceutical development, and national security.

Cray supercomputers are used worldwide in research, academia, industry, and
government.


The Road to La-La Land - Pittsburgh Supercomputing Center researcher Pei Tang
uses the Cray T3E to probe the mysteries of anesthesia.

Biomedical Modeling at the National Cancer Institute - Researchers from around the
world use NCI's Cray SV1 system to solve some of the most difficult problems in
computational biology -- studying protein structure and function at the most detailed
levels.

Clean Power - George Richards, leader of the National Energy Technology
Laboratory's combustion dynamics team, takes on the challenge of converting fuel to
energy without creating pollutants by using simulations on PSC's Cray T3E.

A Thumb-Lock on AIDS - PSC's Marcela Madrid simulates an HIV enzyme on the
Cray T3E to help develop drugs that shut down HIV replication.




SUPER COMPUTERS

There are two main kinds of supercomputers: vector machines and parallel machines.
Both kinds work FAST, but in different ways.

Let's say you have 100 math problems. If you were a vector computer, you would sit
down and do all the problems as fast as you could.

To work like a parallel computer, you would get some friends and share the work.
With 10 of you, you would each do 10 problems. If you got 20 people, you'd only have
to do 5 problems each.

No matter how good you are at math, it would take you longer to do all 100 problems
than to have 20 people do them together.
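
To make the analogy concrete, here is a minimal sketch in Python (our own toy example;
the "problem" is just squaring a number, and the 20 worker processes stand in for the
20 friends):

    # Split 100 toy "math problems" across 20 worker processes.
    # Illustrative only -- this is the analogy above, not how a real
    # supercomputer schedules work.
    from multiprocessing import Pool

    def solve(problem):
        return problem * problem   # stand-in for one math problem

    if __name__ == "__main__":
        problems = list(range(100))
        with Pool(processes=20) as pool:          # 20 "friends"
            answers = pool.map(solve, problems)   # roughly 5 problems each
        print(answers[:5])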




CRAY T3E

SDSC's newest supercomputer is the CRAY T3E. The T3E is a parallel supercomputer
and has 256 processors to work on problems. (Let's not worry about what "T3E"
stands for.)

If you get all the T3E processors going full speed, it can do 153.4 billion -- that's
153,400,000,000 -- math calculations every second. But researchers usually only use
some of the T3E's processors at once. That way, many researchers can run their
programs at the same time.
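
A quick back-of-the-envelope division (our own arithmetic, not an SDSC figure) shows
what that peak rate implies per processor:

    # 153.4 billion calculations per second spread over 256 processors.
    total_per_second = 153.4e9
    processors = 256
    print(total_per_second / processors)   # about 6.0e8, i.e. ~600 million each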




CRAY C90: Vector Machine

The CRAY C90 is the busiest of SDSC's supercomputers and cost $26 million. A
problem that takes a home computer 8 hours to solve, the CRAY C90 can do in
0.002 seconds. And some scientists have problems that take the CRAY C90 a couple
DAYS to do.

The CRAY C90 is a vector machine with eight processors -- eight vector machines in
one. With all eight processors, the CRAY C90 can do 7.8 gigaFLOPS. (In people-power
you'd need one and a half times the Earth's population.) A pretty slick Pentium PC
might reach about 0.03 gigaFLOPS, depending on who you ask.
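
To check those comparisons (the one-calculation-per-person-per-second rate and the
world population figure below are our own illustrative assumptions):

    c90_flops = 7.8e9        # CRAY C90, all eight processors
    pentium_flops = 0.03e9   # a "pretty slick" Pentium PC
    population = 6.0e9       # rough late-1990s world population (assumption)

    print(c90_flops / pentium_flops)   # ~260 Pentium PCs to match one C90
    print(c90_flops / population)      # ~1.3 with 6 billion people; closer to 1.5
                                       # with a mid-1990s population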



Applications of Supercomputers
Decision Agent Offers Secure Messaging At Pentagon

Woodland Hills - May 07, 2003
Northrop Grumman Corporation went live at the
Pentagon April 1 with the secure organizational
messaging services of the "Decision Agent"
following several months of installation and testing
by the company's California Microwave Systems
business unit.

Installed and managed by the Pentagon
Telecommunications Center (PTC), which supports over
30,000 users and 1,000 organizations, "Decision Agent"
is designed to provide an enterprise-wide information
profiling and management portal for the Defense Messaging System (DMS) Version 3.0.

"The 'Decision Agent' has been instrumental in allowing us to provide protected message
traffic for our customers," said Marvin Owens, director of the PTC. "It has proved to be a
tremendous tool in helping us achieve our mission of implementing DMS in the Pentagon
as well as for our other customers worldwide."

The new system at the PTC supports DMS communications for the Office of the
Secretary of Defense, the Army Operations Center and the military services headquarters'
staffs. In addition, it provides a communication gateway that permits interoperability
between Department of Defense organizations that use DMS and allied NATO and non-
Defense Department organizations that use legacy systems.

"By enabling DMS messaging without FORTEZZA cards and readers, and by
eliminating the need to install add-ons to end-user devices, 'Decision Agent' will allow
the Pentagon and other government agencies to reduce the costs and manpower
requirements traditionally associated with DMS implementations," said John Haluski,
vice president of California Microwave Systems' Information Systems unit.

The "Decision Agent" consists of a suite of integrated software applications that run on a
Windows 2000 server. These include Northrop Grumman's LMDS MailRoom, the most
powerful profiling engine currently available, and Engenium Corporation's Semetric, a
knowledge-based retrospective search engine.

The system enhances DMS functionality by automating the processes of identifying,
filtering and distributing military organization messages to specified addresses and
recipients, based on interest profiles and security clearances.

"Decision Agent" provides other enhancements as well, including virus checks, security
checks for possible mislabeling of messages and attachments, Web-based message
preparation and Boolean logic keyword and concept searches.


Frozen Light May Make Computer Tick Later This
Century
Boston - May 22, 2003
NASA-funded research at Harvard University,
Cambridge, Mass., that literally stops light in its
tracks, may someday lead to breakneck-speed
computers that shelter enormous amounts of data
from hackers.

The research, conducted by a team led by Dr. Lene
Hau, a Harvard physics professor, is one of 12
research projects featured in a special edition of
Scientific American entitled "The Edge of Physics,"
available through May 31.

In their laboratory, Hau and her colleagues have been able to slow a pulse of
light, and even stop it, for several thousandths of a second. They've also created
a roadblock for light, where they can shorten a light pulse by factors of a billion.

"This could open up a whole new way to use light, doing things we could only
imagine before," Hau said. "Until now, many technologies have been limited by
the speed at which light travels."

The speed of light is approximately 186,000 miles per second (670 million miles
per hour). Some substances, like water and diamonds, can slow light to a limited
extent.

More drastic techniques are needed to dramatically reduce the speed of light.
Hau's team accomplished "light magic" by laser-cooling a cigar-shaped cloud of
sodium atoms to one-billionth of a degree above absolute zero, the point where
scientists believe no further cooling can occur.

Using a powerful electromagnet, the researchers suspended the cloud in an
ultra-high vacuum chamber, until it formed a frigid, swamp-like goop of atoms.


When they shot a light pulse into the cloud, it bogged down, slowed dramatically,
eventually stopped, and turned off. The scientists later revived the light pulse and
restored its normal speed by shooting an additional laser beam into the cloud.

Hau's cold-atom research began in the mid-1990s, when she put ultra-cold atoms
in such cramped quarters they formed a type of matter called a Bose-Einstein
condensate. In this state, atoms behave oddly, and traditional laws of physics do
not apply. Instead of bouncing off each other like bumper cars, the atoms join
together and function as one entity.

The first slow-light breakthrough for Hau and her colleagues came in March
1998. Later that summer, they successfully slowed a light beam to 38 miles per
hour, the speed of suburban traffic. That's two million times slower than the
speed of light in free space. By tinkering with the system, Hau and her team
made light stop completely in the summer of 2000.

These breakthroughs may eventually be used in advanced optical-
communication applications. "Light can carry enormous amounts of information
through changes in its frequency, phase, intensity or other properties," Hau said.

When the light pulse stops, its information is suspended and stored, just as
information is stored in the memory of a computer. Light-carrying quantum bits
could carry significantly more information than current computer bits.

Quantum computers could also be more secure by encrypting information in
elaborate codes that could be broken only by using a laser and complex
decoding formulas.

Hau's team is also using slow light as a completely new probe of the very odd
properties of Bose-Einstein condensates. For example, with the light roadblock
the team created, they can study waves and dramatic rotating-vortex patterns in
the condensates.




Navy to use IBM supercomputer for storm
forecasting
By Reuters
August 2, 2000, 5:40 PM PT
http://news.com.com/2100-1001-244009.html?tag=prntfr

NEW YORK--IBM said today the U.S. Department of Defense paid $18 million for one of the
world's fastest supercomputers to help Navy vessels avoid maritime disasters like the one
portrayed in the film "The Perfect Storm."

Code-named "Blue Wave," the new IBM RS/6000 SP will rank as the most powerful
supercomputer at the Defense Department and the fourth-fastest in operation anywhere in the
world. It will enable the U.S. Navy to create the most detailed model of the world's oceans ever
constructed.

The research performed by "Blue Wave" is expected to improve maritime storm forecasting as
well as search and rescue efforts for naval vessels.

In June, Armonk, N.Y.-based IBM unveiled the fastest computer in the world, able to process
more in a second than one person with a calculator could do in 10 million years.

That supercomputer was designed for the U.S. government to simulate nuclear weapons tests. It
was made for the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). IBM
sold the system, which occupies floor space equivalent to two basketball courts and weighs as
much as 17 elephants, to the DOE for $110 million.

The Navy computer, which can process two trillion calculations per second, will model ocean
depth, temperature and wave heights to new levels of accuracy and detail, boosting the ability of
meteorologists to predict storms at sea.

"The Perfect Storm," a best-selling book by Sebastian Junger recently made into a film, told the
tale of the Andrea Gail, a fishing vessel at sea off the coast of Newfoundland during a deadly
storm that killed the entire crew.

In 1999, IBM became the leader in the traditional supercomputer market. IBM now has about 30
percent of that market, in which some 250 computers that range in price from $2 million to $100
million or more are sold every year, for use in weather predictions, research and encryption.




Sapphire Slams A Worm Into .Earth
Shatters All Previous Infection Rates
San Diego - Feb 04, 2003
A team of network security experts in California has
determined that the computer worm that attacked
and hobbled the global Internet 11 days ago was
the fastest computer worm ever recorded.

In a technical paper released Tuesday, the experts
report that the speed and nature of the Sapphire
worm (also called Slammer) represent significant
and worrisome milestones in the evolution of
computer worms.

Computer scientists at the University of California, San Diego and its San Diego
Supercomputer Center (SDSC), Eureka-based Silicon Defense, the University of
California, Berkeley, and the nonprofit International Computer Science Institute in
Berkeley, found that the Sapphire worm doubled its numbers every 8.5 seconds
during the explosive first minute of its attack.

Within 10 minutes of debuting at 5:30 a.m. (UTC) Jan. 25 (9:30 p.m. PST, Jan.
24) the worm was observed to have infected more than 75,000 vulnerable hosts.
Thousands of other hosts may also have been infected worldwide.
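
As a rough illustration of what an 8.5-second doubling time implies (an idealized model
of our own; it ignores bandwidth limits and the worm's imperfect scanning, which slowed
the real spread):

    import math

    doubling_time = 8.5        # seconds, from the measurements above
    infected_hosts = 75_000    # observed within ten minutes

    doublings = math.log2(infected_hosts)      # ~16.2 doublings from one host
    print(doublings * doubling_time)           # ~138 seconds under ideal doubling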


The infected hosts spewed billions of copies of the worm into cyberspace,
significantly slowing Internet traffic, and interfering with many business services
that rely on the Internet.

"The Sapphire/Slammer worm represents a major new threat in computer worm
technology, demonstrating that lightning-fast computer worms are not just a
theoretical threat, but a reality," said Stuart Staniford, president and founder of
Silicon Defense. "Although this particular computer worm did not carry a
malicious payload, it did a lot of harm by spreading so aggressively and blocking
networks."

The Sapphire worm's software instructions, at 376 bytes, are about the length of
the text in this paragraph, or only one-tenth the size of the Code Red worm,
which spread through the Internet in July 2001.

Sapphire's tiny size enabled it to reproduce rapidly and also fit into a type of
network "packet" that was sent one-way to potential victims, an aggressive
approach designed to infect all vulnerable machines rapidly and saturate the
Internet's bandwidth, the experts said.

In comparison, the Code Red worm spread much more slowly not only because it
took longer to replicate, but also because infected machines sent a different type
of message to potential victims that required them to wait for responses before
subsequently attacking other vulnerable machines.

The Code Red worm ended up infecting 359,000 hosts, in contrast to the
approximately 75,000 machines that Sapphire hit. However, Code Red took
about 12 hours to do most of its dirty work, a snail's pace compared with the
speedy Sapphire.

The Code Red worm sent six copies of itself from each infected machine every
second, in effect "scanning" the Internet randomly for vulnerable machines. In
contrast, the speed with which the diminutive Sapphire worm copied itself and
scanned the Internet for additional vulnerable hosts was limited only by the
capacity of individual network connections.

"For example, the Sapphire worm infecting a computer with a one-megabit-per-
second connection is capable of sending out 300 copies of itself each second,"
said Staniford. A single computer with a 100-megabit-per-second connection,
found at many universities and large corporations, would allow the worm to scan
30,000 machines per second.
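
Those rates follow from simple arithmetic on the packet size (the header overhead below
is our own rough assumption of about 28 bytes of UDP/IP framing on top of the 376-byte
payload):

    packet_bits = (376 + 28) * 8          # ~3,232 bits per copy of the worm

    for link_bps in (1e6, 100e6):         # 1 Mbit/s and 100 Mbit/s connections
        print(link_bps / packet_bits)     # ~310 and ~31,000 copies per second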

"The novel feature of this worm, compared to all the other worms we've studied,
is its incredible speed: it flooded the Internet with copies of itself so aggressively
that it basically clogged the available bandwidth and interfered with its own
growth," said David Moore, an Internet researcher at SDSC's

Cooperative Association for Internet Data Analysis (CAIDA) and a Ph.D.
candidate at UCSD under the direction of Stefan Savage, an assistant professor
in the Department of Computer Science and Engineering.

"Although our colleagues at Silicon Defense and UC Berkeley had predicted the
possibility of such high-speed worms on theoretical grounds, Sapphire is the first
such incredibly fast worm to be released by computer hackers into the wild," said
Moore.

Sapphire exploited a known vulnerability in Microsoft SQL servers used for
database management, and MSDE 2000, a mini version of SQL for desktop use.
Although Microsoft had made a patch available, many machines did not have the
patch installed when Sapphire struck. Fortunately, even the successfully attacked
machines were only temporarily out of service.


"Sapphire's greatest harm was caused by collateral damage—a denial of
legitimate service by taking database servers out of operation and overloading
networks," said Colleen Shannon, a CAIDA researcher.

"At Sapphire's peak, it was scanning 55 million hosts per second, causing a
computer version of freeway gridlock when all the available lanes are bumper-to-
bumper." Many operators of infected computers shut down their machines,
disconnected them from the Internet, installed the Microsoft patch, and turned
them back on with few, if any, ill effects.

The team in California investigating the attack relied on data gathered by an
array of Internet "telescopes" strategically placed at network junctions around the
globe. These devices sampled billions of information-containing "packets"
analogous to the way telescopes gather photons.

With the Internet telescopes, the team found that nearly 43 percent of the
machines that became infected are located in the United States, almost 12
percent are in South Korea, and more than 6 percent are in China.

Despite the worm's success in wreaking temporary havoc, the technical report
analyzing Sapphire states that the worm's designers made several "mistakes"
that significantly reduced the worm's distribution capability.

For example, the worm combined high-speed replication with a commonly used
random number generator to send messages to every vulnerable server
connected to the Internet. This so-called scanning behavior is much like a burglar
randomly rattling doorknobs, looking for one that isn't locked.

However, the authors made several mistakes in adapting the random number
generator. Had there not been enough correct instructions to compensate for the
mistakes, the errors would have prevented Sapphire from reaching large portions
of the Internet.


The analysis of the worm revealed no intent to harm its infected hosts. "If the
authors of Sapphire had desired, they could have made a slightly larger version
that could have erased the hard drives of infected machines," said Nicholas
Weaver, a researcher in the Computer Science Department at UC Berkeley.
"Thankfully, that didn't occur."




University of Hawaii will use new IBM supercomputer to investigate
Earth's meteorological mysteries

"Blue Hawaii" system marks innovative partnership between university,
Maui High Performance Computing Center and IBM

Honolulu, HI, October 25, 2000—The University of Hawaii (UH) today introduced
an IBM supercomputer code-named "Blue Hawaii" that will explore the inner
workings of active hurricanes, helping university researchers develop a greater
understanding of the forces driving these destructive storms. The IBM SP
system—the first supercomputer ever installed at the University of Hawaii—is the
result of an initiative by the Maui High Performance Computing Center (MHPCC)
in collaboration with IBM. This initiative has culminated in an innovative
partnership between the university, MHPCC and IBM.
"We're delighted to have IBM and MHPCC as partners," said university president
Kenneth P. Mortimer. "This new supercomputer adds immeasurably to the
technological capacity of our engineering and science programs and will propel
us to a leadership position in weather research and prediction."


Donated by IBM to the university, Blue Hawaii is the technological heir to IBM's
Deep Blue supercomputer that defeated chess champion Garry Kasparov in
1997. Blue Hawaii will power a wide spectrum of University of Hawaii research
efforts, such as:

  •  Hurricane research. Wind velocity data acquired from weather balloons and aircraft-borne labs will be
     analyzed to develop a greater understanding of the forces that drive hurricanes. This will enhance
     meteorologists' ability to predict the storms.
  •  Climate modeling. Scientists will investigate the interaction between the oceans and the atmosphere
     believed to cause long-term climate variations. The research is expected to lead to a more accurate
     method for predicting changes in the world's climate, which will benefit numerous industrial sectors,
     including agriculture, manufacturing, and transportation.
  •  Weather forecasting. Meteorological data will be processed through state-of-the-art computer models to
     produce weather forecasts for each of Hawaii's counties.


In addition, scientists will rely on the supercomputer for a number of vital
research projects in the areas of physics and chemistry. Educational programs in
the university's Department of Information and Computer Sciences will also be
developed to train graduate students in computational science, which involves
using high-performance computers for simulation in scientific research projects.
"This supercomputer strengthens our reputation as a location with a burgeoning
high technology industry," Hawaii Governor Benjamin Cayetano said. "It is an
opportunity for our students and educators to work with a powerful research tool.
This donation by IBM boosts this Administration's own support of the university's
technology-related programs."
The synergy between UH, MHPCC, and IBM will provide the resources needed
to establish UH as a leader in research computing. MHPCC, an expert in
production-level computing on the SP supercomputer, is acting as an advisor to
UH on a broad range of technical topics and will install and prepare the
supercomputer for UH. In addition, MHPCC and IBM will assist UH researchers
in using the new research tool.
Located in the Department of Information and Computer Sciences at the
university's Pacific Ocean Science and Technology Building, Blue Hawaii is
powered by 32 IBM POWER2 microprocessors, 16 gigabytes of memory and 493
gigabytes of IBM disk storage. The machine substantially augments the
supercomputing power that's based in the state of Hawaii, already home to
MHPCC, one of the world's most prestigious supercomputer facilities.
Together, Blue Hawaii and MHPCC form a powerful technology foundation for
the burgeoning scientific research initiatives located in Hawaii. In the past five
years, government research grants awarded to Hawaii scientists have increased
by 34 percent to $103 million, according to the UH office of research services.
"Scientists at the University of Hawaii are conducting exciting research across a
number of important disciplines," said IBM vice president Peter Ungaro. "IBM is
proud to work with UH and MHPCC in providing the university with the industry's
most popular supercomputer, which will help researchers achieve their important
goals more quickly and with better results."
Most Popular Supercomputer
The Blue Hawaii system joins a long roster of IBM SP supercomputers around
the world. According to the TOP500 Supercomputer List*, IBM SPs now account
for 144 of the world's 500 most powerful high performance computers—more
than any other machine. The list is published twice a year by supercomputing
experts Jack Dongarra from the University of Tennessee and Erich Strohmaier
and Hans Meuer of the University of Mannheim (Germany).
IBM SP supercomputers are used to solve the most complex scientific and
business problems. With the IBM SP, scientists can model the effects of the
forces exerted by galaxies; corporations can perform complex calculations on
massive amounts of data in order to support business decisions; petroleum
exploration companies can rapidly process seismic data to determine where they
should drill; and company executives seeking to meet Internet demand can
enable complex Web-based transactions.
About the University of Hawaii
The University of Hawaii is the state's 10-campus system of public higher
education. The 17,000-student Manoa campus is a Carnegie I research
university of international standing that offers an extensive array of
undergraduate, graduate and professional degrees. The university's research
program last year drew $179 million in extramural funding and is widely
recognized for its strengths in tropical medicine, evolutionary biology, astronomy,
oceanography, volcanology, geology and geophysics, tropical agriculture,
electrical engineering and Asian and Pacific studies. Visit UH at www.hawaii.edu.
About MHPCC
MHPCC is ranked among the Top 100 most powerful supercomputer facilities in
the world. MHPCC provides DoD, government, private industry, and academic
users with access to leading edge, high performance technology.
MHPCC is a center of the University of New Mexico established through a
cooperative agreement with the U.S. Air Force Research Laboratory's Directed
Energy Directorate. MHPCC is a Distributed Center of the DoD High
Performance Computing Modernization Program (HPCMP), a SuperNode of the
National Science Foundation's National Computational Science Alliance, and a
member of Hawaii's growing science and technology community.




Technology Used in Supercomputers

Pipelining
The most straightforward way to get more performance out of a processing unit is
to speed up the clock (setting aside, for the moment, fully asynchronous designs,
which one doesn't find in this space for a number of reasons). Some very early
computers even had a knob to continuously adjust the clock rate to match the
program being run.
But there are, of course, physical limitations on the rate at which operations can
be performed. The act of fetching, decoding, and executing instructions is rather
complex, even for a deliberately simplified instruction set, and there is a lot of
sequentiality. There will be some minimum number of sequential gates, and thus,
for a given gate delay, a minimum execution time, T(emin). By saving
intermediate results of substages of execution in latches, and clocking those
latches as well as the CPU inputs/outputs, execution of multiple instructions can
be overlapped. Total time for the execution of a single instruction is no less, and
in fact will tend to be greater, than T(emin). But the rate of instruction execution,
or issue rate, can be increased by a factor proportional to the number of pipe
stages.
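
A toy timing model makes the issue-rate argument concrete (the stage count, stage
delay, and instruction count below are arbitrary illustrative values, not figures from
any particular machine):

    # Compare unpipelined execution with a k-stage pipeline.
    stages = 5                 # k pipe stages; T(emin) ~ stages * stage_delay
    stage_delay = 1.0          # time per stage
    n_instructions = 1000

    unpipelined = n_instructions * stages * stage_delay       # one instruction at a time
    pipelined = (stages + n_instructions - 1) * stage_delay   # fill the pipe, then ~1/cycle

    print(unpipelined, pipelined, unpipelined / pipelined)    # speedup approaches `stages`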


The technique became practical in the mid-1960s. The Manchester Atlas and the
IBM Stretch project were two of the first functioning pipelined processors. From
the IBM 360/91 onward, all state-of-the-art scientific computers have been
pipelined.

Multiple Pipelines

Not every instruction requires all of the resources of a CPU. In "classical"
computers, instructions tend to fall into categories: those which perform memory
operations, those which perform integer computations, those which operate on
floating-point values, etc. It is thus not too difficult for the processor pipeline to
be further broken down "horizontally" into pipelined functional units,
executing independently of one another. Fetch and decode are common to the
execution of all instructions, however, and quickly become a bottleneck.

Limits to Pipelining

Once the operation of a CPU is pipelined, it is fairly easy for the clock rate of the
CPU to vastly exceed the cycle rate of memory, starving the decode logic of
instructions. Advanced main memory designs can ameliorate the problem, but
there are always technological limits. One simple mechanism to leverage
instruction bandwidth across a larger number of pipelines is SIMD (Single
Instruction/Multiple Data) processing, wherein the same operation is performed
across ordered collections of data. Vector processing is the SIMD paradigm that
has seen the most visible success in high-performance computing, but the
scalability of the model has also made it appealing for massively parallel designs.
Another way to ameliorate the memory latency effects on instruction issue is to
stage instructions in a temporary store closer to the processor's decode logic.
Instruction buffers are one such structure, filled from instruction memory in
advance of their being needed. An instruction cache is a larger and more
persistent store, capable of holding a significant portion of a program across
multiple iterations. With effective instruction cache technology, instruction fetch
bandwidth has become much less of a limiting factor in CPU performance. This
has pushed the bottleneck forward into the CPU logic common to all instructions:
decode and issue. Superscalar design and VLIW architectures are the principal
techniques in use today (1998) to attack that problem.


Scalable Vector Parallel Computers
The Scalable Vector Parallel Computer Architecture is an architecture in which
vector processing is combined with a scalable system design and software. The
major components of this architecture are a vector processor as the single
processing node, a scalable high performance interconnection network, including
the scalability of I/O, and system software which supports parallel processing at a
level beyond loosely coupled network computing. The emergence of a new
Japanese computer architecture comes as a surprise to many who are used to
thinking that Japanese companies never undertake a radical departure from
existing architectures. Nonetheless scalable vector parallel computers are an
original Japanese development, which keeps the advantages of a powerful single
processor but removes the restrictions of shared memory vector multiprocessing.
The basic idea of this new architecture is to implement an existing vector
processor in CMOS technology and build a scalable parallel computer out of
these powerful single processors. The development is at the same time
conservative and innovative in the sense that two successful and meanwhile
proven supercomputer design principles, namely vector processing and scalable
parallel processing, are combined to give a new computer architecture.
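
As a loose, vendor-neutral sketch of the idea (our own toy example: worker processes
stand in for vector nodes, and a NumPy elementwise operation stands in for the vector
unit on each node):

    from multiprocessing import Pool
    import numpy as np

    def node_work(chunk):
        # "Vector" part: one elementwise operation over the node's whole chunk.
        return np.sqrt(chunk * chunk + 1.0)

    if __name__ == "__main__":
        data = np.arange(1_000_000, dtype=np.float64)
        chunks = np.array_split(data, 8)           # "scalable parallel" part: 8 nodes
        with Pool(processes=8) as pool:
            pieces = pool.map(node_work, chunks)
        result = np.concatenate(pieces)
        print(result[:3])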



Vector Processing
Vector processing is intimately associated with the concept of a "supercomputer".
As with most architectural techniques for achieving high performance, it exploits
regularities in the structure of computation, in this case, the fact that many codes
contain loops that range over linear arrays of data performing symmetric
operations.
The origins of vector architecture lay in trying to address the problem of
instruction bandwidth. By the end of the 1960's, it was possible to build multiple
pipelined functional units, but the fetch and decode of instructions from memory
was too slow to permit them to be fully exploited. Applying a single instruction to
multiple data elements (SIMD) is one simple and logical way to leverage limited
instruction bandwidth.
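
The canonical vectorizable pattern looks like the loop below; on a vector machine the
loop body becomes a small number of vector instructions (NumPy is used here only to
illustrate the one-operation-over-many-elements idea, not real vector hardware):

    import numpy as np

    a = np.arange(1024, dtype=np.float64)
    b = np.arange(1024, dtype=np.float64)

    # Scalar view:  for i in range(1024): c[i] = 2.0 * a[i] + b[i]
    # Vector view:  one fused operation applied across the whole arrays.
    c = 2.0 * a + b
    print(c[:4])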




Latest Technology
 Andromeda™
 The latest technology from Super Computer, Inc., code-named "Andromeda™",
 provides game developers with a rich set of tools designed to make game implementation
 easier and reduce time to market. Other features are designed to make hosting a
 game server and administering it easy. In today's competitive gaming market, it is
 important not only to get your game to the stores quickly, but to keep it selling. All of
 Andromeda™'s features help you do just that.
 Master Browser Service (MBS)

         "Push" technology allows your servers to be accurately listed in a
         single repository accessable from in-game browsers and third party
         aplpications. Andromeda™'s MBS uses a propriatry protocol that
         allows for both forwards and backwards compatability so that
         applications that retrieve information from the master browser need to
         go through updates to continue to show your servers. Because
         Andromeda™'s MBS is located in the fastest data center in the world
         you know that players can retrieve server lists quickly

 Server Authentication Service (SAS)

         Player and server authentication services provided by Andromeda™'s
         SAS are as flexible as they are robust. You can store everything from
         basic authentication information to full player settings allowing players
         to use their preferred configuration--even if they are on a friend's
         computer.

         SAS also works with Andromeda™'s Pay-Per-Play Service and
         Games-On-Demand Service to control whether a player may join a
         server.

 Statistics Tracking Service (STS)

         Andromeda™'s STS service uses streaming to collect player and
         server statistics in real-time. Information is fed into the STS database
         and processed, resulting in accurate player and server statistics.

 Dynamic Content Delivery System (DCDS)

         In-game dialogs change. Layouts change. Why patch?
         Andromeda™'s DCDS system can deliver in-game content on the fly.
         Caching technology allows Andromeda™ to update only what has
         changed--including game resources. Deliver dynamic content, such as
         news, forums, and server rental interfaces without having to patch.

         Combined with Andromeda™'s Server Authentication Service and Pay-
         Per-Play Service, DCDS can also deliver an interface to players to
         allow them to add more time to their account without leaving the
         game.

 Remote Console Interface

         The remote console interface provides all the tools to allow server
         administrators to remotely control the server using a standard remote
         console language.

 CVAR Management Interface

         Another server administration tool, the CVAR Management Interface
         manages server settings and, combined with the Remote Console
         Interface, can restrict the ability for players to change certain CVARs
         using the remote console.

 Netcode Interface

         Solid netcode is the backbone of any multiplayer game. High-
         performance, reliable interfaces allow game developers to reduce
         their time to market and concentrate on developing their game.

 Pay-Per-Play Service

         Andromeda™'s Pay-Per-Play service allows game publishers to
         charge players by the minute, hour, day, or month to play online. This
         technology is both secure and reliable.

 Games-On-Demand Service

         This service allows a game server to be created on demand for whatever
         duration the customer desires. Whether they want the server for a few
         hours to play a match, or the same time every week for practices, this
         solution is an inexpensive way to obtain a high-performance server
         that meets tournament regulations.

 Other Features
 Andromeda™'s tools integrate together. For example, your in-game browser can pull
 up a player summary and their player stats all through the MBS without having to
 connect to the SAS and STS. High availability clustering technology ensures that
 Andromeda™ is available 24/7/365.




         ClanBuilder™ features total clan resource management including a
         roster, calendar, news board, gallery, links, remote game server
         console and robust security.

         The ClanBuilder™ roster is fully customizable, allowing guild
         masters to create their own fields, squads, ranks, and awards. The
         roster supports ICQ allowing visitors to add guild members to their
         ICQ contact list or send a message directly from the roster at the click
         of a button.

         The ClanBuilder™ calendar allows events to be created for the guild
         as a whole or a specific division. Recurrence patterns allow guild
         masters to easily create recurring events. ClanBuilder™ will even
         adjust the date and time of an event based on the time zone a visitor
         picks.

         The ClanBuilder™ news board is a powerful tool that allows news
         articles to be posted by any visitor, subject to approval by the guild
         master before they appear on the news board. Like the rest of the
         ClanBuilder™ tools, division-specific news can be posted.

         The ClanBuilder™ links and gallery allow screen-shots and links to be
         easily added and categorized. Images are stored automatically on the
         ClanBuilder™ server without the need for FTP or your own web
         space.

         The ClanBuilder™ game servers tool not only provides a place to list
         your clan game servers, but also allows remote console to be used with
         supported games.

         The ClanBuilder™ security model is simple yet powerful, allowing
         guild masters to delegate administrative tasks to specific members and
         customize which members can make changes to any tool.

         ClanBuilder™ offers a rich set of tools allowing full customization
         down to the individual colors on the screen. Easy to use, yet powerful,
         no clan can afford to be without this time-saving tool.

         Get into the game fast, with ClanBuilder™.




Architectural Themes
Looking over the numerous independent high-performance computer designs in
the 1970's-90's, one can discern several themes or schools of thought which
span a number of architectures. The following pages will try to put those in some
kind of perspective.

Pipelining

Dealing with Memory Latency

Vector Processing

Parallel Processing

Massive Parallelism

Commoditization




SUPER COMPUTER SELECTS WorldCom FOR HOSTING SERVICES

WorldCom, a leading global business data and Internet communications
provider, announced that Super Computer Inc, a revolutionary online computer
gaming company, has selected WorldCom Internet Colocation Services to power
its cutting-edge services which support millions of online video game players.
Super Computer Inc (SCI) was created for the 13 million online video game
players to improve the quality of their experience while addressing the high cost
of the game server rental industry. To establish a leadership position in the
rapidly growing market, SCI developed a supercomputer capable of delivering
game server access with unsurpassed speeds, reliability and value.
By choosing WorldCom, SCI benefits from the cost and operating efficiencies of
colocation outsourcing, while leveraging the performance, security and reliability
of WorldCom's world-class Internet data centers and direct, scalable, high-speed
connections to the facilities-based WorldCom global IP network. Financial terms
of the agreement were not disclosed.
"WorldCom has long been a tier one Internet provider, and unquestionably has
the most superior and expansive IP network on the planet," said Jesper Jensen,
president of Super Computer. "The two biggest factors for success in online
gaming are network performance and equipment performance. We require
access to the best network in order for us to be successful."
SCI's colocation solution ensures that its gaming servers will be up and running
24/7, even at peak usage times. With its industry-leading six-point Internet
colocation performance guarantees, WorldCom assures 100 percent network
availability, 100 percent power availability and 99.5 percent packet delivery, as well as
other key network performance metrics. In addition, Super Computer chose
WorldCom because its network could easily scale and provide the necessary
bandwidth - beyond speeds of 1.6 Gbps - as it grows.
"The Super Computer agreement highlights the fact that WorldCom is a viable
provider today and in the future," said Rebecca Carr, WorldCom director of
Global Hosting Services. "By outsourcing to WorldCom, SCI can leverage our
Internet expertise and our global Internet footprint to deliver the world-class
speed and reliability of our network to its customers."
Currently colocated in WorldCom's state-of-the-art Atlanta data center, the
company plans to expand into additional centers around the globe over the next
nine months.
With the support of its managed Web and application hosting affiliate Digex,
WorldCom offers the full array of Web hosting solutions from colocation to
shared, dedicated and custom managed hosting. Each runs over WorldCom's
facilities-based global network, through WorldCom's state-of-the-art data centers
across the U.S., Canada, Europe and Asia Pacific. Web hosting services are part
of the full continuum of advanced data communications solutions WorldCom
provides to business customers around the world.
About Super Computer Inc
Super Computer Inc (SCI) is one of the world's fastest-growing game hosting
solutions. With the invention of the world's first supercomputer for FPS game
hosting, the Jupiter Cluster, and the Callisto mini-cluster for Broadband
Providers, SCI is working with the gaming industry to consolidate the game
hosting market. With the move to WorldCom's network, SCI now offers the
world's fastest gaming technology and connectivity. SCI introduced the concept
of games hosted on supercomputers to the consumers in June 2002.
About WorldCom Inc
WorldCom Inc is a pre-eminent global communications provider for the digital
generation, operating in more than 65 countries. With one of the most expansive,
wholly-owned IP networks in the world, WorldCom provides innovative data and
Internet services for businesses to communicate in today's market. In April 2002,
WorldCom launched The Neighborhood built by MCI -- the industry's first truly
any-distance, all-inclusive local and long-distance offering to consumers.