Addressing the Need for Advanced Scientific Computing in the DOE Office of Science

September 18, 2000
Document prepared by Thomas Jefferson National Accelerator Facility in support of advanced computing in the DOE Office of Science.
Contacts: Ian Bird (igb@jlab.org), Chip Watson (watson@jlab.org), R. Roy Whitney (whitney@jlab.org)

Executive Summary
High-end computing has made tremendous advances in recent years. The ASCI program in particular has effectively demonstrated the feasibility of massively parallel computing at the 10 Tera-op scale. Many of the science programs within the purview of the Office of Science could benefit from these advances given the availability of sufficient and appropriate computing resources; high-end computing could bring unprecedented advances in science and allow these programs to achieve the full potential of the technologies. The program outlined in this paper builds upon the ideas expressed in the computing plan "Scientific Discovery through Advanced Computing" produced by the Office of Science (1), in particular expanding upon the idea of application-specific computing centers ("Topical Centers" in that paper), which we view as an excellent mechanism both to provide high-end computing resources to many science programs and to get these technologies into the hands of the wider scientific community.

In outline, the proposed program would cost some $175 M per year and contains three elements:

1. One or two major computing facilities, such as, or similar to, the National Energy Research Scientific Computing Center (NERSC), funded at a level of $50-75 M per year above current levels.

2. 10-15 application-specific computing centers, each operating a massively parallel computer on the 1 Tera-op scale and targeted at a specific science problem. This part of the program would be funded at $75-100 M per year. The total cost of the major centers and the application-specific centers should be some $150 M, but the sharing between them should be flexible and regularly assessed to ensure the best investment.

3. Enabling technologies, including high-speed networking and the associated infrastructure necessary to manipulate and analyze Petabyte-scale datasets and to deliver the computing power into the hands of the scientific community. This element would be funded at $25 M per year in order to adequately invest in the networking and infrastructure essential to support computing at this scale.

The paper referenced above contains much insight and background information that we do not attempt to reproduce here. In terms of priorities there is complete consensus between the two approaches: both agree on the need for one or two large flagship centers as a top priority, and both agree on the need for development of enabling technologies, in particular high-speed networking. Here we focus instead on the case for building 10-15 application-specific high-end computing centers. We are convinced that computing facilities funded at 5-10% the scale of a large facility, targeted at a specific problem domain and its scientific user community, have a unique potential to produce unprecedented scientific advances in a very cost-efficient manner. In addition, there are enough scientific programs waiting now for such resources that the proposed number of these facilities is fully justified. In this paper we do not directly discuss the issues of software development for massively parallel systems, since they are fully discussed in (1), but we note that the application centers are an excellent environment for that development, since it is in the best interests of the associated user communities.


Introduction
The feasibility of massively parallel computing at the 10 Tera-op scale has been effectively demonstrated by the ASCI program. However, at the moment only a limited community is able to contribute effectively to, and benefit from, existing high-end computing facilities. There is a significant disparity between the supercomputing facilities available to the ASCI program and those available to civilian science. In addition to the needs within the defense community, the Office of Science in the Department of Energy has a wide range of civilian science problems that could make dramatic progress given a wider availability of Tera-scale computing facilities. These applications cover all aspects of the Office of Science programs, and several span program areas:

- High Energy and Nuclear Physics: encompassing accelerator design, understanding the Standard Model of particle physics, understanding the structure of nuclei, and extremely energetic nuclear processes;
- Biological and Environmental Research: including predicting the climate and understanding the processes that control a biological cell;
- Basic Energy Sciences: many aspects of materials science, predicting chemical reactivity and fluid dynamics for many applications, and modeling processes within the Earth's crust;
- Fusion Energy Sciences: many topics related to plasmas and material/plasma interactions, as well as modeling and design of heavy ion accelerators.

Further discussion of the importance of some of these programs and applications can be found in (1), which lists some 14 major scientific challenges identified by the SC programmatic offices as being addressable only through advances in computational modeling and simulation.

One of the major hurdles to be overcome in making effective use of large massively parallel computer systems is the software. It is very difficult to map a scientific problem onto the physical architecture of these machines without the appropriate software tools and methodologies. Much work remains to be done in this area, and it will benefit enormously from a wider availability of high-end computing facilities, particularly if those facilities can be made accessible to the wider scientific community, including national laboratories and universities, where collaboration between application scientists and computer scientists can be extremely fruitful, bringing benefits to both communities.

The program proposed here is in line with existing initiatives and ideas. It demonstrates how the deployment of application-specific high-end computer facilities into the wider community, in conjunction with continued investment in the major computing centers, would enable progress on many scientific programs and encourage the development of the software and techniques essential to using such facilities effectively. The widespread use of high-end computing facilities thus drives advances in computing techniques as well as in basic science. The modest cost would enable huge strides in many areas of basic science that presently have a large pent-up demand for computing. As an added benefit, the general body of expertise in supercomputing technologies and techniques would broaden to encompass DOE science centers as well as collaborating laboratories and universities. This wider pool of excellence enhances both the science and the defense-related computing programs, and the nation as a whole would gain.

The program outlined here is complementary to the NSF supercomputing initiatives, which are more focused on computer science research and a few high-end facilities. This program should build a synergy among the different contributors to, and beneficiaries of, these capabilities, creating mutual benefit for all and allowing cross-fertilization of ideas and expertise.


An Advanced Scientific Computing Program to Meet Basic Research Needs in the Office of Science
A program designed to address the computing needs of basic science research would have three components:

- One or two major supercomputer centers. These centers would focus on the high-value applications that truly require massive numbers of processors and significant infrastructure that smaller centers are not capable of providing. These large centers should be reserved for running only such applications. Within DOE, NERSC is an example of such a center.
- A number (10-15) of application-specific computer centers, each targeted at a single problem domain. A reasonable estimate is that a single center would require between $3 M and $5 M per year, depending on architecture and size. The nature of the problem would determine the specific choice of technology and computer architecture, so that the system is adapted to the problem at hand. The scale of these centers is thus similar to that of the NSF "Centers of Excellence".
- Development of enabling technologies, especially very high speed networking infrastructure and services.
The goal of an advanced scientific computing program is to deploy resources efficiently to solve demanding science problems, to ensure that high-end computing resources are available to the scientist, and to assure access to the systems for development and adequate testing. This can most efficiently be achieved through a combination of large supercomputer centers (like the National Energy Research Scientific Computing Center (NERSC) operated by Lawrence Berkeley National Laboratory) and a suitable number of "application-specific computing centers", each on average having roughly 5-10% of the resources of the large centers. These application-specific centers should be deployed into the DOE science community at both universities and laboratories and be focused on specific problem domains. This is an excellent means of getting advanced computing technologies into the scientific community, and it encourages the development of the techniques and algorithms essential to the efficient utilization of the resources. An application-specific center will also be a conduit through which supercomputing facilities are made available to university and other groups that collaborate with the DOE laboratories. These facilities have to be closely aligned with the DOE science programs and aimed at tackling the broad range of scientific applications involved. It is envisaged that such centers would be located at existing facilities in order to leverage existing infrastructure and expertise.

The third part of the program would involve investment in enabling technologies: high-speed networks to provide access to the facilities, wide-area management of massive amounts of data, and collaboratory tools. This part of the program is critical to integrating university-based research and education programs into the overall effort to use advanced computing for science.
Application Specific Optimizations

Large-scale scientific applications can be characterized by the demands placed upon several key characteristics of a supercomputer, including integer and floating point performance, cache and main memory sizes and speeds, disk size and speed, network latency and bandwidth, and bandwidth to tertiary storage (for example, a robotic tape library). Tradeoffs can be made among these parameters to achieve the best price/performance for the application. National supercomputing centers choose parameters to yield the highest performance for the broadest possible set of applications. If scientific applications were substantially the same in the ratios of these parameters, this would be an ideal solution. In reality, different applications often call for markedly different parameters, to such an extent that, for optimal price/performance, it is reasonable to deploy multiple supercomputers with different architectures and optimizations.

One well-known example of this variability is in the area of nuclear theory calculations: Lattice Quantum Chromodynamics (QCD). This application requires an extremely large number of floating point operations and high memory bandwidth, yet relatively little memory or disk space. Lattice QCD is also remarkable in that special purpose computers have been built in order to better match the requirements of this application, with custom processor-to-processor links emphasizing nearest-neighbor communications on a four-dimensional lattice. These machines emphasize large processor counts and lean memory configurations, and they have been extremely cost-effective.

Within the realm of massively parallel supercomputers there are a variety of possible architectures, which are more or less well adapted to the multitude of different scientific modeling and simulation problems. While Lattice QCD is one example, other problems require large amounts of memory associated with each processor, or large-bandwidth I/O between processors. Until now, the approach has mostly been to use a single very expensive architecture to solve all such problems, sharing processors and time between applications. With advances in technology that permit the construction of reasonably sized supercomputers from commercial components, it is far more efficient to use the massive supercomputers for those applications that truly require them, while moving other applications onto dedicated systems configured optimally for the specific problems at hand.
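To make the nearest-neighbor communication pattern concrete, the toy sketch below (Python, purely illustrative and not taken from any production lattice-QCD code) sums the eight nearest neighbors of every site on a small four-dimensional periodic lattice; the lattice size and the use of a simple scalar field are assumptions made only for brevity.

```python
# Toy illustration of the communication pattern that special-purpose
# lattice-QCD machines optimize for: a nearest-neighbor stencil on a
# 4-D periodic lattice.  (Illustrative sketch; real QCD codes operate
# on gauge links and spinor fields, not a scalar field.)
import numpy as np

L = 8                                    # lattice sites per dimension (assumed)
phi = np.random.rand(L, L, L, L)         # scalar stand-in for the lattice field

def nearest_neighbor_sum(field):
    """Sum of the 8 nearest neighbors of every site, periodic boundaries."""
    total = np.zeros_like(field)
    for axis in range(4):                # the four lattice directions
        total += np.roll(field, +1, axis=axis)
        total += np.roll(field, -1, axis=axis)
    return total

# Each site touches only its 8 neighbors, so a machine partitioned into
# 4-D sub-lattices exchanges only thin boundary "faces" with adjacent
# processors: heavy floating point and memory bandwidth, but modest
# memory and disk, exactly the trade-off described above.
neighbors = nearest_neighbor_sum(phi)
print(neighbors.shape)
```
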

The Benefits of Application-Specific Computing Centers
Application-specific Computing Centers are a valuable adjunct to the national supercomputing centers in that they allow high performance computing platforms to be optimized to the needs of a single (or small number of) scientific application(s). Further, when sited in the middle of an energetic scientific activity, they can provide enhanced opportunities in scheduling flexibility, adaptability, and on-demand support of a targeted user community. Finally, a modest number of moderately large computers can provide a higher aggregate performance for a given investment when compared to a single large supercomputer operated in a time-shared or space-shared (processor allocation) manner.
Cost Effective Centers

Supercomputing centers today share their resources among a large number of users, both by time allocation and by space allocation (allocating a certain number of processors within a large multiprocessor machine). A typical large job experiences a delay, from the time it is submitted to a batch queue until it starts executing, that depends upon the average job size and the number of jobs in the queue. Segmenting a supercomputer into a small number of smaller machines would have negligible impact on the time a user waits for a job to complete, as long as the largest machine is at least as large as that user's time-averaged allocation on the supercomputer, which is usually less than 5%. Implementing a ten Tera-op supercomputing capability to support 100 applications as ten distinct one Tera-op machines, each supporting 10 applications, allows each of those ten machines to be optimized for a different set of applications, thereby achieving higher performance for a fixed budget and shortening the average wait seen by a user. Deploying multiple supercomputers is cost effective as long as each such machine can be operated at high efficiency, which implies mature science applications (able to use a machine of the given size), a large enough user community to keep each machine productively busy, and an environment capable of supporting the machine. Scaling science applications to one thousand processors is considerably easier than scaling to ten thousand processors, so most applications would do better to have a thousand processors for 50% of the time than to have ten thousand processors for 5% of the time.
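The final claim can be checked with a back-of-envelope model. The sketch below (Python, illustrative only) compares the two allocations under a simple Amdahl's-law scaling model; the 0.1% serial fraction is an assumed number chosen purely to illustrate the trade-off, not a measured property of any application.

```python
# Equal processor-hours, different parallel efficiency: 1,000 processors
# for 50% of the time versus 10,000 processors for 5% of the time.
def amdahl_speedup(n_procs, serial_fraction=0.001):
    """Amdahl's-law speedup for an assumed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

for n_procs, share_of_time in [(1_000, 0.50), (10_000, 0.05)]:
    effective = share_of_time * amdahl_speedup(n_procs)
    print(f"{n_procs:6d} processors at {share_of_time:4.0%} of the time "
          f"-> effective speedup ~ {effective:6.1f}x")
```

Under these assumptions the dedicated thousand-processor share delivers several times the effective throughput of a 5% slice of a ten-thousand-processor machine, because few applications scale efficiently to ten thousand processors.
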


There is of course an upper limit on the number of smaller machines to create, as each such machine must have a large enough job mix to keep it busy (high efficiency) and enough processors to satisfy the allocation of a particular program. Assuming that there is at least one very large center (such as NERSC) to run the extremely scalable, high-priority applications, it is straightforward to determine that over a dozen additional topical machines could be operated at the required high efficiency.
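A rough sizing sketch, using only figures quoted elsewhere in this paper (the $3-5 M per-center operating cost, the proposed $75-100 M per year program line, and the 100-application, 10-applications-per-machine workload of the previous section), makes the "over a dozen" estimate plausible; it is illustrative arithmetic, not a budget.

```python
# Rough sizing arithmetic from numbers quoted in this paper.
budget_per_year = (75e6, 100e6)          # proposed funding range, $/year
cost_per_center = (3e6, 5e6)             # per-center operating cost, $/year

centers_by_budget = (budget_per_year[0] / cost_per_center[1],
                     budget_per_year[1] / cost_per_center[0])
print(f"budget supports roughly {centers_by_budget[0]:.0f}-"
      f"{centers_by_budget[1]:.0f} centers")

applications = 100                       # applications needing Tera-scale time
apps_per_center = 10                     # enough of a job mix to stay busy
print(f"user community supports ~{applications // apps_per_center} centers")
```
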
Enhanced Responsiveness

A computing facility operating to meet the needs of a targeted community can more readily adapt to changing priorities within that community. Instead of having to compare, for example, the priority of a biological science application and a high-energy physics application, it is only necessary to compare two applications within the same science discipline. All members of a program advisory committee would have sufficient knowledge to set priorities, and in fact the community itself could be involved in this dynamic decision-making process. The center's advisory committee (consisting of members of the community being served or experts in that domain) could operate in a manner similar to a program advisory committee for a single-purpose experimental physics laboratory such as the Stanford Linear Accelerator Center or Thomas Jefferson National Accelerator Facility. Additionally, the computing center could adapt its budget to the computing priorities of the smaller community, upgrading disks, processors, or software according to those evolving priorities.
Reaching the User Community

Deploying these multiple supercomputers at geographically dispersed locations can improve the coupling with the user community and hence improve the science achieved for the investment. However, they must be deployed to locations that can provide a high quality infrastructure. This infrastructure includes good network connectivity, access to large tertiary storage, large disk pools for staging, and a competent staff. These requirements are well met at all of the DOE national laboratories as well as at several major universities. Many of these facilities already operate large computing centers for experimental data analysis and other general purpose computing tasks, and have good network connectivity and robotic tape storage. In addition, each is the logical center of one or more vital scientific activities.

Enabling Technologies
The third cornerstone of this program is the set of technologies that enable wide use of the facilities, both the national supercomputer centers and the topical centers. These technologies include:

- High speed networking. In this respect, the Energy Sciences Network (ESnet) is already on track to provide the bandwidth needed for future applications, and it is very well connected to the systems providing networking to the university community. Both the high-energy physics experiments, particularly the LHC experiments, and any large-scale simulation initiatives will involve Petabyte-scale data sets (see the illustrative arithmetic after this list). These networks will provide bandwidth for massive data transfers as well as for other purposes: videoconferencing, collaboratory tools, communication, etc. Effort will need to be focused on how these varying response requirements are handled so that bandwidth is guaranteed at the appropriate time and rate for each application.
- Data grids, which are major research efforts in both the US and Europe. A grid is the middleware that uses a high speed network infrastructure to provide resources and data to a researcher when those resources and data may be geographically widely separated. Such initiatives are the foundation of a real ability to provide true accessibility to large supercomputers for individual researchers or groups.
- Other technologies required for managing and using large massively parallel systems, including distributed systems development and collaboratory tools and resources.
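As a purely illustrative data point for the networking requirement, the sketch below computes how long a one-Petabyte dataset takes to move at several sustained rates; the rates are assumed examples for illustration, not ESnet specifications or commitments.

```python
# Illustrative arithmetic: time to move a Petabyte-scale dataset at
# various sustained network rates (assumed example rates).
PETABYTE_BITS = 1e15 * 8                 # 1 PB (decimal) in bits

for gbps in (0.155, 1, 10):              # OC-3-class, 1 Gb/s, 10 Gb/s
    seconds = PETABYTE_BITS / (gbps * 1e9)
    print(f"{gbps:7.3f} Gb/s sustained -> {seconds / 86_400:8.1f} days per PB")
```

Even at a sustained 10 Gb/s, a single Petabyte takes on the order of ten days to transfer, which is why the networking and data-management investment is an essential element of the program rather than an afterthought.
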

These enabling technologies would put advanced scientific computing tools in the hands of a broad community of users at both university- and laboratory-based facilities.

Deployment
An initial goal would be to deploy, in FY02, a dozen or so one Tera-op machines with architectures targeted to specific science domains. These would complement a 10 Tera-op machine at a major supercomputing center (such as NERSC). Ideal locations for many of the topical supercomputing centers would be DOE National Laboratories and some universities with adequate existing infrastructure that could be leveraged to support a topical center. Tight coupling of these systems to the user community could be ensured with an advisory committee drawn from that community.

Conclusion
We foresee that within five years a 100 Tera-op supercomputer will be a viable proposition, given the expected advances in microprocessor and networking technologies. It would be reasonable to expect such a machine to be available to civilian science programs and devoted to problems that require significant amounts of time on huge numbers of processors. Hand in hand with this, building a 10 Tera-op machine should be relatively straightforward; in terms of numbers of processors it is of the same order as the systems many DOE labs and some universities are currently building or planning in support of their existing experimental programs and scientific research. Thus it is reasonable to expect to be able to deploy several such systems, some larger, some smaller, into the hands of the science communities where there is now a huge unfulfilled demand, building upon existing expertise and infrastructure. The advances in science that would spring from such an initiative would be significant, benefiting the science communities and cross-fertilizing with the other programs in the advanced computing community to achieve the excellence in massively parallel computing techniques that would come with the availability of such systems. The benefits to the nation of such a program, for science leadership, scientific manpower, and economic leadership, will be significant and far-reaching.

References
1. "Scientific Discovery through Advanced Computing", Department of Energy, Office of Science, March 2000; available at http://www.sc.doe.gov/production/octr

