
                  International Committee for Future Accelerators (ICFA)
 Standing Committee on Inter-Regional Connectivity (SCIC)
      Chairperson: Professor Harvey Newman, Caltech




              ICFA SCIC Report
Networking for High Energy and Nuclear Physics




                 On behalf of ICFA SCIC:
        Harvey B. Newman newman@hep.caltech.edu




                      February 2004
           (A Revision of the 2003 SCIC Report)
List of SCIC Members:
People                 e-mail                            Organisation / Country
Alberto Santoro        santoro@uerj.br                   UERJ (Brazil)
Alexandre Sztajnberg   alexszt@uerj.br                   UERJ (Brazil)
Arshad Ali             arshad.ali@niit.edu.pk            NIIT(Pakistan)
Daniel Davids          daniel.davids@cern.ch             CERN (CH)
David Foster           david.foster@cern.ch              CERN (CH)
David O. Williams      david.o.williams@cern.ch          CERN (CH)
Dean Karlen            karlen@uvic.ca                    Univ. of Victoria & TRIUMF (Canada)
Denis Linglin          linglin@in2p3.fr                  IN2P3 Lyon (France)
Dongchul Son           son@knu.ac.kr                     KNU (Korea)
Federico Ruggieri      federico.ruggieri@ba.infn.it      INFN (Italy)
Fukuko Yuasa           fukuko.yuasa@kek.jp               KEK (Japan)
Hafeez Hoorani         hoorani@comsats.net.pk            Pakistan
Harvey B. Newman       newman@hep.caltech.edu            Caltech (USA)
Heidi Alvarez          heidi@fiu.edu                     Florida International University (USA)
HwanBae Park           sunshine@knu.ac.kr                KNU (Korea)
Julio Ibarra           julio@fiu.edu                     Florida International University (USA)
Les Cottrell           cottrell@slac.stanford.edu        SLAC (USA)
Marcel Kunze           marcel.kunze@hik.fzk.de           FZK (Germany)
Michael Ernst          michael.ernst@desy.de             DESY (Germany)
Olivier H. Martin      olivier.martin@cern.ch            CERN (CH)
Richard Hughes-Jones   r.hughes-jones@man.ac.uk          University of Manchester (UK)
Richard Mount          richard.mount@slac.stanford.edu   SLAC (USA)
Rongsheng Xu           xurs@sun.ihep.ac.cn               IHEP (China)
Sergei Berezhnev       sfb@radio-msu.net                 RUHEP (RU)
Sergio F. Novaes       novaes@fnal.gov                   State University of Sao Paulo (Brazil)
Shawn McKee            smckee@umich.edu                  University of Michigan (USA)
Viacheslav Ilyin       ilyin@sinp.msu.ru                 SINP MSU (RU)
Sunanda Banerjee       sunanda.banerjee@cern.ch          India
Syed M. H. Zaidi       drzaidi@niit.edu.pk               NUST (Pakistan)
Sylvain Ravot          sylvain.ravot@cern.ch             Caltech (USA)
Vicky White            white@fnal.gov                    FNAL (USA)
Vladimir Korenkov      korenkov@cv.jinr.ru               JINR, Dubna (RU)
Volker Guelzow         volker.guelzow@desy.de            DESY (Germany)
Yukio Karita           yukio.karita@kek.jp               KEK (Japan)




ICFA SCIC Monitoring Working Group:

Chair: Les Cottrell      cottrell@slac.stanford.edu   SLAC (USA)

Daniel Davids            daniel.davids@cern.ch        CERN (CH)
Fukuko Yuasa             Fukuko.yuasa@kek.jp          KEK (Japan)
Richard Hughes-Jones     r.hughes-jones@man.ac.uk     University of Manchester (UK)
Sergei Berezhnev         sfb@radio-msu.net            RUHEP (RU)
Sergio Novaes            novaes@fnal.gov              Sao Paulo (Brazil)
Shawn McKee              smckee@umich.edu             University of Michigan (USA)
Sylvain Ravot            sylvain.ravot@cern.ch        Caltech (USA)

ICFA SCIC Digital Divide Working group:

Chair: Alberto Santoro   santoro@uerj.br              UERJ (Brazil)

David O. Williams        david.o.williams@cern.ch     CERN (CH)
Dongchul Son             son@knu.ac.kr                KNU (Korea)
Hafeez Hoorani           hoorani@comsats.net.pk       Pakistan
Harvey Newman            newman@hep.caltech.edu       Caltech (USA)
Viacheslav Ilyin         ilyin@sinp.msu.ru            SINP MSU (RU)
Heidi Alvarez            heidi@fiu.edu                Florida International University (USA)
Julio Ibarra             julio@fiu.edu                Florida International University (USA)
Sunanda Banerjee         sunanda.banerjee@cern.ch     India
Syed M. H. Zaidi         drzaidi@niit.edu.pk          NUST (Pakistan)
Vicky White              white@fnal.gov               FNAL (USA)
Yukio Karita             yukio.karita@kek.jp          KEK (Japan)

ICFA SCIC Advanced Technologies Working Group:

Chair: R. Hughes-Jones   r.hughes-jones@man.ac.uk     University of Manchester (UK)

Harvey Newman            newman@hep.caltech.edu       Caltech (USA)
Olivier H. Martin        olivier.martin@cern.ch       CERN (CH)
Sylvain Ravot            sylvain.ravot@cern.ch        Caltech (USA)
Vladimir Korenkov        korenkov@cv.jinr.ru          JINR, Dubna (RU)




                      Report of the Standing Committee on
                       Inter-Regional Connectivity (SCIC)
                                  February 2004

         Networking for High Energy and Nuclear Physics

                                               On behalf of the SCIC:

                                              Harvey B. Newman
                                       California Institute of Technology
                                           Pasadena, CA 91125, USA
                                            newman@hep.caltech.edu

1.   Introduction: HENP Networking Challenges ............................................................. 6
2.   ICFA SCIC in 2002-3 ................................................................................................. 7
3.   General Conclusions ................................................................................................. 11
4.   Recommendations ..................................................................................................... 12
5.   The Digital Divide and ICFA SCIC.......................................................................... 16
  5.1.    The Digital Divide illustrated by network infrastructures ................................ 18
  5.2.    Digital Divide illustrated by network performance ........................................... 19
  5.3.    How GÉANT closes the Digital Divide in Europe ............................................ 22
  5.4.    A new “culture of worldwide collaboration” .................................................... 23
6. HENP Network Status: Major Backbones and International Links.......................... 24
  6.1.    Europe ............................................................................................................... 26
  6.2.    North America .................................................................................................. 35
  6.3.    Korea and Japan ................................................................................................ 40
  6.4.    Intercontinental links ........................................................................................ 43
7. Advanced Optical Networking Projects and Infrastructures .................................... 47
  7.1.    Advanced Optical Networking Infrastructures ................................................. 47
     AARNet in Australia................................................................................................. 47
     SANET in Slovakia................................................................................................... 47
      CESNET in the Czech Republic ................................................................... 48
     PIONIER in Poland................................................................................................... 49
      SURFnet6 in the Netherlands ..................................................................... 49
     X-Win in Germany ................................................................................................... 50
     FiberCO in the US .................................................................................................... 51
     National LambdaRail in the US ................................................................................ 51
  7.2.    Advanced Optical Networking Projects and Initiatives .................................... 53
8. HENP Network Status: “Remote Regions” .............................................................. 58
  8.1.    Eastern Europe .................................................................................. 58
  8.2.    Russia and the Republics of the former Soviet Union ...................................... 59
  8.3.    Mediterranean countries.................................................................................... 61
  8.4.    Asia Pacific ....................................................................................................... 62
  8.5.    South America .................................................................................................. 64



9. The Growth of Network Requirements in 2003 ....................................................... 67
10.    Growth of HENP Network Usage in 2001-2004 .................................................. 72
11.    HEP Challenges in Information Technology ........................................................ 75
12.    Progress in Network R&D .................................................................................... 75
13.    Upcoming Advances in Network Technologies ................................................... 76
14.    Meeting the challenge: HENP Networks in 2005-2010; Petabyte-Scale Grids with
Terabyte Transactions ....................................................................................................... 78
15.    Coordination with Other Network Groups and Activities .................................... 79
16.    Broader Implications: HENP and the World Summit on the Information Society ...... 80
17.    Relevance of Meeting These Challenges for Future Networks and Society ........ 81




1.       Introduction: HENP Networking Challenges

Wide area networking is a fundamental and mission-critical requirement for High Energy and
Nuclear Physics. Moreover, HENP's dependence on high performance networks is increasing
rapidly. National and international networks of sufficient (and rapidly increasing) bandwidth and
end-to-end performance are now essential for each part and every phase of our physics programs,
including:
        Data analysis involving physicists from all world regions
        Detector development and construction on a global scale
        The daily conduct of collaborative work in small and large groups, in both experiment
         and theory
        The formation and successful operation of worldwide collaborations
        The successful and ubiquitous use of current and next generation distributed collaborative
         tools
        The conception, design and implementation of next generation facilities as “global
         networks”1
     Referring to the largest experiments, and to the global collaborations of 500 to 2000
     physicists from up to 40 countries and up to 160 institutions, a well known physicist2
     summed it up by saying:


     “Collaborations on this scale would never have been attempted,
             if they could not rely on excellent networks.”

In an era of global collaborations, and data intensive Grids, advanced networks are required to
interconnect the physics groups seamlessly, enabling them to collaborate throughout the lifecycle
of their work. For the major experiments, networks that operate seamlessly, with quantifiable
high performance and known characteristics are required to create data Grids capable of
processing and sharing massive physics datasets, rising from the Petabyte (10^15 bytes) to the
Exabyte (10^18 bytes) scale within the next decade.
The need for global network-based systems that support our science has made the HENP
community a leading early-adopter, and more recently a key co-developer of leading edge wide
area networks. Over the past few years, several groups of physicists and engineers in our field, in
North America, Europe and Asia, have worked with computer scientists to make significant
advances in the development and optimization of network protocols, and methods of data
transfer. During 2003 these developments, together with the availability of 2.5 and 10 Gigabit/sec
wide area links and advances in data servers and their network interfaces (notably 10 Gigabit
Ethernet) have made it possible for the first time to utilize networks with relatively high
efficiency in the 1 to 10 Gigabit/sec (Gbps) speed range over continental and transoceanic
distances.
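As a rough illustration of why such transfers over long distances are demanding, the amount of data that must be kept "in flight" to fill a path is the bandwidth-delay product. The sketch below (Python; the RTT values are assumed round figures for continental and transoceanic paths, not measurements from this report) computes the buffer or TCP window size this implies:

    # Illustrative sketch: the data "in flight" needed to fill a path equals the
    # bandwidth-delay product (BDP) = bandwidth x round-trip time.
    # The RTT figures below are assumed, rounded values, not measured ones.

    def bdp_megabytes(bandwidth_gbps: float, rtt_ms: float) -> float:
        """Bandwidth-delay product in megabytes."""
        return bandwidth_gbps * 1e9 / 8 * (rtt_ms / 1000.0) / 1e6

    for gbps in (1, 2.5, 10):
        for label, rtt in (("continental", 60), ("transatlantic", 120),
                           ("transpacific", 180)):
            mb = bdp_megabytes(gbps, rtt)
            print(f"{gbps:>4} Gbps, {label:13s} RTT {rtt:3d} ms -> ~{mb:6.1f} MB in flight")

Without TCP window scaling the window is limited to 64 kB, far below these figures, which is one reason the protocol and data-server developments described above were needed.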


1
  Such as the Global Accelerator Network (GAN); see http://www.desy.de/~dvsem/dvdoc/WS0203/willeke-
20021104.pdf.
2
  Larry Price, Argonne National Laboratory, in the TransAtlantic Network (TAN) Working Group Report, October
2001; see http://gate.hep.anl.gov/lprice/TAN.


These developments have been paralleled by upgrades of the national and continental core
network infrastructures, as well as of the key transoceanic links used for research and education,
to typical bandwidths of 2.5 to 10 Gbps in North America, Western Europe, Japan and Korea.
This is documented in a series of brief Appendices to this report covering some of the
major national and international networks and network R&D projects. The transition to the use of
“wavelength division multiplexing” to support multiple optical links on a single fiber has made
these links increasingly affordable, and this has resulted in a substantially increased number of
these links coming into service during 2003. In 2004 we expect this trend to continue and spread
to other regions, notably including 10 Gbps links across the Atlantic and Pacific linking Australia
and North America, and Russia, China and the US through the “GLORIAD” optical ring project.
In some cases high energy physics laboratories or computer centers have been able to acquire
leased “dark fiber” to their site, where they are able to connect to the principal wide area
networks they use with one or more wavelengths. In 2003-4 we are seeing the emergence of some
privately owned or leased wide area fiber infrastructures, managed by non-profit consortia of
universities and regional network providers, to be used on behalf of research and education.3
This includes “National LambdaRail”, covering much of the US, accompanied by initiatives in
many states (notably Illinois, California, and Florida); similar initiatives are already underway,
or being considered, in several European countries (notably the Netherlands, Poland and the
Czech Republic).
These trends have also led to a forward-looking vision of much higher capacity networks based
on many wavelengths in the future, where statically provisioned shared network links are
complemented by dynamically provisioned optical paths to form “Lambda Grids” for the most
demanding applications. The visions of advanced networks and Grid systems are beginning to
converge, where future Grids will include end-to-end monitoring and tracking of networks as well
as computing and storage resources, forming an integrated information system supporting data
analysis, and more broadly research in many fields, on a global scale.
The rapid progress and the advancing vision of the future in the “most economically favored
regions” of the world during 2003 also has brought into focus the problem of the Digital Divide
that has been a main activity of the SCIC over the last three years. As we advance, there is an
increasing danger that the groups in the less favored regions, including Southeast Europe, Latin
America, much of Asia, and Africa will be left behind. This problem needs concerted action on
our part, if our increasingly global physics collaborations are to succeed in enabling scientists
from all regions of the world to take their rightful place as full partners in the process of scientific
discovery.



2.        ICFA SCIC in 2002-3

The intensive activities of the SCIC continued in 2003 (as foreseen), and we expect this level of
activity to continue in 2004. The committee met 3 times in the last 12 months, and continued to
carry out its charge to:
         Track network progress and plans, and connections to the major HENP institutes and

3
  The rapid rise of plans or investigations of options for dark fiber in many corners of the world has been accompanied
by a growing awareness that the availability of leased or owned optical fiber at affordable prices may be short-lived, as
the telecom industry recovers following a period of rampant bankruptcies and consolidation in 2001-3.




         universities in countries around the world;
        Monitor network traffic, and end-to-end performance in different world regions
        Keep track of network technology developments and trends, and use these to “enlighten”
         network planning for our field
        Focus on major problems related to networking in the HENP community; determine
         ways to mitigate or overcome these problems; bring these problems to the attention of
         ICFA, particularly in areas where ICFA can help.
The SCIC working groups formed in the Spring of 2002 continued their work.
        Monitoring (Chaired by Les Cottrell of SLAC)
        Advanced Technologies (Chaired by Richard Hughes-Jones of Manchester)
        The Digital Divide (Chaired by Alberto Santoro of UERJ, Brazil)4

The working group membership was strengthened through the participation of several technical
experts and members of the community with relevant experience in networks, network
monitoring, and other relevant technologies throughout 2003.
The SCIC web site (http://cern.ch/icfa-scic), hosted by CERN and set up in the summer of
2002, is kept up to date with detailed information on the membership, meetings, minutes,
presentations and reports. An extensive set of reports used to write this report is available at the
Web site.
While there were fewer general meetings of the SCIC in 2003 than in 2002, SCIC members took
an active and often central role in a large number of conferences, workshops and major events
related to networking and the Digital Divide during the past year. These events tended to focus
attention on the needs of the HENP community, as well as on its key role as an application driver
and as a developer of future networks:
     SWITCH-CC (Coordination Committee) meeting, January 23, University of Bern,
        “DataTAG project Update”
        The AMPATH Workshop on Fostering Collaborations and Next-Generation
         Infrastructure, Miami, January 2003.5
        Meetings of TERENA, the Trans-European Research and Education Networking
         Association6. One of the key activities of TERENA was SERENATE7, “a series of
         strategic studies into the future of research and education networking in Europe,
         addressing the local (campus networks), national (national research & education
         networks), European and intercontinental levels” covering technical, policy, pricing and
         Digital Divide issues.
         Members' Meetings of Internet2 [8], including the Internet2 HENP Sponsored Interest
         Group, the Applications Strategy Council, the End-to-end Performance Initiative and the
         VidMid Initiative on collaborative middleware.
        Meetings of APAN9, the Asia Pacific Advanced Network (January and August 2003)

4
  The Chair of this working group passed from Manuel Delfino of Barcelona to Santoro in mid-2002.
5
  http://ampath.fiu.edu/miami03_agenda.htm
6
  http://www.terena.nl, The TERENA compendium contains detailed information of interest on the status
and evolution of research and education networks in Europe.
7
  The SERENATE website at http://www.serenate.org includes a number of public documents of interest.
8
  http://www.internet2.edu
9
  http://www.apan.net


           GEANT APM (Access Point Manager) meeting, February 3, CESCA, Barcelona (ES),
            “DataTAG project Update”
           PFLDnet, February 3-4, CERN, Geneva (CH), “GridDT (Data Transport)”
           ON-Vector Photonics workshop, February 4, San Diego (USA), "Optical Networking
            Experiences @ iGrid2002".
           OptIPuter workshop, February 7, San Diego (USA), "IRTF-AAAARCH research group"
           First European Across Grids Conference, February 14, Santiago de Compostela (ES),
             “TCP behaviour on Trans Atlantic Lambda's”
           MB-NG workshop, February, University College London (UCL) (UK), “Generic AAA-
            Based Bandwidth on Demand”
            CHEP'2003, March 24-28, La Jolla/San Diego (USA), Olivier Martin (CERN),
            “DataTAG project Update”
           JAIST10 (Japan Advanced Institute of Science & Technology) Seminar, 24 February,
            Ishikawa (Japan), “Efficient Network Protocols for Data-Intensive Worldwide Grids”
           NTT, Tokyo (Japan), March 3, "Optical Networking Experiences @ iGrid2002"
           GGF7, Tokyo (Japan), March 4, "Working and research group chairs training".
           DataGrid conference, May 2003, Barcelona (ES), “DataTAG project update”
           RIPE-45 meeting (European Operators Forum), May 2003, Barcelona (ES), “Internet
            data transfer records between CERN and California”
           Terena Networking Conference, May 2003, Zagreb (HR), “High-Performance Data
            Transport for Grid Applications”
            After-C5 & LCG meetings, June 2003, CERN (CH), “CERN's external networking
            update”
           US DoE workshop, June 2003, Reston, Virginia (USA), “Non-US networks”
           Grid Concertation meeting, June 2003, Brussels (BE), “DataTAG contributions to
            advanced networking, Grid monitoring, interoperability and security”.
           GGF8, June 2003, Seattle, Washington State (USA), “Conceptual Grid Authorization
            Framework”.
            ISOC members' meeting (ledenvergadering), Amsterdam (NL), June 2003, "High speed networking for Grid
            Applications".
           SURFnet expertise seminar, Amsterdam (NL), June 2003, "High speed networking for
            Grid Applications".
           ASCI conference, Heijen (NL), June 2003, "High speed networking for Grid
            Applications".
           GGF GRID school, July 2003, Vico Equense, Italy, “Lecture on Glue Schema”
           EU-US Optical “lambda” workshop appended to the 21st NORDUnet Network
            Conference, August 2003, Reykjavik, Iceland, “The case for dynamic on-demand
            “lambda” Grids”
           NORDUnet 2003 Network Conference, August 2003, Reykjavik, Iceland, “High-
            Performance Transport for Data-Intensive World-Wide Grids”


10
     http://www.jaist.ac.jp/


        9th Open European summer School and IFIP Workshop on Next Generation Networks
         (EUNICE 2003), September 2003, Budapest–Balatonfüred, Hungary, “Benchmarking
         QoS on Router Interfaces of Gigabit Speeds and Beyond”
         NEC'2003 conference, September 2003, Varna (Bulgaria), “DataTAG project update”
        RIPE-46 September 2003 meeting "PingER: a lightweight active end-to-end network
         monitoring tool/project”
        University of Michigan & MERIT, October 2003, Ann Arbor (MI/USA), "The Lambda
         Grid".
         Cracow Grid Workshop (CGW'03), October 2003, Cracow, Poland, “DataTAG project
         update & recent results”
         The Open Round Table “Developing Countries Access to Scientific Knowledge;
          Quantifying the Digital Divide”, ICTP Trieste, October 2003.
         Japan's Internet Research Institute, October 2003, CERN (Switzerland), “DataTAG
          project update & recent results”
         Telecom World 2003, Geneva, October 2003. This conference, held every 3-4 years,
          brings together the telecommunications industry and key network developers and
         researchers. Caltech and CERN collaborated on a stand at the conference, on a series of
         advanced network demonstrations, and a joint session with the Fall Internet2 meeting.
         More details are available in appendix 26.
        UKLight Open Meeting, November 2003, Manchester (UK), "International Perspective:
         Facilities supporting research and development with LightPaths".
        The SuperComputing 2003 conference (November 15-21, Phoenix, Arizona, USA). SCIC
         members were involved in the construction of several booths, and conducted numerous
         demonstrations and presentations of Grid and network technologies. Scientists from
         Caltech, SLAC, LANL and CERN joined forces to win the Sustained Bandwidth award
         for their demonstration of “Distributed Particle Physics Analysis Using Ultra-High Speed
         TCP on the Grid”. More details are available in appendix 26.
        Bandwidth ESTimation 2003 workshop, organized by DoE/CAIDA, December 2003, San
         Diego, CA, USA, “A method for measuring the hop-by-hop capacity of a path”
        The World Summit on the Information Society11 (Geneva December 2003) and the
         associated CERN event on the Role of Science in the Information Society (RSIS). More
         details are available in appendix 26.
        Preparations for and launch of GLORIAD12, the US-Russia-China Optical Ring.


The conclusion from the SCIC meetings throughout 2002-3, setting the tone for 2004, is that the
scale and capability of networks, their pervasiveness and range of applications in everyday life,
and the dependence of our field on networks for its research in North America, Europe and Japan are
all increasing rapidly. One recent development accelerating this trend is the worldwide
development and deployment of data-intensive Grids, especially as physicists begin to develop



11
  http://www.itu.int/WORLD2003/
12
  http://www.gloriad.org and http://www.nsf.gov/od/lpa/news/03/pr03151_video.htm . Information on the
Launch Ceremony (January 12-13, 2004) can be found at
http://www.china.org.cn/english/international/84572.htm


ways to do data analysis, and to collaborate in a “Grid-enabled environment13”.

However, as the pace of network advances continues to accelerate, the gap between the
technologically “favored” regions and the rest of the world is, if anything, in danger of widening.
Since networks of sufficient capacity and capability in all regions are essential for the health of
our major scientific programs, as well as our global collaborations, we must encourage the
development and effective use of advanced networks in all world regions. We therefore agreed to
make the committee's work on closing the Digital Divide [14] a prime focus for 2002-4 [15].
The SCIC also continued and expanded upon its work on monitoring network traffic and
performance in many countries around the world through the Monitoring Working Group. An
updated report from this Working Group accompanies this report. We continued to track key
network developments through the Advanced Technologies Working Group.


3.         General Conclusions

          The bandwidth of the major national and international networks used by the HENP
           community, as well as of the transoceanic links, is progressing rapidly and has reached
           the 2.5 – 10 Gbps range. This is encouraged by the continued rapid fall of prices per unit
           bandwidth16 for wide area networks, as well as the widespread and increasing
           affordability of Gigabit Ethernet.
          A key issue for our field is to close the Digital Divide in HENP, so that scientists from
           all regions of the world have access to high performance networks and associated
           technologies that will allow them to collaborate as full partners: in experiment, theory
           and accelerator development. This is discussed in the following sections of this report,
           and in more depth in the 2003 Digital Divide Working Group Report17.
          The rate of progress in the major networks has been faster than foreseen (even 1 to 2
           years ago). The current generation of network backbones, representing an upgrade in
           bandwidth by factors ranging from 4 to more than several hundred in some countries,
           arrived in the last two years in the US, Europe and Japan. This rate of improvement is
           faster than, and in some cases many times, the rate of Moore's Law [18]. This rapid rate of
           progress, confined mostly to the US, Europe, Japan and Korea, as well as the major
           Transatlantic routes, threatens to open the Digital Divide further, unless we take action.
          Reliable high End-to-end Performance of networked applications such as large file
           transfers and Data Grids is required. Achieving this requires:
                o   End-to-end monitoring extending to all regions serving our community. A
                    coherent approach to monitoring that allows physicists throughout our
                    community to extract clear, unambiguous and inclusive information is a
                    prerequisite for this.

13
     See http://ultralight.caltech.edu/gaeweb
14
   This is a term for the gap in network capability, and the associated gap in access to communications and Web-based
information, e-learning and e-commerce, that separates the wealthy regions of the world from the poorer regions.
15
   In 2003, the world focused on Digital Divide issues through the WSIS and RSIS events. This global focus will
continue at least until the end of 2005, when the second half of the WSIS is held in Tunis.
16
   Bandwidth prices are expected to continue to fall for the next few years, although at a more modest rate than in the
recent past, due to the recovery of the telecom industry and the resulting rise in the demand for bandwidth.
17
   Available on the Web at http://cern.ch/icfa-scic
18
   Usually quoted as a factor of 2 improvement in performance at the same cost every 18 months.


             o   Upgrading campus infrastructures. While National and International
                 backbones have reached 2.5 to 10 Gbps speeds in many countries, campus
                 network infrastructures are still not designed to support Gbps data transfers at
                 most HEP centers. A reason for the underutilization of National and
                 International backbones is the lack of bandwidth to groups of end users inside
                 the campus.
            o   Removing local, last mile, and national and international bottlenecks end-to-
                end, whether the bottlenecks are technical or political in origin. Many HEP
                laboratories and universities situated in countries with excellent network
                backbones are not well-connected, due to limited access bandwidth to the
                backbone, or the bandwidth provided by their metropolitan or regional
                network, or through the lack of peering arrangements between the networks
                with sufficient bandwidth. This problem is very widespread in our community,
                with examples stretching from China to South America to the northeast region
                of the U.S., with root causes varying from lack of local infrastructure to
                unfavorable pricing policies.
             o   Removing firewall bottlenecks. Firewall systems are so far behind current
                 needs that they cannot match the data flows of Grid applications: the maximum
                 throughput measured across available products is limited to a few hundred Mbps
                 (see the illustrative calculation after this list). It is urgent to address this issue
                 by designing new architectures that eliminate or alleviate the need for
                 conventional firewalls. For example, point-to-point provisioned high-speed
                 circuits, as proposed by emerging Light Path technologies, could remove the
                 bottleneck. With endpoint authentication, the point-to-point paths are private,
                 intrusion-resistant circuits, so they should be able to bypass site firewalls if the
                 endpoints (sites) trust each other.
            o   Developing and deploying high performance (TCP) toolkits in a form that is
                suitable for widespread use by users. Training the community to use these tools
                well, and wisely.
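The illustrative calculation referred to in the firewall item above: with an assumed 1 TB dataset and assumed effective rates (neither taken from this report), the difference between a path limited by a few-hundred-Mbps firewall and a 10 Gbps light path is roughly a working day versus a coffee break.

    # Rough illustration with assumed numbers: time to move a 1 TB dataset through a
    # ~300 Mbps firewall-limited path versus a 10 Gbps light path.

    def transfer_hours(size_terabytes: float, rate_mbps: float) -> float:
        bits = size_terabytes * 1e12 * 8        # dataset size in bits
        return bits / (rate_mbps * 1e6) / 3600  # seconds -> hours

    size_tb = 1.0  # assumed example dataset size
    for label, rate_mbps in (("firewall-limited, 300 Mbps", 300),
                             ("10 Gbps light path", 10_000)):
        print(f"{label:28s}: {transfer_hours(size_tb, rate_mbps):6.2f} hours")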



4.        Recommendations

          ICFA should work vigorously locally, nationally and internationally, to ensure
           that networks with sufficient raw capacity and end-to-end capability are available
           throughout the world. This is now a vital requirement for the success of our field and
           the health of our global collaborations.
          The SCIC, and where appropriate other members of ICFA, should work in
           concert with other cognizant organizations as well as funding agencies on
           problems of global networking for HENP as well as other fields of research and
           education. The organizations include in particular Internet2, TERENA, AMPATH,
           DataTAG, the Grid projects and the Global Grid Forum.

HENP and its worldwide collaborations could be a model for other scientific disciplines, and for
new modes of information sharing and communication in society at large. The provision of
adequate networks and the success of our Collaborations in the Information Age thus have broader
implications that extend beyond the bounds of scientific research.




The world community will only reap the benefits of global collaborations in research and
education, and of the development of advanced network and Grid systems, if we are able to close
the Digital Divide that separates the economically and technologically most-favored from the
less-favored regions of the world. It is imperative that ICFA members work with and advise the
SCIC on the most effective means to close this Divide, country by country and region by region.

Recommendations concerning approaches to close the Divide, where ICFA and the HENP
Laboratory Directors can help, include:


        Identify and work on specific problems, country by country and region by region, to
         enable groups in all regions to be full partners in the process of search and
         discovery in science.

          As detailed in the Digital Divide Working Group's 2003 Report, networks with adequate
         bandwidth tend to be too costly or otherwise hard to obtain in the economically poorest
         regions. Particular attention to China, Russia, India, Pakistan19, Southeast Asia, Southeast
         Europe, South America and Africa is required.

         Performance on existing national, metropolitan and local network infrastructures also
         may be limited, due to last mile problems, political problems, or a lack of coordination
         (or peering arrangements) among different network organizations.20

        Create and encourage inter-regional programs to solve specific regional problems.
          Leading examples include the Virtual Silk Highway project [21]
         (http://www.nato.int/science/e/silk.htm) led by DESY, the support for links in Asia by
         the KEK High Energy Accelerator Research Organization in Japan (http://www.kek.jp),
          the recently launched GLORIAD project linking Russia, China and the US [22], and the
         support of network connections for research and education in South America by the
         AMPATH “Pathway to the Americas” (http://www.ampath.fiu.edu ) based at Florida
         International University.

        Make direct contacts, and help educate government officials on the needs and
         benefits to society of the development and deployment of advanced infrastructure
         and applications: for research, education, industry, commerce, and society as a whole.

        Use (lightweight; non-disruptive) network monitoring to identify and track
         problems, and keep the research community (and the world community) informed on the
         evolving state of the Digital Divide. One leading example in the HEP community is the
         Internet End-to-end Performance Monitoring (IEPM) initiative (http://www-
         iepm.slac.stanford.edu ) at SLAC.


19
  More details about the Digital Divide in Pakistan are available in Appendix 4 : “Digital Divide and measures taken
by Government of Pakistan”.
20
   These problems tend to be most prevalent in the poorer regions, but examples of poor performance on
existing network infrastructures due to lack of coordination and policy may be found in all regions.
21
   Members of SCIC noted that the performance of satellite links is no longer competitive with terrestrial
links based on optical fibers in terms of their achievable bandwidth or round trip time. But such links offer
the only practical solution for remote regions that lack an optical fiber infrastructure.
22
   See http://www.gloriad.org


    It is vital that support for the IEPM activity in particular, which covers 100 countries with
    78% of the world population (and 99% of the population connected to the Internet) be
    continued and strengthened, so that we can monitor and track progress in network
    performance in more countries and more sites within countries, around the globe. This is
    as important for the general mission of the SCIC in our community as it is for our work
    on the Digital Divide.

   Share and systematize information on the Digital Divide. The SCIC is gathering
    information on these problems and intends to develop a Web site on the Digital Divide
    problems of research groups, universities and laboratories throughout its worldwide
    community. This will be coupled to general information on link bandwidths, quality,
    utilization and pricing. Monitoring results from the IEPM will be used to track and
    highlight ongoing and emerging problems.


     This Web site will promote our community's awareness and understanding of the nature
    of the problems: from lack of backbone bandwidth, to last mile connectivity problems, to
    policy and pricing issues.

    Specific aspects of information sharing that will help develop a general approach to
    solving the problem globally include:

        o   Sharing examples of how the Divide can be bridged, or has been bridged
            successfully in a city, country or region. One class of solutions is the
            installation of short-distance optical fibers leased or owned by a university or
            laboratory, to reach the “point of presence” of a network provider. Another is the
            activation of existing national or metropolitan fiber-optic infrastructures
            (typically owned by electric or gas utilities, or railroads) that have remained
            unused. A third class is the resolution of technical problems involving antiquated
            network equipment, or equipment-configuration, or network software settings,
            etc.

        o   Making comparative pricing information available. Since international
            network prices are falling rapidly along the major Transatlantic and Transpacific
            routes, sharing this information should help us set lower pricing targets in the
            economically poorer regions, by pressuring multinational network vendors to
            lower their prices in the region, to bring them in line with their prices in larger
            markets.

        o   Identifying common themes in the nature of the problem, whether technical,
             political or financial, and the corresponding methods of solution.

    NOTE: Progress on construction and maintenance of this Web site and database was
    hampered in 2003 by lack of funding and manpower. The SCIC is continuing to seek
    funding, or manpower support from the HENP Labs, to achieve these important goals.

   Create a “new culture of collaboration” in the major experiments and at the HENP
    laboratories, as described in the following section of this report.

   Work with the Internet Educational Equal Access Foundation (IEEAF)
    (http://www.ieeaf.org), and other organizations that aim to arrange for favorable


         network prices or outright bandwidth donations23, where possible.

        Prepare for and take part in the World Summit on the Information Society (WSIS;
         http://www.itu.int/wsis/). The WSIS is being held in two phases. The first phase of the
         WSIS took place in Geneva in December 2003. The SCIC was active in this major event,
         and remains active in the WSIS process. The meeting in Geneva addressed the broad
         range of themes concerning the Information Society and adopted a Declaration of
         Principles and Plan of Action24. The second phase will take place in Tunis, in November
         2005. The WSIS process aims to develop a society where

          “highly-developed… networks, equitable and ubiquitous access to information,
         appropriate content in accessible formats and effective communication can help people
         achieve their potential…”.

         These aims are clearly synergistic with the aims of our field, and its need to provide
         worldwide access to information and effective communications in particular.

         HENP has been recognized as having relevant experience in effective methods of
         initiating and promoting international collaboration, and harnessing or developing new
         technologies and applications to achieve these aims. HENP has been involved in WSIS
         preparatory and regional meetings in Bucharest in November 2002 and in Tokyo in
         January 2003. It has been invited to run a session on The Role of New Technologies in
         the Development of an Information Society25, and was invited26 to take part in the
         planning process for the Summit itself.

          On behalf of the world's scientific community, in December 2003 CERN organized the Role
         of Science in the Information Society27 (RSIS) conference, a Summit event of WSIS.
         RSIS reviewed the prospects that present developments in science and technology offer
         for the future of the Information Society, especially in education, environment, health,
         and economic development. More details about the participation of the HENP community
         in the WSIS and RSIS are given in Section 15 and Appendix 26.

         Formulate or encourage bilateral proposals [28], through appropriate funding agency
          programs. Examples of programs are the US National Science Foundation's ITR, SCI
          and International programs, the European Union's Sixth Framework and @LIS programs,
          and NATO's Science for Peace program.

        Help start and support workshops on networks, Grids, and the associated advanced
         applications. These workshops could be associated with helping to solve Digital Divide

23
   The IEEAF successfully arranged a bandwidth donation of a 10 Gbps research link and a 622 Mbps
production service in September 2002. It is expected to announce a donation between California and the
Asia Pacific region early in 2003.
24
   http://www.itu.int/wsis/documents/doc_multi.asp?lang=en&id=1161|1160
25
   At the WSIS Pan-European Ministerial meeting in Bucharest in November 2002.
See http://cil.cern.ch:8080/WSIS and the US State Department site http://www.state.gov/e/eb/cip/wsis/
26
   By the WSIS Preparatory Committee and the US State Department.
27
   http://rsis.web.cern.ch/rsis/01About/AboutRSIS.html
28
  A recent example is the CLARA project to link Argentina, Brazil, Chile and Mexico to Europe. Another is the
CHEPREO project of Florida International University, AMPATH, other universities in Florida, Caltech and UERJ in
Brazil, funded by the US NSF, for a center for HEP Research, Education and Outreach, which includes partial
funding for a network link between North and South America for HENP.


          problems in a particular country or region, where the workshop will be hosted. One
          outcome of such a workshop is to leave behind a better network, and/or better conditions
          for the acquisition, development and deployment of networks.

           The SCIC is planning the first such workshop in Rio de Janeiro in February 2004 [29],
          approved by ICFA in 2003. ICFA members are requested to participate in these meetings
          and in this process.

         Help form regional support and training groups for network and Grid system
          development, operations, monitoring and troubleshooting.30

5.        The Digital Divide and ICFA SCIC

The success of our major scientific programs, and the health of our global collaborations, depend
on physicists from all world regions being full partners in the scientific enterprise. This means
that they must have access to affordable networks of sufficient bandwidth, with an overall scale
of performance that advances rapidly over time to meet the growing needs.
While the performance of networks has advanced substantially in most or all world regions, by a
factor of 10 roughly every 4 to 5 years during each of the last two decades, the gulf that separates
the best- and least-well provisioned regions has remained remarkably constant. This separation
can be expressed in terms of the achievable throughput at a given time, or the time-difference
between the moment when a certain performance level is first reached in one region, and when
the same performance is reached in another region. This is illustrated in Figure 1 below31, where
we see a log plot of the maximum throughput achievable in each region or across major networks
(e.g. ESnet) versus time. The figure shows explicitly that China, Russia, India, the Middle East,
Latin America, Central Asia, Southeast Europe and Africa are several years (from a few to 10
years) behind North America, Canada and (Western) Europe. While network performance in each
region is improving, the fact that many of the lines in the plot for the various regions are nearly
parallel means that the time-lag (and hence the “Digital Divide”) has been maintained for the last
few years, and there is no indication that it will be closed unless ICFA, the SCIC and the HENP
community take action.
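Under the growth rate quoted above (roughly a factor of 10 every 4 to 5 years), the time lag between two regions can be read off directly from the ratio of their achievable throughputs. A minimal sketch of this conversion (the ratios used below are illustrative, not values read from Figure 1):

    import math

    # If throughput grows by 10x every `years_per_10x` years, a region whose achievable
    # throughput is `ratio` times lower lags by years_per_10x * log10(ratio) years.
    # The ratios below are illustrative only, not values taken from Figure 1.

    def years_behind(throughput_ratio: float, years_per_10x: float = 4.5) -> float:
        return years_per_10x * math.log10(throughput_ratio)

    for ratio in (5, 30, 100):
        print(f"throughput ratio {ratio:>4}x -> ~{years_behind(ratio):.1f} years behind")

A throughput ratio of 100 thus corresponds to roughly a decade of lag, consistent with the few-to-ten-year range cited above.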
Rapid advances in network technologies and applications are underway, and further advances and
possibly breakthroughs are expected in the near future. While these developments will have
important beneficial effects on our field, the initial benefits tend to be confined, for the most part,
to the most economically and technologically advanced regions of the world (North America,
Japan, and parts of western Europe). As each new generation of technology is deployed, it
therefore brings with it the threat of further opening the Digital Divide that separates the
economically most-favored regions from the rest of the world.
Closing this Divide, in an era of global collaborations, is of the highest importance for the
present and future health of our field.

29
   The first Digital Divide and HEPGrid Workshop has attracted more than 130 scientists, computer scientists,
technologists and government officials. See http://www.uerj.br/lishep2004
30
   One example is the Internet2 HENP Working Group in the US. See http://henp.internet2.edu/henp
31
   This figure is taken from the SLAC Internet End-to-end Performance Monitoring Project (IEPM); see
   http://www-iepm.slac.stanford.edu/ and the 2004 SCIC Monitoring Working Group Report at
http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan04/. The coverage of the data taken recently is substantially
improved compared to 2002. Note that the maximum throughput, based on the rate of packet loss and round trip time,
corresponds to the standard TCP stack. New or appropriately tuned TCP stacks can achieve much higher throughput
over high quality links.


Figure 1 Maximum throughput for TCP streams versus time, in several regions of the world, seen
                                      from SLAC




5.1.      The Digital Divide illustrated by network infrastructures

Another stark illustration of the Digital Divide is shown in Figure 2, taken from the
TERENA [32] 2003 Network Compendium. It shows the core network size of NRENs in Europe, on
a logarithmic scale. The figure is an estimator of the total size of each network, obtained by
multiplying the length of the various links in the backbone by the capacity of those links in
Mbits/s; the resulting unit of network size is Mbits/s * km. It shows that a number of countries
have made impressive advances in their national networks over the last 2 to 3 years. However,
except for Poland, the Czech Republic, Slovakia and Hungary, eastern European countries are
still far behind western European countries, especially if we divide the core network size by the
population of the country.
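A minimal sketch of the core-network-size estimator described above, summing link length times link capacity over the backbone (the link list is invented for illustration and does not correspond to any NREN in the Compendium):

    # Sketch of the TERENA core-network-size estimator: sum over backbone links of
    # link length (km) times link capacity (Mbit/s). The links are invented examples.

    example_backbone = [
        # (length_km, capacity_mbps)
        (300, 10_000),
        (450, 2_500),
        (120, 622),
    ]

    core_size = sum(length_km * capacity_mbps
                    for length_km, capacity_mbps in example_backbone)
    print(f"core network size ~ {core_size:,.0f} Mbit/s * km")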




Figure 2 Core network size of European NRENs (in Mb/s*km). Note that the three graphs above are
 on very different scales, and the smallest networks are a factor of 10^5 to 10^6 times smaller than the
                                 largest (SURFnet in the Netherlands) [33].


32
   The TransEuropean Research and Education Network Association (http://www.terena.nl) . The full 2003
compendium is available as a series of .pdf files at http://www.terena.nl/compendium/2003/ToC.html
33
   Please note that SURFnet entry includes some research links.


The disparity evident in Figure 2 is confirmed in Figure 3, which gives the total bandwidth of each
nation's external links, in 2002 and 2003. Note that the scale is logarithmic. Many countries
upgraded their links in 2002 and did not upgrade them again in 2003. The increases tend to go in
leaps and bounds; few networks are growing gradually, because an upgrade typically means
advancing to the next technology generation.




                    Figure 3 External bandwidth of European NRENs (in Mbits/s)


5.2.    Digital Divide illustrated by network performance

As discussed in the SCIC Monitoring Working Group Report
(http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan04/), packet loss and Round Trip
Time (RTT) are two very relevant metrics in the evaluation of the digital divide. Since it began in
1995, the IEPM working group at SLAC has expanded its coverage to monitor over 850 remote
hosts at 560 sites in over 100 countries, thus covering networks used by over 78% of the world's
population and over 99% of the online users of the Internet.




Figure 4 shows the fractions of the world's population that experience various levels of “loss
performance”, corresponding to different ranges in the measured rate of packet loss, as seen from
the US. It can be seen that in 2001, less than 20% of the population lived in countries with good
to acceptable performance (< 2.5% packet loss). But the rate of packet loss has been decreasing
by 40-50% per year, and in some regions such as S.E. Europe, even more. By the end of 2003 the
fraction of the world experiencing good to acceptable performance had increased markedly, to
77%.
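As a rough check on what the quoted 40-50% annual decrease in packet loss implies, the sketch below computes how long a country starting at an assumed 10% loss would take to reach the 2.5% "good to acceptable" threshold used above (the starting value is an assumption for illustration):

    import math

    # Rough illustration: years for packet loss to fall from an assumed 10% to the
    # 2.5% "good to acceptable" threshold, if loss decreases by 40-50% per year.

    start_loss, target_loss = 0.10, 0.025  # starting value assumed; threshold from the text
    for annual_decrease in (0.40, 0.50):
        years = math.log(target_loss / start_loss) / math.log(1.0 - annual_decrease)
        print(f"{annual_decrease:.0%} decrease per year -> ~{years:.1f} years")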








Figure 4 Fraction of the world's population in countries with various levels of measured loss
performance, seen from the US in 2001 (Top graph) and in December 2003 (Bottom graph).
“Poor”, “Very Poor” and “Bad” mean that effective collaboration over networks is virtually
                                        impossible.




 Figure 5 Monthly average Round Trip Time (RTT) measured from U.S to various countries of the
world for January 2000 (above) and December 2003 (below). In the Jan. 2000 map countries shaded
                                in light green are not measured.




Figure 6 shows the throughput seen between monitoring and monitored hosts in the major regions
of the world. Each column is for monitoring hosts in a given region, each row is for monitored
hosts in a given region. The cells are color-coded as follows:
          White: Good                 > 1000 kbps throughput achievable
          Green: Acceptable           500 kbps to 1000 kbps
          Yellow: Poor                200 kbps to 500 kbps
          Pink: Very Poor              < 200 kbps




     Figure 6 Derived throughputs in kbits/s from monitoring hosts to monitored hosts by region of the
                                          world, in August 2003
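The "derived throughputs" of Figure 6, like the throughput curves of Figure 1, are computed from the measured packet loss and RTT assuming a standard TCP stack (see footnote 31). A widely used approximation for this kind of derivation is the Mathis et al. formula, throughput ≈ (MSS/RTT) · C/sqrt(loss) with C about 1.22; whether the IEPM/PingER analysis uses exactly these constants is an assumption here, so the sketch below is only illustrative:

    import math

    # Sketch of the Mathis et al. approximation for steady-state TCP throughput:
    #   throughput ~ (MSS / RTT) * C / sqrt(loss),  with C ~ 1.22.
    # Whether the IEPM/PingER derivation uses exactly these constants is an assumption.

    def tcp_throughput_kbps(loss_rate: float, rtt_ms: float,
                            mss_bytes: int = 1460, c: float = 1.22) -> float:
        """Estimated standard-TCP throughput in kbit/s."""
        bytes_per_second = (mss_bytes / (rtt_ms / 1000.0)) * c / math.sqrt(loss_rate)
        return bytes_per_second * 8 / 1000.0

    # Example with illustrative values: 0.1% loss at 200 ms RTT
    print(f"~{tcp_throughput_kbps(0.001, 200):.0f} kbit/s")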


5.3.       How GÉANT closes the Digital Divide in Europe

Another way to measure the digital divide is the cost of connectivity. Figure 7 [34] shows the
relative cost of international connectivity to countries in the GÉANT network, plotted against the
number of suppliers offering connectivity to that country in 2004. (The relative cost is the cost
divided by the lowest possible cost in GÉANT). The figure [35] shows that the "Digital Divide",
measured as the ratio between the most expensive and least expensive connectivity in Europe, is
114. Excluding Malta and Turkey, this number is 39.4. In spite of this factor of 114 between the
most and the least expensive connectivity, the GÉANT charges are uniform throughout the GÉANT
community: Poland, for example, contributes the same amount to DANTE as Switzerland for
connecting at the same speed; only the access charges (i.e. the local loop) differ.


34
   Reference: Dai Davies (GÉANT), January 2004.
35
   Explaining the figure, Davies states: Obviously, there are different speeds of connectivity in the network. The general
economies of scale in telecommunications mean that faster circuits represent relatively better value for money than
slower circuits. Adjustments have been made to the international connectivity numbers so that we are comparing prices,
having adjusted them for differences in capacity. The basis on which this has been done is a good knowledge of the
relative cost of different speeds of connectivity across Europe. Thus, typically, 622 Mbps is roughly half the cost of 155
Mbps, etc.

[Chart: "Number of Suppliers versus Cost of Connectivity", GÉANT 2004 data including Turkey and Malta; y-axis: Relative Cost of Connectivity (0-120); x-axis: Number of Suppliers (0-14)]



 Figure 7 The relative cost of international connectivity to countries in the GÉANT network, plotted
               against the number of suppliers offering connectivity to that country.


5.4.                                    A new “culture of worldwide collaboration”

It is also important to note that once networks of sufficient performance and reliability, and tools
for remote collaboration are provided, our community will have to strive to change its culture, so
that physicists remote from the experiment, especially younger ones, and students who cannot
travel often (or ever) to the laboratory site of the experiment, are able to participate fully in the
analysis, and the physics.
A new “culture of worldwide collaboration” would need to be propagated throughout our field
if this is to succeed. The Collaborations would have to adopt a new mode of operation, where
care is taken to share the most interesting and current developments in the analysis, and the
discussions of the latest and most important issues in analysis and physics, with groups spread
around the world on a daily basis. The major HENP laboratories would also need to create
rooms, and new “collaborative working environments” able to support this kind of sharing, with
the backing of laboratory management. The management of the experiments would need to
strongly support, if not require, the group leaders and other physicists at the laboratory to
participate in, if not lead, the collaborative activity on an ongoing day-to-day basis.

While these may appear to be lofty goals, the network and Grid computing infrastructure, and




  cost-effective collaborative tools36 are becoming available to support this activity. Physicists are
  turning to the development and use of “Grid-enabled Analysis Environments” and “Grid-enabled
  Collaboratories”, which aim to make daily collaborative sharing of data and work on analysis,
  supported by distributed computing resources and Grid software, the norm. This will strengthen
  our field by integrating young university-based students in the process of search and discovery.

  It is noteworthy that these goals are entirely consistent with, if not encompassed by the visionary
  ICFA Statement37 on Communications in International HEP Collaborations of October 17, 1996:

   “ICFA urges that all countries and institutions wishing to participate even more effectively and
  fully in international HEP Collaborations should:
           Review their operating methods to ensure they are fully adapted to remote
            participation
           Strive to provide the necessary communications facilities and adequate international
            bandwidth”

  We therefore call upon the management of the HENP laboratories, and the members of ICFA,
  to assume a leadership role and help create the conditions at the laboratories and some of the
  major universities and research centers, to fulfill ICFA’s visionary statement.

  The SCIC is ready to assist in this work.


6. HENP Network Status: Major Backbones and International Links

  This section reviews some of the major network backbones and international links used by
  HENP. The rapid evolution of these backbones and the major links connecting the HENP
  laboratories (related to the increasing affordability of bandwidth), is an important factor in our
  field's ability to keep up with its expanding network needs.
  Since the requirements report by the ICFA Network Task Force (ICFA-NTF) in 199838, a
  Transatlantic Network Working Group in the US in 2001 studied the network requirements of
  several of the major HEP experimental programs in which the US is involved. The results of this
  study39 generally confirmed the estimates of the ICFA-NTF reports, but found the requirements
  for several major HEP links to be somewhat larger. This report showed that the major links used
  by HENP would need to reach the Gbps range to the US HEP and CERN laboratories by 2002-3,
  and the 10 Gbps range by roughly 2004-7 (depending on the laboratory). Transatlantic bandwidth
  requirements were foreseen to rise from 3 Gbps in 2002 to more than 20 Gbps by 2006. As
  discussed later in this report, however, the requirements estimates are tending to increase as
  bandwidth in the “leading” regions becomes more affordable, as new more cost effective network
  technologies are deployed, and as the potential and requirements for a new generation of Grid
  systems become clearer. The picture of requirements and the state of the major networks are thus
  evolving hand in hand.
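As a back-of-the-envelope check of what such growth implies, the 3 Gbps (2002) and 20 Gbps (2006) figures quoted above correspond to roughly a 60% increase per year; the arithmetic, using only those two endpoints, is shown below.

    # Implied compound annual growth of transatlantic bandwidth requirements.
    bw_2002, bw_2006 = 3.0, 20.0          # Gbps, as quoted above
    years = 2006 - 2002
    annual_factor = (bw_2006 / bw_2002) ** (1 / years)
    print(f"growth factor per year: {annual_factor:.2f}")   # ~1.61, i.e. roughly 60% per year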
  36
     See for example www.vrvs.org
  37
     See http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
  38
     See http://davidw.home.cern.ch/davidw/icfa/icfa-ntf.htm and the Requirements Report at
  http://l3www.cern.ch/~newman/icfareq98.html .
  39
     See http://gate.hep.anl.gov/lprice/TAN .


As discussed further in the 2003 report of the SCIC Advanced Technologies working group, the
prices per unit bandwidth have continued to fall dramatically, allowing the speed of the principal
wide area network backbones and transoceanic links used by our field to increase rapidly in
Europe, North America and Japan. Speeds in these regions rose from the 1.5 to 45 Megabits/sec
(Mbps) range in 1996-1997, to the 2.5 to 10 Gbps range today. The outlook is for the continued
evolution of these links to meet the needs of HENP's major programs now underway and in
preparation at BNL, CERN, DESY, FNAL, JLAB, KEK, SLAC and other laboratories in
a cost effective way. This will require substantial ongoing investment.
The affordability of these links is driven, in part, by the explosion in the data transmission
capacity of a single optical fiber, currently reaching more than 1 Terabit/sec. This is achieved by
using dense wavelength division multiplexing (DWDM), where many wavelengths of light each
modulated to carry 10 Gbps are carried on one fiber. The affordable end-to-end capacity in
practice is however much more modest, and is limited by the cost of the fiber installation and the
equipment for transmitting/receiving, routing and switching the data in the network, as well as the
relatively limited speed and capacity of computers to send, receive, process and store the data.
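The arithmetic behind the Terabit/sec figure, and what it would mean for moving physics-scale datasets, is sketched below. The channel count of 128 wavelengths is a representative assumption rather than a quoted specification, and protocol overheads are ignored.

    # Aggregate capacity of one DWDM fiber: many wavelengths, each carrying 10 Gbps.
    lambdas_per_fiber = 128            # assumed representative dense-WDM channel count
    gbps_per_lambda = 10
    fiber_capacity_tbps = lambdas_per_fiber * gbps_per_lambda / 1000
    print(f"aggregate capacity: {fiber_capacity_tbps:.2f} Tbps per fiber")      # 1.28 Tbps

    # For scale: time to move 1 Petabyte over a single fully lit fiber.
    seconds = (1e15 * 8) / (fiber_capacity_tbps * 1e12)
    print(f"1 PB transfer time: {seconds / 3600:.1f} hours")                    # ~1.7 hours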
Another limitation is the market price for relatively short-distance connections. This most
severely affects universities and laboratories in third world countries due to the relatively scarce
supply of bandwidth and/or the lack of competition, but it also affects HEP groups in all regions
of the world where connections to national or regional academic and research networks are not
available at low cost. There is also the “last mile” problem that persists in North America and
many European countries, where prices for relatively short connections at 1.5 Mbps – 1 Gbps
speeds often remain high, as a result of heavy demand versus limited supply by (very) few
vendors. In addition, vendors are often reluctant to deploy services based on new technologies
(such as Gigabit Ethernet). This is because deployment of the new services and underlying
technologies requires significant investment by the vendor, while at the same time reducing
the revenue stream compared to the older products.40
Examples of rapid progress in the capacity of the network backbones and the main links used by
HENP are given below. In many cases the bandwidth actually available in practice for HEP, on
shared academic and research networks serving a whole country, is much lower.41 There is
therefore an important continued role for links dedicated to, or largely available to, HEP,
especially on the most heavily used routes.
While these capacity increases on major links during the past year have led to generally improved
network performance, in the countries mentioned and between them across the Atlantic and
Pacific, meeting the HEP-specific needs (mentioned above) is going to require continued
concerted effort. This includes sharing network monitoring results, developing and promulgating
guidelines for best practices, tracking technology developments and costs, and dealing with end-
to-end performance problems as they arise.




40
    These factors are the root cause of the fact that most of the world, including the most technologically advanced
regions, still uses modems at typical speeds of 40 kbps. The transition to “broadband” services such as DSL or cable
modems is well underway in the wealthier regions of the world, but the transition is proceeding at a rate that will take
several years to complete.
41
   For example, the typical reserved bandwidth to the national labs in France was often in the 2-34 Mbps range, until
the advent of RENATER3 in the latter half of 2002.


6.1.        Europe
           The GEANT pan-European backbone42 now interconnects 32 countries, and its core
            network includes many links at 10 Gbps (See Figure 8). Individual countries are
            connected at speeds in the range of 155 Mbps to 10 Gbps. The next generation pan-
            European backbone, GEANT2 (also known as GN2), will start to be deployed in 2005. In
            addition to maintaining and upgrading the services and functionality of GEANT, GEANT2
            will support and integrate research projects, establishing advanced testbeds to experiment
            with, integrate, validate and demonstrate new technologies and services.




             Figure 8 The GEANT Pan-European Backbone Network, showing the major links


42
     Also see http://www.dante.net/server/show/nav.007


Figure 9 gives an idea of the evolution of the national research and education networks'
(NRENs') backbone capacity in western Europe from 2001 to 2003. In 2001, the highest
capacity was 2.5 Gbps; in 2003 the highest is 10 Gbps. Typically, the core capacity goes up
in leaps, involving the change from one type of technology to another. Except for Greece and
Ireland, all backbone capacities are larger than 1 Gbps.




                Figure 9 Core capacity on western European NRENs
   NORDUnet (Figure 10) is the Nordic Internet highway to research and education
    networks in Denmark, Finland, Iceland, Norway and Sweden, and provides the Nordic
    backbone to the Global Information Society. As shown in yellow, a part of the backbone
    has already been upgraded to 10 Gbps, and other links are being upgraded now.




                           Figure 10 The NORDUnet network


    SURFnet543 is the Dutch Research and Education Network. It is a fully optical 10 Gbps dual
     stack IP network. Today 65% of the SURFnet customer base is connected to SURFnet5 via
      Gigabit Ethernet. The current topology of SURFnet5 is shown in Figure 11. Early in 2003
     SURFnet presented its plans for SURFnet6. As part of the GigaPort Next Generation
     Network project (www.gigaport.nl), SURFnet6 is designed as a hybrid optical network. It
     will be based on dark fiber and aims to provide the SURFnet customers with seamless
     Lambda, Ethernet and IP network connectivity. In addition, SURFnet pioneered Lambda
     networking44 and developed NetherLight45, which has become a major hub in GLIF46, the
     Global Lambda Integrated Facility for Research and Education.




                                  Figure 11 SURFnet 10 Gbps backbone




43
   See also Appendix 9
44
   “Lambda networking” is about using different “colors” or wavelengths of (laser) light in fibers for separate
connections. Each wavelength is called a “Lambda”. Current coding schemes allow for typically 10 Gbit/s to be
encoded by a laser on a high-speed network interface. “Lambda Grids” are now being developed where individual
optical links each carrying a Lambda are interconnected dynamically, to form an end-to-end LightPath on demand, in
order to meet the needs of very demanding Grid applications. See Appendix 8
45
   See http://www.surfnet.nl/innovatie/netherlight/
46
   See for example http://international.internet2.edu/resources/events/2003/Fall03ITF2-GLIF.ppt


      The current infrastructure of RENATER347 (Figure 12) has been in place since Fall 2002 and
       will be in operation until mid 2005. Its main components are WDM loops interconnecting all
       RENATER points of presence in France. The standard capacity is 2.5 Gb/s for all WDM
       segments. These loops are reconfigurable in a quasi-automated mode, in case of the failure of
       any segment. Furthermore, the IP routing tables can be reconfigured in order to redirect
       traffic, if needed. These two features make it a highly resilient network, in which any
       maintenance or incident is handled without any impact on users. Most user sites are still not
       directly connected to the RENATER backbone, but can reach it through a regional or a
       metropolitan infrastructure. The overall user base of RENATER corresponds to about 650
        sites, of which fewer than 50 are directly connected. Many of these access networks, especially
        the regional ones, are not compatible with the backbone technology; they are based, for example,
        on IP/ATM as proposed by RENATER from 1999 to 2002. There is a persistent lag of around
        three years in technology and performance between the regional and the national infrastructure.
        The situation is better for the metropolitan backbones, since the hardware is more affordable
        and the design, deployment and operation of these networks is generally done by the users
        themselves. It is much easier to deploy end-to-end services through a metropolitan backbone
        when it is directly connected to a RENATER PoP (with Gigabit Ethernet interfaces, for
        instance).
       The primary international connectivity is to GEANT, for which the access capacity will be
       upgraded from the current 2.5 Gb/s to 10 Gb/s by March 2004. In parallel, the IN2P348
       computer center in Lyon is investigating provisioning a dark fiber connection to CERN.




                                   Figure 12 The Renater3 network in France



47
     See http://www.renater.fr/Reseau/index.htm and Appendix 25
48
     See http://www.in2p3.fr/


    The G-WIN German academic and research network49 is the core of the “Intranet for the
     science community” in Germany. It is configured around 27 core nodes, primarily located at
      scientific institutions. The DFN-Verein is responsible for the operation of G-WIN. There are 55
     links interconnecting the core nodes. As shown in Figure 13, they are operated at rates from
     2.5 Gbps up to 10 Gbps with some of them using transparent wavelength technology. As
     discussed in Appendix 2, several of the core links will be upgraded to 10 Gbps during 2004.

       [Diagram: G-WiN topology, DFN Ausbaustufe 3, showing the core nodes (Kernnetzknoten) from
       Rostock, Kiel and Hamburg in the north to Stuttgart, Garching and Augsburg in the south, the
       Global Upstream connection, and links at 10 Gbit/s, 2.4 Gbit/s and 622 Mbit/s]




                                              Figure 13 Current G-WIN topology

    The SuperJANET4 network (Figure 14) in the UK is composed of a 10 Gbps core and many
     2.5 Gbps links from each of the academic metropolitan area networks (MANs).50 The core
     upgrade from 2.5 Gbps to 10 Gbps was completed in July 2002. The UK academic and
     research community also is deploying a next generation optical research network called
     “UKLight”51. The UKLight project will provide links of 10 Gbps to Amsterdam and to
     StarLight in Chicago.

            [Diagram: SuperJanet4, July 2002, showing the WorldCom core nodes at Glasgow, Edinburgh,
            Manchester, Leeds, Reading, London, Bristol and Portsmouth, the attached regional MANs and
            external links, and link speeds of 20 Gbps, 10 Gbps, 2.5 Gbps, 622 Mbps and 155 Mbps]



            Figure 14 Schematic view of the SuperJanet4 network in the United Kingdom.


49
   See http://www.dfn.de/win/ and http://www.noc.dfn.de/
50
   See http://www.superjanet4.net and http://www.ja.net .
51
   See http://www.ja.net/development/UKLight/UKLightindex.html and Appendix 10


           The Garr-B network52 (Figure 15) in operation in Italy since late 1999, is based on a
            backbone with links in the range of 155 Mbps to 2.5 Gbps. International connections
             include a 2.5 Gbps link from the backbone to GEANT, and a 2.5 Gbps link to the commercial
            Internet provided by Global Crossing. Links from the backbone to other major cities and
            institutes are typically in the range of 34 to 155 Mbps. The next generation GARR-G53
            network is based on point-to-point “lambdas” (wavelengths) with link speeds of at least
            2.5 Gbps, and advanced services such as IPv6 and QoS. Metropolitan area networks
            (MANs) will be connected to the backbone, allowing more widespread high speed access,
            including secondary and then primary schools. A pilot network “GARR-G Pilot”
            (http://pilota.garr.it/ ) based on 2.5 Gbps wavelengths has been in operation since early
            2001.




                                      Figure 15 The GARR-B network in Italy




52
     See http://www.garr.it/garr-b-home-hgarrb-engl.shtml
53
     See http://www.garr.it/garr-gp/garr-gp-ilprogetto-engl.shtml


           Over the last two years CESnet54, the Czech NREN, has designed and implemented a
            new network topology based on two essential requirements: redundant connections and a
             low number of hops for all major Points of Presence (PoPs). As shown in Figure 16, the
            network core is based on Packet Over SONET (POS) technology with all core lines
            operating at 2.5 Gbps. The network has a 1.2 Gbps line to GÉANT used for academic
            traffic, a 622 Mbps line to Telia used for commodity traffic, and a 2.5 Gbps line to
            NetherLight for experimental traffic.




                           Figure 16 The CESnet Network in the Czech Republic




54
     See http://www.ces.net/


           The SANET55 network infrastructure in Slovakia (Figure 17) is based on leased dark
            fibers. The Network is configured as two rings providing full redundancy with a
            maximum delay of 5ms. In the near future SANET is planning to connect other Slovak
            towns to the optical infrastructure and upgrade the backbone speed to 10Gbps. Currently
            SANET is in the process of establishing a direct optical connection from Bratislava to
            Brno in the Czech Republic through a leased dark fiber. SANET’s progress has been
            very rapid: in January 2002, the highest speed link was only 4 Mbps.
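The quoted 5 ms maximum delay can be checked against simple light-propagation arithmetic, as sketched below. Whether the figure refers to one-way or round-trip delay is not stated, so the estimate is indicative only.

    # Light travels roughly 200 km per millisecond in optical fiber.
    KM_PER_MS_IN_FIBER = 200.0
    max_delay_ms = 5.0
    print(f"5 ms corresponds to roughly {max_delay_ms * KM_PER_MS_IN_FIBER:.0f} km of fiber path")
    # ~1000 km if taken as one-way propagation delay; for comparison, the total SANET
    # dark-fibre footprint quoted later in this report is 1660 km.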




                                 Figure 17 The SANET backbone in Slovakia




55
     See http://www.sanet.sk/en/siet_topologia.shtm


           The early availability of fibers allowed Poland to dramatically improve the backbone
            transmission speed of its NREN Pionier56. From June 2003, 16 MANs situated along the
            installed fiber routes have been connected to form an advanced 10 Gigabit Ethernet
             (10GE) network, as shown in Figure 18. This network was built using native 10GbE
            transport (the 10GE Local Area Network standard) over the DWDM equipment installed
            on the pair of PIONIER fibers. Using DWDM on the fibers allows for future, cost
            effective network expansion, and allows one to build testbeds for the next generation
            networks supporting advanced network services. The current 10 GE network is thought to
            be an intermediate solution. A true multi-lambda optical network is planned to be
             implemented and made available to the Polish academic and research community.

                                 [Map: the PIONIER 10 Gigabit Ethernet network, showing the 10GE links and
                                 nodes from Gdańsk, Koszalin and Szczecin in the north to Kraków, Rzeszów
                                 and Bielsko-Biała in the south]

                                 Figure 18 The PIONIER network in Poland




56
     See http://www.pionier.gov.pl/str_glowna.html and Appendix 13


6.2.   North America

      The “Abilene” Network of Internet2 in the US was designed and deployed during 1998 as
       a high-performance IP backbone to meet the needs of the Internet2 community. The
       initial OC-48 (2.5 Gbps) implementation, based on Cisco routers and Qwest SONET-
       based circuits, became operational in January 1999. The upgrade to the current OC-192
       (10 Gbps) network, based on Juniper routers and adding Qwest DWDM-based circuits,
       was completed in December 2003. The current topology is shown in Figure 19.




                            Figure 19 The Abilene 10 Gbps backbone


       Connecting to Abilene are 48 direct connectors that, in turn, provide connectivity to more
       than 220 participants, primarily research universities and several research laboratories.
       The speeds of the connections range from a diminishing number of OC-3 (155 Mbps)
       circuits to an increasing number of OC-48 (currently six) and 10 Gigabit Ethernet (now
       two) circuits. Abilene connectors are usually “gigaPoPs”, consortia of Internet2
       members that cover a geographically compact area of the country and connect the
       research universities and their affiliated laboratories in that area to the Abilene backbone.
       The three-level infrastructure of backbone, gigaPoP, and campus network is capable of
       providing scalable, sustainable, high-speed networking to the faculty, staff, and students
       on more than 200 U.S. campuses. The actual performance achieved depends on the
       capacity and quality of the connectivity from departmental LAN to campus LAN to
       gigaPoP to Abilene.



In addition, Abilene places high priority on connectivity to international and federal peer
research networks, including ESnet, CA*net (Canada), GEANT, and APAN (Asia
Pacific). Currently, Abilene-ESnet peering includes two OC-48 SONET connections and
will soon grow to three OC-48c SONET connections and one 10 Gigabit Ethernet connection.
Similar multi-OC48 and above peerings are in place with CA*net and GEANT. To make
these peerings scalable, we emphasize the use of 10 Gigabit Ethernet switch-based
exchange points; thus, Abilene has two 10 Gbps connections to Star Light (Chicago), one
to Pacific Wave (Seattle), one to MAN LAN (New York), and similar planned upgrades
to the NGIX-East (near Washington DC) and the planned Pacific Wave presence in Los
Angeles. The recent demonstration by the DataTAG collaboration of a single TCP flow
of more than 5.6 Gbps between Geneva and Los Angeles was conducted in part over the
Abilene Network (between Chicago and Los Angeles). In cases where the end-to-end
connection from the hosts on campuses to Abilene are provisioned at or above 1 Gb/s, we
are seeing increasing evidence of a networking environment where single TCP flows of
more than 500-900 Mbps can be routinely supported. An increasingly capable
performance measurement infrastructure permits the performance of flows within
Abilene and from Abilene to key edge sites to be instrumented. This instrumentation is
one component of the Abilene Observatory, a general facility for making Abilene
measurements available to the network research and advanced engineering community.
In addition to supporting high-performance IPv4, Abilene also provides native IPv6
connectivity to its members with performance identical to that provided for IPv4. A key
strength of IPv6 is its support for global addressability for very large numbers of nodes,
such as may be needed for large arrays of detectors or other distributed sensors.
In sum, this Abilene-based shared IP network provides excellent performance in the
current environment dominated by Gigabit Ethernet LANs and host interfaces. As we
face the future, however, we need to address the October 2006 end of the current Abilene
transport arrangement as well as the beginning, during 2007, of LHC operations. Both
will call for new forms of network infrastructure to support the advanced research needs
of our members. More details are available in Appendix 23.
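The single-flow rates quoted above are governed by the TCP bandwidth-delay product: the sender must keep roughly rate times round-trip time worth of data in flight. The Python sketch below illustrates the window sizes involved; the round-trip times used are representative assumptions, not measurements from the Abilene Observatory.

    def required_window_bytes(rate_bps: float, rtt_s: float) -> float:
        # TCP must keep roughly rate * RTT bits in flight to fill the path.
        return rate_bps * rtt_s / 8

    # Geneva - Los Angeles, assuming a ~170 ms round-trip time (assumption)
    print(f"{required_window_bytes(5.6e9, 0.170) / 1e6:.0f} MB in flight for 5.6 Gbps")   # ~119 MB

    # A 900 Mbps campus-to-campus flow across the US, assuming a ~70 ms round-trip time
    print(f"{required_window_bytes(900e6, 0.070) / 1e6:.1f} MB in flight for 900 Mbps")   # ~7.9 MB

Sustaining such large windows also requires very low packet loss, which is why the quality of the campus, gigaPoP and backbone segments all matter for end-to-end performance.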




        The ESnet backbone57 is currently adding OC192 (10 Gbps) links across the northern tier
         and OC48 links (2.5 Gbps) across the Southern tier of the US, as shown in Figure
          20. Site access bandwidth is slowly moving from OC12 to OC48. The link to the StarLight58
          optical peering point for international links has been upgraded to 2.5 Gbps.




                             Figure 20 The ESnet backbone in the U.S., in 2003.


          The bandwidth required by DOE's large-scale science projects over the next 5 years is
         characterized in the June 2003 Roadmap59 (discussed in Section 9). Programs that have
         currently defined requirements for high bandwidth include: High Energy Physics,
         Climate (data and computations), NanoScience at the Spallation Neutron Source60,
         Fusion Energy, Astrophysics, and Genomics (data and computations). A new ESnet
         architecture and implementation strategy is being developed in order to increase the
         bandwidth, services, flexibility and cost effectiveness of the network.
         The elements of the architecture include independent, multi-lambda national backbones
         that independently connect to Metropolitan Area Network rings, together with
         independent paths to Europe and Japan. The MAN rings are intended to provide
         redundant paths and on-demand, high bandwidth point-to-point circuits to DOE Labs.

57
   See http://www.es.net, http://www1.es.net/pub/maps/current.jpg and Appendix 12
58
   See http://www.startap.net/starlight
59
   “DOE Science Networking Challenge: Roadmap to 2008.” Report of the June 2003 DOE Science Networking
Workshop. Both Workshop reports are available at http://www.es.net/#research.
60
   The Spallation Neutron Source (SNS) is an accelerator-based neutron source being built in Oak Ridge, Tennessee, by
the U.S. Department of Energy. The SNS will provide the most intense pulsed neutron beams in the world for scientific
research and industrial development.


    The alternate paths can be used for provisioned circuits except in the probably rare
    circumstance when they are needed to replace production circuits that have failed. The
    multiple backbones would connect to the MAN rings in different locations to ensure that
    the failure of a backbone node could not isolate the MAN.
    Another aspect of the new architecture is high-speed peering with the US university
    community, and the goal is to have multiple 2.5-10 Gbps cross-connects with
    Internet2/Abilene to provide seamless, high-speed access between the university
     community and the DOE Labs. The long-term ESnet connectivity goals are shown in Figure 21.
    The implementation strategy involves building the network by taking advantage of the
    evolution of the telecom milieu – that is, using non-traditional sources of fiber,
    collaborations with existing R&D institution network confederations for lower cost
    transport, and vendor neutral interconnect points for more easily achieving competition in
    local loops / tail circuits.
    Replacing local loops with MAN (metropolitan optical network) optical rings should
    provide for continued high quality production IP service, at least one backup path from
    sites to the nearby ESnet hub, scalable bandwidth options from sites to ESnet backbone,
    and point-to-point provisioned high-speed circuits as an ESnet service.
    With endpoint authentication, the point-to-point paths are private and intrusion resistant
    circuits, so they should be able to bypass site firewalls if the endpoints (sites) trust each
    other.
    A clear mandate from the Roadmap Workshop was that ESnet should be more closely
    involved with the network R&D community, both to assist that community and to more
    rapidly transition new technology into ESnet. To facilitate this, the new implementation
    strategy includes interconnection points with National Lambda Rail (NLR) and UltraNet
     – DOE's network R&D testbed.




Figure 21 Long-term ESnet Connectivity Goal. ESnet links are shown in black and the links of
                National Lambda Rail (NLR) are shown in red and yellow.



            The CA*net461 research and education network in Canada connects regional research and
             education networks using wavelengths at typical speeds of 10 Gbps each. The underlying
             architecture of CA*net 4 is two 10Gbps lambdas from Halifax to Vancouver as shown in
             Figure 22. A third national lambda is planned to be deployed later this year. Instead of
              being thought of as one single homogeneous IP-routed network, the CA*net 4 network can
              be better described as a set of independent parallel IP networks, each associated
             with a specific application or discipline. There are connections to the US at Seattle,
             Chicago, and New York.




                        [Map: the CA*net4 route from Vancouver through Calgary, Edmonton, Saskatoon, Regina,
                        Winnipeg, Toronto, Ottawa, Montreal, Fredericton, Charlottetown and Halifax to St. John's,
                        with connections to the US at Seattle, Chicago and New York]


                                    Figure 22 CA*net4 (Canada) network map

             CANARIE has developed special software for control of the optical-electrical switches at
             every CANARIE node which allows individual users or applications to directly control
             the routing, interconnection and switching of user assigned lightpaths across the network.
             In essence the UCLP62 (User Controlled LightPath) software creates layer 1 Virtual
             Private Networks (VPNs) by partitioning each electrical-optical switch into different
             management domains. This software is open source and has been developed by teams at
             the University of Waterloo, Université de Quebec à Montréal, Carleton University,
             Communications Research Center and Ottawa University.

              The UCLP software is fully compliant with the Open Grid Services Architecture
              (OGSA) specification and can therefore be fully integrated into a Grid environment. This
              is particularly important as web services workflow technology evolves, since researchers
              will then be able to interconnect instrumentation web services with network web
              services and with computational or database web services.
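To make the workflow idea concrete, the sketch below chains three notional web services into a discipline-specific network. The class and method names are invented purely for illustration; they are not the actual UCLP or OGSA interfaces.

    # Hypothetical services, for illustration only (not the real UCLP/OGSA API).
    class LightpathService:                      # stand-in for a network web service
        def reserve(self, src: str, dst: str, gbps: int) -> str:
            print(f"reserving a {gbps} Gbps lightpath {src} -> {dst}")
            return "lp-001"                      # handle for the provisioned end-to-end path

    class InstrumentService:                     # stand-in for a detector/telescope service
        def stream_to(self, path_id: str) -> None:
            print(f"streaming data over lightpath {path_id}")

    class ComputeService:                        # stand-in for a Grid compute service
        def analyze(self, path_id: str) -> None:
            print(f"analyzing data arriving via {path_id}")

    # A user-controlled, application-specific "parallel network" assembled as a workflow:
    net, instrument, farm = LightpathService(), InstrumentService(), ComputeService()
    path = net.reserve("TRIUMF", "CERN", gbps=10)
    instrument.stream_to(path)
    farm.analyze(path)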

             The UCLP software and the availability of lightpaths allows for the creation of
             discipline- or application-specific networks across the country. These networks can be
             completely independent of each other and interconnect separate routers, servers and/or
              other devices. The user-controlled lightpaths can also be used by regional networks
              or individual institutions to set up direct peering connections with each other. More
              details on CANARIE's activities are available in Appendix 24.

61
     See http://www.canarie.ca/canet4/connected/canet4_map.html
62
     See http://www.site.uottawa.ca:1090/


6.3.         Korea and Japan

           SuperSINET63 (Figure 23) connects most of the major Japanese universities and national
            laboratories, and is indispensable for HEP research. In addition to a 10 Gbps IP
            connection, it provides discipline-dedicated inter-site GbEs and MPLS-VPNs. The
            inter-site GbEs are provided as individual lambdas, separate from the 10 Gbps IP
            connections, while the MPLS-VPNs are configured over the 10 Gbps IP connections.
            SuperSINET's link to New York was upgraded from 2 x OC48 (5 Gbps) to 4 x
            OC48 (10 Gbps) and was connected to MANLAN64 with 10 GE in December 2003. At
            New York, SuperSINET peers with the R&E networks in America and Europe. The
            current bandwidth to Abilene is 10 Gbps, and it is planned to upgrade the bandwidth to
            GEANT and ESnet from 2.5 Gbit/s to 10 Gbit/s in 2004.




                              Figure 23 SuperSINET (Japan) map, October 2003.




63
     See http://www.sinet.ad.jp/english/index.html and Appendix 16
64
     See http://international.internet2.edu/intl_connect/manlan/


      Two major backbone networks for advanced network research and applications exist in
       Korea: KREONET (Korea Research Environment Open NETwork)65, connected to over 230
       organizations including major universities and research institutions, and the KOREN (KOrea
       Advanced Research Network)66 connected to 47 research institutions. Both networks were
       significantly updated in 2003 as detailed below67.
       o    A major upgrade was made to the KREONET/KREONet2 backbone (Figure 24), raising
            the speed to 2.5-5 Gbps on a set of links interconnecting 11 regional centers, in order to
            support Grid R&D and supercomputing applications. The network also includes a Grid-
             based high-performance research network, called SuperSIReN, operating at 10 Gbps and
             centered on major universities and research institutes in Daedeok Science Valley.


      [Diagram: the KREONET infrastructure as of December 2003 (cf. the KOREN configuration): an
      11-area backbone with 2.5-5 Gbps core links interconnecting Seoul, Suwon, Incheon, Chonan, Daejeon,
      Daegu, Pohang, Busan, Changwon, Gwangju and Jeonju, serving 179 R&D organizations; the 10 Gbps
      SuperSIReN in Daedeok Science Town; international links to APII (Japan, Singapore), TEIN (Europe),
      the USA, STAR TAP/StarLight and Abilene; and domestic exchange points including KIX, EP-Net,
      IX/Dacom, IX/KT and 6NGIX. Source: KISTI, modified by D. Son, Center for High Energy Physics]


      Figure 24 The KREONET Infrastructure, showing the upgraded core network interconnecting 11
       regional centers, and the main national and international connections to other research and education networks




65
  KREONET is a national R&D network, run by KISTI (Korea Institute of Science and Technology Information) and
supported by the Korean government, in particular MOST (the Ministry Of Science & Technology) since 1988. For
science and technology information exchange and supercomputing related collaboration, KREONET provides high-
performance network services for the Korean research and development community. Currently KREONET member
institutions include 50 government research institutes, 72 universities, 15 industrial research laboratories, etc.
(http://www.kreonet.re.kr)
66
   KOREN (KOrea Advanced Research Network) was founded for the purpose of expanding the technological basis of
Korea and for providing a research environment for the development of high speed telecommunications equipment and
application services. Established in 1995, KOREN is a not-for-profit research network that seeks to provide
universities, laboratories and industrial institutes with a research and development environment for 6T-related
technology and application services, based on a subsidy from the Ministry of Information and Communications.
(http://www.koren21.net)
67
   Further details are in Appendix 3


   o   The speed of KOREN, shown in Figure 25, was upgraded to 10 Gbps between Seoul and
       Daejeon, and a 2.5 Gbps Ring configuration was installed connecting four cities (Daejeon
       – Daegu – Busan – Gwangju). There were initially 5 user sites connected at 1 Gbps and
       the number of such sites will be increased soon.




Figure 25 The KOREN infrastructure, showing the 2.5 Gbps ring interconnecting four major cities.




6.4.       Intercontinental links

     The US-CERN link (“LHCNet”), between StarLight (in Chicago) and CERN (in Geneva) is
      jointly funded68 by the US (DOE and NSF through the Eurolink Grant) and Europe (CERN
      and the EU through the DataTAG69 project). This link included a 622 Mbps production
      service and a 2.5 Gbps service primarily for R&D until September 2003, when an upgraded
       service was installed based on a single 10 Gbps (OC-192) link. Today, strict policing based
       on MPLS70 protects the production traffic from the research traffic. Peerings at 10 Gbps with
       Abilene and the TeraGrid have been set up at Chicago, and an upgrade of the peering with
       GEANT to 10 Gbps is scheduled at CERN in 2004. In parallel with these developments,
       LHCnet is planned to be extended to the US west coast via NLR wavelengths made available
       to the HEP community through HOPI and UltraNet. Caltech is deploying a 10 Gbps local
       loop, dedicated to research and development, to the NLR PoP in Los Angeles, using a dark
       fiber between the campus and downtown. A peering in Los Angeles with AARNet (Australia)
       at 10 Gbps to support joint R&D is planned for mid-2004.
      A view of LHCNet and its main interconnections is shown in Figure 26. The optical
      “Lambda triangle” (see the figure) interconnecting Geneva, Amsterdam and Chicago
      with 10 Gbps wavelengths from SURFNet will soon be extended to the UK, forming
      an optical quadrangle, once the “UKLight” project begins operations.
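The separation of production from research traffic relies on rate policing in the network. The Python sketch below illustrates only the underlying token-bucket idea; it is a conceptual illustration, and the rate and burst values are arbitrary rather than the actual LHCNet configuration, which is implemented with MPLS in the routers.

    import time

    class TokenBucket:
        """Admit (research) traffic only up to a configured rate and burst size."""
        def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
            self.rate, self.capacity = rate_bytes_per_s, burst_bytes
            self.tokens, self.last = burst_bytes, time.monotonic()

        def allow(self, packet_bytes: int) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True       # within the research-traffic allowance
            return False          # out of profile: the packet would be dropped or remarked

    policer = TokenBucket(rate_bytes_per_s=7.5e9 / 8, burst_bytes=10e6)  # e.g. a 7.5 Gbps share
    print(policer.allow(1500))    # a single 1500-byte packet is admitted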




                             Figure 26 LHCNet Peering and Lambda triangle




68
     Note that the EU and NSF funding will terminate in March 2004
69
   See http://datatag.web.cern.ch/datatag/
70
   Multi-Protocol Label Switching; see http://www.hyperdictionary.com/dictionary/Multiprotocol+Label+Switching


      StarLight71 is a research-support facility and a major peering point for US national and
       international networks. It is based in Chicago, and is designed by researchers, for researchers.
       It anchors a host of regional, national and international wavelength-rich Lambda Grids, with
       switching and routing at the highest experimental levels. It is also a testbed for conducting
       research and experimentation with “lambda” signaling, provisioning and management, for
       developing new data-transfer protocols, and for designing real-time distributed data mining
       and visualization applications. Since summer 2001, StarLight has been serving as a 1GigE
       and 10GigE electronic switching and routing facility for the national and international
       research and education networks. International lambdas connected to StarLight are shown in
       Figure 27.
      TransLight72 is a global partnership among institutions, organizations, consortia or country
        National Research Networks (NRNs) that wish to make their lambdas available for
       scheduled, experimental use. This one-year global-scale experimental networking initiative
       aims to advance cyberinfrastructure through the collaborative development of optical
       networking tools and techniques and advanced LambdaGrid middleware services among a
       worldwide community of researchers. TransLight consists of many provisioned 1-10 Gigabit
       Ethernet (GigE) lambdas among North America, Europe and Asia via StarLight in Chicago.
       As shown in Figure 27 the Translight members are CANARIE/CA*net4, CERN/Caltech,
       SURFnet/NetherLight,         UIC/Euro-Link,      TransPAC/APAN         (Asia),     NORDUnet,
       NorthernLight, CESNET/CzechLight and AARnet. TransLight is closely linked with the
       GLIF initiative described in section 7.2.




                                                 Figure 27 TransLight



71
     See http://www.startap.net/starlight and Appendix 6
72
     See http://www.startap.net/translight/ and Appendix 7


      The Global Ring Network for Advanced Application Development73 (GLORIAD) shown in
       Figure 28, is the first round-the-world high-speed network jointly established by China, the
       United States and Russia. The multi-national GLORIAD program will actively encourage and
       coordinate applications across multiple disciplines and provide for sharing such scientific
       resources as databases, instrumentation, computational services, software, etc. In addition to
       supporting active scientific exchange with network services, the program will provide a test
       bed for advancing the state-of-the-art in collaborative and network technologies, including
       Grid-based applications, optical network switching, an IPv6 backbone, network traffic
       engineering and network security.
       The ring was launched as “Little Gloriad” (155 Mbps) on January 12, 2004. The ring is
       planned to be upgraded to OC-192 (10 Gbps) in the near future.




         Figure 28 The Global Ring Network for Advanced Application Development (GLORIAD)




73
     See http://www.gloriad.org/ and Appendix 20


           In December 2003, the Trans-Pacific Optical Research Testbed (SXTransPORT74)
             announced the deployment of dual 10 Gbps circuits, connecting Australia's
             Academic and Research Network75 (AARNet) to networks in North America, as part of a
             bundle of services for approved non-commercial scientific, research and educational use.
             This gigantic leap (illustrated in Figure 29) increases the Australia-US trans-Pacific
             bandwidth by a factor of 64. The commissioning of the SXTransPORT testbed is expected
            to be completed by the summer of 2004. Plans to interconnect telescope facilities in
            Australia, the continental US and Hawaii are already under development.
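The factor of 64 can be checked by simple arithmetic; the implied previous capacity (our own inference, not a figure quoted here) is consistent with a pair of 155 Mbps circuits.

    new_capacity = 2 * 10e9              # dual 10 Gbps circuits, in bit/s
    old_capacity = new_capacity / 64     # implied previous Australia-US research bandwidth
    print(f"previous capacity ~ {old_capacity / 1e6:.0f} Mbps")   # ~313 Mbps, roughly 2 x 155 Mbps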




     Figure 29 Australia-US bandwidth for research and education, showing the new dual 10
                                               Gbps research links




74
     See http://www.aarnet.edu.au/news/sxtransport.html
75
     See http://www.aarnet.edu.au/ and Appendix 15


7.       Advanced Optical Networking Projects and Infrastructures

7.1.     Advanced Optical Networking Infrastructures
Most conventional carriers, a growing number of utilities and municipalities, and a number of
new-entrant carriers have installed fiber-optic cabling on their rights of way that well exceeds
their current needs, and so remains "unlit", or "dark". Lighting these fibers can now be done using
relatively inexpensive technology that is identical in many respects to that used on local area
networks, and so is on the way to being a "commodity", if not a "consumer" item.
Building networks based on a combination of this new technology and access to either pre-existing
dark fiber or fiber newly installed for the purpose is increasingly seen as a way to build very high
capacity networks at very low cost, while gaining a degree of control over the network that had
previously rested with the carrier.
In 2003-4 we are seeing the emergence of some privately owned or leased wide area fiber
infrastructures, managed by non-profit consortia of universities and regional network providers,
to be used on behalf of research and education. This marks an important change: from an era of
managed bandwidth services, to one where the research and education community itself owns and
shares the cost of operating the network infrastructure. The abundantly available dark fibers and
lambdas will cause a paradigm shift in networking. In the new scheme, the costs of adding
additional wavelengths to the infrastructure, while still significant, are much lower than was
previously thought to be possible. Many of the current advanced national initiatives are listed
below. In addition, there are many regional optical initiatives in the U.S.76

AARNet in Australia
In a deal finalized in December 2003, AARNet (the Australia's Academic and Research Network)
acquired dark fibres across Australia for 15 years. Initially these fibers will provide 10Gbps
across the country but the AARNet3 design will be capable of driving speeds of 40 Gbps and
beyond.

SANET in Slovakia
The SANET Association started building its gigabit network in June 2001. At this time it connects
all of the universities in Slovakia. The whole network is based on leased dark fibers with a total
length of 1660 km. All links use Gigabit Ethernet technology at speeds of 1 Gbps, or 4 Gbps using
CWDM. The longest Gigabit Ethernet segment of the SANET backbone is 112 km. SANET has
supported the idea of international cross-border links based on leased dark
fibre since 2002. In August 2002 SANET became the first NREN in Europe to establish
international Gigabit Ethernet connections: to Austria, and then in April 2003 to the Czech
Republic. SANET is also planning to establish a dark fiber link to Poland this year.



76
  The U.S. regional initiatives include: California (CALREN), Colorado (FRGP/BRAN), Connecticut (Connecticut
Education Network), Florida (Florida LambdaRail), Indiana (I-LIGHT), Illinois (I-WIRE), Maryland, D.C. & northern
Virginia (MAX), Michigan, Minnesota, New York + New England region (NEREN), North Carolina (NC
LambdaRail), Ohio (Third Frontier Network), Oregon, Rhode Island (OSHEAN), SURA Crossroads (southeastern
U.S.), Texas, Utah and Wisconsin.


                                                       47
CESNET in the Czech Republic
CESNET77 (the Czech Academic and Research Network) has been leasing fibres since 1999. The
current national fiber footprint, realized or contracted, comprises 17 lines with an overall length of
2513 km. Most of the CESNET backbone links rely on these leased fibers. The advantages are broad
independence from carriers, better control of the network, and substantial savings at higher
transmission rates and for additional lambdas. Table 1 shows a case study comparing the costs of
leasing wavelength (“lambda”) services to leasing the fiber and operating it oneself, based on
offers for the year 2003. It includes 4-year depreciation of equipment, academic discounts and
equipment service fees. As shown, a cost savings of a factor of 2 to 3 can be achieved for
relatively long 2.5 and 10 Gbps links.

             1 x 2,5G                     Leased 1 x 2,5G        Leased fibre with own equipment
                                           (EURO/Month)                   (EURO/Month)
about 150km (e.g. Ústí n.L. - Liberec)         7,000                          5 000 *
  about 300km (e.g. Praha - Brno)              8,000                          7 000 **

                   *                     2 x booster 18dBm
                  **                     2 x booster 27dBm + 2 x preamplifier + 6 x DCF


            4 x 2,5G                      Leased 4 x 2,5G        Leased fibre with own equipment
                                           (EURO/Month)                   (EURO/Month)
about 150km (e.g. Ústí n.L. - Liberec)        14,000                          8 000 *
  about 300km (e.g. Praha - Brno)             23,000                         11 000 **

                   *                     2 x booster 24dBm, DWDM 2,5G
                  **                     2 x (booster +In-line + preamplifier), 6 x DCF, DWDM 2,5G


             1 x 10G                       Leased 1 x 10G        Leased fibre with own equipment
                                           (EURO/Month)                   (EURO/Month)
about 150km (e.g. Ústí n.L. - Liberec)         14,000                         5 000 *
  about 300km (e.g. Praha - Brno)              16,000                         8 000 **

                   *                     2 x booster 21dBm, 2 x DCF
                  **                     2 x (booster 21dBm + in-line + preamplifier) + 6 x DCF


             4 x 10G                       Leased 4 x 10G        Leased fibre with own equipment
                                           (EURO/Month)                   (EURO/Month)
about 150km (e.g. Ústí n.L. - Liberec)         29,000                        12 000 *
  about 300km (e.g. Praha - Brno)              47,000                        14 000 **

                   *                     2 x booster 24dBm, 2 x DCF, DWDM 10G
                  **                     2 x (booster +In-line + preamplifier), 6 x DCF, DWDM 10G



           Table 1 Case study in Central Europe: buying lambdas vs. leasing fibre
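For illustration, the following short Python sketch reproduces the savings factor implied by Table 1 for the two 1 x 10G cases. The prices are copied from the table; the route labels and the structure of the code are illustrative only and are not part of the CESNET study.

# Sketch of the Table 1 comparison: leased lambda service vs. leased dark
# fibre operated with one's own equipment (prices in EUR/month from Table 1,
# 1 x 10G cases; labels are illustrative).
CASES_1X10G = {
    "about 150 km (Usti n.L. - Liberec)": (14_000, 5_000),
    "about 300 km (Praha - Brno)":        (16_000, 8_000),
}

def savings(leased_lambda, own_fibre):
    """Return (monthly saving in EUR, savings factor)."""
    return leased_lambda - own_fibre, leased_lambda / own_fibre

for route, (lam, fib) in CASES_1X10G.items():
    saved, factor = savings(lam, fib)
    print(f"{route}: saves {saved:,.0f} EUR/month, a factor of {factor:.1f}")
# Prints factors of 2.8 and 2.0, i.e. the factor of 2 to 3 savings quoted in
# the text for relatively long 10 Gbps links.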




                                                  48
PIONIER in Poland
The deployment of the PIONIER78 (Polish NREN) network started in 2001 with the fiber acquisition
process. As the availability and quality of the existing fibers were not satisfactory for the current and
future demands of optical networking, the decision was taken to build new fibers in cooperation with
telecommunication carriers, using a cost-sharing model. By June 2003, 2650 km of fiber lines had
been laid, connecting 16 MANs. The complete fiber network shall connect 21 MANs with 5200 km
of fiber by 2005, as shown in Figure 30.

                  Figure 30 The PIONIER Fiber network in Poland, showing the installed fiber and
                  PIONIER nodes, together with the fiber segments and nodes planned in 2004

SURFnet6 in the Netherlands
The deployment of the next generation Dutch research and education network SURFnet6 will be
based on dark fibers. As shown in Figure 31 and detailed in Appendix 9, over 3000 km of
managed dark fiber pairs are already available to SURFnet today.




                Figure 31 Managed dark fiber pairs for SURFnet6, in the Netherlands

78
     See http://www.pionier.gov.pl/str_glowna.html and Appendix 13


                                                                     49
X-WiN in Germany
DFN in Germany has started the process of upgrading from G-WiN to its next generation network
X-WiN.79 All links between the core nodes will be upgraded to 10 Gbps, and a flexible
reconfiguration scheme with latencies below 7 days will allow for dynamic reconfiguration of the
core links in case of changing data flow requirements. One major addition to the existing standard
services will be bandwidth-on-demand, whose technical and economic feasibility will be explored.
               o   In terms of the base technology, diverse approaches are possible. Options include:
                          SDH/Ethernet as a basic platform
                          Managed lambdas
                          Managed dark fiber and DFN’s own WDM

       The market in Germany for dark fiber offers interesting possibilities. For example, as shown
       on Figure 32, GasLine, a national provider for natural gas, has installed optical fibers along
       its gas pipelines. The geographical coverage is not only interesting for the core infrastructure
       but also for the many institutions that are found in the proximity of the links. The respective
       technical characteristics of the fibers and the economic aspects look very promising. The
       roadmap for the migration to X-WiN includes the installation and operation of an optical
       testbed, called Viola. Network technology tests in the (real) user environment will provide
       important input to the design of the next generation NREN. A feasibility study will be
       completed in early 2004, and the concept will be worked out by Q3/04. The actual
       migration from G-WiN to X-WiN is expected to take place in Q4/05.




79
     See Appendix 2.


                                                   50
                            Figure 32 GasLine dark fiber network in Germany

FiberCo in the US
FiberCo80 in the US is a holding company that helps to provide inter-city dark fiber to regional
optical networks with the benefit of a national-scale contract and aggregate price levels. The
responsibility for lighting this fiber will rest with the regional networks. A secondary objective is
to ensure that the U.S. research university collective maintains access to a strategic fiber
acquisition capability on the national scale for future initiatives. FiberCo has executed two
agreements with Level 3 Communications that 1) provide it with an initial allocation of over
2,600 route-miles of dark fiber anywhere on Level 3's national footprint (see Figure 33) and 2) set
the ongoing costs for fiber maintenance and equipment co-location.




                              Figure 33 FiberCo Available fiber topology

National LambdaRail in the US
National LambdaRail81 (NLR) is not a single network but rather a unique and rich set of facilities,
capabilities and services that will support a set of multiple, distinct, experimental and production
networks for the U.S. research community. On NLR, these different networks will exist side-by-
side in the same fiber-optic cable pair, but will be physically independent of each other as each
will be supported by its own lightwave or “lambda”. The principal objectives of NLR are to:
    Bridge the gap between leading-edge optical network research and state-of-the-art
     applications research;
    Push beyond the technical and performance limitations of today’s Internet backbones;
    Provide the growing set of major computationally intensive science (e-Science) projects,
     initiatives and experiments with the dedicated bandwidth, deterministic performance
     characteristics, and/or other advanced network capabilities needed; and


80
  See www.FiberCo.org
81
  See http://www.nationallambdarail.org/ and Appendix 27


                                                       51
     Enable the potential for highly creative, out-of-the-box experimentation and innovation that
      characterized facilities-based network research during the early years of the Internet.

A crucial characteristic of NLR is the capability to support both experimental and production
networks at the same time – with 50 percent of its resources allocated to network research.
As of January 2004, the Portland - Seattle and Chicago - Pittsburgh links were already up. The
West Coast segment (San Diego - Seattle) will be up in March-April. The entire first phase of NLR (San
Diego to Los Angeles to Sunnyvale to Seattle to Denver to Chicago to Pittsburgh to Washington
to Raleigh to Atlanta to Jacksonville) will be operational by the end of August 2004. Planning is
being finalized for the second phase, the remainder of the nationwide backbone. The NLR
infrastructure is shown on Figure 34.

    Figure 34 National LambdaRail: planned layout of the optical fibre route (from Level(3)) and
     Cisco 15808 optical multiplexers (terminal, regen or OADM sites shown; OpAmp sites not shown)
Without the availability of dark fibers, the NLR infrastructure would probably never have been
deployed. As illustrated in Figure 35, the role of dark fibers is vital in linking the NLR optical
infrastructure to campuses and laboratories, via regional optical networks.




               Figure 35 NLR’s ‘Virtuous Circles’ and the Vital Role of Dark Fiber



                                                   52
7.2.     Advanced Optical Networking Projects and Initiatives

    The DataTAG82 project has deployed a large-scale intercontinental Grid testbed involving the
     European DataGrid project, several national projects in Europe, and related Grid projects in
     the USA. The transatlantic DataTAG testbed is one of the largest 10 Gigabit Ethernet (10GE)
     testbeds ever demonstrated, in addition to being the first transatlantic testbed with native
     10GigE access capabilities. The project explores some forefront research topics such as the
     design and implementation of advanced network services for guaranteed traffic delivery,
     transport protocol optimization, efficiency and reliability of network resource utilization,
     user-perceived application performance, middleware interoperability in multi-domain
     scenarios, etc. One of the major achievements of the project is the establishment of the new
     Land Speed Record83 with a single TCP stream of 5.64 Gigabit/sec between Geneva and Los
     Angeles, sustained for more than one hour.
    NetherLIGHT84 is an advanced optical infrastructure in the Netherlands providing the
      foundation for network services optimized for high-performance applications. Operational
     since January 2002, NetherLIGHT is a multiple Gigabit Ethernet switching facility for high-
     performance access to participating networks, and will ultimately become a pure wavelength
     switching facility for wavelength circuits as optical technologies and their control planes
     mature. NetherLIGHT has become a major hub in GLIF, the Global Lambda Integrated
     Facility for Research and Education shown in Figure 36. GLIF is a World Scale Lambda
     based Laboratory for Application and Middleware development on the emerging
     “LambdaGrid”, where Grid applications ride on dynamically configured networks based on
     optical wavelengths. The GLIF community shares the vision to build a new Network
     paradigm, which uses the Lambda network to support data transport for the most demanding
     e-Science applications, concurrent with the normal aggregated best effort Internet for the
     commodity traffic.




                   Figure 36 GLIF (Global Lambda Integrated Facility), 1Q2004


82
   See http://www.datatag.org
83
   See http://lsr.internet2.edu
84
   See http://www.surfnet.nl/innovatie/netherlight/ and Appendixes 5 and 8


                                                         53
      UltraNet85 is an experimental research testbed (Figure 37) funded by the DOE Office of
       Science to develop networks with unprecedented capabilities to support distributed large-
       scale science applications that will drive extreme networking, in terms of sheer throughput as
       well as other capabilities.




                                          Figure 37 UltraNet Testbed

      UKLight86 will enable the UK to join several other leading networks in the world creating an
       international experimental testbed for optical networking. UKLight will bring together
       leading-edge applications, Internet engineering for the future, and optical communications
       engineering, and enable UK researchers to join the growing international consortium which
       currently spans Europe and North America. These include StarLight in the USA, SURFnet in
       the Netherlands (NetherLIGHT), CANARIE (Canadian academic network), CERN in
        Geneva, and NorthernLIGHT bringing the Nordic countries onboard. UKLight will connect
        the UK national research backbone, JANET, to the testbed and also provide access for UK
        researchers to the Internet2 facilities in the USA via StarLight.




85
     See http://www.csm.ornl.gov/ultranet/
86
     See http://www.ja.net/development/UKLight/ and Appendix 10


                                                        54
    Garden (GRIDs and Advanced Research Development Environment and Network) is a
     research program being submitted87 to the European Commission. The project has been
     launched by Cisco Systems and can be seen as the European equivalent of NLR. GARDEN
     proposes to build an intercontinental IP-controlled optical network testbed, also based on
     future research infrastructures. The project's goal is to develop new protocols, architectures
     and AAA models, along with new GRID developments. There is a strong prospect that Cisco
     will go on with GARDEN whether it is EU-funded or not.

    The UltraLight88 concept is the first of a new class of integrated information systems that will
     support the decades-long research program at the Large Hadron Collider (LHC) and other
     next generation sciences. Physicists at the LHC face unprecedented challenges: (1) massive,
     globally distributed datasets growing to the 100 petabyte level by 2010; (2) petaflops of
     distributed computing; (3) collaborative data analysis by global communities of thousands of
     scientists. In response to these challenges, the Grid-based infrastructures developed by the
     LHC collaborations provide massive computing and storage resources, but are limited by
     their treatment of the network as an external, passive, and largely unmanaged resource.
     UltraLight will overcome these limitations by monitoring, managing and optimizing the use
     of the network in real-time, using a distributed set of intelligent global services.

     The UltraLight network will combine statically provisioned network paths (supporting a
     traffic mix including some gigabit/s flows using advanced TCP protocol stacks) with
     dynamically configured optical paths for the most demanding applications, managed end-to-
     end by the global services. UltraLight is being designed to support Grid-based data analysis
     by the LHC and other large physics collaborations.89




                          Figure 38 Initial Planned UltraLight Implementation



87
   Another project called GARDEN is in the proposal stage. GARDEN (Grid Aware Network Development in Europe) is
a project that proposes to enrich the next generation research infrastructures (GEANT successors) with grid concepts,
services and usage scenarios. The proposal's goal is to create a pan-European heterogeneous testbed combining
traditional IP networks with advanced circuit-switched networks (gigabit switched or lambda switched).
88
   See http://ultralight.caltech.edu
89
   See http://ultralight.caltech.edu/gaeweb


                                                        55
      TeraGrid90 is a multi-year effort to build and deploy the world's largest, most comprehensive,
       distributed infrastructure for open scientific research. The TeraGrid currently includes ten
       U.S. sites (see Figure 39) housing more than 20 teraflops of computing power, facilities
       capable of managing and storing a petabyte of data, high-resolution visualization
       environments, and toolkits for grid computing. Four new TeraGrid sites, announced in
       September 2003, added more scientific instruments, large datasets, and additional computing
       power and storage capacity to the system. All the components will be tightly integrated and
       connected through a network that operates at 40 gigabits per second.




 Figure 39 The TeraGrid in 2004. The centers at NCSA, Argonne, ORNL, the San Diego, Pittsburgh
       and Texas/Austin Supercomputer Centers, Caltech, Indiana University and Purdue are
                interconnected by a network of three to four 10 Gbps wavelengths.

    The Hybrid Optical/Packet Infrastructure (HOPI) initiative is led by Internet2 and brings together a
      variety of people from the high-speed network community. The initiative will examine new
      infrastructures for the future and can be viewed as a prelude to the process for the 3rd
       generation Internet2 network architecture. The design team91 will focus on both architecture
       and implementation. It will examine a Hybrid of shared IP packet switching and dynamically
       provisioned optical lambdas. The eventual hybrid will require a rich set of wide-area lambdas
       and the appropriate switching mechanisms to support high capacity and dynamic
       provisioning. The immediate goals are the creation and the implementation of a test-bed
       within the next year in coordination with other similar projects. The project will rely on the
       Abilene MPLS capabilities and dedicated waves from NLR.


90
     See http://www.teragrid.org/
91
     The HOPI design team includes S. Ravot and H. Newman (Caltech).


                                                        56
In planning the next generation Internet2 networking infrastructure, we anticipate the design
and deployment of a new type of hybrid network – one combining a high-performance,
shared packet-based infrastructure with dynamically provisioned optical lambdas and other
'circuits' offering more deterministic performance characteristics. We use the term HOPI to
denote both the effort to plan this future hybrid, and the set of testbed facilities that we will
deploy collaboratively with our members to test various aspects of candidate hybrid designs.

The eventual hybrid environment will require a rich set of wide-area lambdas connecting both
IP routers and lambda switches capable of very high capacity and dynamic provisioning, all
at the national backbone level. Similarly, we are working now to facilitate the creation of
regional optical networks (RONs) through the acquisition of dark fiber assets and the
deployment of optronics to deliver lambda-based capabilities. Finally, we anticipate that the
planned hybrid infrastructure will require new classes of campus networks capable of
delivering the various options to high-performance desktops and computational clusters.

To enable the testing of various hybrid approaches, we are planning the initial HOPI testbed,
making use of resources from Abilene, the emerging set of RONs, and the NLR
infrastructure. A HOPI design team, composed of engineers and scientists from Internet2
member universities and laboratories, is now at work.

As our ideas, testing, and infrastructure planning evolve, we will work closely with the high
energy physics community to ensure that the most demanding needs of our members (e.g.,
LHC) are met. We expect that the resulting hybrid packet and optical infrastructure will play
a key role in a scalable and sustainable solution to the future needs of this community.




                                            57
8.          HENP Network Status: “Remote Regions”
Outside of North America, western Europe, Australia, Korea and Japan, network connections
are generally slower92, and often much slower. Link prices in many countries have remained high,
and affordable bandwidths are correspondingly low. This is caused by one or more of the
following reasons:
       Lack of competition
       Lack of local or regional infrastructure
       Government policies restricting competition or the installation of new facilities,
        or fixing price structures.
Notable examples of countries in need of improvement include China, India, Romania and
Pakistan, as well as some other countries where HEP groups are planning “Tier2” Regional
Centers. Brazil (UERJ, Rio) is planning a Tier1 center for LHC (CMS) and other programs.
These are clear areas where ICFA-SCIC and ICFA as a whole can help.
As shown in this and previous sections, some of the neediest countries began to make substantial
progress over the last two years, while others (notably India) are in danger of falling farther
behind.
8.1.        Eastern Europe
      The network infrastructure for education and research in Romania is provided by the
       Romanian Higher Education Data Network93 (RoEduNet). Important progress has been
       made over the last two years, spurred on in part by the collaboration of Romanian groups in
       European (EDG, EGEE) and U.S. (PPDG, iVDGL) Grid projects.




                                Figure 40 The RoEduNet network in Romania



92
     Also see the 2003 Digital Divide Working Group report, and the ICFA-SCIC meeting notes at http://cern.ch/icfa-scic
93
     See http://www.roedu.net/ and Appendix 17


                                                           58
         As shown in Figure 40, the backbone currently has two 155 Mbps inter-city links and access to
         GEANT at 622 Mbps. The current plan is to connect three or four centers at 2.5 Gbps, and
         then deploy a 10 Gbps network infrastructure that may be based on dedicated dark fibers.


8.2.     Russia and the Republics of the former Soviet Union

Today the capacity of backbone channels for science in Russia is at the level of 45 Mbps,
although in many cases it is just a few Mbps and still (in rare but important cases) hundreds of
Kbps. At the same time Gigabit/sec networking is coming. There are several 1 Gbps links in
Moscow, and some will start in other regions.
International connectivity for Russian science is now at the 155 Mbps level. Connectivity with
the NaukaNet94 155 Mbps link from Moscow to Chicago (Starlight) via Stockholm (RBNet95
PoP) provides high-performance and highly reliable connectivity with HEP partners in the US. In
the summer of 2002, another 155 Mbps Moscow-Stockholm link was deployed for RBNet
commodity Internet traffic. In total, RBNet now has four STM-1 links between Moscow and
Stockholm, i.e. 622 Mbps of total bandwidth. In February-March 2004, the 155 Mbps
connectivity96 to GEANT is to be strengthened by a second 155 Mbps link for backup, which will
also be used for pan-European GRID projects97. The bandwidth of the RUNNet-NORDUNet
(Moscow-Petersburg-Helsinki) link is now 622 Mbps. In Helsinki the traffic is handed over to
NORDUnet for onward worldwide distribution. Russian HEP institutes can potentially use this
connectivity for their needs, in particular for international GRID initiatives. Connectivity to KEK
(Japan) for Russian HENP institutes is provided by a 512 Kbps terrestrial link between
Novosibirsk (BINP) and Tsukuba. The NaukaNet link is used also for connectivity with Japan,
via StarLight. The recent enhancements of the international connectivity with CERN, US, Japan
and European partners should guarantee that Russian groups will be able to partner effectively in
several international projects, notably the LHC Computing Grid (LCG) testbeds and the EGEE
activity, in 2004.
Further plans, for the development of international and regional connectivity for Russian science
and higher education in 2005-2007, depend strongly on the budget that will be available for those
years. One of the challenging initiatives is now the GLORIAD project, which could change
the situation in Russia drastically if it succeeds in realizing its declared plans. Unfortunately,
for 2004 there was no return to direct financing of the Russian NREN from the State Budget (this
budget was cancelled at the end of 2002). So, the “Joint Solution” whereby funding is provided
by several Russian Ministries (Ministry of Industry, Science and Technologies; Ministry of
Atomic Energy, Ministry of Education, Russian Academy of Science) and major scientific centres
(RRC “Kurchatov Institute”, Moscow State University and Joint Institute for Nuclear Research),
that was initiated at the beginning of 2003, should be continued for 2004 as well, and this
extension is currently being realized. The collective budget should be at the level of 7M US
dollars for 2004, in comparison with 5M US dollars in 2003. This increase is caused by higher
prices in 2004 from Russian operators (+25% !), and the necessity to pay more for proper
development of international connectivity – to GEANT, and for the GLORIAD network, etc.
Achieving high-performance and highly reliable connectivity with CERN, and the Laboratories
participating in the LHC project remains a challenge, placing major demands on both the external
connectivity and the regional links. One should note that 2003 marks the first time that the

94
   See http://www.naukanet.org
95
   See http://www.ripn.net:8082/rbnet/en/index.html
96
   The annual core fee is covered by MoIST and RBNet manages this link
97
   EGEE in particular


                                                       59
Russian HEP requirements for international connectivity were properly met. For specific needs, e.g.
Data Challenges of LHC Experiments, the typical case in 2003 was for Russian institutes to
transfer data at speeds up to 50 Mbps. However, setting up virtual channels in the link to GEANT
with a flexible policy for capacity usage and sharing (for example, by use of MPLS technology)
is recognized as an important task, particularly for the LHC Data Challenges as well as for
effective use of Grid testbeds (e.g. in the framework of EGEE project).
The annual volume of data produced by Russian HEP institutes in the framework of the program
of Data Challenges was 25-30 Terabytes (TB) in 2003 and is projected to be 50-70 TB in 2004.
Thus, a rough estimate is that one can expect the data exchange between the Russian institutes and
CERN (and other regional centres) to be 120 and 250 TB in the years 2004 and 2005, respectively. Therefore, the
bandwidth used for data exchange with CERN should be at the level of 100-155 Mbps in 2004,
and at the level of 300 Mbps in 2005.
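As a cross-check of these figures, the following illustrative Python calculation converts the projected annual data-exchange volumes into average sustained rates; the factor of 4 headroom applied for bursty Data Challenge transfers is an assumption made here for illustration, not a figure from the report.

# Convert projected annual Russia-CERN data exchange (about 120 TB in 2004,
# about 250 TB in 2005) into average sustained rates, then apply an assumed
# headroom factor for bursty transfers.
SECONDS_PER_YEAR = 365 * 24 * 3600
HEADROOM = 4  # assumption: provision ~4x the yearly average for bursts

def average_mbps(tb_per_year):
    """Average rate in Mbps needed to move tb_per_year terabytes in one year."""
    return tb_per_year * 1e12 * 8 / SECONDS_PER_YEAR / 1e6

for year, tb in [(2004, 120), (2005, 250)]:
    avg = average_mbps(tb)
    print(f"{year}: {tb} TB/year -> {avg:.0f} Mbps average, "
          f"~{avg * HEADROOM:.0f} Mbps with {HEADROOM}x headroom")
# 120 TB/year -> ~30 Mbps average (~120 Mbps with headroom), and
# 250 TB/year -> ~63 Mbps average (~250 Mbps with headroom), consistent with
# the 100-155 Mbps (2004) and ~300 Mbps (2005) levels quoted above.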
Outside of these main scientific areas in Russia, network capabilities remain modest. The network
bandwidth in the other republics of the former Soviet Union, and the speed of their international
connections, also remain low. This statement also applies to international connections to
Novosibirsk: the link from KEK to the Budker Institute (BINP) was recently upgraded  but only
from 128 to 512 kbps.

DESY is assisting the newly independent states of the Southern Caucasus (comprising Armenia,
Azerbaijan and Georgia) and of Central Asia (comprising Kazakhstan, the Kyrgyz Republic,
Tajikistan, Turkmenistan and Uzbekistan) with satellite connections. These countries, shown
in Figure 41, are located on the fringe of the European Internet arena and will not be within reach of
affordable optical fiber connections within the next few years. The project, called Silk98, provides
connectivity to the GEANT backbone via satellite links. The project started with a transmit plus
receive bandwidth of 2 Mbps in September 2002 and increased it to 10 Mbps in December 2003.
From January 2004, the bandwidth is planned to be increased linearly by 500 kbps/month until
June 2005. This will lead to a maximum transmit plus receive bandwidth of about 24 Mbps by the
end of the period99.




98
     See http://www.silkproject.org and Appendix 13
99
  An important discussion among SCIC members (led by D. Williams) on the role of satellite links took
place during the past year. While satellite links are required in remote regions where no optical fiber
infrastructure (at all) is available, there is now no doubt that satellite links are far too expensive to support
the level of connectivity required for effective collaboration in a major HEP experiment. Estimates of the
ratio of the intrinsic cost per unit bandwidth of satellite links (in the Mbps range) to the best optical fiber
connectivity (in the 1-10 Gbps range) vary from several hundred to as high as 1000. Given this wide
disparity, there is no possibility that satellite links can lead to “research inclusiveness” for the region
concerned. It is therefore imperative for our global science collaborations that we encourage the
installation and effective use of modern optical fiber infrastructures wherever possible. These
discussions in SCIC will continue during 2004.


                                                        60
                               Figure 41 Countries participating in the Silk project


8.3.         Mediterranean countries
Internet connectivity is a relatively scarce resource in the Mediterranean countries; there is
virtually no direct intra-Mediterranean connectivity (between 2 Mediterranean countries), modest
internal connectivity (among research centers in a given country) and very modest Euro-
Mediterranean connectivity.

EUMEDCONNECT100 aims to establish Internet-based research-networking interconnections
among the Mediterranean partner countries (intra) as well as with the European
research networks (inter). This intra- and inter-connectivity will not only boost the
development of the Internet in each Mediterranean country, but will also create an infrastructure
around the Mediterranean region, which will transport any sort of co-operative research
application developed by the project participants. EUMEDCONNECT is financially supported by
the European Commission and includes Algeria, Cyprus, Egypt, Israel, Jordan, Lebanon, Malta,
Morocco, the Palestinian Authority, Syria, Tunisia and Turkey.




100
      See http://archive.dante.net/eumedconnect/


                                                        61
8.4.        Asia Pacific

There has been some progress in Asia, thanks to the links of the Asia-Pacific Advanced
Network101 (APAN), shown in Figure 42, and the help of KEK. The bandwidths of these links are
summarized in Table 2. While there are high bandwidth Japan-U.S. and Taiwan-U.S. links, most
of the international links within Southeast Asia are somewhere in the range from 0.5 to 155
Mbps, and the links to Southeast Asia are near the lower end of this range.

A notable upgrade on the link between Japan and Korea (APII) took place in January 2003, from
8 Mbps to 1 Gbps, and a second 1 Gbps link was added during 2003. The most prominent
example of progress in this region for 2003-4 is the upgrade from 310 Mbps to 20 Gbps of the
Australia-US bandwidth.




               Figure 42 View of the sub-regions of Asia and Australia participating in APAN




101
      See http://www.apan.net/ and Appendix 16


                                                   62
Table 2 Bandwidth of APAN links in Mbps (January 2004).




                          63
8.5.      South America

         The Brazilian Research and Education network RNP2102 is one of the most advanced
          R&E networks in South America. It has two international links. One of them, of 155
          Mbps, is used for Internet production traffic. The other is a 45 Mbps link that is
          connected to Internet2 through the AMPATH103 International Exchange Point in Miami,
          Florida and is used only for interconnection and cooperation among academic networks.
          Soon, the backbone will interconnect all Federal Institutions of Higher Education and
          Research Units in the Ministry of Science and Technology (MCT). In parallel, a new
          project called the Giga project has started. It aims to deploy a national optical network by
          2007, in which data will flow via the IP protocol directly over DWDM systems at Gbps
          speeds.

          Today, the internal connectivity in Brazil, shown in Figure 43, is generally modest,
          varying from a few Mbps up to 34 Mbps to the most populous areas, and the backbone is still
          based on ATM. The connection to UERJ (via the Rede Rio metropolitan network of Rio de
          Janeiro) is currently very limited (to 16 Mbps). As a result, in the context of the
          CHEPREO104 project, ICFA-SCIC is involved in the improvement of the local
          connectivity to A. Santoro's regional computing center at UERJ. This is required if the
          Brazilian consortium of physics groups and computer scientists is to have a significant
          role in LHC physics, and Grid system developments. Further details about the internal
          connectivity in Brazil are available in Appendix 1.




                                  Figure 43 The RNP2 network in Brazil



102
    See http://www.rnp.br/
103
    AMPATH, “Pathway of the Americas”, see http://www.ampath.fiu.edu/
104
    See http://www.chepreo.org/about.htm. An Inter-regional grid enabled Center for High Energy Physics
Research and Educational Outreach at FIU.


                                                      64
           CLARA105 (Cooperación Latino-Americana de Redes Avanzadas) is the recently
            created association of Latin American NRENs. The objective of CLARA is to promote
            co-operation among the Latin American NRENs to foster scientific and technological
            development. Its tasks will include promotion and project dissemination in Latin
            America, to ensure the long-term sustainability of the Latin American research networks
            and their interconnection to the U.S. and Europe. The proposed topology for the CLARA
            backbone is shown in Figure 44.




                              Figure 44 Proposed topology of CLARA’s backbone


           The mission of AMPATH106 is to serve as the pathway for Research and Education
            Networking in the Americas and to the World. Active since 2001, the AMPATH project
            allows participating countries to contribute to the research and
            development of applications for the advancement of Internet technologies. In January
            2003, the connection to Internet2's Abilene network was upgraded to an OC12c
            (622Mbps). The AMPATH network is shown on Figure 45.




105
      See http://www.rnp.br/en/news/2002/not-021202.html
106
      See http://www.ampath.fiu.edu/ and Appendix 21 and 22


                                                        65
                 Figure 45 AMPATH “Pathway of the Americas”, showing the links between
                                     the US and Latin America.


      The ALICE107 project was set up in 2003 to develop an IP research network infrastructure
       within the Latin American region and towards Europe. It addresses the infrastructure
objectives of the European Commission's @LIS program, which aims to promote the
       Information Society and fight the digital divide throughout Latin America. In Latin America,
       intra-regional connectivity is currently not developed. There is also no organized connectivity
       between the pan-European research network, GÉANT, and the National Research and
       Education Networks in Latin America. ALICE seeks to address these limitations. It also aims
       to foster research and education collaborations, both within Latin America and between Latin
       America and Europe.




107
      See http://www.dante.net/server/show/nav.009


                                                     66
       9. The Growth of Network Requirements in 2003

The estimates of future HENP domestic and transatlantic network requirements have increased
rapidly over the last three years. This is documented, for example, in the October 2001 report of
the DOE/NSF Transatlantic Network Working Group (TAN WG)108. The increased requirements
are driven by the rapid advance of affordable network technology (as illustrated in many
examples in the previous sections of this report) and especially the emergence of “Data Grids”109,
that are foreseen to meet the needs of worldwide HENP collaborations. The LHC “Data Grid
hierarchy” example (shown in Figure 46, as envisaged in 2000-2001) illustrates that the
requirements for each LHC experiment were expected to reach ~2.5 Gbps by approximately 2005
at the national Tier1 centers at FNAL and BNL, and ~2.5 Gbps at the regional Tier2 centers.
Taken together with other programmatic needs for links to DESY, IN2P3 and INFN, this estimate
corresponded to an aggregate transatlantic bandwidth requirement rising from 3 Gbps in 2002 to
23 Gbps in 2006.

As discussed in the following sections, it was understood in 2002-3 that the network bandwidths
shown in Figure 46 correspond to a conservative “baseline” estimate of the needs, formulated
using an evolutionary view of network technologies and a bottom-up, static and hence overly
conservative view of the Computing Model needed to support the LHC experiments.

      Figure 46 The LHC Data Grid Hierarchy (as envisaged in 2000-2001), with a CERN/outside
   resource ratio of ~1:2 and Tier0 : (sum of Tier1) : (sum of Tier2) of ~1:1:1. The online system
   delivers ~100-400 MBytes/sec to the Tier0+1 center at CERN; Tier1 and Tier2 centers are linked
          at ~2.5 Gbps, and institute-level Tier3 and desktop Tier4 resources at 0.1-1 Gbps.



108
    The report of this committee, commissioned by the US DOE and NSF and co-chaired by H. Newman (Caltech) and
L. Price (Argonne Nat'l Lab) may be found at http://gate.hep.anl.gov/lprice/TAN. For comparison, the May 1998 ICFA
Network Task Force Requirements report may be found at http://l3www.cern.ch/~newman/icfareq98.html.
109
    Data Grids for high energy and astrophysics are currently under development by the Particle Physics Data Grid
(PPDG; see http://ppdg.net), Grid Physics Network (GriPhyN; see http://www.griphyn.org), iVDGL (www.ivdgl.org)
and the EU Data Grid (http://www.eu-datagrid.org) and EGEE Projects, as well as several national Grid projects in
Europe and Japan.


                                                                            67
One of the surprising results of the TAN WG report, shown in Table 3, was that the present-
generation experiments (BaBar, D0 and CDF) were foreseen to have transatlantic network
bandwidth needs that equal or exceed the levels presently estimated by the LHC experiments
CMS and ATLAS. This is ascribed to the fact that the experiments now in operation are
distributing (BaBar), or plan to distribute (D0; CDF in Run 2b) substantial portions of their event
data to regional centers overseas, while the LHC experiments' plans through 2001 foresaw only
limited data distribution.
                                  2001         2002         2003          2004         2005          2006
             CMS                   100         200           300           600         800           2500
            ATLAS                   50         100           300           600         800           2500
            BABAR                  300         600          1100          1600         2300          3000
             CDF                   100         300           400          2000         3000          6000
             Dzero                 400         1600         2400          3200         6400          8000
             BTeV                   20          40           100           200         300            500
             DESY                  100         180           210           240         270            300


      Total BW Required           1070         3020         4810          8440        13870         22800
      US-CERN BW                155-310        622          1250          2500         5000         10000
      Installed or Planned
      Table 3. TAN WG (2001) Estimate of the Installed Transatlantic Bandwidth Requirements for
                                           HENP (Mbps)
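As a consistency check on Table 3 (purely illustrative, with all figures copied from the table itself), the following sketch sums the per-experiment estimates for each year and reproduces the "Total BW Required" row, including the growth from roughly 3 Gbps in 2002 to roughly 23 Gbps in 2006 quoted earlier in this section.

# Sum the per-experiment transatlantic bandwidth estimates of Table 3 (Mbps).
YEARS = [2001, 2002, 2003, 2004, 2005, 2006]
TABLE_3_MBPS = {
    "CMS":   [100, 200, 300, 600, 800, 2500],
    "ATLAS": [50, 100, 300, 600, 800, 2500],
    "BaBar": [300, 600, 1100, 1600, 2300, 3000],
    "CDF":   [100, 300, 400, 2000, 3000, 6000],
    "D0":    [400, 1600, 2400, 3200, 6400, 8000],
    "BTeV":  [20, 40, 100, 200, 300, 500],
    "DESY":  [100, 180, 210, 240, 270, 300],
}

totals = [sum(column) for column in zip(*TABLE_3_MBPS.values())]
for year, total in zip(YEARS, totals):
    print(f"{year}: {total} Mbps (~{total / 1000:.1f} Gbps)")
# Reproduces the totals 1070, 3020, 4810, 8440, 13870 and 22800 Mbps.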

        The corresponding bandwidth requirements at the US HEP labs and on the principal links
across the Atlantic (for production networking) are summarized110 in Table 4.

                     2001             2002            2003         2004            2005              2006
SLAC                 622              1244            1244         2500            2500              5000
BNL                  622              1244            1244         2500            2500              5000
FNAL                 622              2500            5000         10000          10000              20000
US-CERN              310                 622          1244         2500            5000              10000

US-DESY              155                 310          310          310             310                622


       Table 4 TAN WG (2001) Summary of Bandwidth Requirements at HEP Labs and on Main
                                 Transoceanic Links (Mbps)


         The estimates above for the LHC experiments are now known to be overly conservative,
and need to be updated. They did not accommodate some later, larger estimates of data volumes
and/or data acquisition rates (e.g. for ATLAS), nor did they account for the more pervasive and
persistent use of high resolution/high frame rate videoconferencing and other collaborative tools
expected in the future. They also ignore the needs of individuals and small groups working with
institute-based workgroup servers (Tier3) and desktops (Tier4) for rapid turnaround when

110
  The entries in the table correspond to standard commercial bandwidth offerings. OC3 = 155 Mbps, OC12 = 622
Mbps, OC48 = 2.5 Gbps and OC192 = 10 Gbps.


                                                      68
extracting and transporting small (up to ~100 Gbyte) data samples on demand. The bandwidth
estimates also did not accommodate the new network requirements arising from the now current
view of dynamic Grid systems that include caching, co-scheduling of data and compute
resources, and “Virtual Data” operations (see www.ivdgl.org) that lead to significant automated
data movements. Based on the results of the TAN report, and the above considerations, a new
baseline for the US-CERN link was developed in 2002, corresponding to 10 Gbps in production
by 2005, doubling the bandwidth in 2006 and 2007, and thus reaching 40 Gbps for production
networking in time for LHC startup.

In June 2003, the U.S. Department of Energy (DOE) established a roadmap111 for the networks
and collaborative tools that the U.S. Science Networking and Services environment requires for
the DOE-supported science fields, including astronomy/astrophysics, chemistry, climate,
environmental and molecular sciences, fusion, materials science, nuclear physics, and particle
physics. This roadmap, shown in Table 5, is meant to ensure that the network provided by DOE
for its science programs will be adequate in the future. In order to meet its goal, it is
recommended that the Roadmap be implemented between now and 2008. As in other advanced
optical network initiatives, the DOE Roadmap foresees the deployment of lambda-switching
within the next 2 to 3 years, and the use of multiple 10 Gbps wavelengths and/or native 40 Gbps
wavelengths within 4 to 5 years. A major challenge is that the technologies do not exist today to
take data from a single source and move it to a single remote destination beyond 10 Gbps. In fact,
doing this at this rate from data sources to data destinations even in the same computer center is
far from routine today. This challenge is known as the end-to-end (E2E) challenge. The network
techniques being considered for meeting the challenge of greater-than 10-Gbps data transport
include lambda circuit switching and optical packet switching, both of which are on the leading
edge of R&D.
In parallel, the roadmap proposes to improve local connectivity by replacing local loops with
MANs (metropolitan optical networks) in areas where there is close proximity to multiple Office
of Science laboratories. As shown on Figure 47, optical rings should provide continued high
quality production IP service, at least one backup path from sites to the nearby ESnet hub,
scalable bandwidth options from sites to the ESnet backbone, and point-to-point provisioned
high-speed circuits as an ESnet service. With endpoint authentication, the point-to-point paths are
private and intrusion resistant circuits, so they should be able to bypass site firewalls if the
endpoints (sites) trust each other.




111
      The Roadmap report is available at http://www.osti.gov/bridge/product.biblio.jsp?osti_id=815539


                                                           69
                   Table 5 DOE Science Networking Challenge: Roadmap to 2008




                                    70
                       Figure 47 Metropolitan optical network in California
Another interesting networking roadmap is the one established by the HEP community in China.
The major scientific programs supported by the Institute of High Energy Physics (IHEP) are in
the form of both domestic and international collaborations, including HEP experiments, cosmic
ray observation and astrophysics experiments, with strong demands on the domestic and
international network. The bandwidth needs are summarized in Table 6 and should essentially be
supported by the GLORIAD112 initiative. A total bandwidth of 622 Mbps for international
connectivity is judged to be necessary113 in 2004, and it will have to be upgraded to 2.5 Gbps in 2006 to
make possible the deployment of a Tier-1 national center in China. Further details are available in
Appendix 18.

 Applications                        Year 2004-2005                     Year 2006 and on
 LHC/LCG                             622Mbps                            2.5Gbps
 BES                                 100Mbps                            155Mbps
 YICRO                               100Mbps                            100Mbps
 AMS                                 100Mbps                            100Mbps
 Others                              100Mbps                            100Mbps
 Total (*)                           1Gbps                              2.5Gbps

                     Table 6. Prospective Need of Network for High Energy Physics




112
   See Section 6.4
113
   It should be noted that these requirements estimates are based on bottom-up estimates of data volumes
and flows. According to the experience of network requirements committees (at CERN and in the U.S. for
example), this tends to result in conservative, baseline estimates. Use of dynamic Grid systems, as
described in earlier sections of this report, is likely to lead to larger bandwidth requirements than are shown
in Table 6.


                                                      71
10.         Growth of HENP Network Usage in 2001-2004
The levels of usage on some of the main links have been increasing rapidly, tracking the increased
bandwidth on the main network backbones and transoceanic links.

In 1999-2000, the speeds of the largest data transfers over long distances for HENP were, with
very few exceptions, limited to just a few Mbps, because of the link bandwidths and/or TCP
protocol issues. In 2001, following link upgrades and tuning of the network protocol, large scale
data transfers on the US-CERN link in the range of 20-100 Mbps were made possible for the first
time, and became increasingly common for BaBar, CMS and ATLAS. Data transfer volumes of 1
Terabyte per day, equivalent to roughly 100 Mbps used around the clock, were observed by the
Fall of 2001 for BaBar. These high speed transfers were made possible by the quality of the links,
which in many cases are nearly free of packet loss, combined with the modification of the default
parameter settings of TCP and the use of parallel data streams.114
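The TCP tuning referred to above essentially amounts to matching the TCP window to the bandwidth-delay product of the path, or splitting a transfer across parallel streams when the per-stream window is limited. The following sketch shows the arithmetic; the target rate, round-trip time and default window size are assumptions chosen for illustration, not measurements reported here.

# Bandwidth-delay product (BDP) arithmetic behind the TCP tuning described
# in the text. All path parameters below are illustrative assumptions.
TARGET_RATE_BPS = 622e6           # assumed target: an OC-12-class (622 Mbps) flow
RTT_SECONDS = 0.120               # assumed ~120 ms transatlantic round-trip time
DEFAULT_WINDOW_BYTES = 64 * 1024  # a typical default TCP window of the era

bdp_bytes = TARGET_RATE_BPS * RTT_SECONDS / 8
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.1f} MB in flight")

# A single stream with the default window is limited to window/RTT:
single_stream_bps = DEFAULT_WINDOW_BYTES * 8 / RTT_SECONDS
print(f"Single stream, default window: {single_stream_bps / 1e6:.1f} Mbps")

# Hence either enlarge the per-connection buffers to about the BDP, or use
# roughly BDP/window parallel streams:
print(f"Parallel streams needed at the default window: "
      f"~{bdp_bytes / DEFAULT_WINDOW_BYTES:.0f}")

# The 1 TB/day <-> ~100 Mbps equivalence quoted above:
print(f"1 TB/day = {1e12 * 8 / 86400 / 1e6:.0f} Mbps sustained")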

In 2002, with many of the national and continental network backbones for research and
education, and the major transoceanic links used by our field reaching the 2.5-10 Gbps range115,
data volumes transferred were frequently 1 TB per day and higher for BaBar, and at a similar
level for CMS during “data challenges”. The upgrades of the ESNet (www.es.net) links to SLAC
and FNAL to OC12 in 2002 and early 2003, together with the backbone and transatlantic link upgrades, have
led to transfers of several hundred Mbps being increasingly common at the time of this report.
While the growth of bandwidth usage at HENP labs in the U.S. during 2003 has been limited by
connectivity to the ESnet backbone (still at OC-12 or 622 Mbps), plans are now getting underway
(as summarized in the previous section) to remove these limitations.

The current bandwidth used by BaBar, including ESNet traffic, is typically in the 400 Mbps range
(i.e. ~4 TB/day equivalent) and is expected to rise as ESNet upgrades are put into service. The
long term trends, and future projections for network traffic, associated with distributed production
processing of events for BaBar, are shown in Figure 48.




114
      See http://www-iepm.slac.stanford.edu/monitoring/bulk/ and http://www.datatag.org
115
Able to carry a theoretical maximum, at 100% efficiency, of roughly 25 – 100 TB/day.


                                                           72
          Figure 48 Long term trends and projections for SLAC’s network requirements
                                  for offsite production traffic
The rate of HENP traffic growth and the HENP network requirements described in section 9 are
generally consistent with the growth of data traffic in the world. This trend, corresponding to
a bandwidth increase by a factor of ~500 to 1000 every 10 years, is confirmed by
the two examples below.
Figure 49 below shows an example of the traffic growth in a research network. The annual
growth factor of ESnet traffic over the past five years has increased from 1.7x to just over
2.0x (the latter corresponds to a factor of about 1000 per decade).
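As a quick arithmetic check of these statements (purely illustrative): an annual growth factor g compounds to g^10 over a decade, so 2.0x per year gives the quoted factor of about 1000 per decade, while the ~500-1000x per decade range corresponds to annual factors of roughly 1.9 to 2.0.

# Compound the quoted annual traffic growth factors over ten years.
for annual_factor in (1.7, 2.0):
    print(f"{annual_factor}x per year -> factor {annual_factor ** 10:,.0f} per decade")
# 1.7x per year compounds to only ~200x per decade; 2.0x per year to ~1024x.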




                 Figure 49 ESnet Has Experienced Exponential Growth Since 1992




                                                73
Another interesting example is the aggregate flow traffic through the Amsterdam Internet
Exchange point (AMS-IX116) shown in Figure 50. It shows that the Internet traffic grows by 75-
100% per year (with the maximum rate of growth typically occurring in the summer and fall).




                         Figure 50 Traffic at the Amsterdam Internet Exchange Point




116
      See http://www.ams-ix.net/


                                                    74
11. HEP Challenges in Information Technology

The growth in HENP bandwidth requirements and network usage, and the associated need for
advanced network R&D in our field, are driven by the fact that HENP's current generation of
major experiments at SLAC, KEK, BNL and Fermilab, and the next generation of LHC
experiments, face unprecedented challenges in data access, processing and distribution, and in
collaboration across national and international networks. The challenges include:

   •  Providing rapid access to data subsets drawn from massive data stores, rising from
      Petabytes in 2003 to ~100 Petabytes by 2007, and ~1 Exabyte (10^18 bytes) by
      approximately 2012 to 2015.
   •  Providing secure, efficient and transparent managed access to heterogeneous,
      worldwide-distributed computing and data handling resources, across an ensemble of
      networks of varying capability and reliability.
   •  Providing the collaborative infrastructure and tools that will make it possible for
      physicists in all world regions to contribute effectively to the analysis and the physics
      results, including from their home institutions. Once the infrastructure is in place, a
      new “culture of collaboration”, strongly supported by the managements of the HENP
      laboratories and the major experiments, will be required to make it possible to take
      part in the principal lines of analysis from locations remote from the site of the
      experiment.
   •  Integrating all of the above infrastructures to produce the first Grid-based, managed
      distributed systems serving “virtual organizations” on a global scale.


12. Progress in Network R&D

In order to help meet its present and future needs for reliable, high performance networks, our
community has engaged in network R&D over the last few years. In 2003, we made substantial
progress in the development and use of networks up to the multi-Gbps speed range, and in the
production use of data transfers at speeds up to the 1 Gbps range (storage to storage).

Extensive tests of the maximum attainable throughput have continued in the IEPM “Bandwidth
to the World” project117 at SLAC. HENP has also been involved in recent modifications and
developments of the basic Transmission Control Protocol (TCP), which carries roughly 90% of
the traffic on the Internet. This was made possible through the use of advanced network testbeds
across the US and Canada118, across the Atlantic and Pacific, and across Europe119. Transfers at
5.6 Gbps120 were demonstrated over distances of up to 11,500 km in 2003. Progress made over the
past year is summarized in Table 7, which shows the history of the Internet2 Land Speed Records
(LSR)121 in the single TCP stream class. The LSR awards honor the highest TCP throughput over the longest
117
    See http://www-iepm.slac.stanford.edu/bw
118
    Many demonstrations of advanced network and Grid capabilities took place, for example, at the SuperComputing
2002 and 2003 Conferences, notable for the increase in the scale of HENP participation and prominence, relative to
2001. See http://www.sc-conference.org/sc2002/ and http://www.sc-conference.org/sc2003/
119
    See http://www.datatag.org
120
    See http://lsr.internet2.edu/history.html
121
    See http://lsr.internet2.edu/


distance (the product of the throughput and the terrestrial distance between the two end-hosts)
achieved with a single TCP stream. We expect transfers on this scale to be in production use
between Fermilab and CERN later this year, taking advantage of the 10 Gbps bandwidth that
will be available by mid-2004.

           [Chart: Internet2 land speed record history (in terabit-meters/second), showing the
           growth of the IPv4 and IPv6 single-stream records from Mar-00 through Nov-03]
           Table 7 Internet2 Land Speed Record history. The records of the last year are held by
                                         the HEP community.
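
For orientation, the LSR metric is simply the product of throughput and terrestrial distance; the
short calculation below (using the ~5.6 Gbps over ~11,500 km record quoted above) reproduces
the scale of the largest entries in the chart.

    # Internet2 Land Speed Record metric: throughput x terrestrial distance,
    # expressed in terabit-meters per second.
    def lsr_metric(throughput_gbps, distance_km):
        bits_per_second = throughput_gbps * 1e9
        meters = distance_km * 1e3
        return bits_per_second * meters / 1e12   # terabit-meters/second

    print(lsr_metric(5.6, 11500))   # ~64,400 terabit-meters/second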
In November 2003, a team of scientists and network engineers from Caltech, SLAC, LANL and
CERN joined forces at the SuperComputing 2003 Bandwidth Challenge and captured the
“Sustained Bandwidth Award” for the demonstration of “Distributed Particle Physics Analysis
Using Ultra-High Speed TCP on the Grid”, with a record achieved bandwidth of 23.2
Gigabits/sec (23.2 billion bits per second). The data, generated on the SC2003 show floor in
Phoenix, was sent to sites in four countries (USA, Switzerland, the Netherlands and Japan) on
three continents. The demonstration served to preview future Grid systems on a global scale, in
which communities of hundreds to thousands of scientists around the world would be able to
access, process and analyze data samples of up to Terabytes in size, drawn from data stores
thousands of times larger.


13. Upcoming Advances in Network Technologies

Over the last few years, optical component technology has rapidly evolved to support
multiplexing and amplification of ever-increasing digital modulation rates. As discussed
throughout this report, links using 10 Gbps modulation are now in widespread use, and some
vendors already have components122 and integrated products123 for links based on 40 Gbps
modulation. Therefore, even though 10 Gigabit Ethernet (10 GbE) is a relatively new technology,
it is no surprise that existing optical devices and 10 Gigabit Ethernet optical interfaces can be


122
      See for example http://www.convergedigest.com/Silicon/40gbps.asp
123
      See http://www.procket.com/pdf/8812_datasheet.pdf


successfully married to support 10 Gbps transmission speeds over Long Haul or Extended Long
Haul optical connections.

It is now possible to transmit 10 GbE data traffic over 2000 km without any O-E-O124 regeneration.
Therefore, it is possible to implement a backbone optical network carrying 10 GbE traffic directly,
without the need for an expensive SONET/SDH125 infrastructure. The NLR infrastructure
described in Section 7.1 relies on the new Cisco 15808 DWDM long haul multiplexers, which can
multiplex up to 80 wavelengths (lambdas) on a single fiber.
For very long haul connections, typically trans-oceanic connections, SONET technology will
remain in place for several years, because the cost of replacing the current equipment is not
justified. However, the way we use SONET infrastructures may change. The new 10 GbE
WAN-PHY126 standard is “SONET friendly” and defines how to carry Ethernet frames across
SONET networks127. In the future, the use of 10 GbE may be generalized to all parts of the
backbones, replacing the expensive Packet over SONET (POS) technology:
              o    10 GbE WAN-PHY for “very” long haul connections (with O-E-O regeneration)
              o    10 GbE LAN-PHY for regional, metro and local area networks (with only O-O
                   optical amplifiers)
In addition to the generalization of 10 GbE technology in wide and local area networks, there
are now strong prospects for breakthrough advances in a wide range of network technologies
within the next one to five years, including:
   •  Optical fiber infrastructure: switches, routers and optical multiplexers supporting
      multiple 10 Gigabit/sec (Gbps) and 40 Gbps wavelengths, and possibly higher speeds;
      a greater number of wavelengths on a fiber; and possibly dynamic path building128
   •  New versions of the basic Transmission Control Protocol (TCP), and/or other protocols
      that provide stable and efficient data transport at speeds at and above 10 Gbps. Interesting
      developments in this area include FAST TCP129, GridDT130, UDT131, HSTCP132, BIC-TCP133,
      H-TCP134 and TCP Westwood+135 (a minimal configuration sketch is shown after this list)
   •  Mobile handheld devices with the I/O, wireless network speed and computing capability
      to support persistent, ubiquitous data access and collaborative work
   •  Generalization of 10 Gbps Ethernet network interfaces on servers136, and eventually PCs
   •  Ethernet at 40 Gbps or 100 Gbps
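
As the configuration sketch promised above, the fragment below shows one way such an alternative
congestion-control algorithm can be requested for a single socket on Linux kernels that expose
pluggable congestion control (a kernel feature newer than most systems deployed at the time of this
report); the algorithm name, host and port are assumptions, not a recommendation.

    # Hypothetical sketch: request an alternative TCP congestion-control
    # algorithm (H-TCP here) for one bulk-transfer socket on a Linux host
    # whose kernel provides pluggable congestion control.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"htcp")
    except (AttributeError, OSError):
        # TCP_CONGESTION is Linux-specific and the module may not be loaded;
        # fall back to the system default congestion control in that case.
        pass
    s.connect(("receiver.example.org", 5001))   # placeholder far-end host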
The many new projects being started, such as HOPI, GLIF, NetherLight, UKLight, Ultranet and

124
    Optical – Electrical – Optical regeneration
125
    For an explanation of SONET and SDH see http://www.techfest.com/networking/wan/sonet.htm
126
    The first transatlantic WAN-PHY connection was demonstrated by CERN, CANARIE, Carleton University
and SURFnet in September 2003. See https://edms.cern.ch/file/440356/1/article_ottawa_mk5_atlas.pdf and Appendix
24.
127
    See for example http://www.force10networks.com/applications/pdf/CP_cern.pdf for a description of the August
2003 10 GE WAN-PHY tests between Ottawa, Amsterdam and CERN.
128
    This would allow a combination of the circuit-switched and packet-switched network paradigms, in ways yet to be
investigated and developed. See for example “Optical BGP Networks” by W. St. Arnaud et al.,
http://www.canarie.ca/canet4/library/c4design/opticalbgpnetworks.pdf
129
    http://netlab.caltech.edu/FAST/
130
    http://sravot.home.cern.ch/sravot/GridDT/GridDT.htm
131
    UDT is a UDP-based Data Transport Protocol. See http://www.rgrossman.com/sabul.htm.
132
    See http://www.icir.org/floyd/hstcp.html.
133
    http://www.csc.ncsu.edu/faculty/rhee/export/bitcp/index.htm.
134
    http://icfamon.dl.ac.uk/papers/DataTAG-WP2/reports/task1/20031125-Leith.pdf.
135
    http://www-ictserv.poliba.it/mascolo/tcp%20westwoood.htm.
136
    These interfaces are now available from Intel, S2IO and Napatech (Denmark).


Ultralight137, will within the next few years design and deploy new ways to support and manage
high-capacity shared IP packet-switched flows, as well as dynamically provisioned optical lambdas.

14. Meeting the challenge: HENP Networks in 2005-10;
    Petabyte-Scale Grids with Terabyte Transactions
Given the continued decline of network prices per unit bandwidth, and the technology
developments summarized above, a shift to a more “dynamic” view of the role of networks began
to emerge during 2002, triggered in part by planning for and initial work on a “Grid-enabled
Analysis Environment138”. This has led, in 2003, to an increasingly dynamic “system” view of the
network (and the Grid system built on top of it), where physicists at remote locations could
conceivably, a few years from now, extract Terabyte-sized subsets of the data drawn from multi-
petabyte data stores on demand, and if needed rapidly deliver this data to their home-sites.

If it becomes economically feasible to deliver this data in a short “transaction” lasting minutes,
rather than hours, this would enable remote computing resources to be used more effectively,
while making physics groups remote from the experiment better able to carry out competitive
data analyses. Completing these data-intensive transactions in just minutes would increase the
likelihood of the transaction being completed successfully, and it would substantially increase the
physics groups' working efficiency. Such short transactions are also necessary to avoid the
bottlenecks and fragility of the Grid system that would result if hundreds to thousands of such
requests were left pending for long periods, or if a large backlog of requests was permitted to
build up over time.

It is important to note that transactions on this scale, while still representing very small fractions
of the data, correspond to throughputs across networks of 10 Gbps and up. A 1000 second-long
transaction shipping 1 TByte of data corresponds to 8 Gbps of net throughput. Larger
transactions, such as shipping 100 TBytes between Tier1 centers in 1000 seconds, would require
0.8 Terabits/sec (comparable to the capacity of a fully instrumented fiber today).
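
The throughput figures above follow directly from the transaction size and duration; a small
illustrative helper (the function name is ours, not part of any existing tool) makes the scaling
explicit.

    # Network throughput required to complete a data "transaction"
    # of a given size within a given time.
    def required_gbps(terabytes, seconds):
        return terabytes * 8e12 / seconds / 1e9   # TB -> bits -> Gbps

    print(required_gbps(1, 1000))     # 8.0 Gbps for 1 TByte in 1000 seconds
    print(required_gbps(100, 1000))   # 800.0 Gbps (0.8 Tbps) for 100 TBytes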

These considerations, along with the realization that network vendors and academic and research
organizations are planning a rapid transition to optical networks with higher network speeds and
much higher aggregate link capacities, led to a roadmap for HENP networks in the coming
decade, shown in Table 8. Using the US-CERN production and research network links139 as an
example of the possible evolution of major network links in our field, the roadmap140 shows
progressive upgrades every 2-3 years, going from the present 2.5-10 Gbps range to the Tbps
range within approximately the next 10 years. The column on the right shows the progression
from static bandwidth provisioning (up to today) to the future use of multiple wavelengths on an
optical fiber, and the increasingly dynamic provision of end-to-end network paths through optical
circuit switching, to meet the needs of the most demanding science applications.



137
      See http://ultralight.caltech.edu
138
    See for example http://pcbunn.cacr.caltech.edu/GAE/GAE.htm, http://ultralight.caltech.edu/gaeweb and
http://www.crossgrid.org
139
    Jointly funded by the US DOE and NSF, CERN and European Union.
140
    Source: H. Newman. Also see “Computing and Data Analysis for Future HEP Experiments”
presented by M. Kasemann at the ICHEP02 Conference, Amsterdam (7/02).
See http://www.ichep02.nl/index-new.html




It should be noted that the roadmap in Table 8 is a “middle of the road” projection. The rates of
increase fall between the rate experienced from 1985 to 1995, before deregulation of the
telecommunications industry (a factor of 200-400 per decade), and that of the current decade,
where the improvement from 1995 to 2005 will be a factor of 2500-5000.


           Year   Production (Gbps)        Experimental (Gbps)         Remarks
           2001   0.155                    0.622-2.5                   SONET/SDH
           2002   0.622                    2.5                         SONET/SDH; DWDM; GigE Integration
           2003   2.5                      10                          DWDM; 1 + 10 GigE Integration
           2005   10                       2-4 X 10                    Switch; Provisioning
           2007   2-4 X 10                 ~10 X 10; 40 Gbps           1st Gen. λ Grids
           2009   ~10 X 10 or 1-2 X 40     ~5 X 40 or ~20-50 X 10      40 Gbps λ Switching
           2011   ~5 X 40 or ~20 X 10      ~25 X 40 or ~100 X 10       2nd Gen. λ Grids; Terabit Networks
           2013   ~Terabit                 ~MultiTbps                  ~Fill One Fiber
                Table 8 A roadmap for the major links used by HENP networks through 2013.
            Future projections follow the average trend of affordable bandwidth increases over
               the last 20 years: a factor of ~500 to 1000 in performance every 10 years.



15. Coordination with Other Network Groups and
    Activities

In addition to the IEPM project mentioned above, there are a number of other groups sharing
experience and developing guidelines for best practices aimed at high-performance network use:
   •  The DataTAG project (http://www.datatag.org)
   •  The CHEPREO project (http://www.chepreo.org/)
   •  The Internet2 End-to-End Initiative (http://www.internet2.edu/e2e)
   •  The Internet2 HENP Working Group141 (see http://www.internet2.edu/henp)
   •  The Internet2 HOPI Initiative

ICFA-SCIC is coordinating its efforts with these activities to achieve synergy and avoid
duplication of efforts, and will continue to do so in the coming year.


141
      Chaired by S. McKee (Michigan) and H. Newman (Caltech).


Grid projects such as GriPhyN/iVDGL, PPDG, the EU DataGrid, EGEE and the LHC Computing
Grid Project142 rely heavily on the quality of our networks, and on the availability of reliable,
high-performance networks to support the execution of Grid operations. An international Grid
Operations Center is planned at Indiana University (see http://igoc.iu.edu/igoc/index.html).
There is a Grid High Performance Networking Research Group in the Global Grid Forum
(http://www.epm.ornl.gov/ghpn/GHPNHome.html).

ICFA-SCIC, through its inter-regional character and the involvement of its members in various
Grid projects, has a potentially important role to play in the achievement of a consistent set of
guidelines and methodologies for network usage in support of Grids.

16. Broader Implications: HENP and the World Summit
    on the Information Society

HENP's network requirements, and its R&D on networks and Grid systems, have put it in the
spotlight as a leading scientific discipline and application area for the use of current and future
state-of-the-art networks, as well as a leading field in the development of new technologies that
support worldwide information distribution, sharing and collaboration. In the past year these
developments, and work on the Digital Divide (including some of the work by the ICFA SCIC),
have been recognized by the world's governments and international organizations as being vital
for the formation of a worldwide “Information Society”.143

We were invited, on behalf of HENP and the Grid projects, to organize a technical session on
“The Role of New Technologies in the Formation of the Information Society”144 at the WSIS
Pan-European Ministerial Meeting in Bucharest in November 2002 (http://www.wsis-romania.ro/);
we then took an active part in organizing the conference on the Role of Science in the
Information Society (RSIS), a Summit Event of the World Summit on the Information Society
(WSIS), organized by CERN and held in Geneva from 10-12 December 2003. The RSIS's goal
was to illuminate science's continuing role in driving the future of information and communication
technologies. A Science and Information Society Forum145 during WSIS was organized by CERN,
and an SIS Online Stand was constructed by CERN and Caltech. Demonstrations and
presentations at the Forum and Online Stand showed how advanced networking technology (used
daily by the particle physics community) can bring benefits in a variety of fields, including
medical diagnostics and imaging, e-learning, distribution of video material, teaching lectures,
and distributed conferences and discussions around the world.

         The timeline and key documents relevant to the WSIS may be found at the US State
         Department site http://www.state.gov/e/eb/cip/wsis/ . As an example, the “Tokyo
         Declaration”, issued after the January 2003 WSIS Asia-Pacific Regional Conference,


142
    See http://ppdg.net, http://www.griphyn.org, http://www.ivdgl.org, http://www.eu-datagrid.org/ and
http://lhcgrid.web.cern.ch/LHCgrid/
143
    The formation of an Information Society has been a central theme in government agency and diplomatic circles
throughout 2002-3, leading up to the World Summit on the Information Society (WSIS; see http://www.itu.int/wsis/ )
in Geneva in December 2003 and in Tunis in 2005. The timeline and key documents may be found at the US State
Department site http://www.state.gov/e/eb/cip/wsis/
144
    The presentations, opening and concluding remarks from the New Technologies session, as well as the General
Report from the Bucharest conference, may be found at http://cil.cern.ch:8080/WSIS
145
    See http://sis-forum.web.cern.ch/SIS-Forum/


         defines a “Shared Vision of the Information Society” that has remarkable synergy with
         the qualitative needs of our field, as follows:

         “The concept of an Information Society is one in which highly-developed ICT
         [Information and Communication Technology] networks, equitable and ubiquitous
         access to information, appropriate content in accessible formats and effective
         communication can help people achieve their potential…”

The Declaration continues with the broader economic and social goals, as follows:

         [to] Promote sustainable economic and social development, improve quality of life for
         all, alleviate poverty and hunger, and facilitate participatory decision processes.


17. Relevance of Meeting These Challenges for Future
    Networks and Society
Successful construction of network and Grid systems able to serve the global HENP and other
scientific communities with data-intensive needs could have wide-ranging effects on research,
industrial and commercial operations. Resilient self-aware systems developed by the HENP
community, able to support a large volume of robust Terabyte and larger transactions, and to
adapt to a changing workload, could provide a strong foundation for distributed data-intensive
research in many fields, as well as the most demanding business processes of multinational
corporations of the future.

Development of the new generation of systems of this kind, and especially the recent ideas and
initial work in the HENP community on “Grid-Enabled Analysis” and “Grid Enabled
Collaboratories”146 could also lead to new modes of interaction between people and “persistent
information” in their daily lives. Learning to provide, efficiently manage and absorb this
information in a persistent, collaborative environment would have a profound effect on our
society and culture.

Providing the high-performance global networks required by our field, as discussed in this report,
would enable us to build the needed Grid environments and carry out our scientific mission. But
it could also be one of the key factors triggering a widespread transition, to the next generation of
global information systems in everyday life.

As we progress towards these goals, we must make every effort to ensure that scientists from all
regions of the world are able to take part in, and benefit from these developments, and in so
doing be full partners in the process of search and discovery at the high energy frontier.




146
   These terms refer to some of the concepts in current proposals by US CMS and US ATLAS to the NSF in 2003.
They refer to integrated collaborative working environments aimed at effective worldwide data analysis and
knowledge sharing, that fully exploit Grid technologies and networks.



								