Networks for HENP and ICFA SCIC

Harvey B. Newman
California Institute of Technology
APAN High Energy Physics Workshop, January 21, 2003

Next Generation Networks for Experiments: Goals and Needs
- Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams
- Providing rapid access to event samples, subsets and analyzed physics results from massive data stores: from Petabytes by 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
- Providing analyzed results with rapid turnaround, by coordinating and managing the large but LIMITED computing, data handling and NETWORK resources effectively
- Enabling rapid access to the data and the collaboration, across an ensemble of networks of varying capability
- Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, monitored, quantifiable high performance

Four LHC Experiments: The Petabyte to Exabyte Challenge

ATLAS, CMS, ALICE, LHCb: Higgs + new particles; Quark-Gluon Plasma; CP violation
- Data stored: ~40 Petabytes/year and up; CPU: 0.30 Petaflops and up
- 0.1 Exabyte (2007) to ~1 Exabyte (~2012?) for the LHC experiments (1 EB = 10^18 Bytes)

LHC Data Grid Hierarchy
[Diagram: the tiered LHC computing model and its data flows]
- Online System (experiment) at ~PByte/sec → Tier 0+1 at CERN (~100-400 MBytes/sec into the Tier 0; 700k SI95, ~1 PB disk, tape robot)
- CERN/Outside resource ratio ~1:2; Tier0 : (sum of Tier1s) : (sum of Tier2s) ≈ 1:1:1
- Tier 1 centers (e.g. FNAL, RAL, INFN, IN2P3) linked at ~2.5 Gbps
- Tier 2 centers at ~2.5 Gbps; Tier 3 institutes (~0.25 TIPS, with physics data caches); Tier 4 workstations, at 0.1 to 10 Gbps
- Tens of Petabytes by 2007-8; an Exabyte within ~5 years after that

ICFA and Global Networks for HENP
National and international networks, with sufficient (rapidly increasing) capacity and capability, are essential for:
- The daily conduct of collaborative work in both experiment and theory
- Detector development & construction on a global scale; data analysis involving physicists from all world regions
- The formation of worldwide collaborations
- The conception, design and implementation of next generation facilities as "global networks"

"Collaborations on this scale would never have been attempted, if they could not rely on excellent networks"

ICFA and International Networking
- ICFA Statement on Communications in Int'l HEP Collaborations of October 17, 1996
  See http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
- "ICFA urges that all countries and institutions wishing to participate even more effectively and fully in international HEP Collaborations should:
  - Review their operating methods to ensure they are fully adapted to remote participation
  - Strive to provide the necessary communications facilities and adequate international bandwidth"


ICFA Network Task Force: 1998 Bandwidth Requirements Projection (Mbps)

                                                  1998                2000            2005
BW utilized per physicist (and peak BW used)      0.05-0.25 (0.5-2)   0.2-2 (2-10)    0.8-10 (10-100)
BW utilized by a university group                 0.25-10             1.5-45          34-622
BW to a home laboratory or regional center        1.5-45              34-155          622-5000
BW to a central laboratory housing one or
  more major experiments                          34-155              155-622         2500-10000
BW on a transoceanic link                         1.5-20              34-155          622-5000

100-1000 X bandwidth increase foreseen for 1998-2005
See the ICFA-NTF Requirements Report: http://l3www.cern.ch/~newman/icfareq98.html

ICFA Standing Committee on Interregional Connectivity (SCIC)
- Created by ICFA in July 1998 in Vancouver, following ICFA-NTF
- CHARGE: make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe (and the network requirements of HENP)
  As part of the process of developing these recommendations, the committee should:
  - Monitor traffic
  - Keep track of technology developments
  - Periodically review forecasts of future bandwidth needs, and
  - Provide early warning of potential problems
- Create subcommittees when necessary to meet the charge
- The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors (Feb. 2003)
- Representatives: major labs, ECFA, ACFA, NA users, S. America

ICFA-SCIC Core Membership
- Representatives from major HEP laboratories: W. Von Reuden (CERN), Volker Guelzow (DESY), Vicky White (FNAL), Yukio Karita (KEK), Richard Mount (SLAC)
- User representatives: Richard Hughes-Jones (UK), Harvey Newman (USA), Dean Karlen (Canada)
- For Russia: Slava Ilyin (MSU)
- ECFA representatives: Denis Linglin (IN2P3, Lyon), Frederico Ruggieri (INFN Frascati)
- ACFA representatives: Rongsheng Xu (IHEP Beijing), H. Park, D. Son (Kyungpook Nat'l University)
- For South America: Sergio F. Novaes (University of Sao Paulo)

SCIC Sub-Committees
- Web page: http://cern.ch/ICFA-SCIC/
- Monitoring: Les Cottrell (http://www.slac.stanford.edu/xorg/icfa/scic-netmon), with Richard Hughes-Jones (Manchester), Sergio Novaes (Sao Paulo), Sergei Berezhnev (RUHEP), Fukuko Yuasa (KEK), Daniel Davids (CERN), Sylvain Ravot (Caltech), Shawn McKee (Michigan)
- Advanced Technologies: Richard Hughes-Jones, with Vladimir Korenkov (JINR, Dubna), Olivier Martin (CERN), Harvey Newman
- The Digital Divide: Alberto Santoro (Rio, Brazil), with Slava Ilyin, Yukio Karita, David O. Williams; also Dongchul Son (Korea), Hafeez Hoorani (Pakistan), Sunanda Banerjee (India), Vicky White (FNAL)
- Key Requirements: Harvey Newman; also Charlie Young (SLAC)

Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*] (in Mbps)

            2001      2002     2003     2004     2005     2006
CMS          100       200      300      600      800     2500
ATLAS         50       100      300      600      800     2500
BaBar        300       600     1100     1600     2300     3000
CDF          100       300      400     2000     3000     6000
D0           400      1600     2400     3200     6400     8000
BTeV          20        40      100      200      300      500
DESY         100       180      210      240      270      300
CERN BW   155-310      622     2500     5000    10000    20000

[*] BW requirements increasing faster than Moore's Law
See http://gate.hep.anl.gov/lprice/TAN
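The [*] note can be checked in a few lines. The Python sketch below compares the CERN BW row of the table with Moore's-law growth; the 18-month doubling period is a conventional assumption, not a figure from the talk.

```python
# Compare the CERN BW row of the table (2001 -> 2006) with Moore's-law growth,
# taken here as a doubling every 18 months (a conventional assumption).
bw_2001 = 310.0      # Mbps, high end of the 155-310 entry for 2001
bw_2006 = 20000.0    # Mbps, the 2006 entry
years = 5

link_growth  = bw_2006 / bw_2001          # ~65x over 5 years
moore_growth = 2 ** (years * 12 / 18)     # ~10x over 5 years
print(f"CERN link: ~{link_growth:.0f}x vs Moore's law: ~{moore_growth:.0f}x over {years} years")
```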

History – One Large Research Site
- Much of the traffic: SLAC ↔ IN2P3/RAL/INFN, via ESnet+France and Abilene+CERN
- Current traffic ~400 Mbps; ESnet limitation
- Projections: 0.5 to 24 Tbps by ~2012

Tier0-Tier1 Link Requirements Estimate: for Hoffmann Report 2001

1) Tier1 ↔ Tier0 data flow for analysis                      0.5 - 1.0 Gbps
2) Tier2 ↔ Tier0 data flow for analysis                      0.2 - 0.5 Gbps
3) Interactive collaborative sessions (30 peak)              0.1 - 0.3 Gbps
4) Remote interactive sessions (30 flows peak)               0.1 - 0.2 Gbps
5) Individual (Tier3 or Tier4) data transfers
   (limited to 10 flows of 5 Mbytes/sec each)                0.8 Gbps
   TOTAL per Tier0-Tier1 link                                1.7 - 2.8 Gbps

NOTE: Adopted by the LHC experiments; given in the upcoming Hoffmann Steering Committee Report as "1.5 - 3 Gbps per experiment"
- Corresponds to ~10 Gbps baseline BW installed on the US-CERN link
- The Hoffmann Panel also discussed the effects of higher bandwidths, for example all-optical 10 Gbps Ethernet across WANs
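As a quick cross-check of the TOTAL row, a minimal Python sketch that sums the low and high ends of the five contributions listed above (all values copied from the table):

```python
# Sum the five per-link flow estimates from the table above (values in Gbps).
flows = {
    "Tier1 <-> Tier0 analysis":             (0.5, 1.0),
    "Tier2 <-> Tier0 analysis":             (0.2, 0.5),
    "Interactive collaborative (30 peak)":  (0.1, 0.3),
    "Remote interactive (30 flows peak)":   (0.1, 0.2),
    "Tier3/Tier4 individual transfers":     (0.8, 0.8),
}

low  = sum(lo for lo, _ in flows.values())
high = sum(hi for _, hi in flows.values())
print(f"Total per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")   # 1.7 - 2.8 Gbps
```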

Tier0-Tier1 BW Requirements Estimate: for Hoffmann Report 2001
- Does not include the more recent ATLAS data estimates:
  - 270 Hz at 10^33 instead of 100 Hz
  - 400 Hz at 10^34 instead of 100 Hz
  - 2 MB/event instead of 1 MB/event
- Does not allow fast download to Tier3+4 of "small" object collections
  - Example: download 10^7 events of AODs (10^4 bytes each) → 100 GBytes; at 5 Mbytes/sec per person (above) that is 6 hours! (a short check of this arithmetic follows after this list)
- This is still a rough, bottom-up, static, and hence conservative model
- A dynamic distributed DB or "Grid" system with caching, co-scheduling, and pre-emptive data movement may well require greater bandwidth
- Does not include "Virtual Data" operations: derived data copies; data-description overheads
- Further MONARC computing model studies are needed
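The 6-hour figure in the AOD example is straightforward arithmetic; a minimal Python check, with the event count, event size and per-person rate taken from the bullet above:

```python
# Back-of-envelope check of the AOD download example above.
n_events   = 1e7      # 10^7 events
event_size = 1e4      # 10^4 bytes per AOD event
rate       = 5e6      # 5 Mbytes/sec per person, the limit assumed above

total_bytes = n_events * event_size              # 1e11 bytes = 100 GBytes
hours = total_bytes / rate / 3600.0
print(f"{total_bytes/1e9:.0f} GBytes at 5 MB/s -> about {hours:.1f} hours")   # ~5.6, i.e. ~6 hours
```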

ICFA SCIC Meetings [*] and Topics
- Focus on the Digital Divide this year: identification of problem areas; work on ways to improve
- Network status and upgrade plans in each country
- Performance (throughput) evolution in each country, and transatlantic
- Performance monitoring world-overview (Les Cottrell, IEPM project)
- Specific technical topics (examples): bulk transfer, new protocols; collaborative systems, VoIP
- Preparation of reports to ICFA (Lab Directors' Meetings)
  - Last report: World Network Status and Outlook – Feb. 2002
  - Next report: Digital Divide, + Monitoring, Advanced Technologies; Requirements Evolution – Feb. 2003
[*] Seven meetings in 2002; the last at KEK on December 13.

Network Progress in 2002 and Issues for Major Experiments
- Backbones & major links advancing rapidly to the 10 Gbps range
- "Gbps" end-to-end throughput data flows have been tested; will be in production soon (in 12 to 18 months)
- Transition to multi-wavelengths within 1-3 years in the "most favored" regions
- Network advances are changing the view of the net's roles
  - Likely to have a profound impact on the experiments' computing models, and bandwidth requirements
  - More dynamic view: GByte to TByte data transactions; dynamic path provisioning
- Net R&D driven by advanced integrated applications, such as Data Grids, that rely on seamless LAN and WAN operation, with reliable, quantifiable (monitored), high performance
- All of the above will further open the Digital Divide chasm. We need to take action.

ICFA SCIC: R&E Backbone and International Link Progress
- GEANT pan-European backbone (http://www.dante.net/geant): now interconnects >31 countries; many trunks at 2.5 and 10 Gbps
- UK: SuperJANET core at 10 Gbps; 2.5 Gbps NY-London, with 622 Mbps to ESnet and Abilene
- France (IN2P3): 2.5 Gbps RENATER backbone from October 2002; Lyon-CERN link upgraded to 1 Gbps Ethernet; proposal for dark fiber to CERN by end 2003
- SuperSINET (Japan): 10 Gbps IP and 10 Gbps wavelength core; Tokyo-NY links: 2 x 2.5 Gbps started; peer with ESnet by Feb.
- CA*net4 (Canada): interconnect customer-owned dark fiber nets across Canada at 10 Gbps, started July 2002; "Lambda-Grids" by ~2004-5
- GWIN (Germany): 2.5 Gbps core; connect to the US at 2 x 2.5 Gbps; support for the SILK project: satellite links to FSU republics
- Russia: 155 Mbps links to Moscow (typically 30-45 Mbps for science); Moscow-Starlight link to 155 Mbps (US NSF + Russia support); Moscow-GEANT and Moscow-Stockholm links at 155 Mbps

R&E Backbone and Int'l Link Progress
- Abilene (Internet2): upgrade from 2.5 to 10 Gbps in 2002; encourage high-throughput use for targeted applications; FAST
- ESnet: upgrade to 10 Gbps "as soon as possible"
- US-CERN: to 622 Mbps in August; move to STARLIGHT; 2.5 Gbps research triangle STARLIGHT-CERN-NL from 8/02; to 10 Gbps in 2003 [10 Gbps SNV-STARLIGHT link on loan from Level(3)]
- SLAC + IN2P3 (BaBar): typically ~400 Mbps throughput on the US-CERN and Renater links; 600 Mbps throughput is the BaBar target for early 2003 (with the ESnet upgrade)
- FNAL: ESnet link upgraded to 622 Mbps; plans for dark fiber to STARLIGHT proceeding
- NY-Amsterdam donation from Tyco, September 2002, arranged by IEEAF: 622 Mbps + 10 Gbps research wavelength
- US National Light Rail proceeding; startup expected this year

- 2.5 → 10 Gbps backbone; >200 primary participants: all 50 states, D.C. and Puerto Rico; 75 partner corporations and non-profits; 23 state research and education nets; 15 "GigaPoPs" supporting 70% of members
- 2003: OC192 and OC48 links coming into service; need to consider links to US HENP labs

National R&E Network Example – Germany: DFN Transatlantic Connectivity 2002
- 2 x OC48: NY-Hamburg and NY-Frankfurt
- Direct peering to Abilene (US) and CANARIE (Canada)
- UCAID said to be adding another 2 OC48s, in a proposed Global Terabit Research Network (GTRN)

Virtual SILK Highway Project (from 11/01): NATO ($2.5 M) and partners ($1.1 M)
- Satellite links to the South Caucasus and Central Asia (8 countries)
- In 2001-2 (pre-SILK), BW was 64-512 kbps; VSAT proposed to get 10-50 x the BW for the same cost
- See www.silkproject.org
[*] Partners: CISCO, DESY, GEANT, UNDP, US State Dept., Worldbank, UC London, Univ. Groningen

National Research Networks in Japan: SuperSINET
- Started operation January 4, 2002
- Support for 5 important areas: HEP, Genetics, Nano-Technology, Space/Astronomy, GRIDs
- Provides 10 wavelengths (λ's): 10 Gbps IP connection; direct intersite GbE links; 9 universities connected
[Network diagram: IP routers and optical cross-connects (OXC) with WDM paths and STM-16 links, connecting Tohoku U, KEK, Tokyo (NII Chiba, NII Hitot., ISAS, U Tokyo, NAO, IMS), Nagoya U, NIG, NIFS, Kyoto U (ICR), Osaka U, and the Internet]
- January 2003: two transpacific 2.5 Gbps wavelengths (to NY); Japan-US-CERN Grid testbed soon
- SuperSINET updated map: October 2002

APAN Links in Southeast Asia January 15, 2003

National Light Rail Footprint
[Map: the NLR buildout across US cities including SEA, POR, SAC, SVL, LAX, SDG, PHO, DAL, KAN, DEN, CHI, CLE, PIT, ATL, NYC, BOS and WDC, among others; 15808 terminal, regen or OADM sites marked along the fiber routes]
- Started November 2002
- Initially 4 x 10 Gb wavelengths; to 40 x 10 Gb waves in the future

NREN Backbones reached 2.5-10 Gbps in 2002 in Europe, Japan and US; US: Transition now to optical, dark fiber, multi-wavelength R&E network

Progress: Max. Sustained TCP Throughput on Transatlantic and US Links
- 8-9/01: 105 Mbps with 30 streams SLAC-IN2P3; 102 Mbps in 1 stream CIT-CERN
- 11/5/01: 125 Mbps in one stream (modified kernel) CIT-CERN
- 1/09/02: 190 Mbps for one stream shared on 2 x 155 Mbps links
- 3/11/02: 120 Mbps disk-to-disk with one stream on a 155 Mbps link (Chicago-CERN)
- 5/20/02: 450-600 Mbps SLAC-Manchester on OC12 with ~100 streams
- 6/1/02: 290 Mbps Chicago-CERN, one stream on OC12 (modified kernel)
- 9/02: 850, 1350, 1900 Mbps Chicago-CERN with 1, 2, 3 GbE streams on an OC48 link
- 11-12/02 (FAST): 940 Mbps in 1 stream SNV-CERN; 9.4 Gbps in 10 flows SNV-Chicago
Also see http://www-iepm.slac.stanford.edu/monitoring/bulk/; and the Internet2 E2E Initiative: http://www.internet2.edu/e2e

FAST (Caltech): A Scalable, "Fair" Protocol for Next-Generation Networks: from 0.1 to 100 Gbps (SC2002, 11/02)

Highlights of FAST TCP (standard packet size):
- 940 Mbps in a single flow per GE card: 9.4 petabit-m/sec, 1.9 times the Internet2 Land Speed Record (LSR) (see the arithmetic sketch after this slide)
- 9.4 Gbps with 10 flows: 37.0 petabit-m/sec, 6.9 times the LSR
- 22 TB transferred in 6 hours, in 10 flows
- Implementation: sender-side (only) modifications; delay (RTT) based; stabilized Vegas
[Chart and diagram: throughput milestones from CHEP 2001 (Beijing) and the I2 LSR entries of 29.3.00 (multiple flows), 9.4.02 (1 flow) and 22.8.02 (IPv6) to the SC2002 runs with 1, 2 and 10 flows; test paths among Sunnyvale, Chicago, Baltimore and Geneva (segments of ~1000, 3000 and 7000 km); the Internet modeled as a distributed feedback system of TCP sources and AQM links]
- Next: 10 GbE; 1 GB/sec disk to disk
- C. Jin, D. Wei, S. Low; FAST team & partners
- URL: netlab.caltech.edu/FAST
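The land-speed-record metric quoted above is a bandwidth-distance product (bits/sec times path length). A small Python sketch of the arithmetic; the implied path lengths are derived from the quoted figures, not stated separately on the slide:

```python
# The LSR metric is throughput x distance.  Recover the implied path length
# from each quoted (rate, petabit-metres/sec) pair.
def implied_km(gbps: float, petabit_m_per_s: float) -> float:
    return petabit_m_per_s * 1e15 / (gbps * 1e9) / 1e3

print(f"{implied_km(0.94, 9.4):,.0f} km")   # ~10,000 km for the 940 Mbps single flow
print(f"{implied_km(9.4, 37.0):,.0f} km")   # ~3,900 km for the 9.4 Gbps, 10-flow run
```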

HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps

Year   Production             Experimental              Remarks
2001   0.155                  0.622-2.5                 SONET/SDH
2002   0.622                  2.5                       SONET/SDH; DWDM; GigE integ.
2003   2.5                    10                        DWDM; 1 + 10 GigE integration
2005   10                     2-4 x 10                  λ switch; λ provisioning
2007   2-4 x 10               ~10 x 10; 40 Gbps         1st gen. λ grids
2009   ~10 x 10 or 1-2 x 40   ~5 x 40 or ~20-50 x 10    40 Gbps λ switching
2011   ~5 x 40 or ~20 x 10    ~25 x 40 or ~100 x 10     2nd gen. λ grids; Terabit networks
2013   ~Terabit               ~MultiTbps                ~Fill one fiber

Continuing the trend: ~1000 times bandwidth growth per decade; we are rapidly learning to use and share multi-Gbps networks. (A short sketch of this trend follows the table.)
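The "~1000 times per decade" trend can be read off the production column of the roadmap; a Python sketch using the 2001 and 2011 entries (taking the ~5 x 40 Gbps option for 2011):

```python
# Check the "~1000x bandwidth growth per decade" trend from the roadmap above.
bw_2001 = 0.155        # Gbps, production link in 2001
bw_2011 = 5 * 40.0     # Gbps, one of the 2011 production options (~5 x 40)

growth_decade = bw_2011 / bw_2001
growth_annual = growth_decade ** (1 / 10)
print(f"2001 -> 2011: ~{growth_decade:.0f}x, i.e. ~{growth_annual:.1f}x per year compounded")
# ~1290x per decade, roughly a factor of 2 per year
```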

HENP Lambda Grids: Fibers for Physics
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
- Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time
- Example: take 800 seconds to complete the transaction. Then (arithmetic sketch below):

    Transaction size (TB)    Net throughput (Gbps)
    1                        10
    10                       100
    100                      1000 (capacity of one fiber today)

- Summary: providing switching of 10 Gbps wavelengths within ~3-5 years, and Terabit switching within 5-8 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields.
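The throughput column in the example follows directly from size over time; a minimal Python sketch of the arithmetic, using the 800-second window from the example:

```python
# Throughput needed to move a transaction of a given size in 800 seconds.
def required_gbps(size_tb: float, seconds: float = 800.0) -> float:
    bits = size_tb * 1e12 * 8        # terabytes -> bits
    return bits / seconds / 1e9      # -> Gbps

for size_tb in (1, 10, 100):
    print(f"{size_tb:>4} TB in 800 s -> {required_gbps(size_tb):>6.0f} Gbps")
# 1 TB -> 10 Gbps; 10 TB -> 100 Gbps; 100 TB -> 1000 Gbps
```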

IEPM: PingER Deployment
Monitoring sites:
- Measurements from 34 monitors in 14 countries
- Over 790 remote hosts; 3600 monitor-remote site pairs
- Recently added 23 sites in 17 countries, due to the ICTP collaboration
- Reports on RTT, loss, reachability, jitter, reorders, duplicates, ...
- Measurements go back to Jan-95
Remote sites:
- 79 countries monitored, containing >80% of the world population and 99% of the online users of the Internet
- Mainly A&R sites

History – Throughput Quality Improvements from US
- Bandwidth of TCP < MSS/(RTT*Sqrt(Loss))   (1)
- 80% annual improvement: a factor of ~100 over 8 years
- Progress: but the Digital Divide is maintained

(1) "Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), July 1997
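The Mathis et al. bound quoted above is easy to evaluate; a short Python sketch, with illustrative MSS, RTT and loss values that are assumptions (not figures from the talk):

```python
from math import sqrt

# Mathis et al. bound used above: TCP throughput < MSS / (RTT * sqrt(loss)).
def tcp_throughput_bound_mbps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    return (mss_bytes * 8 / 1e6) / (rtt_s * sqrt(loss_rate))

# Example (assumed values): 1460-byte MSS, ~170 ms transatlantic RTT, 0.01% packet loss.
print(f"{tcp_throughput_bound_mbps(1460, 0.170, 1e-4):.1f} Mbps")   # ~6.9 Mbps upper bound
```

Even a 10^-4 loss rate caps a single standard TCP stream at a few Mbps over a transatlantic RTT, which is why loss and RTT, rather than raw link capacity, often limited the throughputs shown above.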

NREN Core Network Size (Mbps-km)
[Chart (logarithmic scale, ~100 to 100M Mbps-km), from http://www.terena.nl/compendium/2002, classifying European NRENs as Leading, Advanced, In Transition or Lagging; countries shown include Nl, Cz, Hu, Es, Fi, Gr, Ir, Pl, It and Ch, with Ro and Ukr in the Lagging group]

Work on the Digital Divide: Several Perspectives
- Identify & help solve technical problems: national, regional, last 10/1/0.1 km
- Inter-regional proposals (example: Brazil): US NSF proposal (10/2002); possible EU LIS proposal
- Work on policies and/or pricing: pk, in, br, cn, SE Europe, ...
  - E.g. RoEduNet (2-6 to 34 Mbps); pricing not so different from the US-CERN price in 2002 for a few Gbps
  - Find ways to work with vendors, NRENs, and/or governments
- Use model cases: installation of new advanced fiber infrastructures; convince neighboring countries
  - Poland (to 5k km fiber); Slovakia; Ireland
- Exploit one-off solutions: e.g. extend the SILK project (DESY/FSU satellite links) to a SE European site
- Work with other organizations: Terena, Internet2, AMPATH, IEEAF, UN, etc. to help with technical and/or political solutions

Digital Divide Committee
Romania: 155 Mbps to GEANT and Bucharest; inter-city links of 2-6 Mbps, rising to 34 Mbps in 2003
- Gigabit Ethernet backbone; 100 Mbps link to GEANT
[Network map: the 155 Mbps GEANT link and several 34 Mbps inter-city links]
- Annual cost: > 1 MEuro

Digital Divide WG Activities
- Questionnaire distributed to the HENP lab directors and the major collaboration managements
- Plan a project to build a HENP World Network Map, updated and maintained on a web site, backed by a database:
  - Systematize and track needs and status
  - Information: link bandwidths, utilization, quality, pricing, local infrastructure, last mile problems, vendors, etc.
  - Identify urgent cases; focus on opportunities to help
- First ICFA SCIC Workshop: focus on the Digital Divide
  - Target date February 2004 in Rio de Janeiro (LISHEP)
  - Organization meeting July 2003
  - Plan statement at the WSIS, Geneva (December 2003)
  - Install and leave behind a good network
  - Then 1 (to 2) workshops per year, at sites that need help

We Must Close the Digital Divide
Goal: to make scientists from all world regions full partners in the process of search and discovery

What ICFA and the HENP community can do:
- Help identify and highlight specific needs (to work on): policy problems; last mile problems; etc.
- Spread the message: ICFA SCIC is there to help; coordinate with AMPATH, IEEAF, APAN, Terena, Internet2, etc.
- Encourage joint programs [such as in DESY's SILK project; Japanese links to SE Asia and China; AMPATH to So. America]; NSF & LIS proposals: US and EU to South America
- Make direct contacts, arrange discussions with gov't officials; ICFA SCIC is prepared to participate
- Help start, or get support for, workshops on networks (& Grids): discuss & create opportunities; encourage, help form funded programs
- Help form regional support & training groups (requires funding)

IEEAF (http://www.ieeaf.org): "Cultivate and promote practical solutions to delivering scalable, universally available and equitable access to suitable bandwidth and necessary network resources in support of research and education collaborations."
- NY-AMS research wavelength in service 9/02; CA-Tokyo by ~1/03
- Groningen Carrier Hotel: March 2002

Global Medical Research Exchange Initiative: Bio-Medicine and Health Sciences
[World map of Layer 1, 2 and 3 spoke & hub sites, including St. Petersburg, NL, CA, MD, Barcelona, Greece, Chennai, Navi Mumbai, Kazakhstan, Uzbekistan, CN, Ghana, Buenos Aires/Sao Paulo, SG and Perth]
Global Quilt Initiative – GMRE Initiative - 001

Propose Global Research and Education Network for Physics

Networks, Grids and HENP
- The current generation of 2.5-10 Gbps network backbones arrived in the last 15 months in the US, Europe and Japan
  - Major transoceanic links also at 2.5 - 10 Gbps in 2003
  - Capability increased ~4 times, i.e. 2-3 times Moore's Law
- Reliable high end-to-end performance of network applications (large file transfers; Grids) is required. Achieving this requires:
  - End-to-end monitoring; a coherent approach
  - Getting high performance (TCP) toolkits into users' hands
- Digital Divide: network improvements are especially needed in SE Europe, So. America, SE Asia, and Africa
  - Key examples: India, Pakistan, China; Brazil; Romania
  - Removing regional and last mile bottlenecks and compromises in network quality is now on the critical path, in all world regions
- Work in concert with APAN, Internet2, Terena, AMPATH; DataTAG, the Grid projects and the Global Grid Forum


				