
					The INFN Tier-1: Hardware/Software Status and Deployment Plans

          Cristina Vistoli – INFN CNAF
               CCR Workshop 2006
                  Otranto - Lecce
                          Summary

    •   Introduction
    •   Infrastructure services
    •   The upgrade plan
    •   Schedule




                            The Tier1 Computing Centre
          •   Completely redundant electrical supply system
                –   Transformer: 1250 kVA, upgraded in August 2005
                –   UPS: 800 kVA (~640 kW)
                –   Backup diesel generator: 1250 kVA (~1000 kW)
          •   Refrigeration
                –   RLS (Airwell) chiller installed on the roof; supplies ~530 kW of
                    cooling power to the room through chilled water
                –   1 UTA (air handling unit): supplies ~20% of the cooling power to
                    the room and controls humidity
                –   14 local cooling systems (Hiross) distributed in the room
                    (~25 kW each)
          •   New control and alarm systems (including cameras to monitor the hall)
                –   Cold-water circuit temperature
                –   Hall temperature
                –   Fire
                –   Electric power transformer temperature
                –   UPS, UTL, UTA
          •   Located at CNAF, Bologna: underground room (level -2) with a total
              space of ~1000 m2
                          Computing – Storage Resources
          • CPU:
                –   700 dual-processor boxes, 2.4 – 3 GHz (+ 70 servers)
                –   150 new dual-processor Opteron boxes, 2.6 GHz
                –   ~1700 KSI2K total (exactly 1580 KSI2K)
                –   Decommissioning: ~100 WNs (~150 KSI2K) moved to the test farm
                        • Used for the ECGI testbed
                –   Tender issued for 800 KSI2K (fall 2006)
          • Disk:
                –   FC, IDE, SCSI and NAS technologies
                –   470 TB raw (~430 TB FC-SATA)
                –   Tender issued for 400 TB (fall 2006)
          • Tapes:
                –   STK L180: 18 TB
                –   STK L5500:
                        • 6 LTO-2 drives with 2000 tapes (400 TB)
                        • 2 9940B drives with 800 tapes (160 TB)


                             Rack Composition

          • Power controls (3U)
          • 1 network switch (1/2U)
          • 36 1U WN boxes
                – Connected to the network switch
                – Connected to the KVM system




                                Remote console
    • Paragon UTM8 (Raritan)
          – 8 Analog (UTP/Fiber) output connections
          – Supports up to 32 daisy chains of 40 nodes (UKVMSPD modules
            needed)
          – IP-Reach (expansion to support IP transport) evaluated but not used
          – Used to control WNs
    • Autoview 2000R (Avocent)
          – 1 Analog + 2 Digital (IP transport) output connections
          – Supports connections up to 16 nodes
                • Optional expansion to 16x8 nodes
          – Compatible with Paragon (“gateway” to IP)
          – Used to control servers


                                  Power Switch

 •     2 models used:
       •    “Old” APC MasterSwitch Control Unit AP9224, controlling 3 x 8-outlet
            9222 PDUs from 1 Ethernet port
       •    “New” APC PDU Control Unit AP7951, controlling 24 outlets from 1
            Ethernet port
 •     “Zero” rack units (vertical mount)
 •     Access to the configuration/control menu via serial/telnet/web/SNMP
 •     Dedicated machine running the APC Infrastructure Manager software
 •     Permits remote switching off of resources in case of serious problems
       (a minimal SNMP sketch follows)
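
The sketch below only illustrates how such a remote power-off could be scripted against the SNMP interface of an APC unit; it is not the tool used at the Tier-1. The OID, the community string and the hostname are assumptions.

    # Minimal sketch (not the production tool): switching off one outlet of an
    # APC MasterSwitch/PDU via its SNMP interface.
    from pysnmp.hlapi import (
        setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity, Integer,
    )

    # sPDUOutletCtl-style OID (assumed): the last index selects the outlet number.
    OUTLET_CTL_OID = "1.3.6.1.4.1.318.1.1.4.4.2.1.3"
    OUTLET_OFF = 2  # assumed value mapping: 1 = on, 2 = off, 3 = reboot

    def switch_outlet_off(pdu_host: str, outlet: int, community: str = "private") -> None:
        """Send an SNMP SET that turns off a single outlet on the given PDU."""
        error_indication, error_status, _, _ = next(
            setCmd(
                SnmpEngine(),
                CommunityData(community),
                UdpTransportTarget((pdu_host, 161)),
                ContextData(),
                ObjectType(ObjectIdentity(f"{OUTLET_CTL_OID}.{outlet}"), Integer(OUTLET_OFF)),
            )
        )
        if error_indication or error_status:
            raise RuntimeError(f"SNMP set failed: {error_indication or error_status}")

    # Example (hypothetical host name): switch_outlet_off("pdu-rack01.cnaf.infn.it", outlet=7)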



                                          CPU Farm

          • Farm (~750 WNs, 1600 KSI2k)
                – SLC 3.0.x, gLite 3.0
                – Batch system: LSF 6.1
                – ~2600 CPU slots available
                        •   4 CPU slots/Xeon biprocessor (HT)
                        •   3 CPU slots/Opteron biprocessor
                –       22 VOs currently supported, including special queues such as
                        infngrid and dteam
                –       24 InfiniBand-based WNs for MPI on a special queue
                –       Test farm on phased-out hardware (~100 WNs, 150 KSI2k)



                    List of VOs supported in the T1

  • LCG VOs: ALICE, ATLAS, CMS, LHCb
  • Certification VOs: DTEAM, INFNGRID, OPS
  • EGEE VOs: BIOMED, GEANT4, MAGIC
  • Global VOs: BaBar, CDF, VIRGO, ZEUS
  • Regional VOs: BIO, GRIDIT, INGV, THEOPHYS
  • Other VOs: AMS, GEAR, ARGO, PAMELA, QUARTO
                                              Activities
          • High-availability issues
                – Without a generic HA platform, we will have a very hard time keeping
                  reasonable SLAs
                – A group has been formed to work on a continuum of activities covering the
                  lower layers – networking (esp. LAN), storage, farming – and their
                  interaction with the Service Challenges and grid middleware; the group
                  is coordinated by D. Salomoni (→ Leonello's presentation)
          • Service Challenge
                – Participation in SC4
                        • planning and Grid services setup
                        • FTS, LFC, …
                – → Tiziana Ferrari's presentation




                        CPU farm load

      [Plot: CPU farm load over time; series shown: Running, GRID Running, Pending,
      Suspended, Unknown Status, Declared Slots, Usable Slots.]




                        Cf. http://grid-it.cnaf.infn.it/rocrep/: the plots shown here
                        report views related to jobs run at the INFN Tier-1 only, but
                        job statistics are available for all the INFN Grid sites.




  MoU pledges for 2006 vs installed capacity:
    CPU (KSI2K-years):  pledged 1800,  installed 1600
    Disk (TBytes):      pledged 850,   installed 470
    Tape (TBytes):      pledged 850,   installed 560

  CPU used (KSI2K-days), aggregate 2006 to date**:

  CPU Grid              Jan-06   Feb-06   Mar-06   Apr-06     MoU*    Total   % MoU
    ALICE     cpu        1,104      592        0      678             2,374
              wall       1,232      691        2      795             2,720
    ATLAS     cpu        2,243    3,665    2,469    6,200            14,577
              wall       4,821    4,955    3,000    6,924            19,700
    CMS       cpu        1,321    2,241      621    1,421             5,604
              wall       3,080    3,626    1,627    2,683            11,016
    LHCb      cpu          107      233      453    1,083             1,876
              wall         578      358      684    1,192             2,812
    TOTAL     cpu        4,775    6,731    3,543    9,382   45,900   24,431     13%
              wall       9,711    9,630    5,313   11,594      n/a   36,248     n/a

  CPU Non-Grid
    ALICE     cpu            0        0        1        0                 1
              wall           0        0      230        0               230
    ATLAS     cpu            0        0        0        3                 3
              wall           0        0        0        3                 3
    CMS       cpu           34        8        0        0                42
              wall          42        8        0        0                50
    LHCb      cpu          196    2,104      971      594             3,865
              wall         397    2,337    1,152      760             4,646
    TOTAL     cpu          230    2,112      972      597   45,900    3,911      2%
              wall         439    2,345    1,382      763      n/a    4,929     n/a

  CPU Total
    ALICE     cpu        1,104      592        1      678             2,375
              wall       1,232      691      232      795             2,950
    ATLAS     cpu        2,243    3,665    2,469    6,203            14,580
              wall       4,821    4,955    3,000    6,927            19,703
    CMS       cpu        1,355    2,249      621    1,421             5,646
              wall       3,122    3,634    1,627    2,683            11,066
    LHCb      cpu          303    2,337    1,424    1,677             5,741
              wall         975    2,695    1,836    1,952             7,458
    TOTAL     cpu        5,005    8,843    4,515    9,979   45,900   28,342     15%
              wall      10,150   11,975    6,695   12,357      n/a   41,177     n/a

  MoU pledges for 2006 vs installed capacity:
    CPU (KSI2K-years):  pledged 1800,  installed 1600
    Disk (TBytes):      pledged 850,   installed 470
    Tape (TBytes):      pledged 850,   installed 560

  Disk space (TBytes)        Jan-06   Feb-06   Mar-06   Apr-06   MoU*   last month as % MoU
    ALICE     allocated           9        9        9       13
              used                6        6        6        7
    ATLAS     allocated          19       19       19       19
              used                8        9        2        5
    CMS       allocated          63       63       64       72
              used               59       60       62       63
    LHCb      allocated          21       21       21       21
              used               17       17       16       16
    TOTAL     allocated         112      112      113      125    850              15%
              used               90       92       86       91    595              15%

  Tape space used (TBytes)   Jan-06   Feb-06   Mar-06   Apr-06   MoU*   last month as % MoU
    ALICE                         13       13       13       14
    ATLAS                         51       50       52       53
    CMS                           22       22       24       28
    LHCb                          46       47       54       54
    TOTAL                        132      132      143      149    850              18%


                                             Storage: hardware configuration

  (Diagram summary)

  • HMS (400 TB), accessed by Linux clients via RFIO:
        – STK L180 library with 100 LTO-1 tapes (10 TB native)
        – W2003 server with LEGATO Networker (backup)
        – CASTOR HSM servers
        – STK L5500 robot (5500 slots), 6 IBM LTO-2 drives, 4 STK 9940B drives
  • NAS (20 TB), accessed via NFS:
        – NAS1, NAS4: 3ware IDE SAS, 1800 + 3200 GB
        – NAS3: Procom 3600 FC, 4700 GB
        – NAS2: Procom 3600 FC, 9000 GB (H.A.)
  • SAN 1 (200 TB): IBM FAStT900 (DS4500), 3/4 x 50,000 GB, 4 FC interfaces;
    2 Brocade Silkworm 3900 32-port FC switches
  • SAN 2 (40 TB): Infortrend, 4 x 3200 GB SATA;
    2 Gadzoox Slingshot 4218 18-port FC switches
  • Further arrays on the FC fabrics: STK BladeStore (about 25,000 GB, 4 FC
    interfaces), AXUS Browie (about 2200 GB, 2 FC interfaces), Infortrend
    A16F-R1211-M2 + JBOD (5 x 6400 GB SATA)
  • Disk servers with Qlogic FC HBA 2340 export the SAN storage over the LAN via
    NFS, RFIO, GridFTP, etc.

                        Storage: hardware configuration

  • 16 disk servers with dual Qlogic FC HBA 2340:
        – Sun Fire V20Z, dual Opteron 2.6 GHz, DDR 400 MHz
        – 4 x 1 GB RAM, SCSI U320, 2 x 73 GB 10k disks
  • 4 x 2 Gb/s redundant connections to the switch
  • Brocade Director FC switch (fully licensed) with 64 ports (out of 128)
  • 4 FlexLine 600 arrays with 200 TB raw (150 TB net), RAID 5 (8+1)




                                    CASTOR HMS system

  – STK 5500 library
        •   6 x LTO2 drives
        •   4 x 9940B drives
        •   1300 LTO2 (200 GB) tapes
        •   650 9940B (200 GB) tapes
  – Access
        • The CASTOR file system hides the tape level
        • Native access protocol: rfio (see the usage sketch below)
        • SRM interface available for the grid fabric (rfio/gridftp)
  – Disk staging area
        • Data are migrated to tape and deleted from the staging area when it
          fills up
  – Migration to CASTOR-2
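
As a minimal illustration of the native rfio access path (not taken from the slides), the sketch below wraps the standard CASTOR rfcp client; the stager hostname and the paths are assumptions.

    # Minimal sketch of local (non-grid) CASTOR access through rfio/rfcp.
    # Hostname and paths are illustrative assumptions.
    import os
    import subprocess

    ENV = dict(os.environ, STAGE_HOST="castor.cnaf.infn.it")  # assumed stager host

    def castor_put(local_path: str, castor_path: str) -> None:
        """Copy a local file into the CASTOR name space (staged to disk, later migrated to tape)."""
        subprocess.run(["rfcp", local_path, castor_path], env=ENV, check=True)

    def castor_get(castor_path: str, local_path: str) -> None:
        """Recall a file from CASTOR (staged back from tape if it was removed from the staging area)."""
        subprocess.run(["rfcp", castor_path, local_path], env=ENV, check=True)

    # Example (illustrative path):
    # castor_put("/tmp/run123.root", "/castor/cnaf.infn.it/user/test/run123.root")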


                                                    [Plot: CASTOR disk space]
                            Storage activities
          • GPFS
                – New tests with local GPFS mounts on the WNs (no RFIO); a minimal
                  read sketch follows the diagram summary below
                – Installation of the GPFS RPMs completely “quattorized”
                – GPFS mounted on 500 boxes (most of the production farm)
          • Participation in the HEPiX storage working group
          • dCache testbed

          (Diagram summary: worker nodes mount GPFS over Ethernet from 4 Sun
          Microsystems SunFire V20Z storage servers running GPFS and GridFTPd;
          the servers connect through Qlogic 2340 HBAs with 8 x 2 Gb/s FC to the
          SAN fabric, which reaches a 22 TB StorageTek FlexLine FLX680 array over
          4 x 2 Gb/s FC.)
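
The snippet below is only meant to illustrate the point of the local mount: with GPFS mounted on the WN, a job reads its input with plain POSIX calls instead of going through the rfio client. The mount point is an assumption.

    # Illustrative only: read a file straight from an assumed local GPFS mount.
    import os

    GPFS_MOUNT = "/gpfs/t1"  # assumed mount point on the worker node

    def read_whole_file(relative_path: str, chunk_mb: int = 8) -> int:
        """Stream a file from the GPFS mount with ordinary POSIX I/O; return bytes read."""
        total = 0
        with open(os.path.join(GPFS_MOUNT, relative_path), "rb") as f:
            while True:
                chunk = f.read(chunk_mb * 1024 * 1024)
                if not chunk:
                    break
                total += len(chunk)
        return total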

                                                      StoRM testbed at the CNAF T1
      StoRM is a disk-based storage resource manager implementing the SRM v2.1.1
      standard interface. It is designed to support guaranteed space reservation and
      direct access (native POSIX I/O calls), as well as other I/O libraries such as
      RFIO. Security is based on user identity (VOMS certificates).
      (Diagram summary: worker nodes mounting GPFS, a CE and the StoRM server share
      the Ethernet network with 4 Sun Microsystems SunFire V20Z storage servers
      running GPFS and GridFTPd; the servers connect through Qlogic 2340 HBAs with
      8 x 2 Gb/s FC to the SAN fabric and, over 4 x 2 Gb/s FC, to the disk array.)

      The disk storage (lower part of the figure) amounts to roughly 40 TB. It is
      provided by 20 logical partitions of 2 TB each on one dedicated StorageTek
      FlexLine FLX680 disk array, aggregated by GPFS (version 2.3.0-10).

                                StoRM tests
          • Write test: srmPrepareToPut() with implicit space reservation for 1 GB
            files, followed by globus-url-copy from a local source to the returned
            TURL; 80 simultaneous client processes (a sketch of the write-test logic
            follows below).
          • Read test: srmPrepareToGet() followed by globus-url-copy from the
            returned TURL to a local file (1 GB files); 80 simultaneous client
            processes.
          • Result: sustained read and write throughputs of 4 Gb/s and 3 Gb/s
            respectively.
          • The two tests validate the functionality and robustness of the
            srmPrepareToPut() and srmPrepareToGet() functions provided by StoRM,
            and measure the read and write throughput of the underlying GPFS file
            system.
          • Release: the first production version of StoRM is under testing; the
            release is planned before the end of June.
          • More information and results concerning stress tests and the srmCopy
            functionality will be presented tomorrow.
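
As a hedged sketch of the write-test logic only (the real harness and SRM client are not shown in the slides), the snippet below spawns 80 parallel clients; prepare_to_put() is a placeholder for the actual srmPrepareToPut() call, and the SURL/host names are illustrative.

    # Sketch of the StoRM write test: 80 concurrent clients, each asking for a
    # TURL (placeholder for srmPrepareToPut) and copying a 1 GB file to it with
    # globus-url-copy.
    import subprocess
    from multiprocessing import Pool

    N_CLIENTS = 80                               # 80 simultaneous client processes
    LOCAL_SOURCE = "file:///tmp/testfile_1GB"    # 1 GB local source file (illustrative)

    def prepare_to_put(surl: str) -> str:
        """Placeholder for srmPrepareToPut() with implicit space reservation.

        In the real test the deployed SRM v2.1.1 client is called here and the
        TURL from its response is returned; the value below is only a stand-in.
        """
        return surl.replace("srm://", "gsiftp://", 1)

    def write_one(i: int) -> int:
        surl = f"srm://storm.cnaf.infn.it:8444/dteam/test_{i}"   # illustrative SURL
        turl = prepare_to_put(surl)
        # Transfer the 1 GB file from the local source to the TURL returned by StoRM.
        return subprocess.call(["globus-url-copy", LOCAL_SOURCE, turl])

    if __name__ == "__main__":
        with Pool(N_CLIENTS) as pool:
            results = pool.map(write_one, range(N_CLIENTS))
        print(f"{results.count(0)}/{N_CLIENTS} transfers succeeded")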

                                     DB Service



      • Active collaboration with 3D project
            – One 4-node Oracle RAC
            – Castor2: 2 single-instance DBs
            – One Xeon 2.4 GHz machine with a single-instance database for Streams
              replication tests on the 3D testbed
      • Starting deployment of LFC, FTS and VOMS read-only replicas




                                CNAF: the Operations Centre (ROC)
                               of the Italian production Grid (Grid.it)
                                                          2500 CPUs, 500 TB
                                                               41 “resource centres”,
                                                                 constantly growing
                                                             All centres are accessible
                                                          through Resource Brokers and
                                                              registered in the grid.it
                                                                Information System

                                                          Major sites are also registered
                                                               in the EGEE/LCG
                                                                  infrastructure
                                                          SPACI, ENEA and ESA-ESRIN
                                                             provide different CPU
                                                                   architectures



                        http://grid-it.cnaf.infn.it
         The structure (ad interim)

         • CNAF “Consiglio di Centro”: coordination board that defines and plans
           “inter”-service activities and external relationships
               – CNAF director
               – Administration and finance service
               – Contract and software service
         • Common user interface towards experiments/projects
         • Units: R&D Service (A. Ghiselli), Grid Operation Service (T. Ferrari (…)),
           Tier1 Unit (C. Vistoli (…))
         • Services that are part of the Tier1 Unit, supporting all other services:
               – Technical and general services: C. Vistoli (…)
               – Farming: D. Salomoni
               – Storage: L. Dell’Agnello
               – LAN, WAN and security: S. Zani

                                      Farming Team

          • Manpower
                – The Tier-1 farming group is composed of 3 FTE staff, plus 2
                  additional temporary FTEs (1 at CERN):
                        •   ® Davide Salomoni, staff – VO box, policy group
                        •   Alessandro Italiano, staff – batch system
                        •   Andrea Chierici, staff – Quattor
                        •   Felice Rosso, temporary contract – monitoring
                        •   M. Emilio Poleggi, CERN fellow




                                       Storage Team

          • Manpower
                – The Tier-1 storage group is composed of 2 FTE staff, plus 3
                  additional temporary FTEs (1 at CERN):
                        •   ® Luca dell’Agnello, staff – GDB, LHC-OPN
                        •   Pier Paolo Ricci, staff – Castor
                        •   Vladimir Sapunenko – temporary contract
                        •   Barbara Martelli – temporary contract, 3D project
                        •   Giuseppe Lore – temporary contract (experiment support)
                        •   G. Lo Presti – CERN fellow contract


                                 Infrastructure Team

          • Manpower
                – The infrastructure group is composed of only 2 FTE staff, plus 1
                  additional temporary FTE (1 at CERN):
                        • ® Cristina Vistoli, staff
                        • Massimo Donatelli, staff
                        • Michele Onofri – temporary contract




                                    Network Team

          • Manpower
                – The network team is composed of 2 FTE staff, plus 1 additional
                  temporary FTE (1 at CERN):
                        • ® Stefano Zani, staff
                        • Riccardo Veraldi, staff
                        • Donato De Girolamo – temporary contract




                         Experiment support Team

          • Experiment support – research grants (“assegni di ricerca”)
                –   ATLAS – Guido Negri
                –   LHCb – Angelo Carbone
                –   CMS – Daniele Bonacorsi
                –   BaBar – Armando Fella
                –   BaBar – Alexis Pompili
                –   CDF – Subir Sarkar / Daniel Jeans
                –   ALICE – Giuseppe Lore
                –   ARGO – Giuseppe Sansonetti
                –   Several fellows at CERN
The upgrade plan of Tier1
                 Summary of computing resource requirements (all experiments, 2008)
                 From the LCG TDR – June 2005          CNAF: 10-12% of Tier0+Tier1 resources

                                            CERN   All Tier-1s   All Tier-2s   Cnaf   Total
                 CPU (MSPECint2000s)          25            56            61     10     142
                 Disk (PetaBytes)              7            31            19      5      57
                 Tape (PetaBytes)             18            35                    4      53

                 [Pie charts – share of the totals:
                  CPU:  CERN 18%, All Tier-1s 39%, All Tier-2s 43%
                  Disk: CERN 12%, All Tier-1s 55%, All Tier-2s 33%
                  Tape: CERN 34%, All Tier-1s 66%]


                        The WLCG INFN Pledged Resources




                         Tier1: the deployable resources from the CSNs (approved)

         Total Tier1 CNAF Resource table (Budget 13 M€ + 4 M€ Services)

         Tier1 CNAF                                  2005            2006       2007   2008    2009    2010
         CPU (kSI2K)                                 1818            3420       5040   10800   14400   22800
         Disk (TB)                                    507            1056       1540   3960    5500    9350
         Tapes (TB)                                   800             900       1400   5000    7000    10000




         Non-LHC experiments Tier1 CNAF Resource table

         Tier1 non-LHC                               2005            2006       2007   2008    2009    2010
         CPU (kSI2K)                                  942            1800       2000   2700    2800    3000
         Disk (TB)                                    192             300        400    550     600     650
         Tapes (TB)                                    16             100        100    150     250     350




                  Tier1: the approved HW improvement

    Resources to be acquired
    Tier1 CNAF                        2005   2006   2007   2008    2009   2010
    CPU (kSI2K)                       1818   2850   4200   9000   12000   19000
    Disk (TB)                         507    960    1400   3600    5000   8500
    Tapes (TB)                        800    900    1400   5000    7000   10000
    Contingency (%)*                         20%    30%    40%     50%     50%

    * Included in the above numbers

    CPU Box (Integral)                758    1044   1294   1620    1563   1717
    Disks (Integral)                  2535   3908   4723   6181    6016   6290
    # server (Integral)               101    156    189    247      241    252



    Box to acquire                    758    287    250    705      322    441
    Box to abandon                      0      0      0    379      379    287
    Disks to acquire                  2535   1373   815    2726    1102   1647
    Disks to abandon                    0      0      0    1268    1268   1373
                            The Electrical Power estimation
                                2004    2005    2006    2007    2008     2009      2010
      kSI2K/box                  2.4     3.6     5.4     8.1    12.15    18.225    27.337
      GB/disk                    200     330     540     900    1500     2400      4000
           Current status:
                 – BOX = PCI card with 1 dual-core processor (2 CPU cores in 1 processor chip)
                 – Opteron 280 dual core, 2.4 GHz -> 1.5 kSpecInt/core -> 3 kSpecInt/dual-core
                   processor
                 – Opteron 280 dual core, 2.4 GHz -> 95 W/dual-core processor (+90 W for card
                   and disk) -> 185 W/box -> 62 W/kSpecInt
           • Next available improvements:
                 – PCI card with 2 dual-core processors (4 CPU cores/box)
                 – Expected: 6 kSpecInt/box, 280 W/box -> 47 W/kSpecInt
           • Consumption in ~2010 (arithmetic sketched below):
                 –      Linear extrapolation -> ~1 kW per 20-kSpecInt box
                 –      All manufacturers -> <= 500 W/box
                 –      Assuming a factor 2 gain in power consumption per CPU box ->
                 –      25 W/kSpecInt -> 500 W/CPU box
                 –      Disks: constant consumption of 30 W/box
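
The tiny calculation below just reproduces the W/kSpecInt figures quoted above; it adds no new data.

    # Back-of-the-envelope check of the power-per-kSpecInt figures on this slide.
    w_per_box_today = 95 + 90            # dual-core Opteron 280 + card/disk -> 185 W/box
    ksi_per_box_today = 2 * 1.5          # 2 cores x 1.5 kSpecInt/core -> 3 kSpecInt/box
    print(w_per_box_today / ksi_per_box_today)      # ~62 W/kSpecInt

    w_per_box_next = 280                 # expected dual-processor, dual-core box
    ksi_per_box_next = 6
    print(w_per_box_next / ksi_per_box_next)        # ~47 W/kSpecInt

    # ~2010 assumption: factor 2 gain -> 25 W/kSpecInt for a 20-kSpecInt box
    print(25 * 20)                                  # 500 W/CPU box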
                        Number of racks and cooling in 2010
•   CPU + servers -> 2000 boxes at 500 W (max)/box -> 1 MW
•   CPU + servers -> 30-36 boxes/rack -> 56-70 racks; 15-18 kW/rack
•   Disk: 6300 boxes at 30 W/disk -> ~200 kW
•   Disk -> 150 disks/rack -> 42 racks
•   Tapes: 10 PB required
•   Present libraries: 5000 cassettes -> 2-3 libraries -> 30 kW
•   Dissipated power in the room by the computing equipment: 1.23 MW
    (see the sketch after this list)
      – Parameters for the Tier1 expansion plan:
      – -> 1.5 MW dissipated in the room
      – -> 3.0 MW total, of which 1.5 MW for chillers and other services
      – -> 132 racks (70 CPU + 42 disk)
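
The sketch below only redoes the room-power and rack arithmetic quoted above.

    import math

    # 2010 room-power estimate
    cpu_boxes, w_per_cpu_box = 2000, 500
    disks, w_per_disk = 6300, 30
    libraries_kw = 30

    cpu_kw = cpu_boxes * w_per_cpu_box / 1000   # 1000 kW = 1 MW
    disk_kw = disks * w_per_disk / 1000         # 189 kW, quoted as ~200 kW on the slide
    print(cpu_kw + disk_kw + libraries_kw)      # ~1220 kW, i.e. the ~1.23 MW figure

    # Rack counts
    print(math.ceil(cpu_boxes / 36), math.ceil(cpu_boxes / 30))  # 56-67 CPU racks (slide quotes 56-70)
    print(math.ceil(disks / 150))                                # 42 disk racks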

             Power vs Hardware costs today


          • Example: high-volume dual-CPU Xeon server
                –   System power ~250 W
                –   Cooling 1 W takes about another 1 W -> ~500 W total
                –   4-year power cost > 50% of the hardware cost (illustrative
                    arithmetic below)!
                –   Ignoring:
                        • Cost of power distribution / UPS / backup generator equipment
                        • Power distribution efficiencies
                        • Forecast increases in the cost of energy
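
A rough illustration of the ">50% over 4 years" claim; the energy and server prices below are assumptions, not figures from the slides.

    total_w = 2 * 250                       # ~250 W system + ~250 W cooling
    eur_per_kwh = 0.10                      # assumed energy price
    server_cost_eur = 3000                  # assumed purchase price of the server

    energy_kwh = total_w / 1000 * 24 * 365 * 4          # 4 years of continuous operation
    power_cost = energy_kwh * eur_per_kwh               # ~1750 EUR
    print(power_cost, power_cost / server_cost_eur)     # ~58% of the hardware cost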


                                Expansion Project status

   • Agreement with the University of Bologna
         – Enlargement of the available space in order to host the new technological services
         – ~1000 sqm available
   • Preliminary project approved, covering the expansion of electrical power and
     refrigeration, and the acoustic impact
   • Electrical power: from 1.2 MW to 4.0 MW
         – ~3x the current system (converters, UPS, diesel generators)
   • Refrigeration in the room
         – x3: from 0.5 MW to 1.5 MW, redundant
   • New room layout:
         – 2010: 70 racks (2000 processor boxes, 20 MSI2K), 42 racks (6300 disk boxes, 10 PB),
           1+2 STK libraries (15 PB of tape)
   • Reuse of all tools already existing in the present Tier1 in the new project




                           New layout of the Tier1 rooms:
                        closed high-density island for the CPUs




                        Layout Area 2




             Layout of the refrigerated cell
5 extra chillers x 318 kW -> ~1600 kW of cooling (rid.)

[Diagram: refrigerated cell layout, showing the ENEL feed and the converters]




                        Power Generators




                                   Upgrade schedule
          •   Bid for the final/executive project (“progetto definitivo/esecutivo”) and works direction
                – 6/06
          •   In parallel, start of the bid for the high-density island commissioning
                – 6/06
          •   Assignment of the final/executive project
                – 8/06
          •   Conclusion of the bid for the high-density island
                – 9/06
          •   Delivery of the final project and validation
                – 12/06
          •   Bid for the Tier1 technical services upgrade works
                – 1/07
          •   Assignment of the works
                – 4/07
          •   End of commissioning
                – 11/07
          •   Tier1 fully operational
                – November 2007

								