
Mid- and Long-Term Plans for Online

   S. Luitz for the Online Group
    Technical Board 12/15/05
Odc (Detector Controls)
   Not driven by Luminosity increase
   More RTEMS/PowerPC IOCs
              Faster boot times
              Merge IOCs
                      Fewer IOCs
                     Less network traffic
   Network cleanups
   For now keep EPICS version at “3.14.7-BaBar”
        Many BaBar-specific modifications to the core (e.g. CAEN support, various
         improvements and bug fixes to Channel Access code and gateways)
        Currently think it’s easier to selectively back-port improvements and bug fixes from
         later versions than to forward-port our changes to newer 3.14 distributions
        May revisit this decision if one of the next official EPICS releases turns out to be of
         significantly better quality than 3.14.7 …
         Maintain / update / add platforms as necessary (Solaris 10, RHEL4, SL4, …)
   Support sub-detector improvements / additions
        Large number of LST HV power supplies still to come
   Look seriously into replacing CMLOG with a relational database
        CMLOG is very fragile
        Will be difficult to support after the end of data taking
        Would like something more robust (considering MySQL)
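
A minimal sketch of what a relational replacement could look like, assuming a MySQL backend and a single flat message table (the database, table, and column names here are purely illustrative, not an agreed schema):

```python
# Illustrative only: a flat MySQL message table as a possible CMLOG replacement.
# Assumes the MySQLdb client library and a MySQL database named "onlinelog".
import MySQLdb

DDL = """
CREATE TABLE IF NOT EXISTS log_messages (
    id        BIGINT AUTO_INCREMENT PRIMARY KEY,
    logged_at DATETIME    NOT NULL,
    host      VARCHAR(64) NOT NULL,
    facility  VARCHAR(64) NOT NULL,   -- e.g. IOC or process name
    severity  TINYINT     NOT NULL,   -- numeric severity level
    message   TEXT        NOT NULL,
    INDEX (logged_at),
    INDEX (host, severity)
)
"""

def log(conn, host, facility, severity, message):
    """Insert one message; one commit per message keeps the writer simple."""
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO log_messages (logged_at, host, facility, severity, message) "
        "VALUES (NOW(), %s, %s, %s, %s)",
        (host, facility, severity, message))
    conn.commit()

if __name__ == "__main__":
    conn = MySQLdb.connect(host="localhost", user="online", passwd="xxx", db="onlinelog")
    conn.cursor().execute(DDL)
    log(conn, "ioc-lst-hv-01", "iocLogServer", 2, "example message")
```

Browsing and history queries would then be plain SQL, which is where most of the robustness gain over CMLOG would come from.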
Orc (Run Control)
   Not driven by Luminosity increase
   Ongoing improvements to user interface
   More automation where possible
   Maintain / update / add platforms (Solaris 10, RHEL4, SL4, …) as necessary
Odb
   Not driven by luminosity increase
   Continue to phase out Objectivity from
       Configuration DB
            Early 2006
       Ambient DB
            Need to evaluate whether ROOT is adequate
                  Performance concerns
                  Almost 300GByte of partially compressed data
                   Will need a prototype implementation to find out (see the sketch after this list)
                  If ROOT is not adequate, need to look into alternative
                   technologies (RDB, …, …)
       Conditions DB
   Maintain / add / update platforms (Solaris 10, RHEL4, SL4, …)
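
As a starting point for the ROOT evaluation mentioned above, a throughput test along the following lines could be used: write a stream of time-stamped ambient readings into a compressed TTree and time the sequential read-back. This is only a sketch, assuming ROOT with PyROOT bindings is available; the file, tree, and branch names are made up for illustration:

```python
# Rough ROOT I/O throughput test for ambient-style data (illustrative only).
import time
from array import array
import ROOT  # assumes ROOT with PyROOT bindings is available

N = 1000000  # number of ambient samples in the test

# Write test: one compressed TTree of time-stamped readings.
f = ROOT.TFile("ambient_test.root", "RECREATE")
f.SetCompressionLevel(1)                 # mimic "partially compressed" data
t = ROOT.TTree("ambient", "ambient readings")
ts, val, chan = array("d", [0.0]), array("d", [0.0]), array("i", [0])
t.Branch("ts",   ts,   "ts/D")
t.Branch("val",  val,  "val/D")
t.Branch("chan", chan, "chan/I")

t0 = time.time()
for i in range(N):
    ts[0], val[0], chan[0] = 1134000000.0 + i, 25.0 + (i % 100) * 0.01, i % 512
    t.Fill()
t.Write()
f.Close()
print("write: %.1f kHz" % (N / (time.time() - t0) / 1e3))

# Read test: sequential scan, the typical access pattern for trending plots.
f = ROOT.TFile("ambient_test.root")
t = f.Get("ambient")
t0 = time.time()
for i in range(int(t.GetEntries())):
    t.GetEntry(i)
print("read : %.1f kHz" % (N / (time.time() - t0) / 1e3))
```

Scaling the measured rates to the ~300 GByte already accumulated would show whether ROOT I/O alone is fast enough, before any migration effort is committed.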
CMP (IR2 Computing)
   All Components
       Maintain / add / update platforms (Solaris 10, RHEL4, SL4)
       Phase out or replace aging / failed equipment
       Keep everything up to date with security patches
   Components driven by luminosity increase
       Network, logging servers, L3 and Fast Monitoring farm nodes
       Will discuss in OEP part
   Components not affected by luminosity increase
       File servers, development machines, etc.
       Slowly upgrade infrastructure file servers
            Addresses increasing mismatch between more and faster clients and
             aging servers
       Add and maintain new services/servers as needed
            E.g. 2 MySQL servers were added recently
       Ongoing low-level activity
Oep (Online Event Processing)
   Most of it driven by luminosity increase
   What rates and event sizes will we have to deal with?
       7kHz L1 x 75kByte and 700Hz L3 x 75kByte
        starting sometime in 2007?
       Can Odf deliver such rates and event sizes?
       Assume as “worst-case” scenario
   Look at components of Odf backend and Oep system
Oep (2): Event Building Network

   L1 rate and event size → Event building switch
   Requirement: “No” packet loss at switch level
        Average rate: 7kHz x 75kByte = 525MByte/s = 4.2GBit/s
        Nominal backplane capacity of our current switch is 16GBit/s
         Some backplane arbitration overhead → run at >25% utilization
              Concern that switch may become “lossy” at higher utilizations (packet-loss tolerance is part of TCP/IP design and
               dropping packets under certain circumstances is a valid performance optimization)
        More importantly: Network traffic is bursty – system needs to deal with instantaneous rates
              On timescale of single event: 16Gbit/s (backplane) into 1Gbit/s farm node port
              Need sufficient buffering (getting worse if events are getting larger)
               Switch needs to be able to handle the instantaneous peak rate, smoothed out only by the amount of available buffering
              Need to look at event size distributions
               Bigger events → more buffering at output ports needed
              Our understanding of the network switch internals is good but somewhat limited
               Relevant per-port switch buffer is nominally 384 kByte – most likely less is actually available, because it is also used to pipeline packets while the switching decision is made
               My guess: of two back-to-back events of more than 150 kByte to the same port, one will get lost (this introduces an event-size bias); a rough check of this guess is sketched below
        Extremely complicated problem (due to many parameters and limited knowledge of switch internals)
        May already become relevant at smaller event sizes and lower trigger rates
         May consider playing it safe and upgrading the event building switch to a faster model (Cisco 6500-720)
               >1 MByte per-port output buffer
               720 Gbit/s per 48-port card switching capacity, 30 Gbit/s interconnect
        Best window to replace event building switch would be the long shutdown in 2006
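
The back-to-back-event guess above can be cross-checked with a very crude buffer-occupancy model: while a burst arrives at backplane speed, the 1 Gbit/s output port has drained only a small fraction of it, so almost the full burst must fit into the per-port buffer. A sketch using only the nominal numbers from this slide (store-and-forward, no protocol overhead, effective buffer fraction unknown):

```python
# Crude per-port buffering model for the event building switch (nominal
# figures from this slide; the fraction of the 384 kByte buffer that is
# really available for output queuing is unknown, hence the scan).

BACKPLANE_GBIT = 16.0   # nominal backplane capacity
PORT_GBIT      = 1.0    # farm-node port speed
NOMINAL_BUF_KB = 384.0  # nominal per-port output buffer

def peak_queue_kb(event_kb, n_events=2):
    """Peak output-queue occupancy for n back-to-back events to one port,
    arriving at backplane speed while the port drains at 1 Gbit/s."""
    return n_events * event_kb * (1.0 - PORT_GBIT / BACKPLANE_GBIT)

print("two 150 kByte events need ~%.0f kByte of queue" % peak_queue_kb(150.0))

for frac in (1.0, 0.75, 0.5):   # fraction of the nominal buffer actually usable
    usable = frac * NOMINAL_BUF_KB
    # Largest event size for which two back-to-back events still fit.
    limit = usable / (2 * (1.0 - PORT_GBIT / BACKPLANE_GBIT))
    print("usable buffer %3.0f kByte -> two back-to-back events of up to ~%3.0f kByte fit"
          % (usable, limit))
```

With the full 384 kByte this gives a limit of roughly 200 kByte per event; if only about three quarters of the buffer is really usable for output queuing, the limit drops to about 150 kByte, which is consistent with the guess above.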
Oep (3): Backend Network and Logging Servers

   L3 rate and logging servers
   700Hz x 75kByte = 52.5MByte/s
        Can be easily handled within the current system
        Will need some minor optimizations in network layout
              Connect the logging servers directly to the farm node switch …
              … and / or bundle the farm node switch uplinks
              could be done now in ~1h downtime
        Will need to increase the IR2 buffer capacity to fulfill our “contract” of being able to
         run through a weekend without HPSS
               52.5 MByte/s → ca. 4.5 TByte/day → >12 TByte disk buffer in IR2 (currently 3.6 TByte + 2 TByte on farm nodes; see the sketch below)
              No significant downtime needed to do this
        Upgrade link to SCCS from 1GBit/s to 2GBit/s
              Fibers are in place
              Could be done now with minor (~1h) downtime
   Of course, everything downstream would need to be able to handle these rates!
        Not part of this talk
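
The buffer and bandwidth numbers above are simple arithmetic; a short sketch makes the assumptions explicit (taking a "weekend without HPSS" as roughly three days of continuous running):

```python
# IR2 disk buffer sizing for running through a weekend without HPSS,
# using the L3 rate and event size assumed on this slide.

L3_RATE_HZ  = 700      # L3 accept rate
EVENT_KBYTE = 75       # assumed event size
WEEKEND_H   = 3 * 24   # ~Friday evening to Monday morning

mbyte_per_s   = L3_RATE_HZ * EVENT_KBYTE / 1e3     # 52.5 MByte/s
tbyte_per_day = mbyte_per_s * 86400 / 1e6          # ~4.5 TByte/day
tbyte_weekend = tbyte_per_day * WEEKEND_H / 24.0   # ~13.6 TByte

print("logging rate   : %5.1f MByte/s" % mbyte_per_s)
print("per day        : %5.1f TByte"   % tbyte_per_day)
print("for the weekend: %5.1f TByte (vs. 3.6 + 2 TByte currently)" % tbyte_weekend)
```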
Oep (4): L3 and FastMon CPU

   Driven by Lumi / Data Rate
   Current hardware: 1.4GHz Dual-CPU P3
   Have ~60 machines
         32 have large disks & GBit Ethernet connection to Odf network
         16 have only GBit Ethernet connection to Odf network (will need disk upgrade to be used as L3 farm nodes)
         12 machines have no GBit Ethernet connection to Odf network and no large disks for logging (will need disk upgrade and
          network upgrade to be used as L3 farm nodes): We have the GigE NICs. We will need a switch line card (or switch
          upgrade).
   L3 CPU requirements unclear
          Currently (2.8 kHz x 35 kByte) running at ca. 25-30%
          Don’t know exactly how CPU utilization will change with rate and event size (a naive scaling estimate is sketched below)
         Always the question of improving L3 rejection by spending more CPU cycles on the decision
         Constraint: L3 farm should be homogeneous (same CPU speed on all L3 machines)
   Several options (not all mutually exclusive)
         Use more L3 farm nodes (need to upgrade network & disk), buy 10-20 up-to-date machines (dual-CPU/dual-Core?) to run
          Fast Monitoring
               More farm nodes also help the network buffering
               Fast Monitoring machines could be installed in the one remaining computing rack on top of EH
         Run 2 Level-3 processes on each farm node
               Benefit may be smaller than expected – some complications
         Replace L3 farm nodes and Fast Monitoring farm nodes with faster machines
               may be needed due to end-of-life issues anyway (by 2008 our farm nodes will be 5-6 years old)
   A lot of this can be done fairly non-intrusively
   See what we need – buy last minute
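
The naive scaling estimate referred to above simply assumes that per-event L3 CPU cost grows linearly with event size and that total utilization grows linearly with L1 rate; both assumptions are unverified, which is exactly why the question is flagged as unclear. The sketch only makes the extrapolation explicit:

```python
# Naive extrapolation of L3 farm CPU utilization from today's operating
# point to the assumed 2007 worst case.  Assumes (unverified) linear scaling
# of per-event CPU cost with event size and of utilization with L1 rate.

cur_rate_khz, cur_evt_kb = 2.8, 35.0    # current operating point
new_rate_khz, new_evt_kb = 7.0, 75.0    # assumed worst case

scale = (new_rate_khz / cur_rate_khz) * (new_evt_kb / cur_evt_kb)   # ~5.4x

for util_today in (0.25, 0.30):
    print("today %2.0f%% -> projected %3.0f%% of the current farm capacity"
          % (100 * util_today, 100 * util_today * scale))
```

At today's 25-30% this extrapolates to roughly 130-160% of the current farm capacity, i.e. the current L3 configuration alone would not keep up, which is consistent with the options listed above (more nodes, faster nodes, or better L3 rejection).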
Oep (5): Infrastructure & Control

   Not driven by Luminosity increase
   Don’t foresee big scaling issues with larger farms
   Ongoing improvements to user interface
   More automation where possible
   Maintain / update / add platforms (Solaris 10, RHEL4, SL4, …) as necessary
Summary / Outlook
   Non-Lumi-driven subsystems are fine
   Oep
        need to work closely with Odf & Trigger groups to determine what to do
        Most decisions can be taken “as we go”
              A “buy as late as needed” strategy is the most cost-effective
             Can accommodate such a strategy technically for most parts
             Budgeting issues?
        Consider event building switch upgrade
              Pressure to “buy as late as needed” is somewhat lower
              Network equipment has longer lifetimes and product cycles

   Development will continue in all areas of Online until ~2008
        Do not foresee a feature or development freeze anytime soon
        Drivers: Luminosity, “Offline Software”, Hardware and Operating System
         advances
        Keep people interested
        Work on bugs / problems as they show up

								