        Status (in part) of the MEG construction
                   and software



1.   LXe calorimeter (PMTs + cryostat)
2.   Read-out electronics         10 min.
3.   Accelerator
4.   MC + offline                 30 min.




6 February 2006
            1. PMT tests at Pisa (+PSI in the LP)

 [Plot] Number of PMTs tested vs. time in the facility

Testing of all the PMTs is practically complete (Pisa + PSI): their
installation in the support structure (already at PSI) has started.

In the LP, the development of the MC and of the offline was essential
for measuring the PMT quantum efficiencies (QE).

An article on the facility is almost ready for NIM.
           LP simulation and analysis
Simulation of alpha sources on wires (φ = 50 μm) inside the LP
                                                      R. Pazzi, G. Signorelli

[Plots; black: data, red: MC] Comparison between the simulated and
reconstructed charge seen by the PMTs as a function of the distance
from the sources; comparison between the simulated and reconstructed
positions of the alpha sources.

Simulated data analyzed for 220 PMTs (25% of the final calorimeter)
with the programs illustrated shortly.
Four-month delay in the delivery of the cryostat
       to PSI (December → May)
             (delay on SIMIC's side)

Construction followed weekly by
Raffaelli/Del Frate, but with little
satisfaction (further delays??!!)

Even so: data taking would start at the end of August /
beginning of September, whereas we could take data earlier!




From the INFN web page: Italy at CERN (Nov. 2005)
"For an Italian contribution of 80.4 million euros, Italy has received orders for 85
million, and 2005 will close in an even more positive way," says Sandro Centro. In
particular Italy, with Ansaldo Superconduttori, excels in the construction of one third
of the 1200 magnetic dipoles of the LHC and, with SIMIC, in the production of 75% of
the cryostats which contain the cold masses of the dipoles.

     2. Read-out: the need for the DRS3...
 • Dynamic range (0.5 V)
 • Temperature dependence
 • Imperfect capacitor discharge (2%)
 (December 2005)

   The DRS2 is OK for the timing, but: use the trigger
   FADCs for the charge? One month of data taking in
   2006 = 10⁻¹² in sensitivity! One order of magnitude
   below MEGA.
     Using the trigger boards for the
              charge measurement

   1. Commercial 100 MHz FADCs. Q: 12 equivalent bits
   2. Fan-in problem, 4 → 1 (612/846)
      •   To be produced (type 1 mod.): 30 (/50 type 1)
      •   Cost per board: €1200 → 36 k€
      •   Crate, interfaces, CPU: 2 × €13000
      •   Total cost: 62 k€
   To be ordered now.
   Use of 20 k€ for calibrations (release of the 10 k€ s.j.,
   capp. Pisa) + loan from dotazione I (to be reimbursed
   later)
      3. Accelerator: test at Legnaro

I ~ 90 nA, Tp = 500 keV; NaI energy resolution
σ(E)/E = 3.09 ± 0.03% (at 17.6 MeV); rate R = 100 Hz

LiF and B targets built in Genova
Italian CW order: sent on 19/1
Remaining part of the US CW order sent, but
problems with the payment terms
      4. Status of the MEG software

a. MC simulation
       - Status
       - Responsibilities
b. Offline
       - Characteristics of the framework
       - Status of the code development
       - Responsibilities
c. Computing power and data storage
       - Resources available at PSI
       - Estimate of the needs with and without pre-selections
       - Plan for the use of the resources
                  a. MC Simulation




                  MC Structure
•   MEGEVE - Event Generator;
•   GEM – The GEANT3 based detector simulator:
    – Liquid Xenon Calorimeter;
    – Drift Chamber;
    – Timing Counter;
    – Magnet and Target.
•   Code organized in modules, as OO classes;
•   LP & Beam Test fully simulated;
•   Code management under SVN;
•   MC code almost ready for production tests.

[Event display] Positron track; energy release in LXe; hits on the TC
              MEGEVE: the Event Generator
Man Power: P.Cattaneo (Pv), F.Cei (Pi), K.Ozone (Tokyo), Y.Hisamatsu (Tokyo), R.Sawada
            (Tokyo), V.Tumakov (UCI), S.Yamada (UCI)
•   Status
     – Signal events;
     – Michel positrons;
     – Radiative decay (RD):
     – Positron annihilation in flight (AIF):
         • Preliminary AIF within target;
         • Started study for realistic AIF: magnet, DCH, TC and Target.
     – Scheme to generate pile-up events: (Michel + RD, Michel + AIF, AIF + RD,
       RD + RD) + additional Michel decays; more than two events can be
       overlaid;
     - CPU time: 16 sec/event, dominated by scintillation photon tracking
       (will be improved).
     – Interactive version (GXINT) recently implemented.
•   Next
     – Realistic AIF and background studies (under way);
     – Study of online/offline pre-selection and calibrations (under way);
      MEGEVE: example of a background study
Realistic studies of Michel positron annihilation in flight.
Complete detector simulation (not only the target).
Main contributions from the target and the drift chambers.

               [Plot] Annihilation γ energy spectrum in LXe
                           (preliminary work)
                GEM:LXe Calorimeter
    Man Power: S.Yamada(UCI), K.Ozone(Tokyo), F.Cei(Pi), Y.Uchiyama(Tokyo)
•    Status
      – Geometry: final shape implemented for vessel, PMT holders & honeycomb;
      – Implemented decay curve and wavelength spectrum of LXe scintillation;
      – GEANT-based scintillation photon tracking (as for Cherenkov photons):
          • Reflection/refraction on PMT quartz window and PMT holders;
          • PMT quartz window transmittance;
          • Absorption and scattering in Liquid Xenon.
      – Outputs:
          • Energy deposit, position and timing in Liquid Xenon;
          • Preliminary waveform output: hit timing of scintillation photons
            for each PMT (~8 × 10⁴ photoelectrons @ 50 MeV, Q.E. = 16%).
•    Next
      – Implement cryostat supporting structure;
      – “Fast” scintillation photon tracking.


              GEM: Drift Chambers
    Man Power: H.Nishiguchi (Tokyo), K.Ozone (Tokyo), Y.Uchiyama (Tokyo),
              M.Hillebrandt (PSI)
•    Status
      – Geometry:
          • DCH geometry completed;
          • wires and vernier pads simulated;
          • cables, cable duct implemented.
      – Isochrones tables for various B-field (Garfield);
      – Outputs:
          • Entrance/Exit position from chambers;
          • Energy, timing and direction for each hit;
          • Drift time in chambers.
•    Next
     - Implement charge and timing signal simulation on wires & pads and
        waveform digitization (under way).


                  GEM: TC/Beam/Magnet
    Timing Counter          Man Power: P.Cattaneo (Pv), (V.Tumakov (UCI), F.Xiao(UCI))
•   Status
     – Geometry: scintillation bars/fibers, PMTs, APDs, photodiodes, light guides.
     – Outputs:
         • Hit position, timing and energy; energy, position & step length for hits;
         • Waveform output for scintillation bars.
     – Photon propagation based on an analytical formula.
•   Next
     – Implement supporting structure.
     – Improve light transmission in fibers
    Beam/Magnet                Man Power: K.Ozone (Tokyo), W.Ootani (Tokyo)
•   Status
     - Realistic treatment of target geometry and muon phase space.
•   Next
     – Implement target support and the beam transport within the detector.

                       MC responsibilities
•   Coordination: F.Cei (Pisa), S.Yamada (UCI)

•   Code management: P.Cattaneo (Pavia), S.Yamada (UCI)

•   Event generator: F.Cei (Pisa), Y.Hisamatsu (Tokyo)

•   LXe calorimeter: F.Cei (Pisa), R.Sawada (Tokyo), S.Yamada (UCI)

•   Timing Counter: P.Cattaneo (Pavia)

•   DCH: H.Nishiguchi (Tokyo)

•   LP/Beam test: R.Pazzi (Pisa), R.Sawada (Tokyo)

•   Trigger: D.Nicolo’ (Pisa), Y.Hisamatsu (Tokyo)

                  b. Offline




                    Offline framework
  The offline framework: ROME

ROME (Root based Object Oriented Midas Environment) is the
object-oriented framework adopted for the MEG offline. It is under
development at PSI and was tested during the October 2004 beam test.
 • ROME is a framework generator.

 • Only 6 different objects.

 • All classes are generated; only the event methods have to
   be written by the detector experts.

 • No knowledge of object-oriented programming is needed:
   the detector analysis code is written in C (not C++).

 • Interface with a MySQL database.
 • Separated into:
        an experiment-independent part of the framework,
        e.g. event loop, I/O;
        an experiment-dependent part of the framework,
        e.g. data structure, program structure.
                           ROME Objects

Folders: data objects in memory            Tasks: calculation objects

Trees: data objects saved to disk          Histograms: graphical data objects

Steering Parameters: framework steering    Midas Banks: Midas raw data objects

          Folders and Tasks support a very clear program structure.

          Modularity: tasks can be exchanged even at runtime.
                  Interconnections

[Diagram] Data flow: the input disk is read (MIDAS, ROOT) into
Folders; Tasks read and fill Folders, fill Histograms and set flags;
Folders fill Trees, which are written (ROOT) to the output disk.
    ROME as the LP test beam framework

Application of ROME as an off-line framework in
connection with MIDAS (on-line) and an SQL database.

                  MIDAS: data taking, slow control,
                         data logging, run control
                  ROME:  on-line monitoring,
                         off-line analysis
                  SQL:   channel information,
                         geometry information,
                         calibration constants
             Offline Analysis Procedure (beam test)

[Diagram] Before a run: calibration run number. Begin of run (BoR):
trigger mode, channel and geometry information, and calibration
constants are loaded. Event: raw data are turned into processed
data. End of run (EoR): calibration.
      Software scheme for the final detector

[Diagram] The MC (GEANT: event generation, tracking, detector
simulation) writes ZEBRA files, read by 1. Bartender (simulates
pile-up, electronics and trigger), which writes ROOT files; the DAQ
writes MIDAS files, read by 2. Analyzer, which writes ROOT files;
both programs access the database.

Bartender and Analyzer are ROME-based software packages. They are
standalone programs which read data and write their results into
ROOT files.
        Software Codes 1) Bartender
                      •   ROOT-based program for the
                          event cocktail;
                      •   Reads experimental and
                          simulation data;
                      •   Mixes several MC sub-events;
                      •   Simulates the pulse shape of the
                          MC data (digitization);
                      •   Rearranges the channels of the
                          experimental data to match the
                          MC format;
                      •   Simple calibrations possible;
                      •   Trigger simulation possible.
     Software Codes 1) Bartender (cont.)

   Example: LXe calorimeter waveform pile-up

    • three possible models of the single waveform;
    • Gaussian, sinusoidal or constant noise can be added;
    • the event rate can be specified;
    • the relative timing is extracted randomly.

   [Waveforms] Nphe = 123 + Nphe = 620 = piled-up waveform
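A minimal sketch of the pile-up mixing described above. The exponential waveform model, the decay constant, and the function names are illustrative assumptions, not the actual Bartender code:

```python
import math
import random

def single_waveform(n_samples, nphe, t0, tau=45.0):
    """Toy single-channel waveform: exponential scintillation decay
    starting at sample t0, with amplitude proportional to the number
    of photoelectrons (one illustrative model among several)."""
    return [nphe * math.exp(-(i - t0) / tau) if i >= t0 else 0.0
            for i in range(n_samples)]

def mix_events(wf_a, wf_b, noise_sigma=0.0, rng=random):
    """Overlay two sub-event waveforms sample by sample and add
    Gaussian noise, as in the Bartender event-cocktail scheme."""
    return [a + b + rng.gauss(0.0, noise_sigma)
            for a, b in zip(wf_a, wf_b)]

rng = random.Random(0)
wf1 = single_waveform(1024, nphe=123, t0=100)
# relative timing of the second sub-event extracted randomly
wf2 = single_waveform(1024, nphe=620, t0=100 + rng.uniform(-50.0, 50.0))
piled = mix_events(wf1, wf2, noise_sigma=1.0, rng=rng)
```

Sinusoidal or constant noise would simply replace the `rng.gauss` term.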
         Software Codes 2) Analyzer

- Software code initially developed to
  analyze the beam test data;
- It can read ZEBRA files and the ROOT files
  from Bartender;
- Code development is ongoing; the work on
  waveform decoding is about done;
- The algorithms are ready in Fortran and are to be
  translated into C.
               Offline Responsibilities
Coordination/framework: R.Sawada (Tokyo), M.Schneebeli (PSI)

Database: R. Sawada (Tokyo)

Analyzer:

LXe: Y.Uchiyama (Tokyo), R.Sawada (Tokyo), G.Signorelli (Pisa)

DCH: M.Schneebeli (PSI), H.Nishiguchi (Tokyo), (P.Huwe (UCI))

TC: D.Zanello (Rome), (F.Xiao (UCI))



  c. Computing: available resources @ PSI vs. needs

             1. Storage – 2. CPU power – 3. Network

      PSI
      MEG needs (data, MC; CPU: data reduction)
      Summary: PSI vs. MEG needs
        PSI: 1. Storage resources
Tape archive system   (R. Egli, PSI Computing Center)   [figure]

          PSI: 2. CPU (+ fast-access disk) resources
 Analysis computer cluster   [figure]

 Available disk space: 10 TB
 Maximum number of CPUs: 64

       PSI: 3. Network resources
Network infrastructure   [figure]
           MEG needs: 1. Storage – DATA

                DATA without reduction
 Trigger conditions:
 - QSUM > 45 MeV                (1)
 - ΔT(LXe – TC) = 10 ns         (2)
 - angular correlation ~ 15°    (3) (to be checked more precisely)
 Event rate R:
  10⁸ μ⁺/s → 2 × 10³ s⁻¹ → 2 × 10² s⁻¹ → 20 s⁻¹
      (1) + solid angle   (2)            (3)
   With a lower muon stopping rate, the rate reduction is roughly
   proportional to the square of the reduction factor
                              ⇒
          R ~ 3–6 s⁻¹ for 3 × 10⁷ μ⁺/s
            DATA without reduction (cont.)

In one year (10⁷ s) of data taking:
    3–6 × 10⁷ events        (2 × 10⁸ using 10⁸ μ⁺/s)
How many waveforms?
Expected occupancy: 50% for LXe (~450 wfm),
                    20% for DCH (~300 wfm),
                    20% for TC (~20 wfm)
 ⇒ ~800 waveforms/event (2 bytes/channel)
 ⇒ ~10¹⁰ (10¹¹) waveforms/year, 1.6 MB/event

With a factor-10 compression: 5–10 TByte/year
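The arithmetic above can be checked in a few lines. One caveat: the ~1000 samples per waveform implied by 1.6 MB/event is an inference, since the slide quotes only the 2 bytes/channel and the total event size:

```python
# Waveforms per event from the quoted occupancies
wfm_per_event = 450 + 300 + 20         # LXe + DCH + TC, rounded to ~800 on the slide
events_per_year = 6e7                  # upper estimate at 3 x 10^7 mu+/s

wfm_per_year = 800 * events_per_year   # ~5 x 10^10, i.e. the quoted 10^10-10^11
event_size = 1.6e6                     # bytes/event (~800 wfm x ~1000 samples x 2 bytes)
compressed = event_size * events_per_year / 10   # factor-10 compression
print(compressed / 1e12)               # ~9.6 TB/year, inside the quoted 5-10 TB/year
```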
              MEG needs: 1. Storage – MC

 MC event samples:
 2 × 10⁷ correlated events/year +
 two independent samples (10⁶ positrons & 10⁶ photons) to be
 merged for generating the accidental background (in principle,
 up to 10¹² events).

  To reduce the problem of multiple disk accesses, the MC events
  must be duplicated (× 2 correlated, × 20 accidental).

 Event size, based on the LXe (photon arrival times) and TC
 information: 200 kB/event (noise not included).
 Data storage:
  - 200 kB/event × 2 × 10⁷ × 2 = 8 TB/year (correlated events);
  - 200 kB/event × 2 × 10⁶ × 20 = 8 TB/year (uncorrelated events);
  + a factor 3 for digitization:
                     TOTAL ~ 50 TB/year
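A quick numerical check of the MC storage estimate, using the figures exactly as quoted on the slide:

```python
event_size = 200e3                        # bytes/event (LXe photon times + TC)
correlated = event_size * 2e7 * 2         # x2 duplication  -> 8 TB/year
uncorrelated = event_size * 2e6 * 20      # x20 duplication -> 8 TB/year
total = (correlated + uncorrelated) * 3   # factor 3 for digitization
print(total / 1e12)                       # 48 TB/year, quoted as ~50 TB/year
```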
         MEG needs: 2. CPU time needed
                   for event reconstruction
 DCH: 200–250 ms/event for a five-track event
   (Kalman filter);
 LXe: 100–200 ms/event (extrapolation from LP data);
 TC & merging: unknown, but probably small;
     ⇒ (DCH + LXe + TC + merging) ~ 0.5 s/event
 Waveform fitting: ~10 s/event
 For 6 × 10⁷ events:
 (DCH + LXe + TC + merging) ~ 3 × 10⁷ CPU s/year → 3 CPUs
 Waveform fitting           ~ 6 × 10⁸ CPU s/year → 60 CPUs
             (the background SHOULD dominate...!)
                 ⇒ data pre-filtering!!!
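The CPU counts follow directly from the per-event times; a sketch, using the 10⁷ s live year that the slide's own conversion implies:

```python
live_year = 1e7            # s of data taking per year, as used on the slide
events = 6e7
rec_time = 0.5 * events    # (DCH + LXe + TC + merging) -> 3 x 10^7 CPU s/year
wfm_time = 10.0 * events   # waveform fitting           -> 6 x 10^8 CPU s/year
print(rec_time / live_year, wfm_time / live_year)   # -> 3 CPUs and 60 CPUs
```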
               Possible strategy for
            data reduction and analysis

Two steps:
  1) perform a fast reconstruction to select the most
     interesting events;
  2) perform a more refined reconstruction
     (including waveform fitting etc.) on the
     selected sample.
Assumption: rough "ADC & TDC like" information
            must be provided by the on-line algorithms.
                       Data reduction

 6 × 10⁷ events/year
   → ΔEγ > 47.5 MeV (2 FWHM from the signal)  → 3 × 10⁷ events/year
   → ΔEe⁺ > 50 MeV (4 FWHM from the signal)   → 6 × 10⁶ events/year
   → Δθeγ < 80 mrad (4 FWHM from the signal)  → 5 × 10⁵ events/year

Waveform fitting: 5 × 10⁵ events × 10 CPU s/event

     = 5 × 10⁶ CPU s (~2 months on a single computer).

Timing information is not used; if a fast, reliable reconstruction
can be done, a further relevant reduction (~10) is possible.
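The cascade of cuts can be verified numerically; the per-cut reduction factors below are simply read off the event counts on the slide:

```python
events = 6e7
events /= 2      # E_gamma cut   -> 3 x 10^7
events /= 5      # E_e+ cut      -> 6 x 10^6
events /= 12     # angular cut   -> 5 x 10^5
cpu_s = events * 10            # 10 CPU s/event for waveform fitting
print(cpu_s, cpu_s / 86400)    # 5 x 10^6 CPU s, ~58 days (~2 months)
```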
MEG needs: 2. CPU time needed for
           MC generation
Generation of the samples:

(2 × 10⁷ (correlated) + 2 × 10⁶ (uncorrelated))
events/yr × 16 CPU s/event =
          4 × 10⁸ CPU s/yr ~ 12–15 CPUs

Bartender:

1–2 CPU s/event × 10⁸ events/yr =
          1–2 × 10⁸ CPU s/yr ~ 5 CPUs
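These CPU counts are consistent with a full calendar year of CPU time per processor (~3 × 10⁷ s). That conversion is an inference: dividing by the 10⁷ s live year used on the reconstruction slide would give ~40 CPUs rather than the quoted 12–15:

```python
cpu_year = 3e7                       # s in a full calendar year of CPU time (assumption)
generation = (2e7 + 2e6) * 16        # ~3.5 x 10^8 CPU s/yr, rounded to 4 x 10^8
bartender = 1.5 * 1e8                # mid-range of 1-2 CPU s/event x 10^8 events
print(generation / cpu_year, bartender / cpu_year)   # ~12 CPUs and ~5 CPUs
```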
Summary: 1. data storage resources and needs

 PSI tapes: 30–40 TB free + 40 TB occupied by backups (to be freed);
            total 70–80 TB
 PSI disks: 4 TB (backed up) + 6 TB (not backed up); total 10 TB
 MEG needs: ~10 TB/yr (real data) + ~40–50 TB/yr (MC production)
            + ~10 TB/yr (overhead, DSTs); total 60–70 TB/yr

 Assuming that half of the data collected in one year must
 reside on disk (for monitoring, calibrations, faster analysis etc.),
 MEG needs ~5 TB/year of disk space.
                  Summary: 2. CPUs

 PSI nodes: 64 (probably Opteron)
 MEG needs (single processing/year):
   ~3 CPUs   (real data; no WFM fitting)
   <1 CPU    (selected data; WFM fitting)
   ~20 CPUs  (MC production + Bartender)
   ~10 CPUs  (MC rec. = 3 × data; no WFM fitting)
   <1 CPU    (MC selection rec.; WFM fitting)
 Total: 64 CPUs available vs. ~33 (+20 for 10 reprocessings) needed
          3. Summary on data access
             resources and needs

 PSI links: 25 MB/s via FTP to the tapes;
            1 Gbit/s to the disks from the CPU cluster
 MEG needs: ~1 MB/s (with WFM compression);
            ~10 MB/s (without WFM compression)

                      OK!
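The bandwidth figures in the table are consistent with the event size and trigger rate from the earlier slides; a back-of-the-envelope check, assuming the 1.6 MB/event size and the upper 6 s⁻¹ rate at 3 × 10⁷ μ⁺/s:

```python
event_size = 1.6e6        # bytes/event, from the storage estimate
rate = 6.0                # events/s (upper estimate at 3 x 10^7 mu+/s)
raw = event_size * rate   # bytes/s without compression
print(raw / 1e6, raw / 1e6 / 10)   # ~10 MB/s raw, ~1 MB/s with factor-10 compression
```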
                  Conclusion

 • Start with 30 CPUs at PSI, with a possible
   increase over time.
 • Request for direct (log-in) use of 20 CNAF
   CPUs with 20 TB of disk space for an Italian
   analysis of reduced data samples.

				