Birmingham site report
Lawrie Lowe: System Manager
Yves Coppens: SouthGrid support
HEP System Managers’ Meeting,
RAL, May 2007
• 60 user-desktop PCs running Linux
– Older user-desktops are Pentium 3.2 GHz, 1 GByte RAM, running Scientific Linux 3.
– Newer desktops are Athlon X2 dual-core, 2 GByte RAM, running Scientific Linux 4.
– Migrating all to SL4 on user request
• 4 user-desktops running Windows XP
• 12 PCs in labs running whatever the experiment requires: Windows or Linux
• Alice Farm: 18 dual 800 MHz PC boxes
• BaBar Farm: 120 dual 800 MHz blades
• Atlas Farm: 38 dual 2.0 GHz blades
• 50% of the Atlas Farm and 4% of the BaBar Farm are for our local use on SL3
• 50% of the Atlas and 90% of the BaBar farm are on the grid
• 6% of the BaBar farm is running a Pre-Production Service (PPS)
• 14 laptops in a group Pool, running
Windows XP and SL3/4 dual-boot
• ~10 user laptops (mainly students)
• All laptops are behind an extra level of
‘firewall’ to the outside world
• Linux servers running various systems:
SL3, SL4, SL4_64, and CentOS 5 / SL5.
• Most recent file-server running CentOS 5, with 16 TB of RAID storage split into two filesystems.
• Citrix Windows Terminal Server(s) for required MS applications.
• Gigabit to the newer servers, 100 Mb/s to desktops (though most PCs have gigabit interfaces)
• 2 Gbit/s from the department to the rest of campus
• The campus firewall is adopting a default-deny policy, but Physics has its own firewall
• A campus connection to UKLight is in testing by campus staff
• UK front-ends hosting CE, MON and SE, plus CEs for the BaBar farm and the PPS service
• Storage increased to 10 TB earlier this year
• All nodes run SL 3.0.5, with gLite 3 update 24
• PXE-boot/Kickstart installation using yum, plus some steps to finish off the configuration
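The PXE/Kickstart install above might look roughly like the following Kickstart fragment. This is a minimal sketch only: the repository URL, partitioning choices, and package names are illustrative assumptions, not the site's actual configuration.

```kickstart
# Illustrative Kickstart file for an SL 3.0.5 node (all specifics hypothetical)
install
url --url http://install-server.example.ac.uk/sl305/i386   # hypothetical repo URL
lang en_GB
keyboard uk
clearpart --all --initlabel
autopart
reboot

%packages
@ base

%post
# "finish off configuration" steps would go here, e.g. pulling
# extra packages with yum (package name is illustrative)
yum -y install example-worker-node-metapackage
```

In a PXE setup, the boot server hands each node a kernel command line pointing at a file like this (`ks=http://...`), so the whole farm installs unattended.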
• Existing eScience cluster with a batch system of 38 dual nodes (nearly 3 years old)
• New BEAR cluster being installed this week (May 2007): initial phase of 256 nodes, each with two dual-core processors, plus 60 TB of data storage.
• By the final phase: 2048 cores and 100 TB of data storage.
• A share for us; plans for GridPP use too.
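As a sanity check on the BEAR numbers above (the final-phase node count is not stated in the report; it is inferred here assuming the per-node configuration of two dual-core processors is unchanged):

```python
# BEAR cluster core counts; node/socket/core figures taken from the text
nodes_initial = 256
sockets_per_node = 2   # two processors per node
cores_per_socket = 2   # dual-core

cores_initial = nodes_initial * sockets_per_node * cores_per_socket
print(cores_initial)   # 1024 cores in the initial phase

cores_final = 2048
nodes_final = cores_final // (sockets_per_node * cores_per_socket)
print(nodes_final)     # 512 nodes, if the node configuration stays the same
```

So the stated 2048-core final phase corresponds to a doubling of the node count, assuming no change in processor layout.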