Positioning Dynamic Storage Caches for Transient Data

Sudharshan Vazhkudai, Oak Ridge National Lab
Douglas Thain, University of Notre Dame
Xiaosong Ma, North Carolina State Univ.
Vince Freeh, North Carolina State Univ.

High Performance I/O Workshop at IEEE Cluster Computing 2006

Problem Space
• Data Deluge
  – Experimental facilities: SNS, LHC (PBs/yr)
  – Observatories: sky surveys, world-wide telescopes
  – Simulations from NLCF end-stations
  – Internet archives: NIH GenBank (serves 100 gigabases of sequence data)
• Typical user access traits on large scientific data
  – Download remote datasets using favorite tools
    • FTP, GridFTP, hsi, wget
  – Shared interest among groups of researchers
    • A bioinformatics group collectively analyzes and visualizes a sequence database for a few days: locality of interest!
  – Often, the original datasets are discarded after interest dissipates

Existing Storage Models
• Local Disk
  – High-bandwidth local access to small data.
• Distributed File Systems and NAS
  – Medium bandwidth for distributed/shared data.
• Mass Storage ($)
  – High-latency access for disaster recovery.
• Parallel Storage ($$$)
  – High-bandwidth shared access to large data with high reliability and fault tolerance.

What's Missing?
[Figure: private workstations and a university cluster reach storage over wide-area networks at medium bandwidth and high latency, while computing clusters connect to parallel storage and mass storage through fat pipes.]

Needed: Transient Storage
• High bandwidth
  – Needs to keep up with the network and the archive.
  – Also needs to keep up with aggressive apps (viz?).
• Some management control
  – Capacity, bandwidth, and locality are all limited.
  – Need some controls in order to guarantee QoS.
• Understandable latency
  – Keep the user informed about stage-in latency.
  – Once staged, latency should be consistent.
• Low cost
  – Old idea: lots of commodity disks.
  – Can we scavenge space from existing systems?
• Reliability useful, but not crucial.

Transient Storage: Use Cases
• Checkpointing large computations
  – Don't need to keep them all forever!
• Impedance matching for large outputs
  – Evacuate CPUs, then trickle data to the archive.
• Caching large inputs
  – Share the same data among many local users.
• Out-of-core datasets
  – A large temporary array split across caches.

A Real Example: Grid3 (OSG)
Robert Gardner, et al. (102 authors), "The Grid3 Production Grid: Principles and Practice," IEEE HPDC 2004.
"The Grid2003 Project has deployed a multi-virtual organization, application-driven grid laboratory that has sustained for several months the production-level services required by… ATLAS, CMS, SDSS, LIGO…"

Grid2003: The Details
The good news:
  – 27 sites with 2800 CPUs
  – 40985 CPU-days provided over 6 months
  – 10 applications with 1300 simultaneous jobs
The bad news on ATLAS jobs:
  – 40-70 percent utilization
  – 30 percent of jobs would fail
  – 90 percent of failures were site problems
  – Most site failures were due to disk space!

Two Transient Storage Projects
• FreeLoader
  – Oak Ridge National Lab and North Carolina State University
  – Scavenge unused desktop storage.
  – Provide a large cache for archival backends.
  – Modify scientific apps slightly for direct access.
• Tactical Storage
  – University of Notre Dame
  – Use computing-cluster storage as a flexible substrate.
  – Configure subsets for distinct needs.
  – Filesystem interfaces for existing apps.

Desktop Storage Scavenging?
• FreeLoader – imagine Condor for storage
  – Harness the collective storage potential of desktop workstations, just as Condor harnesses idle CPU cycles.
  – Increased throughput due to striping
    • Split large datasets into pieces, "morsels," and stripe them across desktops.
• Scientific data trends
  – Usually write-once-read-many
  – Remote copy held elsewhere
  – Primarily sequential accesses
• Data trends + LAN/desktop traits + user access patterns make collaborative caches built on storage scavenging a viable alternative!
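The morsel-striping idea above can be sketched in a few lines. This is a minimal, hypothetical model — the class and function names, the 1 MiB morsel size, and the round-robin placement are illustrative assumptions, not FreeLoader's actual interfaces:

```python
# Sketch of FreeLoader-style morsel striping (hypothetical API; morsel
# size and round-robin placement are illustrative assumptions).
from dataclasses import dataclass

MORSEL_SIZE = 1 << 20  # 1 MiB per morsel (assumed; real size is tunable)

@dataclass
class Morsel:
    dataset: str
    index: int        # position within the dataset
    benefactor: str   # desktop workstation holding this piece

def stripe(dataset: str, size_bytes: int, benefactors: list[str]) -> list[Morsel]:
    """Split a dataset into morsels and place them round-robin
    across scavenged desktop workstations."""
    n_morsels = (size_bytes + MORSEL_SIZE - 1) // MORSEL_SIZE
    return [Morsel(dataset, i, benefactors[i % len(benefactors)])
            for i in range(n_morsels)]

# A 5 MiB dataset striped over three desktops: a sequential read pulls
# consecutive morsels from different machines, aggregating their bandwidth.
layout = stripe("genbank.fasta", 5 * MORSEL_SIZE, ["ws01", "ws02", "ws03"])
print([(m.index, m.benefactor) for m in layout])
```

Because accesses are primarily sequential, consecutive morsels land on different benefactors, so a reader streams from several LAN-attached disks at once.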
Properties of Desktop Machines
• Desktop capabilities are better than ever before.
• The ratio of used space to available storage is strikingly low in academic and industry settings.
• Increasing numbers of workstations are online most of the time.
  – At ORNL-CSMD, ~600 machines are estimated to be online at any given time.
  – At NCSU, > 90% availability of 500 machines.
• Well-connected, secure LAN settings
  – A high-speed LAN connection can stream data faster than local disk I/O.

FreeLoader Environment
[Figure: FreeLoader deployment environment.]

FreeLoader Architecture
[Two-column figure, reconstructed:]
• Manager: global free space management, metadata management, soft-state registrations, data placement, cache management.
• Benefactor (scavenger device): lightweight UDP, metadata bitmaps, morsel organization, morsel service layer, monitoring and impact control, profiling.

Comparing FreeLoader with Other Storage Systems
[Figure: comparison of FreeLoader with other storage systems.]

Tactical Storage Systems (TSS)
• A TSS allows any node to serve as a file server or as a file system client.
• All components can be deployed without special privileges – but with security.
• Users can build up complex structures.
  – Filesystems, databases, caches, ...
  – Admins need not know/care about larger structures.
• Two independent concepts:
  – Resources – the raw storage to be used.
  – Abstractions – the organization of the storage.
[Figures: applications reach UNIX file servers through Parrot; the same servers can be composed into a simple distributed filesystem abstraction or a distributed database abstraction. Workstation owners control policy on each machine; the cluster administrator controls policy on all storage in the cluster.]

Applications: High BW Access to Astrophysics Data
• Unmodified tools – tcsh, cp, vi, emacs, fortran... – access the data through the filesystem interface.
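The TSS split between resources and abstractions can be sketched as a toy model: unprivileged file servers are composed into one namespace that unmodified tools see as an ordinary filesystem. All names, the hashing placement, and the in-memory "servers" here are illustrative assumptions, not the actual Parrot interfaces:

```python
# Toy model of a TSS abstraction built over raw storage resources.
# Class names and the hash-based placement are illustrative assumptions,
# not the real Parrot / TSS implementation.
import hashlib

class FileServer:
    """A resource: one node exporting part of its disk, no special privileges."""
    def __init__(self, name: str):
        self.name = name
        self.blobs: dict[str, bytes] = {}

    def put(self, path: str, data: bytes) -> None:
        self.blobs[path] = data

    def get(self, path: str) -> bytes:
        return self.blobs[path]

class DistributedFilesystem:
    """An abstraction: one namespace over many servers. Each path is
    deterministically assigned to a server, the way an interposition
    agent could route I/O beneath unmodified tools (cp, vi, tcsh...)."""
    def __init__(self, servers: list[FileServer]):
        self.servers = servers

    def _pick(self, path: str) -> FileServer:
        h = int(hashlib.sha1(path.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]

    def write(self, path: str, data: bytes) -> None:
        self._pick(path).put(path, data)

    def read(self, path: str) -> bytes:
        return self._pick(path).get(path)

fs = DistributedFilesystem([FileServer(f"node{i}") for i in range(7)])
fs.write("/astro/run42.dat", b"simulation output")
data = fs.read("/astro/run42.dat")  # the app never sees which node holds it
```

The point of the model is the separation: the same `FileServer` resources could just as easily back a cache or a distributed-database abstraction, and the administrators of the individual machines need not know which.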
[Figure: a general-purpose computing cluster whose per-node disks are aggregated into a 10 TB scratch logical disk volume; an adapter moves GBs/day between the cluster and a tape archive.]

Applications: High BW Access to Biometric Data
[Figure: jobs on a general-purpose computing cluster perform NFS I/O over Gb Ethernet against a central storage archive.]

Applications: High BW Access to Biometric Data
[Figure: the same cluster with controlled replication — data is replicated from the storage archive onto the nodes' local disks, so jobs read locally instead of over NFS.]

Open Problems
• Combining technologies
  – A filesystem interface for FreeLoader.
  – Making TSS harness FreeLoader benefactors.
• Seamless data migration
  – Not easy to move between parallel systems!
  – Can transient storage "match impedance?"
• Performance adaptation
  – Many axes: bandwidth, latency, locality, management.
  – Can we have a system that allows a more continuous tradeoff or reconfiguration?

Take-Home Message
Big, fast storage archives are important, but...
Making transient storage usable, accessible, and high performance is critical to improving the end-user experience.