                                           Paravirtualized Paging

                   Dan Magenheimer, Chris Mason, Dave McCracken, and Kurt Hackel
                                      {first.last}@oracle.com

                                                Oracle Corporation

Abstract

Conceptually, fast server-side page cache storage could dramatically reduce paging I/O. In this
workshop extended abstract, we speculate how such a device might be used, then show how it can be
implemented virtually in a hypervisor. We then introduce hcache (pronounced “aitch-cash”), our
prototype implementation built on the Xen hypervisor and utilized by slightly modified Linux
paravirtualized domains. We discuss the implementation and the current status of hcache, present
some performance results, compare it to related work, and conclude with some speculation of other
possible uses for hcache.

1. Introduction

Imagine a new very fast but somewhat quirky device that might someday become widely available on
many systems; let’s call it an “hcache” (pronounced “aitch-cash”). The device is essentially a very
fast, page-granularity, fully-associative cache, which is so fast that DMA requests take no longer
than a few times as long as a RAM-to-RAM copy. Thus an operating system can access the device
synchronously, when a lock is held, and even when interrupts are disabled. The driver for this
device might have the following very simple API, where a “handle” is a unique identifier determined
by the operating system:

•   hcache_put(page_frame, handle)
•   hcache_get(empty_page_frame, handle)
•   hcache_flush(handle)

The “put” function saves the data from the page and associates it with the specified handle. The
“get” function finds a page in the hcache with the handle and fills the empty page frame with the
data. The “flush” function disassociates the handle from any data so that subsequent “get” calls
with that handle will fail.

Here’s the quirky part: the size of the cache is unknown and cannot be determined. Sometimes a page
“put” will be found by a “get” and sometimes not; it’s impossible to tell a priori. However, like
any cache, it’s fast enough and large enough that using it is almost always a good thing.

How might an operating system use such a device? Since persistence is not guaranteed, dirty pages
cannot be placed in the hcache, only clean pages. As a result, the hcache unfortunately can’t be
used as a general-purpose storage device. But it still has at least two interesting applications:

•   Whenever the operating system is about to evict a clean page from its page cache, it can “put”
    the page to the hcache. And whenever the operating system is about to request that a disk
    driver DMA a page into a page frame, it would first try a “get” from the hcache to see if a
    prior “put” had saved the page in the hcache, thus saving the cost and latency of a disk
    access. Depending on access and eviction patterns, paging from disk may be greatly reduced.

•   In a partitioned, containerized, or virtualized system running multiple operating systems, the
    hcache could be used as a quickly accessible copy of a read-only clustered filesystem. For
    example, if different partitions are running the same Linux operating system, the hcache might
    contain a copy of a commonly executed program such as the shell or compiler. After one
    partition promotes a page of the program from the disk to its buffer cache and “puts” it into
    the hcache, other partitions can “get” the copy from the hcache, thus similarly reducing paging
    from disk.

The reader is invited to suggest additional uses as there are certainly more.
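To make the put/get/flush semantics concrete, the following toy user-space C model sketches the
contract described above: a fixed but invisible capacity, puts that may be silently dropped, gets
that may miss, and flushes that forget a handle. It is an illustrative sketch only; apart from the
three operation names, every identifier and detail below is invented for this example and is not
the device driver or the Linux/Xen code discussed later.

    /* Toy model of the hcache API above -- illustrative sketch only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE   4096
    #define CACHE_SLOTS 64               /* capacity is opaque to the caller */

    typedef uint64_t hcache_handle_t;    /* unique identifier chosen by the OS */

    struct slot { int used; hcache_handle_t handle; char data[PAGE_SIZE]; };
    static struct slot cache[CACHE_SLOTS];

    static struct slot *lookup(hcache_handle_t h)
    {
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (cache[i].used && cache[i].handle == h)
                return &cache[i];
        return NULL;
    }

    /* "put": may silently fail when no slot is free -- no persistence promise. */
    int hcache_put(const void *page_frame, hcache_handle_t handle)
    {
        struct slot *s = lookup(handle);
        for (int i = 0; !s && i < CACHE_SLOTS; i++)
            if (!cache[i].used)
                s = &cache[i];
        if (!s)
            return -1;                   /* caller must not depend on success */
        s->used = 1;
        s->handle = handle;
        memcpy(s->data, page_frame, PAGE_SIZE);
        return 0;
    }

    /* "get": fill the empty page frame if a prior put is still present. */
    int hcache_get(void *empty_page_frame, hcache_handle_t handle)
    {
        struct slot *s = lookup(handle);
        if (!s)
            return -1;                   /* miss: caller reads from disk instead */
        memcpy(empty_page_frame, s->data, PAGE_SIZE);
        return 0;
    }

    /* "flush": disassociate the handle so that later gets fail. */
    int hcache_flush(hcache_handle_t handle)
    {
        struct slot *s = lookup(handle);
        if (s)
            s->used = 0;
        return 0;
    }

    int main(void)
    {
        char evicted[PAGE_SIZE] = "a clean page being evicted";
        char frame[PAGE_SIZE];

        hcache_put(evicted, 42);         /* on clean-page eviction             */
        if (hcache_get(frame, 42) == 0)  /* tried before issuing the disk read */
            printf("hcache hit: %s\n", frame);
        hcache_flush(42);                /* e.g., when the page is invalidated */
        return 0;
    }

The property that matters for the rest of the paper is that a failed put or a missed get is always
safe: the guest simply falls back to the disk.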



2. A hypervisor-based cache

The physical device described in the previous section is only metaphorical, but it represents a
very realistic capability that can be implemented not with physical storage media, but with “spare”
physical memory in the hypervisor of a virtualized system. This hypervisor-based cache -- or
“hcache” -- can be accessed by a slightly-modified operating system using simple hypercalls, and
can be viewed by such operating systems as a second-chance page cache for evicted clean pages, or
by a cluster of operating systems as a shared server-side filesystem cache similar to, but much
faster than, the cache RAM in a modern disk array.

In a virtualized system with multiple hcache-aware paravirtualized guests, available hcache memory
should be divided equitably and dynamically. To each guest, hcache appears as a private page cache
of unknown size, but since no persistence guarantees are made, a mostly idle guest may be allocated
a smaller portion of the hcache, or even none at all, while the allocation for a very active guest
can be increased dynamically as needed. This is, in effect, a “fair share memory scheduler” for
page cache space, and could be controlled with internally derived policies, by
administrator-supplied parameters, or by derivation from the parameters provided for virtual
machine CPU scheduling.

3. Hcache implementation

An hcache implementation has been prototyped with changes to a paravirtualized Linux guest and with
code added to the Xen 3.3 hypervisor. To accommodate real operating system usage, the generic API
has been extended in a number of ways. First, each domain can allocate multiple independent
hcaches, and an explicit “initialize hcache” call has been added with a parameter indicating
whether the hcache is private or shared. (At the time of this writing, only the second-chance page
cache mechanism has been implemented, so the shared-inclusive mechanism is not yet used.) Next, the
handle has been divided into three components: an hcache identifier, a 64-bit object identifier,
and a 32-bit page identifier. These are roughly analogous to a “filesystem,” a “file,” and a
page-granularity offset into a file. Finally, “flush object” and “flush hcache” calls have been
added to the API to simplify the implementation of file-removal-like operations and filesystem
unmount.

Linux-side changes require the addition of “put” hypercalls at a single code location in the
generic page cache removal code and a single “get” hypercall in the generic filesystem code. The
difficult part is the correct placement of a handful of “flush” hypercalls to ensure that data
consistency is maintained between the hcache and the operating system page cache. Fortunately, the
potential race conditions are the same as those for managing the page cache against persistent
storage, and so are well understood. The object identifier is the Linux inode number and the index
is the page offset into the inode. When Linux discards or truncates an inode, a flush-object
hypercall is made, and when a filesystem is unmounted, a flush-hcache call is made.
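The extended interface can be summarized with a few C prototypes. The layout below mirrors the
handle components and the hooks just described; except for hcache_put, hcache_get, hcache_flush,
and hcache_flush_object, which the paper names explicitly, the identifiers and types are
assumptions made for illustration, not the actual Linux/Xen patch.

    #include <stdint.h>

    typedef int32_t hcache_id_t;    /* returned by the "initialize hcache" call */

    struct hcache_handle {
        hcache_id_t hcache;         /* which hcache: roughly, a filesystem        */
        uint64_t    object;         /* object identifier: the Linux inode number  */
        uint32_t    index;          /* page-granularity offset into the inode     */
    };

    /* hcache_init is a stand-in name for the "initialize hcache" call. */
    hcache_id_t hcache_init(int shared);

    int hcache_put(struct hcache_handle h, const void *page); /* generic page cache removal */
    int hcache_get(struct hcache_handle h, void *page);       /* before a filesystem read   */
    int hcache_flush(struct hcache_handle h);                 /* page invalidation          */
    int hcache_flush_object(hcache_id_t hc, uint64_t inode);  /* inode discard or truncate  */
    int hcache_flush_hcache(hcache_id_t hc);                  /* filesystem unmount         */

Placing the put at the single point where Linux removes a clean page from its page cache, and the
get just before a filesystem read fills a page, keeps the guest-side changes small; the flush calls
carry the consistency burden.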
The Xen-side hcache code efficiently implements the basic get/put/flush/flush-object operations
using a hierarchy of dynamic data structures. A domain-private hcache is explicitly created when a
filesystem is mounted, or dynamically when the first hcache_put hypercall is performed with a page
belonging to a filesystem. This hcache is implemented as a hashed list of objects; each object is
created as needed and serves as the root of a “radix tree” [1] of nodes for fast lookup of indices.
The leaf nodes of the radix tree point to page descriptors, which in turn point to the page frames
containing the actual data. The page descriptors are kept in two doubly-linked LRU lists: one
private list for each domain, and one global list across all domains. Thus, unutilized pages can
easily be recycled as needed to accommodate constantly changing “memory scheduling” needs. Finally,
counters are kept for all data structures and pages are timestamped, so that utilization can easily
be determined and rebalanced as necessary.

When a guest performs an hcache_put hypercall, the Xen hcache code allocates an unused memory page
and any necessary data structures and copies the page of data from the guest. If there is
insufficient memory, one or more pages may first be evicted from either the global LRU list or the
private LRU list, depending on memory scheduler parameters and policy. If memory is still not
available, the hcache_put simply fails -- since there is no guarantee of persistence, there is no
requirement that a put be successful; no indication of failure is even necessary, though one is
provided in the hypercall return value.
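The container hierarchy and the put path can be sketched roughly as follows. The structure layouts,
field names, and the outline in the trailing comment are guesses at the organization just
described, for illustration only; they are not the actual Xen data structures.

    #include <stdint.h>

    struct list_head { struct list_head *prev, *next; };  /* doubly-linked LRU links */

    struct hc_page_desc {                 /* one per page of cached data              */
        struct list_head domain_lru;      /* position in the domain's private LRU     */
        struct list_head global_lru;      /* position in the global LRU               */
        uint64_t         timestamp;       /* for utilization accounting               */
        void            *pageframe;       /* the page frame holding the copied data   */
    };

    struct hc_object {                    /* keyed by the 64-bit object id (inode)    */
        uint64_t          oid;
        struct hc_object *hash_next;      /* chain within the object hash bucket      */
        void             *radix_root;     /* radix tree: page index -> hc_page_desc   */
    };

    #define HC_HASH_BUCKETS 256

    struct hcache {                       /* one domain-private hcache                */
        uint16_t          domain_id;
        struct hc_object *objects[HC_HASH_BUCKETS];   /* hashed list of objects       */
        struct list_head  domain_lru_head;            /* this domain's private LRU    */
        uint64_t          n_puts, n_gets, n_flushes;  /* instrumentation counters     */
    };

    /*
     * hcache_put outline (the path described above, as pseudocode):
     *   1. locate or create the hc_object for handle.object and the radix-tree
     *      slot for handle.index;
     *   2. allocate a free page; if none is available, evict one or more pages
     *      from the global or per-domain LRU list according to scheduler policy;
     *   3. if memory is still unavailable, return failure -- the guest never
     *      depends on a put succeeding;
     *   4. copy the guest's page, link its descriptor at the recently-used end of
     *      both LRU lists, and update the counters and timestamp.
     */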
For an hcache_get, the specified object identifier is hashed and the corresponding radix tree is
found. The radix tree is searched for the index and, if a match is found, the data is copied to the
guest. In the case of a private-exclusive hcache_get, the page and its associated data structures
are then freed; for a shared-inclusive get, the page and data structures are left intact but the
LRU lists are updated to mark the page as recently used.

An hcache_flush is simply a private hcache_get with no copying. An hcache_flush_object walks the
radix tree and flushes and frees all pages and data structures associated with that object.
Finally, a function is provided to destroy and recycle an entire private hcache, so that memory can
be proactively recovered when Xen destroys an entire domain; technically this is not necessary, as
all unused pages will eventually move to the end of the LRU queues and be evicted.
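The difference between the two flavors of get, and the relationship of flush to get, can be shown
with a deliberately simplified sketch that uses a single flat entry in place of the hash and
radix-tree lookup above. All names here are invented for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    struct hc_entry {
        bool     valid;
        uint64_t object, index;
        char     data[PAGE_SIZE];
    };

    /* Stand-in for moving the page to the recently-used end of the LRU lists. */
    static void touch_lru(struct hc_entry *e) { (void)e; }

    /* Returns true on a hit; 'exclusive' selects private-exclusive semantics. */
    static bool hc_get(struct hc_entry *e, void *dest, bool exclusive)
    {
        if (e == NULL || !e->valid)
            return false;                  /* miss: the guest falls back to disk     */
        memcpy(dest, e->data, PAGE_SIZE);  /* copy the data into the guest page      */
        if (exclusive)
            e->valid = false;              /* exclusive: the page leaves the hcache  */
        else
            touch_lru(e);                  /* inclusive: just mark it recently used  */
        return true;
    }

    /* A flush is an exclusive get without the copy-out. */
    static void hc_flush(struct hc_entry *e)
    {
        if (e != NULL)
            e->valid = false;
    }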
There are also some interesting locking challenges, memory allocation issues, and hypercall
sequencing corner cases. For example, in a put-get-get sequence on the same handle, is it possible
that the first get will fail but the second get will succeed? And what is the cause of, and the
proper response to, a put when the handle already maps to existing data in the hcache? These
questions are beyond the scope of this introductory abstract.

4. Hcache status

We have completed a prototype implementation of the second-chance cache functionality of hcache;
the cluster/sharing functionality is currently under development. So far we have measured hcache
only with small workloads on a single domain; comprehensive testing will require multiple
simultaneous virtual machines with real or simulated workloads. Still, preliminary results are
promising.

We have heavily instrumented the hcache code in order to collect a large set of internal statistics
for analysis; this has already pointed out some tuning opportunities, which we have fixed. For
example, an unexpectedly high ratio of hcache_flush_object() calls led us to rewrite the Linux-side
interface to use inode numbers instead of the Linux “address space” abstraction as the identifier
for objects. This not only reduced hcache overhead, but also led to a cleaner Linux-side
implementation. Another example: profiling hcache identified the Xen dynamic memory allocation
(“xmalloc”) code as a severe bottleneck, driving worst-case hcache call times into the millions of
cycles. This led to the wholesale replacement of Xen xmalloc with a much faster TLSF-based [8]
allocator, a change which has already been pushed upstream into xen-unstable.

5. Hcache performance

We have measured hcache on a simple but widely used “benchmark”: compiling the Linux kernel. We
test on two hardware platforms: a dual-core 3GHz processor with 2GB of physical memory, and a
2.9GHz quad-core with hyperthreading and 4GB of physical memory. The software foundation is an
hcache-modified 64-bit Xen 3.3 hypervisor with Oracle Enterprise Linux 5.2 (OEL) as domain0. At
boot, Xen absorbs about 42MB of memory and domain0 is restricted to 512MB via a boot parameter. Our
test domain is a 32-bit OEL guest configured to use a “tap:aio” virtual disk, with between 256MB
and 2GB of memory and with either 2 or 4 vcpus.

For our workload, we use a “make -j 10” of linux-2.6.25.10, accelerated with the “ccache” [2]
compiler cache. Our methodology is to measure five runs, discard the lowest and highest
measurements, and average the remaining three. We reset the environment before each compile with a
“make clean” and a command to flush the page cache. We time the compile (only), rounded to the
nearest second, and bracket the compile with “iostat” to measure disk block reads, rounding this
metric to the nearest thousand.

Table 1, at the end of the paper, shows our measurements. To briefly summarize, on this workload
hcache reduces disk reads by nearly 95% and, as a result, increases throughput by between 22% and
50%, with the best results when more CPU resources are available to the guest.


Some additional interesting data gleaned from our instrumentation when hcache is enabled:

•   the hcache_get hit ratio is about 80%;

•   the average cost of hcache gets and puts is about 2.5x the cost of an average page copy, which
    we measure at about 1.5 usec on one platform and about half that on the other; the maximum cost
    is about twice the maximum cost of a page copy (see the note following this list);

•   about 80% of the 1.7M hcache calls are hcache_flush calls, suggesting that we may be
    over-paranoid on the Linux side in guaranteeing data consistency, and so our implementation may
    still have room to improve;

•   the hcache data structures are comfortably managing over 100K pages, belonging to 20K unique
    objects (inodes) in four hcaches (filesystems); note that this also reveals some insight into
    the working set size of the workload;

•   one hash table is seeing a maximum hash chain length of over 50 entries, showing yet another
    opportunity for improvement.
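To put the per-call cost in perspective, here is an illustrative back-of-the-envelope calculation
from the figures above, assuming a typical rotating-disk access time of a few milliseconds (a
figure not measured in this paper): an average get or put costs roughly 2.5 x 1.5 usec ≈ 3.75 usec
on the slower platform (about 1.9 usec on the faster one), which is roughly a thousand times
cheaper than the disk read that a successful get replaces. This is what makes it practical to issue
the hypercalls synchronously on the page cache paths.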
Since the single guest, the large memory environment, the cold page cache, and the disk-bound
workload all favor hcache, some may argue that this benchmark is a bit contrived. To counter this
concern, we provide a second set of test runs, in which we disable compiler acceleration, remove
the command to drop the page cache between compiles, and even “warm” the page cache with a
pre-measurement compile. But we also reduce the guest memory size to simulate a poorly provisioned
guest. As shown in Table 2, without hcache on a 128MB guest, some thrashing occurs and, as a
result, disk reads climb dramatically and performance plummets. But with hcache enabled,
performance is roughly the same as if the guest were properly provisioned with twice as much
memory -- or greatly overprovisioned with eight times as much.

Our intent is certainly not to claim that hcache will demonstrate such outstanding results on a
much wider variety of environments and workloads, but rather merely to show that hcache has strong
potential in some cases -- and more room to improve.

6. Related work

Lu and Shen [6] introduce the concept of a hypervisor-based page cache, which influenced the ideas
behind hcache. However, cached pages in their implementation are stored not in the hypervisor but
in the “service” domain (dom0), which requires costly interdomain transfer and coordination; this
is because they do not constrain the cache to clean pages and must map and track physical device
I/O performed in the service domain. The exclusiveness also precludes its use for sharing between
multiple virtual machines.

Geiger [5] studiously avoids changes to the OS but uses a hypervisor to passively infer useful
information about a guest’s unified buffer cache usage, with the goals of working set size
estimation and improving the hit rate in remote storage caches. Interestingly, Geiger’s success is
measured against “the ideal eviction detector” -- an OS modified exactly as needed for hcache.

Much of the excellent analysis in Wong and Wilkes [10] reapplies easily to hcache. Indeed, the
DEMOTE operation is analogous to hcache_put, though the data is copied to a remote disk-array cache
rather than to a server-side hypervisor cache. In particular, we intend to try some of the same
benchmarks and compare some of the resulting curves, and we are eager to attempt some of the
adaptive cache insertion policies.

Finally, the transparent content-based page sharing described for Disco [3] and by Waldspurger [9]
likely utilizes a hypervisor-cache-like mechanism to assist in memory overcommitment. We wonder
whether the explicit white-box sharing we intend to employ with read-only clustered filesystems
might prove superior on some consolidated workloads to the black-box copy-on-write mechanisms used
in VMware ESX.




7. Conclusions and future work

We have introduced hcache, a hypervisor-based non-persistent page cache that allows underutilized
memory to act as a “second-chance” page cache and, potentially, as a shared page cache for
clustered filesystems. We have implemented a prototype of the second-chance cache functionality and
have measured it against a single but non-trivial workload, demonstrating preliminary but
surprising performance improvement potential in some environments and workloads.

We expect mixed benefit in other workloads: for example, large-memory domains and applications with
large sequential data streams may not benefit much and will likely challenge the memory scheduler.
Domains with limited memory, or high-density consolidations using automatic ballooning techniques
[7,9], may benefit much more. Indeed, we are already considering combining self-ballooning with
hcache to better optimize memory utilization. Memory consumption may also be further reduced when
hcache is leveraged to share read-only cluster filesystem data. And we speculate that Geiger’s
goals, such as working set size estimation and remote storage cache hit rate improvement, may be
achieved more effectively with an hcache-based approach.

As hcache is applied to a wider variety of simultaneous workloads, we expect to focus on the
challenges of implementing policy code. For example, how do we balance hcache usage between
multiple competing memory-hungry domains? Other less ambitious but potentially advantageous ideas
for future work have been proposed: 1) compress pages in the OS prior to caching [4], or optionally
in the hypervisor; 2) add an hcache_flush_range() hypercall to make file truncation more efficient;
3) use part of hcache memory as a ghost cache [10] to assist in determining whether increasing or
decreasing the cache size would be beneficial. We are eager to hear additional ideas.

8. References

1.  Bovet, D.P. and Cesati, M. Understanding the Linux Kernel, Third Edition. O’Reilly &
    Associates, Inc., 2005.

2.  Brown, M. Improve collaborative build times with ccache.
    http://www.ibm.com/developerworks/linux/library/l-ccache.html

3.  Bugnion, E., Devine, S., and Rosenblum, M. Disco: Running commodity operating systems on
    scalable multiprocessors. In Proc. 16th ACM Symp. on Operating Systems Principles (SOSP ’97),
    pp. 143-156, Saint-Malo, France, October 1997.

4.  Gupta, N. Compressed Caching for Linux. http://code.google.com/p/compcache/

5.  Jones, S.T., Arpaci-Dusseau, A.C., and Arpaci-Dusseau, R.H. Geiger: Monitoring the buffer cache
    in a virtual machine environment. In Proc. 12th ASPLOS, San Jose, CA, October 2006.

6.  Lu, P. and Shen, K. Virtual machine memory access tracing with hypervisor exclusive cache. In
    Proc. 2007 Usenix Annual Technical Conference, Santa Clara, CA, June 2007.

7.  Magenheimer, D. Memory Overcommit… without the commitment. Xen Summit 2008.
    http://wiki.xensource.com/xenwiki/Open_Topic_For_Discussion?action=AttachFile&do=get&target=Memory+Overcommit.pdf

8.  Masmano, M., Ripoll, I., et al. Implementation of a constant-time dynamic storage allocator.
    Software: Practice and Experience, vol. 38, issue 10, pp. 995-1026, 2008.

9.  Waldspurger, C.A. Memory Resource Management in VMware ESX Server. In Proc. 5th Usenix Symp. on
    Operating Systems Design and Implementation (OSDI ’02), pp. 181-194, Boston, MA, December 2002.

10. Wong, T.M. and Wilkes, J. My Cache or Yours? Making Storage More Exclusive. In Proc. 2002
    Usenix Annual Technical Conference, pp. 161-175, Monterey, CA, June 2002.




 physical   virtual   guest memory   hcache   time   time relative   disk reads   reads relative
   cpus      cpus         (MB)       enabled   (s)     to hcache        (K)         to hcache
 --------   -------   ------------   -------  -----  --------------  ----------   --------------
    2          2           256         yes      51         --             6             --
    2          2           256         no       62        122%           98           1633%
    2          2          1024         no       63        123%           98           1633%
    4          2           256         yes      45         --             6             --
    4          2           256         no       56        124%           98           1633%
    4          2          1024         no       57        127%           97           1616%
    4          4           256         yes      26         --             6             --
    4          4           256         no       39        150%           98           1633%
    4          4          1024         no       38        146%           98           1633%
    4          4          2048         no       39        150%           98           1633%

         Table 1. Linux compiles (using cold page cache and ccache) -- hcache is superior



 physical   virtual   guest memory   hcache   time   time relative   disk reads   reads relative
   cpus      cpus         (MB)       enabled   (s)     to hcache        (K)         to hcache
 --------   -------   ------------   -------  -----  --------------  ----------   --------------
    4          4           128         yes      53         --           323             --
    4          4           128         no      114        215%          537            166%
    4          4           256         no       52         98%           18              6%
    4          4          1024         no       52         98%            0              0%

      Table 2. Linux compiles (with warm page cache and not using ccache) -- hcache compensates
                                    for underprovisioned memory





				