
Partitioned Log-based Flash Memory Management and Adaptive Cleaning

Mei-Ling Chiang+ and Ruei-Chuan Chang*

+Department of Information Management, National Chi-Nan University, Puli, Taiwan, R.O.C.
*Department of Computer and Information Science, National Chiao-Tung University, Hsinchu, Taiwan, R.O.C.

joanna@ncnu.edu.tw, rc@cc.nctu.edu.tw

Abstract: Under the requirements of small size, light weight, and low power consumption, battery-backed RAM and flash memory are commonly employed for data storage in consumer electronics and embedded devices. However, RAM and flash memory differ fundamentally from the hard disks used in conventional storage systems, and this difference imposes the need to redesign storage systems for them.

In the design of flash memory-based storage systems, erase management is the main concern, since it severely affects system performance, flash memory lifetime, and power consumption. To address this issue, this paper employs a partitioned log-based mechanism with DAC data clustering to manage data in flash memory. In addition, adaptive cleaning is proposed to adapt to different access behaviors for better cleaning performance. Performance evaluation demonstrates improved system performance, which results in prolonged flash memory lifetime and reduced power consumption; the latter is especially important for consumer electronics and mobile devices.

Keywords: Flash Memory, Cleaning Policy, Consumer Electronics

Introduction

Flash Memory Characteristics

Flash memory [1,7,8,16,23] is non-volatile: it retains stored data when the system is powered off. Unlike DRAMs and SRAMs, it needs no battery for data retention. Also, unlike EPROMs and EEPROMs, flash memory supports in-system updates: stored data can be erased and rewritten under system processor control. Flash memory is therefore ideal for data storage and for ROM replacement due to its upgradability [7,8].

In addition to non-volatility and in-system updatability, flash memory offers fast access speed, low power consumption, shock resistance, high reliability, small size, and light weight. Because of these attractive features, together with decreasing price and increasing capacity, flash memory is widely used in consumer electronics, embedded systems, and mobile computers. Examples are digital cameras, voice recorders, set-top boxes, cellular phones, pagers, notebooks, hand-held devices, Personal Digital Assistants (PDAs), etc.

Though flash memory has many advantages, its special hardware characteristics impose design challenges on storage systems. For example, flash memory cannot be written over existing data unless it is erased in advance. For erasure, the whole flash memory storage space is partitioned into fixed-size erasure units, and an erasure unit is the basic unit of an erase operation. Hardware manufacturers define the erasure unit size (e.g., 64 Kbytes or 8 Kbytes). The erase operation is slow and consumes comparatively much power. Besides, the number of times an erasure unit can be erased is limited (e.g., 10,000 to 1,000,000 times).

Though flash memory read speed is comparable to that of DRAM, write access is much slower than read access, which especially distinguishes flash memory from other storage media. Besides, flash memory is either byte accessible or block accessible, depending on the flash technology used. For example, NOR-type flash memory [7,11,12,16,20] is byte accessible for both reads and writes, whereas NAND-type flash memory [7,16,18,19] requires data to be read and written in larger blocks (e.g., 512 bytes).

Impact on Storage System Design

To overcome the limitations imposed by these hardware characteristics (i.e., bulk erasure, slow erasure, and limited endurance), and thus to prolong flash memory lifetime, attain better system performance, and save power, erase operations should be avoided as much as possible [1,3-5,7,8,13,23,24]. Erasure units should also be erased evenly to maximize flash memory lifetime [4,5,7,23,24]: if flash memory is not evenly erased, hot spots will soon wear out even though the overall utilization of the flash memory is still low. Besides, due to the disproportionate read/write speeds, write accesses to flash memory should be reduced as well [2,7,14,23,24]. These are the primary principles in implementing flash memory-based storage systems.

Flash memory-based storage systems are commonly implemented in two ways: either designed from scratch [22,24] or constructed from existing file systems plus new device driver implementations [6,9,10,13,15]. In the latter approach, the file system generates requests to the drivers in terms of disk block and sector numbers, and the device drivers translate the disk block accesses into flash memory addresses. No matter which approach is used, it is important to avoid having to erase during every data update. The non-in-place update mechanism is generally used to achieve this [1,4-7,9,10,13,15,17,20-22,24]. Under this mechanism, data updates are written to empty flash memory space, and obsolete data are marked invalid as garbage that a software cleaner later reclaims.

Cleaning policies, which determine the behavior of the cleaner (such as which erasure units to clean, when to clean them, and how to clean them), severely affect how flash memory is worn and the amount of erasure. Thus, cleaning policies are key to flash memory management [1,4,5,8,13,24]. Two major concerns regarding cleaning policies are the segment selection algorithm, which determines the erasure units to be cleaned, and the data reorganization method, which determines how to migrate the valid data in the selected erasure units. The data reorganization method has the most important impact [5]. Therefore, the challenge is how to effectively reorganize data such that hot data are separated from cold data while keeping processing overhead low. Clustering hot data together in the same erasure units for better cleaning effectiveness has been shown in previous literature [4,5,13].

Outline of This Paper

In this paper, partitioned log-based flash memory management [5] is used to manage data in flash memory, and Dynamic dAta Clustering (DAC) [5] is applied for data clustering. Under this method, the flash memory storage space is managed as multiple regions, and each region is laid out as a log, which requires a cleaner to reclaim the space occupied by obsolete data. The essence is that each region is devised to accommodate data with different write access frequencies, in the hope that data in the same region will become garbage at the same rate, which decreases cleaning overhead. The DAC clustering method reorganizes data by using a state machine to dynamically cluster data according to their update frequencies. Notably, its processing overhead is low and data are classified in a fine-grained way, which further contributes to cleaning performance.

Besides, for selecting the erasure units to clean, previous work [4,5,13,24] showed that static policies, which do not change with variations in data access behavior, do not perform well for all access patterns. Since no single selection algorithm performs well for all data access patterns, we propose an adaptive cleaning algorithm that combines the Cost Age Times (CAT) policy [4,5] and the greedy policy [4,5,7,13,17,21,22,24] to adapt to variations in data access patterns.

This paper describes the design and implementation of a flash memory server that utilizes partitioned log-based flash memory management with DAC data clustering and adaptive cleaning. Performance evaluation shows that the number of erase operations performed and the cleaning overhead are significantly reduced. Besides, flash memory is evenly worn.

The rest of this paper is organized as follows. Section 2 describes the partitioned log-based flash memory management scheme and the DAC clustering method. Section 3 presents the adaptive cleaning policy. Section 4 describes the design and implementation of the flash memory server. Section 5 shows experimental results, and Section 6 concludes this paper.
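A minimal sketch of the non-in-place update mechanism described above (our illustration in Python; the class and names are not from the paper) shows how a mapping table redirects each update to free space while the obsolete copy becomes garbage for the cleaner:

```python
FREE, VALID, GARBAGE = "free", "valid", "garbage"

class NonInPlaceStore:
    """Toy model of non-in-place update: an update never overwrites the
    old copy; it claims a fresh block and leaves garbage behind."""

    def __init__(self, num_blocks):
        self.state = [FREE] * num_blocks   # state of each physical block
        self.mapping = {}                  # logical block no. -> physical block no.

    def write(self, logical):
        phys = self.state.index(FREE)      # simplistic free-space search
        old = self.mapping.get(logical)
        if old is not None:
            self.state[old] = GARBAGE      # obsolete copy awaits the cleaner
        self.state[phys] = VALID
        self.mapping[logical] = phys       # redirect the logical block
        return phys

store = NonInPlaceStore(num_blocks=4)
store.write(0)   # initial write of logical block 0
store.write(0)   # update: lands in a new physical block, old copy is garbage
```

A real server would additionally trigger the cleaner when free blocks run low, erase a victim erasure unit, and copy its valid blocks elsewhere; the sketch stops at garbage marking.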

Partitioned Log-based Flash Memory Management and DAC Data Clustering

We use the partitioned log-based mechanism [5] to manage data in flash memory. Under this mechanism, flash memory is logically partitioned into multiple regions, as shown in Figure 1(a). Each region is laid out as a log and consists of a set of flash erasure units that need not be physically contiguous. The idea is to cluster data blocks of similar write access frequencies in the same region, in the hope that data in the same region will become garbage at the same rate, so that most of the garbage forms together. Only write operations are considered because read operations do not incur cleaning.

[Figure 1: Partitioned log-based flash memory management and DAC data clustering. (a) Partitioned log-based flash memory management: Region 1 through Region N range from Bottom (cold) to Top (hot). (b) State transition in DAC data clustering: "young & updated" blocks move one region toward the Top; "too old" blocks move one region toward the Bottom.]

To cluster data blocks of similar write access frequencies in the same region, the problem that write access frequencies may change over time must also be dealt with. We use the DAC (Dynamic dAta Clustering) approach [5], which actively migrates data blocks between regions when their access frequencies change. Data blocks are moved toward the Top region (i.e., the hottest) if their update frequencies increase, and toward the Bottom region (i.e., the coldest) if their update frequencies decrease. In this way, regions can dynamically shrink or grow.

To decide how and when to migrate data blocks between regions, DAC clustering uses a state machine for region switching. The state machine contains several states, and its state transition diagram is shown in Figure 1(b). Each data block is associated with a state indicating the region it resides in. The starting state is "Bottom region," where newly created data blocks reside. State switching occurs only when data blocks are updated or when garbage collection occurs, as follows. When a data block is updated, the obsolete copy in the original region is invalidated as garbage and the updated data are written to free space in the next upper region. When an erasure unit is selected for cleaning, all of its valid data blocks are migrated to the next lower region by being copied into free space there.

An additional criterion, a time threshold, is added for state switching, because the degree of hotness of a block is determined by the number of times the block has been updated but degrades as the block's age grows. If a block is to be promoted, it must also be young for the current region (i.e., its resident time in the current region is smaller than a certain threshold); otherwise, the updated data are written to the free space in the current region. Likewise, if a block is to be demoted, it must also be old for the current region (i.e., its resident time exceeds a certain threshold); otherwise, the block is migrated to the free space in the current region.

Through this active data migration between neighboring regions at data update time and at cleaning time, the Top region gathers the data updated most frequently during recent accesses. The closer a block is to the Top region, the hotter it is; the farther away, the colder. Therefore, data blocks of similar write access frequencies can be effectively clustered.

Adaptive Cleaning

Several segment selection algorithms have been proposed. In the greedy policy [4,5,13,24], the cleaner always selects the erasure units with the largest amount of garbage for cleaning, hoping to reclaim as much space as possible with the least cleaning work. It works well for uniform access; however, it was shown to perform poorly under high locality of reference [4,5,13,24].

In the cost-benefit policy [13], the cleaner chooses to clean the erasure units that maximize the formula

    a * (1 - u) / (2u),

where u (utilization) is the percentage of valid data in the erasure unit and (1 - u) is the amount of free space reclaimed. The age a is the time since the most recent modification (i.e., the last block invalidation) and is used as an estimate of how long the space is likely to stay free. The cost of cleaning an erasure unit is 2u (one u to read the valid blocks and the other u to write them back).

In the Cost Age Times (CAT) policy [4], the cleaner chooses to clean the erasure units that minimize the formula

    Cleaning Cost * (1 / Age) * Number of Cleanings.

The cleaning cost is defined as the cleaning cost per useful write to flash memory, u/(1-u), where u (utilization) is the percentage of valid data in an erasure unit: every (1-u) of useful writes incurs the cleaning cost of writing out u valid data. The age is defined as the time elapsed since the erasure unit was created. The number of cleanings is defined as the number of times the erasure unit has been erased. The basic idea of the CAT formula is to minimize cleaning cost while giving erasure units just cleaned more time to accumulate garbage for reclamation. In addition, to avoid concentrating cleaning activities on a few erasure units, the erasure units erased the fewest number of times are given more chances to be selected for cleaning.

In the dynamic policy used in the Linux flash memory FTL driver [9,10], the cleaner uses the greedy policy to select erasure units 90% of the time, while for the remaining 10% it chooses the erasure units that have been erased the fewest number of times. The idea is to avoid concentrating the erasures on certain erasure units.

Previous studies [4,5,13,14] showed that the performance of static cleaning policies is sensitive to data access behavior, and no single cleaning policy performs well for all access patterns when selecting erasure units to clean. For example, the cost-benefit policy [13] and the CAT policy [4] take the age of an erasure unit into account. These two policies assume that data accessed recently will be accessed again soon, so hot erasure units are chosen for cleaning less often. They were shown to perform well under locality of reference [4,5,13]. However, for uniform access, the greedy policy performs best [4,5,13,24].

Since data access patterns may change over time, an adaptive cleaning policy is proposed that combines the CAT policy and the greedy policy, reacting to changes in data access patterns by dynamically choosing between the two. When recent references exhibit uniform access, the greedy policy is used; otherwise, the CAT policy is used. The problem of adaptive cleaning therefore becomes how the cleaner knows the recent write access patterns.

An approximate method is devised to determine whether uniform access has occurred. The idea is to use a Write Monitor to observe the incoming write requests. The Write Monitor maintains a block reference table that counts the number of times each block has been accessed. Again, only write accesses are considered, since read accesses do not incur cleaning. When cleaning is needed, an Analyzer examines all the counts recorded in the block reference table. If the variance of these counts is large (compared with the variance expected if access were uniform), then non-uniform access is reported.

After cleaning, all the counters in the block reference table are reset to zero. The monitoring period for write references thus extends from the last cleaning time to the current cleaning time, so only recent accesses affect the reference analysis.

System Design and Implementation

A partitioned log-based flash memory server with DAC data clustering and adaptive cleaning has been implemented. It uses the non-in-place update scheme and manages flash memory as fixed-size blocks. A table-mapping method maps logical block numbers to physical locations.

Figure 2 shows the data layout on flash memory. The per-block information records information for every block, such as the logical block number and the region number indicating which region the block currently resides in.

The translation table, shown in Figure 3, is constructed to speed up address translation from logical block numbers to physical addresses in flash memory. Each entry also records the region number indicating which region the block belongs to and a timestamp indicating when the block was allocated in the region; the DAC state machine uses this information to decide whether a block's state should be switched. The lookup table, shown in Table 1, contains segment¹ information duplicated from the segment headers on flash memory.

¹ Throughout this paper, we use "segment" to denote a hardware-defined erasure unit.
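The selection policies and the uniform-access test described in Section 3 can be sketched as follows (our illustration: the scoring functions follow the greedy and CAT formulas quoted above, while the variance threshold `factor` and the segment fields are assumptions, not values from the paper):

```python
from statistics import pvariance, mean

def greedy_score(seg):
    """Greedy: prefer the segment with the most garbage (lowest utilization u)."""
    return seg["u"]

def cat_score(seg):
    """CAT: minimize (u / (1 - u)) * (1 / age) * number_of_cleanings."""
    cost = seg["u"] / (1.0 - seg["u"])   # cleaning cost per useful write
    return cost * (1.0 / seg["age"]) * seg["cleanings"]

def is_uniform(write_counts, factor=2.0):
    """Analyzer: report uniform access when the variance of the per-block
    write counts is small.  Under ideally uniform random writes the counts
    are roughly Poisson, whose variance equals the mean, so we compare the
    observed variance against `factor` times the mean (assumed threshold)."""
    return pvariance(write_counts) <= factor * mean(write_counts)

def select_victim(segments, write_counts):
    """Adaptive cleaning: greedy under uniform access, CAT otherwise."""
    score = greedy_score if is_uniform(write_counts) else cat_score
    return min(segments, key=score)

segments = [
    {"id": 0, "u": 0.9, "age": 100, "cleanings": 3},
    {"id": 1, "u": 0.4, "age": 10,  "cleanings": 1},
    {"id": 2, "u": 0.6, "age": 500, "cleanings": 2},
]
# Near-uniform counts -> greedy picks the segment with the most garbage.
victim = select_victim(segments, write_counts=[5, 4, 6, 5, 5])
```

With a skewed count vector, the same call falls back to the CAT score, which favors older, less-erased segments even when they hold somewhat less garbage.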

                                                                                                                                                        In d ex
                                                                          se gm en t se gm en t           ...                                           seg m en t
The region table, shown in Figure 4, keeps track                                                                                        ...
of information for each region, such as the active
segment indicating the segment currently used for
                                                          S eg m en t
data writing in the region, a region segment list         S u m m a ry
                                                                       n o . of segm en ts
                                                                                                       S eg m en t H ea d er
                                                          H ea der                                 n o . of erase op eratio ns      P er-B lo ck In fo rm a tio n
keeping track of each segment in the region, etc.                      n o . of b lock s
                                                                                                            tim estam p
                                                                                                                                        lo g ical b lock n o .
                                                                                                           in -u sed flag
When an active segment has no free space, a                                                               cleanin g flag                     reg ion n o .
                                                                                                   p er-blo ck in fo rm atio n               tim estam p
segment taken from the free segment list is used                                                   p er-blo ck in fo rm atio n
                                                                                                                                           u p d ate-tim es
                                                                                                                                           in -u sed flag
as the active segment and the change of active                                                                   .
                                                                                                                 .
                                                                                                                 .                          in v alid flag

segment is written to the index segment as an ap-                                                  p er-blo ck in fo rm atio n

pend-only log.

Experimental Results and analysis                                         Figure 2: Data layout on flash memory.

A partitioned log-based flash memory server
utilizing the DAC data clustering and adaptive
cleaning was implemented on Linux. Table 2                                      segment block region timestamp
                                                                                  no.     no.    no.
lists the experimental environment. To measure
                                                                logical
the effectiveness of alternate cleaning policies,               block
                                                                no. i
                                                                                       j      k                         Translation table
                                                                                      .
                                                                                      .
                                                                                      .       .
                                                                                              .
                                                                                              .
four policies were implemented in the server:
greedy policy [4,5,13,24] (Greedy), cost-benefit
policy [13] (Cost-benefit), CAT policy [4,5]                                                 segment
                                                                                             header
                                                                                                            jth segment
                                                                                                                                                flash memory
(CAT), and adaptive cleaning (Adaptive).                                  ...                                                                      ...

Because we wanted to know whether adaptive                                                   kth per
                                                                                             block information
                                                                                                                           kth data block


cleaning policy adapts to change of access pat-
terns and whether DAC data clustering performs
                                                              Figure 3: Translation table and address translation.
well for combination of different workloads, a
synthetic workload combining random access and
locality access was created. The workload con-
tained 4-phase data accesses: the first and the
third phases were locality access in which 90% of
                                                                            Erase Timest Used Cleaning                             Valid           First free
accesses were to 10% of data; the other phases                              count amp    flag   flag                               blocks            block
were random access. In each phase, 40-Mbyte                                                                                        count
data were written to flash memory in 4-Kbyte
                                                           Segment
units. Totally, 160-Mbyte data were written. The             no. i
flash memory utilization is initially set to 90%.                                                                      .
Benchmarks were created to overwrite the initial                                                                       .
data according to this synthetic workload. The                                                                         .
block size that the server managed was 4 Kbytes.
The number of states with which DAC state ma-                      Table 1: Lookup table to speed up cleaning.
chine was configured ranged from 1 to 4. The
time threshold for state switching was set to 30
minutes.

Performance Results

Figure 5 shows that when applying DAC data
clustering (i.e., the number of regions is more
than 1), each policy reduced large amounts of
erase operations and blocks copied and the
average throughput was largely increased as
well. For example, Adaptive incurred
15.97-22.55%        fewer   erase    operations,
20.12-28.41% fewer blocks copied, 16.5-23.3%
                                                               Figure 4: Region table and region segment lists.



                                                     11
                                                                 The block reference table maintained by the
Hardware:                                                        adaptive cleaner requires a substantial amount of
    Pentium 133 MHz with 32-Mbyte RAM
    PC Card Interface Controller: Omega Micro 82C365G
                                                                 main memory: 4 bytes per blocks. For example,
    Intel Series 2+ 24Mbyte Flash Memory Card                    for a 24-Mbyte flash memory with 4-Kbyte
                        (erasure unit size:128 Kbytes)           blocks, the table requires 23-Kbyte main memory.
Operating system:                                                Because current flash memory capacity is still
    Linux Slackware 96                                           small, the space overhead is limited. However, if
    (Kernel version: 2.0.0,PCMCIA package version: 2.9.5)        flash memory capacity becomes large, a small ta-
Table 2: Experimental environment.

less cleaning costs (cleaning cost is measured using the metric defined in [5]). Throughput improvement was 19.54-25.19%.

The results also show that applying an effective cleaning policy can further reduce the cleaning overhead. Adaptive performed best among all policies, though its advantage over CAT is not prominent. This is because Adaptive behaves like Greedy under random access but like CAT under access with locality. Since half of the synthetic workloads were random accesses, in which Greedy slightly outperformed CAT, Adaptive slightly outperformed CAT overall. However, Adaptive performed 0.1-2% worse than CAT in average throughput, as shown in Figure 5(d), because Adaptive incurs the extra cost of tracking access patterns.

The above results demonstrate that DAC data clustering, which effectively clusters data according to write access frequencies, can reduce cleaning cost for a variety of cleaning algorithms. Our original motivation for adaptive cleaning was to combine the advantages of the greedy policy and the CAT policy. However, adaptive cleaning needs more evaluation of its controlling factors, such as how to detect access behaviors and when to switch between policies.

Discussion of Time and Space Overheads

Computation and space overheads are the disadvantages of Adaptive. To adapt to variations in write access patterns, the adaptive cleaner needs to record each write reference and compute the variance. These computations take up a substantial amount of time. However, Adaptive still performed well in average throughput since it incurred the smallest numbers of erase operations and blocks copied. Because CPU speed is advancing faster than flash memory erase speed, we expect the impact of this computation time to be largely reduced.

…ble with a certain replacement policy may be appropriate.

Conclusions

Storage organizations using battery-backed RAM and flash memory have appeared widely in consumer electronics, embedded applications, and mobile devices. As capacity increases and price decreases, flash memory is becoming widely used. More and more applications will use flash memory as their storage, generating large numbers of write and erase operations. Wear leveling will be very important, especially as flash memory capacity increases. Effective cleaning policies can maximize flash memory usage, use flash memory cost-effectively, improve system performance, and reduce power consumption, which is an especially critical issue for mobile computers and consumer electronics.

We have presented the design and implementation of a partitioned log-based storage server utilizing flash memory. The server uses the non-in-place-update approach to avoid having to erase during every update, and employs the DAC data clustering technique to cluster frequently accessed data and reduce cleaning overhead. The adaptive cleaning policy is a hybrid method that combines the advantages of the greedy policy and the CAT policy; it adapts to changes in access patterns while reducing the number of erase operations performed and wearing flash memory evenly. Computation and space overheads are its shortcomings.

Performance evaluations show that with the adaptive cleaning policy and the fine-grained DAC data clustering, the proposed storage server not only reduces a large number of erase operations but also wears flash memory evenly. The result is extended flash memory lifetime and reduced cleaning overhead.

Several factors are important in determining the effectiveness of adaptive cleaning, such as how to detect access patterns and when to switch between segment selection algorithms. Space utilization improvement is also needed. The access behavior of the workload has a great impact as well.

Acknowledgement

This research was supported in part by the National Science Council of the Republic of China under grant No. NSC89-2213-E-260-027.

References

[1] M. Assar, S. Nemazie, and P. Estakhri, "Flash Memory Mass Storage Architecture," United States Patent Number 5,388,083, Feb. 7, 1995.
[2] M. Baker, S. Asami, E. Deprit, J. Ousterhout, and M. Seltzer, "Non-Volatile Memory for Fast, Reliable File Systems," Proceedings of the 5th International Conference on Architectural Support for Programming Languages and Operating Systems, Oct. 1992.
[3] R. Caceres, F. Douglis, K. Li, and B. Marsh, "Operating System Implications of Solid-State Mobile Computers," Fourth Workshop on Workstation Operating Systems, Oct. 1993.
[4] M. L. Chiang and R. C. Chang, "Cleaning Policies in Mobile Computers Using Flash Memory," Journal of Systems and Software, Vol. 48, No. 3, pp. 213-231, Nov. 1999.
[5] M. L. Chiang, Paul C. H. Lee, and R. C. Chang, "Using Data Clustering to Improve Cleaning Performance for Flash Memory," Software Practice & Experience, Vol. 29, No. 3, pp. 267-290, Mar. 1999.
[6] R. Dan and J. Williams, "A TrueFFS and FLite Technical Overview of M-Systems Flash File Systems," 80-SR-002-00-6L Rev. 1.30, http://www.m-sys.com/tech1.htm, Mar. 1997.
[7] B. Dipert and M. Levy, Designing with Flash Memory, Annabooks, 1993.
[8] F. Douglis, R. Caceres, F. Kaashoek, K. Li, B. Marsh, and J. A. Tauber, "Storage Alternatives for Mobile Computers," Proceedings of the 1st Symposium on Operating Systems Design and Implementation, 1994.
[9] D. Hinds, "Linux PCMCIA HOWTO," http://hyper.stanford.edu/~dhinds/pcmcia/doc/PCMCIA-HOWTO.html, v2.5, Feb. 1998.
[10] D. Hinds, "Linux PCMCIA Programmer's Guide," http://hyper.stanford.edu/~dhinds/pcmcia/doc/PCMCIA-PROG.html, v1.38, Feb. 1998.
[11] Intel, Flash Memory, 1994.
[12] Intel Corp., "Series 2+ Flash Memory Card Family Datasheet," http://www.intel.com/design/flcard/datashts, 1997.
[13] A. Kawaguchi, S. Nishioka, and H. Motoda, "A Flash-Memory Based File System," Proceedings of the 1995 USENIX Technical Conference, Jan. 1995.
[14] B. Marsh, F. Douglis, and P. Krishnan, "Flash Memory File Caching for Mobile Computers," Proceedings of the 27th Hawaii International Conference on System Sciences, 1994.
[15] J. Murray, Inside Windows CE, 1998.
[16] P. Pavan, R. Bez, P. Olivo, and E. Zanoni, "Flash Memory Cells - An Overview," Proceedings of the IEEE, Vol. 85, No. 8, pp. 1248-1271, Aug. 1997.
[17] M. Rosenblum and J. K. Ousterhout, "The Design and Implementation of a Log-Structured File System," ACM Transactions on Computer Systems, Vol. 10, No. 1, 1992.
[18] K. Sakui, Y. Itoh, R. Shirota, Y. Iwata, S. Aritome, T. Tanaka, K. Imamiya, J. Kishida, M. Momodomi, and J. Miyamoto, "NAND Flash Memory Technology and Future Direction," VLSI Device Engineering Laboratory, Toshiba Corporation, Japan.
[19] SanDisk Corporation, SanDisk SDP Series OEM Manual, 1993.
[20] D. See and C. Thurlo, "Managing Data in an Embedded System Utilizing Flash Memory," Technical Paper, Intel Corporation, Document Rev. 1.01, http://www.intel.com/design/flcard/papers/esc_flsh.htm, Jun. 30, 1995.
[21] M. Seltzer, K. Bostic, M. K. McKusick, and C. Staelin, "An Implementation of a Log-Structured File System for UNIX," Proceedings of the 1993 Winter USENIX, 1993.
[22] P. Torelli, "The Microsoft Flash File System," Dr. Dobb's Journal, Feb. 1995.
[23] S. E. Wells, "Method For Wear Leveling In a Flash EEPROM Memory," United States Patent Number 5,341,339, Aug. 23, 1994.
[24] M. Wu and W. Zwaenepoel, "eNVy: A Non-Volatile, Main Memory Storage System," Proceedings of the 6th International Conference on Architectural Support for Programming Languages and Operating Systems, 1994.
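The hybrid selection described above can be illustrated with a minimal sketch: Greedy picks the segment with the most invalid blocks, while CAT weighs cleaning cost by age and erase count, and the adaptive cleaner chooses between the two according to the detected access pattern. This is only an illustration under assumed data structures (the `Segment` fields and the CAT formulation `u/(1-u) * cleanings / age` follow the general descriptions in the text and [5], not our actual implementation).

```python
# Illustrative sketch only: how an adaptive cleaner might switch
# between Greedy and CAT victim selection. Field names and the
# exact CAT weighting are assumptions, not the paper's code.

from dataclasses import dataclass

@dataclass
class Segment:
    valid: int        # number of valid blocks still in the segment
    total: int        # blocks per segment
    age: float        # time since the segment's data was written
    cleanings: int    # how many times this segment has been erased

def greedy_score(seg: Segment) -> float:
    # Greedy: clean the segment with the most garbage,
    # i.e., the fewest valid blocks to copy out.
    return seg.valid

def cat_score(seg: Segment) -> float:
    # CAT (Cost Age Times): cleaning cost u/(1-u), weighted down
    # for older data and up for frequently erased segments so
    # that wear is evened out.
    u = seg.valid / seg.total
    cost = u / (1.0 - u) if u < 1.0 else float("inf")
    return cost * (seg.cleanings + 1) / max(seg.age, 1e-9)

def pick_victim(segments, locality_detected: bool) -> Segment:
    # Adaptive: behave like CAT when locality is detected,
    # like Greedy under random access.
    score = cat_score if locality_detected else greedy_score
    return min(segments, key=score)
```

Note how the two policies can disagree: a nearly empty but young, often-erased segment is Greedy's first choice, while CAT may prefer an older, less-worn segment.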
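The pattern-detection step mentioned above can be sketched as follows: record per-block write counts and use their variance as a locality signal, since uniform (random) access yields near-zero variance while skewed access yields a large one. The class name, the plain population-variance formula, and the threshold are all illustrative assumptions, not the paper's actual mechanism.

```python
# Illustrative sketch only: detecting write locality from the
# variance of per-block write counts. Threshold is a made-up
# tuning parameter, not a value from the paper.

from collections import Counter

class AccessPatternDetector:
    def __init__(self, threshold: float):
        self.writes = Counter()     # per-block write counts
        self.threshold = threshold  # variance above this => locality

    def record_write(self, block: int) -> None:
        self.writes[block] += 1

    def variance(self) -> float:
        counts = list(self.writes.values())
        n = len(counts)
        if n == 0:
            return 0.0
        mean = sum(counts) / n
        return sum((c - mean) ** 2 for c in counts) / n

    def locality_detected(self) -> bool:
        return self.variance() > self.threshold
```

This also makes the space overhead discussed above concrete: a counter per written block must be kept, and the variance recomputed as the workload evolves.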
[Figure 5 charts not reproduced in this extraction. The six panels plot results against the number of regions for the Greedy, Cost-benefit, CAT, and Adaptive policies: (a) number of erase operations, (b) number of blocks copied during cleaning, (c) simplified cleaning cost, (d) average throughput (bytes/sec), (e) degree of uneven wearing, and (f) breakdown of policy selections in adaptive cleaning.]

Figure 5: Performance results of DAC data clustering and adaptive cleaning for synthetic workload.