HIERARCHICAL DISK CACHE MANAGEMENT IN RAID 5 CONTROLLER*

Jung-ho Huh, Tae-mu Chang, Dept. of Computer Engineering, Dongguk University, Seoul, Korea

ABSTRACT

In a RAID system, the disk cache is one of the important elements in improving system performance. A two-level cache shows superior performance compared with a single cache and exploits both temporal and spatial locality. The proposed cache system consists of two levels: the first-level cache is a set-associative cache with a small block size, whereas the second-level cache is a fully associative cache with a large block size. In this paper, a RAID 5 disk cache model located on the disk controller is presented, which can improve disk input/output time, especially for a large-capacity disk cache, and which can maintain consistency effectively. The aim is to show, in terms of hit ratio and service time, that the proposed two-level cache structure is in fact a substantial improvement.

1. INTRODUCTION

Recently, processor performance has been improving by 40-100% every year, whereas the performance of the magnetic disk, which contains mechanical parts, has improved by a mere 7%. Accordingly, as a means to bridge the speed gap between the CPU and main memory on one side and disk input/output on the other, RAID (Redundant Arrays of Inexpensive Disks), which can increase the reliability and performance of magnetic disks, has come into wide use [1]. As disk caches are widely used to reduce disk accesses, and the price of DRAM (Dynamic Random Access Memory) has been falling to one hundredth of its previous price every ten years [2], there is an increasing tendency to use large-capacity caches. Hence, this paper presents a two-level disk cache model for RAID 5, based on
___________________________________________
* Copyright © 2003 by the Consortium for Computing Sciences in Colleges. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the CCSC copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Consortium for Computing Sciences in Colleges. To copy otherwise, or to republish, requires a fee and/or specific permission.

JCSC 19, 2 (December 2003)

both temporal and spatial locality, which can serve as a practical implementation method for a disk cache of enlarged capacity and which can improve its performance as well. In a two-level disk cache, the caches on different levels operate differently depending on the write policy used to maintain consistency and on the inclusion property. In this paper, the two-level disk cache is divided into NVM (Non-Volatile Memory) Model 1, in which the two caches satisfy the inclusion property to a certain extent, and NVM Model 2, in which the two caches do not satisfy the inclusion property at all. A conceptual design of the two-level disk cache is presented, consisting of a non-volatile memory device backed by a lithium battery as the first-level cache and a volatile memory device as the second-level cache, together with its operation method. The aim is to improve execution speed, compared with a conventional disk cache controller using a non-volatile memory device, by increasing the cache hit ratio.

2. RELATED RESEARCH

A typical single-level cache [3] consisting of small blocks, or a cached RAID [4], uses an LRU policy and, on a cache miss, brings the missed blocks directly from the disk arrays. This has the virtue of easy implementation, but the miss ratio is too high due to insufficient prefetching. For this reason, two-level cache models of diverse structure began to be researched to improve cache performance: the split temporal/spatial cache (STS) [4], the selective cache [5], the victim cache [6], the dual data cache [7], the HP 7200 assist cache [8], and the NTS cache proposed in [9]. These two-level caches are superior in performance to a single-level cache. The difference between these caches and the cache proposed in this paper lies in the associativity and the block size.
Caches [4] and [5], though they support different block sizes, have the same associativity, whereas caches [6], [8], and [9], though they support the same block size, have different associativity. The cache proposed in this paper improves performance by using temporal and spatial locality effectively, employing both different associativities and different block sizes. Caches [4], [5], and [9] are designed to detect and exploit one kind of locality only; in this sense they differ from the structure presented here, which enhances both temporal and spatial locality. Cache [7] is a dual data cache whose two caches sit on the same hierarchical level, which differs in hierarchy from the two-level cache proposed here. In this paper, the emphasis is on improving system performance in consideration of the characteristics and application environment of the RAID system.

3. NECESSITY FOR A TWO-LEVEL CACHE IN RAID

A cache in RAID is meaningful because memory is not accessed at random and locality exists [11]. In terms of temporal and spatial locality, the L1 cache can increase temporal locality and the L2 cache can increase spatial locality. This locality makes the cache an essential element of the RAID system, since the cache is used as a read/write buffer and reduces

CCSC: Northwestern Conference

access time of the large-capacity disk arrays, which operate at slow speed, thus improving the processing speed of application programs. The reasons why conventional memory cache techniques should be applied selectively to a RAID cache, based on an analysis of the application environment and constituent models, are as follows.

First, an increase in capacity is comparatively easy. A memory cache is difficult to build in large sizes due to limitations of manufacturing cost and configuration, but a RAID cache is built from SDRAM (Synchronous DRAM), whose capacity is comparatively easy to increase. For example, if the L1 cache is built from non-volatile memory devices backed by a lithium battery and the L2 cache from volatile memory devices, a high-performance two-level cache can be configured inexpensively.

Second, the spatial locality of the RAID system can be increased. Most computer systems use segment- or page-based memory algorithms, and a file that is logically consecutive is not physically consecutive in memory. A conventional hardware cache focuses primarily on temporal locality, but the RAID cache is the actual buffer that stores data from the disk arrays for the host, and in most operating systems files are stored in track or buffer units. Thus not only temporal but also spatial locality can be increased using the L2 cache, since benchmarks show that most application programs use more than 50% of adjacent data [12].

Third, the cache algorithms do not burden the host processor, since the RAID cache is operated by a processor embedded in the RAID controller [13]. The mapping and address-translation algorithms of a hardware cache run on the host processor, so their complexity matters there; the cache in a RAID system is operated by an embedded processor, and its algorithmic complexity does not affect the host processor.
For these reasons, a conventional memory cache design cannot be applied as-is to the RAID disk cache, necessitating a new cache design appropriate to RAID disks. That is, since the L1 cache increases temporal locality and the L2 cache increases spatial locality, what is needed is a two-level cache in the RAID, used as a read/write buffer. This reduces the access time to the large-capacity disk arrays, thus improving processing speed.

4. CONVENTIONAL DISK CACHE MODEL

A disk cache model using NVRAM is shown in Figure 1. It uses a non-volatile memory device for reads and writes.

Figure 1. Disk cache model using NVRAM



Based on the result of search and comparison in the cache, the operations on a hit or a miss for disk reads and writes are as follows:

1) Read hit: reads the data from VRAM.
2) Read miss: the requested block is staged from disk to VRAM and transmitted to the host, and the remaining portion of the track is also staged to VRAM.
3) Write hit: four modes are available.
- DFW (DASD Fast Write): writes to both NVRAM and VRAM.
- CFW (Cache Fast Write): writes to VRAM only.
- DC (Dual Copy): copies the data in duplicate to two different disk locations to enhance reliability.
- DCFW (Dual Copy with Fast Write): performs the DC mode and the NVRAM write at the same time.
The choice between DFW and CFW is determined by the importance of consistency with the disk as well as by the capacity of NVRAM.
4) Write miss: the remaining portion of the disk track that includes the block being written is staged, and the operation then proceeds as a write hit.

In this model, mostly under DFW mode, stage and destage are carried out asynchronously and thus do not delay read/write operations. However, a requested read/write must wait while the disk cache is in use; conversely, while the disk cache is being accessed, stage and destage cannot proceed, so this waiting is closely tied to disk cache performance. Also, according to recent research [14], frequent disk access lowers disk cache performance. That research indicates that, given the importance of consistency with the disk, write hits frequently incur destage when writes are numerous and NVRAM capacity is insufficient. This reduces the benefit of coalescing several write requests to the same address into one disk operation using NVRAM, with approximately 40% of updated data being rewritten within 30 seconds.
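The four write-hit modes can be sketched as a small dispatch routine. This is an illustrative sketch only; the mode names follow the list above, while the dictionary-based nvram/vram/disk stores are hypothetical stand-ins for the real devices.

```python
from enum import Enum

class Mode(Enum):
    DFW = "DASD fast write"        # write both NVRAM and VRAM
    CFW = "cache fast write"       # write VRAM only
    DC = "dual copy"               # duplicate write to two disk locations
    DCFW = "dual copy fast write"  # DC plus the NVRAM write

def write_hit(mode, addr, data, nvram, vram, disk):
    """Dispatch one write hit; nvram/vram are dicts, disk maps (addr, copy) -> data."""
    if mode in (Mode.DFW, Mode.DCFW):
        nvram[addr] = data              # non-volatile copy survives power loss
    if mode in (Mode.DFW, Mode.CFW):
        vram[addr] = data
    if mode in (Mode.DC, Mode.DCFW):
        disk[(addr, 0)] = data
        disk[(addr, 1)] = data          # second copy for reliability
```

The choice between DFW and CFW then reduces to which stores the dispatch touches, matching the consistency/capacity trade-off described above.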
In this paper, two types of two-level cache model are presented, in which the first-level cache near the processor is NVRAM and the second-level cache near the disk is VRAM. Their operation methods are varied and compared with the conventional disk cache, in an effort to demonstrate the superiority of the proposed two-level disk cache model. The two types are defined as NVM Model 1 and NVM Model 2. In NVM Model 1, the overhead of stage and destage is small, since VRAM includes the content of NVRAM to a certain extent, and disk operations such as stage/destage can proceed in parallel with cache operations. NVM Model 2 instead aims at an effective capacity gain: because VRAM need not duplicate the content of NVRAM, the usable cache capacity grows by roughly the size of NVRAM, while retaining the virtues of NVM Model 1.



5. TWO-LEVEL DISK CACHE MODEL IN RAID 5

Two types of disk cache model are proposed in this paper. In both, NVRAM (Non-Volatile RAM) is applied to the disk cache as the L1 cache, used for reads and writes, while the L2 cache consists of volatile memory devices. As in a common system using NVRAM, the total cache capacity is constituted similarly, as the sum of the NVRAM capacity and the volatile-memory capacity. Reads and writes on the L1 cache proceed in parallel with operations between the L2 cache and the disk. The lifetime of the small blocks cached selectively in the L1 cache is extended to increase temporal locality, and whenever a cache miss occurs, many adjoining small blocks are prefetched to increase spatial locality. The major characteristics of the two proposed models are as follows.

First, the L1 cache uses a set-associative mapping algorithm and the L2 cache uses a fully associative mapping algorithm. The L1 cache, with its small block size, uses set-associative mapping to lower the miss ratio and reduce thrashing; compared with direct mapping this improves efficiency, and compared with fully associative mapping it reduces search time and increases speed. The L2 cache is a spatial buffer holding a small number of comparatively large data blocks, for which fully associative mapping is chosen; this minimizes conflict misses and reduces time wasted on searching.

Second, the L1 cache uses an LRU replacement algorithm and the L2 cache uses an LFU replacement algorithm. The L1 cache uses LRU in consideration of temporal locality. Since the temporal locality of the running program has already been absorbed by the L1 cache, the requests reaching the L2 cache have reduced temporal locality and benefit less from LRU; therefore the L2 cache uses LFU, and replacement is applied only to lines whose content matches the disk.
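The two replacement policies can be sketched compactly. This is an illustrative model, not the controller's implementation: access() returns True on a hit and, on a miss at full capacity, evicts per LRU (for L1) or LFU (for L2).

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of the L1 policy: least-recently-used replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def access(self, key, value):
        if key in self.lines:
            self.lines.move_to_end(key)      # refresh recency on a hit
            return True
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict the least recently used line
        self.lines[key] = value
        return False

class LFUCache:
    """Sketch of the L2 policy: least-frequently-used replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}                      # key -> [value, frequency]

    def access(self, key, value):
        if key in self.lines:
            self.lines[key][1] += 1          # count the reuse
            return True
        if len(self.lines) >= self.capacity:
            victim = min(self.lines, key=lambda k: self.lines[k][1])
            del self.lines[victim]           # evict the least frequently used line
        self.lines[key] = [value, 1]
        return False
```

LRU rewards recency (temporal locality), while LFU rewards accumulated reuse, which better suits requests whose temporal locality was already filtered out by L1.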
Third, the line sizes of the L1 and L2 caches differ: the L1 line is a sector or a smaller unit, whereas the L2 line is a track, a larger unit. Line size is closely related to cache hit ratio, so the hit ratios of both levels depend on choosing it well. Since the L1 cache is faster than the L2 cache and the L2 cache must perform disk operations, the L2 line must be a track in order to avoid rotational-position-sensing miss delay with the disk. In contrast, the L1 line should be a sector to prevent fragmentation. Since the L2 cache maintains the tracks that include the content of the L1 cache, its temporal locality is preserved even when the requested I/O unit is large.

Fourth, with regard to write policy, write-back is used so that the L2 cache can be allocated exclusively to disk operations. Since the L1 cache is a non-volatile memory device, write-through is not considered. The conceptual structure of the two-level cache is shown in Figure 2. The L2 cache takes the form of large blocks to which many small blocks belong, and a large block remains in that form until it is replaced [14].



Figure 2. The structure of two-level cache

Also, when a large block is replaced, not only is the block itself evicted; the small blocks within it that were accessed beforehand move selectively to the L1 cache. Maintaining a small block size in the L1 cache increases the number of blocks in the cache space given to data with temporal locality.

5.1. NVM MODEL 1

The two-level disk cache NVM Model 1 for RAID 5 proposed in this paper is shown in Figure 3. Reads and writes are carried out in the L1 cache. The principle of its operation is as follows:
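The selective promotion on L2 replacement can be sketched as follows; the per-block referenced flag and the dictionary layout are assumptions made for illustration.

```python
def evict_l2_track(track, l1, l1_capacity):
    """On replacing a large L2 block, promote its previously accessed small
    blocks into L1. 'track' maps block addr -> (data, referenced)."""
    for addr, (data, referenced) in track.items():
        if referenced and len(l1) < l1_capacity:
            l1[addr] = data   # keep the temporally local small blocks cached
```

Only blocks with demonstrated reuse are promoted, which is one way the small L1 block size translates into more temporally local lines per byte of cache.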

Figure 3. NVM Model 1

(1) Both reads and writes are carried out in the L1 cache. On a read miss, the L2 cache is accessed; on an L2 hit, the data is brought into the L1 cache and accessed there.
(2) A write completes after accessing the L1 cache. If the line misses in the L2 cache, a stage operation into the L2 cache is performed asynchronously. Subsequently, the updated line is moved to the L2 cache and destaged to the disk. Stage and destage after a write are asynchronous operations, so they reduce the waiting time of subsequent read/write hits in the L1 cache.
(3) Although the L2 cache includes the content of the L1 cache, the contents need not match exactly. Even when a line is modified in the L1 cache and the destage operation is delayed, stability is still maintained.
(4) The L1 cache, in consideration of spatial locality, reads several lines sequentially, whereas the L2 cache prefetches the one track with the shortest seek distance.

In Figure 3, the parity logic updates the parity when a write is carried out on the L1 cache [15]. It operates in the ordinary RAID 5 write method, read-modify-write mode, carried out in a dedicated parity engine. A summary algorithm and the state diagram of each level are shown in Figure 4. NVM Model 1 uses a write-back update policy because the L1 cache is non-volatile memory, which reduces cache coherence overhead.
[Algorithm I (NVM Model 1)]
READ (L1):
    if HIT: transfer from L1 cache; update LRU status;
    else: send request to L2 cache;
READ (L2):
    if HIT: transfer to L1 cache (sequential prefetching);
    else: read from disk and restart READ;
WRITE (L1):
    if HIT: update L1 cache, marking DESTAGING;
    else: allocate a line frame, marking EXCLUSIVE;
DESTAGE (invoked asynchronously):
    transfer from L1 cache to L2 cache, marking DIRTY;
    transfer DIRTY lines to disk;
STAGE (invoked asynchronously):
    transfer EXCLUSIVE lines from disk;
    invoke a DESTAGE operation;
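A runnable sketch of the Model 1 read path, under simplifying assumptions: caches and disk are dictionaries, and a fixed prefetch window stands in for one-track staging.

```python
def read(addr, l1, l2, disk, prefetch=4):
    """Model 1 read path sketch: try L1, then L2, then stage from disk."""
    if addr in l1:
        return l1[addr], "l1-hit"
    if addr in l2:
        l1[addr] = l2[addr]                 # promote the small block to L1
        return l1[addr], "l2-hit"
    # Miss in both levels: stage from disk, prefetching adjacent blocks
    # into L2 (a stand-in for staging the whole track).
    for a in range(addr, addr + prefetch):
        if a in disk:
            l2[a] = disk[a]
    l1[addr] = l2[addr]
    return l1[addr], "miss"
```

The prefetch loop is what gives L2 its spatial locality: a miss on one block makes its neighbors L2 hits.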

(a) Algorithm

(b-1) L1 cache (b-2) L2 cache
(b) State diagram
Figure 4. NVM Model 1

5.2. NVM MODEL 2

The two-level disk cache NVM Model 2 for RAID 5 is shown in Figure 5. The L1 cache is used for read and write operations, whereas the L2 cache is used for read operations. The principle of operation is as follows:

Figure 5. NVM Model 2

(1) Reads are carried out in both the L1 and L2 caches. The L1 cache is searched first; on a hit, the read is carried out there, and otherwise the L2 cache is accessed. On an L1 miss, data hit in the L2 cache is transferred without first being copied into the L1 cache, which reduces the copying overhead from L2 to L1 and reduces the number of lines that exist in both caches simultaneously.
(2) A write completes after accessing the L1 cache. Regardless of a hit or miss in the L1 cache, the L2 cache is checked; on a miss, a stage operation into the L2 cache follows. Subsequently, updated lines are moved to the L2 cache and destaged to the disk. Stage and destage after a write are asynchronous operations, so they reduce the waiting time of subsequent read/write hits in the L1 cache.
(3) While updated lines have not yet been moved to the disk, the L2 cache includes the contents of the L1 cache; otherwise it does not. As in Figure 3, the parity logic operates in read-modify-write mode, carried out in a dedicated parity engine. A summary algorithm is shown in Figure 6.
[Algorithm II (NVM Model 2)]
READ:
    if HIT on L1 cache: transfer from L1 cache;
    else if HIT on L2 cache: transfer from L2 cache;
    else: read from disk and restart the L2 READ;
WRITE:
    if HIT: update L1 cache, marking DESTAGING;
    else: allocate a line frame, marking EXCLUSIVE;
DESTAGE (invoked asynchronously):
    transfer from L1 cache to L2 cache, marking DIRTY;
    transfer DIRTY lines to disk;
STAGE (invoked asynchronously):
    transfer EXCLUSIVE lines from disk;
    invoke a DESTAGE operation;
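The write path and the asynchronous DESTAGE step shared by both algorithms might look as follows; the explicit state map mirrors the DESTAGING/EXCLUSIVE/DIRTY markings above and is otherwise a simplification.

```python
def write(addr, data, l1, state):
    """Write completes in the non-volatile L1; the line is marked for
    asynchronous stage (on a miss) or destage (on a hit)."""
    hit = addr in l1
    l1[addr] = data
    state[addr] = "DESTAGING" if hit else "EXCLUSIVE"
    return hit

def destage(l1, l2, disk, state):
    """Asynchronous DESTAGE: move updated L1 lines to L2, then flush
    DIRTY L2 lines to disk."""
    for addr, st in list(state.items()):
        if st == "DESTAGING":
            l2[addr] = l1[addr]
            state[addr] = "DIRTY"
    for addr, st in list(state.items()):
        if st == "DIRTY":
            disk[addr] = l2[addr]
            state[addr] = "CLEAN"
```

Because write() returns as soon as L1 is updated, the host never waits on the two-step flush; destage() can run whenever the controller is otherwise idle.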

Figure 6. Algorithm of NVM Model 2

If NVM Model 2 uses the write-back update policy, its state diagram is the same as that of NVM Model 1; if it uses the write-through update policy, the state diagram is as shown in Figure 7. In this model, however, write-back performs better than write-through, again because the L1 cache is non-volatile memory, which reduces cache coherence overhead as in NVM Model 1.

(a) L1 cache (b) L2 cache

Figure 7. State diagram

5.3 CACHE COHERENCE

Cache coherence is one of the important elements of a two-level cache structure, because reducing coherence overhead improves disk cache performance. Figure 8 shows the state diagram of the L1 cache for cache coherence. Each node denotes a state of a cache line, and each arc denotes an operation causing a state transition. Read hits are left out of consideration, since they cause no state transition. In NVM Model 1, 'snoop' denotes an invalidation from the L2 cache needed to maintain the inclusion of contents between the levels; it is not needed in NVM Model 2. In Figure 8, 'Destaging' means the line was updated in the L1 cache on a write hit but has not yet been destaged to disk, and 'Exclusive' means a line frame was allocated on a write miss but has not yet been staged from disk.
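One possible encoding of these transitions is a lookup table; since Figure 8 survives here only as prose, the exact event set below is an assumption reconstructed from the text.

```python
# (state, event) -> next state, for an NVM Model 1 L1 line as described above.
# 'snoop' and 'replace' are assumed to write back before invalidating.
TRANSITIONS = {
    ("invalid", "r.miss"): "clean",
    ("invalid", "w.miss"): "exclusive",
    ("clean", "w.hit"): "dirty",       # Destaging: updated in L1, not yet on disk
    ("dirty", "wb"): "clean",          # write-back (destage) completes
    ("exclusive", "stage"): "dirty",   # staged from disk, then destaged
    ("clean", "replace"): "invalid",
    ("dirty", "snoop"): "invalid",     # L2-driven invalidation after write-back
}

def step(state, event):
    # Events not listed (e.g. read hits) leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```

A table-driven machine like this makes the coherence protocol easy to audit: every legal transition is one visible entry.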
[State diagrams: lines move among the invalid, clean, dirty (Destaging), and exclusive states on r.miss, w.hit, w.miss, replace, snoop/wb, and {stage}/wb events.]

(a) NVM model 1    (b) NVM model 2

Figure 8. L1 cache line state diagram

6. VIRTUES OF THE TWO-LEVEL DISK CACHE MODEL IN RAID 5

The common virtue of the two-level disk cache models is that disk operations and cache operations are processed in parallel. The models are therefore very effective in environments where I/O requests are frequent or where there are many disk stage and destage operations due to read misses and writes. The virtues of the proposed two-level disk cache models for RAID 5 can be summarized as follows. First, high-performance, large-capacity disk caches can be provided inexpensively, without building the entire disk cache from high-priced semiconductor memory devices. Second, disk operations such as stage/destage and the cache operations of each level related to read/write hits can be processed in parallel. Third, the cache on each level can be used as part of the total disk cache capacity. Fourth, the cache line size, replacement algorithm, and mapping scheme can be chosen differently per level to increase the cache hit ratio. Fifth, by distributing and storing parity information across the RAID 5 disks, disk writes can be performed in parallel to enhance reliability, and the 'small write problem' can be alleviated by loading parities into the cache together with the data.


7. PERFORMANCE ANALYSIS

In NVM Model 1, if we suppose that the combined hit ratio of the first-level and second-level caches is the same as that of the conventional disk cache, the service time on a hit, Th, is:

    Th = oh + B/Xc + (1 - H1)(Tph + B/X12)    (1)

where Th is the service time on a hit (the time required to complete a request, excluding asynchronous operations, once I/O is requested of the disk controller); oh and om are the bus protocol time and cache overhead on a hit or a miss; B is the average block size of a read; Xc is the read speed from the cache; Tph and Tpm are the waiting times on a hit or a miss (when the cache or the disk is in use); H1 is the hit ratio of the first-level cache; and X12 is the data transfer speed between the caches.

The waiting time Tph varies with the frequency and duration of stage/destage operations and with the frequency of disk requests. The time Tsd required for a stage/destage is:

    Tsd = sk + LAT + RPS + B/Xd    (2)

where sk is the average disk seek time, LAT is the average rotational latency, RPS is the average rotational-position-sensing miss latency, and Xd is the read speed from the disk. The portion of (2) related to Tph is B/Xd; the RPS portion grows until a cache hit occurs during a stage/destage operation, which lengthens Tpm. If we suppose that disk requests have an exponential interarrival distribution and that the following request hits in the cache while a stage caused by a read/write miss is being carried out, the waiting time Tph is:

    Tph = ∫_a^b (1/β) e^(-x/β) (Tsd - x) dx,   where a = Tsd - B/Xd and b = Tsd    (3)

Since Formula (3) has too many variables to reduce to a simple closed form, we take the following values from an actual disk system [14] with a SCSI interface that stages one whole track, assume that the rotational-position-sensing miss latency can be neglected, and substitute them into Formula (3):

    Xc = 4 MB/s, Xd = 1.628 MB/s, oh = om = 2 ms, b = 512 bytes, sk = 12.5 ms,
    LAT = 0, RPS = 0, B/Xd = 14.74 ms, average request interval β = 100 ms    (4)

which yields Tph = 1 ms. Since destage was not considered in this hypothesis, the actual value is believed to be somewhat larger. If we suppose in Formula (1) that H1 is about 0.7 and the system uses SDRAM with a 120 ns cycle time, (1 - H1)·B/X12 equals 0.03 ms. Therefore the service time on a cache hit is reduced by 0.67 ms. If Tpm is also taken into consideration, the performance improvement should be even more pronounced.
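As a quick check of these numbers, a short script simply substitutes the constants above; B_over_X12 = 0.1 ms is inferred from the stated 0.03 ms = (1 - H1)·B/X12, and no new measurement is implied.

```python
# Worked numbers for formulas (1), (2), and (4).
sk, LAT, RPS = 12.5, 0.0, 0.0        # ms: avg seek, rotational latency, RPS miss
B_over_Xd = 14.74                     # ms: one-track transfer time from disk
Tsd = sk + LAT + RPS + B_over_Xd      # formula (2): stage/destage time
H1 = 0.7                              # assumed first-level hit ratio
Tph = 1.0                             # ms: waiting time on hit, from formula (3)
B_over_X12 = 0.1                      # ms: inter-cache transfer (120 ns SDRAM)
miss_penalty = (1 - H1) * (Tph + B_over_X12)   # last term of formula (1)
saving = Tph - miss_penalty           # vs. always waiting Tph in a single cache
print(f"Tsd = {Tsd:.2f} ms, saving on hit = {saving:.2f} ms")
```

Running this reproduces Tsd = 27.24 ms and the 0.67 ms per-hit saving claimed above.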



8. PERFORMANCE EVALUATION

The details of the simulation environment and the results of the simulation are presented in this section. In the simulation, the block size is fixed at 2 KB; altogether 20 groups of files are simulated, each group comprising 250 files. The rewrite probability of a file is set to 0.2, response intervals follow the uniform distribution U[100, 200] ms, and read rates follow U[40, 70]%. The hit ratio and service time are compared in order to evaluate how well the proposed two-level cache structure uses temporal and spatial locality and to estimate its efficiency; the response speed of the RAID system with the proposed cache structure is then analyzed theoretically.

1) HIT RATIO

The hit ratio, an important index of system performance, directly reflects the efficiency of the cache. Hence the hit ratio of the proposed two-level cache structure is compared with that of the conventional disk cache structure. From the simulation results in Table 1, it can be seen that the average hit ratio of the proposed two-level cache structure improves by 10-20%.

TABLE 1. Comparison of hit ratio between the proposed and conventional disk cache structures

    Structure      Hit ratio
    Conventional   0.85-0.89
    NVM Model 1    L1: 0.80-0.84, L2: 0.14-0.17
    NVM Model 2    L1: 0.11-0.16, L2: 0.88-0.92

The result shows that the hit ratio of the proposed two-level disk cache structure is much higher than that of the conventional disk cache structure; that is, the increased hit ratio decreases the service time spent communicating with the RAID controller and accessing the disk arrays, which improves the efficiency of the cache. The reason for the improvement is that the proposed two-level cache structure fully utilizes both temporal and spatial locality.

2) SERVICE TIME

Service time is also an important auxiliary index for a cache system. For a given read/write task, fewer accesses to the disk arrays are desired. When the block sizes of the first and second levels are very small, then even if the hit ratio is high thanks to temporal locality, the RAID controller must still access the disk arrays many times to complete the task, which is not ideal. So only the service time can



really show the efficiency and the speed of the cache.

TABLE 2. Comparison of service time (ms) between the proposed and conventional disk cache structures

    Read rate (%)   40     50     60     70
    Conventional    7.33   7.01   7.12   6.93
    NVM Model 1     4.72   5.43   6.01   6.33
    NVM Model 2     4.61   5.32   5.63   5.75

From the simulation results in Table 2, it can be seen that the service time of the proposed two-level cache structure improves by about 13-36%; the improvement is largest when cache occupancy is high.

9. CONCLUSION

In this paper, attempts are made to increase the cache hit ratio with a two-level cache exploiting temporal and spatial locality, and to enhance the reliability of the disk cache by using a non-volatile memory device for the L1 cache, so that data survives even when the power supply is cut off or a system error develops. Efforts are also made to alleviate the 'small write problem' of RAID 5, without additional disk accesses for parity on writes, by loading data and parity together into the write cache. Our future research includes formalizing the methods proposed in this paper into a mathematical theory, developing various analytical models, and conducting comparative research through simulation using diverse workloads.

REFERENCES

[1] Paul Massiglia, The RAID Book: A Storage System Technology Handbook, 6th edition, RAID Advisory Board, 1997.
[2] Jai Menon and Jim Cortney, "The architecture of a fault-tolerant cached RAID controller," Proceedings of the 20th Annual International Symposium on Computer Architecture, pp. 76-86, 1993.
[3] Gao Jun, Wu Zhiming, Jiang Zhiping, "A key technology of design in RAID system," Computer Peripherals Review, Vol. 24, No. 1, pp. 5-8, 2000.
[4] V. Milutinovic, M. Tomasevic, B. Markovic, M. Tremblay, "The Split Temporal/Spatial Cache: Initial Performance Analysis," SCIzzL-5, Mar. 1996.
[5] A. Gonzalez, C. Aliagas, M. Valero, "A data cache with multiple caching strategies tuned


to different types of locality," Supercomputing '95, pp. 338-347, 1995.
[6] N. P. Jouppi, "Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers," Proc. 17th ISCA, pp. 364-373, May 1990.
[7] Kil-Whan Lee, Gi-Ho Park, Tack-Don Han, Shin-Dug Kim, "An Effective Selection Mechanism to Exploit Spatial Locality in Dual Data Cache," International Conference on Computers, Communications and Systems '98, pp. 31-37, Nov. 1998.
[8] G. Kurpanek et al., "PA-7200: A PA-RISC Processor with Integrated High Performance MP Bus Interface," COMPCON Digest of Papers, pp. 375-382, Feb. 1994.
[9] Jude A. Rivers, Edward S. Davidson, "Reducing Conflicts in Direct-Mapped Caches with a Temporality-Based Design," Proceedings of the 1996 International Conference on Parallel Processing, Vol. 1, pp. 151-162, Aug. 1996.
[10] Dai Mei-e, "Analysis for organization fashions and character of cache in high property computer system," Microelectronics & Computer, No. 5, pp. 15-18, 2000.
[11] Jung-Hoon Lee, Jang-Soo Lee, Shin-Dug Kim, "A new cache architecture based on temporal and spatial locality," Journal of Systems Architecture, pp. 1451-1467, 2000.
[12] Chen Yun, Yang Genke, Wu Zhiming, "The Application of Two-Level Cache in RAID System," Proceedings of the 4th World Congress on Intelligent Control and Automation, June 2002.
[13] A. Varma, Q. Jacobson, "Destage Algorithms for Disk Arrays with Nonvolatile Caches," IEEE Transactions on Computers, Feb. 1998.
[14] Western Digital, WD-SC8320 Technical Reference Manual, WD0097S8/89, ver. 1.0.



				