

									OS comps

10 – 11:30am, 4140 CSE bldg.

October 1, 2007

Answer each of the four numbered questions in a separate blue book.

1. These five questions concern scheduling. For each, answer the question and provide an explanation.

   a. Given 3 processes X = (0,4), Y = (0,2), and Z = (3,1), where (a, s) specifies a process's arrival and service times, what is the average turnaround time when using shortest remaining time?

   b. Given the same 3 processes X = (0,4), Y = (0,2), and Z = (3,1), and assuming preemption, what is the largest the average turnaround time can ever be?

   c. Multi-level Feedback-Queue Scheduling approximates which of the following scheduling policies: FIFO, Priority, Round Robin, Shortest Process Next?

   d. If a set of processes can be scheduled to meet all deadlines using Earliest Deadline First, can all deadlines also be met using Rate Monotonic Scheduling?

   e. When using the proportional-share scheduling algorithm, suppose process X is to receive 70% of the CPU and process Y is to receive 30%, and thus far X has received 50% and Y has received 20%. Which process should be given the CPU for the next quantum?

2. As you have seen in your readings, file-system performance is all about avoiding disk seeks – either by locating data blocks close together on the disk while reading and writing large numbers of blocks at a time, or by avoiding writing to disk altogether. Answer the following questions about the Log-Structured File System (LFS) and the Soft Updates system:

   a. Recall that LFS writes data a segment at a time, requiring an entire segment's worth of free space. Hence, LFS required a cleaner that periodically compacted the live data in old segments. Describe the tradeoff in deciding how many segments to clean at once. That is, what are the pros and cons of cleaning many segments at the same time? Few segments?

   b.
Upon experimentation, Rosenblum discovered that a 'hot-and-cold' file access pattern caused the greedy cleaner to clean segments at a higher average utilization than a uniform access pattern did. Why did the authors claim this occurred? Describe the 'cost-benefit' cleaning policy the authors devised to address the issue.

   c. Unlike LFS, Soft Updates continues to use the standard FFS on-disk structures, instead altering the rules for when data blocks are written to disk. What invariants does Soft Updates enforce with respect to the metadata it writes to disk?

   d. When evaluating Soft Updates, the authors discovered that its performance decreased as the size of the benchmark increased. That is, when the benchmarks operated on larger data sets or ran for longer periods of time, the throughput of Soft Updates relative to FFS often decreased considerably compared to smaller data sets over shorter periods of time. List two causes of this effect.

3. Lazy evaluation is an optimization technique frequently used in the implementation of operating system services.

   a. Explain what lazy evaluation is in general and when it is an optimization.

   b. For each of the following systems, describe how lazy evaluation is used as an optimization and under what particular circumstances it is effective: Mach VM, LRPC, and FFS.


4. Here is Peterson's algorithm for two processes:

   int in0 = 0, in1 = 0;
   int turn = 0;

   /* process 0 */
   while (1) {
       in0 = 1;
       turn = 1;
       while (in1 && turn == 1)
           ;
       /* critical section */
       in0 = 0;
   }

   /* process 1 */
   while (1) {
       in1 = 1;
       turn = 0;
       while (in0 && turn == 0)
           ;
       /* critical section */
       in1 = 0;
   }

   a. You are explaining Peterson's algorithm to a friend, who observes: "This is crazy. The variable turn indicates which process enters the critical section when both wish to enter. Why should process 0 want process 1 to enter? Process 0 should want to enter first! So, process 0 should set turn to 0 (and process 1 should set turn to 1)." Is this a good idea? Explain.

   b. Some multiprocessors do not have atomic memory operations. For example, many architectures locally cache writes until a memory barrier instruction is executed. This works as follows: suppose (shared-memory) address 100 contains the value 13. If processor A writes 25 to address 100, the value will be cached at A until processor A issues a memory barrier instruction. The memory barrier instruction forces the cached value back to shared memory. Peterson's algorithm, as written above, has no memory barrier instructions. Does it correctly implement mutual exclusion on a multiprocessor without them? Explain. If it does not, insert the fewest memory barrier instructions that will make it a correct implementation, and explain why your solution is correct and minimal. You may assume that the operating system scheduler executes a memory barrier instruction immediately before preempting a process from a processor.

