Caching

• Andrew Security
• Andrew Scale and Performance
• Sprite Performance
• Andrew File System
• Network File System
     Andrew File System

• AFS, AFS2, Coda
• 1983 to present; M. Satyanarayanan ('Satya') its champion
• Ideas spread to other systems, including NT
        Security Terms

• Release, Modification, Denial of service
• Mutual suspicion, Modification,
  Conservation, Confinement
• Identification, Authentication,
  Privacy, Nonrepudiation
       System Components

• Secure servers, including the Authentication Server
• Virtue: protected workstations
• Joined through the Virtual File System
        Andrew Encryption

•   DES - Private Keys
•   E[msg,key], D[msg,key]
•   Local copy of secret key
•   Exchange of keys doesn’t scale
    – Web of trust extends to lots of servers
    – Pairwise keys unwieldy
   Andrew Authentication

• Username sent in the clear
• Random number exchange
  – E[X,key] sent to server (Vice)
  – D[E[X,key],key] = X
  – E[X+1,key] to client (Venus)
• BIND exchanges session keys
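The challenge-response exchange above can be sketched in Python, with a toy XOR cipher standing in for DES (the cipher and all function names here are illustrative assumptions, not the actual BIND implementation):

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for DES: XOR with a repeating key (encrypt == decrypt)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def client_challenge(key: bytes):
    """Venus picks a random number X and sends E[X, key]."""
    x = int.from_bytes(os.urandom(4), "big")
    return x, xor_cipher(x.to_bytes(8, "big"), key)

def server_respond(challenge: bytes, key: bytes) -> bytes:
    """Vice computes D[E[X, key], key] = X and replies with E[X+1, key]."""
    x = int.from_bytes(xor_cipher(challenge, key), "big")
    return xor_cipher((x + 1).to_bytes(8, "big"), key)

def client_verify(x: int, response: bytes, key: bytes) -> bool:
    """Venus checks the reply decrypts to X+1, proving Vice holds the key."""
    return int.from_bytes(xor_cipher(response, key), "big") == x + 1

key = b"sharedsecret"
x, challenge = client_challenge(key)
assert client_verify(x, server_respond(challenge, key), key)
```

Only a holder of the shared secret can turn E[X, key] into E[X+1, key], so the reply authenticates the server without the key ever crossing the wire.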
   Authentication Tokens

• Description of the user
• ID, valid/invalid timestamps
• Used to coordinate what should be
  available from Vice (server) to
  Virtue (client)
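A minimal sketch of such a token, assuming only the fields named on this slide (an ID and a validity window); the class layout is hypothetical, not the AFS wire format:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    """Illustrative token shape: user identity plus a validity window."""
    user_id: str
    valid_from: float    # before this timestamp the token is not yet valid
    valid_until: float   # after this timestamp the token has expired

    def is_valid(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        return self.valid_from <= now <= self.valid_until

tok = Token("satya", valid_from=0.0, valid_until=time.time() + 3600)
assert tok.is_valid()
```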
          Access Control

• Hierarchical groups
    – Project/shared accounts discouraged
•   Positive/Negative Rights
•   U(+) - U(-)
•   VMS linear list & rights IDs
•   Prolog engine in NT
•   Netware has better admin feedback
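The U(+) - U(-) rule can be sketched as set arithmetic over the user's principals (user plus groups), with denials winning over grants; the function and dictionary shapes here are illustrative assumptions:

```python
def effective_rights(principals, positive, negative):
    """U(+) - U(-): union all grants for the user's principals,
    then subtract the union of all denials (a denial always wins)."""
    granted = set().union(*(positive.get(p, set()) for p in principals))
    denied = set().union(*(negative.get(p, set()) for p in principals))
    return granted - denied

positive = {"alice": {"read", "write"}, "staff": {"read", "lookup"}}
negative = {"interns": {"write"}}
# alice is in staff and interns: write is granted but then denied.
assert effective_rights(["alice", "staff", "interns"],
                        positive, negative) == {"read", "lookup"}
```

Negative rights make it cheap to carve one user out of a large group without rebuilding the group's membership.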
        Resource Usage

• Network not an issue
  – Distributed DoS considered 'hard'
• Server High Water Mark
  – Violations by SU programs tolerated
  – Daemon processes given 'stem' account
• Workstations not an issue
  – User files in Vice
    Other Security Issues

• XOR for session encryption
• PC support via special server
• Diskless workstations avoided

• Cells (NT Domains)
• Kerberos
• Protection Server for user
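Why XOR-based session encryption is cheap but risky can be shown in a few lines: the same operation both encrypts and decrypts, and reusing a keystream leaks the XOR of two plaintexts without any key (a toy sketch, not the actual AFS session cipher):

```python
def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """XOR the data with a repeating keystream; applying it twice
    returns the original, so one function serves both directions."""
    return bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(data))

key = b"sessionkey"
c1 = xor_stream(b"transfer $100", key)
c2 = xor_stream(b"transfer $999", key)

# Decryption is the same operation as encryption:
assert xor_stream(c1, key) == b"transfer $100"

# Keystream reuse: c1 XOR c2 == p1 XOR p2, recoverable with no key at all.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in
                     zip(b"transfer $100", b"transfer $999"))
```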
      Sprite Components

• Client: client cache, backed by local disk
• Server: server cache, backed by server disk
          Sprite Design

• Cache in client and server RAM
• Kernel file system modification
  – Affects system/paging and user files
• Cache size negotiated with VM
• Delayed 30s write-back
  – Called ‘laissez-faire’ by Andrew
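The 30-second delayed write-back policy can be sketched as a cache that flushes only blocks older than a threshold; class and method names here are hypothetical, not Sprite's interfaces:

```python
class DelayedWriteCache:
    """Sketch of Sprite-style delayed write-back: dirty blocks sit in
    the cache and are flushed only once older than flush_after seconds."""
    def __init__(self, flush_after=30.0):
        self.flush_after = flush_after
        self.dirty = {}   # block id -> (data, time when dirtied)
        self.disk = {}    # stand-in for the server/local disk

    def write(self, block, data, now):
        self.dirty[block] = (data, now)

    def sync(self, now):
        """Flush every block dirtied at least flush_after seconds ago."""
        for block, (data, t) in list(self.dirty.items()):
            if now - t >= self.flush_after:
                self.disk[block] = data
                del self.dirty[block]

cache = DelayedWriteCache()
cache.write("b1", b"hello", now=0.0)
cache.sync(now=10.0)    # too young: stays dirty in the cache
assert "b1" not in cache.disk
cache.sync(now=35.0)    # past 30 s: written back
assert cache.disk["b1"] == b"hello"
```

The payoff is that short-lived files (compiler temporaries, for example) are often deleted before the 30 seconds elapse and never touch the disk or network at all.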
       NFS Comparison

• Presumed optimized
• RPC access semantics
  – NFS uses UDP, others TCP
• Sprite targeting 100+ nodes
• Andrew targeting 5,000+ nodes
      Andrew Scale and Performance
• Dedicated server process per client
• Directory redirection for content
• Whole file copy in cache
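Whole-file caching with write-on-close can be sketched as follows, with a plain dict standing in for Vice's store (all class and method names here are illustrative, not AFS interfaces):

```python
class Venus:
    """Sketch of AFS-style whole-file caching: open fetches the entire
    file, reads and writes hit the local copy, and close ships the
    whole file back to the server (write-on-close)."""
    def __init__(self, server):
        self.server = server     # dict standing in for Vice's store
        self.cache = {}          # path -> local whole-file copy

    def open(self, path):
        if path not in self.cache:            # fetch the whole file once
            self.cache[path] = self.server.get(path, b"")
        return path

    def read(self, path):
        return self.cache[path]               # served entirely locally

    def write(self, path, data):
        self.cache[path] = data               # local only until close

    def close(self, path):
        self.server[path] = self.cache[path]  # whole file back to Vice

vice = {"/afs/readme": b"v1"}
venus = Venus(vice)
f = venus.open("/afs/readme")
venus.write(f, b"v2")
assert vice["/afs/readme"] == b"v1"   # server unchanged until close
venus.close(f)
assert vice["/afs/readme"] == b"v2"
```

Between open and close every operation is local, which is what lets one server carry thousands of clients.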
      Problems already…

• Context switching in server
• TCP connection overhead
  – Session done by kernel
• Painful to move parts of VFS to
  other servers
  – Volume abstraction fixed this later
       Cache Management

•   Andrew: write on close; no concurrent writes; versioning; user level
•   Sprite: delayed write; cache disabled on concurrent write-sharing; versioning; kernel level
       Function Distribution

•   TestAuth - validate cache   61.7%
•   GetFileStat - file status   26.8%
•   Fetch - server to client    4.0%
•   Store - client to server    2.1%
    Performance Improvements

•   Virtue caches directory
•   Local copy assumed correct
•   File IDs, not names, exchanged
•   Lightweight Processes (LWP)
    – Context data record on server
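The file-ID optimization can be sketched as a client that resolves pathnames against its locally cached directory, so only compact fids cross the wire (class and field names are assumptions for illustration):

```python
class FidClient:
    """Sketch of the fid optimization: Virtue caches the directory and
    resolves pathnames locally, sending only file IDs to the server."""
    def __init__(self, dir_cache, server_fetch):
        self.dir_cache = dir_cache        # path -> fid, cached from the server
        self.server_fetch = server_fetch  # callable taking a fid

    def fetch(self, path):
        fid = self.dir_cache[path]        # local lookup, no server round trip
        return self.server_fetch(fid)     # the server only ever sees the fid

store = {7: b"data"}
client = FidClient({"/afs/f": 7}, store.__getitem__)
assert client.fetch("/afs/f") == b"data"
```

Moving pathname resolution to the client removes a per-component server lookup from every file access.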
Andrew Benchmarks (chart)
Sprite Throughput (chart)
Sprite Benchmarks (charts)
Cache Impact - Client (chart)
Cache Impact - Server (chart)
Cache Impact - Net (chart)
   General Considerations

• Andrew: 17-20% slower than local; server bottleneck;
  scan for files and read almost all local
• Sprite: 6-8x faster vs no cache; server cache extends the
  local cache; remote paging as fast as local disk; 5x users/server
