Managing SQL Server
Transactions & Locks
Shaun Cassells
November 12th, 2003
Managing SQL Server Locking
 • SQL Server 2000 uses locking to ensure
   transactional integrity and database consistency.
 • Locking prevents users from reading data being
   changed by other users and prevents multiple users
   from changing the same data at the same time.
 • If locking is not used, data within the database might
   become logically incorrect, and queries executed
   against that data might produce unexpected results.
 • Although SQL Server enforces locking automatically,
   you can design applications that are more efficient by
   understanding and customizing locking in your
   applications.
        Why Locks #1
• In this scenario Mike modifies a stock
  level by subtracting 1,000 from it,
  leaving 100 items. Katy reads the stock
  level and sees that there are only 100
  items in stock. Immediately after Katy
  has read this value and acted upon it,
  Mike's transaction fails and is rolled
  back, returning the stock level to its
  original value of 1,100.
Scenario #1
             Scenario #2
• In this scenario Mike's transaction sums a list of
  debts in a table and checks the result against a total
  debt value held elsewhere in the database. While
  Mike's transaction is summing the values in the list,
  Katy's transaction inserts a new row into the debt
  table after Mike's transaction has passed by and
  updates the total debt value. When Mike finishes
  summing the list and compares the calculated sum
  with the total debt value, it reports a discrepancy,
  where, in fact, there is no discrepancy at all. This is
  called the phantom insert phenomenon.
Phantom Insert
SQL Server 2000 Lock Modes
Lock Mode       Description
SHARED (S)      Used for read-only operations, such as reading data
                with a SELECT statement.
UPDATE (U)      Placed on a resource that may be updated; prevents the
                common deadlock that occurs when two transactions read
                the same resource and later try to convert their locks
                to exclusive.
EXCLUSIVE (X)   Used for data-modification operations; prevents all
                other transactions from performing UPDATE, DELETE, or
                INSERT on the resource.
SQL Server 2000 Lock Modes
INTENT            Used to establish a lock hierarchy. The intent lock
                  types are intent shared (IS), intent exclusive (IX),
                  and shared with intent exclusive (SIX). An intent lock
                  indicates that SQL Server wants to acquire a shared or
                  exclusive lock on a resource lower in the hierarchy
                  (for example, a page or row within a table); at the
                  very least, the intent lock prevents other transactions
                  from acquiring an exclusive lock on the higher-level
                  resource.
SCHEMA            Used when a DDL operation is performed.
BULK UPDATE (BU)  Used when data is being bulk copied into a table.
       Lock Compatibility
Mode of requested lock         Compatible with currently granted mode?
                               IS    S     U     IX    SIX   X
Intent shared (IS)             Yes   Yes   Yes   Yes   Yes   No
Shared (S)                     Yes   Yes   Yes   No    No    No
Update (U)                     Yes   Yes   No    No    No    No
Intent exclusive (IX)          Yes   No    No    Yes   No    No
Shared with intent
  exclusive (SIX)              Yes   No    No    No    No    No
Exclusive (X)                  No    No    No    No    No    No
Types of Concurrency Problems
 • If locking is not available and several users
   access a database concurrently, problems
   might occur if their transactions use the same
   data at the same time.
 • Concurrency problems can include any of the
   following situations:

   –   Lost or buried updates
   –   Uncommitted dependency (dirty read)
   –   Inconsistent analysis (non-repeatable read)
   –   Phantom reads
               Lost Updates
• Lost updates occur when two or more transactions select
  the same row and then update the row based on the value
  originally selected.
   – Each transaction is unaware of other transactions.
   – The last update overwrites updates made by the other transactions,
     which results in lost data.

• For example, two editors make an electronic copy of the
  same document.
   – Each editor changes the copy independently and then saves the
     changed copy, thereby overwriting the original document.
   – The editor who saves the changed copy last overwrites the changes
     made by the first editor.
   – This problem could be avoided if the second editor could not make
     changes until the first editor had finished.
    Uncommitted Dependency
           (Dirty Read)
• Uncommitted dependency occurs when a second
  transaction selects a row that is already being updated by a
  transaction.
   – The second transaction is reading data that has not been committed
     yet and might be changed by the transaction updating the row.

• For example, an editor is making changes to an electronic
  document.
   – During the changes, a second editor takes a copy of the document
     that includes all of the changes made so far and distributes the
     document to the intended audience.
   – The first editor then decides the changes made so far are wrong and
     removes the edits and saves the document.
   – The distributed document contains edits that no longer exist and
     should be treated as if they never existed.
   – This problem could be avoided if no one could read the changed
     document until the first editor determined that the changes were
     final.
         Inconsistent Analysis
           (Nonrepeatable Read)
• Inconsistent analysis occurs when a second transaction
  accesses the same row several times and reads different
  data each time.
   – Inconsistent analysis is similar to uncommitted dependency in that
     another transaction is changing the data that a second transaction is
     reading.
   – In inconsistent analysis, however, the data read by the second
     transaction was committed by the transaction that made the change.
   – Also, inconsistent analysis involves multiple reads (two or more) of
     the same row and each time the information is changed by another
     transaction (hence the term non-repeatable read).

• For example, an editor reads the same document twice, but
  between each reading, the writer rewrites the document.
   – When the editor reads the document for the second time, it has
     changed. The original read was not repeatable.
   – This problem could be avoided if the editor could read the document
     only after the writer has finished writing it.
              Phantom Reads
•   Phantom reads occur when an insert or delete action is performed
    against a row that belongs to a range of rows being read by a
    transaction.
     – The transaction's first read of the range of rows shows a row that no longer
       exists in the second or succeeding read as a result of a deletion by a
       different transaction.
     – Similarly, as the result of an insert by a different transaction, the
       transaction's second or succeeding read shows a row that did not exist in
       the original read.

•   For example, an editor makes changes to a document submitted by a
    writer, but when the changes are incorporated into the master copy of
    the document by the production department, they find that new,
    unedited material has been added to the document by the author.
     – This problem could be avoided if no one could add new material to the
       document until the editor and production department finish working with the
       original document.
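• As a hedged two-session sketch of this phenomenon (the debts table and its
  columns are hypothetical): under REPEATABLE READ the second read sees the
  newly inserted phantom row, whereas SERIALIZABLE would block the insert
  until the first transaction completes.

-- Session 1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
SELECT SUM(amount) FROM debts WHERE customer_no = 1000  -- first read

-- Session 2 (runs while session 1's transaction is still open)
INSERT INTO debts (customer_no, amount) VALUES (1000, 250)

-- Session 1
SELECT SUM(amount) FROM debts WHERE customer_no = 1000  -- phantom row included
COMMIT TRANSACTION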
  Optimistic and Pessimistic
    Concurrency - Review
• SQL Server 2000 offers both optimistic and pessimistic
  concurrency control. Optimistic concurrency control is
  available through cursors; pessimistic concurrency control is
  the default for SQL Server.
• Optimistic Concurrency
    – Optimistic concurrency control works on the assumption that
      resource conflicts between multiple users are unlikely (but not
      impossible) and enables transactions to execute without locking
      any resources. Only when attempting to change data are resources
      checked to determine whether any conflicts have occurred. If a
      conflict occurs, the application must read the data and attempt the
      change again.
• Pessimistic Concurrency
    – Pessimistic concurrency control locks resources as they are
      required, for the duration of a transaction. Unless deadlocks occur,
      a transaction is assured of successful completion.
        Isolation Levels
• When locking is used as the concurrency
  control mechanism, it solves concurrency
  problems.
• This feature enables all transactions to run in
  complete isolation from one another, although
  there can be more than one transaction
  running at any time.
• Serializability is achieved when running a set of
  concurrent transactions produces the same database state
  as running those transactions one after another (serially).
  SQL-92 Isolation Levels
• Although serialization is important to transactions to
  ensure that the data in the database is correct at all
  times, many transactions do not always require full
  isolation.
   – For example, several writers are working on different
     chapters of the same book.
       • New chapters can be submitted to the project at any time;
         however, after a chapter has been edited, a writer cannot make
         any changes to the chapter without the editor's approval.
       • This way, the editor can be assured of the accuracy of the book
         project at any point in time, despite the arrival of new, unedited
         chapters.
       • The editor can see both previously edited chapters and recently
         submitted chapters.
  SQL-92 Isolation Levels
• The level at which a transaction is prepared to accept
  inconsistent data is termed the isolation level.
• The isolation level is the degree to which one
  transaction must be isolated from other transactions.
   – A lower isolation level increases concurrency, but at the
     expense of data correctness.
   – Conversely, a higher isolation level ensures that data is
     correct but can negatively affect concurrency.
• The isolation level required by an application
  determines the locking behavior that SQL Server
  uses.
    SQL-92 Isolation Levels
• SQL-92 defines the following isolation levels, all of
  which are supported by SQL Server:
   – Read uncommitted (the lowest level, at which
     transactions are isolated only enough to ensure that
     physically corrupt data is not read)
   – Read committed (SQL Server default level)
   – Repeatable read
   – Serializable (the highest level, at which transactions are
     completely isolated from one another)
• If transactions are run at an isolation level of
  serializable, any concurrent, overlapping
  transactions are guaranteed to be serializable.
  Isolation levels enable different
          types of behavior
Isolation Level     Dirty Read   Non-Repeatable Read   Phantom Read
Read uncommitted    Yes          Yes                   Yes
Read committed      No           Yes                   Yes
Repeatable read     No           No                    Yes
Serializable        No           No                    No
  SQL-92 Isolation Levels
• Transactions must be run at an isolation level
  of repeatable read or higher to prevent lost
  updates that can occur when two transactions
  each retrieve the same row and update the
  row later based on the originally retrieved
  values.
• If the two transactions update rows by using a
  single UPDATE statement and do not base
  the update on the previously retrieved values,
  lost updates cannot occur at the default
  isolation level of read committed.
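• As a hedged sketch of the difference (the stock table and its columns are
  hypothetical):

-- Pattern A: read a value, then update based on the value read earlier.
-- At READ COMMITTED another transaction can change the row between the
-- SELECT and the UPDATE, so one of the two updates can be lost.
DECLARE @qty int
SELECT @qty = quantity FROM stock WHERE item_no = 42
UPDATE stock SET quantity = @qty - 100 WHERE item_no = 42

-- Pattern B: a single UPDATE that does not depend on a previously read
-- value; the exclusive lock taken by the UPDATE itself prevents a lost
-- update even at the default READ COMMITTED level.
UPDATE stock SET quantity = quantity - 100 WHERE item_no = 42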
       Customizing Locking
• Although SQL Server implements locking
  automatically, it is possible to customize this feature
  in applications by performing the following tasks:
   – Handling deadlocks and setting the deadlock priority
   – Handling timeouts and setting the lock timeout duration
   – Setting the transaction isolation level
   – Using table-level locking hints with the SELECT, INSERT,
     UPDATE, and DELETE statements
   – Configuring the locking granularity for an index
    Managing Deadlocks
• A deadlock occurs when there is a cyclic
  dependency between two or more threads for
  a set of resources.
• Deadlock can occur on any system that has
  multiple threads, not just on a relational
  database management system.
• The waiting thread (query) is said to have a
  dependency on the owning thread for that
  particular resource.
  – It is waiting for the executing thread to finish
    Managing Deadlocks
• If the owning thread wants to acquire another
  resource that is currently owned by the
  waiting thread, the situation becomes a
  deadlock.
• Both threads cannot release the resources
  that they own until their transactions are
  committed or rolled back, and their
  transactions cannot be committed or rolled
  back because they are waiting on resources
  that the other owns.
Managing Deadlocks
DeadLock
    Managing Deadlocks
• Deadlocking is often confused with normal
  blocking.
  – When one transaction has a lock on a resource
    that another transaction wants, the second
    transaction waits for the lock to be released.
• By default, SQL Server transactions do not
  time out (unless LOCK_TIMEOUT is set).
  – The second transaction is blocked, not
    deadlocked.
 SQL Server automatic fix
• SQL Server chooses one of the deadlocked
  users as a victim and issues a rollback for its
  transaction. The victim's connection receives an
  error message similar to the following:
Server: Msg 1205, Level 13, State 1, Line 1
Your transaction (Process ID 52) was deadlocked on {lock}
  resources with another process and has been chosen as the
  deadlock victim. Rerun your transaction.
        Deadlock Priority
• A connection can set its deadlock priority such that,
  in the event of it being involved in a deadlock, it will
  be chosen as the victim, as follows:
SET DEADLOCK_PRIORITY LOW
• To return to the default deadlock handling
  mechanism, use the following code:
SET DEADLOCK_PRIORITY NORMAL
• Generally, the transaction involved in the deadlock
  that has accumulated the least amount of CPU time
  is chosen as the victim.
        Minimizing Deadlocks
• Although deadlocks cannot be avoided completely, the
  number of deadlocks can be minimized.
• Minimizing deadlocks can increase transaction throughput
  and reduce system overhead because fewer transactions
  are rolled back, undoing all of the work performed by the
  transaction.
• In addition, fewer transactions are resubmitted by
  applications because they were rolled back when they were
  deadlocked.
• You should adhere to the following guidelines to help
  minimize deadlocks:
    –   Access objects in the same order (see the sketch after this list).
    –   Avoid user interaction during transactions.
    –   Keep transactions short and in one batch.
    –   Use a low isolation level.
    –   Use bound connections.
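• A minimal sketch of the first guideline, with hypothetical Accounts and
  Orders tables: if every transaction touches the tables in the same order,
  a second transaction simply blocks on the first table instead of creating
  a cycle of waits.

-- Every transaction updates Accounts first, then Orders.
-- A second connection running the same pattern waits on Accounts and
-- never holds a lock that this connection is waiting for.
BEGIN TRANSACTION
UPDATE Accounts SET balance = balance - 50 WHERE account_no = 1
UPDATE Orders SET status = 'PAID' WHERE order_no = 10
COMMIT TRANSACTION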
     Bound Connections
• Bound connections enable two or more
  connections to share the same transaction
  and locks.
• Bound connections can work on the same
  data without lock conflicts.
• Bound connections can be created from
  multiple connections within the same
  application or from multiple applications with
  separate connections.
• Bound connections also make coordinating
  actions across multiple connections easier.
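• As a hedged sketch, two connections can be bound with the sp_getbindtoken
  and sp_bindsession system stored procedures; the token value is passed
  between the connections by the application.

-- Connection 1: start a transaction and obtain a bind token.
DECLARE @token varchar(255)
BEGIN TRANSACTION
EXEC sp_getbindtoken @token OUTPUT
SELECT @token  -- hand this value to the second connection

-- Connection 2: bind to the first connection's transaction space,
-- then work under the same transaction and locks.
EXEC sp_bindsession '<token value from connection 1>'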
  Customizing Timeouts
• If a deadlock occurs, SQL Server
  terminates one of the participating
  transactions (with no timeout involved).
• If there is no deadlock, the transaction
  requesting the lock is blocked until the other
  transaction releases the lock.
• By default, there is no mandatory timeout
  period and no way to test whether a resource
  is locked before locking it, except to attempt
  to access the data (and potentially get
  blocked indefinitely).
  Customizing Timeouts
• The sp_who system stored procedure can be
  used to determine whether a process is being
  blocked and who is blocking it.
• The LOCK_TIMEOUT setting enables an
  application to set a maximum time that a
  statement will wait on a blocked resource.
• When a statement has waited longer than the
  LOCK_TIMEOUT setting, the blocked
  statement is canceled automatically, and
  error message 1222, 'Lock request time-out
  period exceeded,' is returned to the
  application.
   Customizing Timeouts
• However, any transaction containing the statement is
  not rolled back or canceled by SQL Server.
   – Therefore, the application must have an error handler that
     can trap error message 1222.
• If an application does not trap the error, it can
  proceed unaware that an individual statement within
  a transaction has been canceled, and errors can
  occur because statements later in the transaction
  might depend on the statement that was never
  executed.
• Implementing an error handler that traps error
  message 1222 enables an application to handle the
  timeout situation and take remedial action
   – for example, automatically resubmitting the statement that
     was blocked or rolling back the entire transaction
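• A minimal sketch of such a handler (SQL Server 2000 has no TRY...CATCH, so
  @@ERROR is checked immediately after the statement; the Accounts table is
  hypothetical):

SET LOCK_TIMEOUT 5000  -- wait at most 5 seconds for any lock

BEGIN TRANSACTION
UPDATE Accounts SET balance = balance - 50 WHERE account_no = 1

IF @@ERROR = 1222
BEGIN
   -- The statement timed out waiting for a lock. SQL Server has not
   -- rolled the transaction back, so the application decides here:
   -- roll back the whole transaction (or resubmit the statement).
   ROLLBACK TRANSACTION
   RETURN
END

COMMIT TRANSACTION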
        Lock Timeout
• You can use the SET LOCK_TIMEOUT
  statement to specify the number of
  milliseconds that a statement will wait
  for a lock to be released, as shown in
  the following example:

SET LOCK_TIMEOUT 1800
SELECT @@LOCK_TIMEOUT
   Customizing Timeouts
• To determine the current lock timeout setting
  (in milliseconds) for the current session, you
  can use the @@LOCK_TIMEOUT function,
  as shown in the following example:
SELECT @@LOCK_TIMEOUT
• A value of:
  – -1 = no timeout has been set (the default); statements wait indefinitely
  – 0 = do not wait at all; return error 1222 as soon as a lock is encountered
  – Greater than 0 = user-specified timeout in milliseconds
Setting Transaction Level
• In the following example, the transaction isolation
  level is being set to SERIALIZABLE, which ensures
  that no phantom rows can be inserted into the
  Authors table by concurrent transactions:
USE Pubs
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
• The isolation level can be overridden, if necessary,
  for individual SELECT statements by specifying a
  table-level locking hint.
• Specifying a table-level locking hint does not affect
  other statements in the session.
• You should use table-level locking hints to change
  the default locking behavior only if absolutely
  necessary.
             Transaction Level
• It can be seen that only the serializable isolation level prevents all
  these phenomena from occurring.
• By default, SQL Server runs at transaction isolation level read
  committed.
• The transaction isolation level is set for the connection with the
  following syntax:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

• The DBCC utility with the USEROPTIONS parameter can be used to
  check the current isolation level of the connection, as follows:
DBCC USEROPTIONS
Implementing Table-Level Locking
             Hints
  • A range of table-level locking hints can be specified
    along with the SELECT, INSERT, UPDATE, and
    DELETE statements in order to direct SQL Server
    2000 to the type of locks to be used.
  • Use table-level locking hints for finer control of the
    types of locks acquired on an object.
  • Locking hints override the current transaction
    isolation level for the session.
  • Note The SQL Server query optimizer automatically
    makes the correct determination.
     – You should use table-level locking hints to change the
       default locking behavior only when necessary.
     – Disallowing a locking level can affect concurrency adversely.
          Locking Hints
•   HOLDLOCK
•   NOLOCK
•   PAGLOCK
•   READCOMMITTED
•   READPAST
•   READUNCOMMITTED
•   REPEATABLEREAD
•   ROWLOCK
•   SERIALIZABLE
•   TABLOCK
•   TABLOCKX
•   UPDLOCK
•   XLOCK
HOLDLOCK & SERIALIZABLE
 • Hold a shared lock until completion of
   the transaction instead of releasing the
   lock as soon as the required table, row,
   or data page is no longer required.
 • HOLDLOCK is equivalent to SERIALIZABLE and is retained
   only for backward compatibility.
NOLOCK & READUNCOMMITTED
• The NOLOCK hint allows a dirty read to take place; that is,
  a transaction can read the uncommitted changes made
  by another transaction.
• The exclusive locks of other transactions are not honored,
  and the statement using this hint will not take out shared
  locks.
• This is equivalent to the READUNCOMMITTED hint.
• Data consistency will be provided to the level experienced
  by transactions running at transaction isolation level
  READ UNCOMMITTED.
• Using the NOLOCK keyword may increase
  performance, since lock contention may decrease, but
  this will be at the risk of lower consistency.
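• A minimal sketch (the table and column names are hypothetical):

-- Read without taking shared locks and without honoring the exclusive
-- locks of other transactions; the result may contain dirty data.
SELECT account_no, balance
FROM accounts WITH (NOLOCK)
WHERE branch_no = 1000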
     PAGLOCK & ROWLOCK
• The PAGLOCK hint forces shared page locks to be taken where
  otherwise SQL Server may have used a table or row lock. For
  example, consider the following statement:
SELECT balance FROM accounts WITH (REPEATABLEREAD, PAGLOCK)
• If there is no appropriate index, the query optimizer will choose a table
  scan as the strategy used to execute the query.
• Depending on the number of rows involved, the lock manager might
  otherwise take out row locks or perhaps a table lock: because the
  REPEATABLEREAD hint forces the shared locks to be held until the end
  of the transaction, a single table lock can be far more efficient.
• The PAGLOCK hint will ensure that the lock manager will use page
  locking instead of table locking or row locking.
• This hint does not only apply to shared locks. Exclusive page locks will
  also be forced if, say, an UPDATE statement rather than a SELECT
  statement was using the hint.
     READCOMMITTED
• The READCOMMITTED hint ensures that the
  statement behaves in the same way as if the
  connection were set to transaction isolation
  level READ COMMITTED.
• This is the default behavior for SQL Server.
• Shared locks will be used when data is read,
  which prevents dirty reads, but the shared
  locks are released at the end of the read and
  are not kept until the end of the transaction.
• This means that nonrepeatable reads or
  phantom inserts are not prevented.
             READPAST
• Skip locked rows.
• This option causes a transaction to skip rows locked
  by other transactions that would ordinarily appear in
  the result set, rather than block the transaction
  waiting for the other transactions to release their
  locks on these rows.
• The READPAST lock hint applies only to transactions
  operating at READ COMMITTED isolation and will
  read only past row-level locks.
• Applies only to the SELECT statement.
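• A minimal queue-style sketch (the WorkQueue table is hypothetical):

-- Rows currently locked by other transactions are skipped rather than
-- blocking this SELECT; only unlocked rows are returned.
SELECT TOP 1 item_no, payload
FROM WorkQueue WITH (READPAST)
ORDER BY item_no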
        REPEATABLEREAD
• The REPEATABLEREAD hint ensures that the statement
  behaves in the same way as if the connection were set to
  transaction isolation level REPEATABLE READ.
• This is not the default behavior for SQL Server.
• Shared locks will be used when data is read, and these will
  not be released until the end of the transaction.
• This means that nonrepeatable reads are prevented.
• However, phantom inserts are not prevented.
• This lock hint may reduce concurrency, since shared locks
  are held for longer periods of time than if the default read
  committed behavior is used.
           TABLOCK
• The TABLOCK hint forces a shared
  table lock to be taken where otherwise
  SQL Server may have used row locks.

• It will not be held until the end of the
  transaction unless hints such as
  REPEATABLEREAD are also used.
          TABLOCKX
• Use an exclusive lock on a table.
• This lock prevents others from reading
  or updating the table and is held until
  the end of the statement or transaction.
          UPDLOCK
• Use update locks instead of shared
  locks while reading a table, and hold
  locks until the end of the statement or
  transaction.
• UPDLOCK has the advantage of
  allowing you to read data (without
  blocking other readers) and update it
  later with the assurance that the data
  has not changed since you last read it.
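• A minimal sketch of that read-then-update pattern (the accounts table is
  hypothetical):

BEGIN TRANSACTION

-- Take an update lock while reading: other readers using shared locks
-- are not blocked, but no other transaction can acquire an update or
-- exclusive lock, so the value cannot change before our UPDATE.
SELECT balance
FROM accounts WITH (UPDLOCK)
WHERE account_no = 1

UPDATE accounts SET balance = balance - 50 WHERE account_no = 1

COMMIT TRANSACTION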
            XLOCK
• Use an exclusive lock that will be held
  until the end of the transaction on all
  data processed by the statement.
• This lock can be specified with either
  PAGLOCK or TABLOCK, in which case
  the exclusive lock applies to the
  appropriate level of granularity.
• In the following example, the transaction isolation level is set to
  SERIALIZABLE, and the table-level locking hint NOLOCK is used with
  the SELECT statement:

USE Pubs
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
SELECT Au_lname FROM Authors WITH (NOLOCK)
GO

• When the SELECT statement is executed, the key-range locks typically
  used to maintain serializable transactions are not taken.
Customizing Locking for an
          Index
• The SQL Server query optimizer automatically makes
  the correct determination.
• You should not override the choices the optimizer
  makes.
• Disallowing a locking level can affect the concurrency
  for a table or index adversely.
• For example, specifying only table-level locks on a
  large table accessed heavily by many users can
  affect performance significantly.
   – Users must wait for the table-level lock to be released before
     accessing the table.
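• When the default really must be overridden, the sp_indexoption system
  stored procedure controls which lock granularities are allowed on an
  index; a hedged sketch against the pubs sample database (treat the exact
  index name and option name as assumptions):

-- Disallow row locks on the clustered index of the authors table,
-- forcing SQL Server to use page or table locks on it instead.
EXEC sp_indexoption 'authors.UPKCL_auidind', 'AllowRowLocks', 'FALSE'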
            Monitoring locks
• Finally, SQL Server lock management in action
• There are a number of ways to find information
  about the locking that is happening within SQL
  Server. These include the following:
  –   Use the sp_lock system stored procedure.
  –   Use the SQL Enterprise Manager.
  –   Use the Performance Monitor.
  –   Interrogate the system table syslockinfo directly.
  –   Use the SQL Profiler.
• Additionally, the sp_who system stored procedure
  is useful in finding blocked and blocking processes,
  and the DBCC utility can be used to set trace flags
  to record lock and deadlock information.
     Using the sp_lock system stored
                procedure
• The sp_lock system stored procedure displays
  information about the locks held by processes using
  the server.
• It can be entered as a standalone statement, in
  which case it will display all locks managed by the
  server, or it can take up to two SQL Server process
  identifiers (SPIDs) as parameters.
• Some example output from the sp_lock system
  stored procedure is as follows:
EXEC sp_lock
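• The procedure can also be restricted to particular sessions by passing
  one or two SPIDs directly:

EXEC sp_lock 51       -- locks held by SPID 51 only
EXEC sp_lock 51, 52   -- locks held by SPIDs 51 and 52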
                 Sp_lock
• Hint: To translate the ObjId to a table name, use
  the built-in system function OBJECT_NAME. For
  example:

SELECT OBJECT_NAME (965578478)
--------
Accounts

• The output from sp_lock lists a number of
  locks held on various objects.
• Let us investigate the locks held by SPID 51. Apart from the
  database lock, it has requested and been granted shared
  (S) locks on two resources: a KEY and a RID.
51 7 965578478 2 KEY (4501518d90d1) S GRANT
51 7 965578478 0 RID 1:348:14 S GRANT
• A RID is a row lock on a data row on a data page.
• A KEY lock is a row lock on an index entry (key plus pointer)
  on an index page.

• Note Conventionally, the data pages in a table with a
  clustered index are considered to be part of the clustered
  index.
   – For that reason a row lock on a data row on a data page in such a
     table is considered to be a KEY lock, not a RID lock.
• Hint: To convert a file ID to a filename, use the
  FILE_NAME() function.
• If we look at the KEY lock, we can see the same
  values in the dbid and ObjId columns, but there is a
  value of 2 in the IndId column.
• The following Transact-SQL will translate this index
  ID to an index name.
SELECT name FROM sysindexes
WHERE id = OBJECT_ID('Accounts') AND indid = 2
Using the SQL Server Enterprise
           Manager
The Process Info window (under Management, Current Activity) displays the same information as sp_lock.
       Process Details
• As can be seen, the last Transact-SQL
  statement executed is displayed. This
  window also allows the database
  administrator to terminate a connection
  (Kill Process) or send a message to the
  user.
           System Monitor
• The System Monitor is a Windows 2000 utility that
  enables system managers and database
  administrators to monitor the many objects within a
  Windows 2000 system.

• Go to Start > Programs > Accessories > System Tools and
  open the utility (it is labeled either System Monitor or
  Performance, depending on the system).

• There are many counters that can be monitored for
  many objects, but here we are interested in those
  counters specific to the SQL Server:Locks object.
SQL Server:Locks Counter   Explanation
Average Wait Time (ms)     Average amount of wait time (in milliseconds) for
                           each lock request that resulted in a wait
Lock Requests/sec          Number of new locks and lock conversions per second
                           requested from the lock manager
Lock Timeouts/sec          Number of lock requests per second that timed out,
                           including internal requests for NOWAIT locks
Lock Wait Time (ms)        Total wait time (in milliseconds) for locks in the
                           last second
Lock Waits/sec             Number of lock requests per second that could not be
                           satisfied immediately and required the caller to wait
                           before being granted the lock
Number of Deadlocks/sec    Number of lock requests per second that resulted in
                           a deadlock
              System Monitor
• The counters shown in the preceding table are reported per
  instance of the Locks object. The instances that can be
  monitored are as follows:
   –   RID
   –   Key
   –   Page
   –   Extent
   –   Table
   –   Database
   –   Total
• This allows us to monitor counters for a particular type of
  lock or for all locks (Total).
• Note The System Monitor differentiates between SQL
  Server 2000 instances.
                   Sp_who
• The system procedure sp_who can be used to obtain
  information on the processes active within SQL Server.
• It can be entered as a standalone statement, in which case
  it will display information about all users and processes.
• It can take a SQL Server process identifier (spid) or
  alternatively a SQL Server login name as a parameter.
• Also, the parameter value ACTIVE can be used, which
  eliminates user connections that are waiting for input from
  the user; that is, those with AWAITING COMMAND in the cmd
  column.
• Some example output from the sp_who system stored
  procedure is as follows:
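• For example, sp_who can be invoked in any of the following ways (the
  output columns include spid, status, loginame, blk, dbname, and cmd):

EXEC sp_who                -- all connections
EXEC sp_who 'ACTIVE'       -- exclude connections awaiting user input
EXEC sp_who '51'           -- a single SPID
EXEC sp_who 'sa'           -- all connections for a given login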
Sp_who output

				