					            Multiprocessors
• ELEC 6200: Computer Architecture and Design
• Instructor: Agrawal
• Name: Nam
Why do we use multiprocessors?
• Definition: Multiprocessors are parallel
  processors with a single shared address space.
• Multiprocessors offer higher performance than
  the fastest uniprocessor.
• Multiprocessors handle many applications more
  effectively than a uniprocessor: search engines,
  web servers, databases, …
• Multitasking within a single application
     Instruction and data streams of
            multiprocessors
• SIMD (single instruction, multiple data): every
  execution unit executes the same instruction, but
  each unit has its own address registers, so each
  unit can operate on different data addresses.
• MIMD (multiple instruction, multiple data):
  machines using MIMD have a number of
  processors that function asynchronously and
  independently; at any time, different processors
  may be executing different instructions on
  different pieces of data (a small sketch
  contrasting the two follows this list).
• MISD…
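• A minimal C sketch of the contrast (the arrays, worker functions, and
  thread count below are illustrative, not from the slides): the vectorizable
  loop is SIMD-style work, while the two pthreads stand in for MIMD
  processors running independent instruction streams.

#include <pthread.h>
#include <stdio.h>

#define N 8

/* SIMD style: one instruction stream; every lane applies the same add to a
 * different element (compilers map loops like this onto vector units). */
static void simd_style_add(const int *a, const int *b, int *c)
{
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}

/* MIMD style: independent instruction streams on different data.
 * Each thread stands in for a separate processor doing unrelated work. */
static void *sum_worker(void *arg)
{
    const int *v = arg;
    int s = 0;
    for (int i = 0; i < N; i++)
        s += v[i];
    printf("sum worker: %d\n", s);
    return NULL;
}

static void *max_worker(void *arg)
{
    const int *v = arg;
    int m = v[0];
    for (int i = 1; i < N; i++)
        if (v[i] > m)
            m = v[i];
    printf("max worker: %d\n", m);
    return NULL;
}

int main(void)
{
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    int c[N];

    simd_style_add(a, b, c);           /* same instruction, multiple data */

    pthread_t t1, t2;                  /* different instructions, different data */
    pthread_create(&t1, NULL, sum_worker, a);
    pthread_create(&t2, NULL, max_worker, b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    for (int i = 0; i < N; i++)
        printf("%d ", c[i]);
    printf("\n");
    return 0;
}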
     How do multiprocessors work?
• How do parallel processors share data?
• How do parallel processors coordinate?
• How many processors?
1. Multiprocessors communicate through
   shared variables in memory; all processors
   can access any memory location via loads
   and stores.
2. Because processors operating in parallel
   normally share data, access must be coordinated.
   In the lock approach, only one processor at a
   time can acquire the lock; other processors
   interested in the shared data must wait until the
   holding processor unlocks the variable (a
   minimal lock sketch follows this list).
   Alternatively, processors can communicate by
   sending and receiving messages.
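• A minimal sketch of the lock approach using POSIX threads (the counter,
  loop count, and number of threads are illustrative, not from the slides):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static long shared_counter = 0;                       /* shared variable in memory */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* only one processor holds the lock at a time */
        shared_counter++;             /* safe update of the shared variable          */
        pthread_mutex_unlock(&lock);  /* release so waiting processors can proceed   */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", shared_counter);        /* expect NTHREADS * 100000 */
    return 0;
}

• Without the mutex, increments from different threads could interleave and
  the final count would be unpredictable.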

• Parallel processing program: a single program
  that runs on multiple processors simultaneously.
• It is difficult to write parallel processing
  programs; the programmer must know a good
  deal about the hardware.
Multiprocessors connected by a single
                 bus


  [Processor]     [Processor]     [Processor]
       |               |               |
    [Cache]         [Cache]         [Cache]
       |               |               |
  ================== Single Bus ==================
            |                        |
         [Memory]                  [I/O]
• Each microprocessor is much smaller than a
   multichip processor, so more processors can be
   placed on a bus.
• Caches can lower bus traffic.
• Mechanisms (cache coherence protocols) were
  invented to keep caches and memory consistent
  for multiprocessors.
• Traffic per processor and the bus bandwidth
  determine the useful number of processors in
  such a multiprocessor: for example, if each
  processor generated, say, 20 MB/s of bus traffic
  and the bus sustained 200 MB/s, only about ten
  processors could be attached usefully.
• Single-bus designs are attractive but limited: a
  bus cannot be simultaneously high bandwidth,
  low latency, and long, and it is also limited by
  the bandwidth of a single memory module,
  which limits the number of processors.
Multiprocessors connected by a
           network
  [Processor]     [Processor]     [Processor]
       |               |               |
    [Cache]         [Cache]         [Cache]
       |               |               |
   [Memory]        [Memory]        [Memory]
       |               |               |
  =================== Network ===================
• In machines without a single global address
  space, communication is explicit; the
  programmer or the compiler must send
  messages to ship data to another node and
  must receive messages to accept data from
  another node.
• Send and receive also have the advantage of
  making it easier for the programmer to
  optimize communication: it is simpler to
  overlap computation with communication
  using explicit sends and receives than with
  implicit loads and stores (a minimal
  message-passing sketch follows).
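• A minimal sketch of explicit message passing, assuming MPI (the slides do
  not name a specific library): rank 0 must explicitly send the data, and
  rank 1 must post a matching receive to accept it.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data[4] = {1, 2, 3, 4};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Explicit send: the programmer ships the data to node 1. */
        MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int recv_buf[4];
        /* Explicit receive: node 1 must accept the data; nothing arrives
         * implicitly through loads and stores as in shared memory. */
        MPI_Recv(recv_buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("node 1 received %d %d %d %d\n",
               recv_buf[0], recv_buf[1], recv_buf[2], recv_buf[3]);
    }

    MPI_Finalize();
    return 0;
}

• Run with two processes (e.g. mpirun -np 2 ./a.out, assuming an MPI
  installation): rank 0 ships the array over the network and rank 1 accepts it.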
               New trend
• An alternative to multiple microprocessors
  sharing an interconnect is bringing the processors
  inside the chip. In such designs, the processors
  typically share some of the caches and the
  external memory interface.
• Advantages: instruction accesses are amortized,
  the latencies associated with chip-to-chip
  communication disappear, and shared data
  structures are much less of a problem.
• The challenge lies in software: what architecture
  makes parallel software easier to write?
Thank You

				