Inside the computer

Classification of Systems:
– Microcomputer
– Minicomputer
– Mainframe
– Supercomputer
• Chapters 1-5 in Capron

Microcomputer
• Personal Computer / Workstation.
• Desktop machine, including portables.
• Used for small, individual tasks - such as simple desktop publishing, small business accounting, etc.
• Typical cost : £500 to £5,000.
• Example : The PCs in the labs are microcomputers.

Minicomputer
• Medium sized server.
• Desk to fridge sized machine.
• Used for distributed data processing and multi-user server support.
• Typical cost : £5,000 to £500,000.
• Example : Scarlet is a minicomputer.

Mainframe
• Large server / large business applications.
• Large machines in purpose-built rooms.
• Used as large servers and for intensive business applications.
• Typical cost : £500,000 to £10,000,000.
• Example : IBM ES/9000, IBM 370, …

Supercomputer
• Scientific applications.
• Large machines.
• Typically employ parallel architecture (multiple processors running together).
• Used for VERY numerically intensive jobs.
• Typical cost : £5,000,000 to £25,000,000.
• Example : Cray supercomputer.
What's in a Computer System?
• The Onion Model - layers.
• Hardware
• BIOS
• Operating system
• Where does the operating system come in?

Software
• Divided into two main areas:
• Operating system
  – Used to control the hardware and to provide an interface between the user and the hardware.
  – Manages resources in the machine, like
    • Memory
    • Disk drives
• Applications software
  – includes games, word-processors, databases, etc.

User interfaces
• CUI
  – Command Line Interface
• GUI
  – Graphical User Interface
• WIMP
  – Windows, Icons, Mouse, Pulldown menus

Hardware
• The chunky stuff!
• If you can touch it... it's probably hardware!
• The mother board.
  – If we have motherboards... surely there must be fatherboards? right?
  – What about sonboards, or daughterboards?!
• Hard disk drives
BIOS
• Basic Input Output System.
• Directly controls hardware devices like UARTs (Universal Asynchronous Receiver-Transmitter) - used in COM ports.
• Stored in the ROM of the machine.
• What's ROM? - Read Only Memory.
  – Preserved while the computer is turned off.

[Diagram: Peripherals — Central Processing Unit (CPU) — Memory]
Memory
• Stores the program to be executed and the data that it manipulates, as bits.
• Volatile - contents are lost when the computer is switched off.
• Memory consists of a series of cells, each of which holds one word of information.
• Each cell has a unique memory address (a number) that can also be written as bits.
Memory
• Memory cells and memory addresses:

    Memory cells         Memory addresses
    0000110000111100     00000011
    1010101010101010     00000110   <- a cell containing one word of information
    0001100011101011     00000111
    0001000100010001     00001000

Store and fetch
• Two key properties of a cell are its value and its address.
• Memory hardware performs two basic operations:
  – store ( value, address )
  – fetch ( address ) -> value
Types of memory
• Random Access Memory (RAM)
• Read Only Memory (ROM)
• Erasable Programmable Read Only Memory (EPROM)

ROM
• Read Only Memory.
• Contains crucial start-up information for PCs.
• Typically only 48K.
• Non-volatile - information is preserved when the power is switched off.
• Relatively slow - sometimes copied to RAM for speed.

RAM
• Random access memory.
• Also located on memory chips.
• Volatile - information is lost when the power is switched off.
• Early computers had around 4K of RAM.
• These days 128Mb of RAM is around £30.
• Read about SIMM, DIMM, SRAM, DRAM.

EPROM
• Using a ROM Burner the instructions within the chips can be changed.
• Used for updates.
• Also seen in DVD players and videos.
Measuring memory
• Memory is not usually measured in words.
• Measured instead in:
  – bytes
  – kilobytes
  – megabytes
  – gigabytes
• The international word for a byte is an octet.

Memory
• Every 0 or 1 in binary is called a bit (0 means off, 1 means on).
• 8 bits = 1 byte = one character (A, B, C, $, £, 1, 2).
• kilobyte (Kb) = 2^10 = 1024 bytes
• megabyte (Mb) = ~1,000,000 bytes
• gigabyte (Gb) = ~1 billion bytes
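The unit definitions can be checked directly. Note the looseness in the slide: strictly, a megabyte as a power of two is 2^20 = 1,048,576 bytes, which is only "about a million":

```python
bits_per_byte = 8
kilobyte = 2 ** 10        # 1024 bytes
megabyte = 2 ** 20        # 1,048,576 bytes - roughly a million
gigabyte = 2 ** 30        # 1,073,741,824 bytes - roughly a billion

print(kilobyte)           # 1024
print(megabyte)           # 1048576
print(128 * megabyte)     # bytes in the 128Mb module mentioned above
```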
Where are we ??

The CPU

[Diagram: CPU with Arithmetic Logic Unit, general purpose registers, MBR and MAR]

• The ALU is the sub-component that performs basic arithmetic and logic operations.
• Registers provide very fast but expensive storage (hence there are only a few):
  – General purpose registers
  – Special purpose registers
• IR - instructions are put here to execute them.
• PC - memory address of the next instruction.
• MAR - where addresses are placed for a fetch or store.
• MBR - where values are placed for a fetch or store.

Program execution
• The fetch-decode-execute cycle:
  – Fetch the instruction at memory address PC from memory and put it into IR.
  – Decode and execute the instruction in IR.
  – Increment PC to point at the next instruction.
• Each single instruction may involve many smaller micro-instructions, such as moving words of information between registers.
Chapter 3 (Capron)

• For example, the single instruction "add the data at address X to the data at address Y and store the result back at Y" breaks down into micro-instructions:
  – move contents of PC to MAR
  – move from MBR to IR
  – put address X into MAR
  – move MBR to general register A
  – put address Y into MAR
  – move MBR to general register B
  – add A to B, result in C
  – move C to MBR

The clock
• The whole cycle is driven by the CPU's clock.
• CPU speed is measured in Hertz (Hz). (A 500MHz computer can handle 500 million machine cycles per second.)
• Chunks of information have to be moved between parts of the computer - carried along an arrangement of parallel wires called a bus.
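The fetch-decode-execute cycle and the add-X-to-Y example above can be sketched as a toy simulator. The instruction format and op-codes here are invented for illustration; real machines encode instructions as binary op-codes and addresses:

```python
# Toy CPU: each instruction is an (op, address) pair. The loop mirrors
# the cycle: fetch into IR, increment PC, decode, then execute.
LOAD, ADD, STORE, HALT = "LOAD", "ADD", "STORE", "HALT"

def run(program, memory):
    pc, a = 0, 0                      # PC register and one general register A
    while True:
        ir = program[pc]              # fetch the instruction at address PC into IR
        pc += 1                       # increment PC to point at the next instruction
        op, addr = ir                 # decode
        if op == LOAD:                # execute ...
            a = memory[addr]
        elif op == ADD:
            a += memory[addr]
        elif op == STORE:
            memory[addr] = a
        elif op == HALT:
            return memory

# "add the data at address X (here 0) to the data at address Y (here 1)
#  and store the result back at Y"
mem = [5, 7]
run([(LOAD, 0), (ADD, 1), (STORE, 1), (HALT, None)], mem)
print(mem[1])   # 12
```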
Peripherals
• Various peripherals are responsible for input, output and permanent storage:
  – programs and data have to get into memory
  – results have to be displayed to users
  – permanent storage is required
• Access to peripherals is orders of magnitude slower than the speed of the CPU or the speed of memory access.
• Interrupts are used to allow the CPU to get on with something else in the meantime.

Peripheral Devices
• Input
  – Mouse
  – Scanner
  – Light-pen
  – Modem
    • What does modem mean?
• Output
  – Printer
  – Plotter
  – Monitor
  – Modem
• Storage
  – Hard drive
  – Floppy drive
  – CD-ROM
  – Zip disk
• Some peripherals use special dedicated I/O processors to increase performance.
• Each peripheral is controlled by its own instructions:
  – disk: seek, read, write …
  – tape: wind, rewind, read, write …
  – printer: form feed, reset, newline …
  – monitor: clear, refresh …

Summary of simple model
• Memory - words, cells, addresses, fetch and store, sizes.
• CPU - registers, ALU, fetch-decode-execute, micro-programs, clock.
• Peripherals - interrupts, specialised instructions.
Software
• The development of programming languages.
• Language translators.
• Software tools.

A program:
"The written commands in a programming language, together with the comments that describe what they do."

The development of programming languages
• Algorithms and programs revisited
• Machine-code
• Assembly language
• High-level languages
• Object oriented languages

Algorithms
• An algorithm is a logical plan for solving a task that can be understood by humans.
• There are many notations for expressing algorithms. They need to include:
  – hierarchical decomposition of tasks
  – sequences of actions
  – choice
  – descriptions of objects and information
1st generation: Machine code
• Computers execute binary instructions called machine code.
• Typically a few simple instructions consisting of op-codes and addresses.
• Very difficult for humans to work with algorithms in machine code:
  – hard to read
  – easy to make but hard to correct mistakes
  – minimal program or data structures
  – not portable

2nd generation: Assembly languages
• Limited improvements over machine code:
  – mnemonics for op-codes improve readability
  – more powerful addressing styles such as named locations, relative addressing and indirect addressing
• ADD A
• LOAD A + 1
• STORE (A)
High level languages
• Developed in the 60s and 70s to be closer to algorithms in terms of notation, control structures, data structures, and task decomposition.
• Relatively portable.
• Based around hierarchical task decomposition realised using functions/procedures/sub-routines.

Important features of HLLs
• Readability
  – use of English words and phrases
  – mathematical notation
  – block structure
  – comments
• Powerful instructions and control structures
  – complex expressions and operators
  – choice and repetition
  – input and output
• Data structures
  – named variables and constants
  – typed variables
  – structured types (arrays and records)
  – dynamic linked types (trees, lists, queues and stacks)
• Program and system structure
  – functions
  – libraries and modules
• Abstraction and encapsulation
  – functions support procedural abstraction
  – modules support data abstraction

Object-oriented languages
• Encapsulate data and operations into objects.
• Classes provide templates for objects:
  – establish precise interfaces between system components before implementation
  – encourage team-work, re-use and safety

[Diagram: an object - internal data surrounded by a public interface of operations]
Language translators
• The role of language translators
• Interpreters and compilers
• Syntax and semantics
• Stages of interpretation

The role of language translators
• Programming languages have evolved to make programming easier for humans.
• Computers still execute machine code.
• Language translators are required to translate between the two.
• A new translator is required for each combination of programming language and machine architecture.
• Translation involves several tasks:
  – translating high level program statements into combinations of machine code instructions
  – translating high level data structures into combinations of memory locations
  – linking to existing code (libraries, modules, objects, system calls) to form a final program
• Two broad approaches to translation:
  – compilation and interpretation

Interpretation
• One statement is translated and then executed at a time.
• Start at the beginning of the program:

    REPEAT
        translate next program statement
        if no translation error occurred then execute the statement
    UNTIL the end of the program or an error occurs

Compilation
• The whole program is translated, then executed.
• The programmer writes the source program, which is translated into the object program.
• Start at the beginning of the source program:

    REPEAT
        translate next source program statement
    UNTIL the end of the source program
    IF translation errors occurred
        THEN report errors
        ELSE execute the object program

• Interpretation and compilation both have advantages and disadvantages.
• They are suited to different environments and stages of the development process.
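The interpretation loop above can be sketched for a hypothetical two-command language. The language and its `set`/`add` commands are invented purely for illustration:

```python
# A minimal interpreter: translate one statement, then execute it
# immediately, until the program ends or an error occurs - the
# REPEAT ... UNTIL loop described above.
def interpret(source):
    variables = {}
    for line in source.splitlines():
        parts = line.split()                         # "translate" the statement
        if not parts:
            continue
        if parts[0] == "set" and len(parts) == 3:    # set <name> <number>
            variables[parts[1]] = int(parts[2])      # execute it at once
        elif parts[0] == "add" and len(parts) == 3:  # add <name> <number>
            variables[parts[1]] += int(parts[2])
        else:
            raise SyntaxError(f"cannot translate: {line!r}")  # stop on error
    return variables

print(interpret("set x 10\nadd x 5"))   # {'x': 15}
```

A compiler would instead translate the whole source into a separate object program before running any of it.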
Languages, syntax and semantics
• Program languages are much simpler than 'natural' languages such as English, but do share some similarities:
  – vocabulary of defined words
  – names created by the programmer (identifiers)
• Programming languages have a grammar that defines their syntax.
• Statements also have a semantic meaning.
• Another important difference from natural languages is ambiguity.

Examples of syntax versus semantics
• In English:
  – John has be a bad boys (incorrect syntax)
  – Flying tables sleep furiously (incorrect semantics)
• In a program:
  – c + b = a; (incorrect syntax)
  – d = 0; b = c/d; (incorrect semantics)
  – hyp = opp + adj; (incorrect semantics)
• Errors can show up at translation time (syntax errors) or at run-time (semantic errors).
• Run-time errors might be due to the program or its environment.

Scheduling and memory
• Scheduling
• Swapping
• Memory - memory management
• Paging
• Software development
Goals of memory management
• Enable several processes to be executed (seemingly) at the same time.
• Provide a satisfactory level of performance for users.
• Protect each process from the others.
• Enable sharing of memory between processes.
• Make memory addressing transparent to processes.

Physical vs. Logical
• The computer's memory size is determined by hardware. This is the physical memory size.
• It is usually smaller than the address space.
• The address space is what a user program can access. This is the logical memory size (sometimes known as the virtual memory size).
Running a process
• The first step must be to load the executable image (.EXE file in MS-DOS; a.out format file in UNIX) from secondary storage into memory.

Scheduling
• Running a program results in a process.
• The basic rule is that the CPU can only be allocated to one process at a time.
• Multiprogramming is the illusion that the CPU is shared between many concurrent processes.
Swapping
• The CPU has to swap between processes:
  – preserve the complete state of the outgoing process (program, data and registers)
  – restore the state of the incoming process
  – hand over control of the CPU
• The state of a process is captured in its core-image, a freeze-frame of it as it runs.
• At any time, several processes may have core images in memory or on disk.

When to swap?
• How does the CPU know when to swap?
  – the process might run until it completes
  – the process might run until it requests I/O
  – the O/S might regularly interrupt the CPU
• Whenever a swap needs to happen the scheduler (part of the kernel) takes over the CPU and chooses the next process to run:
  – round robin
  – dynamic priority
• This is supported by the process table.
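Round robin, one of the scheduling policies named above, can be sketched as follows. The quantum length and the (name, remaining time) process representation are invented for illustration:

```python
from collections import deque

# Round robin: give each process one fixed time quantum in turn,
# re-queueing any process that still needs CPU time - a crude model
# of repeatedly swapping processes in and out.
def round_robin(processes, quantum=2):
    queue = deque(processes)            # toy "process table": (name, remaining)
    order = []                          # which process ran in each time slot
    while queue:
        name, remaining = queue.popleft()    # swap in the next process
        order.append(name)
        remaining -= quantum                 # it runs for one quantum
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the queue
    return order

print(round_robin([("A", 3), ("B", 2), ("C", 5)]))
```

A dynamic-priority scheduler would instead pick the highest-priority ready process rather than simply taking the front of the queue.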
Relocation
• A program with a user address space starting at 0 will not necessarily be loaded into a physical space starting at address 0.
• Requires relocation of addresses (but not data).
• Load-time relocation (sorted out once when the program is loaded - the program is amended).
• Run-time relocation via relocation registers (sorted out each time a logical address is accessed).

Memory allocation schemes
• Single process system
• Fixed partition system
• Segmented systems
• Problems
Memory allocation systems: single process systems
• Memory holds the operating system, the user process, and unused space.
• A relocation register is needed to indicate the base address of the user process.
• For protection, we also need a register to indicate the limit address.

Memory allocation systems: fixed partition system
• Several partitions, each capable of holding one program. Partitions might or might not be of the same size.
• The largest partition needs to be large enough to hold the largest program.
• Results in internal fragmentation of memory due to unused space (holes) in each partition.
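The base and limit registers described above amount to a simple run-time check, sketched below (the base, limit and address values are arbitrary examples):

```python
# Run-time relocation: every logical address is first checked against
# the limit register (protection), then the base register is added to
# form the physical address.
def relocate(logical_address, base, limit):
    if not 0 <= logical_address < limit:
        raise MemoryError(f"address {logical_address} outside process space")
    return base + logical_address

# A process loaded at physical address 4096 with a 1024-byte space:
print(relocate(100, base=4096, limit=1024))   # 4196
```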
Memory allocation systems: segmented systems
• The program is split into two or more (variable sized) segments.
• The most common division is code and data. It is also possible to have programmer-defined segments (e.g. one for each module of source code).
• Segments need not be contiguous, thereby reducing problems of allocating free space.
• Separate pairs of registers are needed for each segment.
• Segment attributes: sharable; writeable.

Memory allocation systems: problems
• Fragmentation. Cannot use the space between segments unless there is another segment small enough to fit.
• Compaction is almost as inefficient as swapping (but at least frees up some useful space).
• Trade-off between more, smaller segments making memory fitting easier against the need for more segment registers.
Paging
• Memory is divided up into fixed-size chunks called pages.
• Typical page sizes: 512 bytes, 1K, 2K, 4K, …
• Relocation problem - effectively we need a relocation register for each page. However, since all pages are the same size, no limit register is needed.

Page addressing
• Each process has a page table - it defines the mapping from logical pages to physical pages (and any page attributes).
• Each logical address is split into two parts: page number and offset within the page. The page number is mapped by the page table, then the offset is added in.
• Solves most of the fragmentation problem since a process can be fitted into a number of separate holes. Some fragmentation still exists because of unused space within a page.
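The page-number/offset split can be sketched for a hypothetical 1K page size; the page table contents here are invented for illustration:

```python
PAGE_SIZE = 1024                      # 1K pages

# Page table for one process: logical page number -> physical page number.
page_table = {0: 7, 1: 3, 2: 9}

def translate(logical_address):
    page = logical_address // PAGE_SIZE     # page number part of the address
    offset = logical_address % PAGE_SIZE    # offset-within-page part
    frame = page_table[page]                # page number mapped by page table
    return frame * PAGE_SIZE + offset       # then the offset is added in

print(translate(1030))    # logical page 1, offset 6 -> 3*1024 + 6 = 3078
```

With a power-of-two page size, real hardware does this split with bit operations: the offset is the low bits of the address and the page number is the high bits.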
Virtual memory
• Paging (and to a lesser extent segmentation) suggests that not all pages (or segments) of a program need be in physical memory to run the program. Only the bits containing the code being executed and the data being accessed need be there. The remainder could be on secondary storage.
• More processes can be held in memory (ready to run), thereby easing scheduling problems. In the extreme we could hold 1 page of each process.
• Each process can be larger than the actual physical memory available. Illusion that physical memory is as big as logical memory. With 32-bit address spaces the illusion is a necessity.
• The page table must record whether a page is in memory or not.
• If an address that is not in a page in memory (the resident set) is accessed, we have a slight problem. This is called a page fault (usually an interrupt) - the requisite page must be fetched from disk (demand paging).
• If at any time a page is to be brought into memory and there are no free pages, another page must be swapped out to make room for it (page replacement).
• If the page table records whether the contents of a page have changed (since it was read in from disk), then only dirty pages need to be copied back out to the disk.
• Repeated swapping of pages is inefficient. We need to keep a substantial portion of the program in memory to reduce the number of page faults to a minimum. This is the working set. The working set can be determined by observing which pages are accessed over a period of time. (Information to record this can be stored in the page table.) Pages not accessed for a long time are good candidates to be swapped out first.
• Efficiency depends on there being few page faults. Experiments have shown that most processes exhibit locality of reference (i.e. they are more likely to refer to addresses close by than those far away - e.g. access to local variables within a procedure). Thus a working set of related pages will tend to generate few page faults. (Consequently, good use of modular programming strategies will improve the efficiency of a program.)
• If a process's resident set is significantly smaller than its working set, rather than continue and have increasing numbers of page faults (thrashing), it might be better to swap the whole process out (thus freeing memory for other processes) and bring a larger proportion of it back later.
Software development
• The waterfall process
• Classes

The waterfall process

[Diagram: waterfall stages - each stage (e.g. Coding) flows into the next]

• In the "waterfall model" of software development, each stage should be completed before starting the next.
• However, there is no reason why different parts of a program should not progress at different rates.
• Also, in practice there is often some overlap between phases, e.g. one may need to implement a small part of a proposed design in order to find out whether it is feasible or not.

NOTE: There is an old joke that says:
"In theory there is no difference between theory and practice, but in practice there is."
It applies here.
Structure in computer programs
• Software engineering suggests that you should divide programs into separate pieces so that you can understand a piece at a time.
• Consider the following examples:
  – Ask a chef to prepare a meal
  – Calculate the square root of a decimal number
  – Send an email message
• A function is an early form of program division. A function has:
  – An interface which describes how you can use it
  – An implementation which carries out the task
• You can use a function without knowing how it is implemented….
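The square-root example above makes the interface/implementation split concrete. The caller only needs the interface; the implementation below happens to use Newton's method, one of many possible choices:

```python
# Interface: square_root(x) returns an approximation to the square root
# of a non-negative number x. That is all a caller needs to know.
def square_root(x, tolerance=1e-10):
    # Implementation (hidden detail): Newton's method iteration.
    if x < 0:
        raise ValueError("x must be non-negative")
    if x == 0:
        return 0.0
    guess = x
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2   # average the guess with x/guess
    return guess

print(square_root(2.0))   # roughly 1.41421356...
```

The implementation could be swapped for a lookup table or a hardware instruction without any caller noticing, which is exactly the point of the interface.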
Objects
• An alternative to dividing a program into functions is to use objects.
• Objects are a natural way of thinking about things:
  – We are surrounded by objects (cars, televisions, people)
• A computer program which is written in terms of objects closely represents the real-world problem it is trying to solve.
• Similar to a function, an object has…
  – An interface which describes how the object can be used
  – An implementation which represents the object's internal state
• An object's state dictates how it responds to requests.

An example object: a CD Player
• Operations include:
  – pause
  – seek
  – forward
• The state:
  – Is there a disk in the player?
  – What track is the player currently at?

What the CD player illustrates
• An object provides a set of operations (methods) that the user may request.
• An object has an internal state, which should be hidden from the user.
  – Thus an object is a black box.
  – You should only use an object through its interface.

A class of CD Players
• We have discussed a single CD Player.
  – But there are millions of such players throughout the world.
• Most different makes and models have a lot in common.
  – To capture this commonality we can define a class of CD players.
• A class would describe the operations and internal state which is common to all CD players.
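The CD player discussion maps directly onto a class definition. A minimal sketch in Python, where the method and attribute names are invented for illustration:

```python
# A class of CD players: the template captures the operations and state
# common to every player. The underscored attributes are the hidden
# internal state; the methods are the public interface.
class CDPlayer:
    def __init__(self):
        self._disk_loaded = False   # Is there a disk in the player?
        self._track = 0             # What track is the player currently at?

    def load(self):
        self._disk_loaded = True
        self._track = 1

    def seek(self, track):
        if self._disk_loaded:       # state dictates the response to requests
            self._track = track

    def forward(self):
        if self._disk_loaded:
            self._track += 1

player = CDPlayer()      # one object (instance) of the class
player.seek(5)           # ignored: no disk loaded yet, so the request fails
player.load()
player.forward()
print(player._track)     # 2
```

Each individual player is an object (instance) of the class; millions of objects can share the one class definition, which is the commonality the slide describes.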