# Analysis of Algorithms


Input → Algorithm → Output

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.

## Math You Need to Review

- Summations
- Logarithms and exponents
- Proof techniques
- Basic probability

Properties of logarithms:

$$\log_b(xy) = \log_b x + \log_b y \qquad \log_b(x/y) = \log_b x - \log_b y$$
$$\log_b x^a = a \log_b x \qquad \log_b a = \frac{\log_x a}{\log_x b}$$

Properties of exponentials:

$$a^{b+c} = a^b a^c \qquad a^{bc} = (a^b)^c \qquad \frac{a^b}{a^c} = a^{b-c}$$
$$b = a^{\log_a b} \qquad b^c = a^{c \log_a b}$$

## Mathematics Review

### Exponents

$$x^a x^b = x^{a+b} \qquad \frac{x^a}{x^b} = x^{a-b} \qquad (x^a)^b = x^{ab}$$
$$x^n + x^n = 2x^n \ne x^{2n}$$
$$2^n + 2^n = 2^{n+1}$$

## Mathematics Review (Cont’d)

### Logarithms

$$x^a = b \iff \log_x b = a$$
$$\log_a b = \frac{\log_c b}{\log_c a}, \quad c > 0$$
$$\log(ab) = \log a + \log b \qquad \log(a/b) = \log a - \log b$$
$$\log(a^b) = b \log a$$
$$\log x < x \quad \text{for all } x > 0$$

## Mathematics Review (Cont’d)

### Logarithms (cont’d)

$$\log_2 1 = 0, \quad \log_2 2 = 1, \quad \log_2 1024 = 10, \quad \log_2 1048576 = 20$$
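These identities are easy to sanity-check numerically; a minimal Python sketch (the sample values are chosen arbitrarily):

```python
import math

# Spot-check the exponent and logarithm identities with sample values.
a, b, c, x = 3.0, 4.0, 5.0, 2.0

assert math.isclose(x**a * x**b, x**(a + b))        # x^a * x^b = x^(a+b)
assert math.isclose((x**a)**b, x**(a * b))          # (x^a)^b = x^(ab)
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log(a**b), b * math.log(a))
# Change of base: log_a b = log_c b / log_c a
assert math.isclose(math.log(b, a), math.log(b, c) / math.log(a, c))
assert math.log2(1024) == 10.0 and math.log2(1048576) == 20.0
```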

## Mathematics Review (Cont’d)

### Series: Geometric Series

$$\sum_{i=0}^{n} a^i = \frac{a^{n+1} - 1}{a - 1}$$

$$\sum_{i=0}^{n} 2^i = 2^{n+1} - 1$$

$$\sum_{i=0}^{n} a^i \le \frac{1}{1-a} \quad \text{if } 0 < a < 1, \qquad \sum_{i=0}^{\infty} a^i = \frac{1}{1-a} \quad \text{if } 0 < a < 1$$

## Mathematics Review (Cont’d)

### Series: Other Useful Series

$$\sum_{i=1}^{N} i = \frac{N(N+1)}{2} \approx \frac{N^2}{2}$$

$$\sum_{i=1}^{N} i^2 = \frac{N(N+1)(2N+1)}{6} \approx \frac{N^3}{3}$$

$$\sum_{i=1}^{N} i^k \approx \frac{N^{k+1}}{k+1}, \quad k \ne -1$$
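These closed forms can be verified numerically for small N; a quick Python check (N = 100 is an arbitrary sample size):

```python
# Numeric check of the series formulas above (a sketch, not a proof).
N = 100

assert sum(2**i for i in range(N + 1)) == 2**(N + 1) - 1
assert sum(range(1, N + 1)) == N * (N + 1) // 2
assert sum(i * i for i in range(1, N + 1)) == N * (N + 1) * (2 * N + 1) // 6

# The k-th power sum is only approximated by N^(k+1)/(k+1); check the ratio.
k = 3
ratio = sum(i**k for i in range(1, N + 1)) / (N**(k + 1) / (k + 1))
assert 0.9 < ratio < 1.1
```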

## Mathematics Review (Cont’d)

### Other Useful Formulas

$$\sum_{i=1}^{N} f(N) = N \cdot f(N)$$

$$\sum_{i=n_0}^{N} f(i) = \sum_{i=1}^{N} f(i) - \sum_{i=1}^{n_0 - 1} f(i)$$

## Mathematics Review (Cont’d)

### Modular Arithmetic

$a \equiv b \pmod{n}$ if $n$ divides $a - b$.

For example (clock arithmetic):

$$9 + 4 \equiv 1 \pmod{12}$$
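In code, congruence modulo n can be tested directly from the definition; a small sketch (the helper name `congruent` is ours):

```python
def congruent(a: int, b: int, n: int) -> bool:
    """True when n divides a - b, i.e. a ≡ b (mod n)."""
    return (a - b) % n == 0

assert congruent(13, 1, 12)       # 9 + 4 = 13 is 1 o'clock on a 12-hour clock
assert not congruent(13, 2, 12)
```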

## Mathematics Review (Cont’d)

### Proof Techniques: Induction

Let $P_n$ denote a statement involving the integer variable $n$. The principle of mathematical induction states: if $P_1$ is true and, for every integer $k \ge 1$, $P_{k+1}$ is true whenever $P_k$ is true, then $P_n$ is true for all $n \ge 1$.

## Running Time (§1.1)

Most algorithms transform input objects into output objects. The running time of an algorithm typically grows with the input size. Average-case time is often difficult to determine, so we focus on the worst-case running time, which is:

- Easier to analyze
- Crucial to applications such as games, finance and robotics

*(Chart: best-case, average-case and worst-case running time versus input size.)*

## Experimental Studies (§1.6)

- Write a program implementing the algorithm
- Run the program with inputs of varying size and composition
- Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time
- Plot the results

*(Chart: measured running time in ms versus input size.)*
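The same workflow can be sketched in Python, where `time.perf_counter` plays the role of System.currentTimeMillis(); the algorithm under test here is a placeholder linear scan:

```python
import time

def algorithm(data):
    # Placeholder for the algorithm under test: a simple linear scan for the max.
    m = data[0]
    for x in data:
        if x > m:
            m = x
    return m

results = []
for n in (1000, 2000, 4000, 8000):       # inputs of varying size
    data = list(range(n))
    start = time.perf_counter()          # analogous to System.currentTimeMillis()
    algorithm(data)
    elapsed = time.perf_counter() - start
    results.append((n, elapsed))         # plot these (n, time) pairs

for n, t in results:
    print(f"n={n}: {t:.6f} s")
```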

Limitations of Experiments
It is necessary to implement the algorithm, which may be difficult Results may not be indicative of the running time on other inputs not included in the experiment. In order to compare two algorithms, the same hardware and software environments must be used
Analysis of Algorithms 13

## Theoretical Analysis

- Uses a high-level description of the algorithm instead of an implementation
- Characterizes running time as a function of the input size, n
- Takes into account all possible inputs
- Allows us to evaluate the speed of an algorithm independent of the hardware/software environment

## Pseudocode (§1.1)

- High-level description of an algorithm
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues

Example: find the max element of an array.

    Algorithm arrayMax(A, n)
        Input array A of n integers
        Output maximum element of A
        currentMax ← A[0]
        for i ← 1 to n − 1 do
            if A[i] > currentMax then
                currentMax ← A[i]
        return currentMax
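A runnable translation of the pseudocode, as a sketch in Python:

```python
def array_max(A):
    """Runnable version of the arrayMax pseudocode above."""
    current_max = A[0]
    for i in range(1, len(A)):      # for i <- 1 to n - 1
        if A[i] > current_max:
            current_max = A[i]
    return current_max

assert array_max([3, 7, 2, 9, 4]) == 9
```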

## Pseudocode Details

Control flow:

- if … then … [else …]
- while … do …
- repeat … until …
- for … do …
- Indentation replaces braces

Method declaration:

    Algorithm method(arg [, arg…])
        Input …
        Output …

Method call: var.method(arg [, arg…])

Return value: return expression

Expressions:

- ← Assignment (like = in Java)
- = Equality testing (like == in Java)
- n² Superscripts and other mathematical formatting allowed

## The Random Access Machine (RAM) Model

- A CPU
- A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
- Memory cells are numbered, and accessing any cell in memory takes unit time

*(Figure: memory cells numbered 0, 1, 2, ….)*

## Primitive Operations

- Basic computations performed by an algorithm
- Identifiable in pseudocode
- Largely independent from the programming language
- Exact definition not important (we will see why later)
- Assumed to take a constant amount of time in the RAM model

Examples:

- Evaluating an expression
- Assigning a value to a variable
- Indexing into an array
- Calling a method
- Returning from a method
- Comparing 2 numbers
- Following an object reference

## Counting Primitive Operations (§3.4)

By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size.

    Algorithm arrayMax(A, n)                # operations
        currentMax ← A[0]                   2
        for i ← 1 to n − 1 do               1 + 2n
            if A[i] > currentMax then       2(n − 1)
                currentMax ← A[i]           2(n − 1)
            { increment counter i }         2(n − 1)
        return currentMax                   1
                                    Total   8n − 2

## Estimating Running Time

Algorithm arrayMax executes 8n − 2 primitive operations in the worst case. Define:

- a = time taken by the fastest primitive operation
- b = time taken by the slowest primitive operation

Let T(n) be the worst-case time of arrayMax. Then

$$a(8n - 2) \le T(n) \le b(8n - 2)$$

Hence, the running time T(n) is bounded by two linear functions.

## Growth Rate of Running Time

Changing the hardware/software environment:

- Affects T(n) by a constant factor, but
- Does not alter the growth rate of T(n)

The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax.

## Growth Rates

Growth rates of functions:

- Linear ≈ n
- Quadratic ≈ n²
- Cubic ≈ n³

In a log-log chart, the slope of the line corresponds to the growth rate of the function.

*(Chart: T(n) versus n on log-log axes for the three functions.)*

## Constant Factors

The growth rate is not affected by:

- constant factors or
- lower-order terms

Examples:

- 10²n + 10⁵ is a linear function
- 10⁵n² + 10⁸n is a quadratic function

*(Chart: T(n) versus n on log-log axes for both examples.)*

## Big-Oh Notation (§3.4.1)

Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n₀ such that

$$f(n) \le c \cdot g(n) \quad \text{for } n \ge n_0$$

Example: 2n + 10 is O(n):

- 2n + 10 ≤ cn
- (c − 2)n ≥ 10
- n ≥ 10/(c − 2)
- Pick c = 3 and n₀ = 10

*(Chart: 2n + 10 plotted against n and 3n on log-log axes.)*

## Big-Oh Example

Example: the function n² is not O(n):

- n² ≤ cn
- n ≤ c
- The above inequality cannot be satisfied, since c must be a constant

*(Chart: n², 100n, 10n and n on log-log axes.)*

## More Big-Oh Examples

- 7n − 2 is O(n): need c > 0 and n₀ ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n₀; this is true for c = 7 and n₀ = 1
- 3n³ + 20n² + 5 is O(n³): need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n₀; this is true for c = 4 and n₀ = 21
- 3 log n + log log n is O(log n): need c > 0 and n₀ ≥ 1 such that 3 log n + log log n ≤ c·log n for n ≥ n₀; this is true for c = 4 and n₀ = 2
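The claimed witnesses (c, n₀) can be spot-checked by brute force over a finite range; a sketch, assuming base-2 logarithms in the third example (the helper `bounded` is ours, and a finite check is evidence, not a proof):

```python
import math

def bounded(f, g, c, n0, upto=10**4):
    """Check f(n) <= c * g(n) for all n in [n0, upto)."""
    return all(f(n) <= c * g(n) for n in range(n0, upto))

assert bounded(lambda n: 7*n - 2, lambda n: n, c=7, n0=1)
assert bounded(lambda n: 3*n**3 + 20*n**2 + 5, lambda n: n**3, c=4, n0=21)
assert bounded(lambda n: 3*math.log2(n) + math.log2(math.log2(n)),
               lambda n: math.log2(n), c=4, n0=2)
```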

## Big-Oh and Growth Rate

The big-Oh notation gives an upper bound on the growth rate of a function. The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more than the growth rate of g(n). We can use the big-Oh notation to rank functions according to their growth rate.

|                   | f(n) is O(g(n)) | g(n) is O(f(n)) |
|-------------------|-----------------|-----------------|
| g(n) grows more   | Yes             | No              |
| f(n) grows more   | No              | Yes             |
| Same growth       | Yes             | Yes             |

## To Prove Big-Oh

If “f(n) is O(g(n))”, then

$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = c, \qquad c \text{ a finite constant (possibly } 0\text{)}$$

Two cases:

- f(n) grows slower than g(n): $\lim_{n \to \infty} f(n)/g(n) = 0$
- f(n) grows at the same rate as g(n): $\lim_{n \to \infty} f(n)/g(n) = c$, where $c \ne 0$ is a finite constant
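The limit test can be approximated numerically by evaluating f(n)/g(n) at a large n; a heuristic sketch (the helper `ratio` is ours, and this is an illustration, not a proof):

```python
def ratio(f, g, n=10**6):
    """Approximate lim f(n)/g(n) by evaluating at one large n."""
    return f(n) / g(n)

# 2n + 10 is O(n): the ratio tends to the finite constant 2.
assert abs(ratio(lambda n: 2*n + 10, lambda n: n) - 2) < 0.01
# n^2 is not O(n): the ratio grows without bound.
assert ratio(lambda n: n*n, lambda n: n) > 10**5
```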

## Big-Oh Rules

If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.:

1. Drop lower-order terms
2. Drop constant factors

- Use the smallest possible class of functions: say “2n is O(n)” instead of “2n is O(n²)”
- Use the simplest expression of the class: say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”

## Properties of Big-Oh (p. 119)

*(Table of Big-Oh properties; slide reproduced from CS1, School of Informatics, UoE.)*

## Asymptotic Algorithm Analysis

The asymptotic analysis of an algorithm determines the running time in big-Oh notation. To perform the asymptotic analysis:

- We find the worst-case number of primitive operations executed as a function of the input size
- We express this function with big-Oh notation

Example:

- We determine that algorithm arrayMax executes at most 8n − 2 primitive operations
- We say that algorithm arrayMax “runs in O(n) time”

Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.

## Asymptotic Algorithm Analysis (Cont’d)

Common growth rates, from slowest to fastest:

$$1 < \log n < n < n \log n < n^2 < n^3 < 2^n$$

- f(n) grows slower than g(n), i.e. $f(n) \prec g(n)$: $\lim_{n \to \infty} f(n)/g(n) = 0$
- f(n) grows at the same rate as g(n): $\lim_{n \to \infty} f(n)/g(n) = c$, a nonzero finite constant
- f(n) grows faster than g(n): $\lim_{n \to \infty} f(n)/g(n) = \infty$

(From CS1, School of Informatics, UoE.)
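Evaluating each function at a sample n illustrates the ordering; a minimal check (n = 64 is an arbitrary choice, and the ordering only holds once n is large enough):

```python
import math

# Evaluate each function in 1 < log n < n < n log n < n^2 < n^3 < 2^n at n = 64.
n = 64
values = [1, math.log2(n), n, n * math.log2(n), n**2, n**3, 2**n]
assert values == sorted(values)   # strictly increasing at this n
```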

## Techniques for Algorithm Analysis: Step Count (not in book)

1. Count each statement as 1 unit:
   - Normal statement (a = b + c) → 1 unit
   - if P1 then P2 else P3, where P1 takes t1 time, P2 takes t2 and P3 takes t3 → t1 + min(t2, t3) ≤ t ≤ t1 + max(t2, t3)
   - for i = 1 to n → n units
   - while loop → consider the condition
2. Sum the unit counts over all statements

## Ex#1.1 Computing Prefix Averages

We further illustrate asymptotic analysis with two algorithms for prefix averages. The i-th prefix average of an array X is the average of the first (i + 1) elements of X:

$$A[i] = \frac{X[0] + X[1] + \cdots + X[i]}{i + 1}$$

Computing the array A of prefix averages of another array X has applications to financial analysis.

*(Chart: an array X and its prefix-average array A.)*

The following algorithm computes prefix averages in quadratic time by applying the definition.

    Algorithm prefixAverages1(X, n)
        Input array X of n integers
        Output array A of prefix averages of X      # operations
        A ← new array of n integers                 n
        for i ← 0 to n − 1 do                       n
            s ← X[0]                                n
            for j ← 1 to i do                       1 + 2 + … + (n − 1)
                s ← s + X[j]                        1 + 2 + … + (n − 1)
            A[i] ← s / (i + 1)                      n
        return A                                    1
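A direct Python translation of prefixAverages1, as a sketch:

```python
def prefix_averages1(X):
    """Quadratic-time prefix averages, translated from prefixAverages1."""
    n = len(X)
    A = [0.0] * n
    for i in range(n):
        s = X[0]
        for j in range(1, i + 1):   # the inner loop does i additions
            s += X[j]
        A[i] = s / (i + 1)
    return A

assert prefix_averages1([10, 20, 30, 40]) == [10.0, 15.0, 20.0, 25.0]
```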

## Arithmetic Progression

- The running time of prefixAverages1 is O(1 + 2 + … + n)
- The sum of the first n integers is n(n + 1)/2; there is a simple visual proof of this fact
- Thus, algorithm prefixAverages1 runs in O(n²) time

*(Chart: visual proof of the sum of the first n integers.)*

## Ex#1.2 Prefix Averages (Linear)

The following algorithm computes prefix averages in linear time by keeping a running sum.

    Algorithm prefixAverages2(X, n)
        Input array X of n integers
        Output array A of prefix averages of X      # operations
        A ← new array of n integers                 n
        s ← 0                                       1
        for i ← 0 to n − 1 do                       n
            s ← s + X[i]                            n
            A[i] ← s / (i + 1)                      n
        return A                                    1

Algorithm prefixAverages2 runs in O(n) time.
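A Python sketch of prefixAverages2:

```python
def prefix_averages2(X):
    """Linear-time prefix averages: keep a running sum s."""
    n = len(X)
    A = [0.0] * n
    s = 0
    for i in range(n):
        s += X[i]              # running sum of X[0..i]
        A[i] = s / (i + 1)
    return A

assert prefix_averages2([10, 20, 30, 40]) == [10.0, 15.0, 20.0, 25.0]
```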

## Ex#2

    while (n > 0) do
        n ← n / 2
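One way to get a feel for this loop is to count its iterations empirically; a sketch, assuming integer division (the helper `halving_steps` is ours):

```python
import math

def halving_steps(n):
    """Count iterations of: while n > 0: n = n // 2."""
    steps = 0
    while n > 0:
        n //= 2
        steps += 1
    return steps

# The count grows like log2(n): floor(log2(n)) + 1 iterations for n >= 1.
for n in (1, 2, 1000, 10**6):
    assert halving_steps(n) == math.floor(math.log2(n)) + 1
```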

## Ex#3

    for (i = 1; i <= n; i *= 2) {
        f ← f * f + 1
        j ← (int) f / 1000
        f ← f − j * 1000
    }

## Ex#4

    i ← 1
    while (i ≤ n)
        k ← 1
        while (k ≤ i)
            k ← k + 1
        i ← i + 1
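Counting the inner-loop iterations of the nested loops above empirically shows how the step count grows; a sketch (the helper `nested_steps` is ours):

```python
def nested_steps(n):
    """Count inner-loop iterations of the Ex#4 nested while loops."""
    steps = 0
    i = 1
    while i <= n:
        k = 1
        while k <= i:          # runs i times on the i-th outer pass
            k += 1
            steps += 1
        i += 1
    return steps

# The inner loop runs 1 + 2 + ... + n = n(n+1)/2 times in total.
for n in (1, 10, 100):
    assert nested_steps(n) == n * (n + 1) // 2
```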

## Ex#5

    i ← 1
    j ← n
    while (i ≤ j)
        i ← i + 3
        j ← j − 5

## Relatives of Big-Oh

- big-Omega: f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀
- big-Theta: f(n) is Θ(g(n)) if there are constants c′ > 0 and c′′ > 0 and an integer constant n₀ ≥ 1 such that c′·g(n) ≤ f(n) ≤ c′′·g(n) for n ≥ n₀
- little-oh: f(n) is o(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≤ c·g(n) for n ≥ n₀
- little-omega: f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n₀

## Intuition for Asymptotic Notation

- Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
- big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
- big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
- little-oh: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n)
- little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n)

## Example Uses of the Relatives of Big-Oh

- 5n² is Ω(n²): need a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀; let c = 5 and n₀ = 1
- 5n² is Ω(n): need a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀; let c = 1 and n₀ = 1
- 5n² is ω(n): need, for any constant c > 0, an integer constant n₀ ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n₀; we need 5n₀² ≥ c·n₀, and given c, the n₀ that satisfies this is n₀ ≥ c/5 ≥ 0
