Theory of Computation - Lecture 17 Time Complexity

Theory of Computation
Lecture 17: Time Complexity

Max Alekseyev

University of South Carolina

April 7, 2009
Running Time

Previously we were interested in whether a TM takes a finite or an infinite
number of steps to compute; depending on that, we classified TMs into
deciders and recognizers. From now on, we will be interested in a more
detailed running-time analysis of deciders (TMs that halt on every input).
Definition
The running time or time complexity (in the worst case) of a decider M
for some language is the function

f : N → N

where f(n) is the maximum number of steps that M uses on any input of
length n.
If f(n) is the running time of a TM M (or an algorithm), we say that
M runs in time f(n) and that M is an f(n) time TM.
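The definition "maximum over all inputs of length n" can be made concrete with a small simulation. The sketch below is not from the lecture: it uses a hypothetical toy decider (for strings with equally many 0s and 1s) and a simplified step model that charges one step per symbol read, rather than a faithful TM simulation.

```python
from itertools import product

def decider_steps(w):
    """Toy decider for {w in {0,1}* : w has equally many 0s and 1s}.
    Returns (accepted, steps), charging one step per symbol read
    (a simplified step model, not a real Turing machine)."""
    steps = 0
    zeros = ones = 0
    for c in w:
        steps += 1
        if c == '0':
            zeros += 1
        else:
            ones += 1
    return zeros == ones, steps

def worst_case_time(n):
    """f(n): the maximum number of steps over all inputs of length n."""
    return max(decider_steps(''.join(w))[1] for w in product('01', repeat=n))

print(worst_case_time(6))  # this decider reads every symbol, so f(n) = n
```

For this toy decider every input of length n costs exactly n steps, so the maximum is trivial; for a real single-tape TM the worst-case input of each length can be much more expensive than the best case.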
Asymptotic Analysis

Often the exact running time of an algorithm is a complex expression.
The exact running time may depend on minor, inessential details, such as
the precise definition of a TM (there are many different but equivalent
definitions of a TM).
In most cases, to determine whether an algorithm is “fast”, or to compare
it to other algorithms, it is enough to know just the highest-order term
of its time complexity.
As the size of the input grows, the contribution of the lower-order terms
to the running time becomes negligible compared to the highest-order term.
Finding the highest-order term of an algorithm’s time complexity is called
asymptotic analysis.
Highest-Order Term

Informally speaking, the highest-order term of a function f(n) is the term
that grows with n most quickly compared to the other terms. For example,
the function

f(n) = 6n^3 + 2n^2 + 20n + 45

has four terms, and the highest-order term is 6n^3.
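The dominance of the highest-order term is easy to see numerically: the ratio f(n)/n^3 approaches the leading coefficient 6 as n grows. A quick check (not part of the original slides):

```python
def f(n):
    return 6*n**3 + 2*n**2 + 20*n + 45

# The lower-order terms fade: f(n) / n^3 approaches the leading coefficient 6.
for n in (10, 100, 1000, 10000):
    print(n, f(n) / n**3)
```

Already at n = 10000 the ratio is within 0.001 of 6, even though at n = 10 the lower-order terms still contribute noticeably.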
Asymptotic Upper Bound

Definition
Let f and g be functions f, g : N → R+. We say that

f(n) = O(g(n))

if there exist positive constants c and n0 such that for every integer
n ≥ n0,

f(n) ≤ c · g(n).

When f(n) = O(g(n)), we say that g(n) is an (asymptotic) upper bound
for f(n).
Note that the big-O notation suppresses constant factors.
For f(n) = 6n^3 + 2n^2 + 20n + 45, we can say f(n) = O(6n^3) as well as
f(n) = O(n^3), due to the following bound (valid for all n ≥ 1):

f(n) = 6n^3 + 2n^2 + 20n + 45 ≤ 6n^3 + 2n^3 + 20n^3 + 45n^3 = 73n^3.
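The witnesses c = 73 and n0 = 1 from this bound can be spot-checked in a few lines (an illustration on a finite range, not a proof):

```python
def f(n):
    return 6*n**3 + 2*n**2 + 20*n + 45

# Check f(n) <= c * n^3 with the witnesses c = 73, n0 = 1 on a finite range.
c, n0 = 73, 1
assert all(f(n) <= c * n**3 for n in range(n0, 10_000))

# The constant matters for small n: with c = 6 the inequality already
# fails at n = 1, since f(1) = 73 > 6.
assert f(1) > 6 * 1**3
```

Any larger c (or a larger n0 with a smaller c) would also serve as a witness; the definition only asks that some such pair exists.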
Examples
Usually for logarithms we need to specify a base, such as log_2 n.
But since for any fixed base b > 1 there is the identity

log_2 n = log_b n / log_b 2,

we can write log_2 n = O(log_b n), or simply log_2 n = O(log n)
(the base of the logarithm does not matter).
Q: Why is the inequality b > 1 important?
Similarly, log_2 log_2 n = O(log log n), because for any base b > 1:

log_2 log_2 n = log_2 (log_b n / log_b 2) = log_2 log_b n − log_2 log_b 2
              = (log_b log_b n) / (log_b 2) − log_2 log_b 2 = O(log_b log_b n)

Likewise, (log_2 n)^2 = O((log n)^2) and n^(log_2 n) = n^(O(log n)) (prove!).
However, n^(log_2 n) ≠ O(n^(log_b n)) in general: a constant factor in an
exponent cannot be suppressed.
The upper bound is not uniquely defined: for example, we can say
(log_2 n)^2 = O((log n)^2), or (log_2 n)^2 = O(n), or
(log_2 n)^2 = O(n log n), or (log_2 n)^2 = O(n^100), whichever is the most
suitable for our needs.
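The change-of-base identity behind these examples is easy to verify numerically. The sketch below is not from the slides; the base b = 10 is an arbitrary choice:

```python
import math

# Change of base: log_2 n = log_b n / log_b 2, so log_2 n / log_b n equals
# the constant 1 / log_b 2 -- exactly the witness c in log_2 n = O(log_b n).
b = 10
c = 1 / math.log(2, b)
for n in (10, 1000, 10**6):
    assert math.isclose(math.log2(n), c * math.log(n, b))

# In an exponent, however, the base does matter:
# n^(log_2 n) grows strictly faster than n^(log_10 n).
n = 100
assert n ** math.log2(n) > n ** math.log10(n)
```

The first loop shows why the base disappears inside O(·): switching bases only changes a constant factor. The last line shows why the same constant factor cannot be ignored once it sits in an exponent.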
Polynomial vs. Exponential Bounds

We will distinguish algorithms with polynomial running time, of the form
n^c (which is the same as n^O(1) or 2^(O(log n))), from algorithms with
exponential running time, of the form 2^(n^δ) for some constant δ > 0.

Often we will deal with integer inputs and outputs written in binary form
(i.e., over the alphabet {0, 1}). The length n of the binary representation
of an integer m is bounded by

n ≤ 1 + log_2 m = O(log m).

Note that the running time is always viewed as a function of the length of
the input. In terms of an input m (if it is an integer), the running time
is polynomial if it is (log m)^O(1), but not if it is merely m^O(1)
(the latter may be exponential in log m).
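The length bound can be confirmed directly in Python (a quick check, not from the lecture):

```python
import math

# The binary representation of m has floor(log_2 m) + 1 digits,
# which is at most 1 + log_2 m = O(log m).
for m in (1, 2, 7, 8, 255, 10**9):
    n = len(bin(m)) - 2               # strip Python's '0b' prefix
    assert n == math.floor(math.log2(m)) + 1
    assert n <= 1 + math.log2(m)

# Consequence: a running time of m^O(1) steps is exponential in the input
# length n, since m can be as large as roughly 2^n for an n-bit input.
```

This is why, say, trial division up to m is not a polynomial-time primality test: it takes about m steps, i.e., about 2^n steps measured in the bit length n of m.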
Small-o Notation

Definition
Let f and g be functions f, g : N → R+. We say that

f(n) = o(g(n))

if

lim_{n→∞} f(n) / g(n) = 0.

In other words, f(n) = o(g(n)) means that for any real number c > 0 there
exists a number n0 such that for every integer n ≥ n0,

f(n) < c · g(n).

While the big-O notation says that one function is asymptotically not
greater than another, the small-o notation says that one function is
asymptotically smaller than another.
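For instance, n log n = o(n^2), since the ratio (n log n)/n^2 = (log n)/n tends to 0. A short numerical check of that ratio (not from the slides):

```python
import math

# n log n = o(n^2): the ratio (n log n) / n^2 = (log n) / n tends to 0,
# so it eventually drops below any fixed c > 0 from the definition.
ratios = [(n * math.log(n)) / n**2 for n in (10, 100, 10**4, 10**7)]
assert all(later < earlier for earlier, later in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-5   # already below c = 10^-5 at n = 10^7
```

Picking a smaller c simply forces a larger n0; the limit being 0 guarantees a suitable n0 exists for every c.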
Further Examples

√n = o(n)
n = o(n log log n)
n log log n = o(n log n)
n log n = o(n^2)
n^2 = o(n^3)
f(n) is never o(f(n))
Complexity Classes

Definition
Let t : N → R+ be a function. The time complexity class TIME(t(n)) is the
collection of all languages that are decidable by an O(t(n)) time TM.
The elements of the class TIME(n) are said to have linear time complexity.

TIME(log log n) ⊂ TIME(log n) ⊂ TIME(n) ⊂ TIME(n log log n) ⊂
⊂ TIME(n log n) ⊂ TIME(n^2) ⊂ TIME(n^2 log n)
Complexity in Other Models

Theorem
Let t(n) be a function such that t(n) ≥ n. Then every t(n) time multitape
TM has an equivalent O(t^2(n)) time single-tape TM.
(see Sipser Theorem 7.8, p. 254)
Theorem
Let t(n) be a function such that t(n) ≥ n. Then every t(n) time
nondeterministic single-tape TM has an equivalent 2^(O(t(n))) time
deterministic single-tape TM.
(see Sipser Theorem 7.11, p. 256)
