
INTRODUCTION TO COMPLEXITY THEORY

The level or order of complexity is an estimate of the running time of an algorithm or

the difficulty of solving a problem as the value of the input n gets larger.

It is also used to measure other requirements such as the size of n in binary digits and

thus the space (memory) needed to store it.

The running time of an algorithm is taken as the number of bit operations or ‘steps’

needed to complete the algorithm.

The size of an integer input n is the number of bits needed to represent it in binary

notation (i.e. in base 2).

BIG-O NOTATION

Defn: We say that

f(n) = O(g(n))

if there exist a constant c > 0 and a positive integer n0 such that f(n) ≤ c·g(n) for all values of n ≥ n0.

This notation is used to indicate the size or complexity of a function.

We can think of f(n) as ‘less than’ g(n) as n → ∞ (that is, as n gets very, very big).

Strictly speaking, f(n) is less than c·g(n) for some constant c. But we are not usually interested in small values of n, and as n → ∞ the fixed constant term becomes insignificant (that is, g(n) is pretty much the same as c·g(n) when n is very, very large), and so we can informally think of f(n) as ‘less than’ the function g(n).

In this sense we often say that f(n) grows no faster than g(n) as n → ∞, and thus g(n) is like an upper bound on f(n).

We try to choose the function g(n) to indicate the most complex part of the function

f(n) to give us a good idea of how fast f(n) grows as the input n gets bigger.

Example: f(n) = 2n^3 + 5n^2 + 3.

Here f(n) = O(n^3), since the polynomial function f(n) is always less than, say, 7n^3 when n ≥ 2. This indicates that the most complex or dominant term in the polynomial f(n) is n^3.
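The claim can be spot-checked numerically; a minimal Python sketch, using the constants c = 7 and n0 = 2 from the example:

```python
def f(n):
    """The example polynomial f(n) = 2n^3 + 5n^2 + 3."""
    return 2 * n**3 + 5 * n**2 + 3

def g(n):
    """The dominant term n^3."""
    return n**3

# Witness f(n) = O(n^3) with c = 7 and n0 = 2 over a sample range.
assert all(f(n) <= 7 * g(n) for n in range(2, 10_000))
```

A finite check is of course not a proof, but here the inequality is easy to verify directly: 5n^2 + 3 ≤ 5n^3 for all n ≥ 2.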

Copyright © 2005 K Lally

• Size: We denote by length2(n) the number of binary digits in the binary representation of the integer n.

Example: length2(3) = 2; length2(4) = 3; length2(7) = 3; length2(8) = 4.

In general we have the following formula:

length2(n) = ⌊log2 n⌋ + 1 = ⌊ln n / ln 2⌋ + 1 ≤ (1/ln 2)·ln n + 1.

Now 1/ln 2 is just a constant, and so we see that

length2(n) = O(ln n).

Looking at a graph of the natural log function, we see that the number of bits needed to represent n in binary grows moderately (not too fast) as the input n gets bigger.
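In Python this size function can be sketched with the built-in int.bit_length(), which returns exactly ⌊log2 n⌋ + 1 for positive n:

```python
import math

def length2(n: int) -> int:
    """Number of binary digits in the binary representation of n (n >= 1)."""
    return n.bit_length()

# The worked examples from the text:
assert [length2(k) for k in (3, 4, 7, 8)] == [2, 3, 3, 4]

# Agreement with the formula floor(log2 n) + 1:
for n in range(1, 1024):
    assert length2(n) == math.floor(math.log2(n)) + 1
```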

Example: length2(n^r) = O(r ln n). Here both n and r are inputs, and we consider n, r → ∞.

Example: Computing 3! = 6; 4! = 24; 7! = 5040; 8! = 40320.

Therefore length2(3!) = 3; length2(4!) = 5; length2(7!) = 13; length2(8!) = 16.

Here we see that the number of bits needed to represent n! in binary grows very fast (faster than n itself) as the input n gets bigger.

In fact length2(n!) = O(n ln n) as n → ∞.
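This growth can be checked directly; a small sketch (length2 again implemented via bit_length, and using n! ≤ n^n, so log2(n!) ≤ n·log2 n):

```python
import math

def length2(n: int) -> int:
    """Number of binary digits of a positive integer n."""
    return n.bit_length()

# The factorial values from the text:
assert [length2(math.factorial(k)) for k in (3, 4, 7, 8)] == [3, 5, 13, 16]

for n in range(4, 200):
    bits = length2(math.factorial(n))
    assert bits <= n * math.log2(n) + 1   # upper bound from n! <= n^n
    assert bits > n                       # grows faster than n itself (n >= 4)
```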


• Time: The time complexity of an algorithm is an estimate of the number of bit

operations required to complete the algorithm.

Example: When adding two integers m and n, we first represent both in binary and then add bit by bit with possible carries.

The number of bit operations (add-and-possibly-carry) required is no more than the length2 of the longer number.

Therefore Time(add m + n) = O(max(ln m, ln n)).
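The bit-by-bit addition can be sketched as follows (a hypothetical helper, counting each add-and-possibly-carry as one step; at most one extra step is needed for a final carry):

```python
def add_bits(m: int, n: int):
    """Schoolbook binary addition; returns (sum, number of bit operations)."""
    result, carry, shift, ops = 0, 0, 0, 0
    while m or n or carry:
        s = (m & 1) + (n & 1) + carry        # one add-and-possibly-carry step
        result |= (s & 1) << shift
        carry = s >> 1
        m, n, shift, ops = m >> 1, n >> 1, shift + 1, ops + 1
    return result, ops

total, ops = add_bits(13, 10)
assert total == 23
# Steps needed: at most one more than the length2 of the longer number.
assert ops <= max((13).bit_length(), (10).bit_length()) + 1
```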

Example: Time(multiply m by n) = O(ln m ln n).

Example: Time(gcd(m, n)) = O(ln^2 m), where m > n > 0.
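The Euclidean algorithm itself is short; a sketch that also counts the division steps (by Lamé's theorem the number of steps is O(ln m), at most 5 times the number of decimal digits of the smaller argument):

```python
def gcd_steps(m: int, n: int):
    """Euclidean algorithm for gcd(m, n) with m > n > 0; returns (gcd, steps)."""
    steps = 0
    while n:
        m, n = n, m % n     # one remainder step; cost polynomial in ln m
        steps += 1
    return m, steps

g, steps = gcd_steps(1071, 462)
assert g == 21 and steps == 3
assert steps <= 5 * len(str(462))   # Lamé's bound on the step count
```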

Example: Time(compute n!) = O(n^2 ln^2 n).

COMPARATIVE ORDERS OF COMPLEXITY

When employing big-O notation, i.e. f(n) = O(g(n)), we usually want the function g(n) to be a simple-form function that indicates the most complex part of f(n), and which is not too much larger than f(n), so that we get a good idea where f(n) ranks in comparison to other functions and some standard levels of difficulty.

Let n be the input value, ε a small positive constant (that can be chosen arbitrarily small), and c a fixed constant greater than 1. That is, 0 < ε < 1 < c.

Then, for any k > 1,

1 < ln n < ln^k n < n^ε < n < n^c < n^(ln n) < c^n < n! < n^n < c^(c^n).
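The ordering can be spot-checked by comparing natural logarithms of each function at a single large n (a sketch: the chain is asymptotic, so n must be taken very large for these constants, and the final term c^(c^n) is omitted since even its logarithm overflows a float):

```python
import math

n = 2.4e17                      # large sample point; ln n is about 40
eps, c, k = 0.2, 2.0, 2         # 0 < eps < 1 < c, and k > 1
L = math.log(n)

ln_values = [
    0.0,                        # ln 1
    math.log(L),                # ln(ln n)
    k * math.log(L),            # ln(ln^k n)
    eps * L,                    # ln(n^eps)
    L,                          # ln n
    c * L,                      # ln(n^c)
    L * L,                      # ln(n^(ln n))
    n * math.log(c),            # ln(c^n)
    math.lgamma(n + 1),         # ln(n!)
    n * L,                      # ln(n^n)
]
# The logs come out strictly increasing, matching the chain above.
assert ln_values == sorted(ln_values)
```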


We plot some of these functions on a graph (not reproduced here).

It can be shown that

ln^k n < n^ε

as n → ∞, for any k > 1 (no matter how big) and any 0 < ε < 1 (no matter how

small). This comparison forms the basis for the distinction between polynomial-time

algorithms and exponential-time algorithms.

CLASSIFICATION OF ALGORITHMS

• Polynomial-Time Algorithm – Efficient as n → ∞.

An algorithm is said to be a polynomial-time algorithm if there exists a polynomial

p(x) = a0 + a1·x + a2·x^2 + · · · + ak·x^k

of degree k ≥ 1, such that given input n of size O(ln n), the algorithm delivers an answer in at most p(ln n) steps, that is, at most

p(ln n) = a0 + a1·ln n + a2·(ln n)^2 + · · · + ak·(ln n)^k

steps. Therefore

Time(algorithm) = O(ln^k n)

for some constant k ≥ 1, where as usual the highest power of the polynomial gives us an indication of the most complex part.

Notice here that the order of complexity is a power of the ‘size’ of the input n.

Example: Time(gcd(m, n)) = O(ln^2 m) where m > n > 0, and thus the Euclidean algorithm is a polynomial-time algorithm; only powers of ln m are involved.


• Exponential-Time Algorithm – Inefficient/computationally infeasible as n → ∞.

An algorithm is said to be an exponential-time algorithm if there exists a constant

d > 0 such that the time complexity of the algorithm has the form

Time(algorithm) = O(n^d)

that is, the order of complexity is a power of the ‘value’ of the input n itself.

Since n^d = e^(ln n^d) = e^(d ln n), the time complexity can also be written as O(e^(d ln n)), that is, a power of the exponential constant e.

Example: Time(compute n!) = O(n^2 ln^2 n), and thus this is an exponential-time algorithm; powers of the input n are involved.

• Subexponential-Time Algorithm

An algorithm whose order of complexity lies in between polynomial-time O(ln^k n) and exponential-time O(n^d) = O(e^(d ln n)) is called a subexponential-time algorithm, and its complexity can be represented by the function

L(c, α) = O(e^(c (ln n)^α (ln ln n)^(1−α)))

where c is a positive constant, c > 0, and 0 < α < 1.

Recall e^(ln n) = n and ln e^n = n.

If α = 0: L(c, 0) = O(e^(c ln ln n)) = O(ln^c n), i.e. polynomial time.

If α = 1: L(c, 1) = O(e^(c ln n)) = O(n^c), i.e. exponential time.

When 0 < α < 1 we have subexponential-time algorithms.


Example: Most algorithms used to factor a large integer n have time complexity

Time(algorithm) = O(e^(c √(ln n ln ln n))) = O(e^(c (ln n)^(1/2) (ln ln n)^(1/2))),

and so are subexponential-time algorithms with complexity L(c, α) with α = 1/2.

Developments in factoring techniques often merely result in lowering the value of the constant c.
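The three complexity levels can be compared numerically at a cryptographic size, again via logarithms (a sketch with illustrative constants c = 1, k = 3, d = 1, at n = 2^512):

```python
import math

ln_n = 512 * math.log(2)        # ln n for n = 2^512, about 354.9

ln_poly   = 3 * math.log(ln_n)                  # ln(ln^3 n)
ln_subexp = math.sqrt(ln_n * math.log(ln_n))    # ln(L(1, 1/2))
ln_exp    = ln_n                                # ln(n^1)

# Subexponential sits strictly between polynomial and exponential.
assert ln_poly < ln_subexp < ln_exp
```

At this size the three logarithms are roughly 18, 46, and 355, which is why subexponential factoring algorithms are feasible at sizes where exponential ones are hopeless.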

PROBLEMS

A decision problem refers to a general question that requires a YES/NO answer,

e.g. Is n a composite number?

A search problem can be expressed in terms of a sequence of decision problems,

e.g. The search problem: What are the factors of n? can be broken down into a

sequence of decisions: Is 2 a factor? Is 3 a factor? etc.

An instance of a problem is a special subset or particular case of the problem,

e.g. What are the factors of 21?

e.g. Is an even number n composite?
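The factor-finding example above can be sketched as a sequence of decision queries (trial division; a toy illustration of reducing search to decision, not an efficient factoring method):

```python
def is_factor(d: int, n: int) -> bool:
    """Decision problem: is d a factor of n? (YES/NO)"""
    return n % d == 0

def proper_factors(n: int):
    """Search problem answered by a sequence of decision problems."""
    return [d for d in range(2, n) if is_factor(d, n)]

# The instance from the text: what are the factors of 21?
assert proper_factors(21) == [3, 7]
```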

COMPLEXITY CLASSES – Classification of decision problems

Decision problems are classified as follows (and this can be easily extended to include

search problems):

• P (Polynomial-Time) Class

A decision problem is in class P if it is solvable in polynomial time using a deterministic algorithm, that is, one that works the same way each time it is given the same input values.

Example: Finding the gcd of two integers.

• NP (Non-Deterministic Polynomial-Time) Class

A decision problem is in class NP if a YES answer to the problem (perhaps given to

you by some supernatural being or ‘Oracle’) can be verified as correct in polynomial

time.
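For example, a YES answer to the subset-sum problem can be verified in polynomial time when the Oracle also supplies a certificate, here taken to be the indices of the claimed subset (a sketch; the function name and certificate format are illustrative):

```python
def verify_subset_sum(nums, target, indices):
    """Check, in time polynomial in the input size, that the subset of
    nums selected by `indices` really sums to `target`."""
    if len(set(indices)) != len(indices):             # no index reused
        return False
    if not all(0 <= i < len(nums) for i in indices):  # indices in range
        return False
    return sum(nums[i] for i in indices) == target

assert verify_subset_sum([3, 9, 8, 4], 12, [0, 1]) is True   # 3 + 9 = 12
assert verify_subset_sum([3, 9, 8, 4], 12, [0, 2]) is False  # 3 + 8 = 11
```

Note that verifying a given certificate is easy even though *finding* one may not be; that gap is exactly what separates NP from P.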


If a problem is in NP but not in P, then the problem is not solvable in polynomial time using a deterministic algorithm, but it might still be solvable (with the answer correct to some degree of probability) using a probabilistic or randomised algorithm.

Example: Integer factorisation problem, subset-sum problem, discrete log problem,

3-colour problem, travelling salesman problem.

• NP-Complete Class

The NP-complete class is a subset of the NP class and contains all those decision problems that every other problem in NP can be reduced to in polynomial time.

Defn: One problem can be reduced to another, if the first can be solved as an immediate consequence (that is, an easy polynomial-time extension) of solving the second.

If we can solve an NP-complete problem efficiently then all other NP problems can

be solved efficiently, using as a subroutine the method of solving the NP-complete

problem.

NP-complete problems are therefore the ‘hardest’ problems in NP; every other problem can be no harder.

Example: Subset-sum problem, 3-colour problem, travelling salesman problem.

