Algorithm Analysis

Binary search appears more efficient than linear search when the array is sorted. The efficiency (time complexity) of an algorithm can be measured by its running time as a function of the input size; such functions are called growth functions. Here the running time is the number of basic steps the algorithm performs. For example, given an array of length \(n\), the linear search algorithm above takes about \(3n + 2\) basic steps to finish.
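
The linear search code referred to above is not reproduced in this section; the sketch below is one possible Python version, with comments indicating roughly which basic steps contribute to the \(3n + 2\) estimate (the exact constants depend on how steps are tallied).

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    i = 0                        # initialization: a couple of steps
    while i < len(arr):          # each iteration: bounds check,
        if arr[i] == target:     #   element comparison,
            return i
        i += 1                   #   and index increment -- about 3 steps
    return -1                    # final return
```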

Given an algorithm, we write \(t(n)\) for its growth function on inputs of length \(n\). To compare different growth functions meaningfully, algorithm analysts use big-Oh notation to pick out the dominant term of a function while ignoring constant factors and lower-order terms.

Growth Function | Order | Label
\(t(n) = 17\) | \(O(1)\) | constant
\(t(n) = 3 \log n\) | \(O(\log n)\) | logarithmic
\(t(n) = 2n + 5\) | \(O(n)\) | linear
\(t(n) = 3n^2 + 5n + 2\) | \(O(n^2)\) | quadratic
\(t(n) = 2^n + 18n^2 + 3n\) | \(O(2^n)\) | exponential

The order of growth for linear search is \(O(n)\), and for binary search it is \(O(\log n)\). For binary search on a sorted array of length \(n\), each iteration of the loop cuts the remaining portion of the array roughly in half, until its length becomes 1. Let \(t\) be the number of iterations of the loop; then \(n \cdot (1/2)^t = 1\), which implies \(t = \log_2 n\). Each iteration performs a constant number of operations, so the total work is \(O(\log n)\).
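
A minimal Python sketch of such a binary search is shown below (the earlier code in this text may differ in details); it assumes a sorted list and halves the search range on every iteration.

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                  # runs about log2(n) times
        mid = (lo + hi) // 2         # constant work per iteration
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1             # discard the left half
        else:
            hi = mid - 1             # discard the right half
    return -1
```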

The orders of growth for the following problems are as follows (a code sketch of the quadratic case appears after the list):

  • Finding the minimum value in an array: O(n)
  • Finding the minimum value in a sorted array: O(1)
  • Computing the average of the values in an array: O(n)
  • Comparing each pair of elements in an array: \(O(n^2)\)
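
As an illustration of the quadratic case, a hypothetical function that compares each pair of elements (here, to count equal pairs) might look like the sketch below; the nested loops make the \(O(n^2)\) growth visible.

```python
def count_duplicate_pairs(arr):
    """Count pairs (i, j) with i < j whose elements are equal.

    The nested loops examine every pair exactly once, about n*(n-1)/2
    comparisons in total, which is O(n^2).
    """
    n = len(arr)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                count += 1
    return count
```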

The differences between orders of growth become dramatic as the input length grows. Suppose the machine executes 10,000 operations per second. Then the following table shows how the running time grows with the input length.

Growth rate | n = 10 | n = 100 | n = 1,000
\(\log n\) | 0.0003 sec | 0.0007 sec | 0.0010 sec
\(n\) | 0.0010 sec | 0.0100 sec | 0.1000 sec
\(n^2\) | 0.0100 sec | 1.0000 sec | 100.00 sec
\(n^4\) | 1.0000 sec | 2.7778 hrs | 3.1710 yrs
\(2^n\) | 0.1024 sec | \(4 \times 10^{16}\) cen | \(1 \times 10^{166}\) cen
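
The entries for the first four rows can be reproduced with a short script along the following lines; the 10,000 operations-per-second figure is the assumption stated above, and values that the table reports in hours or years are printed here in raw seconds. The \(2^n\) row is omitted because its larger values must be converted to centuries to be readable.

```python
import math

OPS_PER_SECOND = 10_000   # machine speed assumed in the table above

for n in (10, 100, 1_000):
    print(f"n = {n}")
    print(f"  log n: {math.log2(n) / OPS_PER_SECOND:.4f} sec")
    print(f"  n    : {n / OPS_PER_SECOND:.4f} sec")
    print(f"  n^2  : {n ** 2 / OPS_PER_SECOND:.4f} sec")
    print(f"  n^4  : {n ** 4 / OPS_PER_SECOND:.4f} sec")
```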

The exact definition of the big-Oh notation is as follows. Let \(f(n)\) and \(g(n)\) be two functions. One writes \(f(n) = O(g(n))\) if and only if there is a positive constant \(k\) such that for all sufficiently large values of \(n\), we have \(f(n) \le k \cdot g(n)\). For example, we write \(3n^2 + 5n + 2 = O(n^2)\) since for \(k = 4\) and for large enough \(n\) (say, \(n > 6\)), we have \(3n^2 + 5n + 2 \le 4n^2\).
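
One way to check the claimed bound: for \(n \ge 6\) we have \(2 \le n\) and \(6n \le n^2\), so \(3n^2 + 5n + 2 \le 3n^2 + 5n + n = 3n^2 + 6n \le 3n^2 + n^2 = 4n^2\).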
