UNIT 1: Fundamentals of the Analysis of Algorithm Efficiency

(2.1) Analysis Framework:

  • Time efficiency indicates how fast the algorithm in question runs.

  • Space efficiency deals with the extra space the algorithm requires.

Measuring an Input's Size:

  • Involves defining a parameter, often denoted as 'n,' that represents the size of the algorithm's input.

  • This parameter serves as a basis for evaluating the algorithm's efficiency, as algorithms generally take longer to run on larger inputs.

  • The choice of 'n' depends on the nature of the problem: for sorting, it is usually the number of elements in the list; for matrix problems, the order of the matrix.

Units for measuring Running Time:

Although we could measure the running time in seconds, this is not a reliable approach, because the result depends on factors external to the algorithm itself: the speed of the particular computer, the quality of the compiler, the processor, and so on.

Hence, the standard approach is to identify the most important operation of the algorithm, called the basic operation (the operation contributing the most to the total running time), and count the number of times the basic operation is executed.

It is not hard to identify the basic operation. It is usually the most time-consuming operation in the algorithm's innermost loop.

Let Cop be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula

T(n) ≈ Cop × C(n)

Note that this formula gives only an approximation, since C(n) counts basic operations and carries no information about the time spent on non-basic operations.

Further, Cop cannot be pinned down precisely, since it depends on the particular computer, compiler, and processor used.
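The estimate above can be sketched in code. The double loop and the value chosen for Cop below are illustrative assumptions, not from the text; the point is only to show C(n) being counted and then scaled by a hypothetical per-operation cost:

```python
def count_basic_ops(n):
    """Count executions of the basic operation (here, the work done
    once per inner iteration of a simple double loop over n elements)."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1  # one basic operation per inner iteration
    return count

C_OP = 1e-9  # hypothetical cost of one basic operation, in seconds (assumption)

n = 1000
C_n = count_basic_ops(n)   # C(n) = n^2 for this loop
estimate = C_OP * C_n      # T(n) ≈ Cop × C(n)
print(C_n, estimate)
```

Changing the machine changes only C_OP; the count C(n) is a property of the algorithm alone, which is why the framework separates the two.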

Orders of Growth:

For small values of the parameter 'n', different algorithms show little difference in efficiency. Hence comes the order of growth: it describes how the operation count grows as the number of inputs increases. For a simple example, consider

x + y and x^y.

For x = y = 1,

x + y = 2, and

x^y = 1

But on taking x = y = 3,

x + y = 6, and

x^y = 27

Notice that for small values of x and y the two expressions differ little, but for larger values the difference becomes dramatic: x^y rapidly outgrows x + y. This is why orders of growth, rather than exact counts, play the major role in judging efficiency.
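A quick numeric sketch of the same comparison (the sample values of x = y are chosen only for illustration) makes the divergence concrete:

```python
# Compare x + y against x^y for x = y = v as v grows.
for v in (1, 2, 3, 5, 10):
    add = v + v   # x + y
    pw = v ** v   # x^y
    print(v, add, pw)
```

By v = 10 the sum is 20 while the power is 10,000,000,000: the order of growth, not the values at small inputs, reveals the true behaviour.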

Worst-Case, Best-Case, and Average-Case Efficiencies:

An algorithm's efficiency can differ for inputs of the same size, so it is graded for three cases: worst, best, and average.
Take the example of sequential search, which looks for a given key in an array.
The worst case for this algorithm is when the key is the last element of the array (or absent altogether), so all n elements must be checked, which is costly for large arrays.
The best case is when the first element itself is the key, requiring only one comparison.
The average case, assuming a successful search is equally likely to end at any position, requires about (n + 1)/2 comparisons, i.e. the key is found around the middle on average.

Here’s the algorithm for sequential search:
ALGORITHM SequentialSearch(A[0..n-1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n-1] and a search key K
//Output: The index of the first element of A that matches K
//        or -1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
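The pseudocode translates directly to Python. The sample array and keys below are illustrative additions, chosen to exercise the best, worst, and unsuccessful cases:

```python
def sequential_search(A, K):
    """Return the index of the first element of A equal to K, or -1."""
    i = 0
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1

A = [7, 3, 9, 1, 5]
print(sequential_search(A, 7))  # best case: key in first position -> 0
print(sequential_search(A, 5))  # worst successful case: last position -> 4
print(sequential_search(A, 8))  # unsuccessful search -> -1
```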

Asymptotic Notations and Basic Efficiency Classes:

To compare and rank the orders of growth, three notations, O (big oh), Ω (big omega), and
Θ (big theta) are used.

Informally, O(g(n)) is the set of all functions with a smaller or the same order of growth
as g(n). For example, n ∈ O(n²), since n has a smaller order of growth than n².

The second notation, Ω(g(n)), stands for the set of all functions with a larger
or the same order of growth as g(n). For example, n³ ∈ Ω(n²).

Finally, Θ(g(n)) is the set of all functions that have the same order of growth
as g(n). For example, every quadratic function an² + bn + c with a > 0 belongs to Θ(n²).
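A big-O claim can be checked numerically by exhibiting a witness pair (c, n₀). As a sketch (the function 100n + 5 and the constants c = 101, n₀ = 5 are illustrative choices), we verify 100n + 5 ≤ 101n for all n ≥ 5, which shows 100n + 5 ∈ O(n):

```python
# Witness check for 100n + 5 ∈ O(n): with c = 101 and n0 = 5,
# verify f(n) <= c*n over a range of n >= n0.
def f(n):
    return 100 * n + 5

c, n0 = 101, 5
ok = all(f(n) <= c * n for n in range(n0, 10_000))
print(ok)  # True: the inequality holds on the whole tested range
```

Note that the inequality fails below n₀ (f(4) = 405 > 404 = 101 · 4), which is exactly why the formal definitions only require the bound to hold from some n₀ onward.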

Formal definitions: