Computational complexity is a field of computer science that analyzes algorithms based on the amount of resources required to run them. The amount of required resources varies with the input size, so complexity is generally expressed as a function of n, where n is the size of the input.
The space complexity is the amount of memory space required to solve a problem, expressed as a function of the input size.
The time complexity is the computational complexity that describes the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform.
When analyzing the time complexity of an algorithm we may find three cases: best-case, average-case, and worst-case. Let’s understand what each of them means.
Suppose we have the following unsorted list [1, 5, 3, 9, 2, 4, 6, 7, 8] and we need to find the index of a value in this list using linear search.
- Best-case: This is the complexity of solving the problem for the best input. In our example, the best case would be to search for the value 1. Since this is the first value in the list, it would be found in the first iteration.
- Average-case: This is the average complexity over all inputs of size n, defined with respect to the distribution of the values in the input data. Based on our sample, we could say that the average case would be searching for a value in the “middle” of the list, for example, the value 2.
- Worst-case: This is the complexity of solving the problem for the worst input of size n. In our example, the worst case would be to search for the value 8, which is the last element in the list.
Usually, when describing the time complexity of an algorithm, we are talking about the worst-case.
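The linear search from the example above can be sketched in Python (the function name `linear_search` is ours, not from the text):

```python
def linear_search(data, target):
    # Scan the list from the front until the target is found.
    for index, value in enumerate(data):
        if value == target:
            return index
    return -1  # target is not in the list

data = [1, 5, 3, 9, 2, 4, 6, 7, 8]
linear_search(data, 1)  # best case: found on the first iteration (index 0)
linear_search(data, 8)  # worst case: every element is inspected (index 8)
```

Searching for 1 stops after one comparison, while searching for 8 touches all nine elements, which is exactly the best-case/worst-case gap described above.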
Asymptotic notations are the mathematical notations used to describe the running time of an algorithm as the input grows towards a particular or limiting value.
There are mainly three asymptotic notations: Theta notation, Omega notation, and Big-O notation.
Theta notation bounds a function from both above and below. Since it represents the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.
A useful visual reference for these notations and common complexities: https://www.bigocheatsheet.com/
Let’s see some common time complexities described in the Big-O notation.
An algorithm is said to have a constant time when it is not dependent on the input data (n). No matter the size of the input data, the running time will always be the same.
For example:
- Accessing an element from an array
- Push and pop operations on a fixed-size stack
- Arithmetic operations
- Comparisons
if a > b:
    statement
else:
    statement
An algorithm is said to have a logarithmic time complexity when it reduces the size of the input data in each step (it doesn’t need to look at all values of the input data).
For example, binary search:
def binary_search(data, target):
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2  # integer division
        if target < data[mid]:
            high = mid - 1
        elif target > data[mid]:
            low = mid + 1
        else:
            return mid  # found at index mid
    return -1  # not found
An algorithm is said to have a linear time complexity when the running time increases at most linearly with the size of the input data.
For example:
- Searching for an element in an array
- Traversing an array
- Finding the maximum or minimum value in an array
Suppose we have to perform an operation on each of the N elements of an array:
for i in range(N):
    statement
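Finding the maximum, from the list above, is a typical concrete case (the function name `find_max` is ours): every element must be examined once.

```python
def find_max(data):
    # Examine every element exactly once: O(n).
    largest = data[0]
    for value in data[1:]:
        if value > largest:
            largest = value
    return largest

find_max([1, 5, 3, 9, 2, 4, 6, 7, 8])  # must visit all nine elements
```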
An algorithm is said to have a quasilinear time complexity, O(n log n), when it performs a logarithmic-time operation for each element of the input data.
For example, quicksort (O(n log n) on average):
def quicksort(data, left, right):
    if left < right:  # base case: sublists of size 0 or 1 are already sorted
        # partition is a helper that places the pivot and returns its index
        pivot = partition(data, left, right)
        quicksort(data, left, pivot - 1)
        quicksort(data, pivot + 1, right)
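A complete, runnable version might look like this sketch, using the Lomuto partition scheme (one of several common choices for the `partition` helper):

```python
def partition(data, left, right):
    # Lomuto partition: use the last element as the pivot,
    # move smaller elements to its left, and return its final index.
    pivot = data[right]
    i = left - 1
    for j in range(left, right):
        if data[j] <= pivot:
            i += 1
            data[i], data[j] = data[j], data[i]
    data[i + 1], data[right] = data[right], data[i + 1]
    return i + 1

def quicksort(data, left, right):
    if left < right:
        pivot_index = partition(data, left, right)
        quicksort(data, left, pivot_index - 1)
        quicksort(data, pivot_index + 1, right)

values = [1, 5, 3, 9, 2, 4, 6, 7, 8]
quicksort(values, 0, len(values) - 1)
# values is now sorted in place
```

Note that O(n log n) is quicksort's average case; with an unlucky pivot choice its worst case degrades to quadratic time.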
An algorithm is said to have a quadratic time complexity when it needs to perform a linear-time operation for each value in the input data.
For example:
- Insertion sort
- Bubble sort
- Selection Sort
for i in range(N):
    for j in range(N):
        statement
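Bubble sort, from the list above, follows exactly this nested-loop shape:

```python
def bubble_sort(data):
    # For each element, compare it against the remaining ones -> O(n^2).
    n = len(data)
    for i in range(n):
        for j in range(n - i - 1):
            if data[j] > data[j + 1]:
                # Swap adjacent out-of-order elements.
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

bubble_sort([5, 1, 4, 2, 8])
```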
An algorithm is said to run in cubic time when the running time is proportional to the cube of N, as with three nested loops. When N doubles, the running time increases eightfold.
for i in range(N):
    for j in range(N):
        for k in range(N):
            statement
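A classic concrete case of this triple-loop shape is naive matrix multiplication (our illustration, not from the text): for two N×N matrices, each of the N² result cells requires an N-element dot product.

```python
def matrix_multiply(a, b):
    # Three nested loops over N x N matrices -> O(n^3).
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result
```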
An algorithm is said to have an exponential time complexity when the growth doubles with each addition to the input data set.
For example, brute-force search and the naive recursive Fibonacci:
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
An algorithm is said to have a factorial time complexity when its running time grows factorially with the size of the input data.
Examples: the brute-force Travelling Salesman Problem (TSP), Heap's permutation algorithm
def heap_permutation(data, n):
    if n == 1:
        print(data)
        return
    for i in range(n):
        heap_permutation(data, n - 1)
        if n % 2 == 0:
            data[i], data[n - 1] = data[n - 1], data[i]
        else:
            data[0], data[n - 1] = data[n - 1], data[0]
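To see the factorial growth directly, here is a variant of the same recursion (our adaptation) that collects the permutations instead of printing them, so the count can be checked against n!:

```python
from math import factorial

def heap_permutations(data, n, results):
    # Same recursion as above, but appending to a results list.
    if n == 1:
        results.append(list(data))
        return
    for i in range(n):
        heap_permutations(data, n - 1, results)
        if n % 2 == 0:
            data[i], data[n - 1] = data[n - 1], data[i]
        else:
            data[0], data[n - 1] = data[n - 1], data[0]

results = []
heap_permutations([1, 2, 3], 3, results)
len(results)  # 3! = 6 permutations
```

For n = 10 this already produces 3,628,800 permutations, which is why factorial-time algorithms are only practical for very small inputs.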