@Sahu-Ayush
Last active July 12, 2021 16:16

Computational complexity is a field of computer science that analyzes algorithms based on the amount of resources required to run them. The amount of required resources varies with the input size, so complexity is generally expressed as a function of n, where n is the size of the input.

Space complexity:

Space complexity is the amount of memory space required to solve a problem, expressed as a function of the input size.

Time Complexity:

Time complexity describes the amount of time it takes to run an algorithm. It is commonly estimated by counting the number of elementary operations performed by the algorithm, assuming that each elementary operation takes a fixed amount of time.

When analyzing the time complexity of an algorithm, we may consider three cases: best-case, average-case, and worst-case. Let’s look at what each one means.

Suppose we have the following unsorted list [1, 5, 3, 9, 2, 4, 6, 7, 8] and we need to find the index of a value in this list using linear search.

  • Best-case: This is the complexity of solving the problem for the best input. In our example, the best case would be to search for the value 1. Since this is the first value of the list, it would be found in the first iteration.

  • Average-case: This is the average complexity of solving the problem, defined over a distribution of the values in the input data. Based on our sample, the average case would be searching for a value near the “middle” of the list, for example, the value 2.

  • Worst-case: This is the complexity of solving the problem for the worst input of size n. In our example, the worst-case would be to search for the value 8, which is the last element from the list.

Usually, when describing the time complexity of an algorithm, we are talking about the worst-case.
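The three cases above can be sketched with a small linear search over the sample list (a minimal illustration; `linear_search` is an illustrative name, not from the original gist):

```python
def linear_search(values, target):
    """Return the index of target in values, or -1 if absent.

    Scans left to right, so the cost depends on where target sits.
    """
    for i, value in enumerate(values):
        if value == target:
            return i
    return -1

data = [1, 5, 3, 9, 2, 4, 6, 7, 8]
best = linear_search(data, 1)   # best case: found on the first comparison
worst = linear_search(data, 8)  # worst case: found on the last comparison
```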

Asymptotic Notations

Asymptotic notations are mathematical notations used to describe the running time of an algorithm as the input size grows towards a particular or limiting value.

There are mainly three asymptotic notations: Theta notation, Omega notation, and Big-O notation.

Theta Notation (Θ-notation)

Theta notation bounds a function from above and below. Since it represents both an upper and a lower bound on the running time, it describes a tight bound on an algorithm's complexity; it is often used when stating the average-case complexity of an algorithm.

[Figure: Theta notation]

Big-O Notation (O-notation)

Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.

[Figure: Big-O notation]

Omega Notation (Ω-notation)

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best case complexity of an algorithm.

[Figure: Omega notation]
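For completeness, the standard formal definitions behind these three notations are (a reference sketch in conventional asymptotic notation):

```latex
O(g(n))      = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0 \,\}
\Omega(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0 \,\}
\Theta(g(n)) = O(g(n)) \cap \Omega(g(n))
```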

Time Complexities Cheat Sheet

https://www.bigocheatsheet.com/

Some Common Time Complexities

Let’s see some common time complexities described in the Big-O notation.

Constant Time: O(1)

An algorithm is said to have a constant time when it is not dependent on the input data (n). No matter the size of the input data, the running time will always be the same.

For example:

  • Accessing an element of an array by index
  • Push and pop operations on a fixed-size stack
  • Arithmetic operations
  • Comparisons
if a > b:
	statement
else:
	statement
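The listed O(1) operations look like this in practice (a minimal sketch using a Python list as a stack):

```python
stack = []
stack.append(42)          # push onto a stack: O(1)
top = stack.pop()         # pop from a stack: O(1)
first = [10, 20, 30][0]   # index into an array: O(1)
total = 3 + 4             # arithmetic operation: O(1)
```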

Logarithmic Time: O(log N)

An algorithm is said to have a logarithmic time complexity when it reduces the size of the input data in each step (it doesn’t need to look at all values of the input data).

For example Binary Search

while low <= high:
  mid = (low + high) // 2
  if target < values[mid]:
    high = mid - 1
  elif target > values[mid]:
    low = mid + 1
  else:
    break
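Wrapped in a function, this loop becomes a complete binary search (a sketch assuming the input list is already sorted):

```python
def binary_search(sorted_values, target):
    """Return the index of target in sorted_values, or -1 if absent."""
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2      # integer division keeps mid a valid index
        if target < sorted_values[mid]:
            high = mid - 1           # discard the upper half
        elif target > sorted_values[mid]:
            low = mid + 1            # discard the lower half
        else:
            return mid
    return -1
```

Each iteration halves the search range, so at most about log2(N) iterations are needed.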

Linear Time: O(N)

An algorithm is said to have a linear time complexity when the running time increases at most linearly with the size of the input data.

For example:

  • Search for an element in a linear array
  • Traverse a linear array
  • Find the maximum or minimum value in a linear array

Suppose we have to search for a value in an array:
for i in range(N):
     statement
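For instance, finding the maximum value from the bullet list above is a single O(N) pass (a minimal sketch):

```python
def find_max(values):
    """Return the largest element, visiting each element exactly once: O(N)."""
    largest = values[0]
    for value in values[1:]:
        if value > largest:
            largest = value
    return largest
```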

Linear Logarithmic Time: O(N log N)

An algorithm is said to have quasilinear time complexity when it performs a logarithmic-time operation for each element of the input data.

For example, Quick Sort (O(N log N) on average; its worst case is O(N^2)):

def quicksort(lst, left, right):
  if left < right:  # base case: sublists of length 0 or 1 are already sorted
    pivot = partition(lst, left, right)  # partition helper assumed
    quicksort(lst, left, pivot - 1)
    quicksort(lst, pivot + 1, right)
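The snippet above leaves `partition` undefined; here is a runnable sketch with a Lomuto-style partition filled in (an assumption, since the original does not show its partition scheme):

```python
def partition(lst, left, right):
    """Lomuto partition: move lst[right] to its sorted position and return it."""
    pivot = lst[right]
    i = left
    for j in range(left, right):
        if lst[j] <= pivot:
            lst[i], lst[j] = lst[j], lst[i]
            i += 1
    lst[i], lst[right] = lst[right], lst[i]
    return i

def quicksort(lst, left, right):
    if left < right:  # sublists of length 0 or 1 are already sorted
        p = partition(lst, left, right)
        quicksort(lst, left, p - 1)
        quicksort(lst, p + 1, right)

data = [9, 2, 7, 1, 5]
quicksort(data, 0, len(data) - 1)
```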

Quadratic Time: O(N^2)

An algorithm is said to have a quadratic time complexity when it needs to perform a linear-time operation for each value in the input data.

For example:

  • Insertion sort
  • Bubble sort
  • Selection Sort
for i in range(N):
	for j in range(N):
		statement
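Bubble sort, from the list above, follows exactly this nested-loop pattern (a minimal sketch):

```python
def bubble_sort(values):
    """Repeatedly swap adjacent out-of-order pairs: O(N^2) comparisons."""
    n = len(values)
    for i in range(n):
        for j in range(n - 1 - i):  # the last i elements are already in place
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
    return values
```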

Cubic Time: O(N^3)

An algorithm is said to run in cubic time when its running time is proportional to the cube of N, as with three nested loops over the input. When N doubles, the running time increases roughly eightfold (2 × 2 × 2).

for i in range(N):
	for j in range(N):
		for k in range(N):
			statement
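A classic example that matches the triple loop above is naive matrix multiplication (a minimal sketch for square N × N matrices):

```python
def matmul(a, b):
    """Multiply two N x N matrices with three nested loops: O(N^3)."""
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result
```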

Exponential Time: O(2^N)

An algorithm is said to have an exponential time complexity when the growth doubles with each addition to the input data set.

For example, brute-force algorithms and the naive recursive Fibonacci:

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
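To see the exponential blow-up, we can count the recursive calls (an illustrative instrumented variant; `fibonacci_calls` and the counter are not part of the original):

```python
def fibonacci_calls(n, counter):
    """Naive Fibonacci that also tallies how many calls it makes."""
    counter[0] += 1
    if n <= 1:
        return n
    return fibonacci_calls(n - 1, counter) + fibonacci_calls(n - 2, counter)

counter = [0]
value = fibonacci_calls(10, counter)  # the call count grows roughly like 2^n
```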

Factorial Time: O(N!)

An algorithm is said to have a factorial time complexity when its running time grows factorially with the size of the input data.

Example: Travelling Salesman Problem (TSP) by brute force, Heap's algorithm for generating permutations

def heap_permutation(data, n):
    # Heap's algorithm: prints all n! permutations of data[:n]
    if n == 1:
        print(data)
        return

    for i in range(n):
        heap_permutation(data, n - 1)
        if n % 2 == 0:
            data[i], data[n - 1] = data[n - 1], data[i]
        else:
            data[0], data[n - 1] = data[n - 1], data[0]
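To confirm the factorial growth, here is a variant of the routine above that collects the permutations instead of printing them (a sketch; `heap_permutations` is an illustrative wrapper name):

```python
def heap_permutations(data):
    """Return every permutation of data, using the same swap scheme as above."""
    results = []

    def generate(n):
        if n == 1:
            results.append(list(data))  # record a copy of the current order
            return
        for i in range(n):
            generate(n - 1)
            if n % 2 == 0:
                data[i], data[n - 1] = data[n - 1], data[i]
            else:
                data[0], data[n - 1] = data[n - 1], data[0]

    generate(len(data))
    return results

perms = heap_permutations([1, 2, 3])  # 3! = 6 permutations
```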