AD-Vorbereitung (Algorithms & Data Structures exam preparation)
MASTER THEOREM
If T(n) = a*T(ceil(n/b)) + O(n^d) for some a ≥ 1, b > 1, d ≥ 0,
then T(n) = { O(n^d)           if d > log_b(a),
            { O(n^d * log n)   if d = log_b(a),
            { O(n^(log_b(a)))  if d < log_b(a).
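Worked example (added for illustration): merge sort satisfies T(n) = 2T(n/2) + O(n), i.e. a = 2, b = 2, d = 1 = log_2(2), so the middle case applies and T(n) = O(n log n).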
ARRAY
get O(1), insert/delete at an arbitrary position O(n)
DOUBLY LINKED LIST
insert O(1), delete O(1) (given a pointer to the node), get O(n), deleteFromTo O(1), insertList O(1)
SINGLY LINKED LIST
insert O(1), delete O(n) (must find the predecessor), get O(n)
TREE
a-ary; complete (all levels full except possibly the last, so height Θ(log n)); full (every node has 0 or a children)
STACK
push O(1), pop O(1)
QUEUE
enqueue O(1), dequeue O(1)
HEAP A
max-heap-property: parent(v) ≥ v
MAX-HEAPIFY(Heap A, Index i) // O(log n)
  m = index of the largest key among A[i], A[LEFT(i)], A[RIGHT(i)]
  if m ≠ i
    swap(A[i], A[m])
    MAX-HEAPIFY(A, m)
DECREASE-KEY(Heap A, Index i, Key b) // O(log n)
  A[i] = b
  MAX-HEAPIFY(A, i)
INCREASE-KEY(Heap A, Index i, Key b) // O(log n)
  A[i] = b
  while i > 1 && A[parent(i)] < A[i]
    swap(A[i], A[parent(i)])
    i = parent(i)
EXTRACT-MAX(Heap A) // O(log n)
  max = A[1]
  A[1] = A[A.size--]
  MAX-HEAPIFY(A, 1)
  return max
BUILD-MAX-HEAP(Array C) // O(n)
  A := C viewed as a complete binary tree
  for i = floor(len(C)/2)...1   // leaves are already heaps
    MAX-HEAPIFY(A, i)
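A runnable Python sketch of these heap routines (my illustration; 1-based indexing via an unused slot 0, all names are mine):

  # children of node i live at 2i and 2i+1; index 0 is a dummy
  def max_heapify(a, i, size):
      l, r, m = 2 * i, 2 * i + 1, i
      if l <= size and a[l] > a[m]: m = l
      if r <= size and a[r] > a[m]: m = r
      if m != i:
          a[i], a[m] = a[m], a[i]   # push the violation down
          max_heapify(a, m, size)

  def build_max_heap(c):            # O(n) overall
      a = [None] + list(c)
      for i in range(len(c) // 2, 0, -1):
          max_heapify(a, i, len(c))
      return a

  def extract_max(a, size):         # returns the max; caller shrinks size by 1
      top, a[1] = a[1], a[size]
      max_heapify(a, 1, size - 1)
      return top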
HASHING
with chaining: linked list in each entry
with open addressing: insert into the first empty slot after the hash position; on removal, either mark the slot deleted or shift later entries of the probe sequence up
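A minimal chaining table in Python (an illustrative sketch; class and method names are mine):

  class ChainedHashTable:
      def __init__(self, m=16):
          self.buckets = [[] for _ in range(m)]   # one list ("chain") per slot
      def _chain(self, key):
          return self.buckets[hash(key) % len(self.buckets)]
      def insert(self, key, value):
          chain = self._chain(key)
          for i, (k, _) in enumerate(chain):
              if k == key:
                  chain[i] = (key, value)         # overwrite existing key
                  return
          chain.append((key, value))
      def get(self, key):                         # O(1 + chain length)
          for k, v in self._chain(key):
              if k == key:
                  return v
          raise KeyError(key)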
== SORTING ==
O(n^2)
SELECTION-SORT Repeatedly find the smallest remaining element and swap it to the front. Best case O(n^2). Unstable (the swap can reorder equal keys).
INSERTION-SORT Keep A[1...j-1] sorted, insert A[j] at the correct position. Best case O(n). Stable.
  for j = 2...n
    key = A[j]
    i = j - 1
    while i > 0 && A[i] > key
      A[i+1] = A[i]
      i = i - 1
    A[i+1] = key
BUBBLE-SORT Stable.
  for i = 1...n-1, j = n...i+1
    if A[j] < A[j-1]
      swap(A[j], A[j-1])
O(n log n)
MERGE-SORT Stable.
  if n > 1: return MERGE(MERGE-SORT(A[1...n/2]), MERGE-SORT(A[1+n/2...n]))
  else: return A
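The MERGE step is left implicit above; a Python sketch (my illustration):

  def merge(left, right):
      out, i, j = [], 0, 0
      while i < len(left) and j < len(right):
          if left[i] <= right[j]:        # <= keeps equal keys in order => stable
              out.append(left[i]); i += 1
          else:
              out.append(right[j]); j += 1
      return out + left[i:] + right[j:]  # one side is empty here

  def merge_sort(a):
      if len(a) <= 1:
          return a
      mid = len(a) // 2
      return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))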
HEAP-SORT Unstable.
  BUILD-MAX-HEAP(A)
  for i = 1...n
    A[n-i+1] = EXTRACT-MAX(A)
lower-bound theorem:
Any deterministic, comparison-based sorting algorithm requires Ω(n log n) comparisons in the worst case.
Proof sketch: the comparison tree has at least n! leaves, so its height is at least log_2(n!) = Ω(n log n).
RANDOMIZED ALGORITHMS
Las Vegas: always a correct result, random running time
Monte Carlo: fixed running time, result correct only with some probability
QUICK-SORT Recursive (out-of-place) version stable, in-situ version unstable. Worst case Θ(n^2); with a random pivot, expected O(n log n).
  if n ≤ 1: return A
  p = choosePivot(A)
  A− = {a in A | a < p}
  A= = {a in A | a = p}
  A+ = {a in A | a > p}
  return QUICKSORT(A−) + A= + QUICKSORT(A+)
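The same three-way partition in Python (my illustration; uses the random pivot assumed above):

  import random

  def quicksort(a):
      if len(a) <= 1:
          return a
      p = random.choice(a)                     # random pivot
      less    = [x for x in a if x < p]        # A−
      equal   = [x for x in a if x == p]       # A=
      greater = [x for x in a if x > p]        # A+
      return quicksort(less) + equal + quicksort(greater)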
COUNTING SORT
Assume that the sorting key is an integer between 1 and K.
K "buckets" with lists; append each element to its bucket, then concatenate the buckets. O(n + K). Stable.
RADIX SORT
Assume each value has at most d digits. Θ(d(n + k)).
  for i = 1...d
    sort A according to digit i with (stable) counting sort
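A Python sketch for non-negative integers (my illustration; digits taken least-significant first):

  def radix_sort(a, d, base=10):
      for i in range(d):
          buckets = [[] for _ in range(base)]      # stable counting sort by digit i
          for x in a:
              buckets[(x // base**i) % base].append(x)
          a = [x for b in buckets for x in b]
      return a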
BUCKET SORT
Uniformly distributed numbers in [0, 1). Make n buckets for [(i-1)/n, i/n); sort each bucket separately.
Average O(n). Worst case O(n log n) if each bucket is sorted with an O(n log n) sort (O(n^2) with insertion sort).
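A Python sketch (my illustration; values assumed in [0, 1), built-in sort used per bucket):

  def bucket_sort(a):
      n = len(a)
      buckets = [[] for _ in range(n)]
      for x in a:
          buckets[int(x * n)].append(x)            # bucket i holds [i/n, (i+1)/n)
      return [x for b in buckets for x in sorted(b)]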
== SEARCHING ==
BINARY SEARCH O(log n)
Given a sorted array A, find q; if q is not in A, either
1) return NotFound, or
2) return the index at which to insert it.
Compare with the middle element; if it is not q, recurse into the appropriate half.
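An iterative Python sketch returning both answers (my illustration):

  def binary_search(a, q):
      # returns (found, index); on a miss, index is where q would be inserted
      lo, hi = 0, len(a)
      while lo < hi:
          mid = (lo + hi) // 2
          if a[mid] < q:
              lo = mid + 1
          elif a[mid] > q:
              hi = mid
          else:
              return True, mid
      return False, lo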
SEARCH TREE
Binary Tree. left ≤ parent ≤ right.
search, min, max, insert, delete all O(h), h the height
inorder-tree-walk Θ(n)
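Minimal BST search and insert in Python (my illustration; no balancing, so O(h) can degrade to O(n)):

  class Node:
      def __init__(self, key):
          self.key, self.left, self.right = key, None, None

  def bst_insert(root, key):            # returns the (possibly new) root
      if root is None:
          return Node(key)
      if key <= root.key:
          root.left = bst_insert(root.left, key)
      else:
          root.right = bst_insert(root.right, key)
      return root

  def bst_search(root, key):            # returns the node or None
      while root is not None and root.key != key:
          root = root.left if key < root.key else root.right
      return root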
BALANCED SEARCH TREE, AVL TREE
At every node, the height of the left and right subtree differs by at most 1. Height is O(log n).
To maintain balance: RROTATE, LROTATE O(1). On insert, fix imbalance O(log n).
RED-BLACK-TREE, SPLAY TREE
== GRAPH ALGORITHMS ==
WALKING
DFS Jump to unvisited neighbouring vertices, marking them as you go; if there are none, backtrack. Record d(u) (discovery time) and f(u) (finishing time). O(|V|+|E|) on adjacency list, O(|V|^2) on matrix.
DFS(G)
  for all u in V: u.color = white
  for all u in V: if u is white: DFS-VISIT(G, u)
DFS-VISIT(G, u)
  u.color = grey // discovery: set d(u)
  for all v in N(u): if v is white: DFS-VISIT(G, v)
  u.color = black // finish: set f(u)
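A Python version that records the two timestamps (my illustration; recursion depth limits apply to large graphs):

  def dfs(graph):
      # graph: dict mapping each vertex to an iterable of neighbours
      color = {u: 'white' for u in graph}
      d, f, time = {}, {}, [0]
      def visit(u):
          time[0] += 1; d[u] = time[0]; color[u] = 'grey'    # discovery
          for v in graph[u]:
              if color[v] == 'white':
                  visit(v)
          time[0] += 1; f[u] = time[0]; color[u] = 'black'   # finish
      for u in graph:
          if color[u] == 'white':
              visit(u)
      return d, f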
FIND STRONGLY CONNECTED COMPONENTS
Compute finishing times f(u) with DFS.
Call DFS(G^T), where G^T is G with every edge reversed, considering the vertices in the main loop in decreasing f(u).
Each DFS-VISIT tree of this second pass is one strongly connected component.
O(|V| + |E|) on list.
TOPOLOGICAL SORT
Run DFS; if it finds a back edge, the graph is cyclic and has no topological sort; otherwise order the vertices by decreasing f.
BFS(G)
  for all u in V: u.color = white
  for all u in V: if u is white: BFS-VISIT(G, u)
BFS-VISIT(G, u)
  u.color = grey
  Q = Queue([u])
  while Q not empty
    s = DEQUEUE(Q)
    for all v in N(s)
      if v is white:
        v.color = grey
        ENQUEUE(Q, v)
    s.color = black
O(|E| + |V|) on list, O(|V|^2) on matrix.
SHORTEST PATH PROBLEM
RELAX(u,v)
  if v.d > u.d + w(u,v):
    v.d = u.d + w(u,v)
    v.p = u
SINGLE SOURCE
BELLMAN-FORD(G,s) // O(|V|·|E|)
  InitSS(G,s)   // s.d = 0, every other v.d = ∞
  for i = 1...|V|-1
    for all uv in E: RELAX(u,v)
  for all uv in E: if v.d > u.d + w(u,v): return false   // negative cycle reachable from s
  return true
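In Python (my illustration; edges given as (u, v, w) triples):

  def bellman_ford(vertices, edges, s):
      INF = float('inf')
      dist = {v: INF for v in vertices}
      dist[s] = 0
      for _ in range(len(vertices) - 1):
          for u, v, w in edges:
              if dist[u] + w < dist[v]:    # RELAX(u, v)
                  dist[v] = dist[u] + w
      for u, v, w in edges:
          if dist[u] + w < dist[v]:        # still relaxable => negative cycle
              return dist, False
      return dist, True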
DECENTRALIZED-BELLMAN-FORD Each vertex repeatedly relaxes using only its neighbours' current distances (the scheme behind distance-vector routing).
DIJKSTRA for non-negative weights.
Maintain a set S of explored vertices; repeatedly add the closest unexplored vertex to S and relax its outgoing edges.
Naive O(|V|·|E|); with a min-priority queue (heap) O((|V| + |E|) log |V|).
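A heap-based sketch with Python's heapq (my illustration; stale queue entries are skipped instead of using DECREASE-KEY):

  import heapq

  def dijkstra(graph, s):
      # graph: dict u -> list of (v, w) with w >= 0
      dist = {s: 0}
      pq = [(0, s)]                              # (distance, vertex) min-heap
      while pq:
          d, u = heapq.heappop(pq)
          if d > dist.get(u, float('inf')):
              continue                           # outdated entry
          for v, w in graph.get(u, ()):
              nd = d + w
              if nd < dist.get(v, float('inf')):
                  dist[v] = nd
                  heapq.heappush(pq, (nd, v))
      return dist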
ALL PAIRS
FLOYD-WARSHALL(n×n-Matrix W) // O(|V|^3)
  D_0 = W
  for k = 1..n
    D_k = new n×n matrix
    for s = 1..n
      for t = 1..n
        D_k[s,t] = min(D_{k-1}[s,t], D_{k-1}[s,k] + D_{k-1}[k,t])
  return D_n
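In Python (my illustration; updating one matrix in place is safe because row and column k do not change in round k):

  def floyd_warshall(w):
      # w: n×n list of lists, float('inf') where there is no edge, 0 on the diagonal
      n = len(w)
      d = [row[:] for row in w]                  # D_0 = W
      for k in range(n):
          for s in range(n):
              for t in range(n):
                  if d[s][k] + d[k][t] < d[s][t]:
                      d[s][t] = d[s][k] + d[k][t]
      return d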
POINT-TO-POINT
BIDIRECTIONAL DIJKSTRA Alternate two Dijkstra searches, from s and from t; when they meet, combine the two half-paths and return.
A* Assuming we can estimate the remaining distance to t, use the estimate to bias Dijkstra's choice of the next vertex.
MINIMAL SPANNING TREE, undirected
KRUSKAL Repeatedly add the lightest remaining edge that does not produce a cycle.
PRIM Grow a tree from a start vertex as in Dijkstra, but always add the lightest edge leaving the tree.
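Kruskal with a union-find in Python (my illustration; the union-find detects cycles):

  def kruskal(n, edges):
      # vertices 0..n-1; edges: list of (w, u, v); returns MST edge list
      parent = list(range(n))
      def find(x):                               # path-compressing find
          while parent[x] != x:
              parent[x] = parent[parent[x]]
              x = parent[x]
          return x
      mst = []
      for w, u, v in sorted(edges):              # lightest edge first
          ru, rv = find(u), find(v)
          if ru != rv:                           # joins two components: no cycle
              parent[ru] = rv
              mst.append((u, v, w))
      return mst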
== GENERIC ALGORITHMIC TOOLBOX ==
GREEDY Make the locally optimal choice at each step (e.g. Kruskal).
LOCAL SEARCH Start with an initial solution, explore similar solutions.
Simulated annealing: allow the algorithm to accept a worse solution with a probability that decreases over time.
DYNAMIC PROGRAMMING e.g. Floyd-Warshall. Break the problem into overlapping subproblems, solve each once, and combine the stored solutions.
INTELLIGENT EXHAUSTIVE SEARCH
Backtracking. Organize the search space as a tree. Whenever a partial solution is invalid, prune that branch.
Branch and bound. Is the branch “feasible”? Can it contain solutions that are better than what we have so far?
APPROXIMATION Settle for a provably near-optimal solution when computing the exact optimum is too expensive.