@peanutpi
Created August 15, 2016 16:50
Algorithm Learning

Following are the algorithms and concepts one should know to solve CS puzzles:

  • Lists, Arrays, Stacks

  • Trees-

    • Binary,
    • k-ary,
    • AVL,
    • B and B+ tree
    • range trees, interval trees, and segment trees (related but distinct structures, though the names are often used interchangeably)
    • Additionally, many DP algorithms make use of a prefix sum array.
  • Sorting and Searching.

    • QuickSort,
    • RadixSort,
    • Binary search (useful in many optimization problems) and indexing
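
As a sketch of the binary search mentioned above, a minimal Python version (function and parameter names are mine, for illustration):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid - 1   # target lies in the lower half
    return -1
```

The same halving pattern applies to optimization problems: replace the equality test with a monotone predicate and search for the boundary.
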
  • Priority Queues- Heaps

  • Pattern Matching and Parsing- Regex

  • Hashing- Hash Functions

  • Disjoint Sets

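A disjoint-set (union-find) structure with union by rank and path halving can be sketched as follows (class name is mine):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point every other node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # already in the same set
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra       # attach the shorter tree under the taller
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

Both operations run in near-constant amortized time; this is the core of Kruskal's algorithm below.
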
  • Graph Algorithms and Data Structures

    • Problems like computing the minimum spanning tree,
    • matching, and the detection of cut vertices arise in a number of situations.
    • Google’s PageRank, for example, is a very concrete application of graph algorithms.
    • Breadth-first search (BFS),
    • Depth-first search (DFS),
    • Strongly connected components (SCC),
    • Dijkstra,
    • Floyd-Warshall,
    • Minimum spanning tree (MST),
    • Topological sort.
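
BFS on an unweighted graph, for example, yields shortest edge-count distances. A minimal sketch (adjacency-dict representation is my choice):

```python
from collections import deque

def bfs_distances(graph, source):
    """Shortest edge-count distances from source; graph maps
    node -> list of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:          # first visit is the shortest
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```

Swapping the queue for a stack (or recursion) gives DFS, the usual base for topological sort and SCC algorithms.
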
  • Dynamic Programming- Closely related to graph algorithms,

    • Dynamic programming exploits the fact that the optimal solution to a large problem can be expressed as an optimal combination of solutions to its sub-problems.
    • Standard dynamic programming problems such as Rod Cutting, Knapsack, Matrix chain multiplication etc.
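
Rod Cutting is a compact illustration of the bottom-up pattern; a sketch (prices indexed so that `prices[i]` is the price of a rod of length `i + 1`):

```python
def rod_cutting(prices, n):
    """Maximum revenue obtainable from a rod of length n."""
    best = [0] * (n + 1)
    for length in range(1, n + 1):
        for cut in range(1, length + 1):
            # Try every first cut; the rest is an already-solved subproblem.
            best[length] = max(best[length],
                               prices[cut - 1] + best[length - cut])
    return best[n]
```
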
  • State Space Search Algorithms-

    • Depth-limited search,
    • Breadth-first search,
    • Depth-first search
    • A*;
  • Artificial Intelligence;

    • Genetic algorithms
  • Dijkstra's - depending on the type of contest, you might see basic pathfinding problems, or you might see problems with non-obvious reductions to pathfinding problems.

    • Whenever you have a cost-minimization problem with a (reasonably small) finite number of states, an initial state, and a target state, you can look at it as a pathfinding problem.
    • Bellman-Ford is useful for pathfinding when edges may have negative costs. For example if you're navigating a maze with potions which boost health and hazards which lower it, Bellman-Ford would be a great approach.
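
For non-negative edge costs, Dijkstra with a binary heap is the usual workhorse; a sketch (graph representation is my choice):

```python
import heapq

def dijkstra(graph, source):
    """graph: node -> list of (neighbour, non-negative cost).
    Returns a dict of shortest costs reachable from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                    # stale heap entry, skip it
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The lazy-deletion trick (skipping stale entries) avoids needing a decrease-key operation, which Python's `heapq` does not provide.
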
  • Floyd-Warshall is useful for computing all-pairs shortest paths. It is sometimes used in problems where you don't need all pairs, because it's so easy to implement. It is slower than other pathfinding algorithms, though, so whether Floyd-Warshall is an option depends on the graph size.

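The whole algorithm is three nested loops; a sketch over an edge list (my representation):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest paths on nodes 0..n-1; edges: list of (u, v, w)."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):                      # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```
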
  • Edmonds-Karp for max flow/min cut problems. One common application is bipartite matching problems. For example, given N people, M food items, and a list of each person's food allergies, how many people can you feed?

  • The Hungarian algorithm for assignment problems. Similar to the above, but in these problems the edges have weights, and we're maximizing the total weight rather than just the number of matchings.

  • The sweep line "algorithm" (more of a general approach really) is useful for various geometric problems, like the nearest pair problem. Also useful for a variety of intersection-related problems, like finding intersecting line segments, or conflicting calendar events.

  • Graham scan or another convex hull algorithm (e.g. the monotone chain algorithm),

    • for problems such as building a minimal fence to enclose animals.
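
Andrew's monotone chain variant is short enough to sketch directly (points as `(x, y)` tuples, my convention):

```python
def convex_hull(points):
    """Return hull vertices in counter-clockwise order (monotone chain)."""
    points = sorted(set(points))
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        # Positive if o->a->b makes a counter-clockwise turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in points:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates
```

For the fence problem, the perimeter of the returned polygon is the minimal fence length.
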
  • Tarjan's algorithm for finding strongly connected components.

  • Prim's algorithm for minimum spanning trees.

  • Knuth-Morris-Pratt algorithm for string searching.

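The heart of KMP is the failure function, which lets the search resume without re-scanning the text; a sketch:

```python
def kmp_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    if not pattern:
        return 0
    # fail[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, falling back via fail[] on mismatch.
    k = 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1
```
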
  • Combinatorial game theory comes up now and then. I recommend Thomas Ferguson's introduction.

  • Tries are useful in a variety of text-related problems.


  • Sieve of Eratosthenes, or another prime number sieve

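The sieve fits in a few lines; a sketch (function name is mine):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples were crossed off earlier.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```
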
  • Either Kruskal's or Prim's algorithm

  • Some implementation of topological sorting, such as by using DFS

  • Coordinate compression

  • Edmonds-Karp, or another implementation of the Ford-Fulkerson method; or a preflow-push algorithm; or, if you are preparing an ACM codebook, Dinic's algorithm. (Note: Max flow is not allowed to appear on the IOI, but may nevertheless appear on national team-selection contests.)

  • Number theory: Modular arithmetic, Fermat’s theorem, Chinese remainder theorem (CRT), Euclidean algorithm for GCD, logarithmic exponentiation, Sieve of Eratosthenes, Euler’s totient function.

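Two of these are one-screen routines; a sketch of Euclid's GCD and logarithmic (binary) exponentiation:

```python
def gcd(a, b):
    """Euclid's algorithm."""
    while b:
        a, b = b, a % b
    return a

def mod_pow(base, exp, mod):
    """base**exp % mod in O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:                    # current low bit of the exponent
            result = result * base % mod
        base = base * base % mod       # square for the next bit
        exp >>= 1
    return result
```

`mod_pow` also gives a quick check of Fermat's little theorem: for prime p and a not divisible by p, `mod_pow(a, p - 1, p)` is 1.
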
  • Greedy: Standard problems such as Activity selection.

  • Search techniques: Binary search, Ternary search and Meet in the middle.

  • Data structures (Basic): Stacks, Queues, Trees and Heaps.

  • Data structures (Advanced): Trie, Segment trees, Fenwick tree or Binary indexed tree (BIT), Disjoint-set data structures.

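The Fenwick tree is the shortest of these to write; a sketch supporting point updates and prefix sums, both O(log n):

```python
class FenwickTree:
    """Binary indexed tree over indices 1..n."""
    def __init__(self, n):
        self.tree = [0] * (n + 1)      # 1-indexed; tree[0] unused

    def update(self, i, delta):
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i                # jump to the next responsible node

    def prefix_sum(self, i):
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & -i                # strip the lowest set bit
        return total
```

A range sum `[l, r]` is then `prefix_sum(r) - prefix_sum(l - 1)`.
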
  • Strings: Knuth-Morris-Pratt (KMP), Z algorithm, Suffix arrays/Suffix trees. These are somewhat more advanced algorithms.

  • Computational geometry: Graham-Scan for convex hull, Line sweep.

  • Game theory: Basic principles of Nim game, Grundy numbers, Sprague-Grundy theorem.

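For plain Nim, the Sprague-Grundy theory collapses to a single XOR; a sketch:

```python
from functools import reduce
from operator import xor

def first_player_wins(piles):
    """Nim: the player to move wins iff the XOR of pile sizes is non-zero."""
    return reduce(xor, piles, 0) != 0
```
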
  • In-place merge sort & heap sort with linear-time heap building.

  • Breadth first traversals of graphs & trees.

  • Dijkstra's algorithm w/ Fibonacci heap

  • Kruskal's algorithm w/ union find by rank

  • Deterministic quickselect (median of medians)

  • polymorphic type inference

  • 2D voronoi diagram generation

  • 2D convex hull generation

  • Knapsack & related DP problems

  • Ford–Fulkerson algorithm for max flow.

More important than the individual algorithms (which are just solved problems) are the techniques and concepts at their base.

There are broadly four ways to classify algorithms.

Classification by purpose

Each algorithm has a goal; for example, the purpose of Quicksort is to sort data in ascending or descending order. But the number of possible goals is unbounded, so algorithms are grouped by broad kinds of purpose.

  1. Classification by implementation

Recursive or iterative: A recursive algorithm is one that calls itself repeatedly until a certain condition is met, a method common in functional programming. The Towers of Hanoi problem, for example, is well understood in its recursive form. Every recursive version has an iterative equivalent, and vice versa.

Logical or procedural: An algorithm may be viewed as controlled logical deduction: a logic component expresses the axioms that may be used in the computation, and a control component determines how deduction is applied to those axioms.

Serial or parallel: Algorithms are usually discussed with the assumption that computers execute one instruction at a time. Such serial algorithms stand in contrast to parallel algorithms, which take advantage of architectures that process several instructions at once. Sorting algorithms, for example, can be parallelized efficiently.

Deterministic or non-deterministic: Deterministic algorithms solve the problem with a predefined process, whereas non-deterministic algorithms must guess the best solution at each step through the use of heuristics.

  2. Classification by design paradigm

Divide and conquer A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively), until the instances are small enough to solve easily. Merge sort is one such example. The binary search algorithm is an example of a variant called decrease and conquer, which reduces to a single identical subproblem and uses its solution to solve the bigger problem.

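Merge sort makes the pattern concrete: split, solve each half recursively, then combine. A sketch:

```python
def merge_sort(a):
    """Divide and conquer: split, sort each half recursively, merge."""
    if len(a) <= 1:
        return a                       # base case: trivially sorted
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever half remains
```
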
Dynamic programming The shortest path in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. When the optimal solution to a problem can be constructed from optimal solutions to subproblems, using dynamic programming avoids recomputing solutions that have already been computed.

  • The main difference from the divide and conquer approach is that subproblems are independent in divide and conquer, whereas subproblems overlap in dynamic programming.
  • Dynamic programming and memoization go together. The difference with straightforward recursion is in caching or memoization of recursive calls. Where subproblems are independent, this is useless. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.

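Memoization can be as simple as a cache decorator; a sketch using Fibonacci, the standard example of overlapping subproblems:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion is exponential; caching makes it linear."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(30)` recomputes the same subproblems millions of times; with it, each value is computed once.
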
The greedy method A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that solutions to the subproblems do not have to be known at each stage. Instead, a "greedy" choice is made of whatever looks best at the moment. The best-known example is Kruskal's algorithm for finding a minimum spanning tree.

Linear programming The problem is expressed as a set of linear inequalities, and then an attempt is made to maximize or minimize a linear objective function subject to them. This can solve many problems, such as maximum flow in directed graphs, notably by using the simplex algorithm. A harder variant of linear programming is integer programming, where the solution space is restricted to the integers.

Reduction, also called transform and conquer Solve a problem by transforming it into another problem. A simple example: to find the median of an unsorted list, first transform the problem into a sorting problem, then take the middle element of the sorted list. The main goal of reduction is to find the simplest possible transformation.

Using graphs Many problems, such as playing chess, can be modeled as problems on graphs and solved with graph exploration algorithms. This category also includes search algorithms and backtracking.

The probabilistic and heuristic paradigm

Probabilistic Algorithms that make some choices randomly.

Genetic Algorithms that attempt to find solutions by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of "solutions"; they thus emulate reproduction and "survival of the fittest".

Heuristic Algorithms whose general purpose is not to find an optimal solution, but an approximate one, for cases where the time or resources needed to find a perfect solution are impractical.


You can look at Algorithms Repository

  1. Searching and sorting algorithms - Sorting algorithms include Quicksort, Merge sort, Heapsort, Bubble sort, Insertion sort, and Radix sort. Other important sorting algorithms are Topological sort, Counting sort, and Shell sort. Important searching algorithms include breadth-first/depth-first search and binary search.

  2. Dynamic Programming - To name a few DP problems: the Longest Common Subsequence problem, Knapsack, the Travelling Salesman problem, etc.

  3. Graph algorithms - Important graph algorithms are Dijkstra, Prim, Kruskal, and Bellman-Ford.
