@jeremyckahn
Last active October 2, 2020
Practical Computer Science Fundamentals

CompSci degrees cost a lot of money, take years to complete, and aren't a viable option for many folks. Luckily, much of what you'd learn in a formal Computer Science program can be self-taught online for free. This gist is a roundup of some of the most helpful resources I've found.

Where I'm coming from

I'm not pursuing a deep, academic comprehension of CS concepts — my goal is to become respectably conversant about the most prevalent and relevant subjects in the field.

Starting points

Here are some meta-resources to give you an overview of the subject matter:

  1. https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/ (Digestible explanation of Big-O notation)
  2. http://bigocheatsheet.com/ (Good index of data structures and sorting algorithms and their performance metrics)
  3. https://www.youtube.com/user/mikeysambol/videos (Quick, clear videos explaining said data structures and sorting algorithms)

My workflow for understanding this has been to internalize #1, and then go through #2 and #3 in tandem to dive into and grok each concept. I find it helpful to go over a bunch of stuff one day, partially understand it, and then revisit it a day or so later to cement the concepts in my head.

Things to know for interviews

Complexity theory

    1. O(1) (constant time)
    2. O(log n) (logarithmic time)
    3. O(n) (linear time)
    4. O(n log n) (linearithmic time)
    5. O(n^2) (quadratic time)
    6. O(2^n) (exponential time)
    7. O(n!) (factorial time)

Also see this helpful list on Wikipedia.
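To build intuition for how quickly these classes diverge, here's a quick sketch (the `growth` object and its names are illustrative, not from the original gist) that evaluates each class's growth formula at a fixed n:

```javascript
// Approximate operation counts for each complexity class at a given n.
// The entries mirror the list above, from fastest-growing input tolerance
// (constant) down to slowest (factorial).
const growth = {
  'O(1)': n => 1,
  'O(log n)': n => Math.log2(n),
  'O(n)': n => n,
  'O(n log n)': n => n * Math.log2(n),
  'O(n^2)': n => n ** 2,
  'O(2^n)': n => 2 ** n,
  'O(n!)': n => {
    let f = 1;
    for (let i = 2; i <= n; i++) f *= i;
    return f;
  },
};

const n = 16;
for (const [name, f] of Object.entries(growth)) {
  console.log(`${name}: ~${Math.round(f(n))} operations at n = ${n}`);
}
```

Even at n = 16, O(2^n) is already 65,536 operations while O(log n) is 4 — which is why the difference between classes matters far more than constant factors.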

Big-O notation in simple, illustrated terms

All examples assume doSomethingWith() has a complexity of O(1). (Here is a really good article that focuses on this in the context of JavaScript.)
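The gist doesn't define doSomethingWith(); any constant-time function will do. A trivial stand-in (an assumption for runnability, not part of the original) so the examples below can be executed as-is:

```javascript
// A minimal O(1) stand-in for doSomethingWith(): returns its argument
// unchanged. Any constant-time body (logging, arithmetic) works equally well.
const doSomethingWith = item => item;
```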

O(n)

A simple loop is O(n):

```javascript
const arr = [...]; // an array of n elements
for (let i = 0; i < arr.length; i++) {
  doSomethingWith(arr[i]);
}
```

O(n^2)

A nested loop is O(n^2) (more generally, k levels of nesting over the same n is O(n^k)):

```javascript
const arr = [[...], [...], [...], ...];
for (let i = 0; i < arr.length; i++) {
  const nestedArray = arr[i];

  for (let j = 0; j < nestedArray.length; j++) {
    doSomethingWith(nestedArray[j]);
  }
}
```

O(log n)

A function that halves the amount of work remaining after each iteration is O(log n):

```javascript
// Copied from https://gist.github.com/AlexKondov/60b9efaca1045aae9c8b412fe6f4bb29#file-binary-search-js
// Note: binary search requires `list` to already be sorted.
function binarySearch (list, value) {
  let start = 0;
  let stop = list.length - 1;
  let middle = Math.floor((start + stop) / 2);

  while (list[middle] !== value && start < stop) {
    if (value < list[middle]) {
      stop = middle - 1;
    } else {
      start = middle + 1;
    }

    middle = Math.floor((start + stop) / 2);
  }

  return (list[middle] !== value) ? -1 : middle;
}
```
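Why does halving give log n? Each pass discards half of the remaining elements, so reducing n elements down to 1 takes log2(n) passes. A quick sketch (the `halvingSteps` helper is illustrative) that counts those passes directly:

```javascript
// Count how many times n can be halved before reaching 1 -- this is also
// the maximum number of comparisons a binary search makes on n elements.
const halvingSteps = n => {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
};

console.log(halvingSteps(16));   // 4 halvings: 16 -> 8 -> 4 -> 2 -> 1
console.log(halvingSteps(1024)); // 10, i.e. log2(1024)
```

Doubling the input size only adds one more halving step, which is what makes logarithmic algorithms scale so well.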

O(n log n)

A function that loops over an entire data set and calls an O(log n) function for every iteration (AKA, a typical decent sorting algorithm) is O(n log n):

```javascript
// Copied from https://codepen.io/jeremyckahn/pen/bKwVXB
const merge = (a, b) => {
  const acc = [];

  while (a.length && b.length) {
    acc.push(a[0] > b[0] ? b.shift() : a.shift());
  }

  acc.push(...a);
  acc.push(...b);

  return acc;
};

const mergeSort = arr => {
  const { length } = arr;

  // <= 1 (rather than === 1) so an empty array doesn't recurse forever
  if (length <= 1) {
    return arr;
  }

  const divider = Math.floor(length / 2);

  return merge(
    mergeSort(arr.slice(0, divider)),
    mergeSort(arr.slice(divider, length))
  );
};
```
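To see the n log n behavior empirically, here's an instrumented sketch of the same merge sort (identical except for a comparison counter, which is my addition): for n = 1024, the comparison count stays at or below n × log2(n) = 10,240.

```javascript
// Merge sort with a comparison counter, mirroring merge/mergeSort above.
let comparisons = 0;

const merge = (a, b) => {
  const acc = [];
  while (a.length && b.length) {
    comparisons++; // one comparison per element moved while both halves remain
    acc.push(a[0] > b[0] ? b.shift() : a.shift());
  }
  acc.push(...a, ...b);
  return acc;
};

const mergeSort = arr =>
  arr.length <= 1
    ? arr
    : merge(
        mergeSort(arr.slice(0, Math.floor(arr.length / 2))),
        mergeSort(arr.slice(Math.floor(arr.length / 2)))
      );

const input = Array.from({ length: 1024 }, () => Math.random());
const sorted = mergeSort(input);
console.log(comparisons); // at most n * log2(n) = 10240 comparisons
```

Compare that to the roughly 500,000 comparisons an O(n^2) sort like bubble sort would make on the same input.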

Other interesting things

  • A good alternative to Hash Tables for word lookup problems is a trie
  • Splay Trees are really neat
    • Useful for cached data lookups, since recently accessed items move to the root
    • Also cool because they require no extra bookkeeping data per node
  • B-Trees are also really neat and are helpful for accessing ranges of data (think data that is stored on a slow medium like a hard disk)
  • Merge Sort is weird but really fast (and deceptively simple once you wrap your head around it)
  • Heap Sort is equally weird but fast and cool
  • Red-Black Trees (RB Trees) are super fast but super confusing and I really hope I never have to implement one
  • Radix Sort is magic: https://www.youtube.com/watch?v=xhr26ia4k38
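As a sketch of the trie idea mentioned above (all names here are illustrative, not from a particular library): insertion and lookup each cost O(k), where k is the word's length, regardless of how many words are stored.

```javascript
// A minimal trie: each node maps characters to child nodes, and a flag
// marks the nodes that terminate a stored word.
const createTrie = () => ({ children: {}, isWord: false });

const insert = (trie, word) => {
  let node = trie;
  for (const ch of word) {
    node.children[ch] = node.children[ch] || createTrie();
    node = node.children[ch];
  }
  node.isWord = true; // mark the end of a complete word
};

const contains = (trie, word) => {
  let node = trie;
  for (const ch of word) {
    node = node.children[ch];
    if (!node) return false; // path doesn't exist -- word was never stored
  }
  return node.isWord;
};

const trie = createTrie();
['cat', 'car', 'card'].forEach(word => insert(trie, word));
console.log(contains(trie, 'car')); // true
console.log(contains(trie, 'ca'));  // false -- a prefix, not a stored word
```

Note that shared prefixes ('cat', 'car', 'card') share nodes, which is what makes tries space-efficient for dictionaries.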
