@aburan28
Created January 31, 2018 18:44
"I've often seen this quote used to justify obviously bad code or code that, while its performance has not been measured, could probably be made faster quite easily, without increasing code size or compromising its readability.
In general, I do think early micro-optimizations may be a bad idea. However, macro-optimizations (things like choosing an O(log N) algorithm instead of an O(N^2) one) are often worthwhile and should be done early, since it may be wasteful to write an O(N^2) algorithm and then throw it away completely in favor of an O(log N) approach.
Note the words "may be": if the O(N^2) algorithm is simple and easy to write, you can throw it away later without much guilt if it turns out to be too slow. But if both algorithms are similarly complex, or if the expected workload is so large that you already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload in the long run.
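To make that concrete, here is a minimal sketch (the class and method names are hypothetical, not from the quoted answer) of the same task written both ways: checking an array for duplicates with an O(N^2) pair-wise scan versus an O(N log N) sort-then-scan. If the inputs stay tiny, the first version is fine; if they might not, the second costs only a few extra lines up front.

    import java.util.Arrays;

    public class DuplicateCheck {
        // O(N^2): compare every pair; trivial to write, fine for small N.
        static boolean hasDuplicateQuadratic(int[] values) {
            for (int i = 0; i < values.length; i++) {
                for (int j = i + 1; j < values.length; j++) {
                    if (values[i] == values[j]) {
                        return true;
                    }
                }
            }
            return false;
        }

        // O(N log N): sort a copy, then any duplicates must be adjacent.
        static boolean hasDuplicateSorted(int[] values) {
            int[] copy = values.clone();
            Arrays.sort(copy);
            for (int i = 1; i < copy.length; i++) {
                if (copy[i] == copy[i - 1]) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            int[] sample = {3, 1, 4, 1, 5, 9, 2, 6};
            System.out.println(hasDuplicateQuadratic(sample)); // true
            System.out.println(hasDuplicateSorted(sample));    // true
        }
    }

Either version can be swapped for the other later, which is exactly why the quadratic one is a reasonable first draft for small, known workloads.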
Thus, in general, I think the right approach is to find out what your options are before you start writing code, and consciously choose the best algorithm for your situation. Most importantly, the phrase "premature optimization is the root of all evil" is no excuse for ignorance. Career developers should have a general idea of how much common operations cost; they should know, for example:
- that strings cost more than numbers
- that dynamic languages are typically much slower than statically-typed languages
- the advantages of array/vector lists over linked lists, and vice versa
- when to use a hashtable, when to use a sorted map, and when to use a heap (see the sketch after this paragraph)
- that (if they work with mobile devices) "double" and "int" have similar performance on desktops (floating point may even be faster), but "double" may be a hundred times slower on low-end mobile devices without FPUs
- that transferring data over the internet is slower than HDD access, HDDs are vastly slower than RAM, RAM is much slower than the L1 cache and registers, and internet operations may block indefinitely (and fail at any time).
And developers should be familiar with a toolbox of data structures and algorithms so that they can easily use the right tools for the job.
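As a rough illustration of that toolbox point (using the standard Java collections; the specific keys and values are made up for the example), the choice between a hash table, a sorted map, and a heap usually follows directly from the access pattern: exact-key lookups, ordered or range queries, or repeated extraction of the smallest element.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.PriorityQueue;
    import java.util.TreeMap;

    public class ToolboxDemo {
        public static void main(String[] args) {
            // Hash table: O(1) average lookup by exact key, no ordering.
            Map<String, Integer> wordCounts = new HashMap<>();
            wordCounts.put("optimization", 3);
            wordCounts.put("premature", 1);
            System.out.println(wordCounts.get("optimization")); // 3

            // Sorted map: O(log N) operations, but keys stay ordered,
            // which enables ordered iteration and range queries.
            TreeMap<Integer, String> events = new TreeMap<>();
            events.put(1974, "Knuth's paper");
            events.put(2018, "this gist");
            System.out.println(events.firstKey());      // 1974
            System.out.println(events.headMap(2000));   // {1974=Knuth's paper}

            // Heap (priority queue): cheap access to the minimum element,
            // useful when you repeatedly need the smallest item next.
            PriorityQueue<Integer> deadlines = new PriorityQueue<>();
            deadlines.add(30);
            deadlines.add(10);
            deadlines.add(20);
            System.out.println(deadlines.poll());        // 10
        }
    }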
Having plenty of knowledge and a personal toolbox enables you to optimize almost effortlessly. Putting a lot of effort into an optimization that might be unnecessary is evil (and I admit to falling into that trap more than once). But when optimization is as easy as picking a set/hashtable instead of an array, or storing a list of numbers in double[] instead of string[], then why not? I might be disagreeing with Knuth here (I'm not sure), but I think he was talking about low-level optimization, whereas I am talking about high-level optimization.
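Both of those "why not" cases are cheap to write down. The sketch below (hypothetical data, Java standard library only) shows a HashSet answering membership queries without a linear scan, and a double[] holding numbers that were parsed once instead of on every use.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class CheapChoices {
        public static void main(String[] args) {
            // Membership tests: HashSet.contains() is O(1) on average,
            // while List.contains() scans the whole list.
            List<String> allowedList = Arrays.asList("read", "write", "admin");
            Set<String> allowedSet = new HashSet<>(allowedList);
            System.out.println(allowedSet.contains("write")); // true, no scan

            // Numeric data: storing numbers as text forces a parse on every
            // use; a double[] stores them ready to compute with.
            String[] asText = {"1.5", "2.5", "3.0"};
            double[] asNumbers = new double[asText.length];
            for (int i = 0; i < asText.length; i++) {
                asNumbers[i] = Double.parseDouble(asText[i]); // parse once, up front
            }
            double sum = 0;
            for (double v : asNumbers) {
                sum += v;                                     // pure arithmetic afterwards
            }
            System.out.println(sum);                          // 7.0
        }
    }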
Remember, that quote is originally from 1974. In 1974 computers were slow and computing power was expensive, which gave some developers a tendency to over-optimize, line by line. I think that's what Knuth was pushing against. He wasn't saying "don't worry about performance at all", because in 1974 that would just be crazy talk. Knuth was explaining how to optimize; in short, one should focus only on the bottlenecks, and before doing that, one must perform measurements to find them.
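The measurement step doesn't have to be elaborate. A real profiler (or a benchmark harness such as JMH) is the right tool for serious work, but even a crude wall-clock sketch like the hypothetical one below tells you which phase of a program actually dominates before you spend effort optimizing anything.

    public class MeasureFirst {
        // Two hypothetical phases of a program; stand-ins for the code
        // whose cost you want to understand before optimizing anything.
        static long buildData() {
            long acc = 0;
            for (int i = 0; i < 5_000_000; i++) {
                acc += i;
            }
            return acc;
        }

        static long processData() {
            long acc = 0;
            for (int i = 0; i < 50_000_000; i++) {
                acc += i % 7;
            }
            return acc;
        }

        public static void main(String[] args) {
            // Crude wall-clock timing of each phase; only the phase that
            // dominates is worth optimizing.
            long t0 = System.nanoTime();
            long a = buildData();
            long t1 = System.nanoTime();
            long b = processData();
            long t2 = System.nanoTime();

            System.out.println("buildData:   " + (t1 - t0) / 1_000_000 + " ms (result " + a + ")");
            System.out.println("processData: " + (t2 - t1) / 1_000_000 + " ms (result " + b + ")");
        }
    }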
Note that you can't find the bottlenecks until you have written a program to measure, which means that some performance decisions must be made before anything exists to measure. Sometimes these decisions are difficult to change if you get them wrong. For this reason, it's good to have a general idea of what things cost so you can make reasonable decisions when no hard data is available.
How early to optimize, and how much to worry about performance, depend on the job. When writing scripts that you'll only run a few times, worrying about performance at all is usually a complete waste of time. But if you work for Microsoft or Oracle and you're working on a library that thousands of other developers are going to use in thousands of different ways, it may pay to optimize the hell out of it, so that you can cover all the diverse use cases efficiently. Even so, the need for performance must always be balanced against the need for readability, maintainability, elegance, extensibility, and so on."
https://softwareengineering.stackexchange.com/questions/80084/is-premature-optimization-really-the-root-of-all-evil