"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
40 years ago, Donald Knuth called out premature optimization as the root of all evil. Programming languages have changed a lot since then, but his statement might still hold true.
In this article, I'll explain what premature optimization and premature pessimization are and how JavaScript can benefit from (premature) optimization.
Premature Optimization
Premature optimization is when you make code more complex in the name of efficiency, without data showing that it's actually needed.
Looking at the definition, the key takeaway is that you shouldn't optimize your code until you have data proving that optimization is necessary. Such data might come from user feedback, for example.
Clean code and readability should come first. Improving minor implementation details has close to no effect on the overall performance of your application and, in the worst case, only increases complexity.
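To make this concrete, here's a contrived sketch of what premature optimization can look like (both functions are made up for illustration):

```js
// "Clever" but premature: bit-shifting instead of division, without any
// data showing that the division was ever a bottleneck. It's harder to
// read and subtly wrong for non-integers and odd negative numbers.
function halfObscure(n) {
  return n >> 1; // halfObscure(3.5) === 1, halfObscure(-3) === -2
}

// Clear and correct, and fast enough until profiling proves otherwise:
function half(n) {
  return n / 2;
}
```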
How to optimize
If you've come to a point where you know that you have to improve performance, you shouldn't just go ahead and start optimizing the first for loop that crosses your way.
You can start by analyzing your website with tools like WebPageTest, which will help you find bottlenecks. Then, with this data backing you up, you can optimize the parts that truly impact performance.
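If you prefer to automate this, the community webpagetest npm package wraps the WebPageTest API. A minimal sketch (the API key and test location below are placeholders):

```js
// Minimal sketch using the community `webpagetest` npm package.
// 'YOUR_API_KEY' is a placeholder; 'Dulles:Chrome' is an example location.
const WebPageTest = require('webpagetest');

const wpt = new WebPageTest('www.webpagetest.org', 'YOUR_API_KEY');

wpt.runTest('https://example.com', { location: 'Dulles:Chrome' }, (err, data) => {
  // `data` contains the test ID and links to the detailed results.
  console.log(err || data);
});
```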
Having a proper monitoring system in place is key to maintaining good performance in the long run. I wrote an article about how to integrate WebPageTest into your CI pipeline with TravisCI.
The definition above implies that optimized code is more complicated. This, however, is not always true. In JavaScript, for instance, more readable and structured code more often than not also translates into better performance, as in this illustrative sketch (the function names are made up):
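```js
// Convoluted: `total` flips between string and number on every iteration,
// which prevents the JIT from specializing the code.
function sumMessy(values) {
  var total = '0';
  for (var i = 0; i < values.length; i++) {
    total = String(Number(total) + values[i]);
  }
  return Number(total);
}

// Readable and engine-friendly: one binding, one type.
function sum(values) {
  let total = 0;
  for (const value of values) {
    total += value;
  }
  return total;
}

console.log(sum([1, 2, 3])); // 6
```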
Premature Pessimization
"Premature pessimization is when you write code that is slower than it needs to be, usually by asking for unnecessary extra work, when equivalently complex code would be faster and should just naturally flow out of your fingers." — Herb Sutter
As we will see later, using const in JavaScript makes it easier for the engine to optimize the code. Some people could argue that this is premature optimization. However, const is also more idiomatic, and there are no trade-offs in using const over var. Sticking with var anyway would be a case of premature pessimization.
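A quick sketch of the difference (the variables are made up):

```js
// `var` gives neither the engine nor the reader any guarantee:
var taxRate = 0.19;
taxRate = 0.2; // silently allowed, even if it was never intended

// `const` documents the intent and lets the engine rely on it:
const TAX_RATE = 0.19;
// TAX_RATE = 0.2; // would throw: TypeError: Assignment to constant variable.
```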
It's important to be aware that performant code and clean code are not mutually exclusive.
If you can write performant and clean code, why not do it?
JavaScript
JavaScript plays a different role in this topic. When Donald Knuth made his statement 40 years ago, the programming landscape was dominated by languages using Ahead of Time (AOT) compilation, which already optimizes the code before it is packaged for execution.
JavaScript, on the other hand, is compiled at runtime. This is known as Just in Time (JIT) compilation. It means that optimization happens while the code is being executed and depends on the engine the code is running in.
Mozilla has published some optimization strategies for Firefox's engine, SpiderMonkey, which also align with writing clean code.
Use constants
SpiderMonkey attempts to optimize element accesses that always refer to the same object.
In ES6 syntax, this could mean using const when possible, which already tells the engine that the variable won't be re-assigned.
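A small sketch of what that can look like (the object and function are hypothetical):

```js
// Because `lookup` is declared with `const`, every `lookup[key]` access
// is guaranteed to refer to the same object, which is exactly the kind
// of element access the engine can specialize.
const lookup = { small: 1, medium: 2, large: 3 };

function sizeOf(key) {
  return lookup[key];
}

console.log(sizeOf('medium')); // 2
```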
No dynamic types
The engine also optimizes the use of variables that always hold values of the same type. Using TypeScript could make this even easier to enforce during development.
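For example (a hypothetical function):

```js
// `distance` is only ever called with numbers, so the engine can
// compile a fast, number-only version of it.
function distance(x, y) {
  return Math.sqrt(x * x + y * y);
}

distance(3, 4); // 5
distance(6, 8); // 10

// Calling it with mixed types would force a slower, generic path:
// distance('3', 4);
```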
Reuse code
This might be the most important optimization. The JIT compiler detects which functions are executed most often and marks them as hot. The engine then tries to optimize those functions.
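A sketch of the idea (the helper function is made up):

```js
// Funneling repeated work through one shared function gives the JIT a
// single call target that becomes "hot" quickly, instead of several
// duplicated, rarely-run copies of the same logic.
function formatPrice(amount) {
  return `$${amount.toFixed(2)}`;
}

const prices = [19.99, 5.5, 42];
console.log(prices.map(formatPrice)); // [ '$19.99', '$5.50', '$42.00' ]
```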
But aren't those optimizations premature?
I'd say no. Yes, they are optimizations, but they also contribute to making your code more readable. According to the definition, you can only speak of premature optimization if you also increase complexity; these examples actually decrease it.
Optimizations like these might add up to a noticeable impact on the performance of your application, and they are also essential to writing clean code.
Conclusion
I think premature pessimization might be a more common problem than premature optimization. When hacking together a prototype or working on a small project, it's not important to focus on small performance optimizations, but this shouldn't be an excuse for writing bad code. If you're used to writing high-quality code, you'll probably do so on smaller projects too.
Whether you're improving performance or not, the primary goal should always be to write clean and maintainable code that other people like to work with. Adopting best practices could lead to code that is more performant and more readable at the same time.
It's also worth considering the impact of each decision. Choosing a database technology is a more important and long-term decision than choosing the right way to iterate over an array.
To sum it up, optimization is good and important, but always keep the big picture in mind. Don't waste time optimizing something that you only think will improve performance. Instead, make decisions based on data by monitoring performance and identifying bottlenecks.
Links
https://wiki.c2.com/?PrematureOptimization
https://herbsutter.com/2013/05/13/gotw-2-solution-temporary-objects/