Visualizing Moore's Law
29 Nov 2017
You’re probably already familiar with Moore’s Law. If not, it’s the famous (or infamous) observation by Gordon Moore (who co-founded Fairchild Semiconductor and Intel), who predicted in 1965, and revised his prediction in 1975, that the number of components per integrated circuit doubles every 2 years.
This is a classic curve that’s been shown a million times before, and, while I particularly like the graph of Moore’s Law that’s on the Wikipedia page, it’s a bit outdated. I thought it would be fun to give it a go myself - going so far as to scrape the Wikipedia table detailing the transistor count of processors over the years.
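The scrape itself is mostly a pandas one-liner via `pd.read_html` on the Wikipedia "Transistor count" page, followed by a cleaning step. A minimal sketch (using a tiny inline stand-in for the live table, whose index and exact column names would need checking):

```python
import pandas as pd

# The real scrape would be:
#   dfs = pd.read_html("https://en.wikipedia.org/wiki/Transistor_count")
# then picking the right table out of dfs. Here, a tiny inline
# stand-in so the cleaning step runs offline:
df = pd.DataFrame({
    "Processor": ["Intel 4004", "Intel 8086", "Pentium"],
    "Transistor count": ["2,250", "29,000", "3,100,000"],
    "Year": [1971, 1978, 1993],
})

# Wikipedia formats counts with thousands separators (and the odd
# footnote marker); strip non-digits before converting to int
df["Transistor count"] = (
    df["Transistor count"].str.replace(r"[^\d]", "", regex=True).astype(int)
)
print(df)
```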
So, without further ado, an update to the visualization of Moore’s Law:
Yup - Moore’s empirical observation still holds, although we’re definitely starting to see a slow-down from around 2012 onwards. By the way, just for fun, the curve for Moore’s Law in the plot above has the following mathematical expression:
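Roughly speaking, taking the Intel 4004’s 2,250 transistors in 1971 as the baseline:

$$ \text{transistors}(t) \approx 2250 \cdot 2^{\frac{t - 1971}{2}} $$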
(That was just an excuse to play around with MathJax in Markdown - something that I’ve enabled for the blog recently)
But how has this happened? Have processors also gotten physically larger? Or are transistors just more densely packed onto the CPU wafers? (You’ve probably correctly guessed that the answer is both, but I’ve never seen this broken down before, so I’d like to do it here.) Let’s take a look at the area (in mm²) of processors over time.
This definitely tells us that processors have increased in physical size extremely rapidly over the years. But is that the entire reason?
The “process” of an integrated circuit, nowadays specified in nm (nanometers), is defined as “the average half-pitch of a memory cell”, where a memory cell is a device that can store 1 bit of memory. In other words, the smaller the “process” for a particular integrated circuit, the more miniaturized the components (and hence, the closer we can pack them together). Smaller process equals higher transistor density.
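To put a number on that: since the process figure is a linear dimension, shrinking it increases density with the inverse square. A quick sanity check in Python, comparing the 1971-era 10 µm (10,000 nm) process to a modern 14 nm one:

```python
# Density scales with the inverse square of the process size
# (a linear feature dimension), all else being equal.
def relative_density(old_nm: float, new_nm: float) -> float:
    """How many times denser a new_nm process is vs an old_nm one."""
    return (old_nm / new_nm) ** 2

# 10,000 nm (1971) vs 14 nm (modern)
gain = relative_density(10_000, 14)
print(f"{gain:,.0f}x denser")  # roughly half a million times denser
```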
Let’s take a look at the process used over time:
And that can mean only one thing - that transistor density has absolutely exploded since the 1970s:
Moore’s Law still holds for two reasons, then:
- Our ability to continue to miniaturize transistors.
- Our ability to reliably produce integrated circuits.
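Both factors feed the same exponential, and the doubling time falls out of a straight-line fit in log2 space. A sketch with numpy (the four data points are well-known transistor counts, but this fit is illustrative rather than the one behind the plots above):

```python
import numpy as np

# (year, transistor count) for a few well-known CPUs
data = [
    (1971, 2_250),           # Intel 4004
    (1978, 29_000),          # Intel 8086
    (1993, 3_100_000),       # Pentium
    (2017, 19_200_000_000),  # AMD Epyc (32-core)
]

years = np.array([y for y, _ in data], dtype=float)
counts = np.array([c for _, c in data], dtype=float)

# Fit log2(count) = slope * year + intercept;
# the doubling time is then 1 / slope
slope, intercept = np.polyfit(years, np.log2(counts), 1)
print(f"Doubling time: {1 / slope:.2f} years")
```

For these points the fit lands very close to Moore’s two-year figure.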
But there’s a problem. We’re already sitting with 14nm processors - features just a few dozen silicon atoms across. And despite major technological advancements getting us this far, we’re eventually going to run into a physical limit, at least with the way transistors currently work. Ars Technica has, as usual, a fantastic article on the supposed death of Moore’s Law.
All doom and gloom, then? Not quite. The world isn’t going to grind to a standstill. The demand for increased processing power is at an all-time high (just look at present AI research), and will continue to grow. And when there’s a collective will that strong, I can almost guarantee there’s going to be a way for Moore’s Law to continue to exist for a number of years to come. What’s got me excited is what happens after we reach the current physical limit - that would be interesting.