The number of transistors and resistors on a chip doubles every 24 months.

It's a little tricky because you can always add more cores, or even fluff like a GPU or fixed-function accelerators, to inflate transistor counts even if yields degrade and programs that can actually use those resources are rare. The law doesn't make an exception for these, or more pertinently, for multicore designs, so let's see how the biggest chips from Intel compare over the past 20 years. I'll do GPU transistor counts as well to see how they compare. I expect that doubling transistor counts every two years probably does hold, but for Nvidia rather than Intel.
An hour of googling later:
Man I hate wccftech so much. How such a garbage rumor site constantly gets top billing on Google searches is a mystery. It's almost a reverse hierarchy: rumor trash like wccftech and Motley Fool on top, data-dredged (but still useful) SEO clickbait like CPUBoss next, followed by amateur enthusiast sites like AnandTech and Tom's Hardware, then professional sites like NextPlatform, and finally expert but accessible sites like David Kanter's Real World Tech.
Anyway, here are the tables.* First, GPUs:
| Date | GPU Name | Transistor Count (millions) | % Change over Previous | Expected (millions) |
|------|----------|-----------------------------|------------------------|---------------------|
| April 1997 | Riva 128 | 3.5 | | 3.5 |
| March 1999 | Riva TNT2 | 15 | 328.57% | 7.00 |
| February 2001 | GeForce 3 | 63 | 320.00% | 14.00 |
| January 2003 | GeForce FX 5800 Ultra | 125 | 98.41% | 28.00 |
| June 2005 | GeForce 7800 GTX | 302 | 141.60% | 56.00 |
| May 2007 | GeForce 8800 Ultra | 681 | 125.50% | 112.00 |
| January 2009 | GeForce GTX 285 | 1400 | 105.58% | 224.00 |
| May 2011 | GeForce GTX 580 | 3000 | 114.29% | 448.00 |
| May 2013 | GeForce GTX Titan | 7100 | 136.67% | 896.00 |
| May 2015 | GeForce GTX Titan X | 12000 | 69.01% | 1,792.00 |
| 1H 2017? | Titan Volta? | | | |
Wow! GPU workloads are more parallel, but it's clear Nvidia is outpacing Moore's Law by a comfortable margin. Not quite Kurzweil "accelerating change" fast, but Nvidia could stagnate for four years and still be ahead. I'd say good job, Nvidia, but they've already taken a ton of my money, which is all they really ever wanted anyway.
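If you want to redo the math, here's a back-of-the-envelope version in Python. The helper names and the month counts are mine; the transistor figures come straight from the table above.

```python
from math import log2

def expected_counts(baseline_millions, samples, doubling_months=24, months_between=24):
    """The 'Expected' column: double the 1997 baseline once every 24 months."""
    return [baseline_millions * 2 ** (i * months_between / doubling_months)
            for i in range(samples + 1)]

def implied_doubling_months(start_millions, end_millions, elapsed_months):
    """Average months per actual doubling over the whole span."""
    return elapsed_months / log2(end_millions / start_millions)

# Nvidia: Riva 128 (April 1997, 3.5M) to Titan X (May 2015, 12,000M) is ~217 months.
print(expected_counts(3.5, 9)[-1])               # 1792.0, matching the table's last Expected entry
print(implied_doubling_months(3.5, 12000, 217))  # ~18.5 months per doubling, ahead of the 24-month pace
# Even frozen at 12,000M for four more years, Nvidia would still beat the
# Moore's Law projection, which only reaches 1792 * 4 = 7168M by 2019.
```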
Now CPU:
| Date | CPU Name | Transistor Count (millions) | % Change over Previous | Expected (millions) | Core Count |
|------|----------|-----------------------------|------------------------|---------------------|------------|
| May 1997 | Pentium II Klamath | 7.5 | | 7.5 | 1 |
| February 1999 | Pentium III Katmai | 9.5 | 26.67% | 15 | 1 |
| April 2001 | Pentium 4 | 42 | 342.11% | 30 | 1 |
| March 2003 | Pentium M Banias | 77 | 83.33% | 60 | 1 |
| May 2005 | Pentium D Smithfield | 230 | 198.70% | 120 | 2 |
| April 2007 | Core 2 Kentsfield MCM | 586 | 154.78% | 240 | 4 |
| March 2009 | Xeon Gainestown (W5580) | 750 | 27.99% | 480 | 4 |
| April 2011 | Xeon Westmere (E7-8870) | 2600 | 246.67% | 960 | 10 |
| June 2013 | Xeon Ivy Bridge (E5-2697 v2) | 2890 | 11.15% | 1920 | 12 |
| May 2015 | Xeon Haswell (E7-8890 v3) | 5600 | 93.77% | 3840 | 18 |
| 1H 2017? | Xeon Kaby? | | | | |
Well, there it is: Intel really has been able to keep Moore's Law on track! Granted, the samples from Westmere onward aren't consumer chips but rather pricey server parts with lots of cores, but they aren't one-off tech demos either.
Moore's Law doesn't say anything about price or performance either, which is why it technically doesn't matter that the E7-8890 v3 launched at $7,200 while the Pentium II cost $1,200 in 2017 dollars.** Or that the 8890 v3 is an 18-core chip that will be no faster than a one- or two-core version for most programs.
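The same doubling-time check applied to the Intel column, plus the sticker-price comparison, looks like this (a rough sketch; the prices are the ones quoted above, with only the Pentium II adjusted to 2017 dollars):

```python
from math import log2

# Intel: Pentium II Klamath (May 1997, 7.5M) to E7-8890 v3 (May 2015, 5600M) is 216 months.
print(216 / log2(5600 / 7.5))   # ~22.6 months per doubling, just ahead of the 24-month pace

# Sticker price per million transistors, using the figures quoted above.
print(1200 / 7.5)               # Pentium II: ~$160 per million transistors (2017 dollars)
print(7200 / 5600)              # E7-8890 v3: ~$1.29 per million transistors
```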
Anyway, I stand corrected. Or do I? Intel's answer to whether Moore's Law is dead was this chart:
The cost-per-transistor chart proves nothing except that even Intel isn't immune from imputing different intents to Moore's Law, i.e., "What does a biennial doubling of transistor counts per chip mean?"
For many years, increasing transistor counts meant a direct increase in performance, which is why there was a big difference between a 286, 386, 486, and so on. Those gains are gone. Intel's presentation points out how it can pack more transistors per area than ever before and how cost per transistor has gone way down thanks to its $10 billion fabs, though you wouldn't know it from its SKU pricing over the past decade.***
Intel's cost savings haven't translated into customer savings, although the incidental benefits of lower power consumption and better frequencies for those lower-power parts have. At both the consumer and the high end, the lack of real competition and the price gouging aren't a big secret. Thankfully, Zen will bring some sanity back to the market, so Intel will probably have to cut the slide touting its 60%+ gross margins from the next Technology and Manufacturing Day deck.
But the attack of Naples and Ryzen won't bring back the old days of huge IPC increases. Maybe it brings large price drops across the board, where the $7,200, 5.6-billion-transistor E7 gets priced closer to the $800, 12-billion-transistor 1080 Ti. Big whoop.
The single-thread performance leader has never been the chip with a huge core count. And there are still architectural advantages unique to AMD, Intel, and IBM that could provide cumulative improvements to future chips using existing technology. Then there are the not-yet-existing technologies that promise the world: the Popular Science/IEEE Spectrum sort of "breakthroughs" you stopped paying attention to years ago because they were beyond vaporware. Plasmaware?
Photonics, neural networks, quantum computing. Maybe those really are the future, but I'd rather see effort put into proven methods for improving performance. Extreme cooling. Optimize packages, processes, and IT infrastructure for superconducting temperatures. I'm just being Cray Cray.
* I didn't factor in paper launches, custom runs, or more exotic chips like Tesla and Phi. Currently, the Xeon Phi 7290 with 72 cores holds the record as Intel's biggest chip at over 8 billion transistors; on Nvidia's side it's the 15.3-billion-transistor P100. Neither is that much larger than the "mainstream" parts.
** Back then the Pentium Pro was better thanks to its faster cache despite being an older design. That imprinted the importance of cache on me like a little baby duck. Caches used to sit on a separate socket, then moved onto the CPU board as a separate chip, and then onto the CPU itself. But there's still a case to be made for a cache that's physically separate yet closer than RAM, as you get with eDRAM. For desktops, where power consumption isn't as big a concern, it would be great to see a socketed cache, maybe on the underside of the CPU package, made of good ol' fashioned SRAM.
Change the standard for high-performance computers to liquid cooling so RAM can sit even closer to the CPU. We're already at the point where the highest memory frequencies show up on boards whose RAM channels and traces run a shorter distance to the socket than on comparable motherboards.
*** One thing that would be really great to see adopted is Intel's definition of transistor density to differentiate processes. While I'm not sure about the 60% weighting for NAND cells and 40% for scan flip-flop cells, it's a whole lot better than the "14nm" vs. "10nm" marketing that isn't fooling anyone. Actually, I think a raw maximum transistors per square millimeter, plus some sort of confidence interval for the maximum frequencies achievable at various temperatures, would be best.
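For what it's worth, the weighted metric is easy to write down. Here's a sketch assuming the 60/40 split mentioned above applies to transistors-per-area of a NAND2 cell and a scan flip-flop cell; the cell areas and the flip-flop transistor count below are placeholders I made up, not real process numbers.

```python
# Sketch of the weighted-density idea: 60% NAND2, 40% scan flip-flop,
# each measured as transistors per unit of standard-cell area.
# 1 transistor/um^2 equals 1 million transistors/mm^2, so no unit scaling is needed.
def weighted_density_mtr_per_mm2(nand2_area_um2, sff_area_um2,
                                 nand2_transistors=4,   # a static CMOS 2-input NAND is 4 transistors
                                 sff_transistors=36,    # placeholder; real scan flip-flop counts vary
                                 nand_weight=0.6, sff_weight=0.4):
    nand2_density = nand2_transistors / nand2_area_um2
    sff_density = sff_transistors / sff_area_um2
    return nand_weight * nand2_density + sff_weight * sff_density

# Example with made-up cell areas, purely to show the shape of the calculation:
print(weighted_density_mtr_per_mm2(nand2_area_um2=0.04, sff_area_um2=0.35))  # ~101 MTr/mm^2
```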
Wow, all these cool ideas that I have no ability to implement. I'll design the logo.