Did Moore’s law predict the world or define it?


Intel co-founder Gordon Moore smiles next to a wall at Intel headquarters in Santa Clara, California on March 9, 2005. | Photo Credit: AP
“Coordinating concept”: this is how historian of science and technology Cyrus Mody described Moore’s law in a 2015 article commemorating 50 years of the tenet. By it, Dr. Mody, now at Maastricht University in the Netherlands, meant that one of the things that kept the law going was the law itself, a form of self-fulfilling prophecy. The businessman and engineer for whom the law is named, Gordon Moore, co-founder of Intel Corporation, passed away on March 24.
In 1965, Dr. Moore observed that the number of transistors on an integrated circuit (IC) was doubling every year, and predicted the trend would continue. A decade later, he revised the law to say the number would double approximately every two years. Since then, as many reports, articles, and books have documented, Moore’s law held true: the number of transistors on an IC did increase in that fashion.
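The arithmetic of the law is a simple doubling rule. A minimal sketch, taking the Intel 4004 of 1971 (about 2,300 transistors) as an illustrative baseline — the baseline and the strict two-year period are assumptions for illustration, not figures from the article:

```python
# Illustrative sketch of Moore's law as a strict two-year doubling rule.
# Baseline (Intel 4004, 1971, ~2,300 transistors) is an assumed starting
# point for illustration only; real chips tracked the trend only roughly.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project the transistor count under a strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001):
    print(year, round(transistors(year)))
```

Each decade multiplies the count by 2^5 = 32, which is why the curve looks flat for years and then explosive: exponential growth hides in linear time.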

Physicist Stephen Hawking, left, looks at a new custom-built computer designed especially for him with Gordon Moore of the Intel Corporation in the library of The Isaac Newton Institute of Mathematics in Cambridge on March 20, 1997. | Photo Credit: AP
But part of this growth, which precipitated the 20th century’s digital revolution, owed itself to many people and industries being guided by Moore’s law, rather than to the prediction naturally coming true. In the 1950s and 1960s, roughly speaking, many scientists and engineers found that making the components of a computing system smaller also made it more efficient.
Some time later, working with Dr. Moore, Carver Mead found that if the dimensions of a microchip were reduced by some factor, its efficiency improved by the cube of that factor. Compounded across successive chip generations, such cubic gains add up to exponential progress. Where there is exponential gain, there is rapid advance to be had. And where there are rapid advances, the future becomes harder to predict.
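The cubic claim can be made concrete with a toy calculation — a sketch of the scaling rule as stated, where the "efficiency" metric is a stand-in assumption rather than Mead's actual figure of merit:

```python
# Sketch of the cubic scaling rule attributed to Carver Mead: shrinking
# feature dimensions by a factor k improves "efficiency" by roughly k**3.
# The metric is a placeholder assumption, not Mead's precise definition.

def efficiency_gain(shrink_factor):
    """Relative efficiency gain from one shrink by the given factor."""
    return shrink_factor ** 3

# Compounding: three successive halvings of feature size.
total = 1.0
for generation in range(3):
    total *= efficiency_gain(2)
print(total)
```

A single halving of dimensions yields an eightfold gain; three generations of halving compound to 8 × 8 × 8 = 512× — the kind of runaway payoff the article says made the future hard to predict.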
As Wired cofounder Kevin Kelly wrote in 2009, “expectations of future progress guide current investments”. How should we define these expectations? Enter Moore’s law, playing the role of a “coordinating concept”. At a time when many things seemed possible, Moore’s law gave the semiconductor industry a way to set targets; it told governments and militaries where to invest and how much. Adenekan Dedeke, an expert in supply chain and information management, wrote in 2022 that the law “helps software developers to anticipate, broadly speaking, how much bigger their software releases should be.”
To rephrase Dr. Mody, “the producers and consumers of microelectronics” believed they were at the mercy of the law, but in truth they were actively maintaining it. The law practically defined expectations even as it aged with the world, and age it did. The pace of the Moore’s law curve is flagging. Intel’s CEO said in 2015 that the company’s “cadence” had slowed from the two years of Moore’s law to two-and-a-half. Chips are coming up against physical and economic limits, such as the paucity of metals with suitable electronic properties, rising fabrication costs, and heat dissipation, encouraging the industry to look past silicon, transistors and, fundamentally, the idea that ever smaller is better. The time has come, it would seem, for a new “coordinating concept”.