“Moore’s Law” is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. For the past 50 years, we have relied on Moore’s Law to provide increasing functionality at faster speeds and lower cost. Cheap, fast, and small transistors have allowed a supercomputer from the 70s, which at the time took up a whole room and cost millions of dollars, to shrink enough in size and cost to fit in your pocket and run on a battery. If it were not for the semiconductor industry’s hard work and incredible success in shrinking transistors, there would be no iPhone, no Xbox, no HoloLens. The end of Moore’s Law has been forecast countless times, and each time it has been claimed that “now it will really end.” Its end would likely mean a slowdown in the IT industry, because the cost of computation would stop decreasing. Much of our progress in innovation over time (for instance, in machine learning) has come about because every year computation and storage have continued to become cheaper and more abundant. If Moore’s Law ends, we may see the pace of innovation decrease.
Some argue for this catastrophic forecast, while others believe the situation is just as it has always been and the semiconductor industry will find a way to keep scaling, as it always has. However, there is no denying that scaling silicon is getting more expensive, so higher volumes are required to amortize the investment in developing a new technology node (even though chip prices are not rising). And there is also no denying that there is a physical limit to scaling, although technologists do not agree on how close to that limit engineering improvements can take us. Transistor technology nodes are now at 14nm, roughly 70 atoms in width, with demonstrations of 10nm and 7nm. It is very difficult to build transistors that small, which significantly increases their cost of production.
Instead of spending energy trying to estimate when we’ll hit physical or engineering limits, it would be both more productive and more interesting to think through the implications of hitting those limits and prepare for them. For example, if cost per GFLOP or per gigabyte stops dropping, or drops at a slower rate, what will that do to your algorithms or applications? We’ve grown accustomed to riding the cost improvements that Moore’s Law has provided for the last 50 years, but then what?
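To make that question concrete, here is a minimal sketch of such a projection. Everything in it is hypothetical: the starting price, the two-year halving period, and the `cost_per_gflop` function and its parameters are illustrative assumptions, not figures from this article.

```python
# Illustrative sketch only: the starting cost and the two-year halving
# period are assumed numbers, not real market data.
def cost_per_gflop(years, start_cost=0.03, halving_period=2.0,
                   scaling_ends_after=None):
    """Project the cost of a GFLOP `years` from now, assuming the cost
    halves every `halving_period` years until scaling (optionally) stalls."""
    effective = years if scaling_ends_after is None else min(years, scaling_ends_after)
    return start_cost / (2 ** (effective / halving_period))

# Ten-year outlook if scaling continues vs. if it stalls after four years:
continues = cost_per_gflop(10)                     # cost keeps halving
stalls = cost_per_gflop(10, scaling_ends_after=4)  # cost flatlines early
```

Under these assumed numbers, the ten-year cost differs by a factor of eight between the two scenarios, which is the kind of gap an algorithm or product budget built on ever-cheaper computation would have to absorb.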
Once we hit the wall, how are we going to continue the breakneck innovation pace that Moore’s Law once made possible?
Karin Strauss is a Researcher at Microsoft Research and an Associate Affiliate Professor at the University of Washington. Her research interests include computer architecture and systems, specifically emerging memory technologies and systems tailored to best leverage them, hardware support for machine learning, and in-cell DNA computation.
For more computer science research news, visit ResearchNews.com.