How We’ll Reach a 1 Trillion Transistor GPU


In 1997 the IBM Deep Blue supercomputer defeated world chess champion Garry Kasparov. It was a groundbreaking demonstration of supercomputer technology and a first glimpse of how high-performance computing might one day overtake human-level intelligence. In the 10 years that followed, we began to use artificial intelligence for many practical tasks, such as facial recognition, language translation, and recommending films and merchandise.

Fast-forward another decade and a half, and artificial intelligence has advanced to the point where it can "synthesize knowledge." Generative AI, such as ChatGPT and Stable Diffusion, can compose poems, create artwork, diagnose disease, write summary reports and computer code, and even design integrated circuits that rival those made by humans.

Tremendous opportunities lie ahead for artificial intelligence to become a digital assistant to all human endeavors. ChatGPT is a good example of how AI has democratized the use of high-performance computing, providing benefits to every individual in society.

All those marvelous AI applications have been due to three factors: innovations in efficient machine-learning algorithms, the availability of massive amounts of data on which to train neural networks, and progress in energy-efficient computing through the advancement of semiconductor technology. This last contribution to the generative AI revolution has received less than its fair share of credit, despite its ubiquity.

Over the last three decades, the major milestones in AI were all enabled by the leading-edge semiconductor technology of their time and would have been impossible without it. Deep Blue was implemented with a mix of 0.6- and 0.35-micrometer-node chip-manufacturing technology. The deep neural network that won the ImageNet competition, kicking off the current era of machine learning, was implemented with 40-nanometer technology. AlphaGo conquered the game of Go using 28-nm technology, and the initial version of ChatGPT was trained on computers built with 5-nm technology. The newest incarnation of ChatGPT is powered by servers using even more advanced 4-nm technology. Each layer of the computer systems involved, from software and algorithms down to the architecture, circuit design, and device technology, acts as a multiplier for the performance of AI. But it's fair to say that the foundational transistor-device technology is what has enabled the advancement of the layers above.

If the AI revolution is to continue at its current pace, it's going to need even more from the semiconductor industry. Within a decade, it will need a 1-trillion-transistor GPU: that is, a GPU with 10 times as many devices as is typical today.

[Figure] Advances in semiconductor technology (top line), including new materials, advances in lithography, new types of transistors, and advanced packaging, have driven the development of more capable AI systems (bottom line).

Relentless Growth in AI Model Sizes

The computation and memory access required for AI training have increased by orders of magnitude in the past five years. Training GPT-3, for example, requires the equivalent of more than 5 billion billion operations per second of computation for an entire day (that's 5,000 petaflop-days) and 3 trillion bytes (3 terabytes) of memory capacity.
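As a rough sanity check, that compute budget can be converted into a raw operation count. The short Python sketch below uses only the figures quoted above; it shows that 5,000 petaflop-days corresponds to roughly 4 × 10²³ total operations.

```python
# Back-of-the-envelope check of the GPT-3 training figures cited above.
PETA = 1e15                       # operations per petaflop-second
SECONDS_PER_DAY = 24 * 60 * 60

petaflop_days = 5_000             # compute budget quoted in the text
total_ops = petaflop_days * PETA * SECONDS_PER_DAY

print(f"Sustained rate for one day: {petaflop_days * PETA:.0e} ops/s")  # 5e+18
print(f"Total operations: {total_ops:.2e}")                             # ~4.32e+23
```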

Both the computing power and the memory access needed for new generative AI applications continue to grow rapidly. We now need to answer a pressing question: How can semiconductor technology keep pace?

From Integrated Devices to Integrated Chiplets

Since the invention of the integrated circuit, semiconductor technology has been about scaling down feature size so that we can cram more transistors into a thumbnail-size chip. Today, integration has risen one level higher; we are going beyond 2D scaling into 3D system integration. We are now putting together many chips into a tightly integrated, massively interconnected system. This is a paradigm shift in semiconductor-technology integration.

In the era of AI, the capability of a system is directly proportional to the number of transistors integrated into that system. One of the main limitations is that lithographic chipmaking tools have been designed to make ICs of no more than about 800 square millimeters, what's called the reticle limit. But we can now extend the size of the integrated system beyond lithography's reticle limit. By attaching several chips onto a larger interposer (a piece of silicon into which interconnects are built), we can integrate a system that contains a much larger number of devices than what is possible on a single chip. For example, TSMC's chip-on-wafer-on-substrate (CoWoS) technology can accommodate up to six reticle fields' worth of compute chips, along with a dozen high-bandwidth-memory (HBM) chips.
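To get a feel for the numbers, the sketch below totals up the silicon area such a package can carry. The reticle limit and the chip counts come from the text; the per-stack HBM footprint is an assumed, illustrative value, not a published specification.

```python
# Silicon-area budget for a CoWoS-class package, using figures from the
# text. The per-stack HBM footprint is an assumption for illustration.
RETICLE_LIMIT_MM2 = 800           # maximum single-die area (from the text)
COMPUTE_FIELDS = 6                # reticle fields of compute chips (from the text)
HBM_STACKS = 12                   # "a dozen" HBM chips (from the text)
HBM_FOOTPRINT_MM2 = 110           # assumed ~11 mm x 10 mm per HBM stack

compute_area = COMPUTE_FIELDS * RETICLE_LIMIT_MM2
hbm_area = HBM_STACKS * HBM_FOOTPRINT_MM2
total = compute_area + hbm_area
print(f"Compute silicon: {compute_area} mm^2")            # 4800 mm^2
print(f"Total silicon:   {total} mm^2 "
      f"({total / RETICLE_LIMIT_MM2:.1f}x the reticle limit)")
```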

HBM is an example of the other key semiconductor technology that is increasingly important for AI: the ability to integrate systems by stacking chips atop one another, what we at TSMC call system-on-integrated-chips (SoIC). An HBM consists of a stack of vertically interconnected DRAM chips atop a control logic IC. It uses vertical interconnects called through-silicon vias (TSVs) to get signals through each chip and solder bumps to form the connections between the memory chips. Today, high-performance GPUs use HBM extensively.

Going forward, 3D SoIC technology can provide a "bumpless alternative" to the conventional HBM technology of today, delivering far denser vertical interconnection between the stacked chips. Recent advances have shown HBM test structures with 12 layers of chips stacked using hybrid bonding, a copper-to-copper connection with a higher density than solder bumps can provide. Bonded at low temperature on top of a larger base logic chip, this memory system has a total thickness of just 600 µm.
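A quick calculation suggests how aggressively the dies in such a stack must be thinned. Whether the base logic die is counted inside the 600-µm figure is an assumption in this sketch.

```python
# Average die thickness implied by the 600-um, 12-layer stack described
# above. Counting the base logic die in the total is an assumption.
TOTAL_THICKNESS_UM = 600
MEMORY_LAYERS = 12
BASE_DIES = 1                     # assumed: base logic die inside the total

avg_die_um = TOTAL_THICKNESS_UM / (MEMORY_LAYERS + BASE_DIES)
print(f"Average thickness per die: ~{avg_die_um:.0f} um")  # ~46 um
# A standard 300-mm wafer starts at roughly 775 um thick, so each die in
# the stack has been thinned to a small fraction of that.
```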

With high-performance computing systems composed of a large number of dies running large AI models, high-speed wired communication may quickly limit the computation speed. Today, optical interconnects are already being used to connect server racks in data centers. We will soon need optical interfaces based on silicon photonics that are packaged together with GPUs and CPUs. This will allow the scaling up of energy- and area-efficient bandwidths for direct, optical GPU-to-GPU communication, such that hundreds of servers can behave as a single giant GPU with a unified memory. Because of the demand from AI applications, silicon photonics will become one of the semiconductor industry's most important enabling technologies.

Toward a Trillion-Transistor GPU

As noted already, typical GPU chips used for AI training have already reached the reticle field limit, and their transistor count is about 100 billion devices. Continuing the trend of increasing transistor count will require multiple chips, interconnected with 2.5D or 3D integration, to perform the computation. The integration of multiple chips, either by CoWoS or SoIC and related advanced packaging technologies, allows for a much larger total transistor count per system than can be squeezed into a single chip. We forecast that within a decade a multichiplet GPU will have more than 1 trillion transistors.
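A minimal projection, under assumed growth figures, shows how chiplet count and per-chip density could combine to cross the trillion-transistor mark. Only the per-chip count comes from the text; the other two numbers are illustrative assumptions, not roadmap values.

```python
# Illustrative projection of system transistor count. The per-chip count
# is from the text; chiplet count and density gain are assumptions.
TRANSISTORS_PER_CHIP = 100e9      # ~100 billion today (from the text)
CHIPLETS = 6                      # assumed CoWoS-class multichiplet package
DENSITY_GAIN = 2.0                # assumed per-chip density growth over a decade

system_transistors = TRANSISTORS_PER_CHIP * CHIPLETS * DENSITY_GAIN
print(f"Projected system transistor count: {system_transistors:.1e}")  # 1.2e+12
```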

We'll need to link all these chiplets together in a 3D stack, but fortunately, industry has been able to rapidly scale down the pitch of vertical interconnects, increasing the density of connections. And there's plenty of room for more. We see no reason why the interconnect density can't grow by an order of magnitude, or even beyond.
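Because the connections form a two-dimensional grid, interconnect density scales with the inverse square of the pitch, so modest pitch reductions compound quickly. The pitch values in the sketch below are illustrative assumptions, not roadmap figures.

```python
# Vertical-interconnect density scales with the inverse square of the
# pitch. The pitch values below are illustrative assumptions.
def density_per_mm2(pitch_um: float) -> float:
    """Connections per square millimeter for a square grid at the given pitch."""
    return (1000.0 / pitch_um) ** 2

for pitch_um in (9.0, 3.0, 1.0):
    print(f"{pitch_um:>3} um pitch -> {density_per_mm2(pitch_um):>12,.0f} connections/mm^2")
# Shrinking the pitch from 9 um to 3 um alone yields a 9x density gain,
# already close to the order of magnitude discussed above.
```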

The Energy-Efficient Performance Trend for GPUs

So, how do all these innovative hardware technologies contribute to the performance of a system?

We can see the trend already in server GPUs if we look at the steady improvement in a metric called energy-efficient performance. EEP is a combined measure of the energy efficiency and speed of a system. Over the past 15 years, the semiconductor industry has increased energy-efficient performance about threefold every two years. We believe this trend will continue at historical rates. It will be driven by innovations from many sources, including new materials, device and integration technology, extreme ultraviolet (EUV) lithography, circuit design, system-architecture design, and the co-optimization of all these technology elements, among other things.
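Compounding that rate shows what is at stake: at threefold every two years, EEP grows by orders of magnitude over a decade. A small sketch of the arithmetic, using only the growth rate quoted above:

```python
# Compounding the EEP trend quoted above: about 3x every two years.
def eep_gain(years: float, factor: float = 3.0, period_years: float = 2.0) -> float:
    """Cumulative EEP multiplier after `years` at `factor` per `period_years`."""
    return factor ** (years / period_years)

print(f"Past 15 years: ~{eep_gain(15):,.0f}x")   # ~3,800x
print(f"Next 10 years: ~{eep_gain(10):,.0f}x")   # ~240x if the trend holds
```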

In particular, the EEP increase will be enabled by the advanced packaging technologies we've been discussing here. Additionally, concepts such as system-technology co-optimization (STCO), in which the different functional parts of a GPU are separated onto their own chiplets and built using the best-performing and most economical technologies for each, will become increasingly critical.

A Mead-Conway Moment for 3D Integrated Circuits

In 1978, Carver Mead, a professor at the California Institute of Technology, and Lynn Conway at Xerox PARC invented a computer-aided design method for integrated circuits. They used a set of design rules to describe chip scaling so that engineers could easily design very-large-scale integration (VLSI) circuits without much knowledge of process technology.

That same sort of capability is needed for 3D chip design. Today, designers need to know chip design, system-architecture design, and hardware and software optimization. Manufacturers need to know chip technology, 3D IC technology, and advanced packaging technology. As we did in 1978, we again need a common language to describe these technologies in a way that electronic design tools understand. Such a hardware description language gives designers a free hand to work on a 3D IC system design, regardless of the underlying technology. It's on the way: An open-source standard, called 3Dblox, has already been embraced by most of today's technology companies and electronic design automation (EDA) companies.

The Future Beyond the Tunnel

In the era of artificial intelligence, semiconductor technology is a key enabler for new AI capabilities and applications. A new GPU is no longer restricted by the standard sizes and form factors of the past. New semiconductor technology is no longer limited to scaling down the next-generation transistors on a two-dimensional plane. An integrated AI system can be composed of as many energy-efficient transistors as is practical, an efficient system architecture for specialized compute workloads, and an optimized relationship between software and hardware.

For the past 50 years, semiconductor-technology development has felt like walking inside a tunnel. The road ahead was clear, as there was a well-defined path. And everyone knew what needed to be done: shrink the transistor.

Now, we have reached the end of the tunnel. From here, semiconductor technology will get harder to develop. Yet, beyond the tunnel, many more possibilities lie ahead. We are no longer bound by the confines of the past.


