How AMD Came From Behind to Mount a Challenge in the AI Chip Wars

When Lisa Su took over as chief executive of chip company Advanced Micro Devices in 2014, the company’s market value was just under $3 billion.
Today, it is worth more than $330 billion, a more-than-hundredfold increase that reflects how deftly AMD has pivoted from a strategy of mainly producing graphics cards for gaming and personal-computer processors to more tightly focusing on the data-center chips that power the artificial-intelligence revolution.
AMD’s share price rose 24% on Monday after the company announced a partnership with OpenAI, maker of the popular consumer AI model ChatGPT. Under the terms of the deal, OpenAI will buy tens of thousands of AMD chips to power 6 gigawatts of computing capacity for inference functions, which allow AI models to respond to user queries.
The deal has given rocket fuel to AMD’s share price and the company’s ambitions to compete with rival chip designer Nvidia, which is by far the dominant competitor in the AI semiconductor industry.
Monday’s deal specifies that OpenAI will be issued warrants for 160 million shares of AMD stock, at a nominal price of 1 cent a share, that vest as OpenAI hits certain deployment targets and as AMD’s share price rises.
The final tranche of shares will be granted only if AMD’s stock hits $600 a share, which would give AMD a trillion-dollar valuation.
For now, with a market capitalization of $4.5 trillion, Nvidia is nearly 14 times the size of AMD, and most analyst estimates peg its market share for the graphics processing units, or GPUs, that power AI training and inference at more than 75%.
But in addition to AMD, Nvidia faces pressure from companies such as Broadcom, which produces application-specific custom chips for customers such as OpenAI, and even from large customers themselves, some of which have begun designing their own chips.
The OpenAI deal might have shifted the balance in AMD’s favor somewhat. How AMD got to this inflection point is a combination of careful strategic planning and being in the right place at the right time.
“Over the last few years, what’s been important is for us to understand the workloads that would really drive next-generation AI, training and inference,” Su said in an interview. “This deal is a huge expansion of the work that we’re doing.”
For much of the past decade, AMD’s archrival has been Intel, the troubled chip designer and manufacturer that recently received major investments from Nvidia and the U.S. government. On the strength of popular designs for the graphics chips used in the PlayStation and Xbox gaming systems and the CPUs—or main computer brains—used in consumer PCs, AMD has steadily eaten away at Intel’s market share for years.
Intel, meanwhile, was bogged down by a costly effort to turn around its chip-fabrication business. AMD spun off its manufacturing business, now known as GlobalFoundries, in 2009, while Intel has continued to pour money into its unprofitable foundry segment even as it fell badly behind more technologically advanced rivals such as Taiwan Semiconductor Manufacturing.
In 2018, AMD pivoted sharply to cloud computing, launching its Instinct line of data-center GPUs, its first chips designed for AI workloads. Since then, AMD has struggled to keep up with Nvidia, which has dominated not just the AI chip space but also the software systems required to run large data-center clusters.

For the past few years, as AI labs rushed to perfect their latest models, demand has surged for powerful chips that can be used to train those models on billions or even trillions of input parameters. Now, however, demand has shifted to inference functions, rather than training, as companies seek AI tools that are more useful in the worlds of business, entertainment and research. These applications tend to be more lucrative, as well.
“Compute has been more skewed toward training in the past, and in the coming years, it’s going to shift much more toward inference, as demand for these AI services grows,” said Jacob Feldgoise, an AI researcher at the Georgetown University Center for Security and Emerging Technology. “AMD has been increasingly trying to position itself as a preferred provider of solutions for inference.”
Su, the AMD chief, as well as OpenAI’s top executives, agreed that demand for inference will be the main driver of AI infrastructure, and argued that as the AI industry grows, any company that offers computing power to developers will see big benefits.
AMD has a few crucial advantages that can help in its quest to grab more market share from Nvidia. Its chips are generally less expensive than Nvidia’s, and their efficiency and quality are improving. There is also the question of availability: Because Nvidia’s chips are widely considered the best available, competition to buy them is fierce. The surge in demand opens up an opportunity for AMD to offer its own products as a more affordable and readily available alternative.
“We really believe that the world is underestimating the demand for AI inference and that we’re heading to a world where there just simply is not enough,” said Greg Brockman, president and co-founder of OpenAI. “It’s a very positive-sum market, where people are just not building enough. There’s not going to be enough chips.”
Write to Robbie Whelan at robbie.whelan@wsj.com
Corrections & Amplifications

An earlier version of this article misspelled the last name of AI researcher Jacob Feldgoise as Felgoise.

