AMD’s Xilinx-enhanced Epycs will get datacenters’ attention • The Register

Comment AMD’s plans to integrate AI functionality from its Xilinx FPGAs with its Epyc server microprocessors present several tantalizing opportunities for systems builders and datacenter operators alike, Glenn O’Donnell, research director at Forrester, told The Register.

A former semiconductor engineer, O’Donnell leads Forrester’s datacenter and networking studies. He sees several benefits to the kind of tight integration at the die or package level promised by AMD’s future CPUs.

“The more you can put on the same die or on the same package, the better,” he said.

Greener datacenters

One of the biggest benefits of integrating dedicated accelerators — like Xilinx’s AI engine — onto the CPU package is power consumption. It takes a lot of power to bring data on and off the chip, O’Donnell said. “If you can do it on chip or on package, it’s going to be a lot more efficient.”

And power consumption is a major concern for OEMs and datacenter operators, many of which have announced sweeping carbon neutrality goals in recent years.

So it’s no surprise that AMD CTO Mark Papermaster suggested domain-specific processors — like Xilinx FPGAs — will play a key role in meeting the company’s ambitious goal of delivering a 30-fold increase in power efficiency across its high-performance compute portfolio by 2025.
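To put that target in perspective, here's a back-of-envelope check of the compound annual improvement a 30x gain implies. This sketch is ours, not AMD's, and assumes the 2020 baseline year the company has described for the goal:

```python
# AMD's "30x by 2025" efficiency goal: a 30-fold gain in power
# efficiency over five years (assumed baseline: 2020).
# What compound annual improvement does that require?

target_gain = 30.0
years = 5  # 2020 -> 2025

annual_gain = target_gain ** (1 / years)
print(f"Required gain per year: {annual_gain:.2f}x")  # ~1.97x -- roughly doubling every year
```

In other words, the portfolio's performance-per-watt has to nearly double every year to hit the target, which is why offloading work to domain-specific silicon matters so much.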

Greater power efficiency has other benefits that indirectly contribute to lower datacenter operating costs. The less efficient the chip, the greater the ratio of power that gets turned into waste heat, O’Donnell explained.

“Something that’s hard to cool means a lot of the electrical power you’re drawing is just going up in smoke. It’s generating heat instead of doing compute,” he said. “The more we can shift that towards compute and away from heat, that’s better for everybody, especially the planet.”

As much as 40 percent of datacenter power consumption today can be directly attributed to keeping the systems cool, Dell’Oro Group analyst Lucas Beran told The Register.
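As a rough illustration of what that 40 percent figure implies for a facility's Power Usage Effectiveness (PUE), here's a back-of-envelope sketch. It is our simplification, not Dell'Oro's analysis, and assumes everything other than cooling counts as IT load:

```python
# Back-of-envelope: if cooling consumes 40% of a facility's total power,
# what does that imply for its Power Usage Effectiveness?
# PUE = total facility power / IT equipment power.

cooling_share = 0.40            # fraction of total power spent on cooling
it_share = 1.0 - cooling_share  # simplifying assumption: the rest is IT load

pue = 1.0 / it_share            # total / IT
print(f"Implied PUE: {pue:.2f}")  # ~1.67
```

For comparison, the best hyperscale facilities report PUE figures close to 1.1, which shows how much headroom more efficient chips and cooling could claw back.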

And as chipmakers push toward greater power densities necessary to meet surging demand for AI/ML and data-intensive workloads, the thermal design power continues to creep upward. Nvidia’s latest GPUs, for example, are now available in configurations up to 700 watts.

“The appetite customers have for this kind of power is not dwindling by any stretch,” O’Donnell added.

The Edge and the Arm threat

While Epyc chips with integrated AI processing may soon find their way into the datacenter, O’Donnell finds it more likely the tech will take off at the edge, where power consumption and thermal characteristics are often the limiting factor.

“That’s where a lot of the demand is going to go. Datacenter is big, but I think edge is going to be much, much bigger — orders of magnitude bigger,” he said.

But for this technological gold rush to work, “we need the right kind of compute capability out at the edge, and it’s not necessarily the same kind of thing that’s going to be in the datacenter,” O’Donnell added.

Here, AMD and its contemporaries face an existential threat: highly efficient, Arm-based processors. “When you look at what’s going on, having an Arm-based architecture, in some ways, has a competitive advantage because at the edge, power consumption becomes even more important,” O’Donnell said. “I’ve had big conversations with the big chipmakers about the Arm threat and they’re taking it very seriously.”

“I don’t see AMD versus Intel or Nvidia as being the big killer battlefront. I see Arm being the big killer battlefront, precisely because it’s more energy efficient,” he added.

Is Pensando next?

Xilinx’s AI engines might be the first to get integrated into AMD’s processors, but it’s unlikely to be the last.

Last month, AMD announced it would acquire networking startup Pensando in a deal valued at $1.9 billion.

While the purchase better positioned AMD to compete with Intel and Nvidia in the smartNIC and data processing unit (DPU) space, Forrest Norrod, head of AMD’s Data Center Solutions Group, previously hinted the technology could be integrated into the chipmaker’s CPUs.

“It certainly makes a lot of sense,” O’Donnell said. “Why would you buy a company like that and not try to do something like that?”

Pensando’s DPU tech has the potential to vastly improve chip interconnects, enabling much denser compute platforms, he added.

“If you can put that interconnect down as close to the silicon as possible, you now have an interconnect for multiprocessor systems that blows the doors off anything that exists today,” O’Donnell said. “I really do believe we’re going to see a lot of the same software-defined networking concepts coming right down to the motherboard.”

“You could get some real screaming performance out of a system like that,” he said.

Intel and Nvidia aren’t far behind

AMD is by no means the only vendor looking to integrate additional co-processors or accelerators onto the CPU package.

Intel’s Sapphire Rapids Xeon Scalables — assuming they don’t get delayed again — will see Intel pivot to a chiplet architecture in the first half of 2022. The transition is expected to help Intel achieve core densities more in line with rival AMD’s, but it’s not the only reason the company has embraced a tiled architecture.

In addition to CPU tiles, the company is exploring a number of accelerator tiles, including GPUs, that can be packaged alongside the CPU as a dedicated chiplet die.

Meanwhile, Nvidia announced at GTC this spring that it had integrated a ConnectX-7 smartNIC into its H100 line of GPUs in a bid to eliminate network bottlenecks in applications like multi-node AI training and 5G signal processing. ®
