Computing

AMD Lines Up Alternate Chips as It Eyes a ‘Post-exaflops’ Future

Close to a decade ago, AMD was in turmoil. The company was playing second fiddle to Intel in PCs and datacenters, and its road to profitability hinged mostly on its chips appearing in Microsoft’s Xbox One and Sony’s PlayStation 4.

Around the same time, AMD set a goal to be the first chipmaker to pass the exaflops computing threshold, a milestone it reached when the Frontier supercomputer topped the most recent Top500 supercomputing list, CEO Lisa Su said while reflecting on the company's past during the financial analyst day held this week.

AMD reversed course and returned to profitability, and presentations at the analyst day hinted at the company learning from the rough past. The company also laid out an ambitious roadmap in which it is diversifying risk by investing in a wider set of chips beyond just CPUs and GPUs.

Lisa Su

The journey back to profitability included a bet on its x86 Zen CPU design, a recommitment to the server market after flailing with its Opteron server line and abandoning the Arm architecture, and the acquisitions of Xilinx and Pensando, which are already profitable.

Su pointed to the Frontier supercomputer as the crown jewel of the hard work the company put in to reshape its image as a leader in computing technologies, one that can go neck and neck with rivals.

“We are the first company to deliver an exaflop or more of computing horsepower,” Su said, adding that “it absolutely was not easy. It was not easy at all. But it was a long-term vision. We set out on that path to break the exaflop barrier almost 10 years ago.”

Su noted that the company looks very different than it did at the last in-person analyst meeting, held in March 2020. This year the company added programmable chips from Xilinx and the building blocks for networking and data processing from Pensando.

“We started in the datacenter with one product,” Su said, adding “but the world has gotten a lot more complicated and a lot broader with all the workloads.”

The company's Epyc CPUs and Instinct GPUs for general-purpose computing are at the heart of the Frontier supercomputer, and they remain centerpieces of the company's datacenter roadmap. But the roster of alternative chips is growing with Xilinx AI engines and FPGAs, which are part of its XDNA family targeted at AI and cloud workloads.

“With the Xilinx acquisition, AMD is becoming more and more of an infrastructure company servicing the datacenter and the edge infrastructure. It’s a much higher-margin business than the consumer and PC markets, and more strategic to OEMs,” said Patrick Moorhead, principal analyst at Moor Insights and Strategy.

AMD will continue servicing the consumer markets and PCs as it provides the scale required to make the datacenter and infrastructure edge work, Moorhead said.

AMD already has good relations with all the datacenter companies, and bringing Pensando and some Xilinx datacenter products into the mix gives it more sales channels, said Linley Gwennap, chief analyst at TechInsights.

“AMD was talking about going to some of their customers and saying ‘look, we can offer this complete package of datacenter components with CPUs, GPUs and network DPUs.’ That gives their customers more stuff that they can bundle together and more stuff that they can buy in one fell swoop,” Gwennap said.

Su said the custom chip market will also grow as computing needs diverge to meet specific organizational requirements. AMD executives spent time explaining the company's modular chip future, in which customers can mix and match different types of processors in a custom chip package for hybrid computing.

The custom chip strategy will revolve around the newly announced Infinity Architecture 4.0, the interconnect that allows different types of chips – also called chiplets – to be integrated in a single package.

The Infinity fabric, which is built on AMD’s proprietary tech, will be compatible with the CXL (Compute Express Link) 2.0 rack-level interconnect, and will be extensible to support UCIe (Universal Chiplet Interconnect Express), a chiplet-level interconnect. UCIe is backed by the likes of Intel, AMD, Arm, Google, Meta and others. Meta plans to power its metaverse offerings with AMD Epyc chips, and Google Cloud is offering more instances on AMD’s chips.

AMD will be architecture agnostic, and will allow the integration of x86 and Arm designs inside chiplet packages. AMD did not respond to a request for comment on whether it would allow the integration of chips based on the RISC-V instruction set architecture.

“We are focused on making it easier to implement chips with more flexibility,” which includes AMD’s own CPUs, GPUs, high-performance I/O, networking gear and accelerators, said Mark Papermaster, chief technology officer at AMD, during a presentation.

The chiplet strategy and Infinity architecture are a part of AMD’s CDNA architecture for high-performance computing. AMD said the first CDNA 3 architecture-based products are planned for 2023 and will deliver five times more performance than CDNA 2.

AMD’s top-layer datacenter roadmap is based on the upcoming Zen 4 (Genoa and Genoa-X) CPUs and Instinct GPUs, but also on Xilinx’s AI chips and FPGAs, which can be remodeled for functions such as AI or networking with just a software update. Register-transfer level (RTL) code defines the function of an FPGA.

Victor Peng, president of the adaptive and embedded computing group, and formerly CEO of Xilinx, focused his presentation on the AI Engine, an integrated chip that can be scaled up and down for applications such as AI, a field in which the size of neural network models is growing quickly. The AI Engine includes programmable compute engines alongside other components such as networking, memory and CPUs.

“You can think of it as a tiled array architecture and each tile has a very powerful execution engine, together with local memory and local data movement. And it scales really well because… you could scale up or down depending on the performance, power and cost point you’re trying to hit,” Peng said.
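Peng's description can be illustrated with a toy model (a hypothetical Python sketch, not AMD code or an actual AI Engine API): each tile owns local memory and an execution engine, a workload is partitioned across the tiles, and the array scales by simply changing the tile count.

```python
# Toy model of a tiled compute array (illustrative only, not AMD's design):
# each tile holds a slice of the workload in "local memory" and runs its own
# execution engine; the array scales up or down by changing the tile count.

class Tile:
    def __init__(self, data):
        self.local_memory = data  # each tile sees only its slice

    def execute(self):
        # Stand-in "execution engine": sum of squares over local data.
        return sum(x * x for x in self.local_memory)

def run_on_array(workload, num_tiles):
    # Partition the workload across tiles (round-robin for simplicity).
    slices = [workload[i::num_tiles] for i in range(num_tiles)]
    tiles = [Tile(s) for s in slices]
    # Combine the per-tile partial results.
    return sum(t.execute() for t in tiles)

workload = list(range(100))
# The answer is identical with 2 tiles or 16; on real hardware only the
# performance, power and cost point would differ.
assert run_on_array(workload, 2) == run_on_array(workload, 16)
```

The point of the sketch is the scaling property Peng describes: the tile count is a deployment knob, not part of the program's logic.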

The company has rolled out the AI Engine in some 7nm Versal adaptive chips for markets that include AI, telecommunications, automotive and high-performance computing. The next adaptive system-on-chip will be made using the 3-nanometer process, and is scheduled for 2025, according to a product roadmap shown during Peng’s presentation.

“It’s not as critical to move to the absolute leading-edge node right away as long as we’re continuing to deliver capability and value,” Peng said.

The Xilinx and Pensando acquisitions also brought a number of networking assets, which include the Solarflare NICs, Alveo FPGA-based network accelerators and other Pensando hardware and software gear, said Forrest Norrod, senior vice president and general manager for the datacenter solutions business group at AMD.

Elba DPU. Source: AMD.

With Pensando, AMD added a data-processing unit called Elba, which Norrod called “the world’s most intelligent DPU.”

Elba is a network packet-processing engine programmable in the P4 language, which can define specific microservices related to storage, networking, firewalling, network security and telemetry. The DPU, which is in its second generation and has 144 P4 packet-processing units, largely runs software functions programmed into the chip using the P4 language.

Programs written in P4 can process and route high-frequency data packets while handling security and tracking where packets go. P4 is seen as a software-defined replacement for the traditional networking chip market, which had split into high-bandwidth switching hardware and highly programmable routing hardware. Software-defined deployments based on P4 allow for faster rollouts, easy addition of features and quick bug fixes, and reduce the time and cost involved in developing fixed-function network hardware.

P4 is also highly parallel and can run multiple services at the same time, making it relevant in heterogeneous computing environments.
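The software-defined model described above boils down to P4's match-action abstraction: packets flow through a pipeline of tables that match on header fields and apply actions, and features are added by installing table entries or stages rather than respinning hardware. The following is a rough Python sketch of that idea under simplifying assumptions; it is not real P4 code and not Pensando's implementation.

```python
# Sketch of a P4-style match-action pipeline (illustrative Python, not P4):
# each stage matches a packet field against a table and applies an action.
# New features are added by installing entries or stages, not new silicon.

def firewall(pkt, entry):
    pkt["verdict"] = entry  # e.g. "allow" or "drop" (security service)
    return pkt

def telemetry(pkt, entry):
    pkt.setdefault("hops", []).append(entry)  # track where the packet goes
    return pkt

class MatchActionStage:
    def __init__(self, match_field, action):
        self.match_field = match_field
        self.action = action
        self.table = {}  # match key -> action data, installed at runtime

    def install(self, key, entry):
        self.table[key] = entry

    def process(self, pkt):
        entry = self.table.get(pkt.get(self.match_field))
        # A miss leaves the packet unchanged in this toy model.
        return self.action(pkt, entry) if entry is not None else pkt

# Build a two-stage pipeline: security first, then telemetry.
fw = MatchActionStage("src", firewall)
tm = MatchActionStage("dst", telemetry)
fw.install("10.0.0.5", "drop")     # hypothetical rule for illustration
tm.install("10.0.0.9", "switch-3") # hypothetical next hop

pkt = {"src": "10.0.0.5", "dst": "10.0.0.9"}
for stage in (fw, tm):
    pkt = stage.process(pkt)
# pkt now carries both the firewall verdict and the telemetry record
```

On a real DPU the stages run in dedicated packet-processing units rather than a Python loop, but the programming model is the same: updating the tables reprograms the service without touching the hardware.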

Elba can be “updated in place with no disruption,” Norrod said. The Pensando team has also created a complete software stack that addresses every modern workload in the data center and can be tweaked so customers can add their own functionality, Norrod said.

