AMD releases new chips to power faster AI training - The Verge

Lisa Su onstage with MI300X processors
Image: PaulSakuma.com

AMD wants people to remember that Nvidia’s not the only company selling AI chips. It’s announced the availability of new accelerators and processors geared toward running large language models, or LLMs. 

The chipmaker unveiled the Instinct MI300X accelerator and the Instinct MI300A accelerated processing unit (APU), which the company says are built to train and run LLMs. The company said the MI300X has 1.5 times more memory capacity than the previous MI250X. Both new products offer greater memory capacity and better energy efficiency than their predecessors, AMD said.

“LLMs continue to increase in size and complexity, requiring massive amounts of memory and compute,” AMD CEO Lisa Su said. “And we know the availability of GPUs is the single most important driver of AI adoption.”

Su said during a presentation that MI300X “is the highest performing accelerator in the world.” She claimed MI300X is comparable to Nvidia’s H100 chips in training LLMs but performs better on the inference side — 1.4 times better than H100 when working with Meta’s Llama 2, a 70 billion parameter LLM. 

AMD partnered with Microsoft to put the MI300X in its Azure virtual machines. Microsoft CTO Kevin Scott, a guest during Su’s speech, also announced that the Azure ND MI300X virtual machines, first revealed in November, are now available in preview. Meta also announced it will deploy MI300 processors in its data centers.

Su said AMD released the MI300A APU for data centers, which she said is expected to grow the company’s total addressable market to $45 billion. APUs generally combine CPUs and GPUs for faster processing. AMD said the MI300A offers high-performance computing, faster model training, and a 30-fold improvement in energy efficiency. Compared to the H100, AMD said it has 1.6 times the memory capacity. It also features unified memory, so data no longer needs to be shuttled between separate devices.

MI300A will power the El Capitan supercomputer built by Hewlett Packard Enterprise at the Lawrence Livermore National Laboratory. El Capitan is considered one of the most powerful supercomputers and is expected to deliver more than two exaflops of performance. 

The MI300A APU, Su said, “is now in production and is being built into data centers.”

Pricing information was not immediately available.

Su teased the MI300 chips during the Code Conference, saying AMD was excited about the opportunity to tap more chip users, not just from cloud providers but from enterprises and startups. 

AMD also announced the latest addition to its Ryzen processors, the Ryzen 8040, which can put more native AI functions into mobile devices. The company said the 8040 series offers 1.6 times more AI processing performance than previous models and integrates neural processing units (NPUs). 

The company said Ryzen 8040 would not be limited to AI processing, as it claimed video editing would be 65 percent faster and gaming would be 77 percent faster than with competing products like Intel’s chips. 

AMD expects manufacturers like Acer, Asus, Dell, HP, Lenovo, and Razer to release products integrating Ryzen 8040 chips in the first quarter of 2024. 

Su said the next generation of its Strix Point NPUs will be released in 2024. 

AMD also announced that the Ryzen AI Software Platform is now widely available. It lets developers building AI applications on Ryzen-powered laptops offload models onto the NPU, freeing the CPU and reducing power consumption. Users will get support for foundation models like the speech recognition model Whisper and LLMs like Llama 2.
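In practice, that offloading path runs through ONNX Runtime, which the Ryzen AI software stack builds on. The snippet below is a minimal sketch, not AMD's official sample: it assumes a hypothetical exported model file (whisper_encoder.onnx) and an installed Vitis AI execution provider, and simply requests the NPU-backed provider with a CPU fallback.

    # Minimal sketch: ask ONNX Runtime for the NPU-backed Vitis AI provider,
    # falling back to the CPU if it is not available on this machine.
    # "whisper_encoder.onnx" is a hypothetical placeholder model path.
    import onnxruntime as ort

    session = ort.InferenceSession(
        "whisper_encoder.onnx",
        providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
    )
    # Shows which execution providers the session actually selected.
    print(session.get_providers())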

To power AI models, and to capitalize on the current hype around the technology, companies like AMD, Nvidia, and Intel have been locked in what is essentially an AI chip arms race. So far, Nvidia has captured the largest market share with its highly coveted H100 GPUs, which are used to train models like OpenAI’s GPT.
