Intel, Google, Microsoft, Meta, and other leading technology companies have come together to form a new industry consortium, the Ultra Accelerator Link (UALink) Promoter Group, aimed at advancing the development of components that interconnect AI accelerator chips within data centers.
Announced on Thursday, the UALink Promoter Group includes notable members such as AMD, Hewlett Packard Enterprise, Broadcom, and Cisco, although Arm is not part of the group. The coalition is advocating for a new industry standard to facilitate the connection of AI accelerator chips, which are increasingly prevalent in servers. AI accelerators encompass a range of chips, from GPUs to bespoke solutions designed to expedite the training, optimization, and execution of AI models.
Forrest Norrod, AMD’s General Manager of Data Center Solutions, emphasized the necessity of an open standard during a briefing on Wednesday. “The industry needs an open standard that can progress rapidly, in a format that allows multiple companies to contribute to the ecosystem,” he said. “We require a standard that fosters innovation without being hindered by any single entity.”
The inaugural version of the proposed standard, UALink 1.0, aims to connect up to 1,024 AI accelerators (GPUs only) within a single computing "pod," which can span one or more server racks. Drawing on open standards including AMD's Infinity Fabric, UALink 1.0 will allow direct memory loads and stores between the attached AI accelerators, which the UALink Promoter Group says will increase speed and reduce data-transfer latency compared with existing interconnect specifications.
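UALink's programming interface has not been published, but the load/store semantics the group describes resemble the peer-to-peer memory access that GPUs already expose within a single server. The sketch below is a rough analogue using CUDA's existing peer-access API, not UALink itself; the device indices, buffer size, and kernel are purely illustrative.

```cuda
// Illustrative analogue only: UALink's programming model is not public.
// This uses CUDA peer-to-peer access to show the kind of direct load/store
// between accelerators that an accelerator interconnect enables.
#include <cstdio>
#include <cuda_runtime.h>

// Kernel running on device 0 that reads directly from a buffer on device 1;
// each load travels over the inter-accelerator link rather than host memory.
__global__ void read_remote(const float* remote, float* local, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) local[i] = remote[i];
}

int main() {
    const int n = 1 << 20;
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);   // can device 0 reach device 1?
    if (!can_access) { printf("peer access unavailable\n"); return 0; }

    float *buf_dev1 = nullptr, *buf_dev0 = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&buf_dev1, n * sizeof(float));      // buffer resident on device 1
    cudaMemset(buf_dev1, 0, n * sizeof(float));

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);              // map device 1's memory for device 0
    cudaMalloc(&buf_dev0, n * sizeof(float));

    read_remote<<<(n + 255) / 256, 256>>>(buf_dev1, buf_dev0, n);
    cudaDeviceSynchronize();

    cudaFree(buf_dev0);
    cudaSetDevice(1);
    cudaFree(buf_dev1);
    return 0;
}
```

The pitch for UALink, as the group describes it, is that this kind of direct load or store could target memory on any of up to 1,024 accelerators in a pod spanning multiple racks, rather than only peers attached to the same server.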