Ampere Computing has revealed its latest development, the AmpereOne Aurora, a processor with an integrated AI accelerator aimed at meeting the rising demands of cloud-native AI computing. The Aurora is designed for high efficiency and compatibility with existing data centers, making it a versatile solution for a range of AI applications.
The Aurora is a 512-core Arm processor that incorporates on-chip High Bandwidth Memory (HBM) to support AI training and inference tasks. It features the scalable AmpereOne Mesh, which facilitates seamless connectivity between different compute types. Additionally, the Aurora includes integrated Ampere AI IP, marking a new addition to Ampere’s product lineup.
This new chip is touted to significantly enhance AI compute capabilities, particularly for workloads such as retrieval-augmented generation (RAG) and vector databases. Ampere claims the Aurora delivers three times the performance per rack of its current flagship AmpereOne processors.
Because it supports air cooling, the Aurora can be deployed in any existing data center, catering to applications ranging from public cloud and hyperscale data centers to edge computing.
Details about the manufacturing process of the Aurora are sparse, but the company has indicated the use of “die-to-die interconnects across chiplets,” suggesting a modular approach with multiple chiplets integrated within its scalable mesh architecture. The chip includes custom cores designed for both general-purpose and AI-specific workloads, reflecting the growing convergence of AI compute in the cloud.
Ampere faces competition from established players like Intel and AMD, which dominate the data center market with their x86-64 chips. Ampere's emphasis on AI capabilities and cloud-native design aims to differentiate the Aurora from these larger competitors.
Meanwhile, Ampere has also announced pricing for its AmpereOne M processors, which range from $2,936 to $5,555, though there is no word yet on more budget-friendly models.