
Emerging Arm server chip company Ampere Computing recently unveiled its forthcoming products, which will feature up to 512 cores and an AI-specific processing unit.

Potential buyers will need to wait, however: the next-generation processor, named Aurora, won’t be available until 2026. Ampere’s current shipping product, AmpereOne, tops out at 192 cores, and the company plans to release a 256-core version dubbed AmpereOne MX next year. Aurora will also incorporate Ampere’s in-house AI processor.

“Utilizing our own Ampere AI IP, which we incorporate directly into the SOC alongside our interconnect and high bandwidth memory, we are strategically addressing essential AI use cases. Initially focusing on inference, we are also preparing for expansion into training,” stated Jeff Wittich, chief product officer at Ampere Computing, during a press conference.

The Aurora processor will also introduce a scalable AmpereOne Mesh. According to Ampere, this interconnect lets different types of compute be integrated seamlessly and includes a distributed coherence engine that maintains coherency across all nodes, purportedly tripling performance per rack compared with its current top-tier processor, AmpereOne.

Aurora is engineered to provide robust AI compute for workloads such as retrieval-augmented generation (RAG) and vector databases. According to Wittich, Aurora is not limited to large-scale cloud providers: it is versatile enough to support a broad range of enterprise applications and is designed to be easy to deploy in many different environments.
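To give a sense of the CPU-side work a RAG pipeline involves, here is a minimal sketch of the retrieval step: document embeddings are compared against a query by cosine similarity and the best matches are returned. The corpus size, embedding dimension, and random vectors are illustrative placeholders and have nothing to do with Ampere's software stack.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: given a query
# embedding, find the most similar document embeddings by cosine similarity.
# The vectors below are random stand-ins; a real system would use embeddings
# from a language model and a proper vector database.
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(10_000, 768))   # 10k documents, 768-dim embeddings
query = rng.normal(size=768)                      # embedding of the user's question

# Normalize so that a dot product equals cosine similarity.
doc_norm = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)

scores = doc_norm @ query_norm                    # similarity of every document to the query
top_k = np.argsort(scores)[-5:][::-1]             # indices of the 5 best matches

print("Top matching document IDs:", top_k)
print("Similarity scores:", scores[top_k])
```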

Wittich highlighted that Aurora’s design allows for air cooling, which lets it slot into any standard data center without upgrades such as liquid cooling systems. He also pointed to the energy efficiency of the AmpereOne product line, which keeps it within the power constraints of existing data centers.

He further explained, “77% of data centers globally have a maximum power draw per rack of under 20 kilowatts, with over half having less than 10 kilowatts. This means that larger setups, such as the Nvidia DGX unit, are not feasible for more than half of the current data centers.”
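To make the rack-power arithmetic concrete, the short sketch below checks how many systems of a given power draw fit under a rack's budget. The 10 kW and 20 kW budgets come from the quote above; the per-system draws are assumed, illustrative values rather than vendor specifications.

```python
# Rough rack-capacity check: how many servers of a given power draw fit under
# a rack's power budget? The 10 kW and 20 kW budgets come from the quote above;
# the per-system draws are illustrative assumptions, not vendor figures.
def systems_per_rack(rack_budget_kw: float, system_draw_kw: float) -> int:
    """Whole number of systems that fit without exceeding the rack budget."""
    return int(rack_budget_kw // system_draw_kw)

rack_budgets_kw = [10, 20]          # limits cited for most existing data centers
assumed_draws_kw = {
    "dense GPU training system (assumed ~10 kW)": 10.0,
    "air-cooled CPU inference server (assumed ~2 kW)": 2.0,
}

for budget in rack_budgets_kw:
    for label, draw in assumed_draws_kw.items():
        print(f"{budget} kW rack, {label}: fits {systems_per_rack(budget, draw)} unit(s)")
```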

Wittich concluded by stressing the importance of energy-efficient solutions that are adaptable to existing air-cooled data centers to ensure that AI technology is not restricted to specific locations or limited to a handful of companies.


