Amazon leans heavily on Arm Neoverse with the second-generation Graviton2, as well as custom silicon for AI inference.
Inside AWS Graviton: Arm servers available for the first time in the public cloud
Amazon’s development of a homegrown CPU, combined with new AMD EPYC-powered instances announced this month, threatens Intel’s hegemony in cloud computing and enterprise servers.
AWS CEO Andy Jassy touted the importance of the company's custom silicon during his keynote at re:Invent 2019 in Las Vegas: the Nitro hypervisor silicon, a second-generation Graviton CPU built on Arm Neoverse cores, and a new inference-oriented custom chip for AI workloads.
The Graviton2 processors feature custom 64-bit Neoverse cores built on a 7 nm process, designed in-house by Amazon's Annapurna Labs team, which was also responsible for the first-generation Graviton. Graviton2-powered instances offer up to 64 vCPUs, 25 Gbps enhanced networking, and 18 Gbps EBS bandwidth. ZDNet's Larry Dignan covers the second-generation Graviton in more detail.
SEE: AWS Lambda: a guide for the serverless computing framework (free PDF) (TechRepublic)
Graviton2-powered instances are available in three configurations:
- General Purpose (M6g and M6gd) – 1-64 vCPUs and up to 256 GiB of memory.
- Compute-Optimized (C6g and C6gd) – 1-64 vCPUs and up to 128 GiB of memory.
- Memory-Optimized (R6g and R6gd) – 1-64 vCPUs and up to 512 GiB of memory.
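For illustration, choosing among the three families above could be sketched as a small helper. This is a hypothetical function, not an AWS API; the family names and vCPU/memory ceilings come straight from the list above.

```python
# Hypothetical helper: pick a Graviton2 instance family for a workload.
# The maximums below are the per-family ceilings quoted in the article.
GRAVITON2_FAMILIES = {
    "compute": {"family": "C6g", "max_vcpus": 64, "max_mem_gib": 128},
    "general": {"family": "M6g", "max_vcpus": 64, "max_mem_gib": 256},
    "memory":  {"family": "R6g", "max_vcpus": 64, "max_mem_gib": 512},
}

def pick_family(vcpus: int, mem_gib: int) -> str:
    """Return the first Graviton2 family that fits the requirements,
    trying the lowest-memory family first."""
    for profile in ("compute", "general", "memory"):
        spec = GRAVITON2_FAMILIES[profile]
        if vcpus <= spec["max_vcpus"] and mem_gib <= spec["max_mem_gib"]:
            return spec["family"]
    raise ValueError("requirements exceed Graviton2 instance limits")

print(pick_family(16, 64))   # fits Compute-Optimized -> C6g
print(pick_family(8, 384))   # needs Memory-Optimized -> R6g
```

The "d" variants (M6gd, C6gd, R6gd) add local NVMe storage but have the same vCPU and memory ceilings.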
Graviton2 instances "can deliver up to 7x the performance of the A1 instances, including twice the floating point performance. Additional memory channels and double-sized per-core caches speed memory access up by up to 5x," said Jeff Barr, AWS chief evangelist, in a blog post.
Barr also touted the performance gains from Graviton to Graviton2:
- SPECjvm® 2008: +43% (estimated)
- SPEC CPU® 2017 integer: +44% (estimated)
- SPEC CPU® 2017 floating point: +24% (estimated)
- HTTPS load balancing with NGINX: +24%
- Memcached: +43% performance, at lower latency
- x264 video encoding: +26%
- EDA simulation with Cadence Xcelium: +54%
The Nitro hypervisor silicon makes some of Graviton2's gains possible. Although Nitro has been around for years, AWS has redesigned its infrastructure around it, and Graviton2 was explicitly built with Nitro in mind.
Jassy also introduced Inf1 instances for EC2, billed as delivering "the fastest inference in the cloud" with low latency, 3x higher throughput, and up to 40% lower cost per inference compared to G4 instances, with out-of-the-box support for TensorFlow, PyTorch, and MXNet. Inf1 is now available in EC2, with support for EKS and SageMaker.
Amazon's continued interest in building its own silicon is a problem for a number of hardware vendors, particularly Intel, given Intel's own struggles transitioning to 10 nm. The long-standing king of enterprise compute now faces attacks from all sides. AMD's reversal of fortune with the Zen architecture, and the Zen-powered instances now offered by AWS, Azure, and Oracle, makes it the obvious alternative to Intel: the two share the x86-64 instruction set, so migration requires nothing more than changing the instance type.
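Graviton2, by contrast, is an Arm (aarch64) CPU, so moving a workload to it means rebuilding binaries or pulling Arm-compatible packages rather than simply swapping instance types. A quick runtime check of the architecture, for example, might look like this (a minimal sketch using Python's standard library):

```python
import platform

# On a Graviton (Arm) instance, platform.machine() reports "aarch64";
# on Intel and AMD x86 instances it reports "x86_64".
arch = platform.machine()
if arch == "aarch64":
    print("Running on Arm (e.g. Graviton): use aarch64 builds and wheels")
else:
    print(f"Running on {arch}: x86 binaries apply")
```

Interpreted workloads (Python, Node.js, JVM) generally move with little friction; anything with native dependencies needs aarch64 builds.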
Ultimately, Graviton2's success will be determined by pricing: if Graviton2 instances are priced significantly below comparable Intel and AMD instances, users will adopt them. If not, uptake is likely to be low.