AWS has enjoyed a continuous run of strong growth, maintaining mid-double-digit growth rates even as Amazon's hyperscale cloud business ripped past the $10 billion quarterly revenue mark. This success has been largely attributed to what AWS CEO Andy Jassy consistently describes as meeting customers where they are—delivered, more specifically, through what feels like an endless rollout of new products, services, instances, and features.
Essentially, what started as an e-commerce company setting out to solve a storage problem for companies moving data to the cloud has turned into the world's largest cloud infrastructure provider, with a rapidly diversifying strategy taking it well beyond IaaS into PaaS, SaaS, and more.
AWS Identifies AI as a Big Opportunity
Artificial intelligence is a key focus for enterprises today. With data growing at an exponential rate, companies are rapidly seeking to do more with it, which sets up a strong case for investment in technology that enables better organization, enrichment, management, and deployment of data to meet business needs. It is also a natural opening to implement AI/ML services across both the hardware and software layers. As I see it, AWS has been steadfast in recognizing this opportunity and delivering a broad set of instances that support enterprise requirements for today's and tomorrow's AI needs.
At this year's AWS re:Invent, Jassy kicked off the event by arguing that AI is no longer an experiment confined to a small cross-section of the enterprise. Rather, it is a mainstream capability that businesses are compelled to increase investment in. The company sees strong adoption of SageMaker, with what Jassy described as tens of thousands of customers now utilizing the fully managed platform for building, training, and deploying machine learning models. SageMaker is one of the three AI pillars that AWS focuses on; the others are infrastructure and frameworks, and its growing portfolio of AI services. All of these pieces come together to form a complete stack for enterprises seeking to scale AI deployments.
This talk of growth in AI was backed up by a trove of announcements that further expanded the company's AI portfolio at the event. These included a revamp of SageMaker; a series of applied AI services for business, including BI via the expanded QuickSight Q service, updates to Redshift, and the launch of Amazon Connect Wisdom for the contact center; and, on the silicon side, new chip partnerships with Intel's Habana and new homegrown silicon for AI training that it calls Trainium. Trainium expands the company's lineup of homegrown instances, which already includes its popular AI inference chip, Inferentia, and its popular Arm-based Graviton compute instances.
Completeness and Openness the Hallmarks of AWS’ AI Strategy
In the coming years, we can expect the AI market to become increasingly crowded with companies seeking to deliver on the promise of applied AI and supporting infrastructure. This will come via the software, platform, and infrastructure layers, much as we have seen with the proliferation of other compute-intensive workloads.
As I see it, AWS is taking a strong approach that will position the company well in the long term. Its complete portfolio, with solutions addressing the entire stack coupled with more applied solutions and vertical tools, gives most of its enterprise customers the full set of tools needed to deploy AI at scale across the business. This builds seamlessly on the company's broader hybrid architectures, which have matured rapidly in the past few years with greater capabilities for hybrid and multi-cloud, along with container-based developer tools that make data migration and workload deployment more robust and simpler to manage.
I also believe AWS has been steadfast in building a complete infrastructure and framework set for AI that at this point is second to none. From core to edge, with a diverse set of hardware combinations, AWS appears to treat the hardware layer as open to the best available silicon. This has been evident in the company's consistent adoption of the newest NVIDIA hardware, including its EC2 P4d instances, which support NVIDIA's latest A100 GPUs. It has also announced support for Intel's Habana accelerators for training and inference, alongside the continued development and deployment of its homegrown Inferentia and Trainium chips. This complete and open set of AI hardware instances for training and inference shows that choice is a key consideration for AWS when it comes to AI. It aligns with Jassy's "meet the customer" mentality, giving users more flexibility and more paths to apply the right infrastructure and framework to each specific use case.
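To make the choice concrete, the sketch below maps workload type and accelerator preference to the instance families named above (P4d, Trainium's Trn1, Inferentia's Inf1, Graviton). This is a simplified, hypothetical helper for illustration only; real instance selection depends on model size, framework support, latency, and cost targets, and the `pick_instance_family` function and its mapping are my own construction, not an AWS API.

```python
# Illustrative mapping from (workload, accelerator preference) to the EC2
# instance families discussed in the article. Hypothetical helper -- not an
# AWS SDK call; family names are real, the selection logic is a sketch.
INSTANCE_FAMILIES = {
    ("training", "nvidia"): "p4d",   # NVIDIA A100 GPU instances
    ("training", "aws"): "trn1",     # AWS Trainium (announced at re:Invent)
    ("inference", "aws"): "inf1",    # AWS Inferentia
    ("general", "arm"): "c6g",       # Arm-based Graviton2 compute
}

def pick_instance_family(workload: str, accelerator: str) -> str:
    """Return an instance family for a workload, or raise if unmapped."""
    try:
        return INSTANCE_FAMILIES[(workload, accelerator)]
    except KeyError:
        raise ValueError(
            f"no mapping for workload={workload!r}, accelerator={accelerator!r}"
        )

if __name__ == "__main__":
    print(pick_instance_family("training", "nvidia"))  # p4d
    print(pick_instance_family("inference", "aws"))    # inf1
```

The point of the sketch is the shape of the decision, not the table itself: AWS exposes distinct silicon per stage of the ML lifecycle, and customers route each workload to the family that fits.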
Final Thoughts on AWS' AI Approach
Even with the strengths I mentioned, AWS is likely to turn up its investment in AI further still. It is too big a growth opportunity to ignore, and with the company's massive customer base, its broad tools and capabilities will lower the barrier to expanding AI deployments. It's hard not to be bullish on the prospects of AWS broadly, and its approach to AI fits nicely within that bullish thesis. I expect strong growth numbers from top to bottom and across the stack for AWS, starting with the earnings results that will post at the end of this month.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.