Amazon 2Q25 Results - AWS Analysis
Azure Is Winning, Google Cloud Is Soaring, and AWS Is… Falling Behind?
Amazon’s 2Q25 result came in ahead of guidance and market expectations, with both revenue growth and operating income (EBIT) accelerating. Despite this, the stock fell 8% on the day.
The headline financials were:
Group revenue growth accelerated to 13% from 9% in the previous quarter driven by a record Prime Day and strength in Advertising. This outpaced the guide for 7%-11% growth.
Operating income rose 31% to $19.2b, surpassing the guide of $13.0b-$17.5b.
Operating margins expanded from 9.9% last year to 11.4%, ahead of the 9.4% midpoint of guidance.
It was a quarter where everything came together for the core stores business. Revenue and operating profit reaccelerated while margins expanded. Advertising was the standout with revenue growth going from 18% in the previous quarter to 23%.
The result underscores Amazon’s strong value proposition amid rising tariffs, an uncertain economic environment and ongoing competition from Walmart and Temu. Despite these pressures, margins expanded due to the underlying mix shift to higher-margin segments and continued cost benefits from scale, robotics, generative AI, strategic operational enhancements and more.
On the call, CEO Andy Jassy highlighted there is more to go:
This combination of robotics and generative AI is just getting started. And while we've made significant progress, it's still early with respect to what will roll out in the next few years.
Both North America and International margins continue to shift higher.
The quarter is another step in validating the thesis that Amazon is becoming a better business with more resilient and higher value (Advertising, Subscription, 3P, AWS) earnings - explained in detail in the Amazon Ecommerce deep dive.
What is happening with AWS?
The remainder of this note focuses on AWS, Amazon’s cloud infrastructure arm. AWS was the major disappointment as revenue growth did not reaccelerate like peers including Google Cloud Platform (GCP) and Microsoft Azure.
The market has bucketed companies into AI winners and losers, and Amazon now sits in the latter bucket.
The chart below illustrates the quarterly cloud growth trends and the market’s worries. Growth bumped up for Azure and GCP in the recent quarter with both signalling rising demand from AI workloads. AWS did not see this.
This quarter, Microsoft disclosed Azure surpassed $75b in revenue for FY25. By backsolving from reported growth rates, I’ve estimated approximate market share figures below.
It is well known that AWS has been losing share to Azure and GCP. This is partly explained by its higher base, and its absolute dollar growth remains stronger. However, the last few quarters have seen Azure increasingly step ahead - see below.
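The backsolve can be sketched numerically. All figures below are rough assumptions for illustration (Microsoft only disclosed that Azure passed $75b for FY25; the AWS and GCP revenue bases and the GCP growth rate are my placeholders), so the output shows the mechanics rather than precise shares:

```python
# Hedged sketch: backsolving prior-year cloud revenue from disclosed YoY
# growth rates, then comparing share of the combined big-three pool.
# Revenue bases ($b, annualized) and growth rates are illustrative assumptions.

def backsolve_prior(current_rev: float, growth: float) -> float:
    """Given current revenue and YoY growth, infer the prior-year revenue."""
    return current_rev / (1 + growth)

def shares(revs: dict) -> dict:
    """Each provider's share of the combined revenue pool."""
    total = sum(revs.values())
    return {name: rev / total for name, rev in revs.items()}

# (assumed annualized revenue $b, assumed YoY growth)
clouds = {"AWS": (116.0, 0.17), "Azure": (75.0, 0.39), "GCP": (50.0, 0.32)}

current = {name: rev for name, (rev, _) in clouds.items()}
prior = {name: backsolve_prior(rev, g) for name, (rev, g) in clouds.items()}

print("current share:", {k: round(v, 3) for k, v in shares(current).items()})
print("prior share:  ", {k: round(v, 3) for k, v in shares(prior).items()})
```

Under these placeholder inputs, AWS's share of the pool falls year on year even though its absolute revenue grows - which is the dynamic the chart illustrates.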
AWS did see a positive uplift in the 2Q - the issue is that other cloud players did too and more so.
Where is Azure’s demand coming from?
Microsoft has seen incremental Azure growth come from AI Services demand. See below. Note the graph excludes 4Q as numbers were not disclosed.
A large portion of this incremental demand is likely coming from the top AI labs - namely OpenAI. Microsoft CEO Satya Nadella highlighted:
Back in the day, when I was getting started on Azure, I used to look over the lake and sort of see Netflix and Amazon and I'd say I wish Netflix ran on Azure. And in some sense, that's kind of what we now have, which is the largest AI workloads run on Azure. And when that happens, you learn the workload faster, you optimize the entire platform faster. Everything from what we're doing with Cosmos DB for a chat interface like ChatGPT or Copilot, is, guess what, going to be most relevant for any AI application going forward.
Nadella also noted:
In Azure and other cloud services, revenue grew 39%, significantly ahead of expectations, driven by accelerated growth in our core infrastructure business, primarily from our largest customers.
Over the past few months, OpenAI’s CEO Sam Altman has openly expressed the company’s significant capacity constraints and need for more graphics processing units (GPUs). He highlighted:
OpenAI currently cannot supply nearly as much AI as the world wants.
In a June interview with Bloomberg:
Oh man this has gone from a lot of compute to like, the biggest infrastructure project in history… Microsoft will do a lot of compute for us, a lot, a lot… the main thing that’s been on my mind, and I think on many people’s mind, is just how much inference demand there is. We are crazily constrained. We have a gigantic compute fleet, like gigantic, and yet still, if we had twice as much, we would be able to offer much better products and services. So for me, there are all the technical lessons about what we’ve learned and how we want to build this, but mostly we just want a lot.
Microsoft has a right of first refusal to supply new capacity for OpenAI. Most inference demand today likely originates here given ChatGPT was the first to market and commands an estimated 75% AI chatbot market share.
Weekly active users continue to ramp significantly.
In comparison, AWS’ customers such as Perplexity and Anthropic have 22m and 19m monthly active users (MAUs) respectively - a fraction of ChatGPT’s user base. Their compute demand is correspondingly lower.
This year, OpenAI entered partnerships with GCP, CoreWeave and Oracle (via the Stargate project) to expand its infrastructure capacity. AWS does not yet have a formal infrastructure partnership with OpenAI. In a world where OpenAI continues to dominate the consumer market, AWS does not benefit.
Is this a winner take most market?
The AI race is highly dynamic, with the leading model changing every few months. The graph below, provided by LLM stats, shows the constant shifts in the top-ranked model based on benchmarks, pricing and capabilities.
Things change quickly. OpenAI, the clear market share leader, has fallen behind in model capabilities. Google’s Gemini 1.5 was well behind last year, but Gemini 2.5 has now shifted to the top of the rankings. Meta is poaching talent from other AI labs with $100m+ sign-on bonuses to try and rebuild its way to the top.
What this implies is that having the most users does not necessarily translate into the best AI model. Software, data, research, talent and more GPUs can close the gap to the top. The industry does not exhibit the traits of network economies (where more users make the product more valuable), which implies it is not a winner-take-most market.
In addition, LLM stats ranks the top models across various categories.
No model leads across more than one category. Customers ultimately want choice and have different needs: some want lower-cost queries, others value safe and reliable responses, and still others seek the most well-reasoned answers. Given the relatively low customer switching costs, I think this also points to multiple winners over time.
Progress across multiple AI labs validates that this phenomenon is playing out. The table below shows revenue traction and funds raised across some AI labs.
OpenAI is expected to deliver $12.7b in revenue this year, and its July ARR is estimated at $12b. Anthropic, the leader in enterprise LLM adoption, has lifted ARR from a $2b run rate in March to around $5b in July. Cursor has surpassed $500m in ARR, a 60% increase from the $300m reported in mid-April.
Amazon CEO Jassy sums up the current dynamics:
If you look at what's really happening in the space, you have – it's very top heavy. So you have a small number of very large frontier models that are being trained that spend a lot on computing, a couple of which are being trained on top of AWS and others are being trained elsewhere.
And then you also have, I would say, a relatively small number of very large-scale generative AI applications. The one category would be Chatbots with the largest by a fair bit being ChatGPT, but the other category being really, I'll call it, coding agents. So these are companies like Cursor, Vercel, Lovable and some of the companies like that. Again, several of which run significant chunks on top of AWS.
And then you've got a very large number of generative AI applications that are in pilot mode – or they're in pilots or that are being developed as we speak and a very substantial number of agents that also people are starting to try to build and figure out how to get into production in a broad way, but they're all – they're quite early… We have a very significant number of enterprises and startups who are running applications on top of AWS' AI services. And then – but they're all – again, like the amount of usage and the expansiveness of the use cases and how much people are putting them into production and the number of agents that are going to exist, it's still just earlier stage than it's going to be.
AWS has a foot in the camp of many leading AI players (Anthropic, Perplexity, Cursor) that have yet to scale from a compute and inference perspective. These players have momentum, and if their growth continues, AWS will benefit. Just not this quarter.
Is Amazon losing in AI?
This is the narrative right now. As described above, I think part of it comes down to AWS’ customer cohort, and that this quarter is a point-in-time snapshot rather than a trend.
The other issue is that Amazon does not have a leading LLM exclusive to its platform. Google has Gemini. Azure has GPT. AWS has its own Nova models, but they are far from leading. AWS offers Anthropic’s Claude, but Claude is also available on GCP.
For AWS customers to gain access to Gemini or GPT models, they need to use competitor platforms. Note, this may be changing with GPT models.
GCP has been gaining share as it has developed Gemini into a top model and operates a full AI stack powered by its advanced TPUs (custom chips). The business has won numerous high-profile deals with OpenAI, the US Department of Defense, ServiceNow and Salesforce. CEO Sundar Pichai notes:
We see strong customer demand, driven by our product differentiation and our comprehensive AI product portfolio. Four stats show this. One, the number of deals over $250 million, doubling year-over-year; two, in the first half of 2025 we signed the same number of deals over $1 billion that we did in all of 2024; three, the number of new GCP customers increased by nearly 28% quarter-over-quarter; four, more than 85,000 enterprises, including LVMH, Salesforce, and Singapore's DBS Bank now build with Gemini, driving a 35x growth in Gemini usage year-over-year.
Microsoft also acknowledged new deals coming from AI:
Three things are really happening. One is the migrations…The second thing that's also happening is cloud-native applications that are scaling. This is even excluding all of the AI stuff, just the classic cloud native e-commerce company let's say. These are scaling in a big way. And some of those customers were not on Azure previously, but now they're increasingly there because they have come for AI perhaps but they now stay for more than AI. And so to me, that's another thing you see in overall what's happening across the Azure number. And then, of course, there are the new AI workloads, so those are three things that are all, in some sense, building on each other, but that's kind of what's driving our growth.
With AI capabilities increasingly becoming a key purchasing criterion for cloud customers, not having a leading model exclusive to its platform is a strategic disadvantage for AWS. Leading cloud capabilities alone are no longer enough.
Can AWS overcome this?
AWS has the advantage of being the largest cloud provider (Azure’s revenue is around 65% of AWS’) with the most workloads, in an industry with high switching costs. To efficiently scale and capture the true benefits of AI, customers will need to run models where their data resides - and that is mostly at AWS.
The business rationale for customers needs to be very strong to move large existing workloads given the cost, training, redevelopment, disruption risk and effort to shift. There is a significant advantage to being the primary and incumbent cloud provider.
AWS has positioned itself as an independent infrastructure partner via its Bedrock platform which provides seamless access to multiple leading foundational models from Anthropic, Meta and more. Its strategic focus is on reducing the cost of AI training and inference, enabling more efficient model scaling and driving broader enterprise adoption.
Custom silicon is key here. CEO Jassy explains this below:
We are making that investment and it’s a huge area of opportunity for us because today it’s too expensive to continue to ramp at the rates of the cost of the infrastructure. That’s a big part of Trainium, investing in how to get the cost down for training. I think the inference side has to drive costs down too, which is incredibly important for the adoption side of it. So you have to do both. It won’t work if you just do one side.
AWS is reducing AI costs by developing its own Trainium chips as an alternative to the more expensive H100 and Blackwell chips from Nvidia. The Blackwell chip is higher-performing than Trainium2, but the AWS chip offers better cost performance.
The company has invested in custom chips for years, with the advantage of designing them specifically for its own data centre environment. Nvidia, by contrast, designs a single chip (ie Blackwell) for many different vendor environments.
AWS charges significantly less per Trainium Ultraserver than for the leading Nvidia instances below - around 30% of the cost of comparable H100 instances.
An important proof of concept is that Anthropic’s latest Claude Opus 4 AI model is being powered by over half a million Trainium2 chips through Project Rainier.
AWS is reducing its reliance on Nvidia as reflected in the below graph from Morgan Stanley - illustrating the allocation for the latest generation Nvidia GB200 chips.
The race is on to bring training and inference costs down. The next-generation Trainium3 chips double the performance of Trainium2 and cut energy costs by a further 50%. Nvidia is also working hard to release new generations that drive exponential price performance. Time will tell what happens.
Azure remains heavily reliant on Nvidia, with its internal custom chip efforts facing setbacks as evidenced by the delay of its next-generation Maia AI chips, now pushed back to 2026. Google has its TPUs and is well positioned.
I think AWS has played its cards well here and is investing in what truly matters to enterprise customers over the long term. Its leadership in custom silicon positions it well to win in AI by lowering costs and driving broader enterprise adoption.
The Bigger Picture
AWS is the leading player in a highly attractive industry with a long runway of further cloud adoption. The company is growing more slowly than peers, which is not ideal. Would it rather have OpenAI as a customer? Yes.
Is AWS an AI loser? Probably not. The company has:
The largest cloud infrastructure
Access to most leading AI models
Strong custom silicon capabilities that significantly reduce costs
Partnerships with leading AI companies including Anthropic, Perplexity and Cursor
It may not be experiencing the spectacular growth seen at peers, but AWS is still benefiting. There is room for multiple players given the significant market size and high switching costs, and emerging players such as Oracle and CoreWeave are also doing well. At the end of the day, 17% growth off a $107b base is still pretty good.
The outlook is positive with AWS still capacity constrained and seeing momentum with enterprise customers. CEO Jassy notes:
I do believe that the combination of more enterprises who have resumed their march to modernize their infrastructure and move from on-premises to the cloud, coupled with the fact that AI is going to accelerate in terms of more companies deploying more AI applications into production that start to scale, coupled with the fact that I do think that more capacity is going to come online in the coming months and quarters, make me optimistic about the AWS business.
I think revenue growth will accelerate over the next few years.
Margins
The other disappointment was AWS margins - declining from 39.5% in the previous quarter to 33%.
Half of this decline came from the annual stock-based compensation step-up, a seasonal effect also evident in the prior comparative period.
The other half mainly came from higher depreciation expense. Unlike its peers, AWS has taken the conservative route with server useful lives amid the faster technology cycles:
Effective January 1, 2025 we changed our estimate of the useful lives of a subset of our servers and networking equipment from six years to five years. The shorter useful lives are due to the increased pace of technology development, particularly in the area of artificial intelligence and machine learning.
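As a rough illustration of the mechanics behind that disclosure, here is a straight-line depreciation sketch. The $10b fleet cost is purely hypothetical; the point is that shortening the useful life from six years to five lifts annual depreciation on affected assets by 20%:

```python
# Hedged sketch: impact of shortening server useful lives (6 -> 5 years)
# under straight-line depreciation. The asset cost is a made-up figure.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line annual depreciation, assuming no salvage value."""
    return cost / useful_life_years

cost = 10.0  # hypothetical server fleet cost, $b
old_da = annual_depreciation(cost, 6)  # prior policy: ~$1.67b per year
new_da = annual_depreciation(cost, 5)  # new policy: $2.00b per year
uplift = new_da / old_da - 1           # 6/5 - 1 = 20% step-up in annual D&A

print(f"Annual D&A rises {uplift:.0%} on affected assets")
```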
The company can offset this ramp in D&A by becoming a more efficient operator. Initiatives include a reduction in specific headcount and driving automation and productivity improvements.
I expect gradual margin contraction in the coming years due to the ramp in capital investments. This quarter, however, looks like a one-time blip, with margins set to revert higher to around 36%.
Summary
Amazon remains a compelling investment: a market leader across two highly attractive industries, becoming a better business and seeing an inflection in its financials.
AWS is expected to grow at a slower pace than peers. Still, the business unit remains strong, with double-digit revenue growth and 35%+ operating margins.
The thesis has not changed. After the market re-rating, Amazon trades on a fwd P/E of 30x and EV/EBIT of 24x while growing operating earnings at >20% per year over the next four years. I think the multiple is attractive for the business quality and growth prospects.
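A quick sketch of the compounding behind that comment, using the multiples from the note and treating >20% growth as a flat 20% floor:

```python
# Hedged sketch: what >20% p.a. operating earnings growth over four years
# implies for the EV/EBIT multiple, holding enterprise value flat.
# The 24x multiple is from the note; 20% is used as the stated floor.

ev_ebit_today = 24.0
growth = 0.20
years = 4

ebit_multiple = (1 + growth) ** years          # 1.2^4 ≈ 2.07x EBIT in 4 years
implied_ev_ebit = ev_ebit_today / ebit_multiple  # ≈ 11.6x on flat EV

print(f"EBIT grows {ebit_multiple:.2f}x; flat-EV EV/EBIT falls to {implied_ev_ebit:.1f}x")
```

In other words, if the earnings growth materializes, today's 24x EV/EBIT compresses to roughly 11-12x on four-year-forward earnings - which is the sense in which the multiple looks attractive.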
Disclaimer: All posts on “cosmiccapital” are for informational purposes only. This is NOT a recommendation to buy or sell securities discussed. Please do your own work before investing your money.