Something unusual showed up in a routine SEC filing on April 6. Broadcom disclosed an agreement to supply Anthropic with approximately 3.5 gigawatts of Google TPU compute capacity, starting in 2027.
Three point five gigawatts. That is enough electrical capacity to power a small country, all allocated to training and running one company’s AI models.
What the Deal Actually Covers
Broadcom, which helps Google design and manufacture its tensor processing units, filed documents confirming the three-party arrangement. Google provides the TPU architecture. Broadcom builds the chips. Anthropic consumes the compute. The capacity comes online in phases starting in 2027, with most of the data center infrastructure inside the United States.
This extends an October 2025 agreement in which Anthropic secured over 1 gigawatt of TPU capacity. Six months later, the commitment has grown roughly 3.5-fold. On an earnings call in March, Broadcom CEO Hock Tan told analysts that Anthropic demand for 2027 was expected to “surge in excess of 3 gigawatts.” The final number came in even higher.
Krishna Rao, Anthropic’s chief financial officer, called the agreement the company’s “most significant compute commitment to date” in a blog post published alongside the filing.
The Revenue Number That Changes the Math
Anthropic also disclosed that its annualized revenue run rate has surpassed $30 billion. At the end of 2025, that figure was $9 billion. A more-than-threefold increase in roughly four months.
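As a rough sketch of what that pace implies, here is the back-of-the-envelope arithmetic. The four-month window is an approximation from the dates in the article, not a figure Anthropic disclosed:

```python
# Implied compound monthly growth from a $9B annualized run rate
# (end of 2025) to $30B (early April 2026), over ~4 months.
# The elapsed-time figure is an approximation, not a disclosed number.

start_run_rate = 9e9   # annualized run rate, end of 2025 ($)
end_run_rate = 30e9    # annualized run rate, April 2026 ($)
months = 4             # approximate elapsed time

growth_multiple = end_run_rate / start_run_rate      # ~3.33x total
monthly_rate = growth_multiple ** (1 / months) - 1   # compound monthly rate

print(f"Total growth: {growth_multiple:.2f}x")
print(f"Implied compound monthly growth: {monthly_rate:.0%}")
```

Compounded monthly, that works out to roughly 35 percent growth per month, which is the kind of curve that makes a 3.5-gigawatt order look less extravagant.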
The company now counts over 1,000 business customers spending more than $1 million on an annualized basis. Two months ago, that number was roughly 500. The customer base doubled in eight weeks.
This happened despite a very public dispute between Anthropic and the U.S. Defense Department, which labeled the company a supply-chain risk in early 2026. Anthropic won an injunction against the designation in late March. The controversy appeared to boost consumer interest rather than hurt it. The Claude app became the top free app in Apple’s U.S. App Store in February, right as the Pentagon story broke.
In February, Anthropic closed a $30 billion Series G funding round at a $380 billion valuation. The new compute deal ties into a broader $50 billion commitment the company announced to invest in U.S. compute infrastructure.
Why Broadcom Rose 6 Percent in a Single Day
Investors noticed the filing fast. Broadcom shares jumped 6 percent on April 7, the chip designer’s second-best trading day of 2026. The stock had been down nearly 10 percent year-to-date before that rally, weighed down by concerns about AI buildout sustainability and rising energy costs tied to the conflict in Iran.
Mizuho maintained its buy rating, with analysts led by Vijay Rakesh estimating $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027. The firm noted that the tighter TPU partnership “strengthens AVGO’s position” in custom silicon.
Citi went further, projecting that Broadcom would surpass the $100 billion total revenue target Hock Tan has discussed publicly and could reach more than $130 billion. Matt Britzman at Hargreaves Lansdown said the deals “should help ease some of the recent nervousness around TPU competition” and show that Google sees “meaningful demand visibility well into the future.”
9.5 Gigawatts and Counting
The Anthropic-Broadcom-Google deal does not exist in a vacuum. OpenAI, Anthropic’s primary rival, has committed to 6 gigawatts of AMD GPU capacity, with the first gigawatt expected in the second half of 2026. Both companies still rely heavily on Nvidia GPUs through cloud providers like Amazon, Google, and Microsoft.
The acceleration is what stands out. In 2024, a 1-gigawatt deployment was ambitious. Now two AI companies are talking about a combined 9.5 gigawatts. The energy requirements, data center construction timelines, and chip manufacturing capacity needed to deliver on these agreements represent logistical challenges the tech industry has never encountered at this speed.
Broadcom is also collaborating with OpenAI on custom silicon. The company is supplying both sides of the most competitive rivalry in AI. Hock Tan told analysts in March that he expects AI chip revenue in 2027 to be “significantly in excess of $100 billion.” If the Anthropic and Google deals deliver as planned, that projection starts to look conservative.
Why Compute Is the Real Moat
Model quality matters. Talent matters. Distribution matters. None of those factors matter if a company cannot secure enough hardware to train and serve its models at scale. Compute is the constraint that determines everything else.
Anthropic’s decision to anchor its infrastructure to Google TPUs rather than Nvidia GPUs is a bet on an alternative hardware path. Google’s custom silicon has been gaining ground on Nvidia in specific training and inference workloads. A 3.5-gigawatt commitment from one of the most demanding AI customers in the world is a massive vote of confidence in TPU architecture.
The energy question looms over all of this. Powering 3.5 gigawatts of compute requires new power plants, new transmission lines, new cooling infrastructure. Most of Anthropic’s capacity will be built in the United States, which means this single deal will have a measurable impact on American electricity demand by 2028. Utilities and grid operators are already scrambling to keep up with data center expansion. Deals at this scale will only intensify that pressure.
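To put the energy question in numbers, a quick upper-bound calculation, assuming (hypothetically) the full 3.5 gigawatts runs continuously at full load; real utilization would be lower:

```python
# Upper-bound annual energy consumption for 3.5 GW of compute,
# assuming continuous full-load operation (a simplifying assumption;
# real data centers run below 100% utilization).

capacity_gw = 3.5
hours_per_year = 24 * 365        # 8,760 hours

annual_gwh = capacity_gw * hours_per_year
annual_twh = annual_gwh / 1000   # convert GWh to TWh

print(f"Upper-bound annual consumption: {annual_twh:.1f} TWh")
```

That ceiling lands around 30.7 terawatt-hours a year, on the order of the annual electricity consumption of a mid-sized country, which is why grid operators are paying attention.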
Anthropic went from $9 billion to $30 billion in annual run rate in four months. If that growth continues at even half the pace, the 3.5 gigawatts might not be enough. The company could be back at the negotiating table before the first TPU clusters come online, asking for more.