
NVIDIA’s $1.5B Strategy: Renting Its Own GPUs from Lambda


According to a report from The Information, NVIDIA has entered into a $1.5 billion deal to lease AI GPU servers from Lambda, a smaller but fast-growing cloud service provider.

The deal is split into two parts:

  • A four-year, $1.3 billion agreement to rent 10,000 GPU servers.
  • A separate $200 million agreement to rent 8,000 additional servers with no fixed timeframe.
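
Taken together, the two agreements account for the reported $1.5 billion headline figure. As a rough back-of-envelope sketch (assuming the contract value is spread evenly across servers and years, which the report does not confirm), the implied per-server rate of the larger deal works out as follows:

```python
# Back-of-envelope check of the reported figures.
# Assumption: pricing is spread evenly across units and years;
# the actual contract terms and per-server rates are not public.

deal_1_value = 1_300_000_000   # four-year agreement, USD
deal_1_units = 10_000          # GPU servers
deal_1_years = 4

deal_2_value = 200_000_000     # separate agreement, USD (no fixed term reported)
deal_2_units = 8_000

total_value = deal_1_value + deal_2_value
print(f"Combined value: ${total_value / 1e9:.1f}B")   # -> $1.5B

per_unit_per_year = deal_1_value / deal_1_units / deal_1_years
print(f"Implied cost per rented unit per year: ${per_unit_per_year:,.0f}")
# -> $32,500, roughly $2,700 per month per rented unit
```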

At first glance, this business model looks unusual, but it is in fact a strategic move by NVIDIA to maintain its leadership in the AI industry.


How NVIDIA’s “Circular Flow” Strategy Works

Here’s the step-by-step breakdown of NVIDIA’s unconventional business model:

  1. NVIDIA Invests – NVIDIA injects capital into smaller cloud service providers like Lambda.
  2. Cloud Provider Buys GPUs – The provider uses that investment to purchase NVIDIA’s AI chips and build GPU server clusters.
  3. NVIDIA Rents Them Back – NVIDIA then rents those servers, paying billions to the same provider.

This creates a circular revenue flow that benefits both parties:

  • For the cloud provider, it guarantees substantial revenue growth and boosts valuation, making an eventual IPO (initial public offering) more achievable.
  • For NVIDIA, it drives chip sales, generates long-term rental capacity, and opens potential equity profits if the provider goes public.

NVIDIA’s Playbook: From CoreWeave to Lambda

This isn’t the first time NVIDIA has used this playbook.

  • CoreWeave, originally a crypto mining company, pivoted to cloud GPU services with NVIDIA’s backing.
  • NVIDIA provided investment, GPUs, and rental agreements.
  • CoreWeave went public in March 2025, raising $1.5 billion—one of the largest venture-backed tech IPOs in recent years.

Now, NVIDIA is replicating this strategy with Lambda, effectively building a new network of loyal cloud partners.


Why NVIDIA Is Doing This

NVIDIA’s dominance is being challenged. Big Tech giants like Microsoft, Google, Amazon, and Meta are:

  • Major customers of NVIDIA today.
  • Simultaneously developing their own AI chips to reduce reliance on NVIDIA.

This creates a double threat:

  • NVIDIA could lose its largest customers.
  • Those same customers could become direct competitors.

By partnering with smaller cloud providers, NVIDIA creates a parallel ecosystem fully dependent on its hardware. This strategy:

  • Locks in long-term demand for its GPUs.
  • Diversifies its revenue sources beyond Big Tech.
  • Strengthens NVIDIA’s leadership in the AI compute market.

Key Takeaways

  • NVIDIA’s $1.5B deal with Lambda mirrors its earlier partnership with CoreWeave.
  • The strategy builds a self-reinforcing cycle of investment, chip sales, and GPU rentals.
  • This helps NVIDIA hedge against Big Tech rivals developing their own AI chips.
  • By creating a network of allied cloud providers, NVIDIA secures its future dominance in AI infrastructure.

Final Thoughts

NVIDIA’s move to rent its own GPUs through partners like Lambda may look strange, but it’s a brilliant strategic hedge. By nurturing smaller players, NVIDIA reduces its dependency on hyperscalers and builds a dedicated AI ecosystem that ensures it remains the undisputed leader in the GPU market.

