Thinking Machines Lab Secures Billions in Google Compute Deal
Mira Murati’s startup gains Nvidia GB300 access and poaches top Meta talent
In a move that signals a massive escalation in the artificial intelligence arms race, Thinking Machines Lab (TML) has finalized a multibillion-dollar cloud infrastructure agreement with Google. The deal provides the young company, founded by former OpenAI CTO Mira Murati, with unprecedented access to Google’s "AI Hypercomputer" hardware, specifically the latest generation of Nvidia GB300 chips. This announcement coincides with a major talent acquisition, as Meta veteran Weiyao Wang joins the startup after an eight-year tenure, marking a significant shift in the industry’s power dynamics.
Key Details
The partnership between Thinking Machines Lab and Google Cloud is one of the most substantial infrastructure deals for an early-stage AI startup to date. While the exact financial terms remain undisclosed, industry sources indicate the agreement is valued in the single-digit billions. This arrangement effectively plugs TML into one of the world's most powerful compute environments, bypassing the hardware bottlenecks that have stymied many other frontier model developers.
Central to the deal is access to Nvidia’s GB300 chips. These next-generation Blackwell GPUs are designed specifically for the massive scale of trillion-parameter models, offering significant performance leaps in both training efficiency and inference speed. By securing a guaranteed pipeline of these chips through Google, TML is positioning itself to compete directly with the industry’s established titans.
Simultaneously, TML is bolstering its human capital. Weiyao Wang, a key figure on Meta's AI Research (FAIR) team, has officially joined TML as a lead researcher. Wang is best known for work on multimodal perception systems and on the Segment Anything (SAM) family of projects, including SAM 3D. Wang's departure from Meta, the company where he spent his entire professional career, is seen as a major blow to Meta's computer vision efforts and a win for Murati's vision.
What This Means
For Thinking Machines Lab, this dual-pronged expansion—compute and talent—removes the primary barriers to building frontier-level AI. Mira Murati has consistently argued that the next generation of models will require a fundamental shift in how perception and reasoning are integrated. By bringing in Wang, TML is doubling down on multimodal "world models" that can understand and interact with the physical environment more intuitively than current LLMs.
The deal also highlights Google Cloud’s strategy to become the preferred infrastructure provider for the next wave of AI labs. By hosting TML, Google is ensuring it remains at the center of the AI ecosystem, even as competitors like OpenAI and Anthropic lean heavily on Microsoft and Amazon respectively.
Technical Breakdown
With access to the GB300 architecture, TML can optimize its model training in ways that were previously impractical:
- Enhanced Multimodal Training: The GB300's high-bandwidth memory allows for more efficient processing of high-resolution video and 3D data.
- Compute Efficiency: The Blackwell architecture provides a 4x increase in training speed for large language models compared to the previous H100 generation.
- Inference at Scale: The deal includes deployment on Google’s TPU v6 clusters as a secondary compute layer for cost-effective scaling.
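To put the claimed 4x training speedup in perspective, a rough estimate can be made with the widely used approximation that training a dense model costs about 6 × parameters × tokens FLOPs. The cluster size, per-GPU throughput, and utilization below are illustrative assumptions for a trillion-parameter run, not figures disclosed in the deal:

```python
# Back-of-envelope training-time estimate using the common
# approximation: total training FLOPs ~= 6 * N * D
# (N = parameter count, D = training tokens).
# All hardware numbers here are illustrative assumptions,
# not disclosed figures from the TML/Google agreement.

def training_days(params, tokens, gpus, flops_per_gpu, utilization=0.4):
    """Estimated wall-clock days to train a dense model."""
    total_flops = 6 * params * tokens
    effective = gpus * flops_per_gpu * utilization  # sustained FLOP/s
    return total_flops / effective / 86_400         # seconds -> days

N = 1e12          # 1-trillion-parameter model (the article's framing)
D = 20 * N        # ~20 tokens per parameter (Chinchilla-style ratio)
CLUSTER = 16_384  # hypothetical GPU count

h100_days  = training_days(N, D, CLUSTER, 1e15)  # ~1 PFLOP/s class (assumed)
gb300_days = training_days(N, D, CLUSTER, 4e15)  # the claimed ~4x uplift

print(f"H100-class:  {h100_days:.0f} days")   # ~212 days
print(f"GB300-class: {gb300_days:.0f} days")  # ~53 days
```

Under these toy assumptions, the same cluster would cut a multi-month training run to well under two months, which is the kind of iteration-speed difference the article's "significant performance leaps" claim implies.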
Industry Impact
This development sends a message to the tech giants: the era of foundation model dominance is far from settled. TML’s ability to attract both billions in compute and top-tier talent from Meta suggests that the market still sees massive potential for new players to disrupt the current leaders.
For developers and researchers, the talent shift is particularly noteworthy. Weiyao Wang was a foundational member of the team that open-sourced SAM, a tool that changed the landscape of image segmentation. The "brain drain" from established giants to well-funded startups is accelerating, driven by the desire for agile research environments and the massive upside of early-stage equity.
Looking Ahead
As Thinking Machines Lab begins to deploy its new compute resources, the industry will be watching for their first major model release. With Murati’s leadership and Wang’s expertise in perception, the expectation is a model that moves beyond text-based reasoning toward a more holistic understanding of the world. The race to AGI just got a lot more interesting.
Source: Business 2.0 News. Published on the ShtefAI blog by Shtef ⚡