Google in talks with Marvell Technology to develop two new AI chips for inference workloads

Reviewed by Nidhi Govil


Alphabet's Google is negotiating with Marvell Technology to build two specialized AI chips—a memory processing unit and an inference-focused TPU. The move aims to diversify Google's supply chain beyond Broadcom while addressing surging inference compute demands as AI products scale to hundreds of millions of users.

Google and Marvell Technology Enter Deal Talks for Custom AI Chips

Alphabet's Google is in deal talks with Marvell Technology to develop two new AI chips designed to run AI models more efficiently, according to a report by The Information [2]. The potential partnership involves creating a memory processing unit that would complement Google's existing Tensor Processing Unit (TPU) infrastructure, alongside a new TPU built specifically for AI inference, the phase where models serve user queries rather than undergo training. While discussions remain ongoing and no contract has been signed, the talks signal a strategic shift in how Google manages its custom chip development pipeline as inference costs increasingly dominate AI economics [3].
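To make the training-versus-inference distinction concrete, here is a minimal toy sketch. It is illustrative only; the model, shapes, and step counts are invented placeholders, not anything Google or Marvell has disclosed. Training updates weights for a bounded number of steps, while inference is a forward pass repeated for every user query.

```python
# Toy sketch of training vs. inference; all values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))            # toy model weights

def forward(x):
    return x @ W                       # inference: a forward pass only

def train_step(x, y, lr=1e-2):
    global W
    err = forward(x) - y               # training: forward pass, gradient,
    W -= lr * (x.T @ err)              # and a weight update

# Training is a bounded, one-time job...
for _ in range(1_000):
    xb = rng.normal(size=(8, 4))
    yb = rng.normal(size=(8, 2))
    train_step(xb, yb)

# ...while inference reruns for every query, so its cost scales with usage.
for _ in range(3):                     # stand-in for millions of user queries
    print(forward(rng.normal(size=(1, 4))))
```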

Source: BNN

Marvell Technology's shares jumped 7% in premarket trading following the news, a gain that would add more than $9 billion to its market value of $122.15 billion [4]. The chip designer trades at 33.35 times its estimated earnings for the next 12 months, compared with 27.84 for Broadcom, and carries a "buy" rating from 44 analysts with a median price target of $125.

Diversifying the Supply Chain Without Replacing Broadcom

The timing of these discussions is notable: they emerged just days after Broadcom, Google's primary custom chip partner, announced a long-term agreement to design and supply TPUs and networking components through 2031 [2]. Rather than replacing Broadcom, Google appears to be building a multi-supplier architecture in which different partners handle distinct segments of its TPU program. The company already works with Broadcom for high-performance chip variants, MediaTek for cost-optimized "e" variants at 20 to 30% lower cost, and TSMC for fabrication [2].

This strategy mirrors how Big Tech companies are moving quickly to reduce dependence on external chip suppliers by expanding their custom chip efforts [4]. Meta recently extended its deal with Broadcom to produce several generations of custom AI processors, paying the company $2.3 billion last year for AI chip design and related services. The approach lets hyperscalers avoid vendor lock-in while managing costs as demand continues to surge for the specialized processors used in the advanced data centers powering AI workloads [4].

Running AI Models at Scale Drives Inference Economics

Google's seventh-generation TPU, Ironwood, debuted this month as what the company calls "the first Google TPU for the age of inference" [2]. It delivers ten times the peak performance of the TPU v5p and scales to 9,216 liquid-cooled chips in a superpod spanning roughly 10 megawatts, producing 42.5 FP8 exaflops. Google plans to build millions of Ironwood units this year [2]. The Marvell-designed chips would supplement rather than replace Ironwood, potentially targeting different workload profiles or cost points [5].
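As a rough sanity check on those figures, the per-chip numbers can be derived from the pod-level values quoted above. These are back-of-envelope estimates, not official specifications:

```python
# Back-of-envelope math on the Ironwood superpod figures quoted above.
# Derived per-chip numbers are estimates, not official specifications.
chips = 9_216                  # liquid-cooled chips per superpod
fp8_exaflops = 42.5            # pod-level FP8 throughput
megawatts = 10                 # approximate pod power envelope

print(f"~{fp8_exaflops * 1e3 / chips:.1f} PFLOPS FP8 per chip")   # ~4.6
print(f"~{megawatts * 1e6 / chips:.0f} W of pod power per chip")  # ~1085
```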

Source: Wccftech

The shift from training to inference as the primary demand driver is reshaping the chip market. Training a frontier model is a one-time event requiring enormous compute for weeks or months. Inference runs continuously, serving every query from every user, and its costs scale with demand rather than capability [2]. As AI products reach hundreds of millions of users, inference becomes the dominant expense, and purpose-built inference silicon becomes a competitive advantage that general-purpose GPUs from Nvidia cannot match on cost or efficiency [2].
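A simple cost model shows why that shift matters. Every number below is an invented placeholder, not a figure from the article; the point is only that a one-time training bill is eventually dwarfed by a recurring per-query cost once the user base is large.

```python
# Illustrative inference-economics model; every number here is an assumption.
def total_cost(users, queries_per_user_per_day, days,
               cost_per_query=1e-3,      # assumed $/query
               training_cost=100e6):     # assumed one-time training bill
    inference = users * queries_per_user_per_day * days * cost_per_query
    return training_cost + inference     # training paid once, inference recurs

print(f"${total_cost(1e6, 5, 365):,.0f}")   # 1M users: training dominates (~$102M)
print(f"${total_cost(3e8, 5, 365):,.0f}")   # 300M users: inference dominates (~$648M)
```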

Marvell's Growing Custom Silicon Business

Marvell's data center revenue reached a record $6.1 billion in its fiscal year ending February 2026, with total revenue of $8.2 billion, up 42% year over year [2]. The semiconductor company runs a custom silicon business with a $1.5 billion annual run rate across 18 cloud-provider design wins, building chips for Amazon's Trainium processors, Microsoft's Maia AI accelerator, and Meta's new data processing unit, in addition to its existing work with Google on the Axion ARM CPU [2].

Source: ET

Nvidia invested $2 billion in Marvell at the end of March, partnering through NVLink Fusion to integrate Marvell's custom chips and networking with Nvidia's interconnect fabric [2]. In December 2025, Marvell acquired Celestial AI for up to $5.5 billion, gaining photonic interconnect technology that CEO Matt Murphy said would deliver "the industry's most complete connectivity platform for AI and cloud customers" [2]. Murphy is targeting 20% market share in custom AI chips and expects roughly 30% year-over-year revenue growth in fiscal 2027 [2].
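Taken together with the fiscal 2026 revenue figure above, that growth target implies roughly the following. This is a simple extrapolation from the quoted numbers, not company guidance:

```python
# Simple extrapolation from the figures quoted above; not company guidance.
fy2026_revenue_bn = 8.2                  # fiscal year ended February 2026
growth = 0.30                            # Murphy's ~30% YoY target
print(f"Implied fiscal 2027 revenue: ~${fy2026_revenue_bn * (1 + growth):.1f}B")
# -> ~$10.7B
```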

The companies aim to finalize the design of the memory processing unit as soon as next year before handing it off for test production [3]. TPU sales have become a key driver of growth in Google's cloud revenue as the company aims to show investors that its AI investments are generating returns [3]. AI lab Anthropic uses a range of chips, including TPUs designed by Google, to develop and run its AI software and chatbot Claude [4]. The custom ASIC market is projected to grow 45% in 2026 and reach $118 billion by 2033 [2], making partnerships with chip designers like Marvell increasingly valuable for hyperscalers seeking alternatives to Nvidia's offerings.
