
Broadcom Unleashes a 'Monster' Chip That Doubles AI Network Speeds

MarketDash
Broadcom's new Tomahawk 6 chip is now shipping, promising to supercharge AI data centers by doubling network capacity and slashing energy costs. Here's why it matters.

So you think your internet is fast? Try moving data at 102.4 terabits per second. That's what Broadcom Inc. (AVGO) is now doing with its new Tomahawk 6 networking chip, which has officially entered full production and is shipping to customers. This isn't just an incremental upgrade—it's a monster chip that doubles the data capacity of the previous version, and it's designed for one thing: feeding the insatiable appetite of artificial intelligence.
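To put 102.4 terabits per second in perspective, here is a back-of-envelope sketch of what that line rate means for moving data. The chip's headline figure and the "doubles the previous version" claim come from the article; the 1 TB payload is a hypothetical example, and real-world transfers carry protocol overhead this ignores.

```python
def transfer_seconds(payload_bytes: float, link_bits_per_sec: float) -> float:
    """Ideal time to move a payload at a given line rate (no protocol overhead)."""
    return payload_bytes * 8 / link_bits_per_sec

TB = 1e12                          # decimal terabyte
TOMAHAWK6_BPS = 102.4e12           # 102.4 Tb/s, per the article
PREV_GEN_BPS = TOMAHAWK6_BPS / 2   # "doubles the data capacity of the previous version"

checkpoint = 1 * TB                # hypothetical 1 TB model checkpoint

t_new = transfer_seconds(checkpoint, TOMAHAWK6_BPS)
t_old = transfer_seconds(checkpoint, PREV_GEN_BPS)
print(f"1 TB at full line rate: {t_new:.3f} s (vs {t_old:.3f} s previous gen)")
```

At full line rate, a terabyte crosses the switch in under a tenth of a second, roughly the scale at which thousands of machines can behave like one.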

Think of it this way: training and running massive AI models requires connecting huge groups of computers into a single, coherent brain. The faster data can move between those computers, the smarter and more powerful that brain can become. Broadcom's new chip is essentially the central nervous system for that brain, and it just got twice as fast. The company managed to move from early testing to mass production in less than three quarters, which in chip-making terms is basically warp speed.

What the Analysts Are Saying

This isn't happening in a vacuum. JPMorgan analyst Harlan Sur has been watching Broadcom's AI business closely. He forecasts the company's AI revenue will come in above $9 billion, supported by strong demand for the custom TPU chips Broadcom co-develops with Alphabet Inc.'s (GOOGL) Google and, of course, Broadcom's own networking products like the Tomahawk switches.

But Sur thinks that's just the beginning. He believes Broadcom could guide for April-quarter revenue of $21 billion to $22 billion, which would be above current estimates, with AI revenue potentially hitting $10 billion to $11 billion. Looking further out, his projections get even more eye-popping: AI revenue could exceed $65 billion in fiscal 2026 and surpass $120 billion in fiscal 2027 as production ramps up and new programs scale. In other words, the AI infrastructure build-out is a multi-year super-cycle, and Broadcom has a front-row seat.
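To see just how eye-popping those projections are, a quick calculation shows the year-over-year growth they imply. The dollar figures are Sur's estimates as quoted in the article; this simply works out the implied growth rate.

```python
# JPMorgan's Broadcom AI revenue projections, per the article
fy2026_ai = 65e9    # "could exceed $65 billion in fiscal 2026"
fy2027_ai = 120e9   # "surpass $120 billion in fiscal 2027"

yoy_growth = fy2027_ai / fy2026_ai - 1
print(f"Implied FY2026 -> FY2027 AI revenue growth: {yoy_growth:.0%}")
```

Going from $65 billion to $120 billion in a single fiscal year implies roughly 85% growth, which is why analysts call this a super-cycle rather than an ordinary upgrade wave.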

More Than Just a Chip

Shipping a record-breaking switch is great, but Broadcom knows it takes more than one piece of silicon to run an AI data center. So the company is rolling out a whole suite of additional technologies designed to make these massive computing clusters more efficient. The goal is simple: move data faster, use less power, and make the whole system more reliable when thousands of computers are working in concert.

Many of these new tools will be on display at the Optical Fiber Communications Conference (OFC) 2026 in Los Angeles. It's all part of Broadcom's push to support what it calls the "next generation of AI infrastructure," as companies continue to build ever-larger computing clusters that would have been science fiction a few years ago.

Playing Well With Others

Here's a crucial piece of the puzzle: no company builds an AI data center alone. Broadcom is actively working with industry partners to develop common standards so that equipment from different vendors can actually talk to each other. A key initiative here is the Optical Compute Interconnect (OCI) agreement, which Broadcom helped launch. The aim is to create a shared specification that allows networking hardware and optical technologies from multiple suppliers to connect seamlessly within AI systems.

At OFC 2026, Broadcom plans to demonstrate its technologies alongside more than 30 partners, showing how its products help power the large-scale AI data centers used by cloud and tech giants. Collaboration is also happening on the physical side of things. The company recently introduced a new chip designed to move huge amounts of data more efficiently within data centers and is partnering on cooling technology for powerful processors.

Specifically, Broadcom is working with JetCool, a unit of Flex Ltd (FLEX), to develop liquid cooling systems that remove heat directly from AI chips. Flex will manufacture the cooling equipment, while Broadcom provides the processors. As computing demands grow and chips get hotter, this kind of direct cooling isn't a luxury—it's a necessity for keeping the hardware running efficiently and preventing a meltdown, both literally and figuratively.

So, what's the bottom line? Broadcom is betting big that the future of computing is AI, and the future of AI is built on networks that can move unimaginable amounts of data at blistering speeds while somehow using less power. With the Tomahawk 6 now in production, a growing toolkit of supporting technologies, and a web of industry partnerships, the company is positioning itself as a foundational player in building that future. It's a classic infrastructure play: when there's a gold rush, sell the shovels. In this case, the shovels are networking chips, and they just got a lot sharper.
