
Broadcom Inc

Exchange: NASDAQ · Sector: Technology · Industry: Semiconductors

Broadcom Inc., a Delaware corporation headquartered in San Jose, CA, is a global technology leader that designs, develops, and supplies a broad range of semiconductor and infrastructure software solutions. Broadcom's category-leading product portfolio serves critical markets including data center, networking, enterprise software, broadband, wireless, storage, and industrial. Our solutions include data center networking and storage; enterprise, mainframe, and cybersecurity software focused on automation, monitoring, and security; smartphone components; telecoms; and factory automation.

Did you know?

AVGO's revenue grew at an 18.9% CAGR over the last 6 years.
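For reference, a CAGR like the one above is computed as (end/start)^(1/years) - 1. A minimal Python sketch, where the starting revenue is an illustrative value back-solved from the ~$68.28B TTM revenue shown below, not a figure reported on this page:

```python
# Minimal sketch of how a CAGR figure like the 18.9% above is computed.
# The starting revenue (24.2) is illustrative/back-solved, not from the page.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"{cagr(24.2, 68.28, 6):.1%}")  # ~$24.2B -> $68.28B over 6 years, ~18.9%
```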

Current Price

$354.91

+1.22%

GoodMoat Value

$220.56

37.9% overvalued
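The overvaluation figure appears to be measured relative to the current price rather than to the model value. A sketch of the apparent arithmetic, using the two numbers shown above:

```python
# Premium of price over model value, expressed as a fraction of price.
# This matches the 37.9% shown; measuring against value would give ~60.9%.
price, fair_value = 354.91, 220.56
print(f"{(price - fair_value) / price:.1%}")  # -> 37.9%
```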
Valuation (TTM)
Market Cap: $1.68T · P/E: 67.38
EV: $1.58T · P/B: 20.70
Shares Out: 4.74B · P/Sales: 24.64
Revenue: $68.28B · EV/EBITDA: 46.47

Broadcom Inc (AVGO) — Q1 2026 Earnings Call Transcript

Apr 4, 2026 · 16 speakers · 5,774 words · 47 segments
Ji Yoo, Head of Investor Relations

Thank you, operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; Charlie Kawwas, President, Semiconductor Solutions Group; and Ram Velaga, President, Infrastructure Software Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the first quarter fiscal year 2026. If you did not receive a copy, you may obtain the information from the Investor section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for 1 year through the Investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our first quarter fiscal year 2026 results, guidance for our second quarter of fiscal year 2026 as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I will now turn the call over to Hock.

Hock Tan, President and CEO

Thank you, Ji, and thank you, everyone, for joining us today. In our fiscal Q1 2026, total revenue reached a record $19.3 billion, up 29% year-on-year and exceeding our guidance on the back of better-than-expected growth in AI semiconductors. This top-line strength translated into exceptional profitability, with Q1 consolidated adjusted EBITDA hitting a record $13.1 billion, or 68% of revenue. These figures demonstrate that our scale continues to drive significant operating leverage. Now we expect this momentum to accelerate as our custom AI XPUs hit their next phase of deployment among our five customers. So looking ahead to next quarter, Q2 '26, we're guiding for consolidated revenue of approximately $22 billion, which represents 47% year-on-year growth. Let me now give you more color on our semiconductor business. In Q1, revenue was a record $12.5 billion as year-on-year growth accelerated to 52%. This robust growth was driven by AI semiconductor revenue, which grew 106% year-on-year to $8.4 billion, well above our outlook. In Q2, this momentum accelerates, and we expect semiconductor revenue to be $14.8 billion, up 76% year-on-year. Driving this is AI revenue growth, which will accelerate sharply to 140% year-on-year, to $10.7 billion. Now our custom accelerator business grew 140% year-on-year in Q1, and this momentum continues in Q2. The ramp of custom AI accelerators across all five of our customers is progressing very well. For Google, we continue our trajectory of growth in '26 with strong demand for the seventh-generation Ironwood TPU. In 2027 and beyond, we expect to see even stronger demand from the next generations of TPU. For Anthropic, we are off to a great start in 2026 for 1 gigawatt of TPU compute, and for '27, this demand is expected to surge in excess of 3 gigawatts of compute. Our XPU franchise, I should add, extends beyond TPUs. Contrary to recent analyst reports, Meta's custom accelerator MTIA roadmap is alive and well. We're shipping now. In fact, for the next-generation XPUs, we will scale to multiple gigawatts in '27 and beyond. Rounding out customers 4 and 5, we see strong shipments this year, which we expect to more than double in 2027. We also now have a sixth customer: we expect OpenAI to deploy their first-generation XPU in volume in 2027 at over 1 gigawatt of compute capacity. Let me take a second to emphasize that our collaboration with these six customers to develop AI XPUs is deep, strategic, and multi-year. We bring to these partnerships unmatched technology in SerDes, silicon design, process technology, advanced packaging, and networking to enable each of these customers to achieve optimal performance for their differentiated LLM workloads. We have the track record to deliver these XPUs in high volumes at an accelerated time to market with very high yields. Beyond technology, we provide multi-year supply agreements as our customers scale up deployment of their compute infrastructure. Our ability to assure supply in these times of constrained capacity in leading-edge wafers, high-bandwidth memory, and substrates ensures the durability of our partnerships, and we have fully secured capacity for these components for '26 through '28. Consistent with the strong outlook for our XPUs, demand for AI networking is accelerating. Q1 AI networking revenue grew 60% year-on-year and represented one-third of total AI revenue. In Q2, we project AI networking to accelerate significantly and grow to 40% of total AI revenue. We are clearly gaining share in networking.
Let me explain. In scale-out, our first-to-market Tomahawk 6 switch at 100 terabits per second, as well as our 200G SerDes, are capturing demand from hyperscalers, whether they use XPUs or GPUs this year. This lead will extend in '27 with our next-generation Tomahawk 7, featuring double the performance. Meanwhile, in scale-up, as cluster sizes at our customers expand, we are uniquely positioned to enable these customers to stay on direct attached copper through our 200G SerDes. As we next step up to 400G SerDes in 2028, our XPU customers will likely continue to stay on direct attached copper. And this is a huge advantage, as the alternative of going to optical is more expensive and requires significantly more power. Reflecting these factors, our visibility into 2027 has dramatically improved. Today, in fact, we have line of sight to AI revenue from chips, just chips, in excess of $100 billion in 2027. We have also secured the supply chain required to achieve this. Now turning to non-AI semiconductors. Q1 revenue of $4.1 billion was flat year-on-year, in line with guidance. Enterprise networking, broadband, and server storage revenues were up year-on-year, offset by a seasonal decline in wireless. In Q2, we forecast non-AI semiconductor revenue to be approximately $4.1 billion, up 4% from a year ago. Let me now talk about our Infrastructure Software segment. Q1 Infrastructure Software revenue of $6.8 billion was in line with our guidance, up 1% year-on-year. For Q2, we forecast Infrastructure Software revenue of approximately $7.2 billion, up 9% year-on-year. VMware revenue grew 13% year-on-year. Bookings continue to be strong, and total contract value booked in Q1 exceeded $9.2 billion, sustaining annual recurring revenue growth of 19% year-on-year. Let me reinforce that this growth in our Infrastructure Software business reflects our focus and investments in foundational infrastructure, and that our Infrastructure Software is not disrupted by AI. In fact, VMware Cloud Foundation, VCF, is the essential software layer in data centers, integrating CPUs, GPUs, storage, and networking into a common high-performance private cloud environment. As the permanent abstraction layer between AI software and the physical silicon, VCF cannot be disintermediated or replaced. It allows enterprises to scale complex generative AI workloads effectively, with agility that hardware alone cannot provide. We are confident that the growth in generative and agentic AI will create the need for more VMware, not less. So in summary, let me put it all together for Q2 2026. We expect consolidated revenue growth to accelerate to 47% year-on-year and reach approximately $22 billion, and we expect adjusted EBITDA to be approximately 68% of revenue. With that, let me turn the call over to Kirsten.

Kirsten Spears, CFO

Thank you, Hock. Let me now provide additional detail on our Q1 financial performance. Consolidated revenue was a record $19.3 billion for the quarter, up 29% from a year ago. Gross margin was 77% of revenue in the quarter. Consolidated operating expenses were $2 billion, of which $1.5 billion was R&D. Q1 operating income was a record $12.8 billion, up 31% from a year ago. Operating margin increased 50 basis points year-over-year to 66.4% on favorable operating leverage. Adjusted EBITDA of $13.1 billion, or 68% of revenue, was above our guidance of 67%. Now let's go into detail on our two segments, starting with semiconductors. Revenue for our Semiconductor Solutions segment was a record $12.5 billion, with growth accelerating to 52% year-on-year, driven by AI. Semiconductor revenue represented 65% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was up 30 basis points year-on-year to approximately 68%. Operating expenses of $1.1 billion reflected increased investment in R&D for leading-edge AI semiconductors and represented 8% of revenue. Semiconductor operating margin of 60% was up 260 basis points year-on-year, reflecting strong operating leverage. Now moving on to Infrastructure Software. Revenue for Infrastructure Software of $6.8 billion was up 1% year-on-year and represented 35% of total revenue. Gross margin for Infrastructure Software was 93% in the quarter, and operating expenses were $979 million. Q1 software operating margin was up 190 basis points year-on-year to 78%. Moving on to cash flow. Free cash flow in the quarter was $8 billion and represented 41% of revenue. We spent $250 million on capital expenditures. We ended the first quarter with inventory of $3 billion as we continue to secure components to support strong AI demand. Our days of inventory on hand rose to 68 days in Q1 from 58 days in Q4, in anticipation of accelerating AI semiconductor growth. Turning to capital allocation. In Q1, we paid stockholders $3.1 billion of cash dividends based on a quarterly common stock cash dividend of $0.65 per share. During the quarter, we repurchased $7.8 billion, or approximately 23 million shares, of common stock. In total, in Q1, we returned $10.9 billion to shareholders through dividends and share repurchases. In Q2, we expect the non-GAAP diluted share count to be approximately 4.94 billion shares, excluding the impact of potential share repurchases. We ended the first quarter with $14.2 billion of cash. Today, we are announcing that our Board of Directors has authorized an additional $10 billion for our share repurchase program, effective through the end of calendar year 2026. Now moving on to guidance. Our guidance for Q2 is for consolidated revenue of $22 billion, up 47% year-on-year. We forecast semiconductor revenue of approximately $14.8 billion, up 76% year-on-year. Within this, we expect Q2 AI semiconductor revenue of $10.7 billion, up approximately 140% year-on-year. We expect Infrastructure Software revenue of approximately $7.2 billion, up 9% year-on-year. For your modeling purposes, we expect consolidated gross margin to be flat sequentially at 77%. We expect Q2 adjusted EBITDA to be approximately 68% of revenue. We expect the non-GAAP tax rate for Q2 of fiscal year 2026 to be approximately 16.5%, due to the impact of the global minimum tax and the geographic mix of income compared to that of fiscal year '25. That concludes my prepared remarks. Operator, please open up the call for questions.
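For readers following along with the figures, a quick cross-check of a few of the quoted numbers (a minimal Python sketch; all inputs are the rounded figures from the prepared remarks, so outputs are approximate):

```python
# Cross-checking Q1 figures quoted above using the round numbers as stated.
revenue_b = 19.3            # consolidated revenue, $B
fcf_b = 8.0                 # free cash flow, $B
dividends_b = 3.1           # cash dividends paid, $B
dividend_per_share = 0.65   # quarterly dividend, $/share
buyback_b = 7.8             # repurchases, $B
shares_repurchased_m = 23   # shares repurchased, millions

print(f"FCF margin: {fcf_b / revenue_b:.0%}")  # ~41% of revenue, as stated
print(f"Implied dividend share count: {dividends_b / dividend_per_share:.2f}B")  # ~4.77B
print(f"Implied average buyback price: ${buyback_b * 1000 / shares_repurchased_m:.0f}")  # ~$339
```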

Operator

And our first question will come from Blayne Curtis with Jefferies.

Blayne Curtis, Analyst

Just a clarification and a question. The clarification, Hock, is on the greater than $100 billion. I think you said AI chips; I just want to make sure you're clarifying the difference between the ASICs and networking, and I didn't know how rack revenue fits in there. And then the question: I think the biggest overhang on the group here is that your AI business roughly doubled in the quarter, which is about the rate cloud CapEx is growing this year. Given the outlook that you have for '27, you should be a share gainer. So I'm curious about your perspective on the pessimism among investors who think the hyperscalers need to get a return on investment this year or next year, or if not, the year after, and how you factor that into your outlook.

Hock Tan, President and CEO

What we have observed in recent months is that our customers, including both hyperscalers and non-hyperscalers, share a common goal of developing large language models, commercializing them, and creating platforms for enterprise use cases like code assistance or agentic AI, as well as consumer subscription models. We see a limited number of prospects, many of whom are now our customers, who are focused on creating these platforms, whether it's generative AI or agentic AI. We are experiencing increasingly strong demand for compute capacity for training, which is a constant requirement for them. Interestingly, there is also a significant demand for inference, which is essential for productizing and monetizing their latest LLMs. This inference demand is driving a considerable amount of compute capacity, which benefits us as our key customers are working on their custom accelerators and designing their own networking architecture for these accelerators. I anticipate that demand will continue to rise, especially given the announcements we have seen over the past six months. To clarify my earlier point, I believe our revenue in 2027 will be well over $100 billion, largely driven by chip sales, whether they are XPUs, switch chips, or DSPs, referring specifically to the silicon content we are focusing on.

Operator

One moment for our next question, and that will come from the line of Harlan Sur with JPMorgan.

Harlan Sur, Analyst

Congratulations to the team on the strong results. Hock, there's been a lot of noise around CSPs and hyperscalers embarking on their own internal XPU, TPU design efforts, right? We call it COT, or customer-owned tooling. This is not a new dynamic with ASICs, right? I think the Broadcom team has been through this COT competitive dynamic before over the 30 years, right, that you've been a leader in the ASIC industry. And very few of these COT initiatives have ever been successful. Now on AI, some of these COT initiatives are coming to the market now, but it looks like they're at least 2x less performant than your current generation solutions, 2x less complex in terms of chip design complexity, packaging complexity, IP. So maybe just a quick 2-part question. Hock, one for you is, given your visibility into next year, do you see these COT science projects taking any meaningful TPU, XPU share from Broadcom? And then maybe the second quick question for either you or Charlie is, given that Broadcom's TPU, XPU programs from a performance complexity IP perspective are 12 to 18 months ahead of any of these COT programs, how does the Broadcom team widen this gap further?

Hock Tan, President and CEO

That's a great question. I intentionally took the time in my opening remarks to point out that when any hyperscaler or LLM developer attempts to achieve complete self-sufficiency in creating customer-owned tooling, they encounter significant challenges. One major challenge is the technology required for producing the silicon chips, particularly XPUs, essential for computing and optimizing the workloads generated by their LLM. This technology comes from various dimensions. You need an exceptional silicon design team and cutting-edge SerDes along with advanced packaging. Additionally, it's crucial to know how to effectively network clusters of these components. We have been in the silicon business for over 20 years, and in the current generative AI landscape, if you're an LLM player designing your own chip, having a chip that is merely adequate isn’t enough. The competition demands the best chips available because you're vying against other LLM players, as well as NVIDIA, which consistently enhances its chip offerings with each new generation. As an LLM attempting to establish your platform, you need to create chips that are superior to or at least competitive with those from NVIDIA and other platform competitors. For that, we believe, and we see it firsthand, that you need a partner in silicon with the best technology, intellectual property, and execution capabilities. Modestly speaking, we believe we are far ahead in this regard, and we do not expect to see competition in customer-owned tooling for many years. While competition will eventually emerge, we are still far from that point because the race is ongoing.

Charlie Kawwas, President, Semiconductor Solutions Group

I think you covered it very well, Hock.

Operator

One moment for our next question, and that will come from the line of Ross Seymore with Deutsche Bank.

Ross Seymore, Analyst

Hock, you emphasized the networking differentiation more than before. I have two questions: First, what is driving the increase to 40% of AI revenues in the short term? Secondly, in the long term, is that percentage mix within the $100 billion plus changing? What level of leadership do you anticipate maintaining in that segment, whether it’s scale-out or scale-up? Additionally, is your leadership position there supporting your XPU initiatives by optimizing both the compute and networking aspects?

Hock Tan, President and CEO

Let's tackle the first part of your question. In networking, particularly with the latest GPUs and XPUs, we're achieving 200 gigabit SerDes in terms of bandwidth. The Tomahawk 6, which we launched around nine months ago, is unique in the market. Our customers and hyperscalers are aiming for the best networking solutions with maximum bandwidth for their clusters, leading to high demand for this exclusive 100 terabit per second switch. Additionally, we are scaling optical transceivers at 1.6 terabits and are the only company offering DSP at that level. This combination is accelerating the growth of our networking components even faster than the growth of our XPUs, which is already impressive. While I anticipate an eventual stabilization, we will maintain our current pace, as we plan to introduce the next-generation Tomahawk 7 in 2027, which will double performance and likely keep us ahead in the market, sustaining our momentum. To directly answer your question, I expect that in any quarter, AI networking components will comprise between 33% and 40% of our total AI revenue.

Operator

One moment for our next question, and that will come from the line of C.J. Muse with Cantor Fitzgerald.

Christopher Muse, Analyst

I'm curious, how are you thinking about the move to disaggregate prefill and decode from the GPU ecosystem and the impact to custom silicon demand? Are you seeing any potential changes in sort of the relative mix between GPUs and custom silicon?

Hock Tan, President and CEO

I'm not sure I fully understand your question, C.J. Could you clarify what you mean by disaggregate?

Christopher Muse, Analyst

Sure. Pushing off workloads to CPX for prefill and working off a Groq for decode and having that disaggregated kind of world. And does that put any pressure in terms of the demand for custom versus going with a full GPU stack?

Hock Tan, President and CEO

I understand what you're getting at with the term disaggregation, which initially confused me. Essentially, you're asking about the evolution of AI accelerator architecture, whether it's GPU or XPU, as workloads change. We're definitely observing this shift. The general-purpose GPU approach has its limits. It can still manage various workloads, like a mixture-of-experts technique, but GPUs are designed primarily for dense matrix multiplication, so techniques that exploit sparsity run less efficiently on them. While software kernels can support these techniques, they aren't as effective as custom silicon implementations. XPUs are designed to be more efficient for specific workloads, and similarly for inference tasks. This leads to greater customization of XPU designs tailored to the unique requirements of our specific LLM customers. The design is moving away from traditional GPU standards, which is why we previously said that XPUs will likely become the preferred option. They provide the flexibility to create designs suited for particular workloads, whether for training or inference. For instance, one might be optimized for prefill while another excels at post-training, reinforcement learning, or test-time scaling. You can adjust your XPUs to cater specifically to the kinds of LLM workloads needed. We're witnessing this development with all five of our customers.

Operator

One moment for our next question, and that will come from the line of Timothy Arcuri with UBS.

Timothy Arcuri, Analyst

I had just a question on sort of the puts and takes on gross margin as you begin to ship these racks. I mean, obviously, it's going to pull the blended margin down, but I'm wondering if there's any guardrails you can give us on this. It seems like the racks are maybe 45%, 50% gross margin. So I guess, should we think about that pulling gross margin down like 500 basis points roughly as these racks begin to ship? And I guess part of that, Hock, is there some like floor to the gross margin below which you wouldn't be willing to do more racks?

Hock Tan, President and CEO

I hate to tell you this, but you must be a bit mistaken. Our gross margin is solidly at the number Kirsten reported, and it will not be affected by more and more AI products going out. We have gotten our yields and our costs to the point where the model we have in AI will be fairly consistent with the model we have in the rest of the semiconductor business.

Kirsten Spears, CFO

I would agree with that. On further study, even relative to the comments I made last quarter, the impact on our overall mix is actually not going to be substantial at all. So I wouldn't worry about it.

Operator

One moment for our next question, and that will come from the line of Stacy Rasgon with Bernstein.

Stacy Rasgon, Analyst

I don't know if this is for Hock or Kirsten, but I wanted to dig in a little more to this substantially more than $100 billion next year. I'm trying to count up the gigawatts. I counted, I don't know, 8 or 9: you have 3 from Anthropic and 1 from OpenAI, so that's 4. You said Meta was multiple, so at least 2; that gets me to 6. Google, I figure, should be bigger than Meta, so at least 3; that's 9, and then you've got a few others. I had thought that your content per gigawatt was, call it, in a $20 billion per gigawatt range. I guess what I'm asking is: is my math around the gigawatts you plan to ship in '27 correct? And how do I think about your content per gigawatt as that ships? Maybe it will be substantially more than $100 billion.

Hock Tan, President and CEO

Stacy, I appreciate your interesting perspective. You're correct that it's more appropriate to consider gigawatts instead of dollars, as that's how we market our chips. It's important to note that, depending on the LLM customer, and we now have six, the dollar value of chip content per gigawatt can vary significantly. However, it's not far off from the figures you mentioned. Looking ahead to 2027, we anticipate approaching 10 gigawatts.
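As a back-of-the-envelope check of this exchange, the sketch below multiplies the analyst's gigawatt count by his assumed content per gigawatt. The per-customer counts and the ~$20 billion/GW figure are Stacy Rasgon's assumptions from the question above, not company guidance; Hock confirms only "approaching 10 gigawatts" and a per-gigawatt value "not far off":

```python
# Analyst's gigawatt math from the call; all inputs are assumptions
# quoted in the exchange above, not company guidance.
gigawatts_2027 = {
    "Anthropic": 3.0,  # "in excess of 3 gigawatts" for '27
    "OpenAI": 1.0,     # "over 1 gigawatt" first-generation deployment
    "Meta": 2.0,       # "multiple gigawatts" -- analyst's low case
    "Google": 3.0,     # analyst's guess: "bigger than Meta"
}
content_per_gw_usd_b = 20.0  # analyst's assumed ~$20B of chip content per GW

total_gw = sum(gigawatts_2027.values())  # ~9 GW counted; Hock: "approaching 10"
print(f"Counted gigawatts: {total_gw:.0f} GW")
print(f"Implied 2027 AI chip revenue: ${total_gw * content_per_gw_usd_b:.0f}B")
# ~9 GW x $20B/GW = ~$180B, consistent with "well over $100 billion"
```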

Operator

And our next question will come from the line of Ben Reitzes with Melius Research.

Benjamin Reitzes, Analyst

Hock, it's great to speak with you. I wanted to ask about your comments on supply visibility for the four major components through 2028. How did you arrive at this information? You're the first to provide insights extending to 2028. Additionally, after such impressive growth in 2027 for your AI business, do you have enough clarity to continue growing significantly in 2028 based on what you see in terms of supply?

Hock Tan, President and CEO

The best answer is, yes, you're right. We anticipated this sharp, accelerated growth. Nobody could have anticipated the full rate of growth we're showing, but we anticipated a large part of it, I guess, for longer than 6 months. We were early in being able to lock up T-glass, the infamous T-glass you've all heard about. We were very early. We've locked up substrates. We have worked with our good partners on the rest of the components we talked about. So the answer to your question is, it's partly early anticipation and partly the fact that we have very good partners in these key components. What else can I say except, yes. Charlie, do you want to add anything?

Charlie Kawwas, President, Semiconductor Solutions Group

Sure, a couple of quick points. You addressed that aspect very well. As Hock mentioned, we create custom silicon for six customers, and we have extensive strategic partnerships with them spanning multiple years. They provide us with insights regarding their expectations for at least the next two to three years, and sometimes up to four years. This foresight is precisely why we secured all the elements Hock referred to. Securing this involves investments with our partners and sometimes entails developing not only additional capacity but also the right technology and capacity for their needs. Therefore, we must ensure that we have these commitments in place for multiple years, and you are correct that we are likely the first to secure this through 2028 and beyond.

Benjamin Reitzes, Analyst

And can you grow in '28 with what you see in supply? Sorry to sneak that in.

Charlie Kawwas, President, Semiconductor Solutions Group

Yes.

Operator

Our next question will come from the line of Vivek Arya with Bank of America Securities.

Vivek Arya, Analyst

Hock, I just wanted to first clarify the Anthropic project you're doing, the $20 billion or so for 1 gigawatt this year, how much of that is chips and how much of that is kind of racks? I just wanted to understand when you say $100 billion in chips, is there a distinction between chips versus your rack scale projects because just that project is supposed to triple next year? And then my question is, your AI business is transitioning from kind of one large customer that was where you had kind of exclusive partnership to now multiple customers who are using multiple suppliers. So how do you get the visibility and the confidence about how your share will progress at these multiple customers? Because it's a very kind of fragmented engagement that they have across a whole range of cloud service providers and so on. So what are you doing to ensure that you have solid visibility and the right market share at this fragmented set of customers who are using multiple suppliers?

Hock Tan, President and CEO

Vivek, it's important to note that we have a very limited number of customers, specifically just six, for the revenue we are generating. Previously, it was even fewer. Each customer spends significant amounts and is heavily invested in their projects. That's why I mentioned Meta's MTIA custom accelerator program. For all our customers in this sector, their work is strategic rather than optional. They are focused on positioning their custom silicon within the development of large language models (LLMs) and the inference processes for productizing those LLMs. We have clear visibility in that area. In contrast, any GPU usage or cloud services are more transactional and seen as optional. You accurately pointed out the potential confusion, but it's not confusing for us or our customers. They are strategic and intentional about what they are building and the capacity they aim to develop each year, and their main concern is simply whether we can deliver results faster. Everything else is opportunistic in nature for them, so the distinction is quite clear. As for the split between chips and racks on that project, I'd rather not break that out, but we're okay. As Kirsten said, we're good on our dollars and margin.

Operator

One moment for our next question, and that will come from the line of Tom O'Malley with Barclays.

Thomas O'Malley, Analyst

I have one for Hock and one for Charlie. So Hock, I know you're very specific and particular about what you put in the preamble, and you noted that customers are staying at direct attached copper through 400 gig SerDes. Is there any reason you're pointing that out in particular, especially as a leading pioneer in CPO? And then on Charlie's side, as you're adding more customers here, I would imagine customers that design ASICs with you are going to use scale-up Ethernet. Maybe talk about scale-up protocols and how you see Ethernet developing there as well.

Hock Tan, President and CEO

Okay. I'm emphasizing that our technology uniquely positions us to assist not only our customers but also those using general-purpose GPUs. If you're working on creating large language models and designing your own AI data centers, you ideally want to connect XPUs directly to each other whenever possible. The most efficient way to achieve this is through direct attach copper, which offers the lowest latency, power consumption, and cost. It's crucial to maintain this approach as long as you can in a scale-up scenario, particularly within a rack or cluster domain. While we can transition to optical solutions for scaling out, we recommend using direct attach copper in scaling up. With the technology Broadcom provides, we can effectively connect XPU to XPU or GPU to GPU using copper, advancing from 100G to 200G and even 400G. We now have SerDes technology capable of 400G, enabling longer distances for copper connections in a rack. My main point is that there's no immediate need to pursue newer technologies like CPOs, even though we are leaders in that area. CPOs will develop over time, but not necessarily in the immediate future.

Charlie Kawwas, President, Semiconductor Solutions Group

Yes. No. Well said, Hock. And on the question of Ethernet: with the debut of the cloud, Ethernet became the de facto standard in every cloud for the last 2 decades. Then, with the debut of back-end networks, as Hock articulated, about 2 years ago there was a big fight about what protocol should be used to achieve the latency and the scale necessary on scale-out. The industry at the time, 24 months ago, was not clear. We were clear, very clear actually, about what the answer should be. And again, because of the deep engagements with our partners, they made it very clear to all of us and the industry, GPU or XPU, that Ethernet is the scale-out of choice, checkmark. Today, everyone is talking about scaling out with Ethernet. Now when it comes to scale-up, exactly like what happened with scale-out a few years ago, the question is what the right answer is. And what we're hearing consistently and what we're seeing is that the right answer is Ethernet. As you know, last year we announced, with multiple hyperscalers and many of our peers in the semiconductor industry, that Ethernet scale-up is the right choice. That's what we believe will happen. Time will tell, but in a lot of the XPU designs we're doing, we're being asked to scale up through Ethernet, and we're happy to enable that.

Operator

And our next question will come from the line of Jim Schneider with Goldman Sachs.

James Schneider, Analyst

Hock, it was helpful to hear you discuss the progress of your other full custom XPU engagements outside of TPUs. As we look into next year, is it fair to assume that those are mostly targeting inference applications or not? And then could you maybe qualitatively speak to the performance or cost advantages relative to GPUs that are giving those customers the ability to forecast at such a large scale?

Hock Tan, President and CEO

Thanks. Most of our customers start with inference because it's often the easiest way to begin, mainly due to the reduced computational requirements. The question arises whether there's a need for high-performance GPUs when custom inference silicon XPUs could accomplish the task more efficiently, effectively, and at a lower cost with reduced power consumption. We find that these customers are beginning with inference, but they are also advancing into training, with our XPUs being utilized for both purposes. These XPUs can serve as replacements for GPUs, which, while better suited for training, can also handle inference tasks. We are noticing that more advanced customers are beginning to develop two chips each year, one for training and one for inference, designed for specialization. This strategy is crucial because, for large language model (LLM) developers, after achieving a state-of-the-art LLM through training, focus must shift to productization through inference. However, this process takes about a year, during which time competitors may develop superior LLMs. Therefore, it's essential to invest in inference concurrently with training to foster progress toward greater intelligence in LLMs. Our insights are becoming clearer as we observe these six customers mature in their journey toward better LLMs. This is the trend we're observing. While not all six customers are at this stage yet, a majority are currently moving in that direction.

Operator

One moment for our next question, and that will come from the line of Joshua Buchalter with TD Cowen.

Joshua Buchalter, Analyst

Congrats on the results. Appreciate all the details on the expectations for deployments at specific customers. I was hoping you could just maybe reflect on how visibility has changed over the last 1 to 2 quarters that gave you the confidence to give us more details. And then on a specific one, you mentioned greater than 1 gigawatt for OpenAI in 2027. With that deal being for 10 gigawatts through 2029, that implies a pretty sharp inflection, I guess, in 2028. Is that the right way to think about it? And was that sort of always the plan?

Hock Tan, President and CEO

Yes, as we've observed in the ongoing development of generative AI, it’s not really a race but rather a progression among key players. Each competitor is striving to create a more advanced large language model tailored for specific purposes, whether for enterprise, consumer, or search applications. This process goes beyond just training models; it also involves inference for product development and monetization. Having collaborated with several of these clients for over two years, we are gaining better insights as they increasingly trust that the XPUs they are developing with us are meeting their needs. With this confidence in our XPU silicon and the software and algorithms, we see continuous improvement. As our clients gain confidence, we also gain visibility, which is crucial. As Charlie noted, we only have six key customers, and they are all approaching the XPUs and AI strategically, considering multiple generations and years ahead. Despite the noise around available options, they are focused on long-term deployment of the XPUs we create together to enhance their large language models and monetize them effectively. We are integrated into their strategic planning rather than being just an optional choice for cloud training. The long-term investments these customers are making in this technology are significant, and we feel fortunate to be part of that enduring strategy rather than a transactional one. In summary, our XPU business represents a sustainable strategic venture for the six customers we currently engage with.

Operator

That is all the time we have for Q&A today. I would now like to turn the call back over to Ji Yoo for any closing remarks.

Ji Yoo, Head of Investor Relations

Thank you, Sherry. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2026 after the close of market on Wednesday, June 3, 2026. A public webcast of Broadcom's earnings conference call will follow at 2:00 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.

Operator

This concludes today's program. Thank you all for participating. You may now disconnect.
