Broadcom Inc
Broadcom Inc., a Delaware corporation headquartered in San Jose, CA, is a global technology leader that designs, develops and supplies a broad range of semiconductor and infrastructure software solutions. Broadcom's category-leading product portfolio serves critical markets including data center, networking, enterprise software, broadband, wireless, storage and industrial. Our solutions include data center networking and storage; enterprise, mainframe and cybersecurity software focused on automation, monitoring and security; smartphone components; telecoms; and factory automation.
AVGO's revenue grew at an 18.9% CAGR over the last 6 years.
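The stated 18.9% CAGR can be sanity-checked with the standard compound-growth formula. The starting-revenue figure below is an illustrative assumption, not a number from this page; only the mechanics of the calculation are shown.

```python
# Back-of-envelope check of the stated 18.9% revenue CAGR over 6 years.
# base_revenue is an assumed, illustrative starting point (not from this page).
base_revenue = 20.85e9   # assumed FY starting revenue, for illustration only
cagr = 0.189
years = 6

# Compounding forward at the stated rate:
ending = base_revenue * (1 + cagr) ** years
print(f"Implied ending revenue: ${ending / 1e9:.1f}B")

# Inverting the relationship recovers the CAGR from the two endpoints:
implied_cagr = (ending / base_revenue) ** (1 / years) - 1
print(f"Recovered CAGR: {implied_cagr:.1%}")
```

The inversion in the last step is how a CAGR like the one quoted above would be derived from two reported annual revenue figures.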
Broadcom Inc (AVGO) — Q2 2025 Earnings Call Transcript
Thank you, operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the second quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the Investors section of Broadcom's website. This conference call is being webcast live, and an audio replay of the call can be accessed for 1 year through the Investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our second quarter fiscal year 2025 results, guidance for our third quarter of fiscal year 2025 as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I will now turn the call over to Hock.
Thank you, Ji, and thank you, everyone, for joining us today. In our fiscal Q2 2025, total revenue was a record $15 billion, up 20% year-on-year. This 20% year-on-year growth was all organic as Q2 last year was the first full quarter with VMware. Now revenue was driven by continued strength in AI semiconductors and the momentum we have achieved in VMware. Now reflecting excellent operating leverage, Q2 consolidated adjusted EBITDA was $10 billion, up 35% year-on-year. Now let me provide more color. Q2 semiconductor revenue was $8.4 billion, with growth accelerating to 17% year-on-year, up from 11% in Q1. And of course, driving this growth was AI semiconductor revenue of over $4.4 billion, which is up 46% year-on-year and continues the trajectory of 9 consecutive quarters of strong growth. Within this, custom AI accelerators grew double digits year-on-year, while AI networking grew over 170% year-on-year. AI networking, which is based on Ethernet, was robust and represented 40% of our AI revenue. As a standards-based open protocol, Ethernet enables one single fabric for both scale out and scale up and remains the preferred choice of our hyperscale customers. Our networking portfolio of Tomahawk switches, Jericho routers and NICs is what's driving our success within AI clusters in hyperscalers. And the momentum continues with our breakthrough Tomahawk 6 switch just announced this week. This represents the next generation of switch capacity at 102.4 terabits per second. Tomahawk 6 enables clusters of more than 100,000 AI accelerators to be deployed in just 2 tiers instead of 3. This flattening of the AI cluster is huge because it enables much better performance in training next-generation frontier models through lower latency, higher bandwidth and lower power. Turning to XPUs or custom accelerators. We continue to make excellent progress on the multiyear journey of enabling our 3 customers and 4 prospects to deploy custom AI accelerators.
As we had articulated over 6 months ago, we eventually expect at least 3 customers to each deploy 1 million AI accelerator clusters in 2027, largely for training their frontier models. And we forecast, and continue to forecast, that a significant percentage of these deployments will be custom XPUs. These partners are still unwavering in their plan to invest despite the uncertain economic environment. In fact, what we've seen recently is that they are doubling down on inference in order to monetize their platforms. And reflecting this, we may actually see an acceleration of XPU demand into the back half of 2026 to meet urgent demand for inference on top of the demand we have indicated from training. And accordingly, we do anticipate now our fiscal 2025 growth rate of AI semiconductor revenue to sustain into fiscal 2026. Turning to our Q3 outlook. As we continue our current trajectory of growth, we forecast AI semiconductor revenue to be $5.1 billion, up 60% year-on-year, which would be the 10th consecutive quarter of growth. Now turning to non-AI semiconductors in Q2. Revenue of $4 billion was down 5% year-on-year. Non-AI semiconductor revenue is close to the bottom and has been relatively slow to recover, but there were bright spots. In Q2, broadband, enterprise networking and server storage revenues were up sequentially. However, industrial was down and as expected, wireless was also down due to seasonality. In Q3, we expect enterprise networking and broadband to continue to grow sequentially, but server storage, wireless and industrial are expected to be largely flat. And overall, we forecast non-AI semiconductor revenue to stay around $4 billion. Now let me talk about our infrastructure software segment. Q2 infrastructure software revenue of $6.6 billion was up 25% year-on-year, above our outlook of $6.5 billion. As we have said before, this growth reflects our success in converting our enterprise customers from perpetual vSphere to the full VCF software stack subscription.
Customers are increasingly turning to VCF to create a modernized private cloud on-prem, which will enable them to repatriate workloads from public clouds while being able to run modern container-based applications and AI applications. Of our 10,000 largest customers, over 87% have now adopted VCF. The momentum from strong VCF sales over the past 18 months since the acquisition of VMware has created annual recurring revenue, otherwise known as ARR, growth of double digits in our core infrastructure software. In Q3, we expect infrastructure software revenue to be approximately $6.7 billion, up 16% year-on-year. So in total, we are guiding Q3 consolidated revenue to be approximately $15.8 billion, up 21% year-on-year. We expect Q3 adjusted EBITDA to be at least 66% of revenue.
Thank you, Hock. Let me now provide additional detail on our Q2 financial performance. Consolidated revenue was a record $15 billion for the quarter, up 20% from a year ago. Gross margin was 79.4% of revenue in the quarter, better than we originally guided, driven by product mix. Consolidated operating expenses were $2.1 billion, of which $1.5 billion was related to R&D. Q2 operating income of $9.8 billion was up 37% from a year ago, with operating margin at 65% of revenue. Adjusted EBITDA was $10 billion or 67% of revenue, above our guidance of 66%. This figure excludes $142 million of depreciation. Now a review of the P&L for our 2 segments. Starting with semiconductors. Revenue for our semiconductor solutions segment was $8.4 billion, with growth accelerating to 17% year-on-year, driven by AI. Semiconductor revenue represented 56% of total revenue in the quarter. Gross margin for our semiconductor solutions segment was approximately 69%, up 140 basis points year-on-year, driven by product mix. Operating expenses increased 12% year-on-year to $971 million on increased investment in R&D for leading-edge AI semiconductors. Semiconductor operating margin of 57% was up 200 basis points year-on-year. Now moving on to infrastructure software. Revenue for infrastructure software of $6.6 billion was up 25% year-on-year and represented 44% of total revenue. Gross margin for infrastructure software was 93% in the quarter compared to 88% a year ago. Operating expenses were $1.1 billion in the quarter, resulting in infrastructure software operating margin of approximately 76%. This compares to operating margin of 60% a year ago. This year-on-year improvement reflects our disciplined integration of VMware. Moving on to cash flow. Free cash flow in the quarter was $6.4 billion and represented 43% of revenue. Free cash flow as a percentage of revenue continues to be impacted by increased interest expense from debt related to the VMware acquisition and increased cash taxes.
We spent $144 million on capital expenditures. Days sales outstanding were 34 days in the second quarter compared to 40 days a year ago. We ended the second quarter with inventory of $2 billion, up 6% sequentially in anticipation of revenue growth in future quarters. Our days of inventory on hand were 69 days in Q2 as we continue to remain disciplined on how we manage inventory across the ecosystem. We ended the second quarter with $9.5 billion of cash and $69.4 billion of gross principal debt. Subsequent to quarter end, we repaid $1.6 billion of debt, resulting in gross principal debt of $67.8 billion. The weighted average coupon rate and years to maturity of our $59.8 billion in fixed rate debt are 3.8% and 7 years, respectively. The weighted average interest rate and years to maturity of our $8 billion in floating rate debt are 5.3% and 2.6 years, respectively. Turning to capital allocation. In Q2, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share. In Q2, we repurchased $4.2 billion or approximately 25 million shares of common stock. In Q3, we expect the non-GAAP diluted share count to be approximately 4.97 billion shares, excluding the potential impact of any share repurchases. Now moving on to guidance. Our guidance for Q3 is for consolidated revenue of $15.8 billion, up 21% year-on-year. We forecast semiconductor revenue of approximately $9.1 billion, up 25% year-on-year. Within this, we expect Q3 AI semiconductor revenue of $5.1 billion, up 60% year-on-year. We expect infrastructure software revenue of approximately $6.7 billion, up 16% year-on-year. For modeling purposes, we expect Q3 consolidated gross margin to be down approximately 130 basis points sequentially, primarily reflecting a higher mix of XPUs within AI revenue. As a reminder, consolidated gross margins through the year will be impacted by the revenue mix of infrastructure software and semiconductors.
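As a rough cross-check, the capital-return figures in the remarks above are internally consistent: the dividend outlay divided by the per-share dividend implies a share count in the same ballpark as the ~4.97 billion non-GAAP diluted share guide (the diluted count includes unvested RSUs, so it runs somewhat higher).

```python
# Consistency check on the capital-return figures quoted in the remarks.
dividends_paid = 2.8e9        # cash dividends paid in Q2
dividend_per_share = 0.59     # quarterly dividend per share
implied_shares = dividends_paid / dividend_per_share
print(f"Implied shares outstanding: {implied_shares / 1e9:.2f}B")  # ~4.75B

# Average price paid in the buyback: $4.2B spent on ~25M shares.
buyback_spend = 4.2e9
shares_repurchased = 25e6
avg_price = buyback_spend / shares_repurchased
print(f"Implied average repurchase price: ${avg_price:.0f}")
```

Both outputs are approximations driven by the rounded figures given on the call, not exact reported values.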
We expect Q3 adjusted EBITDA to be at least 66% of revenue. We expect the non-GAAP tax rate for Q3 and fiscal year 2025 to remain at 14%. And with this, that concludes my prepared remarks. Operator, please open up the call for questions.
Hock, I wanted to jump on to the AI side and specifically some of the commentary you had about next year. Can you just give a little bit more color on the inference commentary you gave? And is it more the XPU side, the connectivity side or both that's given you the confidence to talk about the growth rate that you have this year being matched next fiscal year?
Thank you, Ross. Good question. I think we're indicating that what we are seeing, and where we increasingly have quite a bit of visibility, is increased deployment of XPUs next year, much more than we originally thought, and hand-in-hand with it, of course, more and more networking. So it's a combination of both.
Great job on the quarterly execution. Hock, good to see the positive growth inflection quarter-over-quarter, year-over-year growth rates in your AI business. As the team has mentioned, right, the quarters can be a bit lumpy. So if I smooth out kind of first 3 quarters of this fiscal year, your AI business is up 60% year-over-year. It's kind of right in line with your 3-year kind of SAM growth CAGR, right? Given your prepared remarks and knowing that your lead times remain at 35 weeks or better, do you see the Broadcom team sustaining the 60% year-over-year growth rate exiting this year? And I assume that, that potentially implies that you see your AI business sustaining the 60% year-over-year growth rate into fiscal '26, again, based on your prepared commentary, which again is in line with your SAM growth figure. Is that kind of a fair way to think about the trajectory this year and next year?
Harlan, that's a very insightful set of analysis here, and that's exactly what we're trying to do here, because over 6 months ago, we gave you guys a point, a year, 2027. As we come into the second half of 2025, and with the improved visibility and updates we are seeing in the way our hyperscale partners are deploying data centers and AI clusters, we are providing you some level of guidance, some visibility, into what we are seeing and how the trajectory of '26 might look. I'm not giving you any update on '27. We're simply staying with the outlook we gave for '27 six months ago. But what we're doing now is giving you more visibility into where we're seeing '26 headed.
AI networking was really strong in the quarter, and it seemed like it must have beat expectations. I was wondering if you could just talk about the networking in particular, what caused that? And how much of that is your acceleration into next year? And when do you think you will see Tomahawk contributing to that acceleration?
I believe AI networking is closely linked to the deployment of AI accelerator clusters. The deployment schedule for these is similar to that of the accelerators, whether they are XPUs or GPUs. This is progressing, especially in scale-out scenarios where Ethernet is the preferred protocol. However, it's also increasingly shifting towards what we refer to as scale-up within data centers. Here, the consumption and density of switches are significantly higher than initially anticipated in scale-out scenarios. In fact, the increased density in scale-up is 5 to 10 times greater than in scale-out. This was a pleasant surprise for us, which is reflected in AI networking maintaining about a 40% share of AI revenue, similar to what we reported last quarter for Q1. At that time, I predicted it would decrease, but it hasn't.
And your thoughts on Tomahawk driving acceleration for next year and when it kicks in?
Tomahawk 6, yes, there's extremely strong interest. Now we're not shipping big orders, or any orders other than basic proofs of concept out to customers, but there is tremendous demand for these new 102.4 terabits per second Tomahawk switches.
Great results. I just wanted to ask maybe following up on the scale-out opportunity. So today, I guess your main customer is not really using kind of an NVLink switch style scale-up. I'm just kind of curious your visibility or the timing in terms of when you might be shipping a switched Ethernet scale-up network to your customers?
You're talking scale up? Yes. Well, scale up is very rapidly converting to Ethernet now, very much so. For our fairly narrow band of hyperscale customers, scale up is very much Ethernet.
Hock, I still wanted to follow up on that AI 2026 question. I want to just put some numbers on it, just to make sure I got it right. So if you did 60% in the first 3 quarters of this year, if you grow 60% year-over-year in Q4, it puts you at like, I don't know, $5.8 billion, something like $19 billion or $20 billion for the year. And then are you saying you're going to grow 60% in 2026 would put you $30 billion plus in AI revenues for 2026? I'm just wondering, is that the math that you're trying to communicate to us directly?
I think you're doing the math. I'm giving you the trend. But I did answer that question, which I think Harlan asked earlier. The rate we are seeing so far in fiscal '25 will presumably continue; we don't see any reason why it shouldn't, given our lead-time visibility in '25. And what we are seeing today, based on the visibility we have on '26, is that we are able to ramp this AI revenue along the same trajectory. Yes. I'm not playing a SAM game here. I'm just giving a trajectory toward where we drew the line on '27 before. So I have no response on whether the SAM is going up or not. Let's stop talking about SAM now.
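The run-rate arithmetic the analyst lays out can be sketched as follows. The Q2 ($4.4B) and Q3 guide ($5.1B) come from the prepared remarks; the Q1 figure and the Q4 extrapolation are assumptions consistent with the analyst's framing, not company guidance.

```python
# The analyst's back-of-envelope: AI revenue at ~60% growth through FY25,
# then the same trajectory sustained into FY26.
q1_ai = 4.1e9            # assumed Q1 AI revenue (not stated in this transcript)
q2_ai = 4.4e9            # from the prepared remarks
q3_ai_guide = 5.1e9      # Q3 guidance from the prepared remarks
q4_extrapolated = 5.8e9  # the analyst's ~$5.8B extrapolation, not guidance

fy25_ai = q1_ai + q2_ai + q3_ai_guide + q4_extrapolated
fy26_ai = fy25_ai * 1.6  # sustaining ~60% growth into fiscal '26

print(f"FY25 AI revenue: ${fy25_ai / 1e9:.1f}B")  # lands in the $19-20B range cited
print(f"FY26 at +60%:    ${fy26_ai / 1e9:.1f}B")  # lands above $30B, as the analyst suggests
```

This is only a reconstruction of the question's math; Hock's answer confirms the trajectory, not the specific dollar figures.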
I had a near and then a longer-term question on the XPU business. So Hock, for near term, if your networking upside in Q2 and overall AI was in line, it means XPU was perhaps not as strong. So I realize it's lumpy, but anything more to read into that, any product transition or anything else? So just a clarification there. And then longer term, you have outlined a number of additional customers that you're working with. What milestones should we look forward to? And what milestones are you watching to give you the confidence that you can now start adding their addressable opportunity into your '27 or '28 or other numbers? Like how do we get the confidence that these projects are going to turn into revenue in some reasonable time frame from now?
Okay. Regarding your first question, it feels like you're trying to count how many angels can fit on a pinhead. Networking is currently thriving, but that doesn't imply that XPU is declining. It's progressing exactly as we anticipated, without any fluctuations or softness. The trajectory remains consistent with our expectations for this quarter and likely extends into the next and beyond. We have clear visibility on the short-term trajectory. As for 2027, we are not changing any projections at this moment. Six months ago, we estimated the size of the Serviceable Available Market based on a million GPU XPU clusters across three customers, and that estimate still holds. We haven't provided any new updates and don't plan to do so right now. When we gain clearer visibility, which will probably not be until 2026, we will be happy to share updates. For now, in today’s prepared remarks and in response to some questions, we intend to provide more insight into the growth trajectory we see for 2026.
I was hoping to follow up on Ross's question regarding inference opportunity. Can you discuss workloads that are optimal that you're seeing for custom silicon? And that over time, what percentage of your XPU business could be inference versus training?
I believe there is no distinction between training and inference when it comes to using merchant accelerators as opposed to custom accelerators. The main idea behind adopting custom accelerators continues to hold; it's not just about cost. As custom accelerators are utilized and developed in collaboration with specific hyperscalers, there is a learning process involved. This learning process includes optimizing how algorithms for their large language models are written and integrated with the hardware. This capability is a significant advantage in developing algorithms that enhance the performance of their LLMs, far exceeding a simple separation of hardware and software. It is essential to merge both hardware and software throughout this journey. This learning process does not occur overnight; it takes several iterations to improve continuously. The true benefit of creating proprietary hardware lies in the ability to tailor your software to the hardware, ultimately achieving much higher performance than would be possible otherwise. We are already witnessing this in practice.
Hock, you spoke about the much higher content opportunity in scale-up networking. I was hoping you could discuss how important is demand adoption for co-packaged optics in achieving this 5 to 10x higher content for scale-up networks? Or should we anticipate much of the scale-up opportunity will be driven by Tomahawk and Thor NICs?
I'm trying to understand your question, so let me respond in a way that I think clarifies things. A lot of the scaling that’s happening, particularly with GPU interconnects, is currently done using copper interconnects. The size of these scale-up clusters isn't large enough yet to move away from copper, and that's still the primary method in use today. However, I believe that as we aim to exceed around 72 GPU interconnects, we may have to transition to a different protocol and medium, shifting from copper to optical interconnects. This shift may make co-packaged optics relevant, although there are other options available, such as continuing to use pluggable low-cost optics. This would allow us to interconnect a switch with a capacity of 512 connections, facilitating a far greater scale-up scenario. This change is likely to occur within the next couple of years, and we plan to be at the forefront of it. It could involve co-packaged optics, which we are actively developing, or it might begin with pluggable optics. The essential question is when we will transition from copper to optical for GPU interconnects. This advancement will be significant, but it doesn’t necessarily hinge solely on co-packaged optics. That is certainly one direction we are exploring.
I realize it's a bit nitpicky, but I wanted to ask about gross margins in the guide. So your revenue implies sort of $800 million incremental increase with gross profit up, I think, $400 million to $450 million, which is kind of pretty well below corporate average fall-through. I appreciate that semis is dilutive and custom is probably dilutive within semis. But anything else going on with margins that we should be aware of? And how should we think about the margin profile of custom longer term as that business continues to scale and diversify?
Yes. We've historically said that the XPU margins are slightly lower than the rest of the business other than wireless. And so there's really nothing else going on other than that. It's exactly what I said: the 130 basis point sequential decline is being driven mostly by a higher mix of XPUs.
There are more moving parts here than your simple analysis suggests. And I think your simple analysis is totally wrong in that regard.
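For reference, the fall-through arithmetic embedded in the analyst's question works out as below. This models only the numbers as stated in the question; the additional mix effects Hock alludes to are not captured.

```python
# Incremental gross margin implied by the question: ~$800M of incremental
# revenue against ~$400-450M of incremental gross profit.
incremental_revenue = 800e6
corporate_gross_margin = 0.794  # Q2 consolidated gross margin from the call

for incremental_gp in (400e6, 450e6):
    fall_through = incremental_gp / incremental_revenue
    print(f"Incremental gross margin at ${incremental_gp / 1e6:.0f}M "
          f"gross profit: {fall_through:.0%}")  # 50% and 56%, well below 79.4%
```

A 50-56% incremental margin against a 79.4% corporate gross margin is what makes the guide look dilutive, which is consistent with the guided ~130 basis point sequential decline on XPU mix.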
I also wanted to ask about scale up, Hock. So there's a lot of competing ecosystems. There's UALink, which, of course, you left. And now there's the big GPU company opening up NVLink, and they're both trying to build ecosystems, and there's an argument that you're an ecosystem of one. What would you say to that debate? Does opening up NVLink change the landscape? And sort of how do you view of your AI networking growth next year? Do you think it's going to be primarily driven by scale up? Or will it still be pretty scale-out heavy?
People like to develop platforms and new protocols and systems. The reality is that scaling up can be done easily and is currently possible. It's based on open standards, open source, and Ethernet. Additionally, there's no need to create new systems just for the sake of it when networking in Ethernet can accomplish the same tasks. I hear a lot about interesting new protocols and standards being proposed, many of which are proprietary despite claims to the contrary. The true open source and open standards are embodied by Ethernet, and we believe it will continue to dominate as it has over the past 20 years in traditional networking. There is no justification for developing a new standard when transferring bits and bytes of data can be easily achieved.
Yes. My question is for you, Hock. It's kind of a bigger picture one here. This acceleration that we're seeing in AI demand, do you think it's because of a marked improvement in ASICs, with XPUs closing the gap on the software side at your customers? Or is it the tokenomics around inference, test-time compute driving that, for example? What do you think is actually driving the upside here? And do you think it leads to a market share shift from GPU toward XPU faster than we were expecting?
Yes, it's an interesting question, but none of the points you mentioned really apply. The reason inference has become very prominent lately is that we are only selling to a select few customers, specifically hyperscalers with platforms and large language models. That's the extent of our customer base, and we haven't added any new ones. However, these hyperscalers and those utilizing large language models need to validate their spending. Training enhances the performance of frontier models, akin to research in science. Developing sophisticated algorithms that require significant compute resources leads to smarter models. The goal is to monetize inference, and this is what's driving the trend. As I noted earlier, the need to justify investments largely centers around training, and the return on that investment comes from creating numerous AI use cases. We're beginning to see this among our limited group of clients.
Hock, just going back on the AI server revenue side. I know you said fiscal '25 is kind of tracking to that up 60%-ish growth. If you look at fiscal '26, you have many new customers ramping, and probably you have the 4 of the 6 hyperscalers that you have talked about in the past. Would you expect that growth to accelerate into fiscal '26 beyond that 60% you talked about?
My prepared remarks indicated that the growth rate we expect in 2025 will continue into 2026, supported by improved visibility and the increasing demand for inference alongside training as our clusters expand. I believe this projection stands firm, and we anticipate that the transition from 2025 to 2026 represents our most reliable forecast at this time.
I think all my questions on scale-up have been asked. But I guess, Hock, given the execution that you guys have been able to do with the VMware integration, looking at the balance sheet, looking at the debt structure, I'm curious if you could give us your thoughts on how the company thinks about capital return versus the thoughts on M&A and the strategy going forward.
That's an interesting question, and I would say it's timely. We've made significant progress with the integration of VMware, which is reflected in our free cash flow from operations. As we've mentioned, our capital usage focuses on providing returns through dividends, which account for half of our free cash flow from the previous year. As Kirsten mentioned in previous earnings calls, our priority for the other half of the free cash flow is to reduce our debt to achieve a ratio of no more than 2 to 1 for debt to EBITDA. However, we may also choose to buy back shares opportunistically, as we did last quarter when we repurchased $4.2 billion in stock. Part of this buyback is to cover taxes on vested employee RSUs, but we also took advantage of favorable market conditions to repurchase shares. Overall, our cash usage outside of dividends will primarily focus on reducing our debt. Regarding M&A, we consider significant deals that would likely require taking on debt, making it a prudent use of our free cash flow to lower our debt and maintain or expand our borrowing capacity for future acquisitions.
Hock, a couple of clarifications. First, on your 2026 expectation, are you assuming any meaningful contribution from the 4 prospects that you talked about?
No comment. We don't talk about prospects. We only talk about customers.
Okay. Fair enough. And then my other clarification is that I think you talked about networking being about 40% of the mix within AI. Is that the right kind of mix that you expect going forward? Or is that going to materially change as we, I guess, see XPUs ramping going forward?
No. I've always said, and I expect this to be the case going forward in '26 as we grow, that networking as a ratio to XPU should be closer to the range of less than 30%, not the 40%.
You mentioned that you wouldn't be affected by export controls on AI. Considering the many changes in the industry since your last call, is that still accurate? Can you assure everyone that there will be no future impact from this?
Nobody can provide reassurance in this environment, Joe. However, the rules are changing significantly as bilateral trade agreements are being negotiated in a very dynamic landscape. To be honest, I don't know much more than you do, and I might even know less regarding export controls and how they will be implemented. We're making educated guesses. Therefore, I prefer not to speculate on that because I really don't have clarity on whether it will be an issue.
I wanted to ask about VMware. Can you comment as to how far along you are in the process of converting customers to the subscription model? Is that close to complete? Or is there still a number of quarters that we should expect that, that conversion continues?
That's a good question. A reliable way to assess this is that most of our VMware contracts typically last around 3 years, which was the standard before we acquired them and is what we continue to do. Therefore, regarding the renewals, we are about two-thirds of the way through, which means we have roughly another year, or possibly 1.5 years, remaining for the process.
Thank you, operator. Broadcom currently plans to report its earnings for the third quarter of fiscal year 2025 after close of market on Thursday, September 4, 2025. A public webcast of Broadcom's earnings conference call will follow at 2:00 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.