
Broadcom Inc

Exchange: NASDAQ · Sector: Technology · Industry: Semiconductors

Broadcom Inc., a Delaware corporation headquartered in San Jose, CA, is a global technology leader that designs, develops and supplies a broad range of semiconductor and infrastructure software solutions. Broadcom's category-leading product portfolio serves critical markets including data center, networking, enterprise software, broadband, wireless, storage and industrial. Our solutions include data center networking and storage, enterprise, mainframe and cyber security software focused on automation, monitoring and security, smartphone components, telecoms and factory automation.

Did you know?

AVGO's revenue grew at an 18.9% CAGR over the last 6 years.
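As a quick reference, CAGR is the constant yearly rate that compounds to the total growth over the period. A minimal sketch of the calculation, using the $68.28B TTM revenue from the valuation table and a hypothetical starting revenue of about $24.2B six years earlier (the starting figure is back-solved for illustration only, not a reported number):

```python
def cagr(begin: float, end: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    turns `begin` into `end` over `years` years."""
    return (end / begin) ** (1 / years) - 1

# Hypothetical figures for illustration (in $ billions):
# starting revenue ~$24.2B, ending TTM revenue $68.28B, 6 years.
rate = cagr(24.2, 68.28, 6)
print(f"{rate:.1%}")  # ~18.9%
```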

Current Price

$354.91

+1.22%

GoodMoat Value

$220.56

37.9% overvalued
Profile

Valuation (TTM)

Market Cap   $1.68T     P/E        67.38
EV           $1.58T     P/B        20.70
Shares Out   4.74B      P/Sales    24.64
Revenue      $68.28B    EV/EBITDA  46.47
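The multiples above are linked by simple identities: market cap is price times shares outstanding, and price-to-sales is market cap divided by revenue. A quick sketch cross-checking the table's figures against each other (values taken from this page; small rounding differences are expected):

```python
price = 354.91          # current share price, $
shares_out = 4.74e9     # shares outstanding
revenue = 68.28e9       # TTM revenue, $

market_cap = price * shares_out          # ~ $1.68T
price_to_sales = market_cap / revenue    # ~ 24.64

print(f"Market cap: ${market_cap / 1e12:.2f}T")
print(f"P/Sales:    {price_to_sales:.2f}")
```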

Broadcom Inc (AVGO) — Q1 2025 Earnings Call Transcript

Apr 4, 2026 · 15 speakers · 5,626 words · 32 segments
Gu, Head of Investor Relations

Thank you, Sherry, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the first quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the investors section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our first quarter fiscal year 2025 results, guidance for our second quarter of fiscal year 2025, as well as commentary regarding the business environment. We will take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to US GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hock.

Hock Tan, President and CEO

Thank you, Gu. And thank you, everyone, for joining today. In our fiscal Q1 2025, total revenue was a record $14.9 billion, up 25% year on year. And consolidated adjusted EBITDA was again a record, $10.1 billion, up 41% year on year. So let me first provide color on our semiconductor business. Q1 semiconductor revenue was $8.2 billion, up 11% year on year. Growth was driven by AI, as AI revenue of $4.1 billion was up 77% year on year. We beat our guidance for AI revenue of $3.8 billion on stronger shipments of networking solutions to hyperscalers for AI. Our hyperscale partners continue to invest aggressively in next-generation Frontier models, which do require high-performance accelerators as well as AI data centers with larger clusters. Consistent with this, we are stepping up our R&D investment on two fronts. One, we're pushing the envelope of technology in creating the next generation of accelerators. We're taping out the industry's first two-nanometer AI XPU in 3.5D packaging, as we drive towards a 10,000-teraflops XPU. Two, we have a view towards scaling clusters of 500,000 accelerators for hyperscale customers. We have doubled the radix capacity of the existing Tomahawk 5. Beyond this, to enable AI clusters to scale up on Ethernet towards one million XPUs, we have taped out our next-generation 100-terabit Tomahawk 6 switch, running 200G SerDes with 1.6-terabit bandwidth. We will be delivering samples to customers within the next few months. These R&D investments are closely aligned with the roadmaps of our three hyperscale customers as they each race towards one-million-XPU clusters by the end of 2027. Accordingly, we reaffirm what we said last quarter: we expect these three hyperscale customers will generate a serviceable addressable market, or SAM, in the range of $60 billion to $90 billion in fiscal 2027.
Beyond these three customers, we had also mentioned previously that we are deeply engaged with two other hyperscalers in enabling them to create their own customized AI accelerators. We are on track to tape out their XPUs this year. In the process of working with the hyperscalers, it has become clear that while they are excellent in software, Broadcom is the best in hardware, and working together is what optimizes large language models. It is therefore no surprise to us that, since our last earnings call, two additional hyperscalers have selected Broadcom to develop custom accelerators to train their next-generation Frontier models. So even as we have three hyperscale customers we are shipping XPUs to in volume today, there are now four more who are deeply engaged with us to create their own accelerators. And to be clear, these four are not included in our estimated SAM of $60 billion to $90 billion in 2027. So we do see an exciting trend here. New Frontier models and techniques put unexpected pressures on AI systems. It is difficult to serve all classes of models with a single system design point, and it is therefore hard to imagine that a general-purpose accelerator can be configured and optimized across multiple Frontier models. As I mentioned before, the trend towards XPUs is a multiyear journey. So coming back to 2025, we see a steady ramp in the deployment of our XPUs and networking products. Q1 AI revenue was $4.1 billion, and we expect Q2 AI revenue to grow to $4.4 billion, up 44% year on year. Turning to non-AI semiconductors, revenue of $4.1 billion was down 9% sequentially on a seasonal decline in wireless. In aggregate, the recovery in non-AI semiconductors continues to be slow. Broadband, which bottomed in Q4 2024, showed a double-digit sequential recovery in Q1 and is expected to be up similarly in Q2 as service providers and telcos step up spending.
Server storage was down single digits sequentially in Q1 but is expected to be up high single digits sequentially in Q2. Meanwhile, enterprise networking remains flattish in the first half of fiscal 2025 as customers continue to work through channel inventory. Wireless was down sequentially on a seasonal decline but remained flat year on year; in Q2, wireless is expected to be the same, flat again year on year. Resales in industrial were down double digits in Q1 and are expected to be down in Q2. Reflecting the foregoing puts and takes, we expect non-AI semiconductor revenue in Q2 to be flattish sequentially, even though we are seeing bookings continue to grow year on year. In summary, for Q2, we expect total semiconductor revenue to grow 2% sequentially, up 17% year on year to $8.4 billion. Turning now to infrastructure software. Q1 infrastructure software revenue of $6.7 billion was up 47% year on year and up 15% sequentially, though this was exaggerated by deals that slipped from Q4 into Q1. This is the first quarter, Q1 2025, where the year-on-year comparables include VMware in both periods. We're seeing significant growth in the software segment for two reasons. One, we're converting a footprint of largely perpetual licenses to full subscription. As of today, we are over 60% done. Two, those perpetual licenses were largely for virtualization only, otherwise called vSphere. We are upselling customers to the full-stack VCF, which enables the entire data center to be virtualized and lets customers create their own private cloud environment on-prem. As of the end of Q1, approximately 70% of our largest 10,000 customers have adopted VCF. As these customers consume VCF, we see a further opportunity for future growth: as large enterprises adopt AI, they have to run their AI workloads on their own on-prem data centers, which will include both GPU servers and traditional CPUs.
Just as VCF virtualizes these traditional data centers using CPUs, VCF will also virtualize GPUs on a common platform, enabling enterprises to import AI models and run their own data on-prem. This platform, which virtualizes the GPU, is called the VMware Private AI Foundation. As of today, in collaboration with NVIDIA, we have 39 enterprise customers for VMware Private AI Foundation. Customer demand has been driven by our open ecosystem, superior load balancing, and automation capabilities, which allow them to intelligently pool and run workloads across both GPU and CPU infrastructure, leading to significantly reduced costs. Moving on to the Q2 outlook for software, we expect revenue of $6.5 billion, up 23% year on year. So in total, we're guiding Q2 consolidated revenue to approximately $14.9 billion, up 19% year on year. And we expect this will drive Q2 adjusted EBITDA to approximately 66% of revenue. With that, let me turn the call over to Kirsten.

Kirsten Spears, Chief Financial Officer

Thank you, Hock. Let me now provide additional detail on our Q1 financial performance. On a year-on-year comparable basis, keep in mind that Q1 of fiscal 2024 was a 14-week quarter while Q1 of fiscal 2025 is a 13-week quarter. Consolidated revenue was $14.9 billion for the quarter, up 25% from a year ago. Gross margin was 79.1% of revenue in the quarter, better than we originally guided on higher infrastructure software revenue and a more favorable semiconductor revenue mix. Consolidated operating expenses were $2 billion, of which $1.4 billion was for R&D. Q1 operating income of $9.8 billion was up 44% from a year ago, with operating margin at 66% of revenue. Adjusted EBITDA was a record $10.1 billion, or 68% of revenue, above our guidance of 66%. This figure excludes $142 million of depreciation. Now, a review of the P&L for our two segments, starting with semiconductors. Revenue for our semiconductor solutions segment was $8.2 billion, representing 55% of total revenue in the quarter, up 11% year on year. Gross margin for the segment was approximately 68%, up 70 basis points year on year, driven by revenue mix. Operating expenses increased 3% year on year to $890 million on increased investment in R&D for leading-edge AI semiconductors, resulting in semiconductor operating margin of 57%. Now moving on to infrastructure software. Revenue for infrastructure software of $6.7 billion represented 45% of total revenue and was up 47% year on year, based primarily on increased revenue from VMware. Gross margin for infrastructure software was 92.5% in the quarter, compared to 88% a year ago. Operating expenses were approximately $1.1 billion in the quarter, resulting in infrastructure software operating margin of 76%. This compares to an operating margin of 59% a year ago. The year-on-year improvement reflects our disciplined integration of VMware and sharp focus on deploying our VCF strategy. Moving on to cash flow.
Free cash flow in the quarter was $6 billion, representing 40% of revenue. Free cash flow as a percentage of revenue continues to be impacted by cash interest expense on debt related to the VMware acquisition and by cash taxes, due to the mix of US taxable income, the continued delay in the re-enactment of Section 174, and the impact of the corporate AMT. We spent $100 million on capital expenditures. Days sales outstanding were 30 days in the first quarter, compared to 41 days a year ago. We ended the first quarter with inventory of $1.9 billion, up 8% sequentially, to support revenue in future quarters. Our days of inventory on hand were 65 days in Q1, as we continue to remain disciplined in how we manage inventory across the ecosystem. We ended the first quarter with $9.3 billion of cash and $68.8 billion of gross principal debt. During the quarter, we repaid $495 million of fixed-rate debt and replaced $7.6 billion of floating-rate debt with new senior notes, commercial paper, and cash on hand, reducing debt by a net $1.1 billion. Following these actions, the weighted average coupon rate and years to maturity of our $58.8 billion in fixed-rate debt are 3.8% and 7.3 years, respectively. The weighted average coupon rate and years to maturity of our $6 billion in floating-rate debt are 5.4% and 3.8 years, respectively. Our $4 billion in commercial paper carries an average rate of 4.6%. Turning to capital allocation: in Q1, we paid stockholders $2.8 billion of cash dividends, based on a quarterly common stock cash dividend of $0.59 per share. We spent $2 billion to repurchase 8.7 million AVGO shares from employees as those shares vested, to cover withholding taxes. In Q2, we expect the non-GAAP diluted share count to be approximately 4.95 billion shares. Now moving on to guidance. Our guidance for Q2 is for consolidated revenue of $14.9 billion, with semiconductor revenue of approximately $8.4 billion, up 17% year on year. We expect Q2 AI revenue of $4.4 billion, up 44% year on year.
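The per-tranche debt figures above imply a blended cost of debt that can be checked with a weighted average. A minimal sketch, assuming annual interest is simply principal times average coupon for each tranche (an illustrative simplification, not Broadcom's reported interest expense):

```python
# (principal in $B, average coupon) per tranche, from the remarks above
tranches = [
    (58.8, 0.038),  # fixed-rate debt
    (6.0,  0.054),  # floating-rate debt
    (4.0,  0.046),  # commercial paper
]

total_principal = sum(p for p, _ in tranches)      # $68.8B gross principal debt
annual_interest = sum(p * c for p, c in tranches)  # ~$2.74B per year
blended_rate = annual_interest / total_principal   # ~4.0% blended coupon

print(f"Blended coupon: {blended_rate:.2%}")
```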
For non-AI semiconductors, we expect Q2 revenue of $4 billion. We expect Q2 infrastructure software revenue of approximately $6.5 billion, up 23% year on year. For modeling purposes, we expect Q2 consolidated gross margin to be down approximately 20 basis points sequentially on the revenue mix of infrastructure software. As Hock discussed earlier, we are increasing our R&D investment in leading-edge AI in Q2, and accordingly, we expect adjusted EBITDA to be approximately 66% of revenue. We expect the non-GAAP tax rate for Q2 and fiscal year 2025 to be approximately 14%. That concludes my prepared remarks. Operator, we will now open for questions.

Operator

Thank you. As a reminder, to ask a question, you will need to press star one one on your telephone. To withdraw your question, press star one one again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. Our first question will come from the line of Ben Reitzes with Melius. Your line is open.

Ben Reitzes, Analyst

Hey, guys. Thanks a lot and congrats on the results. Hock, you talked about four more customers coming online. Can you just talk a little bit more about the trend you're seeing? Can any of these customers be as big as the current three? And what does this say about the custom silicon trend overall and your optimism and upside to the business long term? Thanks.

Hock Tan, President and CEO

Well, a very interesting question, Ben. And thanks for your kind wishes. By the way, these four are not customers as we define it. As I've always said, in developing and creating XPUs, we are not really the creators of those XPUs, to be honest. We enable each of the hyperscaler partners we engage with to create that chip and, basically, to create that compute system. Call it that way. It comprises the model, the software model, working closely with the compute engine, the XPU, and the networking that ties the clusters of those multiple XPUs together as a whole to train those large Frontier models. The hardware we create still has to work with the software models and algorithms of those partners of ours before it becomes fully deployable at scale, which is why we define customers in this case as those where we know they have deployed at scale and where we will receive the production volume to enable it. For that, we have just the three. The four are, I call them, partners who are trying to create the same thing as the first three, each of them to train their own Frontier models. And as I also said, it doesn't happen overnight. Doing the first chip could typically take a year and a half, and that's very accelerated, which we can accelerate given that we essentially have a framework and a methodology that works right now. It works for the three customers; there's no reason for it not to work for the four. Those four partners still need to create and develop the software, which we don't do, to make it work. And to answer your question, there's no reason why these four would not create demand in the range of what we're seeing with the first three. But probably later. It's a journey. They started later, and so they will probably get there later.

Harlan Sur, Analyst

Good afternoon, and great job on the strong quarterly results, Hock and team. Great to see the continued momentum in the AI business here in the first half of your fiscal year, and the continued broadening out of your AI ASIC customers. Hock, last earnings, you did call out a strong ramp in the second half of the fiscal year, driven by new three-nanometer AI accelerator programs ramping. Can you help us, either qualitatively or quantitatively, profile the second-half step-up relative to what the team just delivered in the first half? Has the profile changed, either favorably or less favorably, versus what you thought maybe ninety days ago? Because, quite frankly, a lot has happened since last earnings. Right? You've had dynamics like DeepSeek and the focus on AI model efficiency, but on the flip side, you've had strong CapEx outlooks from your cloud and hyperscale customers. So any color on the second-half AI profile would be helpful.

Hock Tan, President and CEO

You're asking me to look into the minds of my customers, and I hate to tell you, they don't show me their entire mindset. But why are we beating the numbers so far in Q1, and seeing encouragement in Q2? It comes from improved networking shipments, as I indicated, tied to those XPUs and AI accelerators, and in some cases GPUs, for the hyperscalers. And that's good. We think there are some pull-ins and acceleration of shipments, call it that way, into fiscal 2025.

Harlan Sur, Analyst

And on the second half that you talked about ninety days ago, the second-half three-nanometer ramp, is that still very much on track?

Hock Tan, President and CEO

Harlan, thank you, but I can only allow you the one question. Sorry. Let's not speculate on the second half.

William Stein, Analyst

Great. Thank you for taking my question, and congrats on these pretty great results. It seems from the news headlines about tariffs and about DeepSeek that there may be some disruption. Some customers and some complementary suppliers seem to feel a bit paralyzed, perhaps, having difficulty making tough decisions. Those tend to be really useful times for great companies to emerge as something bigger and better than they were in the past. You've grown this company in a tremendous way over the last decade-plus, and you're doing great now, especially in this AI area. But I wonder if you're seeing the sort of disruption from these dynamics that we suspect is happening based on the headlines and what we see from other companies. And aside from adding these customers in AI, and I'm sure there's other great stuff going on, should we expect some bigger changes to come from Broadcom as a result of this?

Hock Tan, President and CEO

You pose a very interesting set of issues and questions, and they are very relevant. The only problem we have at this point, I would say, is that it is too early to know. I mean, the threat, the noise of tariffs, especially on chips, has not materialized in any concrete form, nor do we know how it will be structured. So we don't know. But what we do experience, and are living through now, is disruption of a positive kind. I should add: a very positive disruption in semiconductors from generative AI. Generative AI, for sure, and at the risk of repeating myself, we feel it more than ever, is really accelerating the development of semiconductor technology, both process and packaging as well as design, towards higher- and higher-performance accelerators and networking functionality. We see that innovation and those upgrades occur every month, and we face new, interesting challenges, specifically with XPUs, where we are trying to optimize for the Frontier models of our partners and customers, our hyperscale partners. It's a privilege for us to participate in and try to optimize. When optimizing, you look at an accelerator; you can look at it, in simple terms, at a high level, for compute capacity: how many teraflops? But it's more than that. It's also tied to the fact that this is a distributed computing problem. It's not just the compute capacity of a single XPU or GPU; it's also the network bandwidth that ties it to the next adjacent XPU or GPU. You have to balance that. Then you decide: are you doing training, or are you doing pre-fill, post-training, fine-tuning? And again, how much memory do you balance against that? And with it, how much latency can you afford, which is memory bandwidth. You need to look at at least four variables, maybe even five if you are counting memory bandwidth, not just memory capacity, when you go straight to inference.
So we have all these variables to play with, and we try to optimize them. All this is an excellent experience for our engineers, pushing the envelope on how to create all those chips. That's the biggest disruption we see right now: the sheer effort to push the envelope on generative AI and to create the best hardware infrastructure to run it. Beyond that, there are other things that come into play, especially with AI. As I indicated, AI does not just drive hardware for enterprises; it drives the way they architect their data centers. The requirement to keep their data private is important. The push of workloads towards the public cloud may take a little pause as large enterprises in particular have to decide where to run their AI workloads. They're probably thinking very hard about running them on-prem, and they need to upgrade their own data centers to manage their own data and run it on-prem. We have seen that trend over the past twelve months, hence my comments on VMware Private AI Foundation. Enterprises especially are pushing in this direction and quickly deciding where they run their AI workloads. Those are trends we see today, and a lot of it comes out of AI, alongside sensitive rules on sovereignty of cloud and data.

Ross Seymore, Analyst

Thanks for letting me ask a question. I want to go back to the XPU side of things, and the four new engagements, not-yet-named customers: two from last quarter and two more that you announced today. I want to talk about going from design win to deployment. How do you judge that? Because there is some debate about, you know, tons of design wins where the deployments don't actually happen, either because they never occur or because the volume is never what was originally promised. How do you view that conversion ratio? Is there a wide range around it, or is there some way you could help us understand how that works?

Hock Tan, President and CEO

Ross, that's an interesting question, and I'll take the opportunity to say that we look at design wins probably very differently from how many of our peers out there look at them. To begin with, we believe in a design win only when we know our product is produced at scale and is actually deployed, literally deployed in production. That takes a long lead time, because from tape-out to getting the product in the hands of our partner takes a year, and from that point, going into scale production takes another six months to a year. That's the experience we've seen. That's number one. Number two, producing and deploying five thousand XPUs, that's a joke; that's not real production in my view. We also limit ourselves in selecting partners to those who really need that large volume. From our viewpoint, that scale today lies mostly in the training of large language models, Frontier models, on a continuing trajectory. So we tend to be very selective about how many customers, or how many potential customers, exist out there, Ross. To summarize, when we say design win, it really is at scale. It's not something that starts and dies in six months, or in a year. Basically, it's a selection of customers, just the way we have run our ASIC business in general for the last fifteen years. We pick and choose customers because we know them, and we do multi-year roadmaps with these customers because we know they are sustainable. We don't do it for start-ups.

Stacy Rasgon, Analyst

Hi, guys. Thanks for taking my question. I wanted to go to the three customers that you do have in volume today, and to ask whether there are any concerns about some of the new regulations, or the AI diffusion rules that are supposedly going to be put in place in May, impacting any of those design wins or shipments. It sounds like you think all three of those are still on at this point, but anything you could tell us about worries about new regulations or AI diffusion rules impacting any of those wins would be helpful.

Hock Tan, President and CEO

Thank you. In this era or this current era of geopolitical tensions and fairly dramatic actions all around by governments, yes, there’s always some concern at the back of everybody’s mind. But to answer your question directly, no, we don’t have any concerns.

Vivek Arya, Analyst

Thanks for taking my question. Hock, whenever you have described your AI opportunity, you've always emphasized the training workload. But the perception is that the AI market could be dominated by the inference workload, especially with these new reasoning models. So what happens to your opportunity and share if the mix moves more towards inference? Does it create a bigger TAM than the $60 to $90 billion? Does it keep it the same but with a different mix of products? Or does a more inference-heavy market favor a GPU over an XPU? Thank you.

Hock Tan, President and CEO

That's a good, interesting question. By the way, I talk a lot about training, but we also focus on inference, as a separate product line, and it's the combination of both that adds up to this $60 to $90 billion. If I have not been clear, I do apologize: it's a combination of both. Having said that, the larger part of the dollars comes from training, not inference, within the same SAM we have talked about so far.

Harsh Kumar, Analyst

Thanks, Broadcom team, and again, great execution. A quick question: we've been hearing that almost all of the large clusters, 100k XPUs and up, are going to Ethernet. I was wondering if you could help us understand what matters when the customer is making a selection, choosing between a vendor that has the best switch ASICs, such as you, versus a vendor that might also have the compute. Can you talk about what the customer is thinking, and the final points they want to hit when they make that selection?

Hock Tan, President and CEO

In the case of the hyperscalers, it's very much driven by performance. If you are in a race to get the best performance out of your hardware as you train and continue to train your Frontier models, that matters more than anything else. The first thing they go for is proven hardware, and, in our case, a proven subsystem that makes it work. We tend to have a big advantage because networking is ours; switching and routing have been ours for at least the last ten years, and the fact that it's AI just makes it more interesting for our engineers to work on. So we keep stepping up the rate of investment in our products. Take Tomahawk 5: we doubled the radix to deal with just one hyperscaler, because they want high radix to create larger clusters while running lower bandwidth per link. But that doesn't stop us from moving ahead to the next-generation Tomahawk 6, and, I dare say, we are even planning Tomahawk 7 and 8 right now. We're speeding up the rate of development. We're making a lot of investment for very few customers, hopefully with very large serviceable markets. Those are the big bets we are placing.

Timothy Arcuri, Analyst

Thanks a lot. Hock, in the past you have mentioned XPU units growing from about two million last year to about seven million, you said, in the 2027-2028 time frame. My question is: do these four new customers add to that seven-million-unit number? I know in the past you talked about an ASP of around twenty grand by then, so the first three customers are clearly a subset of those seven million units. Do these four new engagements drive that seven million higher, or do they just fill in to get to the seven million?

Hock Tan, President and CEO

To clarify, I thought I made it clear in my comments: no. The market we're talking about, when you translate the units, covers only the three customers we have today. The other four are what we call engaged partners; we don't consider them customers yet, and therefore they are not in our serviceable addressable market.

CJ Muse, Analyst

Yeah. Good afternoon. Thank you for taking the question. I'll follow up on your prepared remarks and your comments earlier around optimization: your best-in-class hardware with the hyperscalers' great software. I'm curious how expanding your portfolio to six megascale frontier-model players will enable you to share tremendous information, in a world where these six truly want to differentiate. The goal for all of these players is exaflops per second, per dollar of CapEx, per watt. To what degree are you aiding them in these efforts? And where does the Chinese wall start, where they want to differentiate and not share with you some of the work that you're doing?

Hock Tan, President and CEO

We only provide very basic, fundamental technology in semiconductors to enable these guys to use what we have and optimize it to their own particular models and the algorithms that relate to those models. That's it. That's the level of optimization we do for each of them. And as I mentioned earlier, there are maybe five degrees of freedom that we play with, and even with five degrees of freedom, there's only so much we can do. What we deliver is the XPU hardware. The optimization translates to performance, but also power. That's very important to how they play. It's about total cost of ownership: how you design it in terms of power, how we balance it in terms of the size of the cluster, and whether they use it for training, pre-training, post-training, or inference. They all have their own characteristics.

Christopher Rolland, Analyst

Hey, thanks so much for the question. This one's maybe for Hock and for Kirsten. I'd love to know, because you have the complete connectivity portfolio, how you see new greenfield scale-up opportunities playing out here. This includes optical or copper or really anything. What could this add for your company? And then, Kirsten, I think OpEx is up; maybe just talk about where those OpEx dollars are going within the AI opportunity and how they relate.

Hock Tan, President and CEO

In our portfolio, we have the advantage that a lot of the hyperscaler customers are talking about a lot of expansion, and it's almost all greenfield, all next-generation, so the opportunity is very high. We deploy in copper, but where we see a lot of opportunity is in providing networking connectivity through optical. There are a lot of active elements involved: either multimode lasers, which are called VCSELs, or edge-emitting lasers, and we do both. So there's a lot of opportunity to connect with active elements, especially in scale-up scenarios versus scale-out scenarios. We used to do, and still do, a lot of other protocols beyond Ethernet, such as PCI Express, where we're on the leading edge of that architecture. It all adds up to around 20% of our total AI revenue, maybe going to 30%. Last quarter, we hit almost 40%, but that's not the norm.

Kirsten Spears, Chief Financial Officer

On the R&D front, on a consolidated basis, we spent $1.4 billion on R&D in Q1, and I stated that it would be going up in Q2. The company invests in R&D across all of our product lines to stay competitive with next-generation product offerings, but I did outline that we are focused on taping out the industry's first two-nanometer AI XPU in 3.5D packaging, and on the doubled radix capacity of the existing Tomahawk 5, to enable our AI customers to scale up on Ethernet towards one million XPUs.

Vijay Rakesh, Analyst

Just a question on the networking side: how much does it go up sequentially on the AI side? And any thoughts around M&A going forward? There are still a lot of headlines out there. Thanks.

Hock Tan, President and CEO

On the networking side, as indicated, Q1 showed a bit of a surge. However, I don't expect that sixty-forty mix, sixty percent compute and forty percent networking, to be the norm. I think the norm is closer to seventy-thirty, which might be better for our future prospects. M&A? No. We're too busy doing AI and VMware at this point. We're not thinking of it.

Operator

Thank you. That is all the time we have for our question and answer session. I would now like to turn the call back over to Gu for any closing remarks.

Gu, Head of Investor Relations

Thank you, Sherry. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2025 after close of market on Thursday, June 5th, 2025. A public webcast of Broadcom's earnings conference call will follow at 2 PM Pacific. That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.

Operator

Thank you. Ladies and gentlemen, thank you for participating. This concludes today's program. You may now disconnect.
