
NVIDIA Corp (NVDA) — Q2 2025 Earnings Call Transcript

Apr 5, 2026 · 13 speakers · 6,983 words · 32 segments

Operator

Good afternoon. My name is Abby and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Second Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. Thank you. And Mr. Stewart Stecker, you may begin your conference.

Stewart Stecker, Moderator

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I would like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 28th, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight an upcoming event for the financial community. We will be attending the Goldman Sachs Communacopia and Technology Conference on September 11 in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20th, 2024. With that, let me turn the call over to Colette.

Colette Kress, CFO

Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year-on-year, and well above our outlook of $28 billion. Starting with data center: data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year-on-year, driven by strong demand for NVIDIA Hopper, GPU computing, and our networking platforms. Compute revenue grew more than 2.5 times, and networking revenue grew more than 2 times from last year. Cloud service providers represented roughly 45% of our data center revenue, and more than 50% stemmed from consumer Internet and enterprise companies. Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image, and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing. Next-generation models will require 10 to 20 times more compute to train with significantly more data. The trend is expected to continue. Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer Internet companies, and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer Internet services, and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise, healthcare, and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand. The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet, and enterprise companies.
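The growth rates quoted above also pin down the base-period figures, which can be back-computed as a sanity check. A minimal sketch; the implied prior-period revenues below are derived from the quoted percentages, not stated on the call:

```python
# Back out the base-period revenue implied by a quoted growth rate.
def implied_base(current, growth_pct):
    """Revenue in the base period implied by a quoted growth rate (%)."""
    return current / (1 + growth_pct / 100)

q2_total = 30.0   # $B, Q2 FY25 total revenue
q2_dc = 26.3      # $B, Q2 FY25 data center revenue

prior_q_total = implied_base(q2_total, 15)    # up 15% sequentially
year_ago_total = implied_base(q2_total, 122)  # up 122% year-on-year
year_ago_dc = implied_base(q2_dc, 154)        # up 154% year-on-year

print(f"Implied Q1 FY25 total revenue: ${prior_q_total:.1f}B")
print(f"Implied Q2 FY24 total revenue: ${year_ago_total:.1f}B")
print(f"Implied Q2 FY24 data center:   ${year_ago_dc:.1f}B")
```

The implied figures (roughly $26.1B, $13.5B, and $10.4B) line up with the sequential and year-on-year comparisons quoted in the prepared remarks.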
The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100. Our data center revenue in China grew sequentially in Q2 and is a significant contributor to our data center revenue. As a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward. The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems designed quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPUs, CPUs, DPUs, NVLink, NVLink Switch, networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries, and countries. The NVIDIA GB200 NVL72 system with the fifth-generation NVLink enables all 72 GPUs to act as a single GPU and deliver up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time. Hopper demand is strong, and Blackwell is widely sampling. We executed a change to the Blackwell GPU mask to improve production yields. Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year.
Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6 times the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multi-billion dollar product line within a year. Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year. The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots.
Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake serves over 3 billion queries a day for over 10,000 enterprise customers and is working with NVIDIA to build copilots. Wistron is using NVIDIA AI and Omniverse to reduce end-to-end cycle times for their factories by 50%. Automotive was a key growth driver for the quarter as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billion dollars in revenue across on-prem and cloud consumption and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multi-billion dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing, and drug discovery. During the quarter, we announced a new NVIDIA AI foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI. Companies for the first time can leverage the capabilities of an open-source frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models for both its own use and to assist clients seeking to deploy generative AI applications. NVIDIA NIM accelerates and simplifies model deployment. Companies across healthcare, energy, financial services, retail, transportation, and telecommunications are adopting NIMs, including Aramco, Lowe's, and Uber. AT&T realized 70% cost savings and an 8 times latency reduction after moving to NIMs for generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem.
We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval-augmented generation. Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS, and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth. Moving to gaming and AI PCs. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year-on-year. We saw sequential growth in console, notebook, and desktop revenue. Demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model, Minitron-4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow, recently adding RTX and DLSS titles including Indiana Jones and the Great Circle, Dune Awakening, and Dragon Age: The Veilguard. The GeForce NOW library continues to expand with a total catalog size of over 2,000 titles, the most content of any cloud gaming service.
Moving to pro visualization. Revenue of $454 million was up 6% sequentially and 20% year-on-year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. Several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins for factories. We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workflows, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its generative AI-enabled content creation pipeline for customers such as the Coca-Cola Company. Moving to automotive and robotics. Revenue was $346 million, up 5% sequentially and up 37% year-on-year. Year-on-year growth was driven by new customer ramps in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI, and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids, and mobile robots. Now moving to the rest of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within data center and inventory provisions for low-yielding Blackwell material.
Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion. In Q2, we utilized cash of $7.4 billion towards shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our Board of Directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2. Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points. As our data center mix continues to shift to new products, we expect this trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid-to-upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions.
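The Q3 outlook above is given as midpoints with tolerance bands. A small sketch expanding them into explicit ranges (purely illustrative arithmetic on the guidance as stated):

```python
# Expand a midpoint +/- tolerance into an explicit (low, high) range.
def band(center, tol):
    return (center * (1 - tol), center * (1 + tol))

rev_lo, rev_hi = band(32.5, 0.02)        # revenue: $32.5B +/- 2%
gm_lo, gm_hi = (75.0 - 0.5, 75.0 + 0.5)  # non-GAAP GM: 75% +/- 50 bps
tax_lo, tax_hi = (17 - 1, 17 + 1)        # tax rate: 17% +/- 1 point

print(f"Revenue outlook:       ${rev_lo:.2f}B to ${rev_hi:.2f}B")
print(f"Non-GAAP gross margin: {gm_lo:.1f}% to {gm_hi:.1f}%")
print(f"Tax rate:              {tax_lo}% to {tax_hi}%")
```

So the guided revenue range runs from about $31.85 billion to $33.15 billion.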

Operator

Thank you. And your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Vivek Arya, Analyst

Thanks for taking my question. Jensen, you mentioned in the prepared comments that there is a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else? And I think related, you suggested that you could ship several billion dollars of Blackwell in Q4 despite the change in the design. Is it because all these issues will be solved by then? Just help us size what is the overall impact of any changes in Blackwell timing? What that means to your revenue profile and how are customers reacting to it?

Jensen Huang, CEO

Yes, thank you, Vivek. The update to the mask for Blackwell is completed, and no functional changes were required. We are currently testing functional samples of Blackwell across various system configurations. Around 100 different Blackwell-based systems were showcased at Computex, and we are working to enable our ecosystem to begin sampling them. The functionality of Blackwell remains unchanged, and we anticipate starting production in the fourth quarter.

Operator

And your next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.

Toshiya Hari, Analyst

Hi, thank you so much for taking the question. Jensen, I had a relatively longer-term question. As you may know, there's a pretty heated debate in the market on your customers' and your customers' customers' return on investment and what that means for the sustainability of CapEx going forward. Internally at NVIDIA, what are you guys watching? What's on your dashboard as you try to gauge customer return and how that impacts CapEx? And then a quick follow-up, maybe for Colette. I think your sovereign AI number for the full year went up maybe a couple of billion. What's driving the improved outlook? And how should we think about fiscal '26? Thank you.

Jensen Huang, CEO

Thanks, Toshiya. First, when I mentioned ship production in Q4, I meant shipping out, not starting production. To address the longer-term question, we’re currently experiencing two simultaneous platform transitions. The first transition involves moving from general-purpose computing to accelerated computing due to the slowdown in CPU scaling, which has significantly hindered progress. Meanwhile, the demand for computing continues to grow rapidly, potentially doubling each year. Without a new approach, this increase in computing demand could lead to rising costs for all companies and heightened energy consumption in data centers globally, which we are already starting to witness. Accelerated computing addresses these issues by speeding up applications and enabling larger-scale computing for tasks like scientific simulations or database processing. This transition leads to lower costs and reduced energy usage. Recently, a blog was published discussing new libraries we offer, which are central to our shift from general-purpose computing to accelerated computing. It's common for users to save as much as 90% on their computing costs, as applications can become significantly faster, leading to substantial cost reductions. The second transition is made possible by accelerated computing, resulting in drastically lower costs for training large language models and deep learning. This has enabled the development of massive scale models that can be pre-trained on a vast knowledge corpus, allowing them to understand human language and learn reasoning, which is fueling the generative AI revolution. Generative AI is significant because it represents a fundamental shift in software development, moving away from human-engineered algorithms to data-driven models. Instead of prescribing algorithms, we provide the AI with expected answers and previous observations, letting it discover the underlying functions itself. 
This capability allows AI to learn and predict a wide variety of structured information based on past examples. Generative AI is creating a transformative impact across the computing landscape, influencing everything from CPUs to GPUs and how algorithms are constructed. The advances in generative AI include the growth of frontier models which are increasingly scaled and diversified. As models increase in size, the datasets required for training must also increase, which means the computational needs will rise substantially. We are likely to see that next-generation models could demand 20 to 40 times more computational resources than their predecessors. Thus, we must continually enhance performance to reduce energy consumption and training costs. We are observing larger frontier models trained on a broader range of modalities, and there are more model developers now than there were last year. This is one of the dynamics driving growth in generative AI. While applications such as ChatGPT and image generators are widely recognized, they represent just the surface; behind them are large-scale systems like recommender systems that are transitioning from CPUs to generative AI. These systems, including custom ad targeting and search, now employ generative AI for large-scale applications. The rise of generative AI startups is leading to substantial cloud rental opportunities for our partners, as countries recognize their data as a vital resource to develop their own digital intelligence infrastructures. In enterprise AI, as Colette mentioned previously, we are witnessing significant interest as major IT companies collaborate with us to deploy the NVIDIA AI enterprise platform across businesses, with many expressing enthusiasm about boosting productivity. Furthermore, in the domain of robotics, we are now able to learn physical AI by observing videos, demonstrations, and synthesizing data through reinforcement learning. 
This development enables collaboration with a wide range of robotics companies to explore general robotics. Overall, the momentum behind generative AI continues to accelerate.
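Jensen's point above that users "save as much as 90% on their computing costs" follows from simple arithmetic: a large enough speedup outweighs a higher node price. A minimal sketch, where the price premium and speedup figures are illustrative assumptions, not NVIDIA numbers:

```python
# Cost saving per unit of work when moving a job to accelerated hardware.
def cost_savings(node_price_ratio, speedup):
    """Fractional saving per job vs. a CPU-only baseline.

    node_price_ratio: accelerated node price relative to a CPU node.
    speedup: how much faster the job runs on the accelerated node.
    """
    cost_per_job = node_price_ratio / speedup  # relative to CPU baseline
    return 1 - cost_per_job

# An accelerated node 5x the price, running the workload 50x faster:
print(f"{cost_savings(5, 50):.0%} saved")  # -> 90% saved
```

At a 20x speedup with the same 5x price premium the saving is 75%, which is why the claimed savings depend heavily on how well the workload accelerates.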

Colette Kress, CFO

And Toshiya, to answer your question regarding sovereign AI and our goals in terms of growth, in terms of revenue, it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desires of countries around the world to have their generative AI that would be able to incorporate their own language, incorporate their own culture, and incorporate their own data in that country. So more and more excitement around these models and what they can be specific for those countries. So yes, we are seeing some growth opportunity in front of us.

Operator

And your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.

Joe Moore, Analyst

Great. Thank you. Jensen, in the press release, you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong quarter without Blackwell in October. So how long do you see sort of coexisting strong demand for both? And can you talk about the transition to Blackwell? Do you see people intermixing clusters? Do you think most of the Blackwell activity is new clusters? Just some sense of what that transition looks like?

Jensen Huang, CEO

Yes, thank you, Joe. The demand for Hopper is exceptionally high, and it's true that the demand for Blackwell is remarkable as well. There are a couple of reasons for this. Firstly, if you examine the world's cloud service providers, they have virtually no available GPU capacity. This is primarily because they are deploying these resources internally to enhance their own workloads, particularly in data processing. Data processing is often overlooked because it seems mundane and doesn't produce visual content or text. However, nearly every company globally processes data in the background. NVIDIA's GPUs are the only accelerators available that can effectively accelerate data workloads such as SQL, and data science toolkits like Pandas and Polars, which are among the most widely used data processing platforms. Beyond CPUs, which are becoming less effective, NVIDIA's accelerated computing is crucial for improving performance, making it the primary use case long before generative AI emerged—this is about migrating applications to accelerated computing. Secondly, there's a significant demand for rentals. Companies developing models or startups, particularly those in generative AI, invest most of their capital into infrastructure to leverage AI for product development. Therefore, they need access to this technology immediately. There is a sense of urgency as they can't delay processing; they need to act now. Another reason for the current urgency around Hopper is the competition to reach the next advancement in AI. The first to reach this next level can introduce groundbreaking improvements, while the second to arrive is often only marginally better. Establishing leadership in innovation is vital, and NVIDIA aims to consistently set the bar through our advanced GPUs and AI frameworks. We strive to be the best in the world, and part of that drive is manifesting our future aspirations and the positive impacts we can create for society.
Model makers share this ambition; they aim to be the best and the first. Although Blackwell is expected to start shipping in large quantities at the year's end, the readiness of that capacity is still weeks to a month away. In the meantime, the generative AI market dynamics are very active. There is real urgency, whether for operational needs or the need for accelerated computing, as they prefer not to invest more in general-purpose infrastructure. When it comes to world-class business infrastructure, the choice between investing in CPU infrastructure or deploying Hopper infrastructure is clear. There is a real push to update the considerable existing infrastructure to utilize Hopper's state-of-the-art capabilities.
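The Pandas and Polars acceleration Jensen refers to targets everyday data-processing code like the snippet below. This is ordinary CPU pandas with made-up sample data; with NVIDIA's RAPIDS cuDF installed, the same script can run GPU-accelerated without code changes (e.g. via `python -m cudf.pandas script.py`):

```python
# A typical SQL-style data-processing workload in plain pandas.
# The sample data is hypothetical, purely for illustration.
import pandas as pd

orders = pd.DataFrame({
    "region": ["east", "west", "east", "west", "east"],
    "amount": [120.0, 80.0, 200.0, 50.0, 30.0],
})

# Aggregate total and mean order value per region, like a GROUP BY in SQL.
summary = orders.groupby("region")["amount"].agg(["sum", "mean"])
print(summary)
```

The appeal of the accelerator approach is precisely that code like this does not need to be rewritten to benefit from a GPU.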

Operator

And your next question comes from the line of Matt Ramsay with TD Cowen. Your line is open.

Matt Ramsay, Analyst

Thank you very much. Good afternoon, everybody. I wanted to kind of circle back to an earlier question about the debate that investors are having about, I don't know, the ROI on all of this CapEx, and hopefully this question and the distinction will make some sense. But what I'm having discussions about is with the percentage of folks that you see that are spending all of this money and looking to sort of push the frontier towards AGI convergence and, as you just said, a new plateau and capability. And they're going to spend regardless to get to that level of capability because it opens up so many doors for the industry and for their company versus customers that are really, really focused today on CapEx versus ROI. I don't know if that distinction makes sense. I'm just trying to get a sense of how you're seeing the priorities of people that are putting the dollars in the ground on this new technology and what their priorities are and their time frames are for that investment? Thanks.

Jensen Huang, CEO

Thank you, Matt. Those investing in NVIDIA infrastructure are seeing immediate returns. It's currently the best investment for computing infrastructure. To simplify it, consider the vast amount of existing general-purpose computing infrastructure worth a trillion dollars. The key question is whether more should be built. For every billion dollars spent on general CPU-based infrastructure, you likely pay less in rental costs because it's commoditized. With a trillion already in place, the need for more is questionable. The demand for this infrastructure is driven by companies developing Hopper-based and soon Blackwell-based systems, which lead to notable cost savings. This is primarily because efficient data processing reduces expenses, which is a significant factor in pricing. Additionally, recommender systems contribute to these savings. Moreover, any new capacity is likely to be rented quickly due to the surge in generative AI startups, yielding a strong return on investment. Finally, consider your own business objectives. Whether it's launching cutting-edge services or enhancing existing ones with advanced advertising, recommendation, or search systems, generative AI provides quick returns. Ultimately, the most efficient computing infrastructure to invest in now is accelerated computing from NVIDIA, aligning with the industry's shift towards such technologies. If you're modernizing your cloud and data centers, building with NVIDIA's accelerated computing is the optimal approach.

Operator

And your next question comes from the line of Timothy Arcuri with UBS. Your line is open.

Timothy Arcuri, Analyst

Thanks a lot. I had a question on the shape of the revenue growth both near and longer-term. I know, Colette, you did increase OpEx for the year. If I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish. On the other hand, there's some school of thought that not that many customers really seem ready for liquid cooling and I do recognize that some of these racks can be air-cooled. But Jensen, is that something to consider sort of on the shape of how Blackwell is going to ramp? And then I guess when you look beyond next year, which is obviously going to be a great year and you look into '26, do you worry about any other gating factors like, say, the power, supply chain, or at some point, models start to get smaller? I'm just wondering if you could speak to that? Thanks.

Jensen Huang, CEO

I'm going to start from the end. Thank you for the question, Tim. It's important to note that the world is transitioning from general-purpose computing to accelerated computing. In the near future, we will see about $1 trillion invested in data centers that will primarily focus on accelerated computing. Historically, data centers only used CPUs, but going forward, every data center will incorporate GPUs. This shift is crucial for accelerating workloads, which will help us maintain sustainability and reduce computing costs, preventing an increase in computing inflation as we process more data. Additionally, GPUs are essential for a new computing paradigm known as generative AI, which will significantly change the future of computing. To elaborate, the next trillion dollars invested in infrastructure will be quite different from the previous trillion and will focus heavily on acceleration. Regarding our scaling strategy, we offer various configurations of Blackwell, including the traditional Blackwell that utilizes the HGX form factor we introduced with Volta, which we have been shipping for quite a while and is air-cooled. The Grace Blackwell variant, which is liquid-cooled, is gaining popularity. The demand for liquid-cooled data centers is strong because they can provide three to five times the AI throughput compared to traditional setups due to their energy efficiency and lower total cost of ownership, along with the advantages of NVLink that allows for the connection of up to 144 GPUs. The advancements in this architecture will enable extremely low-latency and high-throughput performance for large language model inference, which will be transformative. Many of the cloud service providers we are collaborating with are adopting both types of cooling solutions. I am confident in our ability to scale effectively. Regarding your second question, we anticipate next year to be a fantastic year with significant growth in our data center business. 
Blackwell will be a pivotal development for our industry, and its impact will extend into the following year. It’s critical to understand that computing is undergoing two simultaneous platform transitions: from general-purpose computing to accelerated computing, and from human-engineered software to generative AI or AI-driven software.

Operator

And your next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.

Stacy Rasgon, Analyst

Hi guys. Thanks for taking my questions. I have two short questions for Colette. The first, on the several billion dollars of Blackwell revenue in Q4: is that additive? You said you expected Hopper demand to strengthen in the second half. Does that mean Hopper strengthens from Q3 to Q4 as well, on top of Blackwell adding several billion dollars? And the second question is on gross margins. If I have mid-70s for the year, depending on where I want to draw that, say 75% for the year, that would imply something like 71% to 72% for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting? And how should we think about the drivers of gross margin evolution into next year as Blackwell ramps? I'm hoping, I guess, that the yields and the inventory reserves and everything improve.

Colette Kress, CFO

Yes. So Stacy, let's first take your question about Hopper and Blackwell. We believe Hopper will continue to grow into the second half. We have many new Hopper products, and our existing Hopper products will continue to ramp over the next quarters, including Q3, with those new products moving into Q4. Hopper in the second half, therefore, is a growth opportunity versus the first half. Additionally, we have Blackwell on top of that, with Blackwell starting to ramp in Q4. So I hope that helps you on those two pieces. Your second piece is on gross margin. We provided gross margin guidance for Q3 on a non-GAAP basis of about 75%. We will work through all the different transitions we're going through, but we believe we can do that 75% in Q3. We also indicated that we're still on track for the full year in the mid-70s, or approximately 75%. So we may see some slight difference in Q4, again given our transitions and the different cost structures of our new product introductions. However, I don't arrive at the same number you do. We don't have exact guidance, but I do believe you're lower than where we are.
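Stacy's back-of-envelope on the implied Q4 exit margin can be reproduced with a small sketch. All figures here are illustrative assumptions, not company guidance: quarterly revenues are rough round numbers in billions of dollars (the Q4 figure is purely hypothetical), and the Q1/Q2 margins are approximate reported non-GAAP figures with Q3 at the roughly 75% guide mentioned on the call.

```python
# Back-of-envelope check of the implied Q4 gross margin Stacy describes.
# All inputs are illustrative assumptions, not guidance.
revenues = [26.0, 30.0, 32.5, 35.0]   # $B; Q1-Q3 approximate, Q4 assumed
margins = [0.789, 0.757, 0.75]        # Q1, Q2 approximate; Q3 guided ~75%
full_year_target = 0.75               # "approximately 75%" for the full year

# Gross profit already earned (or guided) in Q1-Q3.
gross_profit_q1_q3 = sum(r * m for r, m in zip(revenues[:3], margins))

# Q4 margin needed so the revenue-weighted full year lands at the target.
implied_q4 = (full_year_target * sum(revenues) - gross_profit_q1_q3) / revenues[3]
print(f"Implied Q4 gross margin: {implied_q4:.1%}")  # ≈ 71.5%
```

Under these assumed numbers, a roughly 75% full-year margin backs out to about 71.5% in Q4, consistent with the 71% to 72% range Stacy cites.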

Operator

And your next question comes from the line of Ben Reitzes with Melius. Your line is open.

Ben Reitzes, Analyst

Yes, hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. There was the 10-Q that came out and the United States was down sequentially while several Asian geographies were up a lot sequentially. Just wondering what the dynamics are there? Obviously, China did very well. You mentioned it in your remarks, what are the puts and takes? And then I just wanted to clarify from Stacy's question, if that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter given all those favorable revenue dynamics? Thanks.

Colette Kress, CFO

Let me discuss our disclosure in the 10-Q, a required disclosure that involves selecting appropriate geographies. It can be challenging to prepare because it hinges on a critical component: our customers, and specifically who we bill. What you're seeing reflects our billing practices, which don't necessarily indicate where the product will ultimately end up or where it might go to the end customer. For the most part, our products are shipped to our OEMs, ODMs, and system integrators. So what you observe can sometimes be a rapid shift in whom they are using to finalize their configurations before these products reach data centers or notebooks. This shift does occur occasionally. Regarding our figures for China, they include elements from the gaming, data center, and automotive sectors. In reference to your question about gross margin and the projected revenue from Hopper and Blackwell, we anticipate Hopper will continue to grow in the second half of the year. While we cannot provide specific guidance on Q4 just yet, we believe the current demand signals indicate a growth opportunity in that quarter, along with our Blackwell architecture.

Operator

And your next question comes from the line of C.J. Muse with Cantor Fitzgerald. Your line is open.

C.J. Muse, Analyst

Yes, good afternoon. Thank you for taking the question. You've embarked on a remarkable annual product cadence, with challenges likely only mounting given rising complexity and the reticle limit in the advanced-packaging world. So, curious, if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration and supply chain partnerships, and then the consequential impact on your margin profile? Thank you.

Jensen Huang, CEO

Thank you for your question. The reason our velocity is so high is that while the complexity of the model is increasing, we also aim to reduce its costs. As models grow, we want to scale them further. We believe that by scaling our AI models, we can achieve extraordinary usefulness, which could lead to the next industrial revolution. We are determined to push ourselves to achieve this scale. We have a unique ability to create an AI factory because we possess all the necessary components. It's challenging to establish a new AI factory annually without having all the parts. Next year, we plan to ship significantly more CPUs than ever before, along with more GPUs, NVLink switches, ConnectX DPUs for East-West traffic, and BlueField DPUs for North-South and storage traffic, spanning everything from InfiniBand for supercomputing centers to Ethernet. AI on Ethernet is a new product for us and is on track to become a multi-billion-dollar business.

Our access to all these parts, combined with our singular architectural stack, enables us to introduce new market capabilities as we finalize development. Otherwise, you would end up shipping parts, seeking customers, and needing someone to build an AI factory, which requires extensive software integration. We appreciate that our disaggregated supply chain allows us to serve various partners like Quanta, Foxconn, HP, Dell, Lenovo, and Supermicro. Our numerous ecosystem partners can integrate our well-functioning architecture in customized ways across the world's cloud service providers and enterprise data centers. The scale and reach required of our ODM and integrator supply chain is massive due to the vastness of the market. Our focus is not on integration; rather, we want to remain a technology provider.

Operator

And your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Aaron Rakers, Analyst

Yes, thanks for taking the questions. I wanted to go back into the Blackwell product cycle. One of the questions that we tend to get asked is how you see the Rack Scale system mix dynamic as you think about leveraging NVLink? You think about GB, NVL72, and how that go-to-market dynamic looks as far as the Blackwell product cycle? I guess put this distinctly, how do you see that mix of Rack Scale systems as we start to think about the Blackwell cycle playing out?

Jensen Huang, CEO

Yes, Aaron, thank you. The Blackwell Rack system is designed as a rack, but we sell it in separate components rather than as a complete rack. This approach is necessary because each rack varies slightly. Some adhere to OCP standards while others don't, and the power limits can differ. The choices for cooling distribution units, power bus bars, and the overall integration into different data centers vary widely. We designed the software to function seamlessly across the entire rack, while the individual system components, like the CPU and GPU compute board, fit into a modular system architecture known as MGX. MGX is exceptionally well thought out, and we have various original design manufacturers and integrators around the world. Almost any configuration is possible, and delivery needs to be arranged close to the data center due to the weight of the rack. Integration is typically performed near the locations of cloud service providers and data centers. Our presentations might give the impression that we handle integration, but that is not what our customers want; they prefer to manage integration themselves. Similarly, our supply chain partners are focused on integration as their value proposition. We have an extensive network of ODM and OEM partners who excel in this area. Integration is not the driving factor behind our rack offerings; in fact, we prefer to avoid acting as an integrator, choosing instead to be a technology provider.

Thank you. I want to reiterate a few points I made earlier. Data centers globally are actively modernizing the entire computing stack with accelerated computing and generative AI. Demand for Hopper remains strong, and the excitement for Blackwell is remarkable. I'd like to highlight the five key aspects of our company. Accelerated computing has reached a pivotal moment, CPU scaling is slowing, and developers need to enhance performance wherever possible.
This begins with CUDA-X libraries, which open up new market opportunities for NVIDIA. We have launched several new libraries, such as accelerated Polars, Pandas, and Spark, which are leading in data science and processing, as well as cuVS for vector databases, which is very popular at the moment. We also introduced Aerial and Sionna for 5G wireless base stations, expanding our reach into a wide range of data centers. Parabricks for gene sequencing and AlphaFold 2 for protein structure prediction are now accelerated by CUDA. We are at the early stage of our journey to modernize data centers worth $1 trillion from general-purpose to accelerated computing. Secondly, Blackwell represents a significant advancement over Hopper. It is an AI infrastructure platform, not just a GPU, although it is the name of our GPU as well. As we share more details about Blackwell and provide samples to our partners and customers, the magnitude of Blackwell's advantages becomes evident. The vision for Blackwell took nearly five years and involved the creation of seven unique chips, including the Grace CPU, the Blackwell dual GPU in a CoWoS package, the ConnectX DPU for East-West traffic, the BlueField DPU for North-South and storage traffic, the NVLink switch for GPU communications, and Quantum and Spectrum-X for both InfiniBand and Ethernet to support the high traffic of AI. Blackwell AI factories are substantial computing systems. NVIDIA has designed and optimized the entire Blackwell platform from chips to systems and networking, including structured cables, power, cooling, and software to facilitate the rapid establishment of AI factories by customers. These infrastructures require significant investment, and customers are eager to deploy them as soon as possible to enhance performance and TCO. Blackwell delivers 3 to 5 times more AI throughput in a power-limited data center compared to Hopper. The third highlight is NVLink, which is transformative with its all-to-all GPU switch.
The Blackwell system enables us to connect 144 GPUs in 72 GB200 packages into a single NVLink domain, providing a total NVLink bandwidth of 259 terabytes per second in one rack. For context, this is about ten times higher than Hopper, reflecting the need to accelerate the training of extensive models. This massive data transfer is crucial for GPU-to-GPU communication and necessary for low-latency, high-throughput generation of large language model tokens during inference. We now offer three networking platforms: NVLink for GPU scalability, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking capabilities have expanded significantly. The momentum for generative AI is increasing, with developers of generative AI frontier models striving to escalate to new heights while enhancing model safety and intelligence. We are also expanding our understanding across various modalities, including text, images, video, 3D, physics, chemistry, and biology. While chatbots, coding AIs, and image generators are rapidly evolving, this is merely the beginning. Internet services are leveraging generative AI for extensive recommenders, advertising targeting, and search mechanisms. AI startups are utilizing vast amounts of cloud capacity, and countries are increasingly recognizing the significance of AI, investing in sovereign AI infrastructure. NVIDIA AI and NVIDIA Omniverse are paving the way for a new era in AI general robotics. The enterprise AI wave has commenced, and we are ready to assist companies in transforming their operations. The NVIDIA AI Enterprise platform features NeMo, NIMs, NIM agent blueprints, and AI Foundry, which our leading IT ecosystem partners utilize to tailor AI models and develop customized applications. Enterprises can deploy these solutions on the NVIDIA AI Enterprise runtime, and at $4,500 per GPU per year, NVIDIA AI Enterprise represents exceptional value for AI deployment.
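As a quick sanity check on the rack figures quoted in the remarks (259 terabytes per second of aggregate NVLink bandwidth across a 144-GPU NVLink domain), dividing the total by the GPU count gives the average per-GPU share. A minimal sketch using only the numbers stated on the call:

```python
# Sanity check on the NVLink rack figures quoted in the remarks.
rack_bandwidth_tbps = 259   # total NVLink bandwidth per rack, TB/s (as quoted)
gpus_per_domain = 144       # GPUs in one NVLink domain (as quoted)

per_gpu_tbps = rack_bandwidth_tbps / gpus_per_domain
print(f"Average per-GPU NVLink bandwidth: {per_gpu_tbps:.1f} TB/s")  # ≈ 1.8 TB/s
```

The roughly 1.8 TB/s per GPU that falls out of this division is consistent with the "about ten times higher than Hopper" framing, since Hopper-generation NVLink is in the hundreds of gigabytes per second per GPU.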
The potential for NVIDIA's software TAM is substantial as the base of CUDA-compatible GPUs expands from millions to tens of millions. As mentioned, NVIDIA software is expected to finish the year reaching a $2 billion run rate. Thank you all for joining us today.
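To illustrate the software TAM arithmetic in the closing remarks, one can multiply the stated $4,500 per-GPU annual price by a hypothetical installed base. The 10-million-GPU figure below is an assumption standing in for "tens of millions", not a disclosed number:

```python
# Rough illustration of the software TAM math described in the remarks.
# The installed-base figure is a hypothetical assumption, not disclosed.
price_per_gpu_per_year = 4_500      # NVIDIA AI Enterprise, $/GPU/yr (as quoted)
installed_base_gpus = 10_000_000    # assumed stand-in for "tens of millions"

tam_per_year = price_per_gpu_per_year * installed_base_gpus
print(f"Illustrative software TAM: ${tam_per_year / 1e9:.0f}B per year")  # $45B
```

Even at this conservative end of the "tens of millions" range, the illustrative ceiling is far above the roughly $2 billion run rate cited for year-end, which is the sense in which the TAM is described as substantial.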

Operator

Ladies and gentlemen, this concludes today's call, and we thank you for your participation. You may now disconnect.
