NVIDIA Corp (NVDA) — Q3 2023 Earnings Call Transcript
Operator
Good afternoon. My name is Emma, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's third quarter earnings call. Simona Jankowski, you may begin your conference.
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2023. With me today from NVIDIA are Jen-Hsun Huang, president and chief executive officer; and Colette Kress, executive vice president and chief financial officer. I'd like to remind you that our call is being webcast live on NVIDIA's investor relations website. The webcast will be available for replay until the conference call to discuss our financial results for the fourth quarter and fiscal 2023. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, November 16, 2022, and based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
Colette Kress

Thanks, Simona. Q3 revenue was $5.93 billion, down 12% sequentially and down 17% year on year. We delivered record data center and automotive revenue while our gaming and pro visualization platforms declined as we work through channel inventory corrections and challenging external conditions. Starting with data center. Revenue of $3.83 billion was up 1% sequentially and 31% year on year. This reflects solid performance in the face of macroeconomic challenges, new export controls, and lingering supply chain disruptions. Year-on-year growth was driven primarily by leading U.S. cloud providers and a broadening set of consumer Internet companies for workloads such as large language models, recommendation systems, and generative AI. As the number and scale of public cloud computing and Internet service companies deploying NVIDIA AI grows, our traditional hyperscale definition will need to be expanded to convey the different end market use cases. We will align our data center customer commentary going forward accordingly. Other vertical industries, such as automotive and energy, also contributed to growth with key workloads relating to autonomous driving, high-performance computing, simulations, and analytics. During the quarter, the U.S. government announced new restrictions impacting exports of our A100 and H100-based products to China, and any product destined for certain systems or entities in China. These restrictions impacted third quarter revenue, largely offset by sales of alternative products into China. That said, demand in China more broadly remains soft, and we expect that to continue in the current quarter. We started shipping our flagship H100 data center GPU, based on the new Hopper architecture, in Q3. H100-based systems are available starting this month from leading server makers including Dell, Hewlett Packard Enterprise, Lenovo, and Supermicro. 
Early next year, the first H100-based cloud instances will be available on Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure. H100 delivered the highest performance and workload versatility for both AI training and inference in the latest MLPerf industry benchmarks. H100 also delivers incredible value compared to the previous generation for equivalent AI performance: it offers three times lower total cost of ownership while using five times fewer server nodes and 3.5 times less energy. Earlier today, we announced a multiyear collaboration with Microsoft to build an advanced cloud-based AI supercomputer to help enterprises train, deploy, and scale AI, including large state-of-the-art models. Microsoft Azure will incorporate our complete AI stack, adding tens of thousands of A100 and H100 GPUs, Quantum-2 400 gigabit per second InfiniBand networking, and the NVIDIA AI Enterprise software suite to its platform. Oracle and NVIDIA are also working together to offer AI training and inference at scale to thousands of enterprises. This includes bringing to Oracle Cloud Infrastructure the full NVIDIA accelerated computing stack and adding tens of thousands of NVIDIA GPUs, including the A100 and H100. Rescale, a provider of cloud-based high-performance computing, is adopting NVIDIA AI Enterprise and other software to address the industrial and scientific communities' rising demand for AI in the cloud. NVIDIA AI will bring new capability to Rescale's high-performance computing-as-a-service offerings, which include simulation and engineering software used across industries. Networking posted strong growth driven by hyperscale customers and easing supply constraints. Our new Quantum-2 400 gigabit per second InfiniBand and Spectrum Ethernet networking platforms are building momentum. 
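The H100 value claims above are ratio statements, so they can be sanity-checked with simple arithmetic. In the sketch below, the baseline A100 figures (node count, energy, cost) are hypothetical placeholders chosen for illustration, not numbers disclosed on the call; only the three ratios come from the remarks:

```python
# Sanity check of the stated H100 vs. A100 ratios for equivalent AI
# performance: ~3x lower TCO, ~5x fewer server nodes, ~3.5x less energy.
# The A100 baseline figures below are hypothetical placeholders.

a100_nodes = 100            # assumed A100 nodes for a fixed workload
a100_energy_mwh = 350.0     # assumed energy use (MWh) for that workload
a100_tco_musd = 30.0        # assumed total cost of ownership ($M)

# Apply the ratios stated on the call.
h100_nodes = a100_nodes / 5
h100_energy_mwh = a100_energy_mwh / 3.5
h100_tco_musd = a100_tco_musd / 3

print(f"nodes:  {a100_nodes} -> {h100_nodes:.0f}")
print(f"energy: {a100_energy_mwh:.0f} MWh -> {h100_energy_mwh:.0f} MWh")
print(f"TCO:    ${a100_tco_musd:.0f}M -> ${h100_tco_musd:.0f}M")
```

Whatever baseline is chosen, the stated ratios imply 80% fewer nodes and roughly 71% less energy for the same work.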
We achieved an important milestone this quarter with VMware, whose leading server virtualization platform, vSphere, has been rearchitected over the last two years to run on DPUs and now supports our BlueField DPUs. Our joint enterprise AI platform is available first on Dell PowerEdge servers. The BlueField DPU design win pipeline is growing, and the number of infrastructure software partners is expanding, including Arista, Check Point, Juniper, and Red Hat. The latest TOP500 list of supercomputers, released this week at Supercomputing '22, has the highest-ever number of NVIDIA-powered systems, including 72% of the total and 90% of new systems on the list. Moreover, NVIDIA powers 23 of the top 30 systems on the Green500 list, demonstrating the energy efficiency of accelerated computing. The number one most energy-efficient system is the Flatiron Institute's Henri, which is the first TOP500 system featuring our H100 GPUs. At GTC, we announced NVIDIA OVX computing system reference designs featuring the new L40 GPU based on the Ada Lovelace architecture. These systems are designed to build and operate 3D virtual worlds using NVIDIA Omniverse Enterprise. NVIDIA OVX systems will be available from Inspur, Lenovo, and Supermicro by early 2023. Lockheed Martin and Jaguar Land Rover will be among the first customers to receive OVX systems. We are further expanding our AI software and services offerings with the NVIDIA NeMo and BioNeMo large language model services, which are both entering early access this month. These enable developers to easily adopt large language models and deploy customized AI applications for content generation, text summarization, chatbots, code development, protein structure, and biomolecular property predictions. Moving to gaming. Revenue of $1.57 billion was down 23% sequentially and down 51% from a year ago, reflecting lower sell-in to partners to help align channel inventory levels with current demand expectations. 
We believe channel inventories are on track to approach normal levels as we exit Q4. Sell-through for our gaming products was relatively solid in the Americas and EMEA, but softer in Asia Pacific as macroeconomic conditions and COVID lockdowns in China continued to weigh on consumer demand. Our new Ada Lovelace GPU architecture had an exceptional launch. The first Ada GPU, the GeForce RTX 4090, became available in mid-October and received tremendous positive feedback from the gaming community. We sold out quickly in many locations and are working hard to keep up with demand. The next member of the Ada family, RTX 4080, is available today. The RTX 40 Series GPUs feature DLSS 3, the neural rendering technology that uses AI to generate entire frames for faster gameplay. Our third-generation RTX technology has raised the bar for computer graphics and helped supercharge gaming. For example, the 15-year-old classic game Portal has now been reimagined with full ray tracing and DLSS 3 and has made it onto Steam's top 100 most wish-listed games. The total number of RTX games and applications now exceeds 350. There is tremendous energy in the gaming community that we believe will continue to fuel strong fundamentals over the long term. The number of simultaneous users on Steam just hit a record of 30 million, surpassing the prior peak of 28 million in January. Activision's Call of Duty: Modern Warfare 2 set a record for the franchise with more than $800 million in opening weekend sales, topping the combined box office openings of movie blockbusters Top Gun: Maverick and Doctor Strange in the Multiverse of Madness. Moreover, this month's League of Legends World Championship in San Francisco sold out within minutes, with 18,000 esports fans packed into the arena where the Golden State Warriors play. We continue to expand the GeForce NOW cloud gaming service. In Q3, we added over 85 games to the library, bringing the total to over 1,400. 
We also launched GeForce NOW on new gaming devices, including Logitech's G Cloud handheld, cloud gaming Chromebooks, and the Razer Edge 5G. Moving to pro visualization. Revenue of $200 million was down 60% sequentially and down 65% from a year ago, reflecting lower sell-in to partners to help align channel inventory levels with current demand expectations. These dynamics are expected to continue in Q4. Despite near-term challenges, we believe our long-term opportunity remains intact, fueled by AI, simulation, computationally intensive design, and engineering workloads. At GTC, we announced NVIDIA Omniverse Cloud Services, our first software- and infrastructure-as-a-service offering, enabling artists, developers, and enterprise teams to design, publish, and operate metaverse applications from anywhere on any device. Omniverse Cloud Services runs on the Omniverse Cloud Computer, a computing system comprising NVIDIA OVX for graphics and physics simulation, NVIDIA HGX for AI workloads, and the NVIDIA Graphics Delivery Network, a global-scale, distributed data center network for delivering low-latency metaverse graphics at the edge. Leaders in some of the world's largest industries continue to adopt Omniverse. Home improvement retailer Lowe's is using it to help design, build, and operate digital twins for its stores. Charter Communications and advanced analytics company HEAVY.AI are creating Omniverse-powered digital twins to optimize Charter's wireless network. Deutsche Bahn, operator of the German national railway, is using Omniverse to create digital twins of its rail network and train AI models to monitor the network, increasing safety and reliability. Moving to automotive. Revenue of $251 million increased 14% sequentially and 86% from a year ago. Growth was driven by increased AI automotive solutions as our customers ramp up production. Automotive has great momentum and is on its way to becoming our next multibillion-dollar platform. 
Volvo Cars unveiled the all-new flagship Volvo EX90 SUV powered by the NVIDIA DRIVE platform. This is the first model to use Volvo's software-defined architecture with a centralized core computer containing both DRIVE Orin and DRIVE Xavier, along with 30 sensors. Other recently announced design wins and new model introductions include Hozon Auto, NIO, Polestar, and others. At GTC, we also announced NVIDIA DRIVE Thor, the successor to Orin in our automotive SoC roadmap, which delivers up to 2,000 teraflops of performance and leverages technologies introduced in our Grace, Hopper, and Ada architectures. It is capable of running both the automated driving and in-vehicle infotainment systems simultaneously, offering high performance while reducing costs and energy consumption. DRIVE Thor will be available for automakers' 2025 models, with Geely-owned automaker ZEEKR as the first announced customer. Moving to the rest of the P&L. GAAP gross margin was 53.6%, and non-GAAP gross margin was 56.1%. Gross margins reflect $702 million in inventory charges, largely related to lower data center demand in China, partially offset by a warranty benefit of approximately $70 million. Year on year, GAAP operating expenses were up 31% and non-GAAP operating expenses were up 30%, primarily due to higher compensation expenses related to headcount growth and salary increases, and higher data center infrastructure expenses. Sequentially, both GAAP and non-GAAP operating expense growth was in the single-digit percent range, and we plan to keep it relatively flat at these levels over the coming quarters. We returned $3.75 billion to shareholders in the form of share repurchases and cash dividends. At the end of Q3, we had approximately $8.3 billion remaining under our share repurchase authorization through December 2023. Let me turn to the outlook for the fourth quarter of fiscal 2023. 
We expect our data center revenue to reflect early production shipments of the H100, offset by continued softness in China. In gaming, we expect to resume sequential growth, with our revenue still below end demand as we continue to work through the channel inventory correction. And in automotive, we expect a continued ramp in our Orin design wins. All in, we expect modest sequential growth driven by automotive, gaming, and data center. Revenue is expected to be $6 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 63.2% and 66%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be approximately $2.56 billion. Non-GAAP operating expenses are expected to be approximately $1.78 billion. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $40 million, excluding gains and losses on non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 9%, plus or minus 1%, excluding any discrete items. Capital expenditures are expected to be approximately $500 million to $550 million. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We'll be attending the Credit Suisse conference in Phoenix on November 30, the Arete Virtual Tech Conference on December 5, and the JPMorgan forum on January 5 in Las Vegas. Our earnings call to discuss the results of our fourth quarter and fiscal 2023 is scheduled for Wednesday, February 22. We will now open the call for questions.
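The guidance in the outlook can be translated into explicit dollar ranges. The sketch below simply applies the stated midpoints and tolerances from the call (revenue of $6 billion plus or minus 2%, non-GAAP gross margin of 66% plus or minus 50 basis points); it introduces no figures of its own:

```python
# Implied ranges from the Q4 FY2023 guidance given on the call.
revenue_mid = 6.0e9                      # $6.0B, plus or minus 2%
rev_low, rev_high = revenue_mid * 0.98, revenue_mid * 1.02

ngm_mid = 0.66                           # non-GAAP gross margin, +/- 50 bps
ngm_low, ngm_high = ngm_mid - 0.005, ngm_mid + 0.005

# Implied non-GAAP gross profit at the midpoints.
gross_profit_mid = revenue_mid * ngm_mid

print(f"revenue range: ${rev_low/1e9:.2f}B to ${rev_high/1e9:.2f}B")
print(f"non-GAAP gross margin range: {ngm_low:.1%} to {ngm_high:.1%}")
print(f"implied non-GAAP gross profit at midpoint: ${gross_profit_mid/1e9:.2f}B")
```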
Operator
Your first question comes from Vivek Arya with Bank of America Securities. Your line is now open.
Thanks for taking my question. Colette, just wanted to clarify first, I think last quarter you gave us a sell-through rate for your gaming business at about $2.5 billion a quarter. I think you said China is somewhat weaker. So I was hoping you could update us on what that sell-through rate is right now for gaming. And then, Jen-Hsun, the question for you. A lot of concerns about large hyperscalers cutting their spending and pointing to a slowdown. So if, let's say, U.S. cloud capex is flat or slightly down next year, do you think your business can still grow in the data center and why?
Colette Kress

Yes. Thanks for the question. Let me first start with the sell-through of our gaming business. We had indicated that if you put two quarters together, we would see approximately $5 billion in normalized sell-through for our business. During the quarter, sell-through in Q3 was relatively solid. Although China lockdowns continue to challenge our overall China business, sell-through there was still relatively solid. Notebook sell-through was also quite solid, while desktop was a bit softer, particularly in China and the broader Asia region. We expect, though, stronger end demand as we enter Q4, driven by the upcoming holidays as well as continued Ada adoption.
Jen-Hsun Huang

Vivek, our data center business is indexed to two fundamental dynamics. The first has to do with general-purpose computing no longer scaling. Acceleration is therefore necessary to achieve the needed levels of cost efficiency and energy efficiency at scale, so that we can continue to increase workloads while saving money and saving power. Accelerated computing is generally recognized as the path forward as general-purpose computing slows. The second dynamic is AI. We are seeing surging demand in some very important sectors of AI, driven by important breakthroughs. One is deep recommender systems, which are now quite essential to delivering the best content, item, or product recommendation for someone using a device like a selfie camera or interacting with a computer using only voice. You need to really understand the nature and context of the person making the request to make the appropriate recommendation. The second has to do with large language models. This started several years ago with the invention of the transformer, which led to BERT, which led to GPT-3, and now to a whole range of models associated with them. We now have the ability to learn representations of languages of all kinds, whether human language or the languages of biology, chemistry, and so on. Recently, I saw a breakthrough called GenSLMs, which is one of the first examples of learning the language of genomes. The third is generative AI. For the first ten years, we dedicated ourselves to perception AI, but the ultimate goal of AI is to create, to generate. This is now the beginning of the era of generative AI. We see it everywhere: whether it's generating images, videos, or text of all kinds, it enhances our performance, improves productivity, and reduces costs with whatever we have to work with. Productivity is more crucial than ever. Our company is indexed to both power efficiency and cost efficiency, and both are becoming more important.
Operator
Your next question comes from the line of C.J. Muse with Evercore. Your line is now open.
Yeah, good afternoon, and thank you for taking the question. You've started to bundle NVIDIA AI Enterprise now with the H100. I'm curious if you can talk about how we should think about the timing of software monetization, and how we should see this flow through the model, particularly with the focus on the AI Enterprise and Omniverse side of things?
Jen-Hsun Huang

Yes. Thanks, CJ. We're making excellent progress with NVIDIA AI Enterprise. In fact, you probably saw that we made several announcements this quarter associated with clouds. NVIDIA has a rich ecosystem, and over the years our software stack has been integrated by developers and startups of all kinds. But now more than ever, we are at the tipping point of clouds, which is fantastic, because if we can get NVIDIA's architecture and our full stack into every single cloud, we can reach more customers more quickly. This quarter, we announced several initiatives and partnerships, including one with Microsoft, which has everything to do with scaling up AI. We have many startups clamoring for large installations of our GPUs to facilitate language model training and scaling AI efforts in enterprises, as well as among the world's Internet service providers. Every company we're talking to would like to have the agility, scale, and flexibility of clouds. Over the last year, we've been focused on moving all of our software stacks to the cloud. Today, we announced that Microsoft and ourselves are going to standardize on the NVIDIA stack for a significant part of what we're doing together, so that we can take a full stack out to the world's enterprises. This encompasses all software as well. A month ago, we announced a similar partnership with Oracle. You also saw that Rescale, a leader in high-performance cloud computing, has integrated NVIDIA AI into their stack. NVIDIA has also been integrated into Google Cloud Platform. We've recently announced the NeMo and BioNeMo large language model services, putting NVIDIA software in the cloud, and we've also made Omniverse available in the cloud. The goal of all of this is to move the NVIDIA platform and software stack into the cloud, allowing customers to engage with our software much more quickly. 
Customers can access our software on a per-GPU instance hour basis in the cloud or on-prem through software licensing. In both cases, our software is practically available everywhere, and our partners are excited because NVIDIA's rich ecosystem is global, which brings new consumption into their clouds and connects new opportunities to the APIs and services they offer. Our software stack is making incredible progress.
Operator
Your next question comes from the line of Chris Caso with Credit Suisse. Your line is now open.
Yes. Thank you. Good evening. I wonder if you could give some more color about the inventory charges you took in the quarter, and then internal inventory in general. In the documentation, you talked about that being a portion of inventory on hand plus some purchase obligations. And you also spoke in your prepared remarks that some of this was due to China data centers. So if you can clarify what was in those charges. And then, in general, for your internal inventory, does that still need to be worked down? And what are the implications if that needs to be worked down over the next couple of quarters?
Colette Kress

Thanks for the question, Chris. As we highlighted in our prepared remarks, we booked an entry of $702 million for inventory reserves within the quarter. Most of that, primarily all of it, is related to our data center business due to the change in expected demand looking forward for China. When we look at the data center products, a good portion of this was also the A100, which we wrote down. Now, looking at the inventory that we have on hand and the inventory that has increased, a lot of that is due to our upcoming architectures coming to market: our Ada architecture, our Hopper architecture, and even more in terms of our networking business. We have been building for those architectures to come to market, and we always review our inventory levels at the end of each quarter against our expected demand going forward. I think we took a solid approach this quarter based on that expectation.
Operator
Your next question comes from the line of Timothy Arcuri with UBS. Your line is now open.
Thank you very much. Colette, I have a two-part question. First, is there any impact from stockpiling in the data center guidance? I ask this because you now have the A800, which is a modified version of the A100 with a lower data transfer rate. One could assume that customers might be accumulating that while they can still obtain it. The second part of my question is regarding the inventory charge. Could you elaborate on that? Last quarter, it was understandable that you took a charge due to revenue being lower than anticipated, but this time revenue came in almost as expected, and it seemed like China was neutral overall. Is the charge connected to just reducing A100 inventory more quickly? Is that what the charges are related to?
Colette Kress

Sure. Let me take the first part of your question. In most of our data center business, we are working with customers specifically on their needs to build out accelerated computing and AI; it's just not a business where units are being stockpiled. They're usually very specific products and projects that we see. So I'm going to answer no, nothing that we can see. Your second question was regarding the inventory provisions. At the end of last quarter, we were beginning to see softness in China. We are always looking at our needs long term; it's not a statement about the current quarter of inventory. It usually takes two or three quarters for us to build product for future demand, so that's always the case with the inventory we are ordering. Now, given what we've seen in terms of continued lockdowns and continued economic challenges in China, it was time for us to take a hard look at what we think we'll need for the data center going forward and not lag on write-downs.
Operator
Your next question comes from the line of Stacy Rasgon with Bernstein. Your line is now open.
Hi, guys. Thanks for taking my question. Colette, I had a question on the commentary you gave on the sequentials. It sounded like data center may have some China softness issues. You said gaming resumes sequential growth. But then you said sequential growth for the company is driven by automotive, gaming, and data center. How can all three of those grow sequentially if the overall guidance is roughly flat? Are they all growing just a little bit, or is one of them actually down? How should we think about the segments into Q4 given that commentary?
Colette Kress

Yes. Your question is regarding the sequential growth from Q3 to the guidance that we provided for Q4. You're correct: our guidance implies growth of only about $100 million. We've indicated that three of those platforms will likely grow just a little bit, while our pro visualization business we think will be flattish and likely not growing, as we're still working on correcting the channel inventory levels. It's very difficult to say which will have the largest increase, but again, we are planning for all three of those market platforms to grow just a little bit.
Operator
Your next question comes from the line of Mark Lipacis with Jefferies. Your line is now open.
Hi. Thanks for taking my question. Jen-Hsun, I think for you, you've articulated a vision for the data center as a solution with an integrated solution set of a CPU, GPU, and DPU deployed for all workloads or most workloads, I think. Could you just give us a sense of or talk about where this vision is in the penetration cycle? And maybe talk about Grace's importance for realizing that vision, what will Grace deliver versus an off-the-shelf x86? Do you have a sense of where Grace will get embraced first or the fastest within that vision?
Jen-Hsun Huang

Grace's data-moving capability is exceptional. Grace also has memory coherence with our GPU, which allows the GPU to expand its effective memory by a factor of ten. That's not possible without the special capabilities designed between Hopper and Grace and the architecture of Grace. Grace is designed for processing large data sets at very high speeds. Applications such as data processing for recommender systems operate on petabytes of live data at a time, and they need to make recommendations within milliseconds to the hundreds of millions of users of those services. Grace is also very effective for AI training and machine learning. I've previously mentioned that we will have production samples in Q1, and we're still on track to deliver that.
Operator
Your next question comes from the line of Harlan Sur with J.P. Morgan. Your line is now open.
Good afternoon and thanks for taking my question. Your data center networking business, I believe, is driving about $800 million per quarter in sales, very, very strong growth over the past few years. Near term, as you guys pointed out, the team is driving strong networking and BlueField attachment to your own compute solutions like DGX and more partner announcements like VMware. But we also know that networking has pretty large exposure to general-purpose cloud and hyperscale compute spending trends. So what's the visibility and growth outlook for the networking business over the next few quarters?
Jen-Hsun Huang

If I could take that. First, thanks for your question. Our networking, as you know, is heavily indexed to high-performance computing. We don't serve the vast majority of commodity networking; all our networking solutions are very high-end, designed for data centers that move a lot of data. In today's hyperscale data centers, which deploy a large number of AI applications, the network bandwidth provisioned has substantial implications for overall throughput. A small incremental investment in high-performance networking translates to billions of dollars in savings through efficiently provisioning the service, or billions of dollars more in throughput, which improves their economics. These days, with disaggregated AI applications being provisioned across data centers, high-performance networking is quite essential; it pays for itself almost immediately. You might have noticed that NVIDIA and Microsoft are building one of the largest AI infrastructures in the world, powered completely by NVIDIA's 400 gigabit per second InfiniBand network. That network pays for itself instantaneously, because the investment in the infrastructure is significant, and if you are slowed down by slow networks, the overall infrastructure efficiency is greatly reduced. Our focus on networking remains crucial. You might recall that Mellanox, at the time NVIDIA acquired it, was doing about a few hundred million dollars a quarter, and we are now approaching twice those numbers in a single quarter.
Operator
Your next question comes from the line of Aaron Rakers with Wells Fargo. Your line is now open.
Thanks for taking the question. I want to expand on the networking question a little bit further. When we look at the Microsoft announcement today, considering what Meta is doing on the AI footprint that they're deploying, Jen-Hsun, can you help us understand where your InfiniBand networking sits relative to traditional data center switching? And maybe kind of build on that, how you're positioning Spectrum for the market, does that compete against a broader set of opportunities in the Ethernet world for AI fabric networking?
Jen-Hsun Huang

The math is like this. If you're going to spend $20 billion on an infrastructure and the efficiency of that overall data center improves by 10%, the savings are enormous. When we work on large language models and recommender systems, the processing is distributed across the entire data center. We distribute workloads across multiple GPUs and nodes and need them to run for extended periods, so the importance of the network cannot be overstated. A mere 10% improvement in overall efficiency, while challenging to achieve, makes a significant difference. Using NVIDIA's InfiniBand along with our entire software stack, with what we call Magnum IO, we can do computation in the network itself. A lot of software runs in the network, not just data transfer; we call it in-network computing, meaning a great deal of software runs at the edge of the network. We achieve considerable differences in overall efficiency. Thus, if you're investing in infrastructure, whether at a large or small scale, the difference in efficiency is significant.
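The efficiency argument here reduces to back-of-the-envelope arithmetic. In the sketch below, the $20 billion infrastructure cost and the 10% efficiency gain come from the remarks; the networking premium is a hypothetical placeholder, since no such figure was given on the call:

```python
# The math sketched on the call: on a $20B infrastructure, a 10%
# efficiency gain is worth ~$2B of effective capacity, which can far
# exceed the incremental cost of high-performance networking.
infrastructure_cost = 20e9     # from the remarks
efficiency_gain = 0.10         # from the remarks

value_of_efficiency = infrastructure_cost * efficiency_gain

# Hypothetical placeholder for the incremental networking spend;
# this number is assumed for illustration, not from the call.
assumed_networking_premium = 0.5e9

net_benefit = value_of_efficiency - assumed_networking_premium
print(f"value of 10% efficiency: ${value_of_efficiency/1e9:.1f}B")
print(f"net benefit vs. assumed networking premium: ${net_benefit/1e9:.1f}B")
```

Under this assumed premium, the networking investment returns roughly four times its cost, which is the sense in which it "pays for itself almost immediately."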
Operator
Your next question comes from the line of Ambrish Srivastava with BMO. Your line is now open.
Hi. Thank you very much. I actually had a couple of clarifications. Colette, on the data center side, is it a fair assumption that compute was down quarter over quarter in the reported quarter? Because the quarter before, Mellanox, or the networking business, was up, as was called out, and again you said it grew quarter over quarter. Is that a fair assumption? And then I had a clarification on the U.S. government ban. Initially, it was supposed to be a $400 million impact, really going to what the government was trying to firewall. I'm trying to understand: is the A800 against the spirit of what the government is trying to do, i.e., to firewall high-performance computing? Or is the A800 going to a different set of customers?
Thank you for the question. Looking at our compute for the quarter, it is about flat. Yes, we're seeing growth in terms of our networking, but you should see our Q3 compute as about flat with last quarter.
Ambrish, A800 hardware ensures that it always meets U.S. government's clear test for export control. It cannot be reprogrammed by customers to exceed its capabilities. It is hardware-limited, and it is the hardware that determines the A800's capabilities. So it meets regulations both in letter and spirit. We raised the concern about the $400 million of A100s because we were uncertain about our ability to execute. The introduction of A800 to our customers and through our supply chain in time required remarkable efforts to ensure that our business and customers were not affected. A800 hardware fully ensures that it meets U.S. government's clear tests for export control.
Operator
Your next question comes from the line of William Stein with Truist Securities. Your line is now open.
Thank you. I'm hoping you can discuss the pace of H100 growth as we progress over the next year. We've had many questions about whether the ramp in this product should look like a sort of traditional product cycle, where there is quite a bit of pent-up demand for this significantly improved performance product and there is supply available as well. So does this rollout look relatively typical from that perspective, or should we expect a more delayed start to the growth trajectory, where we see maybe substantially more growth in, let's say, the second half of '23?
The H100 ramp is different than the A100 ramp in several ways. The first is the total cost of ownership: the operational cost benefits from the energy savings are crucial now that every data center is power-limited. The incredible transformer engine serves the needs of the latest AI models, and there is pent-up demand for Hopper due to the new models I mentioned earlier—deep recommender systems and large language models. Customers are keen to ramp Hopper as quickly as possible, and we are striving to support that. We are fully mobilized to help cloud service providers stand up supercomputers. NVIDIA is the only company in the world that produces and ships semi-custom supercomputers in high volume. It is rare for a company to ship a supercomputer even every three years, and even more remarkable to ship supercomputers to every cloud service provider in a single quarter. We are working hand-in-hand with each one, racing to stand up Hoppers. Production shipments are expected in Q4, and we expect cloud services based on Hopper to be available in Q1, with large volumes in Q1. This transition process is faster than what we experienced with Ampere due to the dynamics I described.
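The point about energy savings in a power-limited data center can be sketched with hypothetical numbers: when the facility's power budget is fixed, total throughput scales with performance per watt, so a perf/watt uplift translates directly into more work per facility. All figures below are invented for illustration and are not NVIDIA specifications:

```python
# Hypothetical sketch: in a power-limited data center, throughput is
# capped by the facility power budget, so performance-per-watt gains
# translate directly into more useful work at the same power draw.
# All numbers are invented for illustration only.

power_budget_kw = 10_000       # fixed facility power budget (10 MW)

# Assumed relative performance per kW, in arbitrary units
perf_per_kw_old = 1.0          # previous-generation accelerator
perf_per_kw_new = 3.0          # hypothetical generational uplift

throughput_old = power_budget_kw * perf_per_kw_old
throughput_new = power_budget_kw * perf_per_kw_new

uplift = throughput_new / throughput_old
print(f"Throughput uplift at fixed power: {uplift:.1f}x")  # → 3.0x
```

Under these assumed numbers, the same power envelope yields three times the throughput, which is why TCO and energy efficiency dominate purchasing decisions once power, not budget, is the binding constraint.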
Operator
Your next question comes from the line of Matt Ramsay with Cowen. Your line is now open.
Yeah. Thank you very much. Good afternoon. I guess, Colette, I heard in your script that you talked about a new way of commenting on, or reporting, hyperscaler revenue in your data center business. I wonder if you could give us a little more detail about what you're thinking there and what drove the decision? And I guess the derivative of that, Jen-Hsun, is what that decision to talk about the data center business to hyperscalers differently means for the business. Is it simply a reflection of where demand is, and you're going to break things out differently, or is something changing about the mix of internal properties versus vertical industry demand within the hyperscale customer base?
Yes, Matt, thanks for the question. Let me clarify a little bit in terms of what we believe we should be looking at when we go forward and discuss our data center business. Our data center business is becoming larger, and our customers are complex. When we talk about hyperscale, we tend to refer to seven or eight different companies, but the reality is there are many very large companies that we could add to that discussion based on their purchases. We're also looking at cloud purchases and considering what our customers are building for the cloud because this is where our enterprise, researchers, and higher education customers are also making purchases. We aim to find a better way to describe the developments we are observing in the cloud and provide a clearer understanding of some of the larger installations we are seeing in hyperscalers.
Let me underscore what Colette just said, which is absolutely correct. Two significant dynamics are occurring. First, the adoption of NVIDIA in Internet service companies worldwide has expanded significantly. These are companies offering services that aren't just public cloud computing firms. The second important factor relates to cloud computing. We are now at a tipping point for cloud computing, where nearly every enterprise worldwide is adopting both cloud-first and multi-cloud strategies. This explains why all the announcements we made over this quarter and during GTC showcase new platforms now available in the cloud. A CSP, or hyperscaler, can function as both a customer and a partner on the public cloud side of their business. Given the richness of NVIDIA's ecosystem, we continue to have a strong relationship with our cloud service providers. It’s evident now that the public cloud side of their business will likely account for the vast majority of their overall consumption. As a result, our interactions with CSPs and hyperscalers are shifting to encompass both internal consumption and joint efforts on the public cloud side.
Operator
Your next question comes from the line of Joseph Moore with Morgan Stanley. Your line is now open.
Great. Thank you. I wonder if you could reflect on the past impact of crypto on your numbers. Obviously, that's now gone from your figures, but do you see any potential for liquidation of GPUs that are in the mining network, and any impact going forward? And do you foresee blockchain being an important part of your business at some point down the road?
We don't expect to see blockchain becoming an important part of our business in the future. The resale market is always present. If you look at major resale platforms, like eBay, there are secondhand graphics cards available for sale all the time. The reason is that a 3090 that was purchased today may be upgraded to a 4090 later. That 3090 can be sold to someone else at the right price. The availability of secondhand and used graphics cards has always existed. When inventory exceeds demand, prices may drop, affecting the lower ends of our market. However, the current trajectory with Ada is focused clearly on the upper half of our market. Early indicators show that the Ada launch was very successful. We shipped a substantial number of 4090s, which quickly sold out worldwide due to our preparedness. The positive reception for the 4090 and the 4080 has been exceptional, reflecting the strength and vibrancy of the gaming market. We are highly enthusiastic about our Ada launch, and many more products are on the way.
Operator
Your last question today comes from the line of Toshiya Hari with Goldman Sachs. Your line is now open.
Great. Thank you so much for squeezing me in. I had two quick ones for Colette. On supply, I think there was some mixed messaging in your remarks. I think you talked about supply being a headwind at one point, and then when you were speaking to the networking business, you mentioned supply easing. So I was hoping you could discuss supply, if you're caught up to demand at this point. And then, secondly, just on stock-based compensation, pretty mundane topic, I realize, but I think in the quarter, it was about $700 million. It is becoming a bigger piece of your operating expenses. So I am curious how we should be modeling that going forward.
When we look at our supply constraints from the past, in each and every quarter this is improving. Networking was one of our issues probably a year ago, and it has taken us up until this quarter and next quarter to see significant supply improvements to support our customers' pipelines. In contrast, we also discussed our customers' own supply constraints in setting up data centers and accessing data center capacity, which has been challenging. This can impact their purchasing decisions as they wait for certain parts of the supply chain to come through. I hope this clarifies what we were discussing regarding the two areas of supply. On stock-based compensation, it remains difficult to predict, as it can vary based on the influx of incoming employees and our once-a-year grants to employees, which are fixed. So it's challenging to model, but stock-based compensation is critical to our employees' overall compensation structure and will persist as such. We will continue assessing it from a broader compensation perspective.
Thanks, everyone. We are quickly adapting to the macro environment. We're correcting inventory levels, offering alternative products to data center customers in China, and keeping our operating expenses flat for the next few quarters. Our new platforms are off to a great start and form the foundation for our resumed growth. RTX is reinventing 3D graphics with ray tracing and AI. The launch of the 4090 was phenomenal. Gamers waited in long lines worldwide, and 4090 stocks sold out quickly. Hopper, with its revolutionary transformer engine, is just in time to meet the demand for recommender systems, large language models, and generative AI. NVIDIA networking ensures the highest data center throughput and is achieving record results. Orin is the world's first computing platform designed for AI-powered autonomous vehicles and robotics, paving the road for automotive to become our next multibillion-dollar platform. These computing platforms run NVIDIA AI and NVIDIA Omniverse, software libraries and engines that help companies build and deploy AI products and services. Our pioneering work in accelerated computing is more crucial than ever as general-purpose computing slows and AI demands more computing. Scaling exclusively through general-purpose computing is no longer viable, both from a cost and power perspective. Accelerated computing is the path forward. We look forward to updating you on our progress next quarter.
Operator
This concludes today's conference call. You may now disconnect.