NVIDIA Corp (NVDA) — Q4 2022 Earnings Call Transcript
Operator
Good afternoon. My name is David, and I will be your conference operator today. At this time, I'd like to welcome everyone to NVIDIA's Fourth Quarter Earnings Call. Today's conference is being recorded. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. Thank you. Simona Jankowski, you may begin your conference.
Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the Fourth Quarter of Fiscal 2022. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2023. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 16, 2022, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
Thanks, Simona. We had an excellent quarter with revenue up 53% year-on-year to $7.6 billion. We set records for total revenue as well as for Gaming, Data Center and Professional Visualization. Full year revenue was a record $26.9 billion, up 61%, compounding the prior year's growth of 53%. Starting with Gaming. Revenue of $3.4 billion rose 6% sequentially and was up 37% from a year earlier. Fiscal year revenue of $12.5 billion was up 61%. Gaming has become the top entertainment medium, and it continues to show strong momentum. Just last month, Steam hit a record 28 million concurrent users, up 50% in two years. Record desktop revenue in the quarter was led by growth in our GeForce RTX 30 Series products, with continued strength in the high end. At CES, we announced the RTX 3050 GPU, which hit retail in late January, bringing NVIDIA RTX and AI technologies to more mainstream audiences. Laptop gaming revenue also set a record, driven by the ramp of the new GeForce RTX 3070 Ti and 3080 Ti GPUs, which were also announced at CES. These leverage our fourth-generation Max-Q technology to enable quiet, thin and light gaming laptops. All in, we announced over 160 new laptop designs featuring NVIDIA Ampere architecture RTX 30 Series GPUs. These include a number of Studio systems targeting the tens of millions of creators driving the future of design, innovation and virtual worlds. In addition to supporting the new RTX 30 Series GPUs, Studio laptops will support NVIDIA software including Omniverse, Canvas and Broadcast. Availability of our gaming products in the channel remains low. The NVIDIA RTX ecosystem continues to expand, with over 30 new RTX games and applications added this quarter, including blockbuster hits like Battlefield 2042, Grand Theft Auto, Call of Duty: Vanguard and God of War. In addition, several new titles support NVIDIA Reflex for low-latency gameplay.
Our GPUs are capable of cryptocurrency mining, so we have limited visibility into how much of this impacts our overall GPU demand. Nearly all desktop NVIDIA Ampere architecture GeForce GPU shipments are Lite Hash Rate to help direct GeForce supply to gamers. Cryptomining processor revenue was $24 million, which is included in OEM and other. We continue to expand the NVIDIA GeForce NOW cloud gaming ecosystem with new hit titles, including EA's Battlefield 4 and Battlefield V. At CES, we announced a partnership with Samsung to integrate GeForce NOW in its smart TVs starting in Q2 of this year. This follows last month's beta release of GeForce NOW for LG smart TVs. In addition, we teamed up with AT&T to bring GeForce NOW to 5G mobile devices in the U.S. We also added our first GFN data center in Canada. Moving to pro visualization. Q4 revenue was $643 million, up 11% sequentially and up 109% from a year ago. Fiscal year revenue of $2.1 billion was up 100%. Sequential growth in the quarter was driven by a shift to higher-value workstations and the continued ramp of our NVIDIA Ampere architecture. We believe strong demand is fueled by continued build-outs for hybrid work environments as well as growth in key workloads, including 3D design, AI and rendering. For example, Sony Pictures Imageworks is using NVIDIA RTX to accelerate ray tracing for rendering applications. Motion is using NVIDIA RTX for AI to assist in predictive maintenance of its vehicles. Duke Energy is using NVIDIA RTX for AI and VR to map, view and maintain energy facilities. NVIDIA Omniverse enterprise software entered general availability. While it's still early days, customer feedback so far has been very positive, with multiple significant enterprise licensees already signed. In addition to software licenses, Omniverse also drives computing opportunities for NVIDIA RTX in laptops, workstations, on-prem servers and the cloud.
Omniverse can be used by individuals for free and by enterprise teams via software subscriptions. At CES, we made the free version of Omniverse for individuals available. Omniverse allows creators with RTX GPUs to connect leading 3D design applications to a single scene and supercharge their work with AI and physics. We also announced early access to Omniverse Cloud, which adds one-click capability to collaborate with other artists, whether across the room or across the globe. For digital twin applications, we announced the Isaac Autonomous Mobile Robot platform. Built on Omniverse and securely orchestrated and cloud delivered, the platform optimizes operational efficiency and accelerates deployment for logistics. It consists of several NVIDIA AI technologies and SDKs, including those for high-precision mapping, situational awareness and real-time route optimization. Moving to automotive. Q4 revenue was $125 million, declining 7% sequentially and 14% from the year-ago quarter. Fiscal year revenue of $566 million was up 6%. We have just started shipments of our Orin-based platform and expect to return to sequential revenue growth in Q1, with a more meaningful inflection in the second half of the fiscal year and momentum building into calendar 2023. I will now hand it over to Jensen to provide more color on this morning's automotive news.
Thanks, Colette. Earlier today, we announced a partnership with Jaguar Land Rover to jointly develop and deliver fleets of software-defined cars. Starting in 2025, all new Jaguar and Land Rover vehicles will have next-generation automated driving systems, plus AI-enabled software and services built on the NVIDIA DRIVE platform. DRIVE Orin will be the AI computer brain running our DRIVE AV and DRIVE IX software. The DRIVE Hyperion sensor network will be the central nervous system. This new vehicle architecture will enable a wide spectrum of active safety, automated driving and parking systems. Inside the vehicle, the system will deliver AI features, including driver and occupant monitoring and advanced visualization of the vehicle's surroundings. We are very much looking forward to partnering with Thierry Bolloré, JLR's CEO, and his team to reinvent the future of luxury cars. Our full stack end-to-end approach is a new business model that offers downloadable AV and AI services to the fleet of JLR vehicles with a shared software revenue stream for both companies over the life of the fleet.
Thanks, Jensen. Moving to Data Center. Record revenue of $3.3 billion grew 11% sequentially and 71% from a year earlier. Fiscal year revenue of $10.6 billion was up 58%. Data Center growth in the quarter was once again led by our compute products on strong demand for NVIDIA AI. Hyperscale and cloud demand was outstanding, with revenue more than doubling year-on-year. Vertical industries also posted strong double-digit year-on-year growth, led by consumer Internet companies. The flagship NVIDIA A100 GPU continues to drive strong growth. Inference-focused revenue more than tripled year-on-year. Accelerating inference growth has been enabled by widespread adoption of our Triton Inference Server software, which helps customers deliver fast and scalable AI in production. Data Center compute demand was driven by continued deployment of our Ampere architecture-based products for fast-growing AI workloads such as natural language processing and deep learning recommendation systems, as well as cloud computing. For example, Block Inc., a global leader in payments, uses conversational AI in its Square Assistant to schedule appointments with customers. These AI models are trained on NVIDIA GPUs in AWS and perform inference 10x faster on AWS GPU instances than on CPUs. Social media company Snap used NVIDIA GPUs and Merlin deep learning recommender software to improve inference cost efficiency by 50% and decrease latency by 2x. For the third year in a row, industry benchmarks show that NVIDIA AI continues to lead the industry in performance. Along with partners like Microsoft Azure, NVIDIA set records in the latest benchmarks for AI training across 8 popular AI workloads, including computer vision, natural language processing, recommendation systems, reinforcement learning and object detection. NVIDIA AI was the only platform to make submissions across all benchmarks and use cases, demonstrating versatility as well as performance.
The numbers show performance gains on our A100 GPUs of over 5x in just 2 months, thanks to continuous innovations across the full stack in AI algorithms, optimization tools and system software. Over the past 3 years, we have seen performance gains of over 20x, powered by advances we have made across our full stack offering of GPUs, networks, systems and software. The top-notch performance of NVIDIA AI is in demand by some of the most technologically advanced companies globally. Meta Platforms recently introduced its new AI Research SuperCluster, which utilizes over 6,000 A100 GPUs. Early benchmarks from Meta indicated that this system can train large natural language processing models 3x faster and run computer vision tasks 20x faster than its previous system. Later this year, in a second phase, the system will scale up to 16,000 GPUs, which Meta anticipates will achieve 5x the mixed-precision AI performance. Alongside high performance, Meta highlighted the system's extreme reliability, security, privacy and flexibility to support a diverse array of AI models as crucial criteria for its selection. We continue to broaden the reach and ease the adoption of NVIDIA AI into vertical industries. Our ecosystem of NVIDIA-certified systems expanded with Cisco and Hitachi, which joined Dell, Hewlett Packard Enterprise, Inspur, Lenovo and Supermicro, among other server manufacturers. We released version 1.1 of our NVIDIA AI Enterprise software, allowing enterprises to accelerate AI workloads on VMware on mainstream IT infrastructure as well. And we expanded the number of system integrators qualified for NVIDIA AI Enterprise. Forrester Research, in its evaluation of enterprise AI infrastructure providers, recognized NVIDIA in the top category of Leaders.
An example of a partner that's helping to expand our reach into enterprise IT is Deloitte, a leading global consulting firm, which has built its Center for AI Computing on NVIDIA DGX SuperPOD. At CES, we extended our collaboration to AV development, leveraging our robust AI infrastructure and Deloitte's team of 5,500 system integration developers and 2,000 data scientists to architect solutions for truly intelligent transportation. Our networking products posted strong sequential and year-over-year growth, driven by exceptional demand across use cases ranging from computing, supercomputing and enterprise to storage. Adapters led growth, driven by adoption of our next-generation products and higher-speed deployments. While revenue was gated by supply, we anticipate improving capacity in coming quarters, which should allow us to serve the significant customer demand we're seeing. Across the board, we are excited about the traction we are seeing with our new software business models, including NVIDIA AI, NVIDIA Omniverse and NVIDIA DRIVE. We are still early in the software revenue ramp. Our pipelines are building as customers across the industry seek to accelerate their pace of adoption and innovation with NVIDIA. Now let me turn it back over to Jensen for some comments on Arm.
Thanks, Colette. Last week, we terminated our efforts to purchase Arm. When we entered into the transaction in September 2020, we believed that it would accelerate Arm's focus on high-performance CPUs and help Arm expand into new markets, benefiting all our customers in the entire ecosystem. Like any combination of pioneers of important technologies, our proposed acquisition spurred questions from regulators worldwide. We appreciated the regulatory concerns. For over a year, we worked closely with SoftBank and Arm to explain our vision for Arm and reassure regulators that NVIDIA would be a worthy steward of the Arm ecosystem. We gave it our best shot, but the headwinds were too strong, and we could not give regulators the comfort they needed to approve our deal. NVIDIA's work in accelerated computing and our overall strategy will continue as before. Our focus is accelerated computing. We are on track to launch our Arm-based CPU, targeting giant AI and HPC workloads in the first half of next year. Our 20-year architectural license to Arm's IP allows us the full breadth and flexibility of options across technologies and markets. We will deliver on our 3-chip strategy across CPUs, GPUs and DPUs. Whether x86 or Arm, we will use the best CPU for the job. And together with partners in the computer industry, we will offer the world's best computing platform to tackle the impactful challenges of our time. Back to you, Colette.
Thanks, Jensen. We're going to turn to our P&L and our outlook. For the discussion of the rest of the P&L, please refer to the CFO commentary published earlier today on our Investor Relations page. Let me turn to the outlook for the first quarter of fiscal 2023. We expect sequential growth to be driven primarily by Data Center. Gaming will also contribute to growth. Revenue is expected to be $8.1 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 65.2% and 67%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be $3.55 billion, including the Arm write-off of $1.36 billion. Non-GAAP operating expenses are expected to be $1.6 billion. For the fiscal year, we expect to grow non-GAAP operating expenses at a similar rate as in fiscal 2022. GAAP and non-GAAP other income and expense are both expected to be an expense of approximately $55 million, excluding gains and losses on nonaffiliated investments. GAAP and non-GAAP tax rates are expected to be 11% and 13%, respectively, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $350 million to $400 million. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We will be attending the Morgan Stanley Technology, Media and Telecom Conference in person on March 7. We will also be hosting a virtual Investor Day on March 22, alongside the GPU Technology Conference. This will follow Jensen's opening keynote, which we invite you to tune into. Our earnings call to discuss the results for our first quarter of fiscal 2023 is scheduled for Wednesday, May 25. We will now open the call for questions. Operator, will you please poll for questions?
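The outlook above is stated as midpoints with tolerance bands. As a quick illustrative sketch (the dollar and percentage figures are from the call; the arithmetic and variable names are mine), the implied ranges work out as follows:

```python
# Q1 FY2023 guidance ranges implied by the outlook above.
# Figures come from the call; this arithmetic is an illustrative sketch only.

revenue_mid = 8.1e9                    # $8.1 billion, plus or minus 2%
revenue_low = revenue_mid * 0.98
revenue_high = revenue_mid * 1.02

gaap_gm_mid = 0.652                    # 65.2% GAAP gross margin, +/- 50 bps
non_gaap_gm_mid = 0.670                # 67.0% non-GAAP gross margin, +/- 50 bps

# Implied gross profit at the guidance midpoints:
gaap_gross_profit = revenue_mid * gaap_gm_mid
non_gaap_gross_profit = revenue_mid * non_gaap_gm_mid

print(f"Revenue range: ${revenue_low / 1e9:.2f}B to ${revenue_high / 1e9:.2f}B")
print(f"GAAP gross profit (midpoint): ${gaap_gross_profit / 1e9:.2f}B")
print(f"Non-GAAP gross profit (midpoint): ${non_gaap_gross_profit / 1e9:.2f}B")
```

At the midpoint, the guidance implies roughly $7.94 billion to $8.26 billion in revenue and about $5.28 billion of GAAP gross profit.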
Operator
Thank you. We'll take our first question from Toshiya Hari with Goldman Sachs & Company. Your line is open.
Great. Thank you so much for taking the question. Jensen and Colette, I wanted to ask about Data Center. Colette, based on your guidance, you're probably guiding Data Center growth on a year-over-year basis to accelerate into the April quarter. You talked about hyperscale cloud growing more than 2x and enterprise verticals growing strong double digits in the January quarter. Can you kind of speak to the drivers for April and perhaps speak to visibility into the second half of the fiscal year as well in Data Center? Thank you.
Sure.
I'll start, and then I'll turn it over to Jensen. For Q1, our guidance does include an acceleration of Data Center from where we finished in Q4. We will have growth across several of our market platforms in Q1, including Data Center, Gaming and probably a couple of others. But yes, there is expected to be accelerated growth as we move into Q1. I'll turn it over to Jensen to talk about the drivers that we see for the quarter and also for the full year.
Yes. It's great to hear from you, Toshiya. We have several growth drivers in data centers, including hyperscale, public cloud, enterprise core, and enterprise edge. We're witnessing growth across the entire spectrum. There are various use cases that are particularly exciting, such as large language models and language understanding models inspired by the invention of transformers, which is likely one of the most significant AI models developed in recent times. Additionally, conversational AI for customer service, chatbots, and a wide range of customer service applications are emerging. These can be web-based, point-of-sale based, or cloud-based. Deep learning-based recommender systems are also achieving groundbreaking improvements. Furthermore, cloud graphics, including our initiatives in rendering and simulations in the cloud, as well as cloud gaming and Android cloud gaming, are significantly driving adoption in the cloud. Overall, there are numerous use cases across all platforms in data centers.
Operator
Next, we'll go to C.J. Muse with Evercore ISI.
Yes. Good afternoon. Thank you for taking the question. I guess another question on the data center side. Curious if you can speak to supply constraints on the wafer side and whether that played a role in terms of capping revenues in the January quarter and how you see that becoming less of a headwind for you as you proceed through the year?
Thank you, C.J., for the question. I’ll begin with the data center supply. As we mentioned last quarter and again today, we are still experiencing some supply constraints in various areas of our business. Networking within the Data Center segment has been particularly impacted. We are making progress every day and expect supply to improve each quarter as we move into fiscal year '23. This is likely the main focus within our Data Center operations. However, there may be other considerations to address, so I'll pass the rest of the question to Jensen regarding the outlook for the remainder of the year.
Yes, Colette captured it well. We are supply constrained, with demand exceeding supply. Our data center product line includes GPUs, BlueField DPUs, Quantum and Spectrum switches, and HGX, which involves delivering the entire motherboard or GPU board together due to complexity. We offer a wide range of products for data centers, from AI model training to large-scale inferencing, universal GPUs for public cloud and industry-standard servers, commodity servers for enterprise use, and supercomputing systems utilizing InfiniBand and Quantum switches. The application space is extensive. We've seen demand exceed supply across almost all areas. Our operations team has done an excellent job this year, effectively managing these complex products and expanding our supply base. We anticipate supply will improve every quarter moving forward. This quarter, based on Colette's guidance, aligns with an increasing supply base. While we expect to remain supply constrained, our supply base will see substantial growth this quarter and even more in the second half of the year.
Operator
Next, we'll go to Joe Moore with Morgan Stanley.
Great. Thank you. I wonder if you could talk a little bit more about Grace now that the strategy is kind of separated from the acquisition of Arm. What are your aspirations there? Is it going to be primarily oriented to the DGX and HGX systems business versus merchant chips? Just how are you thinking about that opportunity long-term?
Yes. Thanks, Joe. We have multiple Arm projects ongoing in the company, ranging from devices to robotics processors such as the new Orin, which is going into autonomous vehicles, robotic systems and industrial automation. Orin is doing incredibly well. It started production. As we mentioned earlier, it's going to drive an inflection point starting in Q2, accelerating through Q3 and the several years after as we ramp into all of the electric cars and all of the robotic applications and robotaxis and such. We also have Arm projects with the CPU that you mentioned, Grace. We have Grace, and we surely have the follow-ons to Grace, and you could expect us to do a lot of CPU development around the Arm architecture. One of the things that's really evolved nicely over the last couple of years is the success that Arm has seen in hyperscalers and data centers. It has really accelerated and motivated them to accelerate the development of higher-end CPUs. You're going to see a lot of exciting CPUs coming from us. Grace is just the first example. You're going to see a whole bunch of them beyond that. But our strategy is accelerated computing. That's ultimately what we do for a living. We, as you know, love all CPUs, whether it's an x86 from any vendor or an Arm CPU. So long as there's a CPU, we can connect NVIDIA's platform to it and accelerate it for artificial intelligence, computer graphics, robotics and such. We love to see the expansion of CPU footprints, and we're just thrilled that Arm is now growing into robotics and autonomous vehicles and cloud computing and supercomputing and all these different applications, and we intend to bring the full spectrum of NVIDIA's accelerated computing platform to NVIDIA Arm CPUs.
Operator
Next, we'll go to John Pitzer with Credit Suisse.
Just on the inventory purchase obligations, I think this was the fourth quarter in a row where you've seen greater than 30% sequential growth, and it's the first quarter where that number is now eclipsing your quarterly revenue guidance. I'm trying to figure out to what extent is this just a reflection of how tight things are across the semi industry? To what extent is this the poker tell of how bullish you are on future demand? And relative to your commentary on supply starting to get better throughout the year, should we expect that number to start to level off? Or as the mix moves more to data center and longer cycle times, more complicated devices, should that number continue to grow?
On the factors and drivers you mentioned in the supply chain: we expanded our supply chain footprint significantly this year to prepare for both an increased supply base and improved supply availability in each of the quarters going forward, and also in preparation for some really exciting product launches. As mentioned, Orin ramping into autonomous vehicles is brand new. This is the inflection point of us growing into autonomous vehicles. This is going to be a very large business for us going forward. As already mentioned, Grace is a brand-new product that has never been on NVIDIA's roadmap. We already see great success with customers who love its architecture and are desperately in need of the type of capability that Grace brings. This should be a pretty exciting year for new product launches. We're preparing for all of that, laying the foundation for us to bring all those exciting products to the marketplace.
Operator
Next, we'll go to Tim Arcuri with UBS.
Obviously, there's a lot more talk from you about software. And I think it's still kind of a little bit of a black box for investors. I know, Jensen, that you've talked about software as a medium to basically open up new markets. But I'm wondering if you can sort of quantify how big the software licensing revenue is today and maybe when you might start to break it out like you did data center, which really got the stock moving in a huge way?
Yes, NVIDIA is fundamentally a software-driven business, and so is accelerated computing. It begins with identifying the types of applications we want to enhance and can enhance, and then constructing a complete system that includes processors, systems, system software, acceleration engines, and even applications like NVIDIA DRIVE, NVIDIA AI, and NVIDIA Omniverse. These applications operate on top of system software and are highly valuable in the marketplace. Regarding our software licensing, we've always focused on software, but now we have licensed software available for customers for the first time. The licensing model for NVIDIA AI Enterprise is based on the number of server nodes. There are approximately 20 to 25 million servers currently installed in enterprises, not counting cloud services. We envision a future where every server will run AI software, and we aim to provide an engine that allows enterprises to utilize the most advanced and trusted AI engine available. This is the target market for NVIDIA AI. NVIDIA Omniverse aims to support creators who contribute content to a virtual world, integrating it with robots that also generate content in that environment. There are 40 million designers and creators around the world. There are going to be hundreds of millions of robots. Every single car will essentially be a robot someday. Those are connections that will be connected into a digital twin system like Omniverse. The Omniverse business model is per connection per year. In the case of NVIDIA DRIVE, we share the economics of the software that we deliver, whether it's AV software, parking software or cabin-based AI software. Whatever the licensing is or whatever the service, if it's an upfront license, we share the economics. If it's a monthly service subscription, we share the economics of that. 
For the cars where we are developing the end-to-end service, we will get the benefit of those economics for the entire life of the fleet. You could imagine, with 10 million cars and modern car lifetimes of 10 to 20 years, the economics and the installed opportunity are quite high. Our software business really started several years ago with virtual GPUs, but this year was when we really stepped it up and offered, for the very first time, NVIDIA AI Enterprise, Omniverse and DRIVE. I believe this is going to be a very significant business opportunity for us, and we look forward to reporting on it.
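The unit counts described above (roughly 20 to 25 million enterprise servers licensed per node, 40 million designers and creators licensed per connection per year) can be turned into a rough opportunity sketch. The per-unit prices below are hypothetical placeholders chosen purely for illustration, not figures disclosed on the call:

```python
# Back-of-the-envelope sizing of the software opportunities described above.
# Unit counts come from the call; the per-unit annual prices are hypothetical
# placeholders, not NVIDIA-disclosed pricing.

def annual_opportunity(units: float, price_per_unit_per_year: float) -> float:
    """Annual revenue if every unit carried a subscription at the given price."""
    return units * price_per_unit_per_year

# NVIDIA AI Enterprise: licensed per server node (~20-25M enterprise servers).
ai_enterprise = annual_opportunity(20e6, 1_000)   # assume $1,000/node/year

# Omniverse: priced per connection per year (~40M designers and creators).
omniverse = annual_opportunity(40e6, 500)         # assume $500/connection/year

print(f"AI Enterprise (illustrative): ${ai_enterprise / 1e9:.0f}B/yr")
print(f"Omniverse (illustrative): ${omniverse / 1e9:.0f}B/yr")
```

Even at these placeholder prices, full penetration of either installed base would be a multibillion-dollar annual software stream, which is the scale of opportunity the remarks above are gesturing at.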
Operator
Next, we'll go to Vivek Arya with Bank of America.
Jensen, in the past, you've mentioned about a 10% or so adoption rate for AI among your customer base. I was hoping you would quantify where we are in that adoption curve. Do you tend to differentiate between the adoption differences between your hyperscale and enterprise customers? And then kind of related to that, is there an inorganic element to your growth now that you have over $20 billion of cash on the balance sheet? How are you planning to deploy that to kind of accelerate your growth as well?
Yes. The applications for AI are unquestionably growing, and growing incredibly fast. In enterprises and financial services, it could be fraud detection. In consumer-facing businesses, it could be customer service and conversational AI, where people are talking to chatbots. In the future, every website will have a chatbot; every phone number will have a chatbot. Whether there's a human in the loop or not, we'll have a chatbot. Customer service will be heavily supported by artificial intelligence in the future. Almost every point of sale, I think, whether it's in fast food or a quick-service business, will have chatbots and AI-based customer service. All of this is made possible by a couple of breakthroughs: computer vision, of course, because the agents, the AIs, have to make eye contact and recognize your posture; speech recognition; and the ability to understand the context of what is being spoken about and have a reasonable conversation with people so that you can provide good customer service. The ability to have a human in the loop is one of the great things about AI, much more so than a recording, which is obviously not intelligent and makes it difficult to reach your manager or somebody who can provide the services it can't. The number of different applications enabled by natural language understanding in customer service in just the last couple of years has grown tremendously. I think we're still in the early days of adoption. It's incredible how fast it has grown and how many different applications are now possible with AI. I think we remain in the early innings of AI, and this is going to be one of the largest industries of software that we have ever known. Regarding capital, we have just terminated our Arm agreement.
We have a regular capital strategy process, and we'll go through that, and we'll make the best judgment about how to use our capital in helping our growth and sustaining our growth and accelerating our growth, and we'll have all of those sensible conversations during those capital allocation meetings. We're just delighted to have so much capital. Just to put it out there.
Operator
And next, we'll go to Aaron Rakers with Wells Fargo.
This is Michael on behalf of Aaron. Can you guys talk about how the launch of the RTX 3050 is going so far? And maybe more broadly, your view of where we are in the product cycle on gaming?
Thanks, Michael. RTX is an unqualified home run. RTX completely reinvented modern computer graphics. It brought forward ray tracing about a decade earlier than anybody thought possible. The combination of RTX with artificial intelligence, which enabled the technology we call DLSS, is able to not only do a ton more computation using our processors but also engage the powerful Tensor Cores that we have in our GPUs to generate beautiful images. RTX is being adopted by just about every game developer on the planet now. It's being adopted by just about every design tool on the planet now. If not for RTX, Omniverse wouldn't be possible. We wouldn't be able to do physically based path tracing and simulate sensors like radars and LiDARs and ultrasonics and cameras, simulate these cameras physically, and still be able to deliver the type of performance that we do. RTX was a game changer for the industry. It reset modern computer graphics. It was an enabler for us to build an entire new platform in Omniverse. We're about, I think, a third of the way through upgrading an installed base that is growing. Video games is now the world's largest entertainment genre. Steam over the last 2 years has grown by 50%. The number of concurrent players on Steam has grown tremendously. In just the last couple of years, a brand-new game store from Epic came on, and it's already a multi-hundred-million-dollar business; I think it's close to $1 billion. They're doing incredibly well, and I'm so happy to see it. The overall gaming market is growing, and it's growing quite nicely. But in addition to resetting computer graphics for our entire installed base, and growing our installed base because gaming is growing, there are a couple of other growth dynamics associated with GeForce and RTX that are really quite brand new. One of them is hybrid work. This is a permanent condition.
We are now seeing, across the board, that designers and creators have to set up essentially a new workstation, a new home workstation design studio, so that they can do their work at home. In addition, the creative economy, the digital economy, is doing fantastically because everything has to be done in 3D now. Print ads are done in 3D. Video is done in 3D. In live video broadcasts, millions of influencers now augment their broadcasts with rich augmented reality and 3D graphics. 3D graphics is now not just for video games and 3D content; it's used for all forms of digital content creation. RTX has all of these different drivers working behind it, and we're definitely in the early innings of RTX.
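A back-of-envelope way to see why DLSS-style upscaling frees up so much GPU horsepower is simply to count shaded pixels. The sketch below is illustrative only; the resolutions chosen are example assumptions, not NVIDIA-published DLSS modes.

```python
# Illustrative arithmetic only: a DLSS-style upscaler renders fewer pixels
# natively and reconstructs the target resolution with a neural network.
# The resolutions below are example assumptions, not NVIDIA-published modes.

def shaded_pixel_savings(target, internal):
    """How many times fewer pixels are shaded when rendering at the
    internal resolution instead of natively at the target resolution."""
    tw, th = target
    iw, ih = internal
    return (tw * th) / (iw * ih)

# Rendering internally at 1440p and upscaling to 4K:
factor = shaded_pixel_savings((3840, 2160), (2560, 1440))
print(f"{factor:.2f}x fewer shaded pixels")  # prints "2.25x fewer shaded pixels"
```

The freed-up shading budget is what lets the same GPU spend cycles on ray tracing while the Tensor units reconstruct the full-resolution image.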
Operator
Next, we'll go to Stacy Rasgon with Bernstein Research.
Colette, you said that the growth in the next quarter is about $450 million, give or take, driven by Data Center. Can you give us some feeling for how much of that growth is being driven by units versus pricing versus mix, and how those drivers might differ between Gaming and Data Center, if at all?
It's really early in the quarter, Stacy, to determine our exact mix of units and ASPs. Our overall quarter-over-quarter growth going into Q1 will be driven primarily by Data Center. We will see a little bit of growth in Gaming as well. I think it's important to understand that even after the Q4 holiday, moving into Q1, we'll still probably see growth in Gaming, which is different from what we've seen seasonally. We will probably have sequential growth in Automotive as well between Q4 and Q1. There are still some areas that are quite supply constrained, and we are working to improve that every quarter going forward, but that's how you should look at our growth for Q1: primarily from Data Center.
Operator
Next, we'll go to Harlan Sur with JPMorgan.
Congratulations on the solid results and execution. The networking connectivity portfolio addition has been pretty solid for the NVIDIA team, especially in enabling the scaling of your GPU systems and relieving connectivity bottlenecks in your and your customers' accelerated compute platforms. In a year where spending is growing 30%, you've got a strong networking upgrade cycle, which is good for your NIC products, plus continued good overall attach rates. If the team can unlock more supply, will the networking connectivity business grow in line with or faster than the overall Data Center business this year? And then for Jensen, have you driven synergies from Mellanox's leadership in networking connectivity, for example, by leveraging their capabilities for your internally developed NVLink connectivity and switching architectures?
Yes, absolutely. If not for the work that we did so closely with Mellanox, the scalability of DGX and DGX SuperPOD, and the research supercomputer that was just installed at Meta, would just not be possible. The concepts of overlapping networking and compute, moving some computing into the fabric, into the network, the work that we're doing in synchronization and precision timing so that we could create Omniverse computers that obey the laws of physics and space-time: these things would simply not be possible otherwise. The innovations are countless. I am so thrilled with the combination and the work the Mellanox team is doing. We've accelerated roadmaps as a result of the combination because we could leverage a much larger base of chip design. BlueField's roadmap has been accelerated by probably about a year. In the Quantum switch and the Spectrum switch, the SerDes are absolutely world-class, shared between Ethernet and InfiniBand and NVLink; absolutely the best SerDes in the world. The list of combination benefits really is quite long. I'm super thrilled with that. With respect to networking growth, we should be growing. If we weren't supply constrained, we should be growing faster than overall CSP growth, for two reasons. The first is Mellanox's networking leadership position: Mellanox is heavily weighted toward the upper end of networking, which is where adoption of higher-speed networks tends to move first. It makes sense that as new data centers are built, the first preference is to equip them with higher-speed networking rather than last-generation networking, and Mellanox's networking technology is unambiguously world-class. The second reason is that the areas where NVIDIA overall is strong are the areas that are growing quite fast, which relate to artificial intelligence and cloud AI. Those applications are growing faster than the core.
So it would be sensible that we have the opportunity to continue to grow faster than CSPs overall.
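The benefit of moving reductions into the network fabric, which Jensen alludes to with "in-network computing" and "data reductions," can be sketched with simple traffic arithmetic. This is a toy model under stated assumptions (the textbook ring-allreduce cost and an idealized switch-side reduction), not a description of Mellanox or NVIDIA internals.

```python
# Toy model of allreduce traffic per node, to illustrate in-network
# computing. Assumptions: a classic ring allreduce sends 2*(n-1)/n of the
# buffer per node; an idealized switch-offloaded reduction (SHARP-style)
# sends the buffer up once and receives the reduced result back.

def ring_allreduce_bytes_sent(size, n):
    """Bytes each node transmits in a ring allreduce of `size` bytes."""
    return 2 * (n - 1) / n * size

def in_network_reduce_bytes_sent(size, n):
    """Bytes each node transmits when the switch performs the reduction."""
    return size  # one upload; the reduced result comes back from the fabric

GIB = 1 << 30  # 1 GiB gradient buffer
print(ring_allreduce_bytes_sent(GIB, 8) / GIB)     # prints 1.75
print(in_network_reduce_bytes_sent(GIB, 8) / GIB)  # prints 1.0
```

Under these assumptions the fabric-side reduction nearly halves per-node traffic at 8 nodes, and the ring cost approaches 2x the buffer as the node count grows, which is why offloading reductions matters more at scale.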
Operator
Our next question will come from Matt Ramsay with Cowen.
Yes, Jensen, I wanted to expand on some of the things you were speaking about in your last answer with respect to the Data Center business. It's not often, maybe ever, that both x86 server vendors have big new platform upgrades in the same year, which will probably happen later this year. There's a lot going on there: PCIe, some CXL stuff. I wonder if you could talk a bit about your Data Center business broadly and what you feel might be memory- and I/O-constrained currently that these systems might unlock for you, both on the cloud and enterprise side and in the DGX business.
Thank you, Matt. There are a few key bottlenecks that I want to emphasize. The first is memory speed, which is why we use the fastest types of memory available, such as HBM and GDDR. We are the largest user of high-speed memory globally, with no close second that I am aware of; our heavy use of fast memory is crucial to our performance. The second bottleneck is networking performance, which is why we deploy the fastest networks. We have one of the fastest networking systems, featuring eight InfiniBand connections at peak speeds linked directly to our HGX or DGX servers. Our work on GPUDirect RDMA and GPUDirect Storage, along with in-network computing, data reductions, and data movement within the network, is truly top-notch; I take immense pride in that area. All of that is so we can be less bottlenecked by the CPU. Remember, inside our DGX system is one CPU and 8 GPUs; the fundamental goal is to offload as much as we can from the CPU and utilize the resources that we have as fully as we can. This year, we expect a transition from PCIe Gen 4 to Gen 5. We are constrained on Gen 4, and we'll be constrained on Gen 5, but we're used to that, and it's something we're very good at managing. We'll continue to support Gen 4 well through next year, maybe well through the next couple of years, along with all of the installed base of Gen 4 systems around the world, and we will take advantage of Gen 5 as much as we can. We have all kinds of new technologies and strategies to improve the throughput of our systems and avert the bottlenecks that are there.
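The Gen 4 to Gen 5 point is easy to quantify: PCIe 5.0 doubles the per-lane transfer rate from 16 GT/s to 32 GT/s, so an x16 link's usable bandwidth roughly doubles. A quick sketch from the published line rates and the 128b/130b encoding (the helper name here is mine):

```python
# Back-of-envelope usable PCIe bandwidth per direction, from the published
# per-lane transfer rates and the 128b/130b line encoding used since Gen 3.

def pcie_usable_gbytes_per_s(gt_per_s, lanes=16):
    """Approximate usable GB/s for one direction of a PCIe Gen 3+ link."""
    payload_bits_per_s = gt_per_s * 1e9 * (128 / 130)  # encoding overhead
    return payload_bits_per_s * lanes / 8 / 1e9        # bits -> gigabytes

gen4 = pcie_usable_gbytes_per_s(16)  # PCIe 4.0: 16 GT/s per lane
gen5 = pcie_usable_gbytes_per_s(32)  # PCIe 5.0: 32 GT/s per lane
print(f"Gen4 x16 ~{gen4:.1f} GB/s, Gen5 x16 ~{gen5:.1f} GB/s")
# prints "Gen4 x16 ~31.5 GB/s, Gen5 x16 ~63.0 GB/s"
```

That extra ~31 GB/s per direction is the headroom that GPU-to-CPU and GPU-to-NIC traffic gains in the Gen 5 transition.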
Operator
Our final question comes from the line of Raji Gill with Needham & Co.
Yes. Congrats on the good quarter and guide. Colette, question on the gross margin and to Jensen's point about really creating a software business driven by Omniverse, DRIVE and Enterprise. When you're contemplating your margin profile over the next couple of years, how do we think about that? Is it really going to be driven by an increasing mix of software as a percentage of your revenue over time? Is there more margin upside on the hardware side in terms of some of your segments? The software opportunity is very exciting, but I'm just curious how that would translate to your longer-term margin profile.
Thank you for the question on gross margin over the long term. When we think about long-term gross margins, we have already integrated software into many of our platforms, particularly in high-value areas like the data center, and that has significantly improved our gross margins thus far. We have managed our growth effectively over the years. I believe these areas will continue to present growth opportunities, especially as we enhance our ability to package solutions for our Enterprise customers in the data center, building on existing agreements. That presents a promising future for both gross and operating margins, and we are committed to pursuing it. We have established a foundation to package our software for separate sale, enabling us to create a business model and develop the partnerships to support it. We are confident this will drive our long-term success.
Operator
Thank you. I'll now turn it back over to Jensen Huang for closing remarks.
Thanks, everyone. The tremendous demand for our computing platforms, NVIDIA RTX, NVIDIA HPC and NVIDIA AI, drove a great quarter, capping a record year. Our work propels advances in some of today's most impactful fields, from AI, digital biology, climate science, gaming and creative design to autonomous vehicles and robotics. Our open computing platform, optimized across the full stack and architected for data center scale, is adopted by customers globally, from cloud to core to edge and robotics. I am proud of the NVIDIA operations team as we make substantial strides in broadening our supply base to scale our company and better serve customer demand. This year, we introduced new software business models with NVIDIA AI Enterprise, NVIDIA Omniverse and NVIDIA DRIVE. NVIDIA DRIVE is a full-stack, end-to-end platform that serves the industry with AV chips, data center infrastructure for AI and simulation, mapping, and the autonomous driving application and service. Our data center infrastructure is used by just about everybody building AVs, robotics, robotaxis, shuttles and trucks. EV companies globally have selected our Orin chip. Our partnerships with Mercedes-Benz and Jaguar Land Rover have opened up a new software and services business model covering millions of cars for the life of the fleet. NVIDIA Omniverse is a world simulation engine that connects simulated digital worlds to the physical world. Omniverse is a digital twin, a simulation of the physical world. The system can be a building, a factory, a warehouse, a car, a fleet of cars, or a robotic factory orchestrating a fleet of robots building cars that are themselves robotic. Today's Internet is 2D and AI is in the cloud. The next phase of the Internet will be 3D, and AI will be connected to the physical world. We created Omniverse to enable the next wave of AI, where AI and robotics touch our world. Omniverse can sound like science fiction, but there are real-world use cases today, and hundreds of companies are evaluating Omniverse.
We can't wait to share more of our progress at next month's GTC. Learn about new chips, new computing platforms, new AI and robotics breakthroughs, and the new frontiers of Omniverse. Hear from the technologists of Deloitte, Epic Games, Mercedes-Benz, Microsoft, Pfizer, Sony, Visa, Walt Disney, Zoom and more. This GTC promises to be our most exciting developers conference ever. We had quite a year, yet nothing makes me more proud than the incredible people who have made NVIDIA one of the best companies to work for and the company where they do their life's work. We look forward to updating you on our progress next quarter. Thank you.
Operator
This concludes today's conference call. You may now disconnect.