NVIDIA Corp (NVDA) — Q2 2019 Earnings Call Transcript
Operator
Good afternoon. My name is Kelsey, and I am your conference operator for today. Welcome to NVIDIA’s Financial Results Conference Call. All lines have been placed on mute. After the speakers’ remarks, there will be a question-and-answer period. Thank you. I’ll now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.
Thank you. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the second quarter of fiscal 2019. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. It’s also being recorded. You can hear a replay by telephone until August 23, 2018. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2019. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 16, 2018, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website. With that, let me turn the call over to Colette.
Colette Kress
Thanks, Simona. This is a big week for NVIDIA. We just announced the biggest leap in GPU architecture in over a decade. We can’t wait to tell you more about it. But first, let’s talk about the quarter. We had another strong quarter, led by Datacenter and Gaming. Q2 revenue reached $3.12 billion, up 40% from a year earlier. Each of our market platforms, Gaming, Datacenter, Pro Visualization, and Automotive, hit record levels with strong growth, both sequentially and year-on-year. These platforms collectively grew more than 50% year-on-year. Our revenue outlook had anticipated cryptocurrency-specific products declining to approximately $100 million; actual crypto-specific product revenue was $18 million, and we now expect a negligible contribution going forward. Gross margins expanded nearly 500 basis points year-on-year, and both GAAP and non-GAAP net income exceeded $1 billion for the third consecutive quarter. Profit nearly doubled. From a reporting segment perspective, GPU revenue grew 40% from last year to $2.66 billion. Tegra Processor revenue grew 40% to $467 million. Let’s start with our Gaming business. Revenue of $1.8 billion was up 52% year-on-year and up 5% sequentially. Growth was driven by all segments of the business, with desktop, notebook, and gaming consoles all up strong double-digit percentages year-on-year. Notebooks were a standout this quarter, with strong demand for thin and light form factors based on our Max-Q technology. Max-Q enables gaming PC OEMs to pack a high-performance GPU into a slim notebook that is just 20 millimeters thick or less. All major notebook OEMs and ODMs have adopted Max-Q for their top-of-the-line gaming notebooks, just in time for back-to-school. And we expect to see 26 models based on Max-Q in stores for the holidays. The gaming industry remains vibrant. The eSports audience now approaches 400 million, up 18% over the past year.
The unprecedented success of Fortnite and PUBG has popularized the new Battle Royale genre and expanded the gaming market. In fact, a Battle Royale mode is coming to games like the much-anticipated Battlefield 5. We are thrilled to partner with EA to make GeForce the best PC gaming platform for the release of Battlefield 5 in October. We’ve also partnered with Square Enix to make GeForce the best platform for its upcoming Shadow of the Tomb Raider. Monster Hunter World arrived on PCs earlier this month, and it was an instant hit. And many more titles are lined up for what promises to be a big holiday season. It’s not just new titles that are building anticipation. The gaming community is excited about the Turing architecture, announced earlier this week at SIGGRAPH. Turing is our most important innovation since the invention of the CUDA GPU over a decade ago. The architecture includes new dedicated ray-tracing processors, or RT Cores, and new Tensor Cores for AI inferencing, which together will make real-time ray tracing possible for the first time. We will enable cinematic-quality gaming, amazing new effects powered by neural networks, and fluid interactivity on highly complex models. Turing will reset the look of video games and open up the $250 billion visual effects industry to GPUs. Turing is the result of more than 10,000 engineering-years of effort. It delivers up to a 6x performance increase over Pascal for ray-traced graphics and up to a 10x boost in peak inference throughput. This new architecture will be the foundation of a new portfolio of products across our platforms going forward. Moving to Datacenter. We had another strong quarter, with revenue of $760 million, accelerating to 83% year-on-year growth and up 8% sequentially. This performance was driven by hyperscale demand, as internet services used daily by billions of people increasingly leverage AI.
Our GPUs power real-time services such as search, voice recognition, voice synthesis, translation, recommender engines, fraud detection, and retail applications. We also saw growing adoption of our AI and high-performance computing solutions by vertical industries, representing one of the fastest-growing areas of our business. Companies in sectors ranging from oil and gas to financial services to transportation are harnessing the power of AI and our accelerated computing platform to turn data into actionable insights. Our flagship Tensor Core GPU, the Tesla V100, based on the Volta architecture, continued to ramp for both AI and high-performance computing applications. Volta has been adopted by every major cloud provider and hyperscale datacenter operator around the world. Customers have quickly moved to qualify the new version of the V100, which doubled the on-chip DRAM to 32 GB to support much larger data sets and neural networks. Major server OEMs, including HP Enterprise, IBM, Lenovo, Cray, and Supermicro, also brought the V100 32 GB to market in the quarter. We continue to gain traction with our AI inference solutions, which help expand our addressable market in the datacenter. During the quarter, we released our TensorRT 4 AI inference accelerator software for general availability. While prior versions of TensorRT optimized image- and video-related workloads, TensorRT 4 expands the aperture to include more use cases such as speech recognition, speech synthesis, translation, and recommendation systems. This means we can now address a much larger portion of deep learning inference workloads, delivering up to a 190x performance speed-up relative to CPUs. NVIDIA and Google engineers have integrated TensorRT into the TensorFlow deep learning framework, making it easier to run AI inference on our GPUs. And Google Cloud announced that the NVIDIA Tesla P4 GPU, our small form factor GPU for AI inference and graphics virtualization, is available on Google Cloud Platform.
Datacenter growth was also driven by DGX, our fully optimized AI server, which incorporates V100 GPUs, our proprietary high-speed interconnect, and our fully optimized software stack. The annual run rate for DGX is in the hundreds of millions of dollars. DGX-2, announced in March at our GPU Technology Conference, is being qualified by customers and is on track to ramp in the third quarter. At GTC Taiwan in June, we announced that we are bringing DGX-2 technology to our HGX-2 server platform. We make HGX-2 available to OEM and ODM partners so they can quickly deploy our newest innovations in their own server designs. In recent weeks, we announced partnerships with NetApp and Pure Storage to help customers speed AI deployment from months to days or even hours, with highly integrated, optimized solutions that combine DGX with the companies’ all-flash storage offerings and third-party networking. At GTC Taiwan, we also revealed that we had set speed records for AI training and inference. Key to our strategy is our software stack, from CUDA to our training and inference SDKs, as well as our work with developers to accelerate their applications. It is the reason we can achieve such dramatic performance gains in such a short period of time. And our developer ecosystem is getting stronger. In fact, we just passed 1 million members in our developer program, up 70% from one year ago. One of our proudest moments this quarter was the launch of the Summit AI supercomputer at Oak Ridge National Laboratory. Summit is powered by over 27,000 Volta Tensor Core GPUs and helped the U.S. reclaim the number one spot on the TOP500 supercomputer list for the first time in five years. Other NVIDIA-powered systems that joined the TOP500 list were Sierra, at Lawrence Livermore National Laboratory, in the third spot, and ABCI, Japan’s fastest supercomputer, in the fifth spot. NVIDIA now powers five of the world’s seven fastest supercomputers, reflecting the broad shift in supercomputing to GPUs.
Indeed, the majority of the computing performance added to the latest TOP500 list comes from NVIDIA GPUs, and more than 550 HPC applications are now GPU-accelerated. With our Tensor Core GPUs, supercomputers can now combine simulation with the power of AI to advance many scientific applications, from molecular dynamics to seismic processing to genomics and materials science. Moving to Pro Visualization. Revenue grew to $281 million, up 20% year-over-year and 12% sequentially, driven by demand for real-time rendering and mobile workstations, as well as emerging applications like AI and VR. These emerging applications now represent approximately 35% of Pro Visualization sales. Strength extended across several key industries, including healthcare, oil and gas, and media and entertainment. Key wins in the quarter include Raytheon, Lockheed, GE, Siemens, and Philips Healthcare. In announcing the Turing architecture at SIGGRAPH, we also introduced the first Turing-based processors, the Quadro RTX 8000, 6000, and 5000 GPUs, bringing interactive ray tracing to the world years before it had been predicted. We also announced the NVIDIA RTX Server, a full ray-tracing, global illumination rendering server that will give a giant boost to the world’s render farms as Moore’s Law ends. Turing is set to revolutionize the work of 5 to 50 million designers and artists, enabling them to render photorealistic scenes in real time and add new AI-based capabilities to their workflows. Quadro GPUs based on Turing will be available in the fourth quarter. Dozens of leading software providers, developers, and OEMs have already expressed support for Turing. Our ProViz partners view it as a game-changer for professionals in the media, entertainment, architecture, and manufacturing industries. Finally, turning to Automotive. Revenue was a record $161 million, up 13% year-over-year and up 11% sequentially.
This reflects growth in our autonomous vehicle production and development engagements around the globe, as well as the ramp of next-generation AI-based smart cockpit infotainment solutions. We continue to make progress on our autonomous vehicle platform, with key milestones and partnerships announced this quarter. In July, Daimler and Bosch selected DRIVE Pegasus as the AI brain for their level 4 and level 5 autonomous fleets. Pilot testing will begin next year in Silicon Valley. This collaboration brings together NVIDIA’s leadership in AI and self-driving platforms, Bosch’s hardware and systems expertise as the world’s largest tier 1 automotive supplier, and Daimler’s vehicle expertise and global brand synonymous with safety and quality. This quarter, we started shipping development systems for DRIVE Pegasus, an AI supercomputer designed specifically for autonomous vehicles. Pegasus delivers 320 trillion operations per second to handle diverse and redundant algorithms and is architected for safety as well as performance. This automotive-grade, functionally safe production solution uses two NVIDIA Xavier SoCs and two next-generation GPUs designed for AI and visual processing, delivering more than 10x greater performance and 10x higher data bandwidth compared with the previous generation. With co-designed hardware and software, the platform is created to achieve ISO 26262 ASIL D, the industry’s highest level of automotive functional safety. We have created a scalable AI car platform that spans the entire range of automated and autonomous driving, from traffic jam pilots to level 5 robotaxis. More than 370 companies and research institutions are using NVIDIA’s automotive platform. With this growing momentum and accelerating revenue growth, we remain excited about the intermediate and long-term opportunities for our autonomous driving business. This quarter, we also introduced our Jetson Xavier platform for the autonomous machines market.
With more than 9 billion transistors, it delivers over 30 trillion operations per second, more processing capability than a powerful workstation while using one-third the energy of a light bulb. Jetson Xavier enables customers to deliver AI computing at the edge, powering autonomous machines like robots and drones, with applications in manufacturing, logistics, retail, agriculture, healthcare, and more. Lastly, in our OEM segment, revenue declined by 54% year-on-year and 70% sequentially. This was primarily driven by the sharp decline of cryptocurrency revenues to fairly minimal levels. Moving to the rest of the P&L. Q2 GAAP gross margin was 63.3% and non-GAAP was 63.5%, in line with our outlook. GAAP operating expenses were $818 million. Non-GAAP operating expenses were $692 million, up 30% year-on-year. We continue to invest in the key platforms driving our long-term growth, including Gaming, AI, and Automotive. GAAP net income was $1.1 billion and EPS was $1.76, up 89% and 91%, respectively, from a year earlier. Some of the upside was driven by a tax rate near 7%, compared with our outlook of 11%. Non-GAAP net income was $1.21 billion and EPS was $1.94, up 90% and 92%, respectively, from a year ago, reflecting revenue strength as well as gross and operating margin expansion and lower taxes. Quarterly cash flow from operations was $913 million, and capital expenditures were $128 million. With that, let me turn to the outlook for the third quarter of fiscal 2019. We are including no contribution from crypto in our outlook. We expect revenue to be $3.25 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 62.6% and 62.8%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $870 million and $730 million, respectively. GAAP and non-GAAP OI&E are both expected to be income of $20 million.
GAAP and non-GAAP tax rates are both expected to be 9%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $125 million to $150 million. Further financial details are included in the CFO Commentary and other information available on our IR website. In closing, I’d like to highlight some of the upcoming events for the financial community. We will be presenting at the Citi Global Technology Conference on September 6th and meeting with the financial community at our GPU Technology Conferences in Tokyo on September 13th and Munich on October 10th. Our next earnings call, to discuss our financial results for the third quarter of fiscal 2019, will take place on November 15. We will now open the call for questions. Please limit your questions to one or two. Operator, would you please poll for questions? Thank you.
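As a rough illustration only, the non-GAAP guidance midpoints stated above can be combined into a back-of-the-envelope income model. The figures and tolerances come from the call; the arithmetic and variable names below are ours, and this is not company guidance.

```python
# Illustrative model built from the non-GAAP Q3 fiscal 2019 guidance midpoints.
revenue = 3.25e9          # guided revenue midpoint, plus or minus 2%
gross_margin = 0.628      # non-GAAP gross margin midpoint, plus or minus 50 bps
opex = 730e6              # approximate non-GAAP operating expenses
oie = 20e6                # other income and expense (income)
tax_rate = 0.09           # plus or minus 1%, excluding discrete items

gross_profit = revenue * gross_margin
operating_income = gross_profit - opex
net_income = (operating_income + oie) * (1 - tax_rate)

print(f"Implied non-GAAP gross profit:     ${gross_profit / 1e9:.2f}B")
print(f"Implied non-GAAP operating income: ${operating_income / 1e9:.2f}B")
print(f"Implied non-GAAP net income:       ${net_income / 1e9:.2f}B")
```

At the midpoints, this implies roughly $2.04 billion of gross profit and about $1.21 billion of non-GAAP net income, broadly in line with the $1.21 billion just reported for Q2.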
Operator
Yes. Your first question comes from Mark Lipacis with Jefferies.
Mark Lipacis
The question is on ray tracing. To what extent is this creating new markets versus enabling greater capabilities in your existing markets? Thanks.
Jensen Huang
Yes, Mark. So, first of all, Turing, as you know, is the world’s first ray-tracing GPU. And it completes our new computer graphics platform, which is going to reinvent computer graphics altogether. It unites four different computing modes: rasterization, accelerated ray tracing, computing with CUDA, and artificial intelligence. It uses these four basic methods to create imagery for the future. There are two major ways that we’ll experience the benefits right away. The first is the markets of visualization today that require photorealistic images. Whether it’s an IKEA catalog, a movie, architectural engineering, or product and car design, all of these types of markets require photorealistic images. The only way to achieve that is to use ray tracing with physically based materials and lighting. The technology is rather complicated and has been computationally intensive for a long time. It wasn’t until now that we’ve been able to accomplish it in a productive way. And so, Turing has the ability to do accelerated ray tracing, and it also has the ability to support very large frame buffers, because these data sets are extremely large. And so, that marketplace is quite large, and it’s never been served by GPUs before. Until now, all of that has been run on CPU render farms, gigantic render farms in all these movie studios and service centers and so on and so forth. The second area where you’re going to see the benefits of ray tracing, we haven’t announced.
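To make the compute intensity described here concrete, below is a minimal, purely illustrative ray-sphere intersection test in Python: the core geometric primitive a ray tracer evaluates enormous numbers of times per frame. This toy sketch has nothing to do with NVIDIA’s actual RT Core implementation; it only shows why dedicated hardware for this operation matters.

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss.
    `direction` must be a unit vector, so the quadratic's leading coefficient is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two intersection points
    return t if t > 0 else None

# A ray from the origin straight down the z-axis hits a unit sphere centered at z=5.
hit = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # 4.0: the ray travels 4 units before touching the sphere's surface
```

A physically based renderer repeats tests like this against millions of primitives, for millions of rays, with materials and lighting evaluated at each bounce, which is why this workload historically lived on CPU render farms.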
Mark Lipacis
Okay. If I could have a follow-up on the gaming side. Where do you think the industry is on creating content that leverages that kind of capability? Thank you.
Jensen Huang
Yes, Mark. At GTC last March, we introduced a new platform called NVIDIA RTX, which features four computation methods for image generation. We launched this platform together with Microsoft, whose implementation is called DirectX Raytracing. Major game engine developers like Epic have integrated real-time ray tracing and RTX into the Unreal Engine. At GDC and GTC, we showcased this capability on four Volta GPUs for the first time. Our goal was to make this platform available to all game developers, and we have been collaborating with them during this time. This week at SIGGRAPH, we revealed our Quadro RTX 8000, 6000, and 5000, which are the world's first GPUs designed for accelerated ray tracing. I demonstrated one Quadro running the same application we showcased on the four Volta GPUs in March, and the performance was impressive. To answer your question, all developers have access to RTX: it is included in Microsoft's DirectX, incorporated into the most popular game engine globally, and we will see developers starting to utilize it. On the workstation side, all major ISVs have adopted it. At SIGGRAPH this year, numerous developers demonstrated NVIDIA RTX with accelerated ray tracing, creating fully realistic images. I can confidently say that no platform in our history has experienced such immediate developer engagement right from the announcement. Stay tuned for more stories about RTX.
Operator
Your next question is from Matt Ramsay with Cowen.
Matt Ramsay
Thank you very much. Colette, I had a couple of questions about inventory. The first of which is, I understand you’ve launched a new product set in ProViz, and the Datacenter business is obviously ramping really strongly. But if you look at the balance sheet, I think the inventory level is up by a mid-30s percentage sequentially, and you’re guiding revenue up 3% or so. Maybe you could help us walk through the contributions to that inventory and what it might mean for future products? And secondly, could you talk a little bit about the gaming channel in terms of inventory, how things are looking in the channel, as you guys see it, during this period of product transition? Thank you.
Colette Kress
Sure. Thanks for your questions. When you look at our inventory on the balance sheet, I think it’s generally consistent with what you have seen over the last several months in terms of what we will be bringing to market. Turing is an extremely important piece of architecture, and as you know, it will be with us for some time. I think the inventory balance is getting ready for that. And don’t forget our work in Datacenter: what we build for Volta is also, in some cases, a very complex computer. Those factors together, along with our Pascal architecture still in play, make up almost all of what we have there in terms of inventory.
Jensen Huang
Matt, on the channel inventory side, we see inventory in the lower end of our stack. That inventory is well-positioned for back-to-school and the building season that’s coming up in Q3. I feel pretty good about that. The rest of our product launches and the ramp-up of Turing are going really well. The rest of the announcements we haven’t made, but stay tuned. The RTX family is going to be a real game-changer for us. The reinvention of computer graphics altogether has been embraced by many developers. We’re going to see some really exciting stuff this year.
Operator
Next question is from Vivek Arya with Bank of America.
Vivek Arya
Actually, just a clarification, and then the question. On the clarification, Colette, could you also help us understand the gross margin sequencing from Q2 to Q3? And then, Jensen, how would you contrast the Pascal cycle with the Turing cycle? In your remarks, you mentioned Turing is a very strong advancement over what you had before. But when you launched Pascal, you guided to very strong Q3s and then Q4s. This time, the Q3 outlook, even though it’s good on an absolute basis, is perhaps not as strong on a sequential and relative basis. So, could you just help us contrast the Pascal cycle with what we should expect from the Turing cycle?
Colette Kress
Let me start first with your question regarding gross margins. As we move into Q3, we have essentially reached a normalization of our gross margins. Over the last several quarters, we have seen the impact of crypto and what that can do to elevate our overall gross margins. We believe we’ve reached a normal period, as our outlook assumes essentially no cryptocurrency contribution going forward.
Jensen Huang
Let’s see. Pascal was really successful. Pascal, relative to Maxwell, was in fact a leap, and it was a significant upgrade. The architectures were largely the same: they were both the same generation of programmable shading. However, Pascal was much more energy efficient, around 30% to 40% more energy efficient than Maxwell. That translated to performance benefits for customers. The success of Pascal was fantastic. There’s simply no comparison to Turing. Turing is a reinvention of computer graphics; it is the first ray-tracing GPU in the world; it’s the first GPU that will be able to ray trace light in an environment and create photorealistic shadows and reflections. The images are going to be so subtle and beautiful. Yet it’s backwards compatible with everything that we’ve done. This new hybrid rendering model extends what we’ve built before but adds two new capabilities: artificial intelligence and accelerated ray tracing. We did a good job laying the foundations of the development platform for the developers. We partnered with Microsoft to create DXR, Vulkan RT is also coming, and we have OptiX, which is used by ProViz renderers and developers globally. As a result, upon Turing’s release, we’re going to have a richness of applications that gamers will be able to enjoy. You mentioned guidance. I believe that on a year-over-year basis, we’re doing terrific. I’m super excited about the ramp of Turing. It is the case that we benefited in the last several quarters from an unusual lift from crypto. In the beginning of the year, we thought and projected that crypto would be a larger contribution through the year’s end. However, at this time, we consider it to be immaterial for the second half. That makes comparisons on a sequential basis harder. But on a year-over-year basis, I believe we’re doing terrific. Every single one of our platforms is growing: high-performance computing and, of course, Datacenter are growing.
AI adoption continues to seep from industry to industry. The automation brought about by AI will yield productivity gains like we have never seen before. With Turing, we’re going to reignite the Professional Visualization business, open up photorealistic rendering for the very first time to render farms and to everybody designing products who needs photorealistic visualization, and reinvent and reset graphics for video games. I believe we’re in a great position, and I look forward to reporting Q3 when the time comes.
Operator
Your next question is from Atif Malik with Citi.
Atif Malik
Colette, I have a question about the Datacenter. In your remarks, you mentioned that AI and high-performance computing are driving new verticals, some of which are among the fastest growing. However, some of your competitors have indicated that enterprise spending on server units is slowing down in the second half of this year, while it seems that your units are more closely tied to AI adoption. I’m interested in your thoughts on Datacenter growth for the second half of the year.
Colette Kress
As you know, we generally give our view on guidance one quarter out. You are correct that the Datacenter results we see are always a unique mix every single quarter. However, there are some underlying trends that will likely continue. The growth in usage by the hyperscalers, and the continuing industry-by-industry adoption, will likely persist because accelerated computing is so essential for their workloads and data. We expect, as we go into Q3, for Datacenter to grow both sequentially and year-over-year. We’ll likely see a mix of both selling our Tesla V100 platforms and a good contribution from DGX.
Jensen Huang
That’s right. Atif, let me just add a little more to that. The simplest way to think about it is this. In the transportation industry, there are two dynamics happening that will transform the industry. The first, of course, is ride hailing and ride sharing. Those platforms need to make a recommendation of which taxi to bring to which passenger, which customer. It’s a really large computing problem; it’s a machine learning problem and an optimization problem at very large scale. In each of those instances, you need high-performance computers to use machine learning to make that optimal match. The second is self-driving cars. Every single car company working on robotaxis or self-driving cars needs to collect data, label data, train a whole bunch of neural networks, and run those neural networks in cars. You can make a list of how many people are building self-driving cars, and every single one will need ever more GPU-accelerated servers. And that’s just for developing the model. The next stage is simulating the entire software stack, because we know that the world travels 10 trillion miles per year, and the best we could do is drive several million real miles. To cover the gap, we have to simulate and stress test our software stack, and the only way to do that is in virtual reality. It requires another supercomputer to simulate all the miles collected over the years to ensure software integrity before OTA updates. Transportation will be a massive opportunity. Healthcare is the same, from medical imaging, which is now using AI everywhere, to genomics, which is discovering the benefits of deep learning. The list continues across various industries. We’re discovering advantages of deep learning that could revolutionize industry after industry.
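The taxi-to-passenger matching described above can be sketched, in deliberately toy form, as an assignment problem. Real ride-hailing platforms use machine learning and large-scale global optimization rather than this greedy nearest-match heuristic, and all names and coordinates below are invented.

```python
# Toy sketch of the ride-hailing matching problem: assign each passenger
# the nearest free taxi. Production systems optimize globally over millions
# of rides with predicted travel times; this only shows the problem's shape.

def match_rides(taxis, passengers):
    """Greedily assign each passenger the nearest unassigned taxi."""
    assignments = {}
    free = dict(taxis)  # taxi id -> (x, y) position, copied so we can remove
    for pid, (px, py) in passengers.items():
        if not free:
            break  # more passengers than taxis
        # Pick the nearest free taxi by squared Euclidean distance.
        tid = min(free, key=lambda t: (free[t][0] - px) ** 2 + (free[t][1] - py) ** 2)
        assignments[pid] = tid
        del free[tid]
    return assignments

taxis = {"t1": (0, 0), "t2": (5, 5), "t3": (9, 1)}
passengers = {"p1": (1, 1), "p2": (8, 2)}
print(match_rides(taxis, passengers))  # {'p1': 't1', 'p2': 't3'}
```

Even this greedy version is quadratic in the number of taxis and passengers; doing the matching well, continuously, city-wide is why the workload lands on accelerated datacenter hardware.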
Operator
Your next question is from C.J. Muse with Evercore ISI.
C.J. Muse
I guess one short-term and one long-term question. For the short term, as you think about your gaming guidance, are you embedding any drawdown of channel inventory there? And then, longer term, can you talk about Turing Tensor Cores? Can you elaborate on the differentiation versus the Volta V100, particularly regarding 8-bit integer and the opportunities there for inferencing? Thank you.
Jensen Huang
We expect the channel inventory to work itself out. We manage our channel expertly and understand it very well. Our go-to-market strategy is through the channels around the globe, and we’re not concerned about the channel inventory. As we ramp Turing, we always start from the top down when launching a new architecture. We have plenty of opportunities during back-to-school and the upcoming gaming cycle to manage the inventory, and we feel good about it. Regarding Volta and Turing, CUDA is compatible across both. That’s one of the benefits of CUDA: applications that take advantage of CUDA are built on top of cuDNN, our deep neural network library, which optimizes networks for runtime. All those tools run on top of Volta, Turing, and Pascal. Turing includes the same Tensor Core found in Volta. Of course, Volta is designed for large-scale training and features fast HBM2 memories. It’s suited for datacenter applications and has 64-bit double precision, ECC, and high-resilience computing, along with robust software capabilities. Turing focuses on three major applications. The first is opening up Pro Visualization, a massive market that historically used render farms and couldn’t leverage GPUs until we enabled full path tracing and global illumination with large data sets. That’s a fresh market created by Turing. The second is reinventing graphics for real-time video games. The images created by Turing will vastly exceed what existed previously. The third is the supercharged Tensor Core for image generation and high-throughput deep learning inference for datacenters. Turing’s multiple SKUs highlight our strong engineering; we can scale one architecture across many platforms simultaneously. I hope that answers your question. The Tensor Core inference capability of Turing is going to be impressive.
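The 8-bit integer inference asked about here rests on quantization: representing weights and activations as int8 values plus a scale factor, trading a little precision for much higher throughput. The sketch below is a simplified, hypothetical illustration of symmetric INT8 quantization, not TensorRT’s actual calibration scheme, which selects ranges far more carefully.

```python
# Simplified symmetric INT8 quantization: map floats into [-127, 127] with a
# single scale, then map back. Real inference stacks calibrate the scale per
# tensor or per channel; this toy version just uses the maximum magnitude.

def quantize_int8(values):
    """Return (int8 codes, scale) for a list of floats."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes and the scale."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(q)  # [50, -127, 0, 100]
print(max(abs(w - r) for w, r in zip(weights, restored)))  # small rounding error
```

Because each value now fits in one byte and the arithmetic is integer, hardware can process far more operands per cycle, which is the source of the inference speed-ups discussed on the call.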
Operator
Your next question is from Joe Moore with Morgan Stanley.
Joe Moore
Great. Thank you. I wonder if you could talk about cryptocurrency. Now that things have settled, you have done a good job of outlining how much of the OEM business was influenced by it, but there has also been a sense that some of the GeForce business was impacted by crypto. Looking back, can you quantify that for us? Also, I'm trying to understand how crypto would affect your guidance for October, considering it seemed to have a minimal impact in the July quarter.
Jensen Huang
I think the second question is easier to answer; the first one is just ambiguous, and it’s hard to predict or estimate regardless. For the second question, the answer is that we’re projecting basically zero. Regarding the first question, the extent to which GeForce was used for crypto? A lot of gamers could mine at night while they sleep. So, did they buy it for mining or gaming? It’s hard to say. Some miners couldn’t buy our OEM products, so they turned to retail. That happened significantly in the last several quarters, probably starting from late Q3, through Q4 and Q1, and very little last quarter. We’re projecting no crypto-mining contribution moving forward.
Operator
Your next question is from Toshiya Hari with Goldman Sachs.
I had one for Jensen and one for Colette. Jensen, I was hoping you could remind us how meaningful your inference business is today within Datacenter, and how you would expect growth to come about over the next two years as your success at accounts like Google proliferates across a broader set of customers. For Colette, can you give directional guidance for each of your platforms? And for Gaming specifically, can you talk about whether or not new products are embedded in that guide?
Inference is going to be a very large market for us. It’s material now in our Datacenter business; it’s not the largest segment, but I believe it will become very large. There are approximately 30 million servers globally and many millions more in enterprises, and I believe almost every server will be accelerated in the future. The reason is that AI, deep learning software, and prediction models will be integrated everywhere, and acceleration has proven to be the best approach. We’ve been laying the foundations for inference for two to three years. Earlier this year, we announced that we successfully integrated the Tesla P4, a low-profile, high-energy-efficiency inference accelerator, into hyperscale datacenters, and announced TensorRT 4, the fourth generation of our neural network optimizing compiler. TensorRT 4 goes beyond CNNs and image recognition, now supporting voice recognition, natural language understanding, recommendation systems, and translation. These applications are pervasive across internet services. We’re actively working with internet service providers globally to embed inference acceleration into their systems. They demand high throughput and low latency; voice recognition is only useful when it responds within a short time frame, and our platform excels there. This week, we announced that Turing has 10 times the inference performance of Pascal, which was already a few hundred times faster than CPUs. The pace of progress in supporting different neural networks, optimizing compilers, and advancing our processors is remarkable. I think we’re raising the bar.
When you look at our overall segments, as you’ve seen in our Q2 results, there was growth across every single one of our platforms from a year-over-year standpoint. We expect to see that again in our Q3 guidance: year-over-year growth across each and every one of those platforms. Our OEM business will likely be down year-over-year again due to the absence of cryptocurrency in our forecast. In terms of sequential growth, we expect Datacenter to grow and likely Gaming as well. It’s still early to make predictions for ProViz and Automotive.
Operator
Your next question is from Blayne Curtis with Barclays.
Two on gross margin. Colette, I just want to make sure I understood why gross margins are guided down from July to October. I know you’ve been getting a benefit from crypto, but it was pretty minimal in July. Are there any other moving pieces? Additionally, from a longer-term perspective, how do you see the ramp of Turing affecting gross margins? You’re enabling many capabilities you should get paid for, and 12 nanometer is relatively mature. I’m curious how to think about gross margins over the next few quarters.
Yes. Let me address your first point about gross margins. Although crypto revenue may not be substantial, it still has a derivative impact across our stack in terms of sales and channel replenishment. Over the last several quarters, we benefited from strong sales and our margins prospered as a result. In Q2, we saw 500 basis points of growth year-over-year, and we expect Q3 to show significant year-over-year growth as well. Our high value-added platforms, particularly in Datacenter, as well as what we expect from Turing in our Quadro segment, should also drive gross margins as we move forward. However, we still need time to see this unfold, so we have no further announcements at this time.
Operator
Your next question is from Aaron Rakers with Wells Fargo.
I’m curious, as we look at the data center business, if you can help us understand the breakdown of demand between hyperscale, supercomputing, and AI. Additionally, one of the metrics that’s been remarkable over the last couple of quarters is your significant growth in China. As a follow-up, I’m curious whether that relates to the Datacenter business, or what’s really driving it. Thank you.
Yes, Aaron. At the highest level, computing demand is continuing to grow at its historical rate of 10x every five years, essentially tracking Moore’s Law. However, Moore’s Law has stagnated. The demand for high-performance computing, medical imaging, life sciences computing, and artificial intelligence creates a significant gap that can only be filled through alternative methods, and NVIDIA GPUs stand to benefit from this shift. This year, you heard Colette mention that NVIDIA GPUs represent 56% of new performance in the world’s TOP500, which reflects the future of computing. I expect that from one vertical industry to another, as computing demand continues to escalate, developers will logically adopt NVIDIA GPU computing to meet their needs.
Operator
Your next question is from Harlan Sur with J.P. Morgan.
Good afternoon. Thanks for taking my question. When we think about cloud and hyperscale, we tend to think about the top players designing their own platforms, using your Tesla-based products, or even designing their own chips for AI and deep learning. However, there’s a larger base of small and medium-sized cloud and hyperscale customers who can’t afford that scale of R&D. I believe your HGX platform is focused on that segment. Jensen, can you give us an update on the uptake of your first-generation HGX-1 reference platform and initial interest in HGX-2?
HGX-1 was essentially a prototype for HGX-2. HGX-2 is doing incredibly well, for all the reasons you mentioned. Most hyperscale datacenters can’t afford to engineer such complicated motherboards at the scale we’re discussing. We designed HGX-2, and it was quickly adopted by several of the most important hyperscalers in the world. At GTC Taiwan, we announced that all leading server OEMs and ODMs are supporting HGX-2 and are ready to take it to market. We’re wrapping up development of HGX-2 and ramping into production, which leads me to believe it will be a significant success.
Operator
Your next question is from Tim Arcuri with UBS.
Thank you. I have two questions, Jensen, both for you. First, now that crypto has declined, I’m curious what you think about the potential for a large number of cards being resold on eBay or other channels that could cannibalize new Pascal sales. Is that something that concerns you? And number two, while I know you don’t typically discuss customers, Tesla mentioned you on their call. I’m curious if you could share your views on their development of Hardware 3 and their efforts to move away from your DRIVE platform.
Sure. The crypto mining market is vastly different today than it was three years ago. Even new cards at current prices aren’t appealing for mining, but the existing capacity is still in use, and you can see the hash rates continue. My expectation is that the installed base of miners will keep using their cards. Importantly, we’re in the process of announcing a new approach to computer graphics. With Turing and our RTX platform, computer graphics will never be the same, and I believe our new generation of GPUs will be exceptional. I also appreciate Elon’s comments about our company; I think Tesla builds incredible cars, and I drive one happily. Regarding the next generation, when we began working on autonomous vehicles, they needed our support, and we provided a three-year-old Pascal GPU for the current Autopilot computers. It’s clear that developing a safe autopilot system requires significantly more computing power. To ensure safe driving, the algorithms must be rich and capable of handling edge cases in a wide range of situations; handling more corner cases, driving more smoothly, and making quicker decisions all demand greater computing capability. That’s precisely why we built Xavier, which is now in production and earning positive feedback. In conclusion, we had a great quarter. Our core platforms exceeded expectations, even as crypto largely disappeared. Each of our platforms, AI, Gaming, ProViz, and self-driving cars, continues to enjoy incredible adoption. The markets we enable are among the most impactful globally. We launched Turing this week after a decade of work, completing the NVIDIA RTX platform. I’m incredibly proud of our company for taking on this challenge. We’re reinventing the graphics stack and reinvigorating the industry. Stay tuned as we unveil the exciting RTX story. See you next time.
Operator
Thank you for joining. You may now disconnect.