NVIDIA Corp (NVDA) — Q1 2019 Earnings Call Transcript
Operator
Good afternoon. My name is Kelsey and I am your conference operator for today. Welcome to NVIDIA's financial results conference call. All lines have been placed on mute. After the speakers' remarks, there will be a question-and-answer period. Thank you. I'll now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2019. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It's also being recorded. You can hear a replay by telephone until May 16, 2018. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2019. The content of today's call is NVIDIA's property; it can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 10, 2018, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website. With that, let me turn the call over to Colette.
Thanks, Simona. We had an excellent quarter with growth across all our platforms led by gaming and datacenter. Q1 revenue reached a record $3.21 billion, up 66% year-over-year, up 10% sequentially and above our outlook of $2.9 billion. Once again, all measures of profitability set records, with GAAP gross margins at 64.5%, operating margins at 40.4% and net income at $1.24 billion. From a reporting segment perspective, Q1 GPU revenue grew 77% from last year to $2.77 billion. Tegra Processor revenue rose 33% to $442 million. Let's start with our gaming business. Revenue was $1.72 billion, up 68% year-on-year and down 1% sequentially. Demand was strong and broad-based across regions and products. The gaming market remains robust and the popular Battle Royale genre is attracting a new wave of gamers to the GeForce platform. We also continue to see demand from upgrades with about 35% of our installed base currently on our Pascal architecture. The launch of popular titles, like Far Cry 5 and Final Fantasy XV continued to drive excitement in the quarter. Gamers are increasingly engaging in social gameplay and gaming is rapidly becoming a spectator sport, while the production value of games continues to increase. This dynamic is fueling a virtuous cycle that expands the universe of gamers and drives a mix shift to higher-end GPUs. At the recent Game Developers Conference, we announced our real-time ray tracing technology, NVIDIA RTX. Ray tracing is a movie-quality rendering technique that delivers lifelike lighting, reflections and shadows. This has long been considered the holy grail of graphics, and we've been working on it for over 10 years. We look forward to seeing amazing, cinematic games that take advantage of this technology come to the market later this year, with the pipeline building into next year and beyond. We expect RTX, as well as other new technologies like 4K and virtual reality, to continue driving gamers' requirements for higher GPU performance. 
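As a quick arithmetic check on the headline figures above (record revenue of $3.21 billion, up 66% year-over-year and 10% sequentially), the prior-period revenues implied by the stated growth rates can be backed out directly. This is a derived sketch, not reported data:

```python
# Back out implied prior-period revenue from the stated growth rates.
# Only the Q1 FY2019 figure and the growth percentages are from the
# prepared remarks; the prior-period values are derived, not reported.

q1_revenue_b = 3.21          # Q1 FY2019 revenue, $ billions
yoy_growth = 0.66            # up 66% year-over-year
seq_growth = 0.10            # up 10% sequentially

implied_prior_year_b = q1_revenue_b / (1 + yoy_growth)
implied_prior_quarter_b = q1_revenue_b / (1 + seq_growth)

print(f"Implied Q1 FY2018 revenue: ${implied_prior_year_b:.2f}B")    # ~$1.93B
print(f"Implied Q4 FY2018 revenue: ${implied_prior_quarter_b:.2f}B") # ~$2.92B
```

The implied prior-quarter figure also shows why a 10% sequential gain on top of a $2.9 billion quarter clears the $2.9 billion outlook by roughly $300 million.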
While supply was tight earlier in the quarter, the situation is now easing. As a result, we were pleased to see that channel prices for our GPUs are beginning to normalize, allowing gamers who had been priced out of the market last quarter to get their hands on the new GeForce GTX at a reasonable price. Cryptocurrency demand was again stronger than expected, but we were able to fulfill most of it with crypto-specific GPUs, which are included in our OEM business at $289 million. As a result, we could protect the vast majority of our limited gaming GPU supply for use by gamers. Looking into Q2, we expect crypto-specific revenue to be about one-third of its Q1 level. Gaming notebooks also grew well, driven by an increasing number of thin and light notebooks based on our Max-Q design. Nintendo Switch contributed strongly to year-on-year growth, reflecting that platform's continued success. Moving to datacenter, we had another phenomenal quarter with revenue of $701 million, up 71% year-on-year, up 16% sequentially. Demand was strong in all market segments and customers increasingly embraced our GPUs and CUDA platform for high-performance computing and AI. Adoption of our Volta architecture remained strong across a wide range of verticals and customers. In the public cloud segment, Microsoft Azure announced general availability of Tesla V100 instances joining Amazon, IBM and Oracle. Google Cloud announced that the V100 is now publicly available in beta. Many other hyperscale and consumer Internet companies also continued their ramp of Volta, which delivers five times the deep learning performance of its predecessor, Pascal. Volta has been chosen by every major cloud provider and server maker, reinforcing our leadership in AI deep learning. In high-performance computing, strength from the broad enterprise vertical more than offset the ramp down of major supercomputing projects such as the U.S. Department of Energy's Summit system. 
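For context on the crypto comment above: the remarks put crypto-specific GPU revenue at $289 million in Q1 and guide it to about one-third of that level in Q2. A back-of-envelope sketch of the implied Q2 figure (a derived estimate, not company guidance):

```python
# Implied Q2 crypto-specific revenue from the comment that it will be
# "about one-third of its Q1 level". Derived estimate, not guidance.

q1_crypto_m = 289.0                  # Q1 crypto-specific revenue, $ millions
q2_crypto_m = q1_crypto_m / 3.0      # "about one-third of its Q1 level"
implied_decline_m = q1_crypto_m - q2_crypto_m

print(f"Implied Q2 crypto revenue: ~${q2_crypto_m:.0f}M")        # ~$96M
print(f"Implied sequential decline: ~${implied_decline_m:.0f}M") # ~$193M
```

That roughly $190 million sequential headwind is worth keeping in mind when comparing the Q2 revenue guidance against Q1.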
We see a strong pipeline across a number of vertical industries, from manufacturing to oil and gas, which should help sustain the trajectory of high-performance computing next quarter and beyond. Traction is also increasing in AI inference. Inference GPU shipments to cloud service providers more than doubled from last quarter. Our pipeline is growing into next quarter. We dramatically increased our inference capabilities with the announcement of the TensorRT 4 AI inference accelerator software at our recent GPU Technology Conference in San Jose. TensorRT 4 delivers deep learning inference up to 190 times faster than CPUs for common applications, such as computer vision, neural machine translation, automatic speech recognition, speech synthesis and recommendation systems. It also dramatically expands the set of use cases compared with the prior version. With TensorRT 4, NVIDIA's market reach has expanded to approximately 30 million hyperscale servers worldwide. At GTC, we also announced other major advancements in our deep learning platform. We doubled the memory of Tesla V100 to 32 GB, a key enabler for customers training larger neural networks on larger data sets. We also announced a new GPU interconnect fabric called NVIDIA NVSwitch, which joins up to 16 V100 GPUs at a speed of 2.4 terabytes per second, or five times faster than the best PCIe switch. We also announced our DGX-2 system, which leverages these new technologies and its updated, fully optimized software stack to deliver a 10x performance boost beyond last year's DGX. DGX-2 is the first single server capable of delivering 2 petaflops of computational power. We are seeing strong interest from both hyperscale and enterprise customers and we look forward to bringing this technology to cloud customers later this year. At our Investor Day in March, we updated our forecast for the datacenter addressable market. 
We see the datacenter opportunity as very large, fueled by growing demand for accelerated computing in applications ranging from AI to high-performance computing across multiple market segments and vertical industries. We estimate the TAM at $50 billion by 2023, which extends our previous forecast of $30 billion by 2020. We see strong momentum in the adoption of our accelerated computing platform and the expansion of our development ecosystem to serve this rapidly growing market. About 8,500 attendees registered for GTC, up 18% from last year. CUDA downloads have continued to grow, setting a fresh record in the quarter. Our total number of developers is well over 850,000, up 72% from last year. Moving to pro visualization, revenue grew to $251 million, up 22% from a year ago and accelerating from last quarter, driven by demand for real-time rendering, as well as emerging applications like AI and VR. Strength extended across several key industries, including public sector, healthcare and retail. Key wins in the quarter included Columbia University, using high-end Quadro GPUs for AI, and Siemens, using them for CT and ultrasound solutions. At GTC, we announced the Quadro GV100 GPU with NVIDIA RTX technology, capable of delivering real-time ray tracing to the more than 25 million artists and designers throughout the world. RTX makes computationally intensive ray tracing possible in real time, when running professional design and content creation applications. This allows media and entertainment professionals to see and interact with their creations with correct light and shadows and do complex renders up to 10 times faster than a CPU alone. The NVIDIA OptiX AI denoiser built into RTX delivers almost 100 times the performance of CPUs for real-time noise-free rendering. This enables customers to replace racks of servers in traditional render farms with GPU servers at one-fifth the cost, one-seventh the space and one-seventh the power. Lastly, automotive. 
Revenue grew 4% year-on-year to a record $145 million. This reflects the ongoing transition from our infotainment business to our growing autonomous vehicle development and production opportunities around the globe. At GTC and Investor Day, we made key product announcements on the advancement of autonomous vehicles and established a total addressable market opportunity of $60 billion by 2035. We believe that every vehicle will be autonomous one day. By 2035, this will encompass 100 million autonomous passenger vehicles and 10 million robotaxis. We also introduced NVIDIA DRIVE Constellation, a platform that will help carmakers, Tier 1 suppliers and others developing autonomous vehicles test and validate their systems in a virtual world across a wide range of scenarios before deploying on the road. Each year, 10 trillion miles are driven around the world. Even if test cars can eventually cover millions of miles, that's an insignificant fraction of all the scenarios that require testing to create a safe and reliable autonomous vehicle. DRIVE Constellation addresses this challenge by enabling cars to safely drive billions of miles in virtual reality. The platform has two different servers. The first is loaded with GPUs and simulates the environment that the car is driving in, as in a hyper-real video game. The second contains the NVIDIA DRIVE Pegasus autonomous vehicle computer, which processes the simulated data as if it were coming from the sensors of a car driving on the road. Real-time driving commands from the DRIVE Pegasus are fed back to the simulation for true hardware-in-the-loop verification. Constellation will enable the autonomous vehicle industry to safely test and validate their AI self-driving systems in ways that are not practical or possible with on-road testing. We also extended our product roadmap to include our next-generation DRIVE autonomous vehicle computer. 
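The scale argument behind DRIVE Constellation above can be made concrete: even millions of physical test miles are a vanishing fraction of the miles driven worldwide each year. A sketch of the arithmetic, where the 10 trillion figure is from the remarks and the test-fleet mileage is an illustrative assumption:

```python
# Scale gap between physical test fleets and real-world driving.
# The 10 trillion annual miles figure is from the remarks; the fleet
# mileage below is a hypothetical, illustrative assumption.

annual_miles_driven = 10e12   # ~10 trillion miles driven per year worldwide
test_fleet_miles = 5e6        # hypothetical fleet logging 5 million miles/year

coverage = test_fleet_miles / annual_miles_driven
print(f"Test-fleet coverage: {coverage:.1e} of annual miles")  # 5.0e-07
```

Even a very large hypothetical fleet covers well under a millionth of the annual miles driven, which is the motivation for simulating billions of virtual miles instead.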
We have created a scalable AI car platform that spans the entire range of autonomous driving, from traffic jam pilots to Level 5 robotaxis. More than 370 companies and research institutions are now using NVIDIA's automotive platform. With this growing momentum, we remain excited about the intermediate and long-term opportunities for our autonomous driving business. Now moving to the rest of the P&L. Q1 GAAP gross margin was 64.5% and non-GAAP gross margin was 64.7%, records that reflect continued growth in our value-added platforms. GAAP operating expenses were $773 million. Non-GAAP operating expenses were $648 million, up 25% year-on-year. We continue to invest in key platforms driving our long-term growth, including gaming, AI and automotive. GAAP net income was a record $1.24 billion and EPS was $1.98, up 145% and 151% respectively from a year earlier. Some of the upside was driven by a tax rate of 5% compared to our guidance of 12%. Non-GAAP net income was $1.29 billion and EPS was $2.05, both up 141% from a year ago, reflecting the revenue strength as well as gross margin and operating margin expansion and a slightly lower tax rate. Our quarterly cash flow from operations reached record levels at $1.45 billion. Capital expenditures were $118 million. With that, let me turn to the outlook for the second quarter of fiscal 2019. We expect revenue to be $3.1 billion plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 63.6% and 63.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $810 million and $685 million, respectively. GAAP and non-GAAP OI&E are both expected to be income of approximately $15 million. GAAP and non-GAAP tax rates are both expected to be 11%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $130 million to $150 million. 
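The Q2 outlook above defines an explicit envelope: revenue of $3.1 billion plus or minus 2%, and a non-GAAP gross margin of 63.5% plus or minus 50 basis points. A sketch computing the implied ranges (derived values, not company guidance):

```python
# Implied Q2 FY2019 guidance ranges from the stated midpoints and
# tolerances. The range endpoints and gross profit are derived values,
# not figures the company guided to.

revenue_mid_b = 3.1          # $ billions, plus or minus 2%
gm_mid = 0.635               # non-GAAP gross margin, plus or minus 50 bps

revenue_low_b = revenue_mid_b * 0.98
revenue_high_b = revenue_mid_b * 1.02
gm_low, gm_high = gm_mid - 0.005, gm_mid + 0.005

# Implied non-GAAP gross profit at the midpoints.
gross_profit_mid_b = revenue_mid_b * gm_mid

print(f"Revenue range: ${revenue_low_b:.3f}B to ${revenue_high_b:.3f}B")  # $3.038B to $3.162B
print(f"Gross margin range: {gm_low:.1%} to {gm_high:.1%}")               # 63.0% to 64.0%
print(f"Implied gross profit at midpoint: ~${gross_profit_mid_b:.2f}B")   # ~$1.97B
```

Reading the guidance as an envelope rather than a point estimate is what the later seasonality questions from analysts are probing.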
Further financial details are included in the CFO Commentary and other information available on our IR website. In closing, I'd like to highlight a few upcoming events for the financial community. We'll be presenting at the JPMorgan Technology Conference next week on May 15 and at the Bank of America Global Technology Conference on June 5. We will also hold our Annual Meeting of Stockholders online on May 16. We will now open the call for questions. Simona and I are here in Santa Clara and Jensen is dialing in from the road.
Operator
Your first question is from Stacy Rasgon with Bernstein Research.
Hi guys, thanks for taking my questions. First, I had a question on gaming seasonality. It's usually down pretty decently in Q1, but it was obviously flat this time as you were filling the channel. Now that that's done, I was wondering what the supply-demand dynamics, as well as any thoughts on crypto, might mean for seasonality into Q2 versus the typical pattern, where it would usually be down or up pretty decently. How are you looking at it? And this is a question for Colette.
Jensen, why don't you start on the question for Stacy and I'll follow up afterwards after you speak.
Okay. Hi Stacy, so let's see. Q1, as you probably know, Fortnite and PUBG are global phenomena. The success of Fortnite and PUBG is beyond comprehension, really. Those two games are a combination of Hunger Games and Survivor, and they have captured the imagination of gamers all over the world. We saw the uptick and we saw the demand on GPUs from all over the world. Surely, there was scarcity as you know. Crypto miners bought a lot of our GPUs during the quarter and it drove prices up. I think that a lot of the gamers weren't able to buy into the new GeForce as a result. We're starting to see the prices come down. We monitor spot pricing every single day around the world. Prices are starting to normalize. It's still higher than where they should be. The demand is still quite strong out there. My sense is there's a fair amount of pent-up demand still. Fortnite is still growing in popularity. PUBG is doing great. We've got some amazing titles coming out. Our job is to make sure that we work as hard as we can to get supply out into the marketplace. Hopefully, by doing that, pricing will normalize and gamers can buy their favorite graphics card at a price that we hope they can get. The simple answer to your question is Fortnite and PUBG. The demand is just really great. They did a great job.
Operator
Your next question is from Joe Moore with Morgan Stanley.
I wonder—Colette had talked about the inference doubling in sales quarter-over-quarter with cloud. Can you just talk about where you're seeing the early applications for inference? Is that sort of as-a-service business or are you looking at internal cloud workloads? Just any color you can give us on where you guys are sitting in the inference space. Thank you.
Sure hi Joe. As you know, there are 30 million servers around the world. They were put in place during the time when the world didn't have deep learning. Now, with deep learning and machine learning approaches, the accuracy of prediction and recommendation has jumped dramatically. Internet service providers with a lot of different customers are jumping onto this new software approach. To optimize a neural network involves optimizing for the platform it targets. That's why we created TensorRT, which is an optimizing graph neural network compiler that allows different types of neural networks to run smoothly and quickly. The answer to your question is internal consumption. Applications like video recognition or detecting inappropriate video will require enormous amounts of computation.
Operator
Next question is from Vivek Arya with Bank of America.
Thank you for taking my question and congratulations on the strong growth and consistent execution. Jensen, I have two questions about the datacenter, one from a growth perspective and the second from a competition perspective. From the growth side, you guys are doing about $3 billion or so annualized, but you have outlined a market that could be $50 billion. What needs to happen for the next inflection? Is it something in the market that needs to change? Is it something in the product set? How do you grow and address that $50 billion market, considering that you have only penetrated a few percent of it today? What needs to change for the next inflection point? On the competition side, how should we think about competition coming from some of your cloud customers, like Google announcing a TPU 3.0 or others looking at competing technologies? Any color on both how you look at growth and competition would be very helpful. Thank you.
Thanks, Vivek. At its core, CPU scaling has really slowed. If you think about the several hundred billion dollars worth of computer equipment installed in the cloud and datacenters globally, as applications for machine learning and high-performance computing come along, the world needs a solution. GPU computing is that solution we pioneered a decade and a half ago. We find ourselves in a great position today. As Colette mentioned, we have close to 1 million developers now on this platform. It's incredibly fast, speeding up CPUs by 10, 20, 50 times or more, depending on the algorithm. There are three major segments we're focusing on: deep learning training, inference, and high-performance computing. The number of AI engineers worldwide is growing quickly. It's important we nurture the ecosystem for supercomputing growth. Regarding competition, the CPU scaling has slowed, so the world needs another approach. Google has announced TPU 3.0, but we're still ahead with our Tensor Core GPU. Our approach is more flexible and programmable, which is a significant advantage.
Operator
Your next question comes from Toshiya Hari with Goldman Sachs.
Great. Thank you so much. Jensen, I had a question regarding your decision to pull the plug on your GeForce Partner Program. I think most of us read your blog from last Friday, so we understand the basic background. Can you describe what led to this decision and perhaps talk a little bit about the potential implications, if any, in terms of your ability to compete or gain share? That will be really helpful. Thank you so much.
Yeah. Thanks for your question, Toshiya. At the core, the program was about ensuring that gamers who buy graphics cards know exactly the GPU brand that's inside. The gaming experience of a graphics card depends on the GPU chosen, and using one brand while interchanging the GPU underneath causes confusion. Most of the ecosystem loved it, but some disliked it. Rather than deal with all that distraction, and because we're doing well, we decided to discontinue the program so we can focus on continuing to help gamers choose graphics cards as we've always done.
Operator
Next question is from Sajal Dogra with Evercore ISI.
Hi. This is Sajal Dogra calling in for C.J. Muse. Thank you for taking my question. I had a question on HPC. TSMC, on their recent call, raised their accelerator attach rate forecast in HPC to 50% from mid-teens. I'd love to get further details on what exactly NVIDIA is doing with software and services that creates this competitive positioning in HPC and AI. If I could also ask about benchmarks. There has been some news on AI benchmarks, whether it's Stanford DAWNBench, etc. I'd love your thoughts on the current state of benchmarks for AI workloads and the relative positioning of ASICs versus GPUs as we move towards newer neural networks like RNNs and GANs.
Yeah, thanks for the question. At the core, CPU scaling has stalled and its limits of physics have been reached. The world needs another approach going forward. We created the GPU computing approach a decade and a half ago. With the number of developers and applications emerging, it's clear that HPC's future is accelerating. Our GPU approach is in the perfect position to serve the void. We recently released three speed records: fastest single GPU, fastest single computer node, and fastest cloud instance. We love benchmarks because they simplify our leadership position. The number of networks continues to grow requiring a lot of software support, which we excel at as a full-stack computing company.
Operator
Next question is from Blayne Curtis with Barclays.
Thanks for taking my question. Jensen, I wanted to ask on the inference side about edge inference and beyond autos when you look at sizing that TAM. What are the other big areas that you think you can penetrate with GPUs in edge inference besides autos?
Yeah, Blayne. The largest inference opportunity for us is actually in the cloud and the datacenter. There's an explosion in the number of different types of neural networks available, including image recognition, video sequencing, speech recognition, and many others. Creating one ASIC adapted to all these different types is a real challenge, and by the time such an ASIC is built, we will have already moved ahead with the Tensor Core GPU, which has the flexibility to adapt to this variety of applications. The next largest market will be in verticals, such as self-driving cars. We will see significant opportunities both in creating autonomous vehicle stacks and in the datacenter for developing those neural network models.
Operator
Your next question is from Timothy Arcuri with UBS.
Thank you. I wanted to go back to the question about seasonality for gaming in June. Normal seasonal sounds like it's up mid-teens for June in gaming. But obviously, the comps are skewed a little because of the channel restock and the crypto stuff. So does the guidance for June assume that gaming is better or worse than that mid-teens normal seasonal? Thank you.
We're expecting Q2 to be better than Q1 and also better than seasonality. Did that answer your question?
Operator
Your next question is from Atif Malik with Citi.
Hi. Thanks for taking my question and good job on the results. Colette, I have a question about your gross margins. Your gross margins have been expanding on product mix despite component pricing headwinds on the DRAM side. When do you expect component pricing to become a tailwind to your gross margins?
Thanks for the question. When you think about our gross margins, just over this last quarter, we were working on stabilizing the overall supply that was out there in the market for consumer GPUs, and we benefited with a higher gross margin as we completed that. We have absorbed a significant amount of component price increases, particularly around memory. We're not able to forecast when component pricing will stabilize, but we believe the value added that our platforms provide is critical.
Operator
Your next question is from Chris Caso with Raymond James.
Yes, hi. Thanks for taking the question. My question is the progress on the deployment of Volta into the cloud service providers. You talked in your prepared remarks about the five deployments, including Google beta. Can you talk about how soon we can expect to see some of those remaining deployments? And of those already launched, how far are they along? I guess, to say proverbially, what inning are we in with these deployments?
Yes. Volta is a reinvented GPU, designed to excel at deep learning. Every cloud will adopt it. Initial deployment has been for internal use. Volta has been shipping to cloud providers and they are moving fast to open up Volta for customer consumption. I expect to see many more going online this quarter.
Operator
Your next question comes from Mark Lipacis with Jefferies.
Hi, thanks for taking my question. I had a question about the DGX family of products. Our own field work indicates very positive reception for DGX. Could you help us understand how much of the high growth in the datacenter business is being driven by DGX? When DGX-2 starts to ramp in the back half of the year, does DGX-1 replace the original DGX or are they targeting different segments? Any color on how to think about those two products would be helpful. Thank you.
DGX-2 and DGX-1 will both be in the market at the same time. The DGX business is a few hundred million dollar business and was introduced last year. It's designed for enterprises that need supercomputing without the need to build a supercomputer themselves. DGX serves enterprises in areas from car companies to healthcare, doing life sciences work or medical imaging work. We see great success there. In short, both products will coexist in the market.
Operator
Next question is from Mitch Steves with RBC Capital Markets.
Hey guys. I'm actually going to go to a more nitty-gritty question just on the financial side to make sure I'm understanding this right. So the OEM beat was pretty material given a lot of crypto revenue. Is it still the case that OEM is materially lower gross margin than your corporate average at this time?
Sure, I'll take that question. Generally, our OEM business can be a little volatile as it incorporates our mainstream GPUs as well as our Tegra integrated products. They are slightly below our corporate averages, as discussed in prior calls. So yes, you're correct, it remains a small part of our business right now.
Operator
Your next question comes from Christopher Rolland with Susquehanna.
Hey guys, thanks for the question. Your competitor thinks that just 10% of their sales were from crypto or like $150 million, $160 million. You guys did almost $300 million there. This could imply that you have two-thirds or more of that market? What's going on there? Is there a pricing dynamic allowing you to have such share, or do you think it's your competitors that don't know what's actually being sold to miners versus gamers? Why such implied share in that market? Thanks.
We try to transparently review our numbers as best we can. Our strategy is to create a SKU that allows crypto miners to fulfill their needs, which we call CMP. We fulfill demand that way as much as possible. Sometimes, it's just not feasible due to demand being too great, but that's the way we operate.
Operator
Your next question is from Craig Ellis with B. Riley.
Thanks for sneaking me in and congratulations on all the financial records in the quarter. Jensen, I just wanted to come back to an announcement you made at GTC with ray tracing. The technology looked high fidelity, and I think you noted that it was very computationally intensive. As we think about the gaming business and the potential for ray tracing to enter that platform group, what does it mean for the dynamics of pushing the high end of the market? For example, will it provide you further flexibility for those launches as you bring exciting high-end technology to market?
NVIDIA RTX is the biggest computer graphics invention in the past 15 years. It took us a decade to create, and it merges computer graphics rasterization with light simulation known as ray tracing, as well as deep learning and AI into one framework to achieve cinematic rendering in real time. Previously, it took hours to render just one frame. RTX enables this in real time, dramatically reducing costs and time for rendering. Such advancements not only improve existing markets but will also drive greater GPU demand, benefiting the entire ecosystem.
Operator
Your last question comes from Stacy Rasgon with Bernstein.
Hi guys, thanks for fitting me in for my follow-up. This is a question for Colette. I want to follow up again on the seasonality. Understanding the prior comments, normal seasonal for Q2 for gaming would be up in the double digits. Given your commentary on crypto declining in Q2 and the drivers around datacenter and Volta ramp, I can't bring that together with the idea of gaming being above seasonal within the context of your guidance envelope. How should I reconcile those things? How are you actually thinking about seasonality for gaming into Q2 within the context of the scenarios in your guidance for next quarter?
Sure, Stacy. Let me see if I can bridge that understanding. Q1, we outgrew seasonality significantly. We ended Q4 with very low inventory, and we spent Q1 establishing adequate inventory. Based on our performance in Q1, we left the quarter with healthy channel inventory levels. In Q2, since we won't need to fill channels like we did previously, the focus is now on the demand from gamers. Our guidance reflects that while gaming remains strong. Historically, our H2s are usually higher than H1s, and gaming seasonality fits into this. I hope this clarifies where we stand.
Operator
Your last question is from Will Stein with SunTrust.
Hi. Thank you for taking my question. I'm wondering about the supply chain challenges you discussed in the gaming end market. Is there something particular to that market that makes the shortages concentrated there? Or are other end markets like the datacenter also restricted from achieving their potential growth due to shortages? Please talk about the pace of recovery in those aspects.
Let me start off. Our datacenter business performed phenomenally. Volta is doing extremely well. There's a lot of time for qualification in datacenters, and our overall growth there has been outstanding. There are no significant supply challenges on that end. I'll pass it over to you, Jensen, for further context.
The reason miners love GeForce is because miners are everywhere in the world. Cryptocurrency is digital and distributed. GeForce is the single largest distributed supercomputing infrastructure globally. Every gamer essentially has a supercomputer at their PC. When new cryptocurrencies arise, GeForce is a good candidate for that demand. We try to directly serve the major miners to alleviate some of the demand pressure off the GeForce market, so that pricing would normalize for gamers. The demand for gaming is strong, especially with Fortnite and PUBG capturing the imagination of millions globally. Overall, I expect inventory levels to normalize, which would benefit gamers greatly.
Operator
Unfortunately, we ran out of time. I will now turn it back over to Jensen for any closing remarks.
Let's see here. Is it my turn again? We had another great quarter, record revenue, record margins, record earnings, growth across every platform. Datacenter achieved another record with strong demand for Volta and AI inference. Gaming was strong. We are delighted to see prices normalizing, allowing us to better serve pent-up gamer demand. The heart of our opportunity is the incredible growth of computing demand for AI just as traditional computing has slowed. The GPU computing approach we pioneered is ideal for filling this vacuum. Our invention of the Tensor Core GPU has further enhanced our strong position to power the AI era. I look forward to giving you another update next quarter. Thank you.
Operator
This concludes today's conference call. Thank you for joining. You may now disconnect.