NVIDIA Corp (NVDA) — Q2 2017 Earnings Call Transcript
Operator
Good afternoon. My name is Desiree, and I'll be your conference operator today. I would like to welcome you to the NVIDIA Financial Results Conference Call. All lines have been placed on mute. After the speakers' remarks there will be a question-and-answer period. I would now turn the call over to Arnab Chanda, Vice President of Investor Relations at NVIDIA. You may begin your conference.
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2017. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that today's call is being webcast live on NVIDIA's Investor Relations website. It is also being recorded. You can hear a replay by telephone until the 18th of August 2016. The webcast will be available for replay up until next quarter's conference call to discuss Q3 financial results. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without our prior written consent. During the course of this call, we may make forward-looking statements based on current expectations. These forward-looking statements are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All of our statements are made as of today, the 11th of August 2016 based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary which is posted on our website. With that, let me turn the call over to Colette.
Thanks, Arnab. This quarter we introduced our new family of Pascal-based GPUs, one of our most successful launches ever. We also benefited from both the ongoing adoption of deep learning and our expanding engagement with hyperscale datacenters around the world as they apply deep learning to all the services they provide. Revenue continued to accelerate, rising 24% to a record $1.43 billion. We saw strong sequential and year-on-year growth across our four platforms: Gaming, Professional Visualization, Datacenter, and Automotive. Our business model based on driving GPU compute platforms into highly targeted markets is clearly succeeding. The GPU business was up 25% to $1.2 billion from a year ago. The Tegra processor business increased 30% to $166 million. In Q2, our four platforms contributed nearly 89% of revenue, up from 85% a year earlier and 87% in the preceding quarter. They collectively increased 29% year-over-year. Let's begin with our Gaming platform. Gaming revenue increased 18% year-on-year to $781 million, reflecting the success of our latest generation of Pascal-based GPUs. Demand was strong in every geographic region. The Pascal architecture offers a number of amazing technological advances and enables unprecedented performance and efficiency for playing sophisticated AAA gaming titles and driving rich, immersive VR experiences. In our most successful launch ever, we introduced four major products: the GeForce GTX 1080, 1070, and 1060 for the enthusiast market, and the TITAN X, the world's fastest consumer GPU for deep learning development, digital content creation, and extreme gaming. WIRED magazine called the GTX 1080 an unprecedented piece of electronic precision, one that performs Herculean feats of computational strength. Forbes called the GTX 1060, which brings a premium VR experience within reach of many, a fantastic product.
And Hardware Canucks described the TITAN X as a technological tour de force with frame rates that are simply mind-boggling. The GTX 1080, 1070, 1060, and TITAN X are now in full production and available to consumers worldwide. VR's potential is on vivid display in a new open-source game that we released during the quarter. Available on Steam, NVIDIA VR Funhouse was created with our GameWorks SDK. It integrates physical simulation into VR along with amazing graphics and precise haptics that make you feel like you're actually at a carnival. Moving to Professional Visualization, Quadro revenue grew to a record $214 million, up 22% year-on-year and up 13% sequentially. Growth came from the high end of the market for real-time rendering tools and mobile workstations. The 24GB M6000 GPU, launched earlier this year, is drawing strong interest from a broad range of customers. Digital Domain, a leading special-effects studio, is using Quadro to accelerate productivity for its work on films and commercials, which requires especially tight turnaround times. Engineering giant AECOM and the Yale School of Architecture are using Quadro to accelerate their design and engineering workflows. Last month at the SIGGRAPH conference, we introduced a series of new products that embed photorealistic and immersive experiences into workflows, incorporating Iray and VR. We launched the Pascal-based Quadro P6000, the most advanced workstation GPU, enabling designers to manipulate complex designs up to twice as fast as before. We demonstrated how deep learning is being brought to the realm of industrial design to create better products faster. And we launched eight new and updated software libraries, such as the VRWorks 360 Video SDK, which brings panoramic video to VR. Moving to the datacenter: revenue reached a record $151 million, more than doubling year-on-year and up 6% sequentially.
This impressive performance reflects strong growth in supercomputing, hyperscale datacenters, and GRID virtualization. Interest in deep learning is surging as industries increasingly seek to harness this revolutionary technology. Hyperscale companies remain fast adopters of deep learning, both for training and real-time inference, particularly for natural language processing, video, and image analysis. Among them are Facebook, Microsoft, Amazon, Alibaba, and Baidu. Major cloud providers are also offering GPU computing for their customers. Microsoft Azure is now using NVIDIA's GPUs to provide computing and graphics virtualization. During the quarter, we began shipping Tesla P100, the world's most advanced GPU accelerator, based on the Pascal architecture. Designed to accelerate deep learning training, it allows application performance to scale up to eight GPUs using our NVLink interconnect. We also announced a variant of the P100 based on PCI Express that makes it suitable for a wide range of accelerated servers. At our GPU Technology Conference in April, we introduced DGX-1, the world's first deep learning supercomputer. Equipped with eight P100s in a single box, it provides deep learning performance equivalent to 250 traditional servers. It comes loaded with NVIDIA software for AI application developers. We are seeing strong interest in DGX-1 from researchers and developers across academia, government labs, and large enterprises. Two days ago, Jen-Hsun delivered the very first DGX-1 production model to OpenAI. They plan to use this system in part to build autonomous agents like chatbots, cars, and robots. Broader deliveries will commence later this quarter. We will be talking more about deep learning later this year at regional versions of our GPU Technology Conference, set for eight cities around the world, among them Beijing, Amsterdam, Tokyo, and Seoul, as well as Washington D.C.
Our GRID graphics virtualization business more than doubled in the quarter. Adoption is accelerating across a variety of industries, particularly automotive and AEC, with customers including Statoil, a Norwegian oil and gas company. Finally, in automotive, revenue increased to a record $119 million, up 68% year-over-year and up 5% sequentially, driven by premium infotainment and digital cockpit features in mainstream cars. Our effort to help partners develop self-driving cars continues to accelerate. We have started to ship our DRIVE PX 2 automotive supercomputer to the 80-plus companies using both our hardware and DriveWorks software to develop autonomous driving technologies. We remain on track to ship our autopilot solution based on the DRIVE platform. Beyond our four platforms, our OEM and IP business was $163 million, down 6% year-on-year in line with mainstream PC demand. Now, turning to the rest of the income statement. We had record GAAP gross margin of 57.9%, while non-GAAP gross margin was 58.1%. These reflect the strength of our GeForce gaming GPUs, the success of our platform approach, and strong demand for deep learning. GAAP operating expenses were $509 million, down 9% from a year earlier. Non-GAAP operating expenses were $448 million, up 6% from a year earlier. This reflects increased hiring in R&D and marketing expenses, partially offset by lower legal fees. GAAP operating income for the second quarter was $317 million, compared to $76 million a year earlier. Non-GAAP operating income was $382 million, up 65%. Non-GAAP operating margins improved 680 basis points from a year ago to 26.8%. Now, turning to the outlook for the third quarter of fiscal 2017. We expect revenue to be $1.68 billion plus or minus 2%. Our GAAP and non-GAAP gross margins are expected to be 57.8% and 58% respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be approximately $530 million. 
Non-GAAP operating expenses are expected to be approximately $465 million. And GAAP and non-GAAP tax rates for the third quarter of fiscal 2017 are both expected to be 21% plus or minus 1%. Further financial details are included in the CFO commentary and other information available on our IR website. We will now open the call for questions.
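As a rough cross-check of the figures Colette quotes, the operating-margin arithmetic can be reproduced directly. The exact revenue denominator is in the CFO commentary, so the numbers below are the rounded ones from the call:

```python
# Sanity check of the non-GAAP figures quoted on the call (in $ millions).
revenue = 1430              # Q2 revenue, ~$1.43 billion
non_gaap_op_income = 382    # non-GAAP operating income, up 65% year-on-year

op_margin = 100 * non_gaap_op_income / revenue
print(f"Non-GAAP operating margin: ~{op_margin:.1f}%")  # vs. 26.8% stated

# Implied prior-year figures: op income up 65%, margin up 680 basis points.
prior_op_income = non_gaap_op_income / 1.65
prior_margin = op_margin - 6.8
print(f"Implied year-ago op income: ~${prior_op_income:.0f}M at ~{prior_margin:.1f}% margin")
```

The implied year-ago margin of roughly 20% is also consistent with the ~24% revenue growth mentioned above, a useful coherence check on the rounded call figures.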
Operator
Operator, could you please poll for questions? Thank you. And your first question comes from the line of Mark Lipacis.
Hi. Thanks for taking my questions. First question on the datacenter business. Can you help us understand to what extent the demand is being driven by deep learning applications versus the classic, computationally intense design applications?
Sure, Mark. Our datacenter business is comprised of three basic markets, as you're alluding to; one is high-performance computing, which you could characterize as a traditional supercomputing market, very computationally intensive. Our second market is GRID, which is our datacenter virtualization, enabling graphics application virtualization. You could stream and serve any PC or any PC application from the datacenter to any client device. The third application is deep learning, and this is largely our hyperscale datacenters applying deep learning to enhance their applications to make them much smarter and more delightful. The vast majority of the growth comes from deep learning by far, and the reason for that is because high-performance computing is a relatively stable business; it's still growing, and I expect the high-performance computing to do quite well over the coming years. GRID is a fast-growing business. I think Colette said that it was growing 100% year-over-year, but it's from a much smaller base. Deep learning is significant in size and is growing quite substantially.
That's very helpful. Thank you. And then last question. On the new – so you're just starting to ship Pascal now, and I guess my understanding is that historically, as you're shipping the new product, the yields have opportunity for improvement and the more volume is shipped, the more you climb down the yield curve. What classically happens here on the yield, and does that positively impact gross margins over the next three or four quarters? Thank you.
Yeah. So we've talked extensively about the way we prepare for new process nodes over the last several years. For long-term NVIDIA followers, you might recall that 40-nanometer was a very challenging node for us. With all of these challenges, we had opportunities to improve our company, and we've implemented a rigorous process node preparation methodology, which starts with some of the world's best process design engineers, circuit design engineers, and process readiness teams. We have a fantastic group dedicated just to getting the process ready for us. The second part of it is how that process readiness is integrated throughout the entire company. I'm really proud of the way the company executed on Pascal. 16-nanometer FinFET is no trivial task, not to mention the speed of the memories that we used: the world's first GDDR5X. We also ramped the world's first HBM2 memory and 3D memory stacking. The number of technological challenges we overcame in the ramp of Pascal is quite extraordinary. I'm super proud of the team. Now, going forward, we're going to continue to refine yields, and that is absolutely the case. However, we came into 16-nanometer with a great deal of preparedness, and it's too early to guess what's going to happen to yields and margins long-term, but we'll guide one quarter at a time.
Operator
And your next question comes from the line of Toshiya Hari.
Hi. Thank you for taking my questions and congrats on a very strong quarter. Your Q3 revenue guide implies further acceleration on a year-over-year basis. Are there one or two end markets where you expect outsized growth, or should we expect growth in the quarter to be broad-based?
Yeah, Toshiya. I appreciate it. We're experiencing growth in all of our businesses. Our strategy of focusing on deep learning, self-driving cars, gaming, and virtual reality—markets where GPU makes a significant difference—is really paying off. I think this quarter is the first quarter where we saw growth across every single one of our businesses. My expectation is that we're going to see growth across all of our businesses next quarter as well. But it's driven by our focus on these key markets, and away from traditional commodity component businesses. I think the one dynamic that sticks out is deep learning. Deep learning is a new computing approach, a new computing model, and it requires a new computing architecture. This is where the parallel approach of GPUs is perfectly suited. Five years ago, we started to invest in deep learning quite substantially. We made fundamental changes and enhancements for deep learning across our entire stack of technology, from the GPU architecture to the GPU design to the systems that GPUs connect into; for example, NVLink, to other system software designed for it, like cuDNN and DIGITS, to all the deep learning experts in our company. We've quietly invested in deep learning because we believe the future of deep learning is so impactful to the entire software industry; if you will, we pushed it all in. We find ourselves at the epicenter of this very important dynamic, and it's probably—if there is one particular growth factor that is of great significance—it would be deep learning.
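To make the "parallel approach" point concrete: a toy forward pass in plain NumPy (illustrative only, not NVIDIA's cuDNN or DIGITS code) shows that deep learning's work is dominated by matrix multiplies, which decompose into millions of independent multiply-accumulates, exactly the shape of work that a GPU's many small cores excel at.

```python
import numpy as np

# Toy two-layer forward pass: virtually all the work is in the two
# matrix multiplies, each of which is embarrassingly parallel -- the
# property that lets GPUs (via libraries like cuDNN) accelerate it.
rng = np.random.default_rng(0)

batch, d_in, d_hidden, d_out = 64, 1024, 512, 10
x = rng.standard_normal((batch, d_in))
w1 = rng.standard_normal((d_in, d_hidden)) * 0.01
w2 = rng.standard_normal((d_hidden, d_out)) * 0.01

h = np.maximum(x @ w1, 0.0)   # hidden layer with ReLU activation
y = h @ w2                    # output layer

# Multiply-accumulate count for the two layers:
macs = batch * d_in * d_hidden + batch * d_hidden * d_out
print(f"{macs:,} multiply-accumulates, all independent within each layer")
```

Layer sizes here are made up for illustration; the point is only that the operation count scales multiplicatively with layer width, which is why a parallel architecture pays off.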
Operator
And your next question comes from the line of Vivek Arya.
Thank you for taking my question and congratulations on good growth and the execution. Jen-Hsun, the first question is tied to PC gaming; very strong trends. I was curious if you could quantify how much of your base has upgraded to Pascal, and have you noticed any change in the behavior of gamers in this upgrade cycle—whether it's the price or part of the stack they are buying now, and how quickly they're refreshing versus what you might have seen in the Kepler and Maxwell cycles?
Sure. Thanks a lot, Vivek. Let's say, on PC gaming there's a few dynamics. Our installed base represents around 80 million active GeForce users worldwide. Only about a third has even upgraded to Maxwell, and we only started shipping Pascal halfway through this last quarter. That gives you a sense of how much of the installed base has yet to upgrade—and Pascal is unquestionably the biggest leap we've ever made generationally in GPUs. Not only is it high-performance, but it's also energy-efficient and includes exciting new graphics technologies for VR and others. I believe Pascal will be enormously successful for us. It comes at a time when the PC gaming marketplace is also quite different than it was five years ago. One dynamic that's powerful is that the production quality of video games today is much higher than ever. The reason is that the installed base of capable game platforms is architecturally compatible—meaning that PlayStation 4, Xbox One, and PCs are essentially architecturally compatible. As a result, the footprint for developers has grown tremendously over previous generations. I mean, this is a relatively new dynamic. As a result, the quality of games increases, which means the consumption of GPU capability goes up with it. I’m excited about the next-generation consoles coming soon, allowing game content providers to aim even higher. That's supportive of long-term expansion of our gross margins and ASPs of PC gaming. Some other dynamics are also at play, like eSports—not just an interest but a global phenomenon, particularly powerful in Asia and across developing countries. We're also seeing positive results with VR as we get more solid content out there, and we just launched NVIDIA Ansel, the world’s first in-game photography system, allowing users to create VR photographs. To summarize, how much of the installed base has upgraded to Pascal? 
Very small, of course, because we just started ramping production, but only a third has upgraded to Maxwell, so there's a significant upgrade opportunity ahead.
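The upgrade math Jen-Hsun sketches is simple to lay out; the figures below are the approximate ones from the call:

```python
# Approximate figures from the call: ~80M active GeForce users,
# only about a third of whom have upgraded to Maxwell.
installed_base = 80_000_000
on_maxwell = installed_base // 3                 # ~26.7M on Maxwell
pre_maxwell = installed_base - on_maxwell        # ~53.3M on older GPUs

print(f"Still on pre-Maxwell GPUs: ~{pre_maxwell / 1e6:.0f}M users")
# Since Pascal is a new generation, even Maxwell owners are upgrade
# candidates, so the addressable pool is effectively the full ~80M base.
```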
Operator
And your next question comes from the line of Stephen Chin.
Hi. Thanks for taking my questions. Jen-Hsun, the first one if I could on the datacenter competitive landscape; early this week, we saw one of your datacenter competitors make an acquisition of a smaller private company. I was wondering if you could talk a bit more about how you view your position in the datacenter market with respect to machine learning and AI. How are your products positioned for high-end or low-end machine learning application performance?
Sure. Thanks. Well, as you can imagine, we have a good pulse on the state of the industry. We've been in this industry since the very beginning. Deep learning was really ignited when pioneering researchers around the world discovered the use of GPUs to accelerate deep learning and made it practical. The GPU was a perfect match because the nature of the GPU is a sea of small processors that can communicate with each other simultaneously. That architectural innovation has been the source of our GPU computing initiative for about a decade. If you look at deep learning today, five years later, it's clear that deep learning has been infused into almost every Internet service, making them smarter and more delightful to consumers. The hyperscale adoption of deep learning is broad, large-scale, and global. This new computing approach is significant long-term, which is why five years ago we started making significant investments across the entire stack of our company. GPU computing is not just the GPU chip; it's GPU architecture, the GPU's design, the GPU system, and all the algorithms that run on top of it, as well as the frameworks that facilitate deep learning. We've dramatically improved deep learning on GPUs over the last two generations. When we started this, we were on Kepler; Maxwell improved deep learning performance ten-fold from Kepler, and Pascal improved it another ten-fold from Maxwell. In just two generations, we've made substantial improvements. Our strategy is not just to focus on the GPU and expertise in parallel computing, but to maintain a singular architecture approach for deep learning. We've placed all our investment behind one architecture that's available from hyperscale to datacenters to workstations to notebooks to PCs to cars to embedded computers like our completely integrated high-performance computer in a box, the NVIDIA DGX-1. Our lead in the market is substantial, but we're not resting on our laurels. We've been investing significantly. 
Over the next several years, I think you will continue to see remarkable advancements in this area.
Operator
And your next question comes from the line of Romit Shah.
Yes, thank you. I had a question on automotive. You mentioned that DRIVE PX is now shipping to 80 car companies. Jen-Hsun, I'm curious: Are the wins here similar in size and focusing more on prototyping, or are there opportunities that could ultimately translate into full production wins and drive the automotive business disproportionately?
Well, I appreciate the question. We've just started this quarter shipping DRIVE PX 2. Before I answer your question, let me explain what DRIVE PX 2 is. DRIVE PX 2 is a processor designed for various levels of automation in vehicles. It has the capability to power an autopilot, an AI co-pilot, or an entirely self-driving car. It performs sensor fusion, SLAM (localization and mapping), and detection using deep neural nets. All cameras around the car feed into this processor, which performs real-time inference for environmental awareness. This quarter we started shipping them to our partners and developers to begin developing their software and systems around our computer and on our software stack. We intend to ship in volume production, but precise timelines vary based on customer schedules, which can range from immediate to over the next couple of years. Developing a self-driving car is a significant undertaking, not a casual project. We expect our customers to develop a range of solutions, from standard cars to shuttles and trucks, which are vital for global commerce.
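The per-frame loop Jen-Hsun describes (sensor fusion, then localization, then deep-neural-net detection) can be caricatured as a pipeline. Everything below is a hypothetical stub for illustration; the names and stages are mine, not the DriveWorks API:

```python
# Hypothetical sketch of the per-frame loop on an autonomous-driving
# computer: fuse sensors, localize, detect, return results. All stage
# logic is stubbed -- this shows data flow, not NVIDIA's actual code.
from dataclasses import dataclass

@dataclass
class Frame:
    cameras: list      # raw images from the cameras around the car
    lidar: object      # point cloud (stubbed)

def fuse(frame):                  # sensor fusion
    return {"cameras": frame.cameras, "lidar": frame.lidar}

def localize(fused, prior_pose):  # SLAM: localization against a map
    return prior_pose             # stub: pose unchanged

def detect(fused):                # deep-neural-net inference (stubbed)
    return ["car", "pedestrian"]

def step(frame, pose):
    fused = fuse(frame)
    pose = localize(fused, pose)
    obstacles = detect(fused)
    return pose, obstacles

pose, obstacles = step(Frame(cameras=[b""] * 6, lidar=None), pose=(0.0, 0.0, 0.0))
print(pose, obstacles)
```

In a real system each stage would run as accelerated inference and the loop would be bound to a hard real-time deadline per camera frame.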
Operator
And your next question comes from the line of Craig Ellis.
Yeah. Thanks for taking the question. The first is just a follow up on the Gaming strength in the quarter. With the company launching the Founders Edition availability of Gaming products in the quarter, can you talk about how that went and how gross margins compare to chip sales that would go into a gaming card OEM?
Well, the Founders Edition is engineered by NVIDIA, completely built by NVIDIA, sold directly by NVIDIA, and supported by NVIDIA. Some customers prefer a direct relationship with us. Its availability is limited, and it's engineered at the highest possible quality. We limit production because we have a network of partners who can distribute our architecture globally, providing different sizes, shapes, styles, thermal solutions, and configurations. We believe this diversity contributes to the popularity of the NVIDIA GeForce platform. Regarding gross margins, they are broadly comparable.
Operator
And your next question comes from the line of Matt Ramsay.
Yes. Good afternoon. Thank you. Jen-Hsun, I wanted to ask a couple of questions again on the datacenter business. The first being, we've done a little bit of work trying to estimate what the long-term server attach rates for accelerators in general could be, and for GPUs within that. It would be really interesting to hear your perspectives on that. Secondly, is there a market for an APU-type product in the datacenter? I know you guys have Project Denver and some other things regarding the CPU perspective, but is there a deep learning integrated CPU/GPU play that might open up more long-term TAM for your company that you guys are considering pursuing? Thank you.
Sure. First of all, the types of workloads in datacenters have changed significantly. In the past, it was largely about database searches. Now it's not just text or data; the majority of what's going through the Internet and datacenters today includes images, voice, and increasingly, live video. If you think about live video, it must be processed in real time, requiring substantial AI capability in the datacenter. Regarding how much GPU capacity would be necessary, it's challenging to predict exact numbers, but my sense is that the growth opportunity for deep learning is considerable. Every hyperscale datacenter will be GPU-accelerated for both training and inference. As for APUs, energy efficiency is vital. While deep learning workloads increasingly require GPU acceleration, there is still demand for high-performance CPUs, and Intel remains the leader in CPU performance; it is hard to argue otherwise.
Operator
And your next question comes from the line of Ian Ing.
Yes. Thank you. So earlier you talked about taping out all the Pascal products at this point. Are you ceding the sub-$250 price point for cards to competition, or is this something you can serve with older Maxwell product or an upcoming product? Thanks.
Yeah, thanks a lot, Ian. We have taped out, verified, and ramped every Pascal GPU. That's right. However, we have not introduced all of them.
Operator
And your next question comes from the line of Steve Smigie.
Great. Thanks a lot for the question. I just wanted to follow up a bit on virtual reality. You guys have talked a bit about investments there, and I was just curious about the reception you're getting at this point. What will be the biggest driver getting that going? Is it more headsets or more developers working on that? Thank you.
Yeah, Steve, I think it's all of that. We must keep pushing VR and get headsets out to the world. I think HTC Vive is doing a great job and Oculus, of course, is doing well. We're tracking the headsets closely, and adoption is growing. The content is captivating, and we need more of it as developers worldwide are increasingly engaged with VR. VR is not just about games; one area we're seeing success in is enterprise and industrial design—like medical imaging and architecture. We use our photorealistic renderer, Iray, which is fully GPU-accelerated, to design our own workspace in VR. The result is a photorealistic environment that’s truly immersive. The design and architecture sectors are likely to benefit. I see broad-based VR adoption. An important technical advancement we made is multi-resolution rendering on Pascal; it's the first architecture that supports multiple projections simultaneously for VR and various displays. We also integrated physics simulation into VR, which enhances the experience because you experience a more realistic environment with haptics. Our standing in VR is strong, and I am enthusiastic about its development.
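On the multi-resolution rendering point: the idea is to shade the periphery of each eye's viewport at reduced resolution, since the VR lens compresses it anyway. A back-of-the-envelope sketch of the pixel savings, with made-up example numbers rather than NVIDIA's actual parameters:

```python
# Back-of-the-envelope savings from rendering the periphery of a VR
# viewport at reduced resolution. All numbers are illustrative only.
width, height = 1512, 1680     # per-eye render target (example size)
center_frac = 0.6              # central region kept at full resolution
periphery_scale = 0.5          # periphery at half resolution per axis

full_pixels = width * height
center_pixels = (width * center_frac) * (height * center_frac)
periphery_pixels = (full_pixels - center_pixels) * periphery_scale ** 2

shaded = center_pixels + periphery_pixels
print(f"Pixels shaded: {shaded / full_pixels:.0%} of naive full resolution")
```

With these illustrative parameters roughly half the shading work is saved per eye; the real ratios depend on the headset optics and the chosen region sizes.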
Operator
And your next question comes from the line of Vijay Rakesh.
Hi, guys. Thanks. Just on the datacenter side, Jen-Hsun, you mentioned three key segments: HPC, GRID, and deep learning. What percent of the mix are those for the datacenter?
I would say it's about half deep learning at the moment, probably a third is high-performance computing, and the rest is virtualization. Going forward, deep learning is likely to become a very significant part of that. It's also important to note that deep learning is not just for Internet service providers doing voice recognition, image recognition, and face recognition. Deep learning is a way of using mathematics and software to reveal insights from vast amounts of data, and supercomputing centers generate huge amounts of data without the ability to sort through it effectively, which deep learning allows.
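Combining that rough mix with the record $151 million of datacenter revenue reported earlier gives ballpark segment sizes. This is my arithmetic on the approximate fractions given, not disclosed figures:

```python
# Ballpark split of the $151M datacenter quarter using the rough mix
# described on the call: ~1/2 deep learning, ~1/3 HPC, remainder GRID.
datacenter_rev = 151  # $ millions

deep_learning = datacenter_rev / 2
hpc = datacenter_rev / 3
grid = datacenter_rev - deep_learning - hpc

print(f"Deep learning ~${deep_learning:.0f}M, HPC ~${hpc:.0f}M, GRID ~${grid:.0f}M")
```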
Operator
And your next question comes from the line of Harlan Sur.
Good afternoon, and solid job on the quarterly execution. You guys had really good growth in Professional Visualization with record revenues. I would've thought most of the growth was driven by the upcoming release of the Pascal-based P5000 and P6000 family. I was surprised to hear that most of the demand was driven by your current generation M6000 family, which means the Pascal demand cycle is still ahead of you. Number one, is that a fair view? And then what's driving the strong adoption of M6000? If you haven't already released it, when do you expect to launch the Pascal-based P5000 and P6000 family? Thank you.
I appreciate the question, Harlan. Your observation is right; it's coming from several sources. Design increasingly focuses on product aesthetics, where the look of the product is just as significant as its mechanical design. This applies to a range of products, whether consumer electronics or architectural design. The computational load to simulate these details is quite significant. We're seeing demand grow for Quadro GPUs based on creative software packages users rely on for photorealistic rendering. Coupled with the growing integration of GPUs in computers, it means that designers can trust in existing GPU capabilities to enhance their work. This virtuous cycle of GPU utilization is indeed starting to show results. The second point is Maxwell was the most energy-efficient GPU until Pascal. It enabled the trend toward lighter, thinner designs in workstations. As for Pascal, it's ramping into workstations globally, and I expect that the dynamics I mentioned—software development, our photorealistic rendering capabilities, and GPUs' energy efficiency—will be beneficial for the workstation market. VR adoption is also relevant for these design applications.
Operator
And your next question comes from the line of Ross Seymore.
Hi, guys. Thanks for letting me ask a question. I have a couple for you, Jen-Hsun, on the automotive side. The first part would be; we've seen some partnerships formed with some of your competitors and customers dissolving recently. How does NVIDIA play in this ecosystem with respect to forming partnerships? The second part: If we put a ballpark year on it, when do you think the autonomous driving part of your automotive business will exceed the infotainment side?
We play in a graceful, friendly, and open way. Building an autonomous driving car is a software problem. It's a complicated problem, and it's not logical for one company to claim ownership over it. We believe that self-driving cars are best approached as a software challenge, with each company in control of its own destiny. This belief is why DRIVE PX 2 is an open platform on which each car manufacturer can develop its own vehicles. The architecture is scalable, spanning application requirements from autopilot to fully autonomous vehicles. I think the computation necessary for this accomplishment is diverse and substantial, much like voice recognition, whose computation needs have escalated over the last four years. To categorize autonomous vehicles and self-driving capabilities as a single problem is impractical, as many companies have different visions and approaches. Every company brings its perspective to the solution. It's clear AI will be integral to this endeavor. As for the timeline, each automotive client's objectives vary widely; therefore, the rates of deployment differ too. Some partners will deploy soon, while others have a longer timeline.
Operator
And your next question comes from the line of Joseph Moore.
Great. Thank you so much. You talked about deep learning in the hyperscale environment, but you seem to be gaining traction in the enterprise environment as well. Can you discuss your progress and what it takes to build a presence in traditional enterprises?
Deep learning isn't just for Internet services; it’s also supercharged machine learning. It's about finding insights in unstructured, high-dimensional data. Every sector, be it life sciences, healthcare, retail, manufacturing, or security, deals with vast amounts of data—and deep learning leverages this data. Given that context, I think deep learning has an even broader opportunity in enterprises than in consumer-oriented Internet services. This is why we created the NVIDIA DGX-1—a supercomputer in a box designed for enterprises without the need for extensive in-house capabilities to develop high-performance computing resources for deep learning. The DGX-1 is readily available for enterprises to purchase, complete with integrated software and performance tuning.
Operator
And your next question comes from the line of Ambrish Srivastava.
Hi, thank you very much for squeezing me in. I had a question on gross margin, Jen-Hsun. You guided to a big top line, but gross margin is set to be flat. What is the rationale? I understand that margins don't always correlate directly with revenue, but is it pricing, or yields? What is the reason, given the favorable mix?
Our guidance is our best estimate, and we’ll have more clarity next quarter. But broadly speaking, I agree that as we invest more into our platform approach—specialized and rich with software—the value we deliver becomes significant. The benefits extend beyond just frames per second, providing tangible cost savings as companies reduce server clusters and enhance productivity. This platform approach should deliver value long-term, but for next quarter, let's see how that evolves.
Operator
And your next question comes from the line of Rajvindra Gill.
Thank you.
Rajvindra, how are you?
Doing well, thank you. A question, Jen-Hsun, on the DRIVE PX 2: my understanding is that it's one scalable architecture, from cockpit to ADAS to mapping to autonomous driving. But how does this compare to the approach some of your competitors are taking, providing different solutions for different levels of ADAS systems, specifically with V2X communication for Level 4 autonomous driving?
Great question. The challenge of achieving full autonomy is multi-faceted. Everyone has a different projection of the future and of how to reach it, from completely autonomous vehicles within mapped, geofenced areas to eventual highway autopilot systems. Each company brings its own view of the problem to the table, drawing on varied transportation insights. Meanwhile, it's indisputable that AI will play a critical role in this challenge. Developing such systems requires comprehensive and sophisticated computation, and real-time operation is critical. That demands remarkable software and a computing architecture that meets those needs, which is what our DRIVE PX 2 is designed for. Overseeing the entire computer design matters, and so does close collaboration with partners.
Operator
And your next question comes from the line of Mitch Steves.
Hey. Thanks for taking my question, guys. Circling back to the datacenter piece and the deep learning aspect: Is there a change in ASPs you guys are seeing when you enter that market?
No.
So essentially, there's going to be no margin change from the datacenter sales, and I guess the same question goes for automotive as well.
Automotive ASPs for self-driving cars will be much higher than infotainment offerings due to the complexity of the problem. Most cars have infotainment systems, but few exhibit full autonomy. Therefore, the technology necessary for self-driving cars is more complicated than lane-keeping, adaptive cruise control, or first- and second-generation ADAS systems.
Operator
Your next question comes from the line of Brian Alger.
Hi, guys. Thanks for squeezing me in. Congratulations on a fantastic quarter and guidance. I want to come back to the difference between Pascal and the competition, specifically Intel and AMD. There has been documentation of the power-requirement differences between Pascal and Polaris, and while power efficiency is essential in gaming, one might think it would be even more crucial for deep learning applications. Could you address this aspect of the design, power efficiency, and its impact when scaling up to complex problems?
Thank you, Brian. I appreciate your support. Energy efficiency is paramount in modern processors and will be crucial going forward, because every environment, even a PC with a 750-watt or 1,000-watt supply, runs the risk of being power-constrained. In more limited environments, like drones, we may have only one or two watts to work with. Even within datacenters, when training neural networks or performing inference, energy efficiency becomes critical as the number of GPUs in use multiplies into the tens of thousands. Energy efficiency is thus a leading consideration. In addition, performance hinges on architectural features, and the architectural changes we've made in Pascal are groundbreaking, ideally suited to what users demand in deep learning. Our GPU computing initiative encompasses the entire processing system, from architecture to software, storage, and networking, providing the highest-throughput computing on the planet. This initiative helped us secure contracts to build the next two fastest supercomputers in the world, effectively leading the market.
Operator
And your next question comes from the line of Blayne Curtis.
Hey, guys. Thanks for squeezing me in here, and great execution on the quarter. Two related questions. Colette, I'm curious about your view on capital return; buybacks have obviously been a part of it, but there was only $9 million of repurchases last quarter. How will that evolve going forward? And Jen-Hsun, a larger question on capital allocation: you've mentioned that CPUs are not something you're looking to enter, but I was curious whether you're considering other areas where you could provide value in the datacenter?
The return of capital remains a crucial component of our shareholder value strategy. Both dividends and share repurchases are essential. Looking ahead, we'll ensure that the dividend remains on a long-term trajectory, keeping competitiveness and our profitability in mind. As for share repurchase, we'll look for opportune moments to execute these transactions.
As for capital allocation, NVIDIA excels in vision, creativity, and sustained innovation, which forms our core strategy. Our use of capital focuses on nurturing our talent and enabling innovation opportunities for our employees to change the world with their creative endeavors. We do not shy away from acquisitions and have continuous engagement with partners of various sizes globally to advance our field; however, our primary strategy remains investment in our own talent and capabilities.
Operator
And your next question comes from the line of C.J. Muse.
Yeah. Good afternoon. Thank you for squeezing me in. I have two quick questions. Thank you for breaking out deep learning as a percentage of the datacenter; can you provide what that percentage was for the April quarter? For my follow-up, looking back at your guide from the last four quarters, you've indicated a roughly 50% incremental operating margin. Is that the right way to underwrite? Or should we expect higher margins from improving mix and maturing processes from your foundry partners?
Deep learning is a vital driver of our datacenter performance and comprises a majority of our datacenter revenue, and that share is projected to continue to rise. I believe it's a vast majority, but I can't pinpoint an exact figure. On your second question, incremental operating margins vary, but given deep learning's significance and the push toward our platform approach, there is every reason to anticipate strong margins going forward.
Operator
And your next question comes from the line of Kevin Cassidy.
Thanks for taking my question. Could you speak to Tegra beyond infotainment in automotive?
Tegra is essential to our self-driving car initiative; without Tegra, there would be no self-driving car platform. DRIVE PX 2 relies heavily on Tegra combined with discrete GPUs. We also use Tegra in Jetson, which is aimed at embedded autonomous and intelligent machines. Innovations in autonomous technology will continue to drive future developments there. I'm optimistic about Jetson's prospects, as Tegra remains our most significant AI computer-on-a-chip.

Let me thank you all for your questions. Our growth is driven by several factors: our focus on deep learning, self-driving cars, gaming, and VR, areas where GPUs are vital, is really paying off. Pascal is the most advanced GPU we've ever created, and I'm proud of how successfully we ramped it this quarter. Hyperscale adoption of deep learning is now widespread and continues to grow globally. As we go forward, I look forward to sharing our many developments in deep learning and how it's scaling in the market. Thank you for joining us today.
Operator
This concludes today's conference call. You may now disconnect.