NVIDIA Corp (NVDA) — Q1 2021 Earnings Call Transcript
Operator
Good afternoon. My name is Josh, and I will be your conference operator today. I would like to welcome everyone to NVIDIA's Financial Results Conference call. Simona Jankowski, you may begin your conference.
Simona Jankowski
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2021. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2021. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may vary materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Form 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 21, 2020, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Jensen.
Jensen Huang
Thanks, Simona. Before Colette describes our quarterly results, I'd like to thank those who are on the front lines of this crisis: first responders, health care workers, and service providers, who inspire us every day with their bravery and selflessness. I also want to acknowledge the incredible efforts of our colleagues here at NVIDIA. Despite many challenges, they have barely broken stride during one of the busiest periods in our history. Our efforts related to the virus are focused on three areas. First, we're taking care of our families and communities. We've pulled in raises by six months to put more money in our employees' hands, and NVIDIA and our people have donated thus far more than $10 million to those in need. Second, we're using NVIDIA's unique capabilities to fight the virus. A great deal of science being done on COVID-19 uses NVIDIA technology for acceleration when every second counts. Some examples include sequencing the virus, analyzing drug candidates, imaging the virus at molecular resolution with cryo-electron microscopy and identifying elevated body temperature with AI cameras. And third, because COVID-19 won't be the last killer virus, we need to be ready for the next outbreak. NVIDIA technology is essential for the scientific community to develop an end-to-end computational defense system, a system that can detect early, accelerate the development of a vaccine, contain the spread of disease, and continuously test and monitor. We are racing to deploy the NVIDIA Clara computational healthcare platforms, where Clara Parabricks can accelerate genomics analysis from days to minutes. Clara Imaging will continue to partner with leading research institutes to develop state-of-the-art AI models to detect infections, and Clara Guardian will connect AI to cameras and microphones in hospitals to help overloaded staff watch over patients. We completed the acquisition of Mellanox on April 27.
Mellanox is now NVIDIA's networking brand and business unit and will be reported as part of our data center market platform, and Israel is now one of NVIDIA's major technology centers. The new NVIDIA has a much larger footprint in data center computing, end-to-end and full-stack expertise in data center architectures, and tremendous scale to accelerate innovation. NVIDIA and Mellanox are a perfect combination and position us for the major forces shaping the IT industry today, data center scale computing and AI. From microservice cloud applications to machine learning and AI, accelerated computing and high-performance networking are critical to modern data centers. Previously, a CPU compute node was the unit of computing. Going forward, the new unit of computing is an entire data center. The basic computing elements are now storage servers, CPU servers, and GPU servers, which are composed and orchestrated by hyperscale applications that are serving millions of users simultaneously. Connecting these computing elements together is the high-performance Mellanox networking. This is the era of data center scale computing. And together, NVIDIA and Mellanox can architect end to end. Mellanox is an extraordinary company, and I'm thrilled that we're now one force to invent the future together. Now let me turn the call over to Colette.
Colette Kress
Thanks, Jensen. Against the backdrop of the extraordinary events unfolding around the globe, we had a very strong quarter. Q1 revenue was $3.08 billion, up 39% year-on-year, down 1% sequentially and slightly ahead of our outlook, reflecting upside in our data center and gaming platforms. Starting with gaming, revenue of $1.34 billion was up 27% year-on-year and down 10% sequentially. We are pleased with these results, which exceeded expectations in a quarter marked by the unprecedented challenges of COVID-19. Early in Q1, as the epidemic unfolded, demand in China was impacted, with iCafes closing for an extended period. As the virus spread globally, much of the world started working and learning from home, and gameplay surged. Globally, we have seen a 50% rise in gaming hours played on our GeForce platform, driven both by more people playing and more gameplay per user. With many retail outlets closed, demand for our products has shifted quite efficiently to e-tail channels globally. Gaming laptop revenue accelerated to its fastest year-on-year growth in six quarters. We are working with our OEMs and channel partners to meet the growing needs of the professionals and students engaged in working, learning, and playing at home. In early April, our global OEM partners announced a record 100 new NVIDIA GeForce-powered laptops, with availability starting in Q1 and most shipping in Q2. These laptops are the first to use our high-end GeForce RTX 2080 SUPER and 2070 SUPER GPUs, which have been available for desktop since last summer. In addition, OEMs are bringing to market laptops based on the RTX 2060 GPU at just $999, a price point that enables a larger audience to take advantage of the power and features of RTX, including its unique ray tracing and AI capabilities. These launches are well-timed as mobile and remote computing needs accelerate.
The global rise in gaming also lifted sales of the NVIDIA-powered Nintendo Switch, driving strong growth in our console business both sequentially and year-over-year. We collaborated with Microsoft and Mojang to bring RTX ray tracing to Minecraft, the world's most popular game with over 100 million gamers monthly and over 100 billion total views on YouTube. Minecraft with RTX looks astounding with realistic shadows and reflections and naturalistic effects like fog. Reviews for it are off the charts. Ars Technica called it a jaw-dropping stunner, and PC World said it was glorious to behold. Our RTX technology stands apart, not only with our two-year lead in ray tracing but with its use of AI to speed up and enhance games using the Tensor Core silicon on our RTX-class GPUs. We introduced the next version of our AI algorithm, called Deep Learning Super Sampling. In real time, DLSS 2.0 can fill in the missing bits of every frame, doubling performance. It represents a major step up from the original, and it can be trained on non-gaming-specific images, making it universal and easy to implement. The value and momentum of our RTX GPUs continue to grow. We have a significant upgrade opportunity over the next year with the rising tide of RTX-enabled games, including major blockbusters like Minecraft and Cyberpunk. Let me also touch on our game streaming service, GFN, which exited beta this quarter. It gives gamers access to more than 650 games, with another 1,500 in line to be onboarded. These include Epic Games' Fortnite, which is the most played game on GFN, and other popular titles such as CONTROL, Destiny 2, and League of Legends. Since launching in February, GFN has added 2 million users around the world, with both sign-ups and hours of gameplay boosted by stay-at-home measures. GFN expands our market reach to the billions of gamers with underpowered devices.
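The idea behind super-sampling can be sketched in a few lines. The toy below upscales a low-resolution frame by duplicating pixels; DLSS 2.0 replaces this naive step with a trained neural network fed by motion vectors, so the function name and frames here are purely illustrative, not NVIDIA's implementation.

```python
# Toy illustration of super-sampling: reconstruct a higher-resolution frame
# from a frame rendered at reduced resolution. Rendering fewer pixels and
# then upscaling is where the performance headroom comes from; DLSS does
# the reconstruction with a neural network instead of pixel duplication.

def upscale_nearest(frame, factor):
    """Upscale a 2D frame (list of rows) by an integer factor
    using nearest-neighbor duplication."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(factor)]  # widen each row
        out.extend([wide] * factor)                       # repeat each row
    return out

low_res = [[1, 2],
           [3, 4]]                         # 2x2 frame rendered at low cost
high_res = upscale_nearest(low_res, 2)    # reconstructed 4x4 output frame
```

The gap between this naive output and a ground-truth high-resolution render is exactly what the learned model is trained to close.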
It is the most publisher-friendly, developer-friendly game streaming service, with the greatest number of games and the only one that supports ray tracing. Moving to Pro Visualization, revenue was $307 million, up 15% year-on-year and down 7% sequentially. Year-on-year revenue growth accelerated in Q1, driven by laptop workstations and Turing adoption. We are seeing continued momentum in our ecosystem for RTX ray tracing. We now have RTX support for all major rendering, visualization, and design software packages, including Autodesk Maya, Dassault's CATIA, Pixar's RenderMan, Chaos Group's V-Ray, and many others. Autodesk has announced that the latest release of VRED, its automotive 3D visualization software, supports NVIDIA RTX GPUs. This enables designers to take advantage of RTX to produce more lifelike designs in a fraction of the time versus CPU-based systems. Over 45 leading creative and design applications now take advantage of RTX, driving a sustained upgrade opportunity for Quadro-powered systems while also expanding their reach. We see strong demand in verticals including health care, media and entertainment, and higher education. Higher health care demand was fueled in part by COVID-19-related research at Siemens, Oxford, and Caption Health. Caption Health received FDA clearance for an update to its AI-guided ultrasound, which makes it easier to perform diagnostic-quality cardiac ultrasounds. And in media and entertainment, demand increased as companies like Disney deployed remote workforce initiatives. Turning to automotive and robotic autonomous machines. Automotive revenue was $155 million, down 7% year-on-year and down 5% sequentially. The automotive industry is seeing a significant impact from the pandemic, and we expect that to affect our revenue in the second quarter as well, likely declining about 40% from Q1. Despite the near-term challenges, our important work continues.
We believe that every machine that moves will someday have autonomous capabilities. During the quarter, Xpeng introduced the P7, an all-electric sports sedan with innovative Level 3 automated driving features powered by the NVIDIA DRIVE AGX Xavier AI compute platform. Our open, programmable, software-defined platform enables Xpeng to run its proprietary software while also delivering over-the-air updates for new driving features and capabilities. Production deliveries of the P7 with NVIDIA DRIVE begin next month. Our Ampere architecture will power our next-generation NVIDIA DRIVE platform, called Orin, delivering more than 6x the performance of Xavier and 4x better power efficiency. With Ampere's scalability, the DRIVE platform will extend from driverless robotaxis all the way down to in-windshield driver-assistance systems sipping just a few watts of power. Customers appreciate the top-to-bottom platform all based on a single architecture, letting them build one software-defined platform for every vehicle in their fleet. Lastly, in the area of robotics, we announced that BMW Group has selected the NVIDIA Isaac robotics platform to automate its factories, utilizing logistics robots built on advanced AI computing and visualization technologies. Turning to data center. Quarterly revenue was a record $1.14 billion, up 80% year-on-year and up 18% sequentially, crossing the $1 billion mark for the first time. Announced last week, the A100 is the first Ampere-architecture GPU. Although just announced, the A100 is in full production, contributing meaningfully to Q1 revenue, and demand is strong. Overall, data center demand was solid throughout the quarter. It was also broad-based across hyperscale and vertical industry customers as well as across workloads, including training, inference, and high-performance computing. We continue to have solid visibility into Q2.
The A100 offers the largest leap in performance to date over our eight generations of GPUs, boosting performance by up to 20x over its predecessor. It is exceptionally versatile, serving as a universal accelerator for the most important high-performance workloads, including AI training, inference, data analytics, scientific computing, and cloud graphics. Beyond its leap in performance and versatility, the A100 introduces new elastic computing technologies that make it possible to bring right-sized computing power to every job. A multi-instance GPU capability allows each A100 to be partitioned into as many as seven smaller GPU instances. Conversely, multiple A100s interconnected by our third-generation NVLink can operate as one giant GPU for ever-larger training tasks. This makes the A100 ideal for both training and inference. The A100 will be deployed by the world's leading cloud service providers and system builders, including Alibaba Cloud, Amazon Web Services, Baidu Cloud, Dell Technologies, Google Cloud Platform, HPE, and Microsoft Azure, among others. It is also being adopted by several supercomputing centers, including the National Energy Research Scientific Computing Center, the Jülich Supercomputing Centre in Germany, and Argonne National Laboratory. We launched and shipped the DGX A100, our third-generation DGX and the most advanced AI system in the world. The DGX A100 is configurable from one to 56 independent GPU instances to deliver elastic, software-defined data center infrastructure for the most demanding workloads, from AI training and inference to data analytics. We announced two products for edge AI: the EGX A100 for larger commercial off-the-shelf servers and the EGX Jetson Xavier NX for micro-edge servers. Supported by fully AI-optimized, cloud-native, and secure software, the EGX platform is built for AI computing at the edge.
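The multi-instance GPU capability described above is exposed through NVIDIA's driver tooling. As an illustrative configuration sketch (it assumes an A100 with a MIG-capable driver; profile IDs and instance sizes vary by GPU and driver release), partitioning one board into independent instances might look like:

```shell
# Illustrative MIG configuration commands (require root and A100 hardware;
# shown as a sketch, not a verified recipe for any specific driver version).
nvidia-smi -i 0 -mig 1                 # enable MIG mode on GPU 0
nvidia-smi mig -i 0 -lgip              # list the GPU instance profiles available
nvidia-smi mig -i 0 -cgi 19,19,19 -C   # create three small instances
                                       # (19 is assumed here to be a 1g.5gb profile)
nvidia-smi mig -lgi                    # verify the created GPU instances
```

Each resulting instance has its own compute and memory slice, which is what lets a single A100 serve several right-sized inference jobs at once.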
With the EGX, hospitals, retail stores, farms, and factories can securely carry out real-time processing of the massive amounts of data streaming from trillions of edge sensors. NVIDIA EGX makes it possible to securely deploy, manage, and update fleets of servers remotely. EGX is also ideal for the massive computational challenge of 5G networks, which we are working on with partners like Ericsson and Mavenir. Additionally, we announced CUDA 11 and other important software harnessing the A100's performance and universality to accelerate three of the most complex and fast-growing workloads: recommendation systems, conversational AI, and data science. First, NVIDIA Merlin is a deep recommender application framework that enables developers to quickly build state-of-the-art recommendation systems, leveraging our pretrained models. With billions of users and trillions of items on the Internet, deep recommenders are the critical engine powering virtually every internet service. Second, NVIDIA Jarvis is a GPU-accelerated application framework that makes it easy for developers to create, deploy, and run end-to-end, real-time conversational AI applications that understand terminology unique to each company and its customers, using both vision and speech. Demand for these applications is surging amid the shift to working from home, telemedicine, and remote learning. And third, in the field of data science and data analytics, we announced that we are bringing end-to-end GPU acceleration to Apache Spark, an analytics engine for big data processing used by more than 500,000 data scientists worldwide. Native GPU acceleration for the entire Spark pipeline, from extracting, transforming, and loading the data to training and inference, delivers the performance and scale needed to finally connect the potential of big data with the power of AI. Adobe has achieved a 7x performance improvement and a 90% cost savings in an initial test using GPU-accelerated data analytics with Spark.
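The end-to-end Spark acceleration described here ships as the RAPIDS Accelerator for Apache Spark. As a rough configuration sketch (the job script name is a placeholder and exact settings depend on the release deployed), enabling it is a matter of Spark configuration rather than application-code changes:

```shell
# Illustrative spark-submit configuration for the RAPIDS Accelerator.
# Paths, versions, and resource amounts are placeholders; consult the
# documentation for the Spark and plugin versions actually in use.
spark-submit \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  my_etl_job.py
```

Because the plugin rewrites Spark SQL plans onto the GPU transparently, existing ETL and ML pipelines can pick up the acceleration without being rewritten, which is the point Colette makes about the entire pipeline.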
Our accelerated computing platform continues to gain momentum, underscored by the tremendous success of GTC Digital, our annual GPU Technology Conference, which shifted this spring to an online format. More than 55,000 developers and AI researchers registered for the online event, which included hundreds of hours of free content from AI practitioners and industry experts who leverage NVIDIA's platforms. Our ecosystem is now 1.8 million developers strong. Times like these truly test a computing platform's strength in the utility it brings to scientists racing for solutions. Researchers around the world are deploying our GPU computing platform in the fight against COVID-19. Scientists are combining AI and simulations to detect pneumonia cases, sequence the virus, and seek effective biomolecular compounds for a vaccine or treatment. The first breakthrough came from researchers at the University of Texas at Austin and the National Institutes of Health, who used GPU-accelerated software to create the first 3D atomic-scale map of the virus using NVIDIA GPUs. This was followed by researchers at Oak Ridge National Laboratory, who screened 8,000 compounds to identify 77 promising drug targets using the world's fastest supercomputer, Summit, which is powered by more than 27,000 NVIDIA GPUs. The V100 GPUs at Oak Ridge are in high demand, as they can analyze 17 million compound-protein combinations in a day, helping researchers rapidly screen potential treatments. University of California, San Diego researchers ported their microbiome analysis software to GPUs at the San Diego Supercomputer Center, achieving a 500x analysis speedup. Moving to the rest of the P&L. Q1 GAAP gross margin was 65.1% and non-GAAP gross margin was 65.8%, up sequentially and year-on-year, primarily driven by GeForce GPU product mix and higher data center sales. Q1 GAAP operating expenses were $1.03 billion, and non-GAAP operating expenses were $821 million, up 10% and 9% year-on-year, respectively.
Q1 GAAP EPS was $1.47, up 130% from a year earlier, and non-GAAP EPS was $1.80, up 105% from a year ago. Q1 cash flow from operations was $909 million. Before I turn to the outlook, let me make a few comments on our Mellanox acquisition. Beyond the strong strategic and cultural fit that Jensen has discussed, Mellanox has an exceptionally strong financial profile. The company reported revenue of $429 million in its March quarter, accelerating to 40% year-on-year growth, with GAAP and non-GAAP gross margins in the mid- to high-60% range. We expect the acquisition to be immediately accretive to non-GAAP gross margins, non-GAAP earnings per share, and free cash flow. We aim to retain the full Mellanox team and accelerate investments in our combined roadmap as we jointly innovate on our shared vision for the future of accelerated computing. With that, let me turn to the outlook of the second quarter of fiscal 2021, which includes a full quarter contribution from Mellanox. We have assumed in our outlook the potential ongoing impact from COVID-19. We expect our automotive platform sales to be down 40% on a sequential basis and Pro Viz to decline sequentially. In gaming, while we will likely see ongoing impact from the partial operations or closures of iCafes and retail stores, we expect that to be largely offset by a shift to e-tail channels. Overall, the precise magnitude of the impact is difficult to predict, given uncertainties around the reopening of the economy. Overall, we expect second quarter revenue to be $3.65 billion, plus or minus 2%. The contribution of Mellanox revenue is likely to be in the low teens percentage range of our total Q2 revenue. We are providing this breakout to help with comparability between Q1 and Q2. But going forward, it will become an integrated part of our data center market platform. GAAP and non-GAAP gross margins are expected to be 58.6% and 66%, respectively, plus or minus 50 basis points. 
The sequential decline in GAAP gross margins primarily reflects an increase in acquisition-related costs, most of which are nonrecurring. GAAP and non-GAAP operating expenses are expected to be approximately $1.52 billion and $1.04 billion, respectively. The sequential change in GAAP operating expenses reflects an increase in stock-based compensation and acquisition-related costs. GAAP and non-GAAP operating expenses for the full year are expected to be approximately $5.7 billion and $4.1 billion, respectively. For the full year, stock-based compensation and acquisition-related costs will also influence operating expenses. GAAP and non-GAAP OI&E are both expected to be expense of approximately $50 million and $45 million, respectively. GAAP and non-GAAP tax rates are both expected to be 9%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $225 million to $250 million. Further financial details are included in the CFO commentary and other information available on our IR website. New this quarter, we have also posted an investor presentation summarizing our results and key highlights. In closing, let me highlight upcoming events for the financial community. Next Thursday, May 28, we will webcast a presentation and Q&A with Jensen on our recent product announcements, moderated by Evercore. We will also be at Cowen's TMT Conference on May 27; Morgan Stanley's Cloud Secular Winners Conference on June 1; BofA's Technology Conference on June 2; Needham's Fourth Automotive Technology Conference on June 3; and the Nasdaq Investor Conference on June 16.
Simona Jankowski
Operator, we will now open the call for questions. Can you please poll for questions?

Operator
Your first question comes from Aaron Rakers with Wells Fargo.
Aaron Rakers
Congratulations on a solid quarter. Colette, I'm curious about your commentary around visibility on the data center side. How would you characterize your visibility today relative to maybe what it was last quarter? And how do we think about the visibility in the context of trends maybe into the back half of the calendar year?
Colette Kress
Thanks, Aaron, for the question. You are correct. We indicated a couple of quarters ago that we were starting to see improved visibility after we came out of the digestion period in the prior fiscal year. As we move into Q2, we still have solid visibility into our Q2 results for overall data center. So at this time, I'd say it is relatively about the same as what we had seen going into the Q1 period. We think that is a true indication of customers' excitement about our platform, particularly regarding the A100 and the products based on it. Now regarding the second half of the year, as you know, we have seen broad-based growth in both the hyperscale and the vertical industries, both of which hit record levels in our Q1 results. We see inferencing continuing to grow, and we are expanding in terms of edge AI. Our strong demand for the A100 products, including the Delta board, is just starting its initial ramp. However, we do guide only one quarter at a time. So it's still a little too early for us to speak with true certainty, given the macro situation in front of us. But again, we feel very good about the demand for the A100.
Operator
Your next question comes from Stacy Rasgon with Bernstein Research.
Stacy Rasgon
I first wanted to follow up on your gaming commentary. You mentioned a couple of offsets: COVID potentially still a headwind, e-tail a tailwind, and maybe offsetting each other. Were you trying to suggest that those offset completely and gaming would be roughly flattish into Q2? Because I know it has a typical seasonal pattern, which may switch up this time. I guess, what were you trying to say with those factors? And what are the kinds of things we should be thinking about when it comes to seasonality, Colette, into Q2 around that business segment?
Colette Kress
So let me start, and I'll see if Jensen also wants to add on to it. I think you're talking about our sequential between Q1 and Q2. Some of the pieces that we saw related to COVID-19 in Q1 may carry over into Q2. COVID-19, in fact, had an impact in terms of our retail channels as well as our iCafes. However, as we discussed, we efficiently moved to e-tail. We have normally been seasonally down in desktop between Q1 and Q2, and that will likely happen. But we do see strength in laptops and consoles as we move from Q1 to Q2. In summary, we expect to grow sequentially between Q1 and Q2 for our overall gaming business. And I'll turn it over to Jensen to see if he has additional commentary.
Jensen Huang
No, that was great. That was fantastic.
Stacy Rasgon
I just want to follow up on that. If it's growing, we've seen it grow very strong double digits in prior years, even though the business mix was different then. Are we thinking it might be somewhat up? Is there any possibility that it could reach the typical levels we've seen in the past? Any sense of the magnitude would be really helpful.
Colette Kress
Yes. I think when we think about that sequential growth, we'll probably be in the low to mid-single digits; that's what our guidance reflects right now, and we'll just have to see how the quarter goes.
Stacy Rasgon
Yes. That's very helpful.
Jensen Huang
Stacy, the thing that I would add is this. I'd say the guidance is exactly what Colette mentioned. But if you look at the big picture, there are a few dynamics working in our favor. First, of course, is that RTX and ray tracing is just a home run. Minecraft was phenomenal. We have 33 games in the pipeline that have already been announced or are shipping. Just about every game developer has signed on to RTX and ray tracing, and I think it's a foregone conclusion that this is the next generation. This is the way computer graphics is going to be in the future. And so I think RTX is a home run. Second, the notebooks that we create are just doing great. We've got 100 notebooks in gaming. We have 75 notebooks designed for either mobile workstations or what we call NVIDIA Studio for designers and creators. The timing was just perfect. With everybody needing to stay at home, the ability to have a mobile gaming platform and a mobile workstation was just perfect timing. And then, of course, you guys know quite well that the Nintendo Switch is doing fantastic. The top three games in the world today are Fortnite, Minecraft, and Animal Crossing. All three games are on NVIDIA platforms. And so I think we have all the dynamics working in our favor. And then we just have to see how it turns out.
Operator
Your next question comes from Joe Moore with Morgan Stanley.
Joe Moore
I wanted to ask about the rollout of Ampere and how quickly it will roll into various segments, including hyperscale as well as DGX and HPC. Is it a smooth transition? I remember when you launched Volta, there was a little bit of a transitional pause. Can you tell us how you see that ramping with the different customer segments?
Jensen Huang
Yes. Thanks a lot, Joe. So first of all, taking a step back. Accelerated computing is now common sense in data centers. That wasn't the case when we first launched Volta. Volta was the first generation that introduced deep learning training in a really serious way, and it was really focused on training and high-performance computing. We didn't come until later with the inference version, called T4. Over the course of the last five years, we've been accelerating workloads that are now diversifying in data centers. Most of the hyperscalers now use machine learning. Deep learning is now pervasive. The notion of accelerating deep learning and machine learning using our GPUs is now common sense. It didn't use to be; people still saw it as something esoteric. But today, data centers all over the world expect a very significant part of their data center to be accelerated with GPUs. The number of workloads we've accelerated in the last five years has expanded tremendously, whether it's imaging, video, conversational AI, or deep recommender systems. So the number of applications we now accelerate is quite diverse. I think the transition will be really smooth. The general sense of it is that the number of workloads for accelerated computing continues to grow, and the adoption of machine learning and AI in cloud and hyperscalers has grown. The common sense of using acceleration is now a foregone conclusion. And so I think we're ramping into a very receptive market with a really fantastic product.
Operator
Your next question comes from Vivek Arya with Bank of America.
Vivek Arya
Congratulations on the strong growth and execution. Just a quick clarification: Colette, is 66% kind of the new baseline for gross margin? And then the question for you, Jensen: can you give us a sense of how much inference as a workload and Ampere as a product are expected to contribute? I'm curious where you are in terms of growing in the inference and edge AI market. Where are we kind of in the journey of Ampere penetration?
Colette Kress
So let me start on the first question regarding gross margin and our gross margin as we look into Q2. We are guiding Q2 non-GAAP gross margins at 66%. This would be another record gross margin quarter, just as we finished an overall record level, even as we are currently ramping our Ampere architecture within that. The Q2 guidance also incorporates Mellanox, which has a very similar overall margin to our data center margins. We see this as a strong new baseline, though it is likely to see some changes as we go forward. It's still a little early to say where these gross margins will go, but we're very pleased with the overall guidance at 66% for Q2.
Jensen Huang
Accelerated computing is just at the beginning of its journey. I would characterize it as several segments. The first is hyperscale AI microservices, which covers all the AI services that we enjoy today. Whenever you shop on the web, it recommends a product; when you're watching a movie, it recommends a movie or a song. The first 10 results you see are ranked by machine learning today; it's the reason these companies are collecting data. As we move forward, AI is transforming every company into a tech company, whether it's logistics, warehousing, or manufacturing. That journey is just starting. We announced three important partners in three domains: Walmart, the U.S. Postal Service, and BMW. These three partnerships are great examples of the next phase of AI and how Ampere is going to ramp into it. So I think it's fair to say that we're really well positioned in the two fundamental forces of IT today: data center scale computing and artificial intelligence. The segments that will make a real impact are all gigantic markets.
Operator
Your next question comes from C.J. Muse with Evercore.
C.J. Muse
I guess if I could ask two. Colette, can you help us with what you think the growth rate for Mellanox could look like in calendar '20? And then Jensen, a bigger-picture question for you, really not specific to health care, more broad-based: how do you think about the long-lasting impact of COVID on worldwide demand for AI?
Colette Kress
C.J., can you help me? You cut out in the middle of your sentence. Can you repeat the first part of it for me?
No, sorry about that. I'm curious if you could provide a little handholding on what we should think about for growth for Mellanox in calendar '20?
At this time, it's a little early for us, and as you know, we generally guide just one quarter out. We're excited to bring the Mellanox team on board so we can start to build products together. In terms of their recent performance, they had a great last year and a great March quarter as well. We'll just have to stay tuned to see what the second half of the year looks like for them.
Yes, C.J., thanks for the question. This pandemic is really tragic, and it's reshaping industries and markets. I believe it will have structural changes. The first is that the world's enterprise digital transformation and moving to the cloud is going to accelerate. Companies can't afford to rely just on on-prem IT anymore. They have to be more resilient. And having a hybrid cloud computing infrastructure is going to provide them with the resilience they need. I wouldn't be surprised to see the acceleration of cloud computing AI because of that. The second is the importance of creating a computational defense system against infectious disease. I think every nation and government is gearing up for what it takes to create a national defense system based on computational methods. Lastly, more people are going to work permanently from home, leading to video games representing a more significant segment of the overall entertainment budget of society. These trends are structural changes that will be here to stay, and they are really good for us.
Operator
Your next question comes from Toshiya Hari with Goldman Sachs.
I had one for Colette and then one for Jensen as well, if I may. Colette, I wanted to come back to the gross margin question. You're guiding July essentially flat sequentially despite what I'm guessing is better mix with Mellanox coming in and automotive guided down 40% sequentially. I guess the question is, what are some of the offsets that are pulling down gross margins in the current quarter? And sort of related to that, how should we be thinking about the cadence of OpEx going forward, given the six-month pull-in that you guys talked about on the compensation side? And then one quick one for Jensen. I was hoping you could comment on the current trade landscape between the U.S. and China. I feel like you guys shouldn't be impacted in a material way, directly or indirectly. But at the same time, given the critical role you play in scientific computing, I can see a scenario where some people may claim that you contribute to efforts outside of the U.S. So if you could speak on that, that would be helpful.
Thanks, Toshiya, for your question. Regarding gross margins in the second quarter, our guidance of 66% is up sequentially from a record level in Q1. This next record that we hope to achieve comes even with the inclusion of our Ampere ramp. Typically, when we transition to new architectures, margins can be a little lower at the onset but tend to move up over time. Additionally, automotive is lower, but we will also see growth in some of our gaming platforms, such as consoles, and those two may offset each other. Overall, there's nothing structural to highlight other than our mix of business and the ramp of Ampere.
Let's see, the trade tension. We've been living in this environment for some time, Toshiya. As you know, the trade tension has been in the background for coming up on a year. China's high-performance computing systems are largely based on Chinese electronics anyhow. So I think our condition won't materially change going forward.
So Toshiya, let me respond to your second question for me, which was regarding our OpEx and our decision to pull forward our focal compensation review into Q2. This is something that we've normally done later in the year, but we felt it was prudent during the current COVID-19 situation. Although our employees are quite safe, we wanted to ensure that their family members were also safe and had the opportunity to have cash up front. It is about four months earlier than normal, and it is incorporated into our guidance for Q2.
Operator
Your next question comes from Mark Lipacis with Jefferies.
I have a question about the A100 and how it fits into the evolution of your solution set as application demand shifts. If I reflect on it, you initially had a solution focused primarily on training. Then you introduced solutions geared more toward inferencing. Now you have a solution that effectively addresses both inferencing and training. I'm curious about the long-term perspective, three, five, or ten years from now: will this be part of the general-purpose acceleration framework you mentioned previously, with a single Ampere-class product? Or should we anticipate seeing separate inferencing-specific solutions, training-specific solutions, and an Ampere solution for different application classes? A framework for understanding Ampere in these contexts would be helpful.
Yes, thanks for the question, Mark. Good question. If you take a step back at how data centers came to be set up, starting probably all the way back six or seven years ago and really accelerating in the last two years, here is what we learned along the way. There are three classes of workloads that we discovered. The first was deep learning training, and the ideal setup for that, prior to Ampere, is the V100 SXM with NVLink, which is designed for scale-up. The second class was cloud computing, with the V100 PCI Express allowing anywhere from one GPU up to many GPUs. The third class was inference, where we currently have TensorRT 7.0. So the number of applications we accelerate has greatly expanded, and that's important because accelerated computing is now well established in data centers. We're in an era where computational workloads are reaching the limits imposed by physics and algorithms. With A100, we have a breakthrough in performance, and we can use it to accelerate workloads from the moment data comes into the data center. The future of acceleration is unified, elastic, and efficient, bridging the demands of numerous diverse workloads.
Operator
Your next question comes from Timothy Arcuri with UBS.
Actually, I had two. I guess, Jensen, first for you, just on the data center business: things have been very strong recently. Obviously, there are always concerns that customers are pulling in CapEx, but it sounds like you have pretty good visibility into July. Last time, most folks thought your risk was so low that you would be immune to any digestion, but that wasn't the case. I'm wondering if things are different now with A100 and whatnot. How do you handicap your ability to get through any digestion on the CapEx side this time? And then, Colette, stock compensation had been running at about $220 million a quarter, and the guidance implies it goes up to about $460 million a quarter. Is that all executive retention? And is that the right level as you look into 2021?
Sure. So let me help you with the delta between our GAAP OpEx and our non-GAAP OpEx. For the full year as we guided, we have about $1.55 billion of GAAP-level adjustments. Keep in mind, there is more in there than just our stock-based compensation. We have also incorporated the accounting for Mellanox, and a portion of those costs is associated with the amortization of intangibles, acquisition-related costs, and one-time items. Our stock-based compensation includes what we need for NVIDIA as well as for onboarding Mellanox. There is some retention with the onboarding of Mellanox, but for the most part it is simply including them in the year for three quarters, which is what's influencing the stock-based compensation.
Tim, there are several differences between conditions then and now. The first difference is the diversity of the workloads that we now accelerate. Back then, we were still early in inference, and most data center acceleration was used for deep learning. Today, the versatility spans from data processing to deep learning, and the number of different types of AI models being trained is growing tremendously. When we introduced Ampere to data centers, it was easy for them to adopt because they have a large amount of workload that's already accelerated by NVIDIA GPUs. Our GPUs are architecturally compatible across generations: everything that runs on T4 runs on A100, and everything that runs on V100 runs on A100. So I think the transition will be smooth.
Operator
Your next question comes from Harlan Sur with JPMorgan.
Jensen, the team has shown the importance of networking and the networking fabric with the Mellanox acquisition. How does Mellanox's Ethernet switching platform differ from those provided by the other large networking OEMs? And how does the Cumulus acquisition fit into the switching and networking strategy?
Yes, thanks for the question, Harlan. High-performance networking and high-performance computing go hand in hand. The problems we're solving no longer fit in one computer, no matter how big it is. When you distribute a computational workload of such intense scale, communication overhead becomes one of the greatest bottlenecks, which is why Mellanox is so valuable. It's not just about the link speed; it's the architecture, software, electronics design, and chip design. Mellanox is in 60% of the world's supercomputers and 100% of the AI supercomputers. The movement toward disaggregated, microservice-based applications, where communication and orchestration across a large hyperscale data center become critical, also makes it important. Their low latency is unique. The combination of cost-effective deployment and rapid innovation capability is what makes it really impactful. Cumulus lets us innovate from end to end across the networking stack. We're super excited about the team and all the possibilities.
Operator
Your next question comes from William Stein with SunTrust.
Jensen, I'd like to focus on something you said, I think in one of your earlier responses, about a very significant part of data centers now being accelerated with GPUs. I'm curious how to interpret that. Think about the evolution of compute architecture going from almost entirely CPUs to some future day when we have many more accelerators, and maybe a much smaller number of CPUs relative to those. Can you talk about where we are in that architectural shift and where you think it goes longer term?
Yes, I appreciate the question. There are only two computing architectures that have made it this far: x86 and ARM. NVIDIA accelerated computing has reached a tipping point. The number of developers we supported this year was almost 2 million, and it's growing exponentially. The question now is how much accelerated computing companies use in their pipelines. The major workloads of the world's most important companies now solidly require acceleration. So when I said that acceleration is still growing, it is. But the architectural shift is now common sense, adoption is at a tipping point, and we're ramping into a very receptive market with a fantastic product.
Operator
Your next question comes from John Pitzer with Crédit Suisse.
Just two quick ones. Colette, I hate to ask something as mundane as OpEx, but given the full-year guide, there's a lot to unpack. I think you probably have some COVID pluses and minuses in there, and I think there's an extra week this year as well. Just curious: when we look at the full-year guide, is there something structural going on with OpEx and your investments? Or can we use it as a guidepost for how you think about revenue in the back half of the year?
Thanks, John, for the question. We've guided non-GAAP OpEx at approximately $4.1 billion for the year. Yes, that incorporates three full quarters of Mellanox and its employees. You are correct that we have a 53rd week this year; that is outlined in our SEC filings, and you should expect it as well. We pulled forward our focal compensation review by several months in order to take care of our employees. We are investing in our business, and you've seen good results from that investment. There's more to do, and we are hiring and investing in those businesses. So there's nothing structurally different, but the onboarding of Mellanox and our investment will produce great long-term results.
As usual, John, we're investing in the IT industry's largest opportunities: cloud computing and AI. We're looking down the fairway at some extraordinary opportunities. A time when the market is disrupted allows market leaders to lean into investments for the future. The companies that lean into their core technology and push out innovation during a down cycle will be the ones that lead coming out of it.
Operator
And your next question comes from Matt Ramsay with Cowen.
Two different topics, Jensen. First, it might have been a little hard to talk about this while the deal was pending, but now that it's closed, can you discuss the opportunities to innovate on and customize the Mellanox stack, and the balance of keeping it industry standard? Second, how do you think about gaming product launch logistics in the current environment? Any comments there would be really helpful.
Yes, thanks a lot, Matt. I appreciate your questions. On the one hand, I do miss that we can't engage the developers face to face. It's so much fun seeing all their work at GTC, and I learn so much. But a lot of developers are participating in our online formats such as GTC, and the number of views has been incredible, so I think our reach could be quite great. I'm confident we will find a way to engage our gamers and customers. As for Mellanox, the acquisition has brought extensive product synergies. Ampere comes with a brand-new numerical format called Tensor Float 32, and the performance is incredible; we had to integrate it with the industry-standard frameworks. The collaborative development is creating great opportunities for us to innovate in the software and data center stack integration. It's coming.

Thank you. We had a great and busy quarter. With our announcements, we highlighted several initiatives. First, computing is moving to data center scale, where computing and networking go hand in hand. The acquisition of Mellanox gives us deep expertise and scale for innovation from end to end. Second, AI is the most powerful technology force of our time. Our Ampere generation offers several breakthroughs: the largest generational leap, 20x in training and inference throughput; the first unified acceleration platform for data analytics, machine learning, deep learning training, and inference; and the first elastic accelerator that can be configured for both scale-up applications like training and scale-out applications like inference. Ampere is fast, universal, and elastic. It's going to re-architect the modern data center. Third, we are opening large new markets with AI software application frameworks such as Clara for healthcare, DRIVE for autonomous vehicles, Isaac for robotics, Jarvis for conversational AI, Metropolis for edge IoT, AERIAL for 5G, and Merlin for very important recommender systems.
Finally, we have built multiple engines of accelerated computing growth across RTX computer graphics, artificial intelligence, and data center scale computing from cloud to edge. I look forward to updating you on our progress next quarter. Thank you, everybody.
Operator
This concludes today's conference call. You may now disconnect.