
NVIDIA Corp (NVDA) — Q4 2018 Earnings Call Transcript

Apr 5, 2026 · 16 speakers · 7,280 words · 47 segments

Operator

My name is Victoria and I will be your conference operator for today. Welcome to NVIDIA's Financial Results Conference Call. The phone lines have been placed on mute to prevent background noise. After the speakers' remarks, there will be a question-and-answer period. Thank you. I will now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.

Simona Jankowski, VP, IR

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2018. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It is also being recorded. You can hear a replay by telephone until February 16, 2018. The webcast will be available for replay up until next quarter's conference call to discuss our fiscal first quarter financial results. The content of today's call is NVIDIA's property; it cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 8, 2018, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website. With that, I'll turn the call over to Colette.

Colette Kress, EVP & CFO

Thanks, Simona. We had an excellent quarter and fiscal 2018, led by strong growth in our Gaming and Data Center businesses. Q4 revenue reached $2.91 billion, up 34% year-on-year, up 10% sequentially and well above our outlook of $2.65 billion. All measures of profitability set records. They also hit important milestones. For the first time, gross margins strongly exceeded 60%, non-GAAP operating margins exceeded 40% and net income exceeded $1 billion. Fiscal 2018 revenue was $9.71 billion, up 41% or $2.8 billion above the previous year. Each of our platforms posted record full-year revenue, with Data Center growing triple digits. From a reporting segment perspective, Q4 GPU revenue grew 33% from last year to $2.46 billion. Tegra Processor revenue rose 75% to $450 million. Let's start with our Gaming business. Q4 revenue was $1.74 billion, up 29% year-on-year and up 11% sequentially, with growth across all regions and a number of great titles driving GPU demand during the holiday season, including PlayerUnknown's Battlegrounds (PUBG), Destiny 2, Call of Duty: WWII and Star Wars Battlefront II. PUBG continued its remarkable run, reaching almost 30 million players and recording more than 3 million concurrent players. These games delivered stunning visual effects that require strong graphics performance, which is driving a shift toward the higher end of our gaming portfolio and adoption of our Pascal architecture. eSports continues to grow, expanding the overall industry and our business. One sign of its popularity: Activision Blizzard's Overwatch League launched in January and reached 10 million viewers globally in its first week. We had a busy start to the year with a number of announcements at the Annual Consumer Electronics Show in Las Vegas. We introduced NVIDIA BFGDs, Big Format Gaming Displays, in a partnership with Acer, ASUS and HP.
The high-end 65-inch 4K displays enable ultra-low-latency gaming and integrate our SHIELD streaming devices, offering popular apps such as Netflix, Amazon Video, YouTube and Hulu. The BFGD won 19 best-of-show awards from various publications. We expanded the free beta of GeForce NOW beyond Macs to Windows-based PCs, and we enhanced the GeForce Experience with new features, including NVIDIA Freestyle for customizing gameplay with various filters, an updated NVIDIA Ansel photo mode and support for new titles in ShadowPlay Highlights for capturing gaming achievements. Additionally, the Nintendo Switch gaming console contributed to our growth, as it became the fastest-selling console of all time in the U.S. Strong demand in the cryptocurrency market exceeded our expectations. We met some of this demand with a dedicated board in our OEM business, and some was met with our gaming GPUs. This contributed to lower-than-historical channel inventory levels of our gaming GPUs throughout the quarter. While the overall contribution of cryptocurrency to our business remains difficult to quantify, we believe it was a higher percentage of revenue than in the prior quarter. That said, our main focus remains on our core gaming market, as cryptocurrency trends will likely remain volatile. Moving to Data Center: revenue of $606 million was up 105% year-on-year and up 20% sequentially. This excellent performance reflected strong adoption of Tesla V100 GPUs based on our Volta architecture, which began shipping in Q2 and continued to ramp in Q3 and Q4. V100s are available through every major computer maker and have been chosen by every major cloud provider to deliver AI and high-performance computing. Hyperscale and cloud customers adopting the V100 include Alibaba, Amazon Web Services, Baidu, Google, IBM, Microsoft Azure, Oracle and Samsung. We continued our leadership in AI training markets, where our GPUs remain the platform of choice for training neural networks.
During the quarter, Japan's Preferred Networks trained the ResNet-50 network for image classification in a record 15 minutes using 1,024 Tesla P100 GPUs. Our newer-generation V100 delivered even higher performance, with the Volta architecture offering 10 times the deep learning performance of Pascal. We also saw growing traction in the AI inference market, where NVIDIA's platform can improve performance and efficiency by orders of magnitude over CPUs. We continue to view AI inference as a significant new opportunity for our data center GPUs. Hyperscale inference applications that run on GPUs include speech recognition, image and video analytics, recommender systems, translation, search and natural language processing. The data center business also benefited from strong growth in high-performance computing. The HPC community has increasingly moved to accelerated computing in recent years as Moore's Law has begun to level off; indeed, more than 500 HPC applications are now GPU-accelerated, including all of the top 15. NVIDIA added a record 34 new GPU-accelerated systems to the latest TOP500 supercomputer list, bringing our total to 87 systems. We increased our total petaflops on the list by 28%, and we captured 14 of the top 20 slots on the Green500 list of the world's most energy-efficient supercomputers. During the quarter, we continued to support the buildout of major next-generation supercomputers, among them the U.S. Department of Energy's Summit system, expected to be the world's most powerful supercomputer when it comes online later this year. We also announced new wins such as Japan's fastest AI supercomputer, the ABCI system, which leverages more than 4,000 Tesla V100 GPUs. Importantly, we are starting to see the convergence of HPC and AI as scientists embrace AI to solve problems faster. Modern supercomputers will need to support multi-precision computation for applying deep learning together with simulation and testing.
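The scale of that training run can be sanity-checked with simple data-parallel arithmetic. The 1,024-GPU and 15-minute figures come from the call; the assumption of near-linear scaling and the helper function below are ours, for illustration only:

```python
# Back-of-the-envelope sketch: estimate how long the same training job
# would take on one GPU, given the cluster numbers cited on the call.
# Near-linear scaling is an idealization; real clusters lose some
# efficiency to communication overhead.
def single_gpu_estimate(n_gpus: int, minutes: float, efficiency: float = 1.0) -> float:
    """Estimated single-GPU training time in days, assuming the cluster
    ran at the given parallel efficiency (1.0 = perfectly linear)."""
    total_gpu_minutes = n_gpus * minutes / efficiency
    return total_gpu_minutes / (60 * 24)  # minutes -> days

days = single_gpu_estimate(1024, 15)
print(f"~{days:.1f} days on one GPU under ideal scaling")  # ~10.7 days
```

Even under the generous linear-scaling assumption, the 15-minute cluster run corresponds to well over a week of single-GPU compute, which is why training throughput is the headline metric here.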
By combining AI with HPC, supercomputers can deliver increased performance that is orders of magnitude greater in computations ranging from particle physics to drug discovery to astrophysics. We are also seeing traction for AI in a growing number of vertical industries such as transportation, energy, manufacturing, smart cities and healthcare. We announced engagements with GE Healthcare and Nuance in medical imaging, Baker Hughes, a GE company, in oil and gas, and Japan's Komatsu in construction and mining. Moving to professional visualization: fourth quarter revenue grew to a record $254 million, up 13% from a year ago and up 6% sequentially, driven by demand for real-time rendering as well as emerging applications like AI and VR. These emerging applications now represent approximately 30% of pro visualization sales. We saw strength across several key industries, including defense, manufacturing, energy, healthcare and internet service providers. Among key customers, high-end Quadro products are being used by GlaxoSmithKline for AI and by Pemex in oil and gas for seismic processing and visualization. Turning to automotive: fourth quarter revenue grew 3% year-on-year to $132 million and was down 8% sequentially. The sequential decline reflects our transition from infotainment, which is becoming commoditized, to next-generation AI cockpit systems and complete top-to-bottom self-driving vehicle platforms built on NVIDIA hardware and software. At CES, we demonstrated our leadership position in autonomous vehicles with several key milestones and new partnerships that point to AI self-driving cars moving from development to production. In a standing-room-only keynote that drew nearly 8,000 attendees, Jensen announced that DRIVE Xavier, the world's first autonomous machine processor, will be available to customers this quarter. With more than 9 billion transistors, DRIVE Xavier is the most complex system-on-chip ever created.
We also announced that NVIDIA DRIVE is the world's first functionally safe AI self-driving platform, enabling automakers to create autonomous vehicles that can operate safely, a necessary ingredient for going to market. Additionally, we announced a number of collaborations at CES, including with Uber, which has been using NVIDIA technology for the AI computing system in its fleet of self-driving cars and freight trucks. We announced that ZF and Baidu are using NVIDIA DRIVE self-driving technology to create a production-ready AI autonomous vehicle platform for China, the world's largest automotive market. Production vehicles utilizing this technology, including those from Chery, are expected on the road by 2020. We also announced a partnership with Aurora, which is working to create a modular, scalable Level 4 and Level 5 self-driving hardware platform incorporating the NVIDIA DRIVE Xavier processor. Jen-Hsun Huang was joined on stage by Volkswagen CEO Herbert Diess, and they announced a new generation of intelligent VW vehicles using the NVIDIA DRIVE Intelligent Experience, or DRIVE IX, platform to create new AI-infused cockpit experiences and improved safety. Later at CES, Mercedes-Benz announced that MBUX, its new AI-based smart cockpit, uses NVIDIA graphics and AI technologies. The MBUX user experience includes beautiful touchscreen displays and a new voice-activated assistant, which they unveiled last week in the Mercedes-Benz A-Class compact car. Earlier this week, we announced a partnership with Continental to build AI self-driving vehicle systems, from enhanced Level 2 to Level 5, for production in 2021. There are now more than 320 companies and research institutions using the NVIDIA DRIVE platform; that's up 50% from a year ago and includes virtually every car maker, truck maker, robo-taxi company, mapping company, sensor manufacturer and startup in the autonomous vehicle ecosystem.
With this growing momentum, we remain excited about the long-term opportunities for autonomous driving. Now turning to the rest of the P&L: Q4 GAAP gross margin was 61.9% and non-GAAP gross margin was 62.1%, a record that reflects continued growth in our value-added platforms. GAAP operating expenses were $728 million and non-GAAP operating expenses were $607 million, up 28% and 22% year-on-year, respectively. We continue to invest in the key platforms driving our long-term growth, including Gaming, AI and automotive. GAAP EPS was $1.78, up 80% from a year earlier; some of the upside was driven by a lower-than-expected tax rate as a result of U.S. tax reform and excess tax benefits related to stock-based compensation. Our fourth quarter GAAP effective tax rate was a benefit of 3.7%, compared with our expectation of a tax rate of 17.5%. Non-GAAP EPS was $1.72, up 52% from a year ago, reflecting a quarterly tax rate of 10.5% compared with our expectation of 17.5%. We returned $1.25 billion to shareholders in the fiscal year through a combination of quarterly dividends and share repurchases. Our quarterly cash from operations reached record levels at $1.36 billion, bringing our fiscal year total to a record $3.5 billion. Capital expenditures were $469 million for the fourth quarter, inclusive of $335 million associated with the purchase of our previously financed Santa Clara campus building. Let me take a moment to provide a bit more detail on the impact of U.S. corporate tax reform on the quarter and our go-forward financials. In Q4, we reported a GAAP-only one-time net tax benefit of $133 million, or $0.21 per diluted share. This is primarily related to provisional tax amounts for the transition tax on accumulated foreign earnings and the re-measurement of certain deferred tax assets and liabilities associated with the Tax Cuts and Jobs Act.
We previously accrued for taxes on a portion of foreign earnings in excess of the provisional tax amount recorded for the transition tax, hence the one-time benefit. For fiscal 2019, we expect our GAAP and non-GAAP tax rates to be around 12%, which is down from approximately 17% previously. This does not take into account the excess tax benefit from stock-based compensation, which could increase or decrease our GAAP tax rate in a given quarter depending on stock price and vesting schedule. In terms of our capital allocation priorities, we continue to focus first and foremost on investing in our business as we see significant opportunities ahead. Our lower tax rate strengthens our ability to invest in both OpEx, such as adding engineering talent, as well as CapEx, such as investing in supercomputers for internal AI development. In addition, we remain committed to returning cash to shareholders, with our plan remaining at $1.25 billion for fiscal 2019. With that, let me turn to the outlook for the first quarter of fiscal 2019. We expect revenue to be $2.9 billion plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 62.7% and 63% respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $770 million and $645 million respectively. GAAP and non-GAAP other income and expenses are both expected to be nominal. GAAP and non-GAAP tax rates are both expected to be 12% plus or minus 1%, excluding discrete items. For the full fiscal year 2019, we expect our operating expenses to grow at a similar pace as in Q1. Further financial details are included in the CFO Commentary and other information available on our IR website. In closing, I'd like to highlight a few upcoming events for the financial community. We will be presenting at the Goldman Sachs Technology and Internet Conference on February 13th and at the Morgan Stanley Technology, Media, and Telecom Conference on February 26th.
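The "plus or minus" guidance above translates into concrete ranges. A quick sketch: the midpoints and tolerances are taken from the call; the helper function and variable names are our own, for illustration:

```python
# Convert "midpoint plus or minus tolerance" guidance into a (low, high) range.
def guidance_band(midpoint: float, tolerance: float) -> tuple:
    """Return (low, high) for a symmetric percentage tolerance around a midpoint."""
    return (midpoint * (1 - tolerance), midpoint * (1 + tolerance))

# Revenue outlook: $2.9 billion plus or minus 2%
rev_lo, rev_hi = guidance_band(2.9e9, 0.02)
print(f"revenue: ${rev_lo/1e9:.3f}B to ${rev_hi/1e9:.3f}B")  # $2.842B to $2.958B

# Non-GAAP gross margin outlook: 63% plus or minus 50 basis points
gm_lo, gm_hi = 63.0 - 0.5, 63.0 + 0.5
print(f"non-GAAP gross margin: {gm_lo:.1f}% to {gm_hi:.1f}%")  # 62.5% to 63.5%
```

Note that the revenue tolerance is a percentage of the midpoint, while the gross-margin tolerance is stated in basis points, i.e. an absolute 0.5 percentage points.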
We will also be hosting our Annual Investor Day on March 27th in San Jose on the sidelines of our annual GPU Technology Conference, which we are very excited about. We will now open the call for questions. Operator, will you poll for questions please.

Operator

Your first question comes from C. J. Muse from Evercore.

C. J. Muse, Analyst

I guess, first question: when I think about normal seasonality for gaming, that would imply data center at potentially more than $700 million in the coming quarter. So I'm curious if I am thinking about that right, and whether crypto has been modeled more conservatively by you guys; would love to hear your thoughts there.

Jen-Hsun Huang, President & CEO

Which way is more conservatively?

C. J. Muse, Analyst

Yes, sorry.

Jen-Hsun Huang, President & CEO

When you say conservatively, which direction are you implying, up or down?

C. J. Muse, Analyst

Well, just curious to your thoughts there.

Jen-Hsun Huang, President & CEO

We model crypto approximately flat.

C. J. Muse, Analyst

Okay. And then I guess as part of the larger question. How are you thinking about seasonality for gaming into the quarter?

Jen-Hsun Huang, President & CEO

Well, there are a lot of dynamics going on in gaming. One dynamic, of course, is that there is a fairly sizable pent-up demand going into this quarter. But I think a larger dynamic relates to the really amazing games that are out right now. PUBG is doing incredibly well, as you might know, and it's become a global phenomenon, whether it's here in the United States or in Europe or in China and Asia. PUBG is just doing incredibly well, and we expect other developers to come up with similar genres like PUBG that could become popular in the near future, and I am super excited about these games. Of course, there's Call of Duty, and there's Star Wars; there are just so many great games in the market today, like Overwatch and League of Legends, that are still doing well. There's just a cadence of great franchises out in the marketplace, and the gaming market is growing, and production value is going up; that's driving increased unit sales of GPUs as well as ASPs of GPUs. So I think that's probably the larger dynamic of gaming.

Operator

Your next question comes from the line of Mark Lipacis with Jefferies.

Mark Lipacis, Analyst

The first question: the checks we've done indicate that the Tensor Cores you put into Volta give it a huge advantage in neural network applications in the data center. I am wondering whether the Tensor Core might also have a similar kind of utility in the gaming market.

Jen-Hsun Huang, President & CEO

Yes, first of all, I appreciate you asking a Tensor Core question. It is probably the single biggest innovation we had last year in data centers. The performance of one of our GPUs is equivalent to something along the lines of 20-plus CPUs, or 10-plus nodes; one GPU alone would do deep learning so fast that it would require 10-plus CPU-powered server nodes to keep up with it. And then Tensor Core came along last year, and we increased the computational throughput of deep learning by another factor of eight. Tensor Core really illustrates the power of the GPU; it's very unlike a CPU, where the instruction set remains locked for the long term and it's hard to advance. In the case of our GPUs, that's one of their fundamental advantages; we can continue, year after year, to add new capabilities to them. Tensor Core boosts the already great performance of our GPUs and really raised the bar last year. As Colette said earlier, our Volta GPUs have now been adopted all over the world, whether it's in China with Alibaba, Tencent, Baidu and iFlytek, or here in the United States, where Amazon, Facebook, Google, Microsoft, IBM and Oracle are using them. And in Europe and Japan, the number of cloud service providers that have adopted Volta has been terrific. I think developers really appreciate the work that we did with Tensor Core. Tensor Core is a new instruction set and a new architecture, and with the updates now coming out from the frameworks, deep learning developers have really jumped on it; almost every deep learning framework is being optimized to take advantage of Tensor Core. On the inference side, that's where it plays a role in video games.
You could use deep learning now to synthesize and to generate new art, and we've been demonstrating some of that in improving the quality of textures, generating artificial characters, and animating characters, whether it's facial animation with speech or body animation. The type of work that you could do with deep learning for video games is growing. And that’s where Tensor Core can provide a real advantage. If you take a look at the computational power we have in Tensor Core compared to a non-optimized GPU or even a CPU, it's now two plus orders of magnitude greater in computational throughput. That allows us to do things like synthesize images in real time and create virtual worlds, making characters and faces, bringing a new level of virtual reality and artificial intelligence to video games.
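The mixed-precision idea behind Tensor Cores, multiplying in reduced precision while accumulating in higher precision, can be sketched in NumPy. This is our own toy model of the numerics only; it is not how the hardware is actually programmed:

```python
import numpy as np

# Toy sketch of mixed-precision matrix multiply: round the inputs to
# float16 (as Tensor Cores take fp16 operands), but do the multiply-add
# accumulation in float32 to limit rounding error.
def mixed_precision_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a16 = a.astype(np.float16)  # reduced-precision inputs
    b16 = b.astype(np.float16)
    # accumulate the products in float32, mimicking fp32 accumulation
    return np.matmul(a16.astype(np.float32), b16.astype(np.float32))

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)
approx = mixed_precision_matmul(a, b)
exact = a @ b
# the residual error comes only from rounding the inputs to fp16
print(np.max(np.abs(approx - exact)))
```

Because the accumulation stays in float32, the error is bounded by the input rounding alone, which is why mixed precision is accurate enough for deep learning training while being much faster in hardware.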

Operator

Your next question comes from the line of Vivek Arya from Bank of America.

Vivek Arya, Analyst

Jen-Hsun, just a near-term and longer-term question on the data center. Near term, you have had a number of strong quarters in data center. How is the utilization of these GPUs? And how do you measure whether you're over or under from a supply perspective? And then longer term, there seems to be a lot of money going into startups developing silicon for deep learning. Is there any advantage to their taking a clean-sheet approach? Or is the GPU the most optimal answer? Like, if you were starting a new company looking at AI today, would you make another GPU, or would you make an ASIC or some other format? Just any color would be helpful.

Jen-Hsun Huang, President & CEO

In the short term, a key indicator of customers using our GPUs for deep learning is the rate of repeat purchases. When customers return quarter after quarter to buy more GPUs, it indicates that their workloads are growing. For our existing customers who already have significant penetration, we see an exciting opportunity for using our GPUs in inference, which is a largely untapped area for growth. Many companies are still in the early stages of deploying and adopting deep learning for their applications. I believe we are just beginning to see the second wave of customers emerging. There's also a third group of customers, which includes internet services and consumer applications with large user bases that could benefit from artificial intelligence but utilize hyperscale clouds for their operations. This third phase of growth is really gaining momentum, and I'm enthusiastic about it. Currently, we have opportunities to apply our GPUs for inference across the board. If I had unlimited resources for research and development, I would invest heavily in NVIDIA's GPU team because GPUs are already the world’s best high-throughput computational processors. High-throughput processing involves significant complexity beyond what simple tools can manage. Maintaining exceptional energy efficiency and ensuring data flow through extensive software optimizations adds layers of complexity. The landscape of neural networks is evolving constantly, moving from basic CNNs to more intricate versions and from simple RNNs to advanced LSTMs and gated RNNs, with network depths increasing from just a few layers to hundreds and beyond. The focus has shifted from basic recognition tasks to synthesis tasks using GANs, and the variety of GANs is expanding. Navigating this complexity is challenging, and we are still at the early stages of artificial intelligence. The adaptability of our GPUs to various architectures and network types provides a significant advantage. 
Customers can confidently choose our GPUs, knowing they can reduce server counts in their data centers significantly. Greater purchases of GPUs correlate with greater savings. For instance, in previous years we introduced 16-bit mixed precision and 8-bit integer inference, and recently introduced Tensor Core technology, which has increased performance by nearly a factor of ten. Our GPUs are becoming more sophisticated and energy-efficient, and are complemented by increasingly advanced software. The challenges posed by artificial intelligence are incredibly complex; it is arguably the most intricate software ever developed. This complexity is why progress has taken time, and high-performance supercomputers play a crucial role in advancing AI. It is far from just linear algebra; if I had unlimited resources, I'd direct them to our expert team.
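The 8-bit integer inference mentioned above rests on quantization: mapping float32 weights onto int8 values with a scale factor. A minimal per-tensor sketch follows; real pipelines add calibration and per-channel scales, and the helper names here are our own, illustrative only:

```python
import numpy as np

# Per-tensor symmetric int8 quantization: pick one scale so the largest
# weight maps to 127, round to the nearest int8, and dequantize by
# multiplying back. Real INT8 inference pipelines are more involved.
def quantize_int8(w: np.ndarray):
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 2.4], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# reconstruction error is bounded by about scale / 2
print(np.max(np.abs(w - w_hat)))
```

The payoff is that int8 multiplies are far cheaper in silicon and memory bandwidth than float32 ones, which is where the inference speedups the call describes come from, provided the network tolerates the quantization error.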

Operator

Your next question comes from the line of Stacy Rasgon with Bernstein Research.

Stacy Rasgon, Analyst

I have a question for Colette. So if I correct for the Switch revenue growth in the quarter, it implies the gaming business ex-Switch grew by maybe $140 million or $150 million. In your Q3 commentary, you did not call out crypto as a driver. You are calling it out in Q4. Is it fair to say that that incremental growth is all crypto? And I guess going forward, you mentioned pent-up demand; normally, your seasonality for gaming would be down probably double digits. Do you think that pent-up demand is enough to reverse that normal seasonal pattern? And frankly, do you think gamers can even find GPUs at retail at this point to buy in order to satisfy that pent-up demand?

Colette Kress, EVP & CFO

So, let me comment on the first one. We did talk about our overall crypto business last quarter as well. We indicated how much we had in OEM boards, and we also indicated that there was definitely some with our GTX business. Keep in mind that it's very difficult for us to quantify down to the end customer's use, but yes, there was also some in our Q3 and we did comment on that. So here, we are commenting in terms of what we saw in Q4. It's up a bit from what we saw in Q3, and we do expect probably some going forward. I'll let Jen-Hsun answer regarding the demand from gamers as we move forward.

Jen-Hsun Huang, President & CEO

Yes, so one way to think about the demand is that we typically have somewhere between six to eight weeks of inventory in the channel, and I think you would ascertain that globally, right now, the channel is relatively lean. We're working really hard to get GPUs out to the marketplace for the gamers, and we're doing everything we can to advise retailers and system builders to serve the gamers. So we're doing everything we can; but the most important thing is we just have to catch up on supply.

Operator

Your next question comes from the line of Mitch Steves with RBC.

Mitch Steves, Analyst

And I just want to circle back on autos. Coming out of CES, is it still on track toward calendar year 2019 that we see the autonomous ASP uplift? And just to clarify, is the expected ASP uplift still around $1,000?

Jen-Hsun Huang, President & CEO

Yes, it just depends on the mix. I think for autonomous vehicles that still have drivers, such as branded passenger cars, an ASP anywhere from $500 to $1,000 makes sense. For robo-taxis, they are not just autonomous vehicles; they are actually driverless vehicles, and the ASP will be several thousand dollars. In terms of timing, I think you're going to see a larger and larger deployment starting this year and going through next year for sure, especially with robo-taxis. And then with autonomous vehicles, cars that have autonomous driving capability, you could expect more of that to come in late 2019, and you could see a lot more in 2020. Just about every car created by 2022 will have autonomous driving capabilities.

Operator

Your next question comes from Toshiya Hari with Goldman Sachs.

Toshiya Hari, Analyst

Jen-Hsun, I was hoping to ask a little bit about inferencing. How big was inferencing within data center in Q4 or fiscal '18? More importantly, how do you expect that trend to develop over the next 12 to 18 months?

Jen-Hsun Huang, President & CEO

First of all, I want to comment on inference. It involves taking the outputs from various frameworks, which generate complex computational graphs. Neural networks have millions of parameters, contributing to this complexity. These parameters are found in activation layers and functions, and they play a role in creating the computational graph, which consists of intricate layers. Each framework produces a different computational graph, varying in formats, styles, and architectures. You need to compile and optimize these graphs to minimize conflicts among resources within your GPU or processor. Conflicts can arise in memory, register files, data paths, or interfaces. Given the complexity of these systems across processors and the connections between GPUs and network nodes, we must identify the different resources, compile, and optimize to maintain efficient operation. TensorRT serves as an advanced optimizing graph compiler, specifically targeting each processor differently. For example, how it targets Xavier is distinct from how it targets Volta, and inference requirements vary based on the precision needed. TensorRT is integral to our inference software, which is where much of the innovation lies. Additionally, we tailor our GPUs for high throughput and support various precisions; some networks can utilize 8-bit integers, whereas others may necessitate maintaining 32-bit floating point for consistent precision. We have developed an architecture that encompasses this optimizing graph compiler, targeting high-throughput processors while ensuring precision. We've been testing our Tesla P4, our data center inference processor, and the responses have been very positive. This quarter, we began shipping, and I'm optimistic about the inference market's size in data centers, which seems comparable to the training market. The encouraging aspect is that everything trained on our processors performs exceptionally well for inference as well.
Data centers are realizing that acquiring more GPUs for both training and inference results in significant cost savings, potentially by factors of ten, rather than just hundreds or a few thousands. This level of savings is considerable for data centers facing capital constraints. Additionally, we see inference opportunities in autonomous machines like self-driving cars. TensorRT is designed to optimize for Xavier and our Pegasus robot taxi computer, both needing efficient inference for real-time operations while managing energy and cost effectively. In summary, inference is crucial for us, involving complex work where we are making excellent strides.
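One concrete thing an optimizing graph compiler of the kind described above can do is fuse chains of cheap elementwise ops into the preceding node, so intermediate tensors never round-trip through memory. The toy sketch below is our own simplification of that idea, not TensorRT's actual algorithm; the op names are illustrative:

```python
# Toy "layer fusion" pass over a computational graph, represented here as
# a simple list of op names in execution order. Elementwise ops that follow
# a heavy op get merged into it, modeling fused kernels.
FUSIBLE = {"scale", "bias_add", "relu"}

def fuse(graph):
    """Return the op list after merging fusible ops into their predecessor."""
    fused, pending = [], []
    for op in graph:
        if op in FUSIBLE:
            pending.append(op)  # hold until we know the chain has ended
        else:
            if pending:  # attach the finished chain to the previous node
                fused[-1] = fused[-1] + "+" + "+".join(pending)
                pending = []
            fused.append(op)
    if pending:  # flush a trailing chain
        fused[-1] = fused[-1] + "+" + "+".join(pending)
    return fused

print(fuse(["conv", "scale", "bias_add", "relu", "conv", "relu"]))
# -> ['conv+scale+bias_add+relu', 'conv+relu']
```

Six kernel launches collapse into two, which is the kind of memory-traffic saving that makes a compiled inference graph much faster than running each framework op individually.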

Operator

Your next question comes from the line of Blayne Curtis with Barclays.

Blayne Curtis, Analyst

Just curious, as you look at the gaming business, I have kind of lost track of the seasonality as you really have a big ramp ahead. I'm just curious, as we think about Pascal and the seasonality ahead of Volta, if you could just extrapolate as you look out into April and maybe July?

Jen-Hsun Huang, President & CEO

Well, we haven't announced anything for April or July. The best way to think about that is that Pascal is the best gaming platform on the planet. It has the most feature-rich software and is the most energy-efficient, and from $99 to $1,000, you can buy the world's best GPUs, the most advanced GPUs. You buy Pascal, you know you got the best. Seasonality is a good question. Increasingly, because gaming is a global market and people play games every day, I don't think there is much more seasonality in gaming than in TV, books or music; gaming is just a part of life. Whenever new titles come out, that's when the new season starts. In China, there are iCafes; there is Singles' Day on November 11; there is back-to-school in the United States; there is Christmas; there is Chinese New Year. Boy, there are so many seasons that it's hard to define what exactly seasonality is anymore. So hopefully, over time, it becomes less of a factor. But most important is that we expect Pascal to continue to be the world's best gaming platform for the foreseeable future.

Operator

Your next question comes from the line of Harlan Sur with JP Morgan.

Harlan Sur, Analyst

I know somebody asked a question about inferencing for the data center market, but I wanted to ask about inferencing embedded in edge applications. On the software and firmware side, you talked about the TensorRT framework. On the hardware side, you've got the Jetson TX platform embedded in edge inferencing applications, things like drones, factory automation, and transportation. What else is the team doing in the embedded market to capture more of these opportunities going forward?

Jen-Hsun Huang, President & CEO

Thanks a lot, Harlan. NVIDIA TensorRT is really the only optimizing inference compiler in the world today, and it targets all our platforms. We do inference in the data center, as I mentioned earlier. The first embedded platform we're targeting is self-driving cars. To drive a car, you basically have to infer and try to perceive what's around you all the time, and that's a very complicated inference problem. It could be extremely easy, like detecting the car in front of you and applying the brakes, or it could be critically hard, like trying to figure out whether you should stop at an intersection. If you look at most intersections, you can't just look at the lights to determine where to stop; you have to consider where the lines are. Using scene understanding and deep learning, we have the ability to recognize where to stop and where not to. For Jetson, we have a platform called Metropolis, and Metropolis is used for very large-scale smart cities, where cameras are deployed all over the place to keep cities safe. We've been very successful in smart cities, with nearly every major smart city provider and what are called intelligent video analytics companies using NVIDIA's platform to do inference at the edge and AI at the edge. We've also announced recent successes with FANUC, the largest manufacturing robotics company in the world, and Komatsu, one of the largest construction equipment companies in the world, to apply AI at the edge for autonomous machines. Drones are being equipped with our technology; we have several industrial drones inspecting pipelines and power lines and flying over large spans of farmland to figure out where to spray insecticide more accurately. There are all kinds of applications. You're absolutely right that inference at the edge, or AI at the edge, is a very large market opportunity for us, and that's exactly why TensorRT was created.

Operator

Your next question comes from the line of Joe Moore with Morgan Stanley.

Joe Moore, Analyst

You mentioned how lean the channel is in terms of gaming cards. There has been a noticeable increase in prices at retail. I am just curious, is that a broad-based phenomenon? And is there any economic ramification to you? Or is that just sort of retailers bringing prices up in a shortage environment?

Jen-Hsun Huang, President & CEO

We don't set prices at the retail end of the market, and the best way for us to solve this problem is to work on supply. The demand is great, and it's very likely that demand will remain great as we look through this quarter. We just have to keep working on increasing supply. Our suppliers are the world's best and largest semiconductor manufacturers, and they're responding well. I'm really grateful for everything they are doing; we just have to catch up to that demand, which is really great.

Operator

Your next question comes from the line of Chris Rolland with Susquehanna.

Chris Rolland, Analyst

Just to clarify in terms of pent-up demand: one of your GPU competitors basically said that the constraint was memory. I just want to make sure, is that correct? And then in the CFO commentary, you mentioned opportunities for the professional business, like AI and deep learning. Can you talk about the kinds of applications where you would use Quadro versus Volta or GeForce?

Jen-Hsun Huang, President & CEO

We're just constrained. Obviously, we're ten times larger as a GPU supplier than the competition, so we have a lot more suppliers supporting us, a lot more distributors taking our products to market, and many partners distributing our products all over the world. I don't know how to explain it; the demand is just really great. We just have to keep our nose to it and catch up to the demand.

With respect to Quadro, it is a workstation processor. The entire software stack is designed for all of the applications that the workstation industry uses. The rendering quality is, of course, world-class because NVIDIA is in control of the entire stack, but the software stack has also been designed so that mission-critical applications, long-life industrial applications, and gigantic manufacturing and industrial companies can rely on an entire platform consisting of processors, systems, software, middleware, and all the integrations into the CAD tools of the world. They need to know the supplier will be here and can be trusted for the entire life of the product's use, which could be several years, while the data generated from it must be accountable for a couple of decades. You need to be able to pull up the entire design of a plane or train or car a couple of decades after it went to production, to make sure it is still compliant and, if there's a question about it, that it can be examined. NVIDIA's entire platform is designed to be professional-grade.

The exciting part about artificial intelligence is that we can now use AI to improve images. For example, you could fix a photograph using AI; you could fill in damaged parts of a photograph, or parts of an image that haven't been rendered yet. You can use AI to fill in the dots and predict what comes next in the rendered result, which we announced and demonstrated at GTC recently. You can also use it to generate designs: you sketch a few strokes of what you want a car to look like, and based on the constraints it has learned, such as safety and physics, it fills in the rest. This is called generative design. Generative design will be seen in product design, building design, and just about everything. The last, if you will, 90% of the work comes after the initial conceptual design is done, and that part could be highly automated through AI. Quadro can be used as a platform that designs as well as generatively designs. Lastly, a lot of people are using our workstations to train their neural networks for generative design; you can train and develop your own networks and then apply them in your applications. AI is the future way of developing software. It is a brand-new capability where computers can write their own software, software so complex and so capable that no human could write it; you can use data to teach software to figure out how to write itself. Then when you're done developing the software, you can use it to do all kinds of tasks, including designing products.

Operator

Your next question comes from the line of Craig Ellis with B. Riley.

Craig Ellis, Analyst

A lot of the near-term items here have been on gaming, so I'll switch to the longer term. Jen-Hsun, at CES, I think you said that there are now 200 million GeForce users globally, and if my math is correct, that would be up about 2x over the last three to four years. So the question is: is there anything you can see that would preclude that kind of growth over a similar period? And given the recent demand dynamics, where NVIDIA's direct channels have been very good sources for GPUs at the prices that you intend, should we expect any change in channel management from the Company as we look ahead?

Jen-Hsun Huang, President & CEO

Yes, thanks a lot, Craig. In the last several years, several dynamics happened at the same time, and all of them were favorable contributors to where we are today. First, gaming became a global market, and China became one of the largest gaming markets in the world. Second, because the market became so big, developers could invest extraordinary amounts in the production value of video games. They could invest a few hundred million dollars knowing they would get a return on it. Back when the video game industry was quite small, or PC gaming was small, it was too risky for developers to invest that much. Now a developer can invest hundreds of millions of dollars and create something that is just extraordinarily realistic and immersive. When the production value goes up, the GPU technology needed to run it well goes up. It's very different from music or from watching movies; everything in a video game is generated in real time. When the production value goes up, the ASP of the technology has to go up. Lastly, people wonder how big the video game market is going to be, and I have always believed that it will eventually include literally everyone. In 10 or 15 years' time, there could be another billion people on earth, and those people will be gamers. We see more and more gamers today, and almost every sport can be a virtual-reality sport. Video games encompass every sport; they can be any sport and every sport. When you consider this, the opportunity for video games is quite large, and that's essentially what we're seeing.

Operator

Your next question comes from the line of William Stein with SunTrust.

William Stein, Analyst

I am hoping we can touch on automotive a little more. In particular, I think in the past you've talked about expecting a lull in revenue growth in this market until roughly the 2020 timeframe, when autonomous driving kicks in more meaningfully. But of course, you have the AI co-pilot that seems to be potentially ramping sooner, and you have at least one marquee customer that is ramping now, though I guess volumes aren't quite that large on the autonomous driving side. So any guidance as to when we might see these two factors start to accelerate revenue in that end market?

Jen-Hsun Huang, President & CEO

Yes, thanks a lot, Will. I wish I had more precision for you, but here are some of the dynamics that I believe in. I believe that autonomous capability, autonomous driving, is the single greatest dynamic in the automotive industry next to EVs. Transportation is a $10 trillion industry, including cars, shuttles, buses, and delivery vehicles; it's just an extraordinary market, and everything that moves in the future will be autonomous, that's for sure, whether fully or partially. The size of this marketplace is quite large. In the near term, our path to that future, which I believe starts in 2019 and 2020 and then more strongly in 2022, involves several elements. The first element is training neural networks for all these companies, whether they are Tier 1s, startups, OEMs, taxi companies, or ride-hailing companies, all delivering autonomous driving capabilities. The first thing they have to do is train a neural network. We created a platform called NVIDIA DGX that allows everyone to train neural networks as quickly as possible. So first, the development of the AI requires GPUs, and we benefit from that. The second element is the development platforms for the vehicles themselves; the vehicles need new hardware and software. Finally, with Xavier, we have created the most complex SoC that has ever been made. We're super excited about the state of Xavier, and we're going to be sampling it in Q1. We will be able to help everybody create development systems; there will be thousands and tens of thousands of quite expensive development systems, based on Xavier and on Pegasus, that the world will need. These three elements are the near term; then, hopefully starting in 2019 and very strongly from 2022 and beyond, the actual car revenues and economics will show up. I appreciate the question. I think this is our last question.
We've had a record quarter, wrapping up a record year. We have strong momentum in our gaming, AI, data center, and self-driving car businesses. It's great to see the adoption of NVIDIA's GPU computing platform increasing in so many industries. We accomplished a great deal this last year, and we have big plans for the coming year. Next month, the brightest minds in AI and science will come together at our GPU Technology Conference in San Jose. GTC has grown tenfold in the last five years; this year we expect more than 8,000 attendees. GTC is the place to be if you're an AI researcher or working in any field of science where computing is your essential instrument. There will be over 500 talks about recent breakthroughs and discoveries by leaders in the field from companies like Google, Amazon, Facebook, Microsoft, and many others. Developers from industries ranging from healthcare to transportation to manufacturing and entertainment will come together to share state-of-the-art developments in AI. This is going to be a big GTC. I hope to see all of you there.

Operator

This concludes today's conference call. You may now disconnect. Thank you for your participation.
