Thomson Reuters StreetEvents Event Transcript
E D I T E D   V E R S I O N

Q4 2020 NVIDIA Corp Earnings Call
FEBRUARY 13, 2020 / 10:30PM GMT

================================================================================
Corporate Participants
================================================================================

 * Colette M. Kress
   NVIDIA Corporation - Executive VP & CFO
 * Jensen Huang
   NVIDIA Corporation - Co-Founder, CEO, President & Director
 * Simona Jankowski
   NVIDIA Corporation - VP of IR

================================================================================
Conference Call Participants
================================================================================

 * Toshiya Hari
   Goldman Sachs Group Inc., Research Division - MD
 * Vivek Arya
   BofA Merrill Lynch, Research Division - Director
 * Aaron Christopher Rakers
   Wells Fargo Securities, LLC, Research Division - MD of IT Hardware & Networking Equipment and Senior Analyst
 * Joseph Lawrence Moore
   Morgan Stanley, Research Division - Executive Director
 * William Stein
   SunTrust Robinson Humphrey, Inc., Research Division - MD
 * Blayne Peter Curtis
   Barclays Bank PLC, Research Division - Director & Senior Research Analyst
 * Timothy Michael Arcuri
   UBS Investment Bank, Research Division - MD and Head of Semiconductors & Semiconductor Equipment
 * Atif Malik
   Citigroup Inc, Research Division - VP and Semiconductor Capital Equipment & Specialty Semiconductor Analyst
 * Harlan Sur
   JPMorgan Chase & Co, Research Division - Senior Analyst
 * Mark John Lipacis
   Jefferies LLC, Research Division - MD & Senior Equity Research Analyst
 * Christopher James Muse
   Evercore ISI Institutional Equities, Research Division - Senior MD, Head of Global Semiconductor Research & Senior Equity Research Analyst
 * Matthew D. Ramsay
   Cowen and Company, LLC, Research Division - MD & Senior Technology Analyst

================================================================================
Presentation
--------------------------------------------------------------------------------
Operator    [1]
--------------------------------------------------------------------------------

          Good afternoon. My name is Christina, and I'm your conference operator today. Welcome to NVIDIA's financial results conference call. (Operator Instructions) Thank you. I'll now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.

--------------------------------------------------------------------------------
Simona Jankowski,  NVIDIA Corporation - VP of IR    [2]
--------------------------------------------------------------------------------

          Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the Fourth Quarter of Fiscal 2020. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2021. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 13, 2020, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [3]
--------------------------------------------------------------------------------

          Thanks, Simona. Q4 revenue was $3.11 billion, up 41% year-on-year and up 3% sequentially, well above our outlook, reflecting upside in our data center and gaming businesses. Full year revenue was $10.9 billion, down 7%. We recovered from the excess channel inventory in gaming and an earlier pause in hyperscale spending and exited the year with great momentum.
Starting with gaming. Revenue of $1.49 billion was up 56% year-on-year and down 10% sequentially. Full year gaming revenue was $5.52 billion, down 12% from the prior year.
We enjoyed strong end demand for our desktop and notebook GPUs. Let me give you some more details. Our gaming lineup was exceptionally well positioned for the holidays with the unique ray tracing capabilities of our RTX GPUs and incredible performance at every price point. From the Singles Day shopping event in China through the Christmas season in the West, channel demand was strong for our entire stack. Fueling this were new blockbuster games like Call of Duty: Modern Warfare, continued eSports momentum and new RTX Super products. With RTX price points as low as $299, ray tracing is now the sweet spot for PC gamers.
Gaming is thriving and gamers prefer GeForce. The global phenomenon of eSports keeps gaming momentum with an audience now exceeding 440 million, up over 30% in just 2 years according to Newzoo. The League of Legends World Championship brought more than 100 million viewers, on par with this month's Super Bowl.
Ray tracing titles continue to come to market, and GeForce RTX GPUs are the only ones that support this important technology. This quarter, Wolfenstein: Youngblood and Deliver Us The Moon were the latest titles to support ray tracing as well as NVIDIA's Deep Learning Super Sampling technique, which also uses AI to boost performance. With the proliferation of RTX-enabled games and our best ever top-to-bottom performance, we are solidly into the Turing architecture upgrade cycle. Gamers continue to move to higher-end GPUs, seeking better performance and support for ray tracing.
Gaming laptops posted double-digit year-on-year growth for the eighth consecutive quarter. The category continues to expand, driven by appealing thin and light form factors with fantastic graphics performance. This holiday season, retailers stocked a record 125 gaming laptops based on NVIDIA GPUs, up from 94 last year, with our Max-Q designs up 2x. At CES, we launched the world's first 14-inch GeForce RTX laptop with ASUS. We also continue to expand our Studio lineup of laptops for the fast-growing population of freelance creators, designers and YouTubers with 13 new RTX Studio systems introduced at CES. Powered by Turing GPUs, these systems are optimized for over 55 creative and design applications with RTX accelerated ray tracing and/or AI.
Last week, we launched our GeForce NOW cloud gaming service. Powered by GeForce, GeForce NOW is the first cloud gaming service to deliver ray-traced games. It's also the only open platform, so gamers can enjoy the games they already have and use their existing store accounts without having to repurchase games. GeForce NOW enables PC games on Macs, Windows PCs, TVs, mobile devices and, soon, Chromebooks. GFN has a freemium business model that includes 2 membership plans: a free membership with standard access; and a Founders tier with a starting price of $4.99 per month, which gives priority access and RTX ray tracing support.
Our goal with GeForce NOW is to expand GeForce gaming to more gamers. About 80% of GeForce NOW gamers are playing on underpowered PCs or devices with Mac OS or Android. With GeForce NOW, they are able to enjoy PC gaming on a GeForce GPU in the cloud. GeForce NOW can expand GeForce well beyond the roughly 200 million gamers we reach today.
Separately, we entered into a collaboration with Tencent, the world's largest gaming platform, to bring PC gaming in the cloud to China, the world's largest gaming market. NVIDIA GPU technology will power Tencent's Start cloud gaming service, which is in early testing stages.
Moving to data center. Revenue was a record $968 million, up 43% year-on-year and up 33% sequentially, our strongest ever sequential growth in dollar terms. Full year fiscal year '20 data center revenue was a record $2.98 billion, up 2% from the prior year. Strong growth was fueled by hyperscale and vertical industry end customers. Hyperscale demand was driven by purchases of both our training and inference products in support of key AI workloads, such as natural language understanding, conversational AI and deep recommenders. Hyperscale demand was also driven by cloud computing. AWS now makes the T4 available in every region. This underscores the versatility of the T4, which excels at a wide array of high-performance computing workloads, including AI inference, cloud gaming, rendering and virtual desktop.
Vertical industry growth was driven primarily by consumer Internet companies. Other verticals such as retail, health care and logistics continue to grow from early-stage build-outs with a strong foundation of deep learning engagements, and we see an expanding set of opportunities across high-performance computing, data science and edge computing applications.
T4, our inference platform, had another strong quarter, with shipments up 4x year-on-year, driven by public cloud deployments as well as edge AI video analytics applications. T4 and V100, reflecting strong demand for inference and training, respectively, set records this quarter for both shipments and revenue.
Even as NVIDIA remains the leading platform for AI model training, NVIDIA's inference platform is getting wide use by some of the world's leading enterprise and consumer Internet companies, including American Express, Microsoft, PayPal, Pinterest, Snap and Twitter.
The industry continues to do groundbreaking AI work on NVIDIA. For example, Microsoft's biggest quality improvements made over the past year in its Bing search engine stem from its use of NVIDIA GPUs and software for training and inference of its natural language understanding models. These DNN transformer models popularized by BERT have computational requirements for training that are an order of magnitude higher than earlier image-based models. Conversational AI is a major new workload, requiring GPUs for inference to achieve high throughput within the desired low latency. Indeed, Microsoft cited an inference throughput increase of up to 800x on NVIDIA GPUs compared with CPUs, enabling it to serve over 1 million BERT inferences per second worldwide. And just this week, Microsoft researchers announced a new breakthrough in natural language processing with the largest ever publicized model, trained on NVIDIA DGX-2. This advances the state of the art for AI assistants in tasks such as answering questions, summarization and natural language generation.
Recommenders are also an important machine learning model for the Internet, powering billions of queries per second. The industry is moving to deep recommenders such as the wide-and-deep model, which leverage deep learning to enable automatic feature learning and to support unstructured content. Running these models on GPUs can dramatically increase inference throughput and reduce latency compared with CPUs. For example, Alibaba's and Baidu's recommendation engines run on NVIDIA AI, boosting their inference throughput by orders of magnitude beyond CPUs. Deep recommenders enabled Alibaba to achieve a 10% increase in click-through rates.
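To make the wide-and-deep idea concrete, here is a minimal sketch in PyTorch. It is illustrative only: the feature counts, embedding sizes and layer widths are assumptions, not Alibaba's, Baidu's or NVIDIA's actual implementation. The wide branch is a linear model over sparse cross-features (memorization), the deep branch learns dense user and item embeddings (generalization), and their sum is squashed into a click-through probability.

import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, n_wide_features, n_users, n_items, emb_dim=32):
        super().__init__()
        self.wide = nn.Linear(n_wide_features, 1)         # wide: linear over sparse features
        self.user_emb = nn.Embedding(n_users, emb_dim)    # deep: learned embeddings
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.deep = nn.Sequential(
            nn.Linear(2 * emb_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, wide_x, user_ids, item_ids):
        deep_x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=1)
        return torch.sigmoid(self.wide(wide_x) + self.deep(deep_x))  # CTR estimate

# Score a small batch of candidate items, on a GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = WideAndDeep(n_wide_features=100, n_users=10_000, n_items=50_000).to(device)
wide_x = torch.rand(8, 100, device=device)
user_ids = torch.randint(0, 10_000, (8,), device=device)
item_ids = torch.randint(0, 50_000, (8,), device=device)
scores = model(wide_x, user_ids, item_ids)                # shape (8, 1)

Serving such a model is inference: each user request scores many candidate items, so throughput and latency on the scoring step dominate, which is where the GPU speedups described above apply.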
We also announced the availability of a new GPU-accelerated supercomputer on Microsoft Azure. It enables customers for the first time to rent an entire AI supercomputer on demand from their desk, matching the capabilities of large on-premise supercomputers that can take months to deploy. And in Europe, energy company Eni announced the world's fastest industrial supercomputer based on NVIDIA GPUs.
AI has even come to pizza delivery. At the National Retail Federation's Annual Conference last month, we announced Domino's as a customer deploying our platform for deep learning and data science applications, helping with customer engagement and order accuracy prediction. More broadly in retail, we have seen a significant increase in the adoption of NVIDIA's edge computing offerings by large retailers for powering AI applications that reduce shrinkage, optimize logistics and create operational efficiencies.
At the SC19 Supercomputing conference, we introduced a reference design platform for GPU-accelerated ARM-based servers, along with ecosystem partners, ARM, Ampere Computing, Fujitsu and Marvell. We made available our ARM-compatible software development kit consisting of NVIDIA CUDA-X libraries and development tools for accelerating computing. This opens the floodgates of innovation to support growing new applications from hyperscale cloud to Exascale supercomputing. We also introduced NVIDIA Magnum IO, a suite of software optimized to eliminate storage and input/output bottlenecks. Magnum IO delivers up to 20x faster data processing for multi-server, multi-GPU computing nodes when working with massive data sets to carry out complex financial analysis, climate modeling and other workloads for data scientists, high-performance computing and AI researchers.
Finally, we introduced TensorRT 7, the seventh generation of our inference software development kit, which speeds up components of conversational AI by 10x compared to running on CPUs. This helps drive latency below the 300 millisecond threshold considered necessary for real-time interactions, supporting our growth in conversational AI.
Moving to ProVis. Revenue reached a record $331 million, up 13% year-on-year and up 2% sequentially. Full year revenue was a record $1.21 billion, an increase of 7% from the prior year. ProVis accelerated in Q4 as the rollout of more RTX-enabled applications is driving a strong upgrade cycle for our Turing GPUs. RTX is also opening up new market segment opportunities, such as rendering and Studio for freelance creatives.
In November, the V-Ray, Arnold and Blender software renderers began shipping with RTX technology. These joined our leading creative and design applications, including Premiere Pro, Dimension, SOLIDWORKS, CATIA and Maya. With RTX, these applications enable enhanced creativity and notable productivity gains. In Blender Cycles, for example, real-time rendering performance is boosted 4x versus a CPU. RTX is now supported by more than 40 leading creative and design applications, reaching a combined user base of over 40 million.
Finally, turning to automotive. Revenue was $163 million, flat from a year ago and up 1% sequentially. Full year revenue reached a record $700 million, up 9% year-on-year. During the quarter, we announced DRIVE AGX Orin, the next-generation platform for autonomous vehicles and robots, powered by our new Orin SoC and delivering nearly 7x the performance of the previous generation Xavier SoC. The platform scales from level 2 plus AI-assisted driving up to level 5 fully driverless operation. Orin is software-defined and compatible with Xavier, allowing developers to leverage their investment across multiple product generations.
Moving to the rest of the P&L. Q4 GAAP gross margin was 64.9% and non-GAAP gross margin was 65.4%, up sequentially, largely reflecting a higher contribution of data center products. Q4 GAAP operating expenses were $1.02 billion and non-GAAP operating expenses were $810 million, up 12% and 7% year-on-year, respectively.
Q4 GAAP EPS was $1.53, up 66% from a year earlier. Non-GAAP EPS was $1.89, up 136% from a year ago. Q4 cash from operations was $1.46 billion. Fiscal year '20 cash flow from operations was a record $4.76 billion.
With that, let me turn to the outlook for the first quarter of fiscal 2021. The outlook does not include any contribution from the pending acquisition of Mellanox. We are engaged and progressing with China on the regulatory approval and believe the acquisition will likely close in the first part of calendar 2020.
Before we get to the numbers, let me comment on the impact of the coronavirus. While it is still early and the ultimate effect is difficult to estimate, we have reduced our Q1 revenue outlook by $100 million to account for the potential impact. We expect revenue to be $3 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 65% and 65.4%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.05 billion and $835 million, respectively. GAAP and non-GAAP OI&E are both expected to be income of approximately $25 million. GAAP and non-GAAP tax rates are both expected to be 9%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $150 million to $170 million. Further financial details are included in the CFO commentary and other information available on the IR website.
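The ranges implied by this outlook follow directly from the stated midpoints and tolerances; the short sketch below is simple arithmetic on the figures quoted above, for illustration only, not additional guidance.

# Implied ranges from the Q1 FY2021 outlook stated above (illustrative arithmetic only).
revenue_mid, revenue_tol = 3.00e9, 0.02   # $3 billion, plus or minus 2%
gm_gaap_mid, gm_tol = 0.650, 0.0050       # 65% GAAP gross margin, plus or minus 50 bps
gm_non_gaap_mid = 0.654                   # 65.4% non-GAAP, same 50 bps tolerance

revenue_lo, revenue_hi = revenue_mid * (1 - revenue_tol), revenue_mid * (1 + revenue_tol)
print(f"Revenue: ${revenue_lo / 1e9:.2f}B to ${revenue_hi / 1e9:.2f}B")                       # $2.94B to $3.06B
print(f"GAAP gross margin: {gm_gaap_mid - gm_tol:.1%} to {gm_gaap_mid + gm_tol:.1%}")         # 64.5% to 65.5%
print(f"Non-GAAP gross margin: {gm_non_gaap_mid - gm_tol:.1%} to {gm_non_gaap_mid + gm_tol:.1%}")  # 64.9% to 65.9%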
In closing, let me highlight an upcoming event for the financial community. We will be at the Morgan Stanley Technology, Media and Telecom Conference on March 2 in San Francisco.
With that, we will now open the call for questions. Operator, will you please poll for questions?


================================================================================
Questions and Answers
--------------------------------------------------------------------------------
Operator    [1]
--------------------------------------------------------------------------------

          (Operator Instructions)
And our first question comes from the line of Toshiya Hari with Goldman Sachs.

--------------------------------------------------------------------------------
Toshiya Hari,  Goldman Sachs Group Inc., Research Division - MD    [2]
--------------------------------------------------------------------------------

          I guess on data center, Colette or Jensen, can you speak to some of the areas that drove the upside in the quarter? You talked about inference and -- both the T4 and the V100 having record quarters but relative to your internal expectations, what were some of the businesses that drove the upside? And if you can also speak to the breadth of your customer profile today relative to a couple of years ago, how that's expanded, that would be helpful as well.

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [3]
--------------------------------------------------------------------------------

          Yes. Toshiya, thanks a lot for your question. The primary driver for our growth is AI. There are 4 fundamental dynamics. The first is that the AI models that are being created are achieving breakthroughs -- quite amazing breakthroughs, in fact -- in natural language understanding, in conversational AI, in recommendation systems. And you know this, but for the others in the audience, recommendation systems are essentially the engine of the Internet today. And the reason for that is because there are so many items in the world, whether it's a store or whether it's content or websites or information you are querying -- there are hundreds of billions, trillions, and depending on how you count it, hundreds of trillions of items in the world. And there are billions of people, each with their own characteristics and their countless contexts. And between the items, the people, the users and the various contexts that we're in -- location and what you're looking for and weather or what's happening in the environment -- those kinds of contexts affect the answer a search query provides you. The recommendation system is just foundational now to search. And some people have said this is the end of search and the beginning of the era of recommendation systems. Work is being done everywhere around the world in advancing recommendation systems. And for the very first time, over the last year, it's been able to be done with deep learning.
And so the first thing is just the breakthroughs in AI. The second is production AI, which means that whereas we had, and we continue to have, significant opportunities in training because the models are getting larger and there are more of them, we're seeing a lot of these models going into production, and that business is called inference. Inference, as Colette mentioned, grew 4x year-over-year. It's a substantial part of our business now. But one of the interesting statistics is that, with TensorRT 7, total TensorRT downloads this year were about 500,000, a doubling over a year ago. What most people don't understand about inference is that it's not only an incredibly complex computational problem, it's an enormously complex software problem. And so the second dynamic is growing from training to models going into production, called inference.
The third is the growth, not just in hyperscale anymore, but in public cloud and in vertical industries. Public cloud because of thousands of AI start-ups that are now developing AI software in the cloud. And the OpEx model works much better for them as they're younger. When they become larger, they could decide to build their own data center infrastructure on-prem, but the thousands of start-ups start their lives in the cloud.
We're also seeing really great success in verticals. One of the most exciting verticals is logistics -- logistics, retail, warehousing. We announced, I think, this quarter or at the end of last quarter, USPS, American Express, Walmart, just large companies who have enormous amounts of data that they're trying to do data analytics on and do predictive analytics on. And so the third dynamic is the growth beyond hyperscale, into public cloud as well as vertical industries.
And then the last dynamic is being talked about a lot, and this is really, really exciting, and it's called edge AI. We used to call it industries and AI where the action is, but the industry now calls it edge AI. We're seeing a lot of excitement there. And the reason for that is you need to have low-latency inference. You might not be able to stream the data all the way to the cloud for cost reasons or data sovereignty reasons, and you need the response time. And so those 4 dynamics around AI really drove our growth.

--------------------------------------------------------------------------------
Operator    [4]
--------------------------------------------------------------------------------

          Your next question comes from the line of Joe Moore with Morgan Stanley.

--------------------------------------------------------------------------------
Joseph Lawrence Moore,  Morgan Stanley, Research Division - Executive Director    [5]
--------------------------------------------------------------------------------

          Great. Just following up on that. As you look back at the last 12 months and the deceleration that you saw in your HPC cloud business, now that you have the perspective of seeing what's driving the rebound, any thoughts on what drove it to slow down in the first place? Was it just digestion? Was it sort of a handoff from image recognition to these newer applications that you just talked about? Just help us -- what happened there? And I guess as it pertains to the future, do we think of this as a business that will have that kind of lumpiness to it?

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [6]
--------------------------------------------------------------------------------

          Yes. That's a really good question. In fact, if you look backwards -- now we only have the benefit of history -- the deep recommendation systems, the natural language understanding breakthroughs, the conversational AI breakthroughs, all happened in this last year. And the velocity with which the industry captured the benefits here and continued to evolve and advance these so-called transformer models was really quite incredible. And so all of a sudden, the number of breakthroughs in AI has just grown tremendously, and these models have grown tremendously. Just this last week, Microsoft announced that they've trained a neural net model, in collaboration with work that we did that we call Megatron, increasing the size of the model from 7.5 billion parameters to 17.5 billion parameters. And the accuracy of their natural language understanding has really been boosted.
And so the models are -- AI is finding really fantastic breakthroughs, and models are getting bigger and there are more of them. And when you look back and look at when these breakthroughs happened, it essentially happened this last year.
The second, we've been working on inference for some time. And until this last year, very few of those inference models went into production. And now we have deep learning models across all of the hyperscalers in production. And this last year, we saw really great growth in inference.
The third dynamic is public clouds. All these AI startups that are being started all over the world, there's about 6,000 of them, they're starting to develop and be able to put their models into production. And with the scale out of AWS, we now have T4s in every single geography. So the combination of the availability of our GPUs in the cloud, and the startups and vertical industries deploying their AI models into production, the combination of all that just kind of came together. And all of that happened this last year. And as a result, we had record sales of V100s and T4s. And so we're quite excited with the developments, and it's all really powered by AI.

--------------------------------------------------------------------------------
Operator    [7]
--------------------------------------------------------------------------------

          Your next question comes from the line of Vivek Arya with Bank of America Securities.

--------------------------------------------------------------------------------
Vivek Arya,  BofA Merrill Lynch, Research Division - Director    [8]
--------------------------------------------------------------------------------

          Congratulations on returning the business back to strong growth. Jensen, I wanted to ask about how you are positioned from a supply perspective for this coming year. Your main foundry is running pretty tight. How will you be able to support the 20% or so growth here that many investors are looking for? If you could just give us some commentary on how you're positioned from a supply perspective, that would be very helpful.

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [9]
--------------------------------------------------------------------------------

          Well, I think we're in pretty good shape on supply. We surely won't have ample supply. It is true that the industry is tight, but we support multiple processes and multiple fabs across our partner, TSMC. We've got a lot of different factories and several different process nodes qualified. I think we're in good shape. And so we just have to watch it closely. And we're working very closely with all of our customers in forecasting. Of course, that gives us better visibility as well, but all of us have to do a better job forecasting, and we're working very closely between our customers and our foundry partner, TSMC.

--------------------------------------------------------------------------------
Operator    [10]
--------------------------------------------------------------------------------

          Your next question comes from the line of Timothy Arcuri with UBS.

--------------------------------------------------------------------------------
Timothy Michael Arcuri,  UBS Investment Bank, Research Division - MD and Head of Semiconductors & Semiconductor Equipment    [11]
--------------------------------------------------------------------------------

          Colette, I'm wondering if you can give us -- in data center, if you can give us a little idea of what the mix was between industries and hyperscale. I think last quarter, hyperscale was a little bit less than 50%. Can you give us maybe the mix or how much it was up, something like that?

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [12]
--------------------------------------------------------------------------------

          Yes. Tim, thanks for the question. Similar to what we had seen last quarter, all things grew as we moved into this quarter: growth in terms of the hyperscales, continued expansion in terms of those vertical industries and even in the cloud instances. We're still looking at around the same split of 50-50 between our hyperscales and our vertical industries, and maybe a tad below 50 in terms of our total overall hyperscales.

--------------------------------------------------------------------------------
Operator    [13]
--------------------------------------------------------------------------------

          Your next question comes from the line of Aaron Rakers with Wells Fargo.

--------------------------------------------------------------------------------
Aaron Christopher Rakers,  Wells Fargo Securities, LLC, Research Division - MD of IT Hardware & Networking Equipment and Senior Analyst    [14]
--------------------------------------------------------------------------------

          Congratulations on the results. When I look at the numbers, the growth on an absolute basis sequentially in data center was almost 2x or north of 2x, what we've seen in the past as far as the absolute sequential change. Through the course of this quarter, you were pretty clear that you would expect to see an acceleration of growth in the December quarter. I'm just curious of how you think about that going into the April quarter? And how we should think about that growth rate through the course of this year? If you can give us any kind of framework.
And Jensen, just curious, I mean, as you think about the bigger picture, where do you think we stand from an industry perspective today in terms of the amount or the attach rate of GPUs, is it for acceleration in the server market? And where do you think that might be looking out over the next 3 years or so?

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [15]
--------------------------------------------------------------------------------

          Thanks, Aaron. Colette, do you want to go first?

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [16]
--------------------------------------------------------------------------------

          Sure. When we think about going into Q1 and our data center overall growth, we do expect to see continued growth going into Q1. We believe our visibility still remains quite good, and we're expecting that as we move into Q1 and go forward.

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [17]
--------------------------------------------------------------------------------

          Yes. Aaron, I believe that every query on the Internet will be accelerated someday. And at the very core of it, most -- almost all -- queries will have some natural language understanding component to them. Almost all queries will have to sort through and make a recommendation from the trillions of possibilities, filter it down and recommend a handful of answers to your query. Whether it's shopping or movies or just asking for locations or even asking a question, the number of possibilities of all the answers versus what is the best answer needs to be filtered down. And that filtering process is called recommendation. That recommendation system is really complex, and deep learning is going to be involved in all that. That's the first thing. I believe that every query will be accelerated.
The second is, as you know, CPU scaling has really slowed, and there's just no two ways about it. It's not a marketing thing. It's a physics thing. And the ability for CPUs to continue to scale without increasing cost or increasing power has ended. It's called the end of Dennard scaling. And so there has to be another approach. The combination of the emergence of deep learning and the use of artificial intelligence, the amount of computation that's necessary for every single query and the benefit that comes along with that, and the end of Dennard scaling suggests that there needs to be another approach, and we believe that approach is acceleration.
Now our approach for acceleration is fundamentally different than an accelerator. Notice, we never say accelerator, we say accelerated computing. And the reason for that is because we believe that a software-defined data center will have all kinds of different AIs. The AIs will continue to evolve, the models will continue to evolve and get larger, and a software-defined data center needs to be programmable. It is one of the reasons why we've been so successful. And if you go back and think about all the questions that have been asked of me over the last 3 or 4 years around this area, the consistency of the answer has to do with the programmability of architecture, the richness of the software, the difficulties of the compilers, the ever-growing size of the models, the diversity of the models and the advances that these models are creating. And so we're seeing the beginning of a new computing era.
And a fixed function accelerator is simply not the right answer. And so we believe that the future is going to be accelerated. It's going to require an accelerated computing platform, and software richness is really vital, so that these data centers could be software defined. And so I think that we're in the early innings, the early innings, very, very early innings of this new future. And I think that accelerated computing is going to become more and more important.

--------------------------------------------------------------------------------
Operator    [18]
--------------------------------------------------------------------------------

          Your next question comes from the line of Matt Ramsay with Cowen.

--------------------------------------------------------------------------------
Matthew D. Ramsay,  Cowen and Company, LLC, Research Division - MD & Senior Technology Analyst    [19]
--------------------------------------------------------------------------------

          Obviously, congratulations on the data center success. I wanted to ask a little bit, Colette, about the -- you took $100 million out for coronavirus, and I wanted to ask a little bit about how you got to that number. Really 2 pieces. One, if you could remind us maybe in terms of units or revenue, how -- what percentage of your gaming business is within China? And as you look at that $100 million that you pulled out of the guidance, are you thinking about that from a demand disruption perspective? Or are you thinking about it from something in the supply chain that might limit your sales?

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [20]
--------------------------------------------------------------------------------

          Sure. Thanks for the question, Matt. It's really still quite early in terms of trying to figure out what the impact from the overall coronavirus may be, so we're not necessarily precise in terms of our estimate. Yes, our estimate is split between a possible impact on gaming and on data center, split pretty much equally. The $100 million also reflects what may be supply challenges or may be overall demand. But we're still looking at those to get a better understanding of where we think that might be.
In terms of our business and our business makeup, yes, our overall China business for gaming is an important piece. China gaming is about 30% of our overall gaming business. For data center, it moves quite a bit. China is a very important market for us, but it moves from quarter to quarter just based on the overall end customer mix as well as the system builds that they may choose. So it's a little harder to determine.

--------------------------------------------------------------------------------
Operator    [21]
--------------------------------------------------------------------------------

          Your next question comes from the line of Harlan Sur with JPMorgan.

--------------------------------------------------------------------------------
Harlan Sur,  JPMorgan Chase & Co, Research Division - Senior Analyst    [22]
--------------------------------------------------------------------------------

          Congratulations on the strong results and guidance. On gaming -- yes, no problem. Good to see the recent launch of your GeForce NOW service. But on the partnership with Tencent on cloud gaming, seems like Tencent should have a smoother transition to the cloud model. They are the largest gaming company in the world, so they own many of the games. They also have their own data center infrastructure already in place. But how is the NVIDIA team going to be supporting this partnership? Is it going to be [deal your] GeForce NOW hardware framework? Or will you just be supporting them with your standalone GPU products? And when do you expect the service to go mainstream?

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [23]
--------------------------------------------------------------------------------

          Let's see. Tencent is the world's largest game publisher. China represents about 1/3 of the world's gaming, and transitioning to the cloud is going to be a long-term journey. And the reason for that is because Internet connectivity is not consistent throughout the entire market. And a lot of applications still need to be onboarded, and we're working very closely with them. We're super enthusiastic about it. If we're successful long term, we're talking about an extra 1 billion gamers that we might be able to reach. And so I think that this is an exciting opportunity, just a long-term journey.
Now here in the West, we've had a lot more opportunity to refine the connections around the world and working through the data centers, the local hubs as well as people's WiFi routers at home. And so we've been in beta for quite some time, as you know. And here in the West, our platform is open. And we have several hundred games now and we're in the process of onboarding another 1,500 games. We're the only cloud platform that's based on Windows and allows us to be able to bring PC games to the cloud. And so the reach is -- we've had more experience here in the West with reach, and we've had -- we obviously have a lot more games that we can onboard. But I'm super enthusiastic about the partnership we have with Tencent.
Overall, our GeForce NOW -- you guys saw the launch, it's -- the reception has been fantastic, the reviews have been fantastic. Our strategy has 3 components. There's the GeForce NOW service that we provide ourselves. We also have GeForce NOW alliances with telcos around the world to reach the regions around the world that we don't have a presence in. And that is going super well, and I'm excited about that. And then lastly, partnerships with large publishers, for example, like Tencent. And we offer them our platform, of course, and a great deal of software and just a lot of engineering that has to be done in collaboration to refine the service.

--------------------------------------------------------------------------------
Operator    [24]
--------------------------------------------------------------------------------

          Your next question comes from the line of C.J. Muse with Evercore.

--------------------------------------------------------------------------------
Christopher James Muse,  Evercore ISI Institutional Equities, Research Division - Senior MD, Head of Global Semiconductor Research & Senior Equity Research Analyst    [25]
--------------------------------------------------------------------------------

          I guess a question on the gaming side. If I look at your overall revenue guide, it would seem to suggest that you're looking for, I guess, better-than-typical seasonal trends into April. Can you speak to that? And then how are you seeing desktop gaming demand with ray tracing content becoming more available? How should we think about the growth trajectory through 2020? And then just really as a modeling question as part of gaming, with notebook now 1/3 of the revenues, how should we think about kind of the seasonality going into April and July for that part of your business?

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [26]
--------------------------------------------------------------------------------

          Yes. So C.J., I'm going to go first, and then Colette is going to take it home here. So the first part of it is this: in our gaming business, the end market demand is really terrific. It's really healthy. It's been healthy throughout the whole year. And it's pretty clear that RTX is doing fantastic. It's super clear now that ray tracing is the most important new feature of next-generation graphics. We have over 30 games that have been announced, 11 games or so that have been shipped. The pipeline of ray tracing games that are going to be coming out is just really, really exciting. And one more thing about RTX: we finally have taken RTX down to $299, so it's now at the sweet spot of gaming. And so RTX is doing fantastic. The sell-through is fantastic all over the world.
The second part of our business that is changing in gaming is this -- the amount of notebook sales and the success of Nintendo Switch has really changed the profile of our overall gaming business. Our notebook business, as Colette mentioned earlier, has seen double-digit growth for 8 consecutive quarters, and this is unquestionably a new gaming category. Like it's a new game console. This is going to be the largest game console in the world, I believe. And the reason for that is because there are more people with laptops than with any other device. And so the fact that we've been able to get RTX into a thin and light notebook is really a breakthrough. And it's one of the reasons why we're seeing such great success in notebook. Between the notebook business and our Nintendo Switch business, the profile of gaming overall has changed and has become more seasonal. It's more seasonal because devices and systems like notebooks and Switch are built largely in 2 quarters, Q2 and Q3. And they are built largely in Q2 and Q3 because it takes a while to build them and ship them and put them into the hubs around the world. And they tend to be built ahead of the holiday season. And so that's one of the reasons why Q3 will tend to be larger and Q4 will tend to be more seasonal and Q1 will tend to be more seasonal than in the past. But the end demand is fantastic. RTX is doing great. And part of it is just a result of the success of our notebooks. I'm going to hand it over to Colette.

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [27]
--------------------------------------------------------------------------------

          Yes. So with that as a background, when you think about all those different components that are within gaming -- the notebook, the overall Switch and, of course, all of the ray tracing that we have in terms of desktop -- our normal seasonality, as we look at Q1 for gaming with all those 3 pieces, is usually sequentially down from Q4 to Q1. This year, the outlook assumes it will probably be a little bit more pronounced due to the coronavirus. So in total, we're probably looking at Q1 to be a low double-digit sequential decline in gaming.

--------------------------------------------------------------------------------
Operator    [28]
--------------------------------------------------------------------------------

          Your next question comes from the line of Atif Malik with Citi.

--------------------------------------------------------------------------------
Atif Malik,  Citigroup Inc, Research Division - VP and Semiconductor Capital Equipment & Specialty Semiconductor Analyst    [29]
--------------------------------------------------------------------------------

          Good job on results and guide. On the same topic, coronavirus. Colette, I'm a bit surprised that the guidance -- the range on the guidance is not wider versus historic. Can you just talk about why not widen the range? And what went into that $100 million hit from the coronavirus?

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [30]
--------------------------------------------------------------------------------

          So Atif, thanks for the question. Again, it's still very early regarding the coronavirus. Our thoughts are with the employees, the families and others that are in China. Our discussions -- with our supply chain, which is very prominent in the overall Asia region, as well as our AIC makers and our customers -- are about as timely as they can be. And that went into our discussion and our thoughts on the overall guidance that we gave and into our $100 million. We'll just have to see how the quarter comes through, and we'll discuss more when we get to it. But that was our best estimate at this time.

--------------------------------------------------------------------------------
Operator    [31]
--------------------------------------------------------------------------------

          Your next question comes from the line of William Stein with SunTrust.

--------------------------------------------------------------------------------
William Stein,  SunTrust Robinson Humphrey, Inc., Research Division - MD    [32]
--------------------------------------------------------------------------------

          Jensen, I'd love to hear your thoughts as to how you anticipate the inference market playing out. Historically, NVIDIA's had essentially all of the training market and little of the inference market. In the last 1.5 years or so, I think that's changed, and you've done much better in inference. Now you have the T4 in the cloud, you have EGX at the edge. And you have Jetson, I think is what it's called, at the sort of endpoint device. How do you anticipate that market for inference developing across those various positions? And how are you aligning your portfolio for that growth?

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [33]
--------------------------------------------------------------------------------

          Yes. Thanks a lot, Will. Let's see, I think, historically, inference has been a small part of our business because AI was still being developed. Historical AI -- classical machine learning -- wasn't particularly suited for GPUs and wasn't particularly suited for acceleration. It wasn't until deep learning came along that the amount of computation necessary became just extraordinary. And the second factor is the type of AI models that were developed. The types of models related to natural language understanding and conversational AI and recommendation systems require instantaneous response. The faster the answer, the more likely someone is going to click on the answer. And so you know that latency matters a great deal, and it's measurable. The effect on the business is directly measurable.
And so for conversational AI, for example, we've been able to reduce the latency of the entire pipeline -- from speech recognition, to the language processing that, for example, fixes the errors and such and comes up with a recommendation, to text-to-speech, to the voice synthesis. That entire pipeline could take several seconds. We run it so fast that it's possible now for us to process the entire pipeline within a couple of hundred -- 200, 300 -- milliseconds. That is in the realm of interactive conversation.
Beyond that, it's just simply too slow. And so with the combination of AI models that are large and complex moving to inference, moving to production -- and then, secondarily, conversational AI and latency-sensitive models and applications where our GPUs are essential -- moving forward, I think you're going to see a lot more opportunities for us in inference.
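As a rough sketch of the latency-budget idea behind this, the fragment below strings together the three stages described above and checks the total against the roughly 300 millisecond threshold mentioned on the call. The stage functions are placeholders standing in for GPU-accelerated models; the names and structure are assumptions for illustration, not NVIDIA's actual pipeline.

import time

LATENCY_BUDGET_MS = 300  # rough threshold for interactive conversation cited on the call

def run_pipeline(audio, stages):
    # Run each stage in order, timing it, and report whether the total stays in budget.
    result, timings = audio, {}
    for name, fn in stages:
        start = time.perf_counter()
        result = fn(result)
        timings[name] = (time.perf_counter() - start) * 1000.0
    return result, timings, sum(timings.values()) <= LATENCY_BUDGET_MS

# Placeholder stages standing in for speech recognition, language processing and speech synthesis.
stages = [
    ("speech_recognition", lambda audio: "what is the weather in santa clara"),
    ("language_processing", lambda text: "It is sunny and 68 degrees in Santa Clara."),
    ("speech_synthesis", lambda reply: b"<synthesized audio bytes>"),
]

reply, timings, in_budget = run_pipeline(b"<raw audio>", stages)
print(timings, "interactive" if in_budget else "too slow to feel conversational")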
The way to think about that long term is that acceleration is essential because of the end of Dennard scaling. Process technology is going to demand that we compute in a different way. And the way that AI and deep learning have evolved suggests that acceleration on GPUs is just a really phenomenal approach.
Data centers are going to have to be software-defined. And as I think I mentioned earlier to another question, I believe that in the future, the data center will all be accelerated. It will all be running AI models, it will be software-defined and it will be programmable, and having an accelerated computing platform is essential. As you move out to the edge, it really depends on whether your platform is software-defined, whether it has to be programmable or whether it's fixed function. There are many, many devices where the inference work is very specific. It could be something as simple as detecting changes in temperature or changes in sound or detecting motion. Those types of inference models could still be based on deep learning. It's function-specific. You don't have to change it very often, and you're running 1 or 2 models at any given point in time. And so those devices are going to be incredibly cost-effective.
I believe you're going to have AI chips that are $0.50, $1, and you're just going to put them into something and they're going to be doing magical detections. For the types of platforms that we're in, such as self-driving cars and robotics, the software is so complicated, there's so much evolution to come yet, and it's going to constantly get better. Those software-defined platforms are really the ideal targets for us. And so we call it AI at the edge, edge computing devices. One of the edge computing devices I'm very excited about is what people call mobile edge or basically 5G telco edge. That data center will be programmable. We recently announced that we partnered with Ericsson, and we're going to be accelerating the 5G stack. And so that needs to be a software-defined data center. It runs all kinds of applications, including 5G. And those opportunities are fantastic for us.

--------------------------------------------------------------------------------
Operator    [34]
--------------------------------------------------------------------------------

          Your next question comes from the line of Mark Lipacis with Jefferies.

--------------------------------------------------------------------------------
Mark John Lipacis,  Jefferies LLC, Research Division - MD & Senior Equity Research Analyst    [35]
--------------------------------------------------------------------------------

          Jensen, I guess I had a question about how you think about the sustainability of your market position in the data center. In my simplistic view, about 12 years ago, you made an out-of-consensus call to invest in CUDA software and distribute it to universities. Neural networking took off and you were the de facto standard, and here we are right now. And for me, what's interesting to hear is that the demand that you're seeing today for your products is from markets that just developed within the last year. And my question is, how do you think about your investment, your R&D investment strategy, to make sure that you are staying way ahead of the market, of the competition and even your customers who are investing in these markets, too?

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [36]
--------------------------------------------------------------------------------

          Yes. Thanks, Mark. Our company has to live 10 years ahead of the market. And so we have to imagine where the world is going to be in 10 years' time, in 5 years' time, and work our way backwards. Now our company is focused on one singular thing. The simplicity of it is incredible. And that one singular thing is accelerated computing. And accelerated computing is all about the architecture, of course. It's about the complicated systems that we're in because throughput is high. With our acceleration, when we can compute 10x, 20x, 50x, 100x faster than the CPU, all of a sudden, everything becomes a bottleneck. Memory's a bottleneck, networking's a bottleneck, storage is a bottleneck, everything is a bottleneck. And so NVIDIA has to be a supremely good system designer. But the complexity of our stack, which is the software stack above it, is really where the investments over the course of the last 29 or so years have really paid off.
NVIDIA, frankly, has been an accelerated computing company since the day it was born. Our company is constantly trying to expand the number of applications that we can accelerate. Of course, computer graphics was the original one, and we're reinventing it with real-time ray tracing. We have rendering, which is a brand-new application where we're making great progress. I just mentioned 5G acceleration. Recently, we announced genomics computing. And so those are new applications that are really important to the future of computing.
In the area of artificial intelligence, from image recognition to natural language understanding, to conversation, to recommendation systems, to robotics and animation, the number of applications that we're going to accelerate is really, really broad. And each one of them is making tremendous progress and getting more and more complex. And so the question about the sustainability of our company really comes down to 2 dimensions. Let's assume for now that accelerated computing is the path forward, and we surely believe so; there's a lot of evidence, from the laws of physics to the laws of computer science, suggesting that accelerated computing is the right path forward. Then it basically comes down to 2 dimensions. The first dimension is: are we continuing to expand the number of applications that we can accelerate, whether it's AI or computer graphics or genomics or 5G, for example?
And the second is: are those applications getting more impactful and more widely adopted by the ecosystem and the industry, and are they continuing to get more complex? Those dimensions, the number of applications, the impact of those applications and the growth in complexity of those applications, if those dynamics continue to grow, then I think we're going to do a good job and we're going to sustain. And I think, when I spell it out that way, it's basically the equation of growth for our company. I think it's fairly clear that the opportunities ahead are fairly exciting.

--------------------------------------------------------------------------------
Operator    [37]
--------------------------------------------------------------------------------

          Your next question comes from the line of Blayne Curtis with Barclays.

--------------------------------------------------------------------------------
Blayne Peter Curtis,  Barclays Bank PLC, Research Division - Director & Senior Research Analyst    [38]
--------------------------------------------------------------------------------

          Jensen, I just wanted to ask you on the auto side. I think at least one of your customers might have slowed down their program. I'm just kind of curious, as you look out over the next couple of years, what the challenges are if the OEMs are moving slower. And then any perspective on the regulatory side, whether anything has changed there, would be helpful.

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [39]
--------------------------------------------------------------------------------

          I think that the automotive industry is struggling, for all of the reasons that everybody knows. However, the enthusiasm to redefine and reinvent their business model has never been greater. Every single one of them knows now, and they've known for some time, that autonomous capability is really the vehicle to do that. They need to be tech companies. Every car company wants to be a tech company; every car company needs to be software-defined. And the platform by which to do so is an electric vehicle with autonomous autopilot capability. That car has to be software-defined. This is their future, and they're racing to get there.
And so although the automotive industry is struggling in the near term, their opportunity has never been better, in my opinion. The future of AV is more important than ever, and the opportunity is very real. The benefits of autonomy, whether it's safety, utility, cost reduction or productivity, have never been more clear. And so I'm as enthusiastic as ever about autonomous vehicles, and the projects that we're working on are moving ahead. I feel badly about the near-term challenges of the automotive industry and whatever sales slowdown in China they're experiencing, but the industry is as clearheaded about the importance of AV as ever.

--------------------------------------------------------------------------------
Operator    [40]
--------------------------------------------------------------------------------

          I will now turn the call back over to Jensen for any closing remarks.

--------------------------------------------------------------------------------
Jensen Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [41]
--------------------------------------------------------------------------------

          We had an excellent quarter with strong demand for NVIDIA RTX graphics and NVIDIA AI platforms and record data center revenue. NVIDIA RTX is reinventing computer graphics, and the market's response is excellent, driving a powerful upgrade cycle in both gaming and professional graphics while opening whole new opportunities for us to serve the huge community of independent creative workers and social content creators, as well as new markets in rendering and cloud gaming. Our data center business is enjoying a new wave of growth, powered by 3 key trends in AI. Natural language understanding, conversational AI and deep recommenders are changing the way people interact with the Internet. Public cloud demand for AI is growing rapidly. And as AI shifts from development to production, our inference business is gaining momentum. We'll be talking a lot more about these key trends and much more at next month's GTC conference in San Jose. Come join me. You won't be disappointed. Thanks, everyone.

--------------------------------------------------------------------------------
Operator    [42]
--------------------------------------------------------------------------------

          Ladies and gentlemen, this concludes today's conference call. Thank you for participating. You may now disconnect.

--------------------------------------------------------------------------------
Definitions
--------------------------------------------------------------------------------
PRELIMINARY TRANSCRIPT: "Preliminary Transcript" indicates that the 
Transcript has been published in near real-time by an experienced 
professional transcriber.  While the Preliminary Transcript is highly 
accurate, it has not been edited to ensure the entire transcription 
represents a verbatim report of the call.

EDITED TRANSCRIPT: "Edited Transcript" indicates that a team of professional 
editors have listened to the event a second time to confirm that the 
content of the call has been transcribed accurately and in full.

--------------------------------------------------------------------------------
Disclaimer
--------------------------------------------------------------------------------
Thomson Reuters reserves the right to make changes to documents, content, or other 
information on this web site without obligation to notify any person of 
such changes.

In the conference calls upon which Event Transcripts are based, companies 
may make projections or other forward-looking statements regarding a variety 
of items. Such forward-looking statements are based upon current 
expectations and involve risks and uncertainties. Actual results may differ 
materially from those stated in any forward-looking statement based on a 
number of important factors and risks, which are more specifically 
identified in the companies' most recent SEC filings. Although the companies 
may indicate and believe that the assumptions underlying the forward-looking 
statements are reasonable, any of the assumptions could prove inaccurate or 
incorrect and, therefore, there can be no assurance that the results 
contemplated in the forward-looking statements will be realized.

THE INFORMATION CONTAINED IN EVENT TRANSCRIPTS IS A TEXTUAL REPRESENTATION
OF THE APPLICABLE COMPANY'S CONFERENCE CALL AND WHILE EFFORTS ARE MADE TO
PROVIDE AN ACCURATE TRANSCRIPTION, THERE MAY BE MATERIAL ERRORS, OMISSIONS,
OR INACCURACIES IN THE REPORTING OF THE SUBSTANCE OF THE CONFERENCE CALLS.
IN NO WAY DOES THOMSON REUTERS OR THE APPLICABLE COMPANY ASSUME ANY RESPONSIBILITY FOR ANY INVESTMENT OR OTHER
DECISIONS MADE BASED UPON THE INFORMATION PROVIDED ON THIS WEB SITE OR IN
ANY EVENT TRANSCRIPT. USERS ARE ADVISED TO REVIEW THE APPLICABLE COMPANY'S
CONFERENCE CALL ITSELF AND THE APPLICABLE COMPANY'S SEC FILINGS BEFORE
MAKING ANY INVESTMENT OR OTHER DECISIONS.
--------------------------------------------------------------------------------
Copyright 2020 Thomson Reuters. All Rights Reserved.
--------------------------------------------------------------------------------