
Arista Networks Inc

Exchange: NYSE · Sector: Technology · Industry: Computer Hardware

Arista Networks is an industry leader in data-driven, client-to-cloud networking for large AI, data center, campus, and routing environments. Its award-winning platforms deliver availability, agility, automation, analytics, and security through an advanced network operating stack.

Did you know?

Trading 12% above its estimated fair value of $151.90.

Current Price

$172.70

-0.01%

GoodMoat Value

$151.90

12.0% overvalued
Profile

  • Market Cap: $217.48B
  • EV: $160.37B
  • Shares Out: 1.26B
  • Revenue: $9.01B

Valuation (TTM)

  • P/E: 61.93
  • P/B: 17.58
  • P/Sales: 24.15
  • EV/EBITDA: 47.83
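The profile and valuation figures above are mutually consistent; a quick sanity check in Python (all inputs taken from the table, with small differences explained by rounding in the displayed ratios):

```python
# Sanity-check the profile/valuation figures (all values from the page).
price = 172.70            # current price, USD
shares_out = 1.26e9       # shares outstanding
market_cap = 217.48e9     # reported market cap, USD
revenue_ttm = 9.01e9      # TTM revenue, USD

# Market cap should equal price x shares outstanding (within rounding).
implied_mcap = price * shares_out
assert abs(implied_mcap - market_cap) / market_cap < 0.01

# Price/Sales is market cap divided by TTM revenue.
p_sales = market_cap / revenue_ttm
assert abs(p_sales - 24.15) < 0.05

# TTM EPS implied by the reported P/E of 61.93.
implied_eps = price / 61.93
print(f"implied TTM EPS: ${implied_eps:.2f}")
```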

Arista Networks Inc (ANET) — Q1 2024 Earnings Call Transcript

Apr 4, 2026 · 22 speakers · 8,409 words · 68 segments

AI Call Summary (AI-generated)

The 30-second take

Arista had a strong start to the year, beating its own expectations and raising its annual growth forecast. The company is seeing increased business across all its customer types and is becoming more confident about its future sales, especially related to building networks for artificial intelligence.

Key numbers mentioned

  • Q1 2024 revenue was $1.57 billion.
  • Non-GAAP earnings per share was $1.99.
  • Non-GAAP gross margin was 64.2%.
  • International revenue contribution was 20%.
  • AI networking revenue goal by 2025 is $750 million.
  • Total Addressable Market (TAM) is $60 billion.
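These headline numbers reconcile with the detail given later in the call ($637.7 million non-GAAP net income on 319.9 million diluted shares); a quick check:

```python
# Cross-check the headline EPS against figures reported later in the call.
net_income = 637.7e6        # non-GAAP net income, USD
diluted_shares = 319.9e6    # diluted share count
revenue = 1.571e9           # Q1 2024 revenue, USD

eps = net_income / diluted_shares
assert round(eps, 2) == 1.99            # matches the reported $1.99

net_margin = net_income / revenue
assert abs(net_margin - 0.406) < 0.001  # matches the reported 40.6% of revenue
```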

What management is worried about

  • The deferred revenue balance can move significantly on a quarterly basis independent of underlying business drivers.
  • The timing of when AI pilots scale into production depends on factors like facility construction, GPU availability, and network performance.
  • The company expects some continued growth in inventory on a quarter-by-quarter basis as it receives components.

What management is excited about

  • Visibility to new AI and cloud projects is improving, and enterprise and provider activity continues to progress well.
  • The company is projecting above its Analyst Day range, now expecting 12% to 14% annual growth in 2024.
  • Arista is migrating four major AI Ethernet cluster wins from trials to pilots, connecting thousands of GPUs this year.
  • Ethernet is proving to offer at least a 10% improvement in job completion performance versus InfiniBand in AI workloads.
  • Customer activity in Q1 was high and better than normally seen, leading to increased confidence.

Analyst questions that hit hardest

  1. Samik Chatterjee (JPMorgan) — Implied Second-Half Growth Rate: Management responded by stating their numbers are getting larger and the increased annual guide itself shows confidence, but they will guide quarter-by-quarter.
  2. George Notter (Jefferies) — Ethernet vs. InfiniBand Performance Claim: Management gave a detailed, technical answer about job completion time in full clusters and future improvements with the Ultra Ethernet Consortium.
  3. Amit Daryanani (Evercore ISI) — CEO's Commitment and Executive Transition: Jayshree Ullal gave a personal response about her long-term commitment and hope that the departing COO might return someday.

The quote that matters

We are now projecting above our Analyst Day range of 10% to 12% annual growth in 2024.

Jayshree Ullal — CEO

Sentiment vs. last quarter

The tone is more confident and forward-looking, specifically shifting from caution about cloud spending and limited visibility to explicitly raising the annual growth forecast based on improved visibility and momentum across all customer sectors.

Original transcript

Operator

Welcome to the First Quarter 2024 Arista Networks Financial Results Earnings Conference Call. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section at the Arista website following this call. Ms. Liz Stine, Arista's Director of Investor Relations, you may begin.

Liz Stine (Director of Investor Relations)

Thank you, operator. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal first quarter ending March 31, 2024. If you would like a copy of this release, you can access it online at our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the second quarter of the 2024 fiscal year, longer-term financial outlooks for 2024 and beyond. Our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on this call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release. With that, I will turn the call over to Jayshree.

Jayshree Ullal (CEO)

Thank you, Liz. Thank you, everyone, for joining us this afternoon for our First Quarter 2024 Earnings Call. Amidst all the network consolidation, Arista is looking to establish ourselves as the pure-play networking innovator for the next era, addressing at least a $60 billion TAM in data-driven client-to-cloud AI networking. In terms of Q1 specifics, we delivered revenue of $1.57 billion for the quarter with a non-GAAP earnings per share of $1.99. Services and Software Support Renewals contributed strongly at approximately 16.9% of revenue. Our non-GAAP gross margins of 64.2% were influenced by improved supply chain and inventory management, as well as a favorable mix of the enterprise. International contribution for the quarter registered at 20% with the Americas strong at 80%. As we kick off 2024, I'm so proud of the Arista team’s work and our consistent execution. We have been fortunate to build a seasoned management team for the past 10 to 15 years. Our core founders are very engaged in the company for the past 20 years. Ken is still actively programming and writing code, while Andy is our full-time chief architect for next-generation AI, silicon, and optics initiatives. Hugh Holbrook, our recently promoted Chief Development Officer, is driving our major platform initiatives in tandem with John McCool and Alex on the hardware side. This engineering team is one of the best in tech and networking that I have ever had the pleasure of working with. On behalf of Arista, I would like to express our sincere gratitude for Anshul Sadana's 16-plus wonderful years of instrumental service to the company in a diverse set of roles. I know he will always remain a well-wisher and supporter of the company. But Anshul, I'd like to invite you to say a few words.

Anshul Sadana (Executive)

Thank you, Jayshree. The Arista journey has been a very special one. We've come a long way from our startup base to over an $80 billion company today. Every milestone, every event, the ups and downs are all etched in my mind. I've had a multitude of roles and learned and grown more than what I could have ever imagined. I have decided to take a break and spend more time with family, especially when the kids are young. I'm also looking at exploring different areas in the future. I want to thank all of you on the call today, our customers, our investors, our partners, and all the well wishes over these years. Arista isn't just a workplace; it's family to me. It's the people around you that make life fun. Special thanks to Arista leadership, Chris, Ashwin, John McCool, Mark Foss, Ita, and Chantelle, Marc Taxay, Hugh Holbrook, Ken Duda, and many more. Above all, there are two very special people I want to thank: Andy Bechtolsheim for years of vision, passion, guidance, and listening to me. And of course, Jayshree. She hasn't been just my manager, but also my mentor and coach for over 15 years. Thank you for believing in me. I will always continue to be an Arista well-wisher. Back to you, Jayshree.

Jayshree Ullal (CEO)

Anshul, thank you for that very genuine and heartfelt expression of your huge contributions to Arista. It gives me goosebumps hearing your nostalgic memories. We will miss you and hope someday you will return home. At this time, Arista will not be replacing the COO role and will instead flatten the organization. We will be leveraging the deep bench strength of our executives who stepped up to drive our new Arista 2.0 initiatives. In particular, John McCool, our Chief Platform Officer; and Ken Kiser, our Group Vice President, have taken on expanded responsibility for our cloud, AI, tech initiatives, operations, and sales. On the non-cloud side, two seasoned executives have been promoted: Ashwin Kohli, Chief Customer Officer; and Chris Schmidt, Chief Sales Officer, will together address the global enterprise and provider opportunity. Our leaders have grown up in Arista, with long tenures of a decade or more. We are quite pleased with the momentum across all three of our sectors: Cloud and AI Titans, Enterprise, and Providers. Customer activity is high as Arista continues to impress our customers and prospects with our undeniable focus on quality and innovation. As we build our programmable network on the basis of our Universal Leaf/Spine topology, we are also constructing a suite of overlays such as zero-touch automation, security, telemetry, and observability. I would like to invite Ken Duda, our Founder, CTO, and recently elected to the Arista Board, to describe our enterprise NaaS strategy, as we drive toward our enterprise campus goal of $750 million in 2025. Over to you, Ken.

Kenneth Duda (CTO)

Thank you, Jayshree, and thanks, everyone, for being here. I'm Ken Duda, CTO of Arista Networks. Excited to talk to you today about NetDL, the Arista Network Data Link and how it supports our Network-as-a-Service strategy. From the inception of networking decades ago, networking has involved rapidly changing data. Data about how the network is operating, which paths through the network are best, and how the network is being used. But historically, most of this data was simply discarded as the network changes state, and that which was collected can be difficult to interpret because it lacks context. Network addresses and port numbers by themselves provide little insight into what users are doing or experiencing. Recent developments in AI have proved the value of data. But to take advantage of these breakthroughs, you need to gather and store large data sets labeled suitably for machine learning. Arista is solving this problem with NetDL; we continually monitor every device, not simply taking snapshots, but rather streaming every network event, every counter, every piece of data in real-time, archiving a full history in NetDL. Alongside this device data, we also collect flow data and inbound network telemetry data gathered by our switches. Then we enrich this performance data further with user, service, and application layer data from external sources, enabling us to understand not just how each part of the network is performing, but also which users are using the network for what purposes and how the network behavior is influencing their experience. NetDL is a foundational part of the EOS stack, enabling advanced functionality across all of our use cases. For example, in AI fabrics, NetDL enables fabric-wide visibility, integrating network data and NIC data to enable operators to identify misconfigurations or misbehaving hosts and pinpoint performance bottlenecks. But for this call, I want to focus on how NetDL enables Network-as-a-Service. 
Network-as-a-Service or NaaS is Arista's strategy for up-leveling our relationship with our customers, taking us beyond simply providing network hardware and software by also providing customers or service provider partners with tools for building and operating services. The customer selects the service model, configures service instances, and Arista's CV NaaS handles the rest: equipment selection, deployment, provisioning, building, monitoring, and troubleshooting. In addition, CV NaaS provides end-user self-service, enabling customers to manage their service systems, provision new endpoints, provision new virtual topologies, set traffic prioritization policies, set access rules, and get visibility into their use of the service and its performance. One can think of NaaS as applying cloud computing principles to the physical network: reusable design patterns, scale, autonomous operations, multi-tenant from top to bottom, with cost-effective automated end-user self-service. And we couldn't get to the starting line without NetDL, as NetDL provides a database foundation of NaaS service deployment and monitoring. Now NaaS is not a separate SKU, but really refers to a collection of functions in addition. For example, Arista Validated Designs or AVD is a provisioning system; it's an early version of our NaaS Service Instance Configuration tool. Our AGNI services provide global, location-independent identity management needed to identify customers within NaaS. Our UNO product, or Universal Network Observability, will ultimately become the service monitoring element of NaaS. And finally, our NaaS solution has security integrated through our ZTN or Zero Trust Networking product that we showcased at RSA this week. Thus, our NaaS vision simultaneously represents a strategic business opportunity for us, while also serving as a guiding principle for our immediate CloudVision development efforts. 
While we are really excited about the future here, our core promise to our investors and customers is unchanging and uncompromised: we will always put quality first. We are incredibly proud of the amount of success customers have had deploying our products because they really work. And as we push hard building sophisticated new functions in the NetDL and NaaS areas, we will never put our customers' networks at risk by cutting corners on quality. Thank you.
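As a loose illustration of the enrichment Ken describes, joining streamed device counters with application-layer context so a raw counter becomes actionable, here is a minimal Python sketch. The record fields, the `app_context` mapping, and `enrich` are hypothetical illustrations, not Arista's NetDL API:

```python
# Hypothetical sketch of NetDL-style enrichment: device telemetry joined with
# user/application context. All names and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class DeviceEvent:
    switch: str
    interface: str
    drops: int          # packet drops observed on this interface
    ts: float           # event timestamp

# External context keyed by (switch, interface): which app/user rides this port.
app_context = {("spine1", "Ethernet49"): {"app": "training-job-7", "user": "ml-team"}}

def enrich(event: DeviceEvent) -> dict:
    """Attach application-layer context so a counter becomes actionable."""
    ctx = app_context.get((event.switch, event.interface), {})
    return {"switch": event.switch, "interface": event.interface,
            "drops": event.drops, "ts": event.ts, **ctx}

record = enrich(DeviceEvent("spine1", "Ethernet49", drops=12, ts=1714000000.0))
assert record["app"] == "training-job-7"   # drop counter now tied to a workload
```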

Jayshree Ullal (CEO)

Thank you, Ken, for your tireless execution in the typical Arista way. In an era characterized by stringent cybersecurity, observability is an essential perimeter and imperative. We cannot secure what we cannot see. We launched CloudVision and UNO in February 2024 based on the EOS Network Data Link Foundation that Ken just described for universal network observability. CloudVision UNO delivers fault detection, correction, and recovery. It also brings deep analysis to provide a composite picture of the entire network with improved discovery of applications, hosts, workloads, and IT systems of record. Okay. Switching to AI. Of course, no call is complete without that. As generative AI training tasks evolve, they are made up of many thousands of individual iterations. Any slowdown due to the network can critically impact application performance, creating inefficient wait states and idling away processor performance by 30% or more. The time taken to reach coherence, known as job completion time, is an important benchmark achieved by building proper scale-out AI networking to improve the utilization of these precious and expensive GPUs. Arista continues to have customer success across our innovative AI for networking platforms. In a recent blog from one of our large Cloud and AI Titan customers, Arista was highlighted for building a 24,000-node GPU cluster based on our flagship 7800 AI Spine. This cluster tackles complex AI training tasks that involve a mix of model and data parallelization across thousands of processors. Ethernet is proving to offer at least a 10% improvement in job completion performance across all packet sizes versus InfiniBand. We are witnessing an inflection of AI networking and expect this to continue throughout the year and decade. Ethernet is emerging as a critical infrastructure across both front-end and back-end AI data centers.
AI applications simply cannot work in isolation and demand seamless communication among the compute nodes, consisting of back-end GPUs and AI accelerators as well as the front-end nodes like the CPUs, alongside storage and IP/WAN systems as well. If you recall, in February, I shared with you that we are progressing well in four major AI Ethernet clusters that we won versus InfiniBand recently. In all four cases, we are now migrating from trials to pilots, connecting thousands of GPUs this year, and we expect production in the range of 10,000 to 100,000 GPUs in 2025. Ethernet at scale is becoming the de facto network choice for scale-out AI training workloads. A good AI network needs a good data strategy, delivered by our highly differentiated EOS and network data lake architecture. We are, therefore, becoming increasingly constructive about achieving our AI target of $750 million in 2025. In summary, as we continue to set the direction of Arista 2.0 networking, our visibility to new AI and cloud projects is improving, and our enterprise and provider activity continues to progress well. We are now projecting above our Analyst Day range of 10% to 12% annual growth in 2024. And with that, I'd like to turn it over to Chantelle for the very first time as Arista's CFO, to review financial specifics and tell us more. Warm welcome to you, Chantelle.
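The relationship Jayshree describes between network wait, GPU idle time, and job completion time can be put in back-of-envelope form. The numbers below are illustrative, not from the call:

```python
# Back-of-envelope for the dynamic described on the call: each training
# iteration alternates GPU compute with network communication, so network
# wait directly idles the GPUs. Numbers here are illustrative.
def gpu_utilization(compute_ms: float, network_wait_ms: float) -> float:
    return compute_ms / (compute_ms + network_wait_ms)

# With communication stalls around 43% of compute time, utilization drops
# to ~70%, i.e. the ">30% idle" figure mentioned on the call.
util = gpu_utilization(compute_ms=70.0, network_wait_ms=30.0)
assert abs(util - 0.70) < 1e-9

# Cutting network wait improves job completion time (JCT) proportionally.
def jct(iterations: int, compute_ms: float, network_wait_ms: float) -> float:
    return iterations * (compute_ms + network_wait_ms)

baseline = jct(10_000, 70.0, 30.0)
improved = jct(10_000, 70.0, 20.0)
assert improved / baseline == 0.90   # a 10% JCT improvement
```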

Chantelle Breithaupt (CFO)

Thank you, Jayshree, and good afternoon. The analysis of our Q1 results and our guidance for Q2 2024 is based on non-GAAP and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other nonrecurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. Total revenues in Q1 were $1.571 billion, up 16.3% year-over-year and above the upper end of our guidance of $1.52 billion to $1.56 billion. This year-over-year growth was led by strength in the enterprise vertical with cloud doing well as expected. Services and subscription software contributed approximately 16.9% of revenue in the first quarter, down slightly from 17% in Q4. International revenues for the quarter came in at $316 million or 20.1% of total revenue, down from 22.3% in the last quarter. This quarter-over-quarter reduction reflects the quarterly volatility and includes the impact of an unusually high contribution from our EMEA in-region customers in the prior quarter. In addition, we continue to see strong revenue growth in the U.S. with solid contributions from our Cloud Titan and Enterprise customers. Gross margin in Q1 was 64.2%, above our guidance of approximately 62%. This is down from 65.4% last quarter and up from 60.3% in Q1 FY '23. The year-over-year margin accretion was driven by three key factors: Supply chain productivity gains led by the efforts of John McCool, Mike Capes and his operational team, a stronger mix of Enterprise business, and a favorable revenue mix between product, services, and software. Operating expenses for the quarter were $265 million or 16.9% of revenue, up from last quarter at $262.7 million. R&D spending came in at $164.6 million or 10.5% of revenue, down slightly from $165 million last quarter. 
This reflected increased head count offset by lower new product introduction costs in the period due to the timing of prototypes and other costs associated with our next-generation products. Sales and marketing expense was $83.7 million or 5.3% of revenue, compared to $83.4 million last quarter, with increased head count costs offset by discretionary spending that is delayed until later this year. Our G&A costs came in at $16.7 million or 1.1% of revenue, up from 0.9% of revenue in the prior quarter. Income from operations for the quarter was $744 million or 47.4% of revenue. Other income for the quarter was $62.6 million, and our effective tax rate was 20.9%. This resulted in net income for the quarter of $637.7 million or 40.6% of revenue. Our diluted share number was 319.9 million shares, resulting in a diluted earnings per share number for the quarter of $1.99, up 39% from the prior year. Now turning to the balance sheet. Cash, cash equivalents, and investments ended the quarter at approximately $5.45 billion. During the quarter, we repurchased $62.7 million of our common stock. And in April, we repurchased an additional $82 million for a total of $144.7 million at an average price of $269.80 per share. We have now completed share repurchases under our existing $1 billion Board authorization, whereby we repurchased 8.5 million shares at an average price of $117.20 per share. In May 2024, our Board of Directors authorized a new $1.2 billion stock repurchase program, which commences in May 2024 and expires in May 2027. The actual timing and amount of future repurchases will be dependent upon market and business conditions, stock price, and other factors. Now turning to operating cash performance for the first quarter. We generated approximately $513.8 million of cash from operations in the period, reflecting strong earnings performance, partially offset by ongoing investments in working capital.
DSOs (days sales outstanding) came in at 62 days, up from 61 days in Q4, driven by significant end-of-quarter service renewals. Inventory turns were 1, flat to last quarter. Inventory increased slightly to $2 billion in the quarter, up from $1.9 billion in the prior period, reflecting the receipt of components from our purchase commitments and an increase in switch-related finished goods. Our purchase commitments at the end of the quarter were $1.5 billion, down from $1.6 billion at the end of Q4. We expect this number to level off as lead times continue to improve but will remain somewhat volatile as we ramp up new product introductions. Our total deferred revenue balance was $1.663 billion, up from $1.506 billion in Q4 of fiscal year 2023. The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis. Our product deferred revenue balance decreased by approximately $25 million versus last quarter. We expect 2024 to be a year of significant new product introductions, new customers, and expanded use cases. These trends may result in increased customer-specific acceptance clauses and increase the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis independent of underlying business drivers. Accounts payable days were 36 days, down from an unusually high 75 days in Q4, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $9.4 million. Now turning to our outlook for the second quarter and beyond. I have now had a quarter working with Jayshree, the leadership team, and the broader Arista ecosystem, and I am excited about both our current and long-term opportunities in the markets that we serve. The passion for innovation, our agile business operating model, and employee commitment to our customers' success are foundational.
We are pleased with the momentum being demonstrated across all the segments: Enterprise, Cloud, and Providers. With this, we are raising our revenue guidance to an outlook of 12% to 14% growth for fiscal year 2024. On the gross margin front, given the expected end-customer mix combined with continued operational improvements, we remain with the fiscal year 2024 outlook of 62% to 64%. Now turning to spending and investments. We continue to monitor both the overall macro environment and overall market opportunities, which will inform our investment prioritization as we move through the year. This will include a focus on targeted hires and leadership roles, R&D, and the go-to-market team as we see opportunities to acquire strong talent. On the cash front, while we will continue to focus on supply chain and working capital optimization, we expect some continued growth in inventory on a quarter-by-quarter basis, as we receive components from our purchase commitments. With these sets of conditions and expectations, our guidance for the second quarter, which is based on non-GAAP results and excludes any non-cash stock-based compensation impacts and other nonrecurring items, is as follows: revenues of approximately $1.62 billion to $1.65 billion, gross margin of approximately 64%, and operating margin at approximately 44%. Our effective tax rate is expected to be approximately 21.5% with diluted shares of approximately 320.5 million shares. I will now turn the call back to Liz for Q&A.
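Several of the CFO's reported figures cross-check against one another; a quick verification, with every input taken from the remarks above:

```python
# Cross-checks on the CFO's reported figures (all numbers from the call).

# Completed $1B authorization: 8.5M shares at a $117.20 average price.
assert abs(8.5e6 * 117.20 - 1.0e9) / 1.0e9 < 0.01      # ~$996M, i.e. ~$1B

# Q1 ($62.7M) plus April ($82M) repurchases sum to the stated $144.7M.
assert abs(62.7e6 + 82e6 - 144.7e6) < 1e5

# Q1 operating margin: $744M income on $1.571B revenue -> 47.4%.
assert round(744 / 1571 * 100, 1) == 47.4
```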

Liz Stine (Director of Investor Relations)

Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away.

Operator

And your first question comes from the line of Atif Malik with Citi.

Unknown Analyst

It's Adrienne for Atif. I was hoping you could comment on your raised expectations for the full year with regards to customer mix. It sounds like from your gross margin guidance, you're seeing a higher contribution from Enterprise, but I was hoping you could comment on the dynamics you're seeing with your Cloud Titans.

Jayshree Ullal (CEO)

Yes. So as Chantelle and I described, when we gave our guidance in November, we didn't have much visibility beyond 3 to 6 months, and so we had to go with that. The activity in Q1 alone, and I believe it will continue in the first half, has been much beyond what we expected. And this is true across all three sectors: Cloud and AI Titans, Providers, and Enterprise. So we're feeling good about all three, and therefore, have raised our guidance earlier than we probably would have done in May. I think we would have ideally liked to look at two quarters. Chantelle, what do you think, but I think we felt good enough.

Chantelle Breithaupt (CFO)

Yes. No, I think we saw because of the diversified momentum and the mix of the momentum that gave us confidence.

Operator

And your next question comes from the line of Samik Chatterjee with JPMorgan.

Samik Chatterjee (Analyst)

I guess, Jayshree and Chantelle, I appreciate the sort of raise in guidance for the full year here. But when I look at it on a half-over-half basis in terms of what you're implying. If I'm doing the math correct, you're implying about a sort of 5%, 6% half-over-half growth, which when I go back and look at previous years, you've probably only seen that in one year out of the last 5 or 6 that you've been in that sort of range or below that. Every other year it's been better than that. I'm just wondering, you mentioned the Q1 activity that you've seen across the board. Why are we not seeing a bit more of a half-over-half uptick than in sort of the momentum in the back half?

Jayshree Ullal (CEO)

It's like anything else. Our numbers are getting larger and larger. So activity has to translate to larger numbers. So of course, if we see it improve even more, we'll guide appropriately for the quarter. But at the moment, we're feeling very good just increasing our guide from 10% to 12% to 12% to 14%. As you know, Arista doesn't traditionally do that so early in the year. So please read that as confidence, but cautiously confident or optimistically confident, but nevertheless confident.
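The half-over-half math behind Samik's question can be reconstructed roughly as follows. The FY2023 revenue base of about $5.86 billion is an assumption (consistent with Q1's 16.3% year-over-year growth, but not stated on the call):

```python
# Rough reconstruction of the implied half-over-half growth in the raised guide.
# Assumption (not stated on the call): FY2023 revenue of ~$5.86B.
fy23 = 5.86e9
q1 = 1.571e9                    # reported Q1 2024 revenue
q2 = (1.62e9 + 1.65e9) / 2      # midpoint of the Q2 guidance range
h1 = q1 + q2

for growth in (0.12, 0.14):     # the raised full-year guide
    fy24 = fy23 * (1 + growth)
    h2 = fy24 - h1
    print(f"{growth:.0%} growth -> H2/H1 = {h2 / h1 - 1:.1%}")
```

Under these assumptions the implied second-half growth lands at roughly 5% to 8% over the first half, in line with the "5%, 6%" Samik cites at the low end of the guide.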

Operator

And your next question comes from the line of Ben Reitzes with Melius Research. We will move on to the next question from George Notter with Jefferies.

George Notter (Analyst)

I would like to focus on something you mentioned earlier. You stated that Ethernet was 10% better than InfiniBand. My notes are lacking detail on this. Could you clarify what comparison you were referring to with InfiniBand? I would really appreciate any additional information on this topic.

Jayshree Ullal (CEO)

Certainly, George. Historically, as you know, when you look at InfiniBand and Ethernet in isolation, there are a lot of advantages of each technology. Traditionally, InfiniBand has been considered lossless and Ethernet is considered to have some loss properties. However, when you actually put a full GPU cluster together along with the optics and everything, and you look at the coherence of the job completion time across all packet sizes, data has shown (and this is data we have gotten from third parties, including Broadcom) that in just about every packet size in a real-world environment, when comparing those technologies, the job completion time of Ethernet was approximately 10% faster. So you can look at these things in silos. You can look at it in a practical cluster, and in a practical cluster we are already seeing improvements on Ethernet. Now don't forget, this is just Ethernet as we know it today. Once we have the Ultra Ethernet Consortium and some of the improvements you're going to see on packet spraying and dynamic load balancing and congestion control, I believe those numbers will get even better.
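A sketch of the kind of comparison described here: job completion time (JCT) measured per packet size, Ethernet versus InfiniBand, on a full cluster. The values are made up for illustration; the call cites third-party data, not these numbers:

```python
# Illustrative JCT comparison by packet size (seconds). Values are invented
# to show the shape of the claim: ~10% lower JCT at every packet size.
jct_infiniband = {256: 112.0, 1024: 104.0, 4096: 100.0}
jct_ethernet   = {256: 100.0, 1024:  94.0, 4096:  90.0}

for size in jct_infiniband:
    speedup = 1 - jct_ethernet[size] / jct_infiniband[size]
    assert speedup >= 0.095          # ~10% or better at each packet size
    print(f"{size}B packets: Ethernet JCT {speedup:.1%} lower")
```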

George Notter (Analyst)

Got it. I assume you're talking about RoCE here as opposed to just straight up Ethernet, is that correct?

Jayshree Ullal (CEO)

In all cases, right now, pre-UEC, we're talking about RDMA over Ethernet, exactly: RoCE version 2, which is the most deployed NIC you have in most scenarios. But with optimized RoCE, we're seeing the 10% improvement. Imagine when we go to UEC.

George Notter (Analyst)

I know you guys are also working on your own version of Ethernet, presumably, it blends into the UEC standard over time. But what do you think the differential might be there relative to InfiniBand? Do you have a sense on what that might look like?

Jayshree Ullal (CEO)

We have not finalized our metrics yet, but we are not developing our own version of Ethernet. Instead, we are focused on creating a UEC compatible and compliant version of Ethernet. There are two key components to this: what we implement on the switch and what others do on the NIC. Regarding the switch, we have already designed the Etherlink architecture, which accounts for buffering, congestion control, and load balancing, but we will need to implement some software enhancements. We are particularly looking for improvements in the NICs, especially at 400 and 800, as these advancements will enhance performance from the server to the switch. It's essential for both parts to collaborate effectively. Thank you, George.

Operator

Your next question comes from the line of Ben Reitzes with Melius Research.

Benjamin Reitzes (Analyst)

I was wondering if you can characterize how you're seeing NVIDIA in the market right now. And are you seeing yourselves go more head-to-head? How do you see that evolving? And if you don't mind also, I think NVIDIA moves to a more systems-based approach potentially with Blackwell. How do you see that impacting your competitiveness with NVIDIA?

Jayshree Ullal (CEO)

Yes. Thanks, Ben, for a loaded question. First of all, I want to thank NVIDIA and Jensen. I think it's important to understand that we wouldn't have a massive AI networking opportunity if NVIDIA didn't build some fantastic GPUs. So yes, we see them in the market all the time, mostly using our networks for their GPUs, and NVIDIA is the market leader there, and I think they've created an incremental market opportunity for us that we are very, very responsive to. Now do we see them in the market? Of course, we do. I see them on GPUs. We also see them on the RoCE or RDMA Ethernet NIC side. And then sometimes we see them, obviously, when they're pushing InfiniBand, which has been, for the most part, the de facto network of choice. You might have heard me say last year or the year before, I was outside looking into this AI networking. But today, we feel very pleased that we are able to be the scale-out network for NVIDIA's GPUs and NICs based on Ethernet. We don't see NVIDIA as a direct competitor yet on the Ethernet side. I think it's 1% of their business. It's 100% of our business. So we don't worry about that overlap at all. And we think we've got 20 years of experience, from founding to now, to make our Ethernet switching better and better on both the front end and back end. So we're very confident that Arista can build the scale-out network and work with NVIDIA's scale-up GPUs. Thank you, Ben.

Operator

Your next question comes from the line of Amit Daryanani with Evercore ISI.

Amit Daryanani, Analyst

I guess, Jayshree, given some of the executive transitions you've seen at Arista, can you just perhaps talk about what you can, the discussion you've had with the Board around your desire, your commitment to remain the CEO? Does anything change there? That would be really helpful. And then if I just go back to the job completion data that you talked about, given what you just said and the expected improvement, what are the reasons a customer would still use InfiniBand versus switching more aggressively with Ethernet?

Jayshree Ullal, CEO

Well, first of all, you heard Anshul. I'm sorry to see Anshul decide to do other things. I hope he comes back. We've had a lot of executives make a U-turn over time, and we call them boomerangs. So I certainly hope that's true with Anshul. But we have a very strong bench. And we've been blessed to have a very constant bench for the last 15 years, which is very rare in our industry and in Silicon Valley. So while we're sorry to see Anshul make a personal decision to take a break, we know he'll remain a well-wisher. And we know the bench strength below Anshul will now step up to do greater things. As for my commitment to the Board, I have committed for multiple years. I think it's the wrong order. I wish Anshul had stayed and I had retired, but I'm committed to staying here for a long time.

Operator

And your next question comes from the line of Antoine Chkaiban with New Street Research.

Antoine Chkaiban, Analyst

As you may have seen, NVIDIA introduced in-network computing capabilities with NVSwitch, performing some calculations inside the switch itself. Perhaps now is not the best time to announce new products, but I'm curious whether this is something the broader merchant silicon and Ethernet ecosystem could introduce at some point?

Jayshree Ullal, CEO

Yes. So just for everyone else's benefit, a lot of in-network compute is generally done as close to the compute layer as possible, where the GPU does the processing. So that's a very natural place. I don't see any reason why we could not do those functions in the network and offload some of those compute functions into it. It would require a little more state and built-in processing power, etc., but it's certainly very doable. I think it's going to be six of one and half a dozen of the other. Some would prefer it closest to the compute layer, and some would like it network-wide for scale at the network layer. So the feasibility is very much there in both cases, Antoine.

Operator

And your next question comes from the line of James Fish with Piper Sandler.

James Fish, Analyst

Anshul, we will miss you, and I share that sentiment, but I hope to see you soon. Jayshree, how are you approaching the timing of 800-gig optics availability in relation to their use in systems? You've mentioned next-gen product announcements for several quarters, not just this one. Should we anticipate these developments to be more focused on adjacent use cases, the core, including AI, or software, and how does this align with your product roadmap?

Jayshree Ullal, CEO

Yes. James, you might remember, like deja vu, we had similar discussions on 400 gig too. And as you well know, to build a good switching system, you need an ecosystem around it, whether it's the NICs, the optics, the cables, or the accessories. So I do believe you'll start seeing some early introduction of optical and switching products for 800 gig, but to actually build the entire ecosystem and take advantage of it, especially in the NICs, I think will take more than a year. So I think probably more into '25 or even '26. That being said, you're going to see a lot of systems. I had this discussion earlier. You're going to see a lot of systems where you can demonstrate high radix and scale with 400 gig, go East-West much wider, and build large clusters in the tens of thousands. And then once you have GPUs that source 800 gig, which even some of the recent GPUs don't, then you'll need not just higher radix, but higher performance. So I don't see the 800-gig ecosystem limiting the deployment of AI networks. That's an important thing to remember.

Operator

And your next question comes from the line of Simon Leopold with Raymond James.

Victor Chiu, Analyst

This is Victor Chiu in for Simon Leopold. Do you expect Arista to see a knock-on effect from AI networking in the front end or at the edge as customers eventually deploy more AI workloads biased towards inferencing? And maybe help us understand how we might be able to size this if that's the case?

Jayshree Ullal, CEO

We haven’t considered that in our $750 million projection for 2025, but you're correct that as we expand back-end capabilities, they need to connect to something. Typically, this would involve connecting to the front end of our compute, storage, and WAN networks instead of creating new IP and adaptive routing solutions. We anticipate that the deployment of more back-end clusters will lead to a more uniform architecture across compute, storage, memory, and an overall holistic network for AI in the next phase. Our priority is to deploy the clusters first, and our customers expect this integrated approach. This is also one reason why they view us positively; they want to avoid creating separate silos or islands of AI clusters and prefer to unify their AI data centers.

Operator

And your next question comes from the line of Meta Marshall with Morgan Stanley.

Meta Marshall, Analyst

Maybe I'll flip James' question and just kind of ask what do you see as kind of some of the bottlenecks from going from pilots to ultimate deployments? It sounds like it's not necessarily 800 gig. And so is it just a matter of time? Are there other pieces of the ecosystem that need to fall into place before some of those deployments can take place?

Jayshree Ullal, CEO

I wouldn't refer to them as bottlenecks. It's more about timing and familiarity. Everyone knows how to deploy in the Cloud; it's somewhat plug-and-play. However, even in the Cloud, many use cases have emerged. The primary use case for AI networking is to create the fastest training workloads and clusters, focusing on performance. Power and GPU cooling are significant factors. Often, it's just a matter of waiting for facilities and infrastructure to be properly set up. On the operating system side, there's a lot of foundational work required. They need to determine what is needed in the cluster, including hashing, load balancing, Layer 2 or Layer 3 setup, visibility features, and WAN connectivity. Additionally, as mentioned, there's the whole transition from 400 to 800, but we're noticing less of that because it largely depends on familiarity and understanding how to manage the cluster effectively for optimal job completion time and GPU availability since no one can afford downtime. Ken, I would like to hear your thoughts on this.

Kenneth Duda, CTO

Yes. Thanks, Jayshree. I believe the main issue in deployments is the availability of all necessary components. There is significant pent-up demand for these technologies, and we observe clusters being developed as quickly as companies can construct the facilities, acquire GPUs, and set up the required networking. We are extremely well positioned in this regard, as we have years of experience building large-scale storage clusters for some of the biggest players in the cloud industry. While storage clusters differ from AI clusters, they share challenges in managing a large-scale back-end network that requires proper load balancing and can handle sudden spikes in demand. Our work in congestion management for storage networks is also applicable to AI networks. The topic of InfiniBand frequently arises, and I want to emphasize that Ethernet has been around for about 50 years. Throughout these decades, Ethernet has consistently outperformed various technologies like Token ring, SONET, ATM, FDDI, HIPPI, and Scalable Coherent Interconnect. The common thread in these competitive situations is that Ethernet has emerged victorious. This success can be attributed to Metcalfe's law, which states that the value of a network increases quadratically with the number of nodes connected. Therefore, anyone trying to create an alternative to Ethernet starts at a significant disadvantage, and any short-term edge they might have due to specific technology cycles will likely be surpassed by the connectivity benefits that Ethernet offers.
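Ken's appeal to Metcalfe's law can be made concrete with a quick sketch (an editor's illustration, not part of the call): the number of possible links in a network, and hence roughly its value under Metcalfe's law, grows quadratically with the number of connected nodes.

```python
# Editor's illustrative sketch of Metcalfe's law: in a fully connected
# network of n nodes, the number of distinct node pairs is n * (n - 1) / 2,
# which grows quadratically in n.

def possible_links(n: int) -> int:
    """Count the distinct pairs of nodes that could be connected."""
    return n * (n - 1) // 2

# Growing the node count 10x grows the possible connections ~100x.
print(possible_links(10))
print(possible_links(100))
print(possible_links(1000))
```

This quadratic scaling is why an incumbent with the largest connected ecosystem, as Ethernet has been for decades, is hard to displace.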

Operator

And your next question comes from the line of Ben Bollin with Cleveland Research Company.

Ben Bollin, Analyst

Jayshree, you made a comment that back when we had guidance in November, you had about 3 to 6 months of visibility. Could you take us through what type of visibility you have today? And maybe compare and contrast the different subsets of customers and how they differ?

Jayshree Ullal, CEO

Thank you, Ben. That's a good question. So let me take it by category, like you said. In the Cloud and AI Titans, in November, we were really searching for even 3 months of visibility; 6 would have been amazing. Today, after a year of tough situations for us where the Cloud Titans were pivoting rather rapidly to AI and not thinking about the Cloud as much, we're seeing a more balanced approach where they're still doing AI, which is exciting, but they're also expanding their regions on the Cloud. So I would say our visibility has now improved to at least 6 months, and maybe it gets longer as time goes by. On the Enterprise, I don't know. I'm not a bellwether for macro; everybody else is citing macro, but I'm not seeing it. What we're seeing with Chris Schmidt and Ashwin and the entire team is a profound amount of activity in Q1, better than we normally see. Q1 is usually when customers come back from the holidays; January is slow, there are some East Coast storms to deal with, and winter is still strong. But we have had one of our strongest Q1s for activity, which leads us to believe it can only get better for the rest of the year, and hence the guide increase from an otherwise conservative team of Chantelle and myself, right? And then the Tier 2 cloud providers, I want to speak to them for a moment, because not only are they strong for us right now, but they are starting to pick up some AI initiatives as well. They're not as large as the Cloud Titans, but the combination of the Service Providers and the Tier 2 Specialty Providers is also seeing some momentum. So overall, I would say our visibility has improved from 3 months to over 6 months. And in the case of the Enterprise, obviously, our sales cycles can be even longer, so it takes time to convert activity into wins. But the activity has never been higher.

Operator

And your next question comes from the line of Michael Ng with Goldman Sachs.

Michael Ng, Analyst

It was very encouraging to hear about the transition from trials to pilots with ANET's production rollout to support GPUs ranging from 10,000 to 100,000 for 2025. First, could you discuss some of the key factors that determine whether we end up at the high end or the low end of that range? Then, assuming a cost of $250,000 per GPU, that would suggest around $25 billion in compute spending. ANET's target of $750 million would only represent about 3% of the high end. I believe you've mentioned that networking typically accounts for 10% to 15% of compute historically. Could you clarify if there's anything I'm overlooking in those assumptions?
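The back-of-envelope math in the question can be reproduced directly (an editor's sketch using the figures as stated in the question, not company guidance):

```python
# Editor's sketch of the arithmetic in the question above, using the
# figures as stated: 100,000 GPUs at an assumed $250,000 each implies
# roughly $25B of compute spend, of which the $750M AI networking target
# is about 3% -- well below the 10% to 15% historical networking share.
gpus = 100_000                    # high end of the 10,000-100,000 GPU range
cost_per_gpu = 250_000            # assumed cost per GPU, in dollars
compute_spend = gpus * cost_per_gpu
ai_target = 750_000_000           # Arista's 2025 AI networking goal

print(compute_spend)              # total compute spend in dollars
print(ai_target / compute_spend)  # networking target as a share of it
```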

Jayshree Ullal, CEO

Yes. Thank you, Michael. I think we could do better next year. But your point is well taken that in order to go from 10,000 GPUs to 30,000, 50,000, or 100,000, a lot of things have to come together. First of all, let's talk about the data center or AI center facility itself. There's a tremendous amount of work and lead time that goes into the power, the cooling, and the facilities. So when you're talking about this kind of production, as opposed to proving something in the lab, that's a key factor. The second one is the GPUs: the number of GPUs, the location of the GPUs, the scale of the GPUs, the locality of these GPUs; should they go with Blackwell; should they build with scale-up inside the server or scale out to the network? So the whole center of gravity. What's nice to watch, which is why we're more constructive on the 2025 numbers, is that GPU lead times have significantly improved, which means more and more of our customers will get more GPUs, which in turn means they can build out and scale our network. But again, a lot of work is going into that. And the third thing I would say is the scale, the performance, how much radix they want to put in. I'll give a quick analogy here. We ran into something similar in the Cloud when we were talking about 4-way or 8-way ECMP, these rail-optimized designs, as they're often called, and the number of NICs you connect to go 8-way or 4-way or 12-way, or switch over to 800 gig. The performance and scale will be the third metric. So I think power, GPU locality, and performance of the network are the three major considerations that allow us to get more positive on the rate of production in 2025.

Operator

And your next question comes from the line of Matthew Niknam with Deutsche Bank.

Matthew Niknam, Analyst

I've got to ask one more on AI, sorry to beat a dead horse. As we think about the stronger start to the year and the migration from trials to pilots, specifically in relation to AI, is there a ramp towards getting to that $750 million next year? And I guess more importantly, is there any material contribution baked into this year's outlook? Or is there any contribution that may be driving the 2-percentage-point increase relative to the prior guide for '24?

Jayshree Ullal, CEO

Chantelle, you want to take that? I've been talking about AI a lot. I think you should.

Chantelle Breithaupt, CFO

Yes, I can take this AI question. When you think about the $750 million target, on which we have become more constructive per Jayshree's prepared remarks, that's a glide path. So it's not zero in '24; it's a glide path to '25. I would say there is some contribution assumed this year, in the sense that it's a glide path, not a hockey stick, and it ends in 2025 at the $750 million.

Jayshree Ullal, CEO

It's not zero this year, Matt, for sure.

Operator

And your next question comes from the line of Sebastien Naji with William Blair.

Sebastien Cyrus Naji, Analyst

I've got a non-AI question here. Maybe you can talk a little bit about some of the incremental investments you're making in your go-to-market this year, particularly as you look to grab some share from competitors, many of whom are going through some type of disruption, one acquisition or another, etc. And then what you might be doing with channel partners to land more of those mid-market customers as well?

Jayshree Ullal, CEO

Yes. Sebastien, to be honest, we're probably doing more on investment than we have made progress on channel partners. The last couple of years, we were getting very apologetic about our lead times. Our lead times have improved. So we have stepped up our investment in go-to-market, where I'm expecting Chris Schmidt and Ashwin's team to grow significantly. Judging from the activity they've had and the investments they've been making in '23 and '24, we're definitely going to continue to accelerate on that front. I think our investments in AI and Cloud Titans remain about the same because while there is a significant technical focus on the systems engineering and product side, we don't see a significant change on the go-to-market side. And the place where channel partners really come into play, and this will play out over multiple years, not this year, is the campus. Today, our approach on the campus is really going after our larger Enterprise customers. We've got 9,000 customers, probably 2,500 that we're really going to target. Our mid-market effort is more targeted at specific verticals like healthcare, education, and the public sector, and we work appropriately with the channel partners in each region and country to address that. To get to the first billion, I think this will be a fine strategy. As we aim beyond $750 million to $1 billion, and then go for the second billion, we absolutely need to do more work on channels. This is still a work in progress.

Operator

And your next question comes from the line of Aaron Rakers with Wells Fargo.

Aaron Rakers, Analyst

I'm going to shift gears away from AI actually. Jayshree, if we look at the server market over the past handful of quarters, we've seen unit numbers down quite considerably. I'm curious as you look at some of your larger Cloud customers, how you would characterize the traditional server side and whether or not you're seeing signs of them moving past this kind of optimization phase and whether or not you think a server refresh cycle in front of you could be an incremental catalyst for the company?

Jayshree Ullal, CEO

Yes. No, I think if you remember, there was this one dreadful year where one of our customers skipped a server cycle. But generally speaking, on the front-end network, we're going back to the cloud. And we do see server refreshes and server cycles continuing to be in the three-to-five-year range. For performance upgrades, they like three years, but occasionally some of them may go a little longer. So absolutely, we believe there will be another cloud cycle because of the server refresh. And there are the associated use cases, because once you upgrade the servers, there's appropriately the regional spine, and then the data center interconnect, storage, and so much ripple effect from that server upgrade. That side of compute and CPU is not changing; it's continuing to happen. In addition, we're also seeing more and more regional expansion. New regions are being created, designed, and outfitted for the cloud by our major Titans.

Operator

And your next question comes from the line of Karl Ackerman with BNP Paribas.

Karl Ackerman, Analyst

Jayshree, you spoke about how you are not seeing any slowness in Enterprise. I'm curious whether that is being driven by the growing mix of your software revenue? And do you think the deployment of AI Networks on-prem can be a more meaningful driver for your Enterprise and financial customers in the second half of fiscal '24? Or will that be more of a fiscal '25 event?

Jayshree Ullal, CEO

Well, that's a very good question. I have to analyze this some more. I would say our Enterprise activity is really driven by the fact that Ken has produced some amazing software quality and innovation. And we have a very high-quality Universal topology, where you don't have to buy 5 different OSs and 50 different images and operate this network with thousands of people. It's a very elegant architecture that applies to the data center use case that you just outlined for Leaf/Spine. The same Universal Spine can apply to the campus; it applies to the wide area; it applies to the branch; it applies to security; it applies to observability. And you bring up a good point that while the enterprise use cases for AI are small, we are seeing some activity there as well. Relative to the large AI Titans, they're still very small. But think of them as back in the trial phase I was describing earlier, trials, pilot production, so a lot of our enterprise customers are starting to go in the trial phase of GPU clusters, so that's a nice use case as well. But the biggest ones are still in the data center, campus, and the general-purpose Enterprise.

Liz Stine, Director of Investor Relations

Operator, we have time for one last question.

Operator

And your final question comes from the line of David Vogt with UBS.

David Vogt, Analyst

So Jayshree, I have a question about AI, the roadmap, and the deployment schedule for Blackwell. It seems like the initial customer delivery is happening later this year, which might be slower than expected. How are you approaching this in terms of your roadmap, particularly regarding your plans for 2025? Does this delayed delivery potentially impact cloud spending this fall, especially considering the technology transition towards Blackwell and away from the legacy product?

Jayshree Ullal, CEO

Yes. We're not seeing a pause yet. I don't think anybody is going to wait for Blackwell necessarily in 2024 because they're still bringing up their GPU clusters. And how a cluster is divided across multiple tenants, the choice of host, memory, storage architectures, optimizations on the GPU for collective communication, libraries, specific workloads, resilience, visibility, all of that has to be taken into consideration. All this to say, a good scale-out network has to be built, no matter whether you're connecting to today's GPUs or future Blackwell. So they're not going to pause the network because they're waiting for Blackwell. They're going to get ready for the network, whether it connects to a Blackwell or a current H100. So as we see it, the training workloads and the urgency of getting the best job completion time is so important that they're not going to spare any investments on the network side. And the network side can be ready no matter what the GPU is.

Liz Stine, Director of Investor Relations

Thanks, David. This concludes the Arista Networks First Quarter 2024 Earnings Call. We have posted a presentation, which provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today, and thank you for your interest in Arista.

Operator

Ladies and gentlemen, thank you for joining. This concludes today's call, and you may disconnect now.
