Arista Networks Inc
Arista Networks is an industry leader in data-driven, client-to-cloud networking for large AI, data center, campus, and routing environments. Its award-winning platforms deliver availability, agility, automation, analytics, and security through an advanced network operating stack.
Arista Networks Inc (ANET) — Q4 2024 Earnings Call Transcript
Operator
Welcome to the Fourth Quarter 2024 Arista Networks Financial Results Earnings Conference Call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question-and-answer session. Instructions will be provided at that time. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section on the Arista website following this call. Mr. Rudolph Araujo, Arista's Head of Investor Advocacy. You may begin.
Rudolph Araujo
Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal fourth quarter ended December 31, 2024. If you want a copy of the release, you can access it online on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the first quarter of the 2025 fiscal year; our longer-term business model and financial outlook for 2025 and beyond; our total addressable market and strategy for addressing these market opportunities, including AI; customer demand trends; supply chain constraints; component costs; manufacturing output; inventory management and inflationary pressures on our business; lead times; product innovation; working capital optimization; and the benefits of acquisitions. These statements are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on this call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release. With that, I will turn the call over to Jayshree.
Jayshree Ullal
Thank you, everyone, for joining us this afternoon for our fourth quarter 2024 earnings call. First, I'd like to warmly welcome our new IR leadership duo of Rudolph Araujo, our Director of IR Advocacy, supported by Rod Hall, whom many of you may know as our leader for IR strategy. Special thanks to Liz Stine for her tenure both as a systems engineer and IR lead at Arista. Well, I think you'll all agree that 2024 has been a memorable and defining year for Arista. We started with an initial guidance of 10% to 12% annual revenue growth. With the momentum of generative AI, we achieved well beyond that at almost 20% growth, delivering record revenue of $7 billion coupled with a non-GAAP operating margin of 47.5%. Before I dwell on that more, let me get back to Q4 2024 specifics. We delivered revenues of $1.93 billion for the quarter, with non-GAAP earnings per share of $0.65, adjusted for the recent 4-for-1 stock split. Our non-GAAP gross margin of 64.2% was influenced by an efficient supply chain and manufacturing, as well as a good mix of enterprise and software in the quarter. International contribution for the quarter registered at 16%, with the Americas super strong at 84%. Now shifting to annual sector revenue for 2024. Our cloud and AI titans contributed significantly at approximately 48%, keeping in mind that Oracle is a new member of this category. Enterprise and financials were strong at approximately 35%, while the providers, which now include Apple, were at approximately 17%. Both Microsoft and Meta are greater-than-10% concentration customers at approximately 20% and 14.6%, respectively. As you know, we cherish our privileged partnership with both of them very much. It has spanned over 14 years as we collaborate deeply with joint engineering and innovative AI and cloud products.
In terms of annual 2024 product lines, our core cloud, AI and data center products are built on a highly differentiated, extensible OS stack and are successfully deployed across 10, 25, 100, 200, 400 and 800 gigabit Ethernet speeds. They deliver power efficiency, high availability, automation and agility as data centers demand insatiable bandwidth capacity and network speeds for both front-end and back-end storage, compute and AI zones. This core product line drove approximately 65% of our revenue. We continue to gain market share in the highest-performance switching categories of 100-, 200- and 400-gig ports to attain the number one position at greater than 40% market share in ports, according to industry analysts. We increased our 400-gig customer base to approximately 1,000 customers last year in 2024. We expect 800 gigabit Ethernet to emerge in AI back-end clusters in 2025. We remain optimistic about achieving our AI revenue goal of $1.5 billion in AI centers, which includes the $750 million in AI back-end clusters, in 2025. Our network adjacencies market, comprising routing (replacing routers) and the cognitive AI-driven campus, is going well. Our investments in cognitive wireless, zero-touch provisioning and network identity, as well as sensors for threat mitigation, are being received extremely well by our campus customers. Our recent modern stacking introduction of SWAG (switch aggregation group) is a fitting example of our compelling innovation for open and efficient networking, conserving IP addresses without proprietary methods. The post-pandemic campus is very different, and our customers are seeking alternatives to legacy incumbents, with deep zero-trust security, high availability and observability embedded in the network across our software stack with CloudVision management. We are committed to the $750 million goal in 2025 and much more ahead. We have also successfully deployed in routing, edge and peering use cases.
Just in 2024 alone, we introduced six EOS software releases with greater than 600 new features across our core and adjacent offerings. The campus and routing adjacencies together contribute approximately 18% of revenue. The final category is network software and services based on subscription models, such as Arista A-Care, CloudVision, DMF observability and advanced security sensors for network detection and response. We added over 350 CloudVision customers, translating to literally one new customer a day. CloudVision is pivotal to building our network-as-a-service and deploying Arista validated designs in the enterprise. Arista's subscription-based network services and software contributed approximately 17% of total revenue. Note that perpetual licenses do not count here and go into the core or adjacent sections. While the 2024 headline has clearly been about generative AI, Arista continues to diversify its business globally with multiple use cases and verticals. We are viewed as the modern network innovator of choice for client-to-campus-to-cloud and AI networking, ideally positioned with our differentiated foundation. We celebrated two milestones in 2024: our tenth anniversary of going public on the New York Stock Exchange and our 20th anniversary of founding. In the past decade, we have exceeded 10,000 customers with a cumulative 100 million ports of installed base, as Arista drives the epicenter of mission-critical network transactions. Our Arista 2.0 strategy is resonating exceptionally well with our customers. Customers are not only looking to connect but to unify and consolidate their data across silos for optimal networking outcomes. Our modern networking platforms are foundational for the transformation from incongruent silos to centers of data, and that places us in a unique position as the best-of-breed innovator for data-driven networking.
These centers of data, as we call them, can reside in the campus as a campus center, or in data centers, WAN centers or AI centers, regardless of their location. Networking for AI is also gaining traction as we move into 2025, building some of the world's greatest Arista AI centers at production scale. These are constructed with both back-end clusters and front-end networks. And as I've shared with you often, the fidelity of AI traffic differs greatly from cloud workloads in terms of diversity, duration and size of flow. Just one slow flow can affect the entire job completion time for a training workload. Therefore, Arista AI centers seamlessly connect the front end of compute, storage, WAN and classic cloud networks with our back-end Arista Etherlink portfolio. This AI accelerated networking portfolio consists of three families and over 20 Etherlink switches, not just one point switch. Our AI for networking strategy is also doing well, and it's about curating the data for higher-level network functions. We instrument our customers' networks with our publish-subscribe state foundation, with our software called Network Data Lake, to deliver proactive, predictive and prescriptive platforms that have superior AIOps with A-Care support and product functions. We are pleased to surpass for the first time the $1 billion revenue mark in 2024 for the software and subscription services category. In 2024, we conducted three very large customer events in London, New York and Santa Clara, California. Our differentiated strategy of superior products is resonating deeply as we touched over 1,000 strategic customers and partners at these exclusive events. Simply put, we outpaced the industry in quality and support with the highest Net Promoter Score of 87, which translates to 93% of customer respondents satisfied. Of course, we do that with the lowest security vulnerabilities and steadfast network innovation. In summary, 2024 has been a pivotal turning point for Arista.
It has been a key breakaway year as we continue to aim for the $10 billion annual revenue goal, at a double-digit CAGR, that we set way back at our November 2022 Analyst Day. While I do appreciate the exuberant support from our analyst community on our momentum, I would encourage you to pay attention to our stated guidance. We live in a dynamic world of changes, most of which have resulted in positive outcomes for Arista. We reiterate the upper range of our 2025 guidance of double-digit growth at 17%, now aiming for approximately $8.2 billion in revenue in 2025. The Arista leadership team has driven outstanding progress across multiple dimensions. In 2024, we were at approximately 4,465 employees, rooted in engineering and customer investments. I'm incredibly proud of how we've executed and navigated the year based on our core principles and culture. While customers are struggling with fatigue from their legacy incumbents, Arista is redefining the future of data-driven networking intimately with our strategic customers. With that, I'd like to turn it over to Chantelle, who has transitioned to become our core Arista Chief Financial Officer in record time, in less than a year. Over to you, Chantelle, and welcome again, and happy one-year anniversary.
Chantelle Breithaupt
Thank you, Jayshree, and congratulations on the great 2024. My first full year as CFO has been more than I could have hoped for, and I am excited about Arista's journey ahead. Now on to the numbers. As a reminder, this analysis of our Q4, our full-year 2024 results and our guidance for Q1 2025 is based on non-GAAP and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges and other non-recurring items. In addition, all share-related numbers are provided on a post-split basis to reflect the 4-to-1 stock split in December 2024. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. Total revenues in Q4 were $1.93 billion, up 25.3% year-over-year and above the upper end of our guidance of $1.85 billion to $1.9 billion. For fiscal year 2024, we are pleased to have delivered 19.5% in revenue growth, driven by achievements in all three of our product sectors. Services and subscription software contributed approximately 18.3% of revenue in the fourth quarter, up from 17.6% in Q3. International revenues for the quarter came in at $311.1 million or 16% of total revenue, down from 17.6% last quarter. This quarter-over-quarter decrease was driven by the relative increased mix of domestic revenue from our large global customers. The overall gross margin in Q4 was 64.2%, slightly above the guidance of 63% to 64% and down from 65.4% in the prior year. As a recap for the year, we delivered a gross margin result of 64.6%, compared with 62.6% for the prior year. This increase is largely due to a combination of improved supply chain and inventory management. Operating expenses for the quarter were $332.4 million or 17.2% of revenue, up from the last quarter at $279.9 million. R&D spending came in at $226.1 million or 11.7% of revenue, up from 9.8% last quarter. 
This matches the expectations discussed in our Q3 earnings call regarding the timing of engineering costs and other costs associated with the development of our next-gen products moving from Q3 to Q4. This brings full-year R&D to 11.2% of revenue, demonstrating a continued focus on product innovation. Sales and marketing expense was $86.3 million or 4.5% of revenue, up from $83.4 million last quarter. This was driven by continued investment in both headcount and channel programs. Our G&A costs came in at $19.9 million or 1% of revenue, up from $19.1 million last quarter, reflecting continued investment in scaling the company. Our operating income for the quarter was $907.1 million or 47% of revenue. This strong Q4 finish contributed to an operating income result for fiscal year 2024 of $3.3 billion or 47.5% of revenue. Congratulations to the Arista team on this impressive achievement. Other income and expense for the quarter was a favorable $89.3 million, and our effective tax rate was 16.7%. This lower-than-normal quarterly tax rate reflected the release of tax reserves due to the expiration of the statute of limitations, plus favorable changes in state taxes. This resulted in net income for the quarter of $830.1 million or 43% of revenue. Our diluted share number was 1.283 billion shares, resulting in diluted earnings per share for the quarter of $0.65, up 25% from the prior year. For FY 2024, we are pleased to have delivered diluted earnings per share of $2.27, a 31.2% increase year-over-year. Now turning to the balance sheet. Cash, cash equivalents and marketable securities ended the quarter at approximately $8.3 billion. In the quarter, we repurchased $123.8 million of our common stock at an average price of $94.80 per share. Within fiscal year 2024, we repurchased $423.6 million of our common stock at an average price of $77.13 per share.
Of the $1.2 billion repurchase program approved in May 2024, $921 million remains available for repurchase in future quarters. The actual timing and amount of future repurchases will depend on market and business conditions, stock price and other factors. Now turning to operating cash performance for the fourth quarter. We generated approximately $1 billion of cash from operations in the period, reflecting strong earnings performance combined with an increase in deferred revenue, offset by an increase in income tax payments. DSOs came in at 54 days, down from 57 days in Q3, reflecting the timing of shipments and a strong collections performance by the team. Inventory turns were 1.4 times, up from 1.3 times last quarter. Inventory increased marginally to $1.83 billion, reflecting diligent inventory management across raw and finished goods. Our purchase commitments at the end of the quarter were $3.1 billion, up from $2.4 billion at the end of Q3. As mentioned in prior quarters, this expected activity represents purchases of chips related to new products and AI deployments. We will continue to rationalize our overall purchase commitment number. However, we expect to maintain a healthy position in key components and to continue to have some variability in this number to meet customer demand and improve lead times in future quarters. Our total deferred revenue balance was $2.79 billion, up from $2.51 billion in the prior quarter. The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis. Our product deferred revenue balance increased by approximately $150 million over the last quarter. Fiscal 2024 was a year of new product introductions, new customers and expanded use cases.
These trends have resulted in increased customer trials and contracts with customer-specific acceptance clauses that have increased, and will continue to increase, the variability and magnitude of our product deferred revenue balances. We expect this to continue into fiscal 2025. Accounts payable days were 51 days, up from 42 days in Q3, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $12.5 million. In October, we began our initial construction work to build expanded facilities in Santa Clara, and we expect to incur approximately $100 million in CapEx during fiscal 2025 for this project. Now turning to our outlook for the first quarter of 2025 and the remainder of the fiscal 2025 year. We continue to gain confidence in our view for fiscal year 2025 and now place our revenue growth outlook at approximately 17%, or $8.2 billion. This is up from our initial fiscal year 2025 guidance of 15% to 17%. This reflects our combined outlook for cloud, AI, enterprise and cloud specialty providers, along with recognition of the volatility that we have seen in the market since the beginning of the year. For gross margin, we reiterate the fiscal-year range of 60% to 62%, with Q1 2025 expected to be above the range due to the anticipated mix of business in the quarter. Similar to others in the industry, we will continue to monitor the fluid tariff situation and be thoughtful about both the short- and long-term outcomes for both our company and our customers. In terms of spending, we expect to invest in innovation, sales and scaling the company, resulting in a continued operating margin outlook of 43% to 44% in 2025. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments.
Our structural tax rate is expected to return to the usual historical rate of 21.5%, up from the unusually low one-time rate of 16.7% experienced last quarter, Q4 FY 2024. With all this as a backdrop, our guidance for the first quarter is as follows: revenues of approximately $1.93 billion to $1.97 billion, slightly stronger seasonality in Q1 than prior-year trends and an outcome of the timing of our customers' priorities; gross margin of approximately 63%; and operating margin of approximately 44%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1.285 billion diluted shares. In summary, we at Arista are enthusiastic about 2025 and the general networking outlook ahead. We have an impressive portfolio and are ready to solve our customers' needs across all the centers of data. Combined with our Arista team spirit, we are ready to realize our fair share of the $70 billion TAM. With that, I, too, would like to welcome Rudy and Rod to the Arista IR team. Back over to you, Rudy, for Q&A.
Rudolph Araujo
Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. Thank you for your understanding. Operator, take it away.
Operator
We will now begin the Q&A portion of the Arista earnings call. Our first question will come from the line of Michael Ng at Goldman Sachs. Please go ahead.
Michael Ng
Hi, good afternoon. Thank you for the question. I was just wondering if you could talk about the timing of how the year might look and how you expect the Arista switches in the AI back end to be rolled out into production. Are the sales of these switches tied to the deployment of next-generation NVIDIA chips or hyperscaler custom ASICs on the compute side? And is that a gating factor that you're watching out for? Thank you.
Jayshree Ullal
Yes, thank you, Michael. First, I want to emphasize that Arista remains focused on four of our five AI initiatives that I mentioned in previous calls. The fifth is currently on hold as they await new GPUs and some additional funding. I'm optimistic they'll return next year, but for now, we won't discuss them further. Regarding the other four, three of these customers are expected to roll out a total of 100,000 GPUs this year, so we anticipate strong performance from them. They're primarily using NVIDIA's class of GPUs and will be looking forward to the next generation. Regardless, we plan to deliver substantial numbers. The fourth customer is transitioning from InfiniBand to Ethernet, and we're validating Ethernet as a viable option since they have a history with InfiniBand. We're currently in a pilot phase and expect to move into production next year. Overall, we are performing well with four of the customers, while the fifth remains stalled, and three out of four are set to deploy 100,000 GPUs this year.
Operator
Our next question comes from the line of Amit Daryanani with Evercore. Please go ahead.
Amit Daryanani
Good afternoon. Thanks for taking my question. I guess, Jayshree, there's always this concern around the impact of white box vendors on your revenue growth. And clearly, over the last decade, I don't think it's been an impediment for the company. But can you maybe share your perspective, when it comes to AI networks and especially the back-end networks, on how you see the mix evolving between white box and OEM solutions? And maybe just help us understand the differentiators that help Arista be successful on the front end, do they extend to the back-end networks as well? Or is there something different we should be aware of? Thank you.
Jayshree Ullal
Yes, Amit, this seems like a question we've addressed before, but I appreciate you bringing it up. It's certainly something many are thinking about. We've been navigating this landscape for over a decade now in the cloud space. First, I want to emphasize how vast this market is. White boxes and non-EOS operating systems will always coexist, similar to how Apple operates alongside various other phone brands. In the back end of an AI cluster, there are generally two main components: the AI leaf, which connects to the GPUs and serves as the initial point of connection, and the AI spine, which aggregates these AI leaves. In nearly all back-end scenarios we've observed, the AI spine is predominantly powered by Arista's branded EOS. There is a significant level of routing, scaling, features and capabilities that would be challenging to replicate in any other setup. The AI leaf can vary; for instance, among the five customers I previously mentioned, three use EOS for both the leaf and spine, while two are more hybrid setups that incorporate some form of SONiC or FBOSS. As you're aware, we collaborate and coexist in various use cases where there's a blend of EOS and open OS. Generally, I want to convey that white boxes and Arista will coexist, catering to different needs of different customers. As for our differentiation, many of our current deployments are operating at 400- and 800-gig speeds, and we see significant advantages not just from scale and routing features but also from cost-effective load balancing, real-time AI visibility and analytics, personalized monitoring, congestion control, and crucially, smart system upgrades. It's vital that GPUs remain operational and not fail due to inadequate software, so our network ensures that if any GPU encounters issues, we can provide alternative connections. This distinction adds considerable value, especially since a GPU typically costs five times more than a CPU.
Operator
Our next question comes from the line of Tim Long at Barclays. Please go ahead.
Tim Long
Thank you. I wanted to touch on the cloud titan numbers, a few parts there. Obviously, one of them, Meta, looks like it's down year-over-year, based on the numbers you gave, if I heard it right. If you could touch on that. And then, if we do the math for the other cloud titans, it looks like the rest went up a lot. I think Oracle was kind of in that already. Is there anything else going on, other than the Oracle shift, with the rest of the cloud titans, where it looked extremely strong in 2024? Thank you.
Jayshree Ullal
Yes. So speaking specifically to Meta, we're obviously in a number of use cases in Meta. Keep in mind that our 2024 Meta numbers are influenced by more of the 2023 CapEx, and that was Meta's year of efficiency, where the CapEx was down 15% to 20%. So you're probably seeing some correlation between their CapEx being down and our revenue numbers being slightly lower in '24. In general, I would just say all our titans are performing well in demand, and we shouldn't confuse that with the timing of our shipments. And I fully expect Microsoft and Meta to be greater than 10% customers in a strong manner in 2025 as well. Specific to the others we added in, they're not 10% customers, but they're doing very well, and we are happy with their cloud and AI use cases.
Operator
Our next question comes from the line of Ben Reitzes with Melius. Please go ahead.
Ben Reitzes
Yes, darn, Tim Long took my question, so I'm going to ask about gross margins. You know, darn that guy. So about gross margins, obviously, they're going to be about 61% at the midpoint for the year after being much higher in the first quarter. I would think that implies significant cloud titan mix going throughout the rest of the year. Do you mind just giving some color on what is pushing down the gross margin a little more? And does that mean that cloud titans do accelerate throughout the year, because the gross margin gets pushed down, vis-a-vis your guidance?
Jayshree Ullal
Yes, go ahead, go ahead.
Chantelle Breithaupt
Thank you. Yes, sorry, Jayshree. No, absolutely, I think you got it right, Ben, from that perspective as we entered Q4 last year talking about 2025 guidance. And as we enter into this quarter, it is and still remains a mix. Jayshree kind of gave some thoughts on to the timing to the first question. So it is mix-driven. There were some questions last quarter if it was price-driven. This is just a mix-driven conversation. I would say we have absorbed a little bit of the specific tariffs on China in that number. So we are absorbing that on behalf of our customers. But otherwise, it's mix-driven, and we'll continue to update as we do the quarterly guidance through the year.
Jayshree Ullal
Sorry, I'm just going to add that John has done a fantastic job on the planning for China ahead of time. So while we are absorbing some of the tariff impact, most of it is related to the mix and some of it is related to the China tariffs.
Operator
Our next question comes from the line of Meta Marshall at Morgan Stanley. Please go ahead.
Meta Marshall
Great, thanks. Another topical question from investors over the past month has been DeepSeek. As you think about the roughly 1:1 ratio you've talked about between back-end and front-end networking, how do you see that changing, given the recent shifts in thinking around training and investments? Thanks.
Jayshree Ullal
Yes. Thank you, Meta. Well, DeepSeek certainly deep-sixed many stocks, but I actually see this as a positive, because I think you're now going to see a new class of CPUs, GPUs and AI accelerators where you can have substantial efficiency gains that go beyond training. That could be some sort of inference, or a mixture of experts, or reasoning, which lowers the token count and therefore the cost. So what I like about all these different options is Arista can scale up efforts for all kinds of XPUs and accelerators. And I think the eye-opening thing here for all the experts who are building these models is that there are many different types, and training isn't the only one. So I think this is a nice evolution of how AI will not just be a back-end, training-only phenomenon limited to five customers, but will become more distributed across a range of CPUs and GPUs.
Operator
Our next question comes from the line of Aaron Rakers at Wells Fargo. Please go ahead.
Aaron Rakers
Yes, thanks for taking the question. Jayshree, I'm curious just kind of thinking about some of the questions we've gotten recently is when you see announcements like Stargate, and obviously, Stargate has the involvement of one of your newer cloud titan customers, how do you conceptualize the opportunity set for Arista vis-a-vis both back-end and front-end networking in deployments like that? And then do you have any thoughts on just the broader context of what you're seeing on Sovereign AI opportunities in your business? Thank you.
Jayshree Ullal
Thank you, Aaron. Stargate and Sovereign AI are not particularly related, so let me address the first one. Traditionally, we've viewed GPUs and collective libraries as two distinct components: a vendor supplying the GPU and us facilitating the scale-out networking. However, with projects like Stargate, we're beginning to see a trend towards integrated vertical racks where the processing, scaling, and software for unified control and visibility are increasingly converging. This is not something to expect in 2025, but certainly by 2026 and 2027, we can anticipate a new generation of AI accelerators designed for advanced training and inference, which will differ significantly from the current modular approach. We're optimistic about these developments, and there is significant involvement from key team members in the design of several of these next-generation initiatives. The demand for improvements in density and performance that we experienced in the 2000s is resurfacing, allowing for enhanced performance per XPU and necessitating an increase in network scale from 800 gig to 1.6 terabits. Additionally, factors like liquid cooling and comprehensive packaging of copper and optics are crucial. Arista is deeply engaged in these developments, leveraging the best-in-class hardware that both John McCool's and Andy's teams are working on.
Operator
Our next question comes from the line of Atif Malik at Citi. Please go ahead.
Atif Malik
Hi, thank you for taking my questions. I appreciate your emphasis on the guidance. You are still forecasting $750 million in AI back-end sales this year, even with the delay of the fifth customer. Can you explain where the potential for growth is this year? Is it coming from a wide range of customers or just one or two? Additionally, regarding the $70 billion total addressable market estimate for 2028, how much of that is attributed to AI?
Jayshree Ullal
Okay. So I'll take your second question first, Atif. On the $70 billion TAM in 2028, I would roughly say a third is AI, a third is data center and cloud, and a third is campus and enterprise. And obviously, absorbed into that are routing, security and observability; I'm not calling them out separately for the purpose of this discussion. So roughly $20 billion to $25 billion each to get to that $70 billion. So coming back to your original question, which was? Help me out again?
Atif Malik
The $750 million in back-end sales?
Yes, we're well on our way, and three customers deploying a cumulative 100,000 GPUs will help us reach that number this year. As we raised our guidance to $8.2 billion, I believe we'll see momentum in AI, cloud, and enterprises. I'm not ready to break it down yet, but we'll have a clearer picture in the second half. Chantelle and I feel confident that we can achieve the $8.2 billion, and having visibility this early in the year is beneficial.
Operator
Our next question comes from the line of Samik Chatterji with JPMorgan. Please go ahead.
Hey, thanks for taking the question. Jayshree, maybe I can bring up one more topic that's come up a lot in the last few days, which is the value of the EOS software layer to the back end of the network, particularly in the discussion around competition from white box players. How do you emphasize the value of EOS to your customers? Can you outline the key drivers we should keep in mind in that competitive landscape between white box and Arista? Thank you.
Yes. Sure, Samik. First of all, when you're buying these expensive GPUs that cost $25,000, they're like diamonds, right? You're not going to string a diamond on a piece of thread. So the first thing I want to say is you need a mission-critical network. Whether you want to call it white box, blue box, EOS or some other software, you've got to have mission-critical functions: analytics, visibility, high availability, et cetera. As I mentioned, and I want to reiterate, these are also typically leaf-spine networks. And I have yet to see an AI spine deployment that is not EOS-based. I'm not saying it can't happen or won't happen. But in all five major installations, our customers really see the benefit of our EOS features for high availability, for routing, for VXLAN, for telemetry. And the 7800 is the flagship AI spine product that we have been deploying last year, this year and in the future. Coming soon, of course, is also the product we jointly engineered with Meta, which is the distributed Etherlink switch. That is also an example of a product that provides that kind of leaf-spine combination, with both FBOSS and EOS options in it. So in my view, it's difficult to imagine a highly resilient system without Arista EOS, in AI or non-AI use cases. On the leafs, you can cut corners. You can go with smaller buffers, you may have a smaller installation. So I can imagine that some people will want to experiment, and do experiment, in smaller configurations with non-EOS software. But again, to do that, you have to have a fairly large staff to build the operations for it. So that's also a critical element. Unless you're a large cloud titan customer, you're less likely to take that chance, because you don't have the staff. So all in all, EOS is alive and well in AI and cloud use cases, except in certain specific use cases where the customer may have their own operations staff to do it themselves.
Operator
Our next question comes from the line of Ben Bollin with Cleveland Research. Please go ahead.
Good afternoon, everyone. Thanks for taking the question. Jayshree, I'm interested in your thoughts on your enterprise strategy within the Global 2000 and how that may be evolving, as it looks like refresh opportunities are intensifying. Thank you.
Yes. No, listen, we're always looking at three major threads: our classic cloud business, our AI business, and the enterprise, led by Ashwin and Chris, which is a very significant area of investment for us. From a product point of view, we have a natural trickle-down effect from our high-end data center products to the cloud. And so whether it's the enterprise data center or the campus, I've never seen our portfolio be as strong as it is today. So a lot of our challenges and our execution are really in the go-to-market, right? And that just takes time. As you know, we've been slowly but steadily investing there. And our customer count, the number of projects we get invited to, especially, as you pointed out, in the Global 2000, has never been stronger. One area where I'd like to see more strength, and Chris Schmidt and the team are working on it, as you can tell from our numbers, is international. We're bringing in some new leadership there and hope to see some significant contributions in the next year or so.
Operator
Our next question comes from the line of Ryan Koontz with Needham & Company. Please go ahead.
Great, thanks. Jayshree, there's been a lot of chatter lately about co-packaged optics. Can you maybe speak about its place in your roadmap, how investors should think about the effect on your TAM, and your opportunities to sell?
Well, first of all, Andy has reminded me that co-packaged optics is not a new idea; it's been around 10 to 20 years. So let's go through the fundamental reason why co-packaged optics has had relatively weak adoption so far: field failures, and most of it is still in proof of concept today. Going back to networking, the most important attribute of a network switch is reliability and troubleshooting. Once you solder co-packaged optics onto a PCB, you lose some of that flexibility, and you don't get the serviceability and manufacturability. That's been the problem. Now a number of alternatives are emerging, and we're a big fan of co-packaged copper, as well as pluggable optics that can complement this, like linear-drive pluggable optics, or LPO as we call it. We also see that if co-packaged optics improves on some of its current metrics, for example, it has a higher channel count than the industry standard of eight for pluggable optics, but we can do higher-channel-count pluggable optics as well, then both CPC and CPO could be important technologies at 224 gig or even 448 gig. But so far, our customers have preferred a Lego approach where they can mix and match pluggable switches and pluggable optics, and they haven't committed to soldering them onto the PCB. We feel that will change only if CPO gets better and more reliable, and I think CPC can be a nice alternative to that.
Operator
Our next question comes from the line of Simon Leopold with Raymond James. Please go ahead.
Thank you very much for taking a question. I was hoping you could maybe double-click on the cognitive adjacencies. It's been a meaningful part of revenue; I think you said 18%. Could you offer a little more color on how the elements of that are trending and your expectations for how that part of the business grows within your 2025 expectations? Thank you.
Yes. It's an important aspect of both routing and the campus, and we've already committed to $750 million there. Out of our $8.2 billion, I expect that to exceed $1 billion this year. In the routing use case, which primarily relates to enterprises and service providers, it's challenging to measure it on its own. We define it strictly as a combination of software operating with dedicated routing hardware. For instance, if the hardware is shared between switching and routing, we don't include it in the count. Sometimes we may underestimate the numbers a bit, and more may be included in the core, but I want to emphasize that this is a very strategic area. Additionally, the SD-WAN market is evolving beyond just encryption, tunnels, and transitioning away from MPLS, as a routed backbone is now essential. The integration of SD-WAN at the edge with the routed backbone is precisely where Arista excels in both the enterprise and service provider sectors. Regarding our campus initiatives, we are highly focused there. We see a shift in the market: there's a growing fatigue with subscription models from some competitors and mergers or acquisitions happening with others. Arista stands out as the sole pure-play campus innovator capable of delivering best-in-class solutions. We are gaining traction among our data center customers, who are already familiar with us and can extend their data center spine to wired and wireless needs. Through CloudVision as a management domain, we're noticing significant engagement, whether it's for automation, zero trust security, or observability with our campus products. Both of these areas are meaningful, and we anticipate they will together surpass $1 billion. Chantelle, would you like to add anything?
Yes, I'd like to add to your points, Jayshree, which clearly show our intent. Regarding our campus business, I want to highlight John, who has worked hard to improve our lead times this year. We're pleased with the progress on lead times, and our customers seem excited about it too. Additionally, we have a curated preferred partner program, especially for international markets, which ties into the earlier point about increasing international revenue. We've already seen some significant wins where not only is the data center bringing us in, but customers are also driving those wins through our campus offerings. This adds to our optimism for 2025.
Operator
Our next question will come from the line of Tal Liani at Bank of America. Please go ahead.
Hey, guys. Two questions, one on routing and one on enterprise. Enterprise grew 16% this year. What drives it? Is it just regular growth of data centers? Or do you start to see enterprises investing because of AI and applications for AI? The second thing is about routing. Routing used to be a small opportunity when it was just a license. Can you elaborate on routing? What is your differentiation versus others who are bundling it with optics? How do you sell it? And how big is the opportunity, not in terms of numbers, but is it now hardware with software? Or is it just a software license like it used to be? Thanks.
I can start with enterprise. For Enterprise, there are several growth vectors I would highlight for 2024 and continuing into 2025 and beyond. One key aspect is coverage; we have invested in increasing our sales and marketing headcount in 2023, 2024, and into 2025. In 2024, we experienced a double-digit rise in sales and marketing personnel, which has enhanced our coverage. We're also leveraging our preferred partner program to penetrate the enterprise sector. Additionally, we are engaged in international campaigns and have a strong focus on acquiring new clients. All these approaches contribute to our growth. From an AI perspective, conversations with customers are shifting from theoretical discussions to specific use cases, particularly among banks and major Global 2000 and Fortune 500 companies. They are transitioning from theory to practical applications and are currently collaborating mainly with cloud service providers to train before deciding to implement solutions on-premises. Although this is still in the early stages, we are having very promising discussions regarding AI. Would you like to add anything, Jayshree?
Yes. Tal, routing has always been a crucial aspect of our cloud and data center offerings. As you mentioned, it has become more software-focused. Recently, I was working with a large bank in New York during a snowstorm, which forced me to stay indoors most of the time. The scenario involved a WAN routed fabric, rather than just data center or campus solutions. They want us to not only develop the core routing as a hub but also extend it to the spokes. Regarding features, previously we were primarily focused on those for cloud and data center environments, which constituted only about 20% of our offerings. Now, our routing portfolio has expanded significantly to include complete VXLANsec, TunnelSec, and MACsec encryption, MPLS, segment routing, OSPF, and BGP. We no longer need to apologize for any gaps in our routing capabilities, as we deliver the best software stack in the industry in terms of quality and support. While we have been selling a substantial number of software SKUs, we are also increasingly working with dedicated hardware SKUs, particularly with the 7280 platform, which has proven to be a reliable and successful solution for us.
Operator
Our next question comes from the line of Antoine Chkaiban with New Street Research. Please go ahead.
Hi, thank you for taking my question. I'd love to get your latest perspective on what you're hearing from service providers. One of your competitors mentioned that they were seeing AI driving demand from service providers, because they're building out their networks in anticipation of an increase in traffic driven by AI. So I'm just wondering if you could comment on that as well. Thank you.
Antoine, do you mean the classic service providers or generally the Neo Cloud?
The classic service providers.
We haven't observed a significant increase there yet. There may be some experimentation happening. However, to address a question you didn't specifically ask, we have noticed a rise in activity around the sovereign cloud and Neo Cloud. We are witnessing the emergence of a new group of Tier 2 specialty cloud providers that aim to offer AI as a service with distinct features. There is substantial funding flowing into this area, so it's premature to make definitive statements about service providers. Nonetheless, regarding Neo Clouds and specialty providers, we are indeed seeing numerous examples of that.
Operator
Our next question comes from the line of Matt Niknam with Deutsche Bank. Please go ahead.
Hey, thanks so much for taking the question. Maybe for Chantelle. I mean, you're sitting now on $8 billion worth of cash and equivalents on the balance sheet. So maybe just an update on how you prioritize uses of cash heading into 2025? Thank you.
Yes. No, it's great. Thank you for the question. We're very pleased with the performance, and our capital allocation strategy has not changed coming into FY 2025. Just to reiterate and remind, and I appreciate the opportunity to do so: first of all, investing that cash where we can still get a very reasonable and respectable return continues to be a priority. Repurchasing: you saw the way we did that through FY 2024, and we'll aim to do what we can through FY 2025. Organic investment: we're still looking to scale the company in R&D, sales, and back office. And probably the one that's least on the scale is sizable inorganic activity. So I would focus on those first priorities, and that's how we're thinking about FY 2025.
Operator
Our next question comes from the line of David Vogt with UBS. Please go ahead.
Great, thanks, guys, for taking my question. So Jayshree, I have a question about the evolution of speed deployment at some of your major customers. So obviously, you mentioned 400 gig and 800 gig have been principal drivers. How are you thinking about how that plays out in '25 and beyond? There are some wins out there, I know, in different parts of the network at 1.6T, but I'm trying to get a sense for how you're thinking about 400 to 800 and ultimately 1.6, not just in '25, but in '26 and beyond? Thanks.
Yes, that's a very good question. The speed transitions due to AI are definitely accelerating. Previously, moving from 200 gig at Meta or 100 gig with some of our cloud customers to 400 gig typically took three to four, possibly five years. However, we are now witnessing that cycle occurring nearly every two years in the AI space. I would say 2024 was the year when 400 gig became widely adopted, while 2025 and 2026 will see more growth in 800 gig. I do expect the emergence of 1.6 terabits, though we currently lack the necessary chips until late 2026, with genuine production likely in 2027. There is significant discussion and excitement surrounding this, similar to the hype we experienced with 400 gig five years back. Realistically, there will be an extended transition from 400 to 800 gig. As we approach 1.6 terabits, the pace will be careful and deliberate, largely because many of our customers are still waiting for their own AI accelerators or NVIDIA GPUs, which, with liquid cooling, could support that level of bandwidth. The introduction of new GPUs will necessitate higher bandwidth, which is likely to extend the timeline by a year or two.
Operator
Our next question comes from the line of Karl Ackerman with BNP Paribas. Please go ahead.
Yes, thank you. Could you discuss the outlook for services relative to your outlook for March and the full-year? I asked because services grew 40% year-over-year and your deferred revenue balance is up another $250 million or so sequentially, nearly $2.8 billion. So the outlook for that would be very helpful. Thank you.
Yes. We don't usually guide the piece parts of that. So all I would say is that the thing to keep in mind with services is a little bit of timing in the sense of catching up to product, especially kind of post COVID. So I would kind of take the trend that you see over the last few years, and that's probably your best guide looking forward. We don't guide piece parts.
Thanks, Karl.
Operator
Our next question will come from the line of Sebastian Naji with William Blair. Please go ahead.
Yes, thanks for taking the question. And you talked about this a little bit, but there's been a lot of discussion over the last few months about the general-purpose GPU clusters from NVIDIA versus the custom ASIC solutions from some of your larger customers. I guess, in your view, over the longer term, does Arista's opportunity differ across these two chip types? And is there one approach that would maybe pull in more Arista versus the other?
Yes. No, absolutely. I think I've always said this: you guys often speak about NVIDIA as a competitor, and I don't see it that way. Thank you, NVIDIA. Thank you, Jensen, for the GPUs, because that gives us an opportunity to connect to them, and that's been a predominant market for us. As we move forward, we see that we can connect not only to NVIDIA GPUs but also to AMD GPUs and in-house AI accelerators. A lot of them are in active development or in early stages. NVIDIA is the dominant market share holder with probably 80%, 90%. But if you asked me to guess what it would look like two or three years from now, I think it could be 50-50. So Arista could be the scale-out network for all types of accelerators. We'll be GPU-agnostic, and I think there'll be less opportunity to bundle by specific vendors and more opportunity for customers to choose best of breed.
This concludes Arista Networks Fourth Quarter 2024 Earnings Call. We have posted a presentation that provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today and for your interest in Arista.
Operator
Thank you, ladies and gentlemen. This concludes today's call, and you may now disconnect.