Arista Networks Q1 2026 Earnings Call - AI Demand Outpaces Supply, Guiding 27.7% Growth
Summary
Arista Networks delivered a robust Q1 2026, with revenue of $2.71 billion (+35% YoY), and raised its full-year growth forecast to 27.7%, targeting $11.5 billion. The company now commands #1 market share in high-speed switching, driven by explosive demand for AI networking across scale-out, scale-across, and upcoming scale-up workloads. Management highlighted that AI revenue targets have more than doubled to $3.5 billion, with scale-across emerging as a critical, differentiated segment.
However, the story is defined by a severe supply chain bottleneck. Management warned that wafer, silicon, and memory shortages will persist for 1-2 years, forcing Arista to make multi-year purchase commitments and absorb cost pressures to ensure delivery. While gross margins dipped to 62.4% due to customer mix and component costs, the company remains confident in its ability to ship, prioritizing customer relationships over short-term margin expansion. The enterprise business also showed strength, with new wins in NeoClouds, service providers, and manufacturing, diversifying revenue beyond hyperscalers.
Key Takeaways
- Revenue of $2.71 billion grew 35.1% year-over-year, beating guidance of $2.6 billion.
- Full-year 2026 revenue growth forecast raised to 27.7%, targeting approximately $11.5 billion.
- AI fabric revenue target increased to $3.5 billion, more than doubling from prior year.
- Arista holds #1 market share in high-speed switching (>10Gb Ethernet), overtaking incumbents.
- Scale-across networking is emerging as a major growth driver, expected to contribute at least one-third of AI revenue this year.
- Supply chain constraints are severe and multi-year, affecting wafers, silicon, optics, and memory.
- Management committed to multi-year purchase commitments and absorbing cost pressures to secure supply, even at the expense of gross margins.
- Gross margin for Q1 was 62.4%, down from 63.4% in Q4, primarily due to customer mix and component cost inflation.
- Deferred revenue rose to $6.2 billion, reflecting longer qualification cycles for new AI products and complex deployments.
- Enterprise business showed strong diversification with wins in NeoClouds, service providers, insurance, and manufacturing sectors.
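For readers who want to tie the takeaways together, a minimal sanity check of the guidance arithmetic follows; the implied 2025 revenue base is derived from the stated growth rate and target, not a company-reported figure.

```python
# Back-of-the-envelope check of the headline guidance figures.
# The implied 2025 base below is derived from the stated guidance,
# not a company-reported number.
guided_revenue_2026 = 11.5e9  # ~$11.5B full-year target
growth_rate = 0.277           # 27.7% forecast growth

implied_2025_base = guided_revenue_2026 / (1 + growth_rate)
ai_share = 3.5e9 / guided_revenue_2026  # AI fabric target vs. total goal

print(f"Implied 2025 base: ${implied_2025_base / 1e9:.2f}B")  # ~$9.01B
print(f"AI share of 2026 target: {ai_share:.1%}")             # ~30.4%
```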
Full Transcript
Operator: Welcome to the first quarter 2026 Arista Networks financial results earnings conference call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question and answer session. Instructions will be provided at that time. If you need to reach an operator at any time during the conference, please press the star key followed by 0. As a reminder, this conference is being recorded and will be available for replay from the investor relations section on the Arista website following this call. Mr. Rudolph Arrajo, Arista’s Head of Investor Advocacy, you may begin.
Rudolph Arrajo, Head of Investor Advocacy, Arista Networks: Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today’s call are Jayshree Ullal, Arista Networks Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista’s Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing its fiscal first quarter results for the period ending March 31st, 2026. If you want a copy of this release, you can find it on our website.
During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the second quarter of the 2026 fiscal year, longer term business model and financial outlooks for 2026 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, inventory management, lead times, and product innovation, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call.
This analysis of our Q1 results and our guidance for Q2 2026 is based on non-GAAP and excludes stock-based compensation expense, intangible asset amortization, gains and losses on strategic investments, and the income tax effect of these non-GAAP exclusions, including the recognition of excess tax benefits associated with stock-based awards. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Thank you, Rudy, and welcome everyone to our first quarter 2026 earnings call. Arista experienced significant velocity in all our sectors in Q1, and we are now commanding the number 1 market share in high-speed switching in the greater than 10 gigabit Ethernet category. With that, we have overtaken many incumbent vendors according to major market analysts for 2025. Our cloud and AI networking strategy for diverse AI accelerators continues to gain traction. Unlike typical workloads, AI workflow patterns can be long-lived elephant flows or short-lived and simply not predictable. This implies careful attention to performance, where a flow can cause burstiness for a long duration of milliseconds. The intensity of a flow can determine the line rate throughput. The shifting traffic patterns to massive flows synchronized to all-to-all or all-reduce, or bursts of collective communication, are all important for AI training and inference applications.
I would like to take a moment to review our three AI fabric use cases. In scale-up mode, we have familiar technologies such as NVLink and PCIe that have enabled vertical scaling of single compute nodes or racks. The advent of ESUN, Ethernet for scale-up networking specifications, allows for increasing or decreasing computing power in a flexible manner with Ethernet to automatically adapt to workload demands. Scale-up will be a new entry for Arista in 2027 and beyond, where we will be working closely with our customers to build AI racks with very fast interconnects for co-packaged copper, CPC, or open co-packaged optics, CPO, as well as supporting collectives and memory acceleration. Scale-out or horizontal scaling involves adding more machines to a leaf-spine fabric, moving workloads across multiple servers or nodes, or even connecting other elements like storage or CPUs.
As you scale up or out with massive datasets, bottlenecks can be resolved with collectives and protocol acceleration at L2, L3, and cluster load balancing, all at wire rate. The system must deliver consistent performance without degradation as more nodes participate. Arista is a shining example here, with greater than 100 cumulative customers to date in 800 gigabit Ethernet deployments, and we expect the addition of 1.6 terabit Ethernet in 2027 at production scale. Scale-across extends the cloud and AI fabric across locations, as the AI accelerators in a single location may need to be distributed to achieve the appropriate bandwidth capacity with the optimal power. As workloads become more complex and more distributed, the bisectional bandwidth must scale smoothly to avoid bottlenecks and preserve performance.
This demands sophisticated traffic engineering, deep routing, encryption properties, and integrated optics based on the Arista EOS stack and using Arista’s flagship 7800R3 or 7800R4 series. The 7800 has established itself as the premier scale-across choice in this category. You can see that Arista’s accelerated networking strategy and these three types of AI fabrics are critical to the deployment of diverse accelerators and frontier models. Traditional static network topologies with hotspot jitter that slows down job completion time or increases time to first token for inference are simply not the way to go. Arista’s EtherLink portfolio addresses both the synchronous flows for massive training and the low latency for concurrent swarms of real-time inference in this era of trillions of tokens, terabits of performance, and terawatts of power.
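The scale-out and scale-across discussion above hinges on bisection bandwidth scaling smoothly as nodes are added. As a rough illustration of the underlying fabric math, here is a sketch of two-tier leaf-spine sizing; the radix and port speed are hypothetical example parameters, not Arista product specifications.

```python
# Hypothetical two-tier leaf-spine sizing sketch; parameters are
# illustrative, not taken from any Arista product datasheet.
def leaf_spine_capacity(radix, port_gbps, oversubscription=1.0):
    """Max hosts and bisection bandwidth for a 2-tier fabric.

    radix: total ports per switch (same for leaf and spine)
    oversubscription: downlink/uplink ratio at the leaf (1.0 = non-blocking)
    """
    # Split leaf ports between hosts (down) and spines (up)
    uplinks = int(radix / (1 + oversubscription))
    downlinks = radix - uplinks
    num_leaves = radix                      # each spine port feeds one leaf
    hosts = num_leaves * downlinks
    bisection_gbps = num_leaves * uplinks * port_gbps
    return hosts, bisection_gbps

# e.g. a hypothetical 64-port switch at 800G, non-blocking:
hosts, bisection = leaf_spine_capacity(radix=64, port_gbps=800)
print(hosts, bisection)  # 2048 hosts, 1,638,400 Gbps (~1.6 Pbps) bisection
```

The key property the prose describes is visible here: adding leaves raises host count and bisection bandwidth together, so per-node bandwidth does not degrade.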
In 2024, you may recall we discussed four Ethernet-based AI training deployments, and of course, since then we’ve expanded and exploded to countless others. This fourth customer from the group has officially moved from InfiniBand to Ethernet at production scale over the last two years. The high-speed Ethernet AI leaf-spine with flexible air or liquid-cooled infrastructure overcomes the physical constraints of power and space for AI workloads. It results in a low latency distributed AI supercomputer fabric across global regions. What is clear to me and us is our networking prowess with data, control and management, and multiplanar orchestration is not only central to our AI switching performance, but also important for high-speed optics transmission. At the recent Optical Fiber Communication Conference, Arista unveiled its extended pluggable optics, XPO form factor, designed specifically for optics innovations at high speed.
Now endorsed by greater than 100 vendors, salient features include record-breaking throughput of 12.8 terabits per pluggable module, unprecedented rack density of 204.8 terabits per OCP rack unit, an integrated cold plate capable of cooling up to 400 watts per module, and universality and flexibility across a range of pluggable optics and copper, as well as linear, half-retimed, or retimed interfaces. A special kudos to Andy Bechtolsheim, Arista’s Chief Architect, for driving from OSFP 10 years ago to this next-generation XPO, bringing structural improvements in power, footprint, and cost reductions. Our enterprise business experienced strong results in Q1 2026, both in data center and campus. Our VeloCloud acquisition is also integrating well into our branch and campus strategy, bringing more distributed enterprise use cases and a new channel motion with managed service providers, MSPs.
To share some recent wins, let us hear now from Todd Nightingale and Ken Duda, our co-presidents, to delineate our Arista 2.0 centers of data strategy. Over to you.
Ken Duda, Co-President, Arista Networks: Thanks, Jayshree. Arista is diversifying its business with new customer acquisitions covering a broad set of use cases, all unified by Arista’s EOS stack and its ability to modernize enterprise infrastructure operating models. Our first highlighted win is a NeoCloud AI network. The customer was constrained by an incumbent white box architecture that simply could not keep pace with the massive scale-out requirements of AI. Arista was selected as a commercially proven and reliable scale-out architecture with the unmatched stability of EOS and the ability to connect AMD MI-series XPUs. Arista’s AI leaf and spine EtherLink products were deployed at 800 gigabits to provide the incredible performance modern AI networks require. The AI fabric was tuned using Arista’s cluster load balancing to scale out to thousands of XPUs, minimizing hotspots and congestion.
On the software side, the customer leveraged AVD, Arista’s Validated Design framework, to automate network provisioning, which both reduces the total cost of ownership and provides an easy path to reliable network deployment at scale; without AVD automation, a small mistake can cost precious days of debugging time. This was a strategic NeoCloud win with large potential for upside growth in an area where we are seeing enormous opportunity and velocity in both NeoCloud and Sovereign cloud customers. Our next win is in the service provider sector with a leading regional fiber-to-the-home provider serving hundreds of thousands of subscribers. As subscriber bandwidth demands have surged, this customer realized their legacy routing architecture was too rigid, too brittle, and too costly to scale. They needed a solution which would modernize their next-generation backbone and internet peering edge.
Arista won this upgrade by proving an automation-first approach with a modern operating model driving operational savings and increased subscriber reliability. On the hardware side, we deployed popular 7280 routing platforms using EOS’s FlexRoute capabilities, which unlock deep buffering, a rich control plane software stack, and full internet route scale. On the software side, Arista’s AVD framework again automates router provisioning to reduce the time it takes to turn up services while also reducing errors. Here we saw great results from our technology partnership with Palo Alto Networks, ensuring the routing edge integrated securely and seamlessly with our overarching security architecture. Here, Arista’s core value proposition of lower operating costs and greater reliability drove a competitive win. Now I’ll hand it off to Todd.
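Ken’s point about AVD automation reducing provisioning errors comes down to a declarative workflow: a single intent document is expanded into per-device configuration. AVD itself is an Ansible-based framework; the Python sketch below is a hypothetical illustration of that idea only, with invented names and structure, and is not the actual arista.avd collection API.

```python
# Hypothetical sketch of the declarative idea behind AVD-style automation:
# one intent document expands into per-device configuration stanzas.
# Names and structure here are invented for illustration; this is NOT
# the arista.avd API.
intent = {
    "fabric": "dc1",
    "spines": ["spine1", "spine2"],
    "leaves": ["leaf1", "leaf2", "leaf3"],
    "underlay_asn_base": 65000,
}

def render_bgp_config(intent):
    """Expand fabric intent into minimal per-device BGP stanzas."""
    configs = {}
    for i, leaf in enumerate(intent["leaves"]):
        asn = intent["underlay_asn_base"] + i + 1  # one ASN per leaf
        lines = [f"router bgp {asn}"]
        for spine in intent["spines"]:
            lines.append(f"  neighbor {spine} remote-as {intent['underlay_asn_base']}")
        configs[leaf] = "\n".join(lines)
    return configs

configs = render_bgp_config(intent)
print(configs["leaf1"])
```

The design point is that humans edit only the intent, so a typo is caught once at the source rather than replicated inconsistently across dozens of device configs.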
Todd Nightingale, Co-President, Arista Networks: Thanks, Ken. Our third win is in the insurance services sector. Following a year of strategic collaboration, the customer wanted to modernize their infrastructure with a streamlined, automated foundation capable of delivering granular real-time insights to secure and monitor critical applications. Here, observability was truly the key. Arista secured this comprehensive win after executing a flawless proof of concept, proving our architecture significantly exceeded operational standards. To achieve deep network observability, the customer deployed our R3 series for filter and delivery roles on our monitoring fabric, DMF. Additionally, they deployed campus switches to radically simplify out-of-band management. Leveraging the rich telemetry capabilities of EOS, the customer unlocked advanced features like VXLAN header stripping and transitioned to a fully automated declarative operational model. Our final win is within the manufacturing sector, where we’re seeing amazing momentum.
Here we have a customer operating more than 100 factory sites globally, servicing consumer, healthcare, aerospace, defense, and AI infrastructure customers. This was a true mission-critical use case, and their legacy campus network had become the bottleneck for achieving real 24/7 production. Shifting traffic patterns, manual provisioning, and importantly, a lack of visibility and forensics into microbursts and drops were keeping them from achieving their goals. Arista won an extensive bake-off against two established vendors, both of whom proposed campus designs that could not match what Arista delivered: a universal leaf-spine campus based on open standards running a single EOS binary across campus, data center, and WAN. The cognitive campus solution leveraged a 100 gig campus spine, high-powered PoE leaves, and Arista Wi-Fi 7. CloudVision drove provisioning, configuration, and lifecycle end-to-end with consistent tooling across the network infrastructure.
Here it really was Arista’s modern operating model that drove differentiation in the engagement: hitless production upgrades, latency analyzer for microburst visibility, and true packet drop forensics. The teams were able to significantly reduce production-impacting maintenance windows and expose events that had previously caused line interruption. In all 4 of these examples, Arista’s support team stood out to customers for its best-in-class service, well known for troubleshooting issues with customers long after Arista gear is no longer suspected to be at fault. Arista’s modern operating model also played a key role, especially the AVD tooling that Ken mentioned for architecture, validation, and deployment. We’re excited about the momentum across the entire enterprise business and especially the diversification that it brings to Arista. Thanks, Jayshree.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Thank you, Todd. Thank you, Ken. It was so fantastic to hear of happy customer outcomes. We had another fitting example of that at our Innovate 2026 event, held in March at our headquarters facility. The energy and enthusiasm of the greater than 250 customers who attended was truly infectious and inspiring. I want to especially give a shout-out to Ashwin Kohli and Dhivya Wagner’s teams, who have already improved our outstanding net promoter score from 87 to 89, translating to 94% customer approval. This really exemplifies the lowest security vulnerabilities in the tech industry. It enhances our ability to better cope with the many risks that AI is creating. As I look ahead at the year, our Arista 2.0 momentum continues to march on and resonate. Our demand is actually the best I have ever seen in my Arista tenure.
The supply, however, is a slightly different and opposite tale. We are experiencing industry-wide shortages across the board, be it wafers, silicon chips, CPUs, optics, and of course, memory that I referred to last quarter, coupled with elevated costs to procure these. Clearly, our demand is outstripping our supply this year. While we hope the supply chain will ease in the next year or two, the Arista operations team has been diligently engaging with our vendors in strengthening supply agreements and engaging in multi-year purchase commitments. We anticipate gross margin pressure due to mix and trade-offs we are making to pay more to assure supply continuity to our customers. Nevertheless, it gives us confidence to increase our forecasted growth slightly to 27.7%, aiming now for $11.5 billion for 2026.
We also increased our AI target now to $3.5 billion this year, thereby more than doubling our AI sales annually. With that good news, over to you, Chantelle, for the financial details.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Thank you, Jayshree. I continue to be impressed by our company’s ability to deliver such a breadth and depth of networking innovation. It is a core tenet that underpins our strong financial return to shareholders. Now turning to Q1 to detail our most recent financial outcomes. To start off, total revenues in Q1 were $2.71 billion, up 35.1% year-over-year and above our guidance of $2.6 billion. Growth was seen across the customer sectors, led by our AI and specialty providers customers within the quarter. International revenues for the quarter came in at $418.9 million or 15.5% of total revenue, down from 21.2% last quarter. This quarter-over-quarter decrease was primarily influenced by Americas-based sales to our large global customers.
The overall gross margin in Q1 was 62.4% within the guidance range of 62%-63% and down from 63.4% in the prior quarter. This quarter-over-quarter decrease is due to the lower mix of sales to our enterprise customers in the quarter. Operating expenses for the quarter were $396.8 million or 14.6% of revenue, down slightly from last quarter at $397.1 million. Our R&D spending came in strong at $271.5 million or 10% of revenue. Despite a slight sequential decrease due to the timing of new product introduction costs, Arista continues to demonstrate its commitment and focus on networking innovation.
Sales and marketing expense was $103.5 million or 3.8% of revenue, down from 4% last quarter, representative of the highly efficient Arista go-to-market methodology. Our G&A costs came in at $21.8 million or 0.8% of revenue, down from $26.3 million last quarter, reflecting our strong base cost productivity within a pure play networking business model. Our operating income for the quarter was $1.29 billion or 47.8% of revenue. Let me pause here to thank the greater Arista team for all of their efforts and resulting excellent execution in a dynamic environment. Other income and expense for the quarter was a favorable $110.8 million, and our effective tax rate was 21.1%.
Overall, this resulted in net income for the quarter of $1.11 billion or 40.9% of revenue. Our diluted share count was 1.27 billion shares, resulting in a diluted earnings per share for the quarter of $0.87, up 31.8% from the prior year. Now turning to the balance sheet. Cash, cash equivalents, and marketable securities ended the quarter at approximately $12.35 billion. In the quarter, we did not repurchase our common stock. Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remain available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors. Now turning to operating cash performance for the quarter.
We generated approximately $1.69 billion of cash from operations in the period, the strongest in the history of Arista. This was driven by a robust earnings performance coupled with an increase in deferred revenue. DSOs came in at 64 days, down from 70 days in Q4 due to the linearity of shipments within the quarter. Our inventory turns improved slightly, landing at 1.7 versus 1.5 in the prior quarter. We ended the quarter with $2.38 billion in inventory, up from $2.25 billion last quarter. This marginal increase is a calculated investment in the mix of raw materials to fulfill our growing demand. Our purchase commitments at the end of the quarter were $8.9 billion, up from $6.8 billion at the end of Q4.
As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters as a reflection of the combination of demand for our new products, component variability, and the lead times from our key suppliers. This could also result in quarters of elevated inventory balances ahead of the deployments. Our total deferred revenue balance was $6.2 billion, up from $5.37 billion in the prior quarter. The majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $643 million versus last quarter. We remain in a period of ramping our new products, winning new customers, and expanding new use cases, including AI.
These trends have resulted in increased customer specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis independent of underlying business drivers. Accounts payable days were 54 days, down from 66 days in Q4, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $54.5 million. We continue the construction work to build expanded facilities in Santa Clara. In Q1, we incurred approximately $40 million in CapEx related to this program and estimate it will reach $180 million in 2026. These Q1 results have provided a strong start to our fiscal year 2026.
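The deferred revenue movement Chantelle describes decomposes with simple arithmetic; note the non-product portion below is inferred as a residual and is not a figure reported on the call.

```python
# Illustrative decomposition of the deferred revenue movement above.
# The non-product delta is inferred as the residual; it is not a
# separately reported figure in this call.
total_deferred_q1 = 6.20e9   # quarter-end total deferred revenue
total_deferred_q4 = 5.37e9   # prior-quarter total deferred revenue
product_delta = 0.643e9      # reported product-related increase

total_delta = total_deferred_q1 - total_deferred_q4
service_delta = total_delta - product_delta

print(f"Total deferred increase: ${total_delta / 1e9:.2f}B")     # ~$0.83B
print(f"Implied non-product delta: ${service_delta / 1e9:.2f}B") # ~$0.19B
```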
As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 27.7% revenue growth, delivering approximately $11.5 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI fabrics goal from $3.25 billion to $3.5 billion. I would like to take this opportunity to remind the audience that the timing and outcome of customer projects with acceptance terms can create quarterly and sequential dynamics that do not follow prior year trends. For gross margin, we reiterate the range for the fiscal year of 62%-64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon. Given this challenging supply backdrop, I am proud of our sourcing team’s execution, which strongly contributes to the gross margin outlook holding in our guidance range.
We feel confident that we can source the necessary supply to meet our customers’ needs. Our operating margin outlook remains at approximately 46% for the fiscal year, with the tax rate expected at 21.5%. On the cash front, we will continue to work to optimize our working capital investments with some expected variability in inventory and cash flow from operations due to the timing of component receipts on purchase commitments. More specifically, our guidance for the second quarter is as follows, now with the added quarterly metric of diluted earnings per share: revenues of approximately $2.8 billion; gross margin between 62% and 63%; operating margin between 46% and 47%; diluted earnings per share of approximately $0.88 with approximately 1.27 billion diluted shares. Our effective tax rate is expected to be approximately 21.5%.
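The Q2 guidance figures are internally consistent; as an illustrative check of the implied profitability (using only the rounded numbers stated above):

```python
# Illustrative consistency check of the Q2 guidance figures above,
# computed from the rounded numbers as stated on the call.
q2_revenue = 2.8e9          # guided revenue
q2_eps = 0.88               # guided diluted EPS
q2_diluted_shares = 1.27e9  # guided diluted share count

implied_net_income = q2_eps * q2_diluted_shares
implied_net_margin = implied_net_income / q2_revenue

print(f"Implied Q2 net income: ${implied_net_income / 1e9:.2f}B")  # ~$1.12B
print(f"Implied Q2 net margin: {implied_net_margin:.1%}")          # ~39.9%
```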
In closing, we are optimistic about the fiscal year ahead. The industry has many times demonstrated the pattern of landing on Ethernet as the winning technology, and that is where Arista shines best. We appreciate our customers’ choice of working with us to achieve their business outcomes. Now, Rudy, back to you for Q&A.
Rudolph Arrajo, Head of Investor Advocacy, Arista Networks: Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I’d like to request that everyone please limit themselves to one question. Your line will be placed on mute after your question. Thank you for your understanding. Regina, please take it away.
Operator: We will now begin the Q&A portion of the Arista earnings call. To ask a question during this time, simply press star and then the number one on your telephone keypad. If you’d like to withdraw your question, press star and the number one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Simon Leopold with Raymond James. Please go ahead.
Simon Leopold, Analyst, Raymond James: Great. Thank you very much for taking the question. I wanted to explore your commentary around the scale-across opportunity in particular. I guess what I’m trying to get a better sense of is how much revenue, if any, did that contribute last year?
How material is that to the $3.5 billion forecast you’re giving this year, and how should that trend longer term? Thank you.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Sure, Simon. I think last year on scale across, we were just beginning. I think they were small numbers, and the majority of the numbers were really scale out. That’s sort of our heritage, and that’s where we excel. If I were to anticipate how it would be this year, again, scale-up is virtually zero and nonexistent because it really only comes to play after the ESUN spec. Consider that more of a 2027, 2028 kind of number. I think the number will be really shared between scale across and scale out. I don’t know if I can say it’s 50-50 or 70-30 or 60-40, but scale across will definitely contribute at least a third of our AI number.
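Jayshree’s "at least a third" framing bounds the scale-across contribution; a minimal illustrative calculation, assuming the split lands somewhere between one-third and an even 50-50:

```python
# Illustrative range implied by the scale-across commentary above:
# "at least a third" of the $3.5B AI target, up to a hypothetical 50-50 split.
ai_target = 3.5e9
low = ai_target / 3   # at least one-third of the AI number
high = ai_target / 2  # if the split were 50-50

print(f"Implied scale-across range: ${low / 1e9:.2f}B - ${high / 1e9:.2f}B")
```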
Operator: Our next question will come from the line of George Notter with Wolfe Research. Please go ahead.
George Notter, Analyst, Wolfe Research: Hi, guys. Thanks very much. Maybe just continuing the discussion on scale up. You know, we are starting to see rack design wins. One of your competitors in the ODM space, I think, has got a couple of designs that they’ve announced at least. I know you’re kind of pointing towards ESUN as being kind of a key, you know, catalyst in generating business there. But can you talk a little bit about where you are in terms of designs with customers, progress?
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Sure.
George Notter, Analyst, Wolfe Research: Anything you can tell us there would be great. In fact, I think a few quarters ago you said you had 5-7 scale-up rack designs that you were at least working on.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah.
George Notter, Analyst, Wolfe Research: I’m just wondering if you can update that. Thanks a lot.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. That’s correct. George, I think there is no doubt in our minds that we will have a number of racks, a number of scale-up use cases in 2027. Maybe some of them will be in early trials, but the majority of them are looking at really starting with 1.6T, and 1.6T chips will really happen in 2027. There may be a few, a handful of them that try some experimental stuff at 800 gig, but we continue to see at least, you know, 5-7 rack opportunities. Some of them are multiple racks with the same customer. We’re actively designing with them. There’s a huge amount of liquid cooling, you know, designs with very dense cabling options, acceleration of collectives and memory, features we have to work on for low latency.
I definitely feel we’re in active engineering phase with Ken and Hugh’s teams this year. Unlike the ODMs, I think we’re held to a higher bar, and we have to just make sure that this thing is production-worthy and specification adhering to ESUN. I would say today’s scale-up is mostly limited to NVLink from Nvidia and maybe some PCIe switching. Majority of the Ethernet scale-up will only really happen in 2027 and 2028.
Operator: Our next question will come from the line of Antoine Chkaiban with New Street Research. Please go ahead.
Antoine Chkaiban, Analyst, New Street Research: Hi. Thank you very much for taking my question. With demand outstripping supply, I’m wondering, you know, how much does your current supply allow you to grow this year and next? Is the updated, you know, top-line growth guide of 28% growth a good reflection of how much supply you’ve secured for this year? What could that number look like next year, you know, based on how much supply you think you can get as of today?
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Antoine, I think the supply chain problem, and Todd maybe you can add to this, is not a 1- or 2-quarter phenomenon. We now think it’s a 1- or 2-year phenomenon. You know, at first we thought it was memory, now it’s all the wafer fabrication facilities. Every chip is challenged, and you can see how Chantelle has leaned in with the purchase commitments for multiple years. While we will continue to improve it, this is a reflection of not just demand, but how much we can ship this year. As we continue to ship this year, we can give you better visibility on next year. I can just tell you, we see multi-year demand, and we are gonna do everything, including hurt our gross margins, to supply to that demand this year and next year.
Because we believe that we certainly don’t wanna keep GPUs idle and AI infrastructures underutilized because Arista didn’t supply the network. Can the number get better this year? I think this reflects our best attempt at a good number. We started out at—Did we start out at 20%, 25% growth? Yeah. We started out at 20, we’re at 25, now we’re at 27.7. Could we improve to the tail end of the year? We’ll see. The amount of decommits we’re seeing doesn’t feel good. We think a lot of this will continue into next year and keep us constrained for the next couple of years.
Operator: Our next question will come from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Aaron Rakers, Analyst, Wells Fargo: Yeah. Thanks for taking the question. You know, Jayshree, last quarter you had alluded to kind of engagements with other hyperscale cloud titan customers. I think you also pointed to maybe having 1 or 2 new 10% customers this year. I’m curious of, you know, where we stand today. Any updated thoughts on adding 1 or 2 new customers at 10% plus? You know, maybe qualitatively just talk about your engagements you’re having beyond your 2 big cloud titans across the hyperscale vertical. Thank you.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. Absolutely. First of all, our two big ones, we never take them for granted. Microsoft and Meta, they’re our all-time favorites. They’ve been our 10% and greater customers for over a decade, and the partnership could never be stronger, and it continues to get better, both in cloud and in AI. In terms of the new entrants, we still expect at least one, maybe two. Maybe I should caveat this by saying certainly in demand we see one or two. We shall see, Todd, how we do on shipments to see if we can achieve the greater than 10%.
The two of them have very interesting characteristics. They exhibit what I would call the three use cases I just alluded to, scale up, scale out, and scale across, where we really have a notion of creating a fabric. You know, so far we’ve been working with them a lot on the front end, and now we get to complement that on the back end, definitely for scale out and scale across, and maybe even a little bit of scale up, in some of these use cases. The other thing we’re seeing with a lot of these use cases is the lack of power in sites, and the ability and demand to distribute and get a more multi-tenant scale-across is very high in these two use cases.
A third common thread we’re seeing across all of them: much as we all talk about ODMs and white boxes, they deeply appreciate EOS, the features, the reliability, the observability, and just the fact that we have a robust, highly scalable layer 2/layer 3 stack, which commands a lot of superior advantages. I believe the diversity of these cloud titans is largely due to the fact that we have great hardware and software combined. Ken, you want to say a few words on that?
Ken Duda, Chief Technology Officer, Arista Networks: It’s just been an incredible journey to live through this and see the level of infrastructure build out we’re getting and how well-positioned our hardware and software roadmaps are to address these ever-evolving, more advanced use cases. It’s just a blast to get to work on this stuff.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: That’s always fun when your job is a blast. Aaron, I still see 1, I think 1, maybe 2, 10% customers. Todd, hopefully we can ship it.
Antoine Chkaiban, Analyst, New Street Research0: Sure.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Oh, sorry. Ita.
Operator: Our next question will come from the line of Ben Reitzes with Melius Research. Please go ahead.
Ben Reitzes, Analyst, Melius Research: Oh, there you go, Jayshree. Here I am.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah.
Ben Reitzes, Analyst, Melius Research: I, yeah, I wanted to ask around the product, the constraints. Are you able to say what the number was in the quarter and what it’s taking away in terms of the $2.8 billion guide? Is it safe to say, you know, things would have been, you know, $100 million or $200 million higher for both? Then if you don’t mind, just, if you can touch on why the gross margin should go back up to 63%. You know, what is it that you guys are doing that, what gives us confidence that it can actually expand a tad from here?
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Yeah, I think that.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Maybe you take this one, Chantelle.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Okay, I’ll just-
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Hey, Ben. I don’t think the commentary about demand outstripping supply is a Q1, Q2 thing; we’re talking about looking ahead to Q3, Q4 and into next year. I don’t think there’s something outside of what we’ve guided or what we’ve delivered in the first half. In the sense of the margin, the margin’s a mix of things, right? All the team members are executing in full force. The supply chain team is doing everything they can to ensure that we have the best supply at the best price, and we’ve incorporated that. Given the mix of customers, the only chance for margin expansion would be due to mix.
I think that’s the opportunity as we look to see what we can deliver in the second half, Ben. I think that would be the opportunity.
Todd Nightingale, President and Chief Operating Officer, Arista Networks: The teams are also doing everything they can to make sure we control our costs, especially on the manufacturing side. That includes, you know, bringing on secondary providers, qualifying new components, et cetera, to make our supply chain more resilient and more cost-effective in the long run.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: One thing to clarify also on gross margins: we view this as a partnership with our customers. While we did consider pricing and have raised prices a little bit, unlike our competitors, we haven’t done two price increases. We haven’t done major price increases. The price increases really come into play once our backlog starts to reduce, right? You won’t see the impact of that yet. Our gross margins are a strong function of costs going up and us still eating a lot of those costs, you know, giving our customers the benefit and promise of the pricing we said we would give them.
Operator: Our next question will come from the line of Michael Ng with Goldman Sachs. Please go ahead.
Michael Ng, Analyst, Goldman Sachs: Hey, good afternoon. Thanks for the question. I was just wondering if you could talk about whether or not Arista’s seeing networking attach opportunities for customers that are using TPU or TPU-like architectures. Then, anything you could comment on as it relates to growing NeoCloud traction. Is that something that you think may be a little bit underappreciated by the analyst community? Thank you very much.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Michael, you’re absolutely right. I’ll take your second question first. It’s easy to talk about the titans ’cause the numbers are so ginormous, right? The NeoClouds are a very important sector because they don’t always have the staff to do everything they want to do, and they really lean on Arista’s design expertise, EOS expertise, you know, the network design configurations we can provide them, the family of 22 products we have in AI. Yes, I would agree with you, it’s underappreciated. The NeoCloud segment was very strong for us this quarter, if I recall, Chantelle, in the specialty and cloud providers. What was the other question? You had a 1A, 1B.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: The TPU.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Oh, yeah, the TPU. In general, we are seeing diverse accelerators. Last time I spoke about the AMD accelerators. This time I will definitely give a nod to the TPUs, because in particular, scale across use cases, we’re seeing multi-tenants connecting to different AI accelerators, including TPUs as well. I think the diversity of accelerators is creating tremendous multi-accelerator opportunity and multi-protocol features that we can provide for them in our network.
Operator: Our next question will come from the line of Sean O’Loughlin with TD Cowen. Please go ahead.
Sean O’Loughlin, Analyst, TD Cowen: Great, thanks. You know, congrats on the results, and thanks for letting me join in on the fun here. Jayshree, wanted to get your thoughts on, you know, we’ve been talking a lot about agentic AI and the demands that it’s placing on maybe some of the more general purpose infrastructure that we’ve had in the background over the last couple years. You’ve talked in the past about a 2-to-1 pressure, you know, on front-end networking created by back end. First, I guess, is that still the correct way to think about it? Second, you know, as agentic workflows become more common, is there any additional demand, you know, from your perspective having a single-image EOS platform on the front and the back end, or is the front and back end still pretty siloed?
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. Well, first of all, Sean, welcome to your first call. It will be fun. Join the fun. Agentic AI is kind of a buzzword, but let me sort of break it down. The biggest killer application we see in agentic AI right now is still training. Indeed, it’s gonna move to more distributed inference, and we’d also like to see agentic AI move into a lot of enterprise use cases, all of which we are seeing, by the way. I would say large, medium, small: the largest killer agentic AI application is training, the medium is inference, and the small is obviously enterprise.
In terms of back end versus front end, we are now seeing way more back end activity, particularly with our large AI titans and cloud titans, because there is just so much scale they need to prepare for the billions of parameters and tokens. So much so that I think they’re almost ignoring the front end right now in favor of the back end, though they might come back and refresh it. Having said that, by virtue of the back end deployments, I don’t know if we see a 2-to-1 ratio to the front end anymore; we at least see a 1 to 1, and the 1 to 1 can be wide area, CPU, and storage. Those are probably the 3 common use cases.
Not all the customers are up and lifting everything and doing all three, although we’ve had cases where some of them did an upgrade at the front end before they went into the back end, but usually they will have to come back to that because the minute you put that kind of performance pressure and scale on the back end, you almost have to do something in the front end. At the moment, I would say it’s more 1 to 1, and at the moment I’d also say the scale across in the back end has become a bigger use case than we imagined this time last year.
Ken Duda, Chief Technology Officer, Arista Networks: The other thing I’d like to mention here is just how good it feels to have the same set of products and the same common operating system, management suite, and operating model across the front end and back end. This lowers cost for the customer, simplifies their design process to get that leverage, and we’re one of the few vendors who can do that.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: I think only.
Ken Duda, Chief Technology Officer, Arista Networks: Yeah, I think so.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: I think only. Yes, absolutely. Good point, Ken.
Operator: Our next question will come from the line of Meta Marshall with Morgan Stanley. Please go ahead.
Meta Marshall, Analyst, Morgan Stanley: Great, thanks. Appreciate the question. Maybe just a question on XPO monetization, or just how it helps you kind of continue to gain share with customers, or just mind share with customers, by being so front-footed with the technology. Thanks.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Thank you, Meta. I think, as you know, we’re not a classic optics vendor, but almost always whenever we’re selling our switches, it has to connect to something, and usually it’s some form of copper or optics. Andy’s innovations with OSFP, and I remember this super well, where everybody was saying, "Oh, no, we can just use QSFP," have proven to be, you know, not only a contribution for Arista, but really for the whole industry, and that’s still how we see it with XPO as well. You know, while the industry has been talking a lot about co-packaged optics, these are still science experiments, and they’re very proprietary, with individual vendors doing their own thing.
We may embrace open CPO a few years from now, but we think XPO has a 10-year run, especially at 1.6T and 3.2T, where you need liquid cooling and you need that kind of capacity. You know, all those scale-up racks we’re talking about wouldn’t be possible without XPO or CPC or any one of those technologies. We see it this way: just as the last decade was greatly influenced by OSFP, the next decade will be greatly influenced by XPO. Remember, 99% of the optical market today that we connect to is all pluggable optics.
Ken Duda, Chief Technology Officer, Arista Networks: Right.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: This is a very crucial invention and innovation, not just for Arista, but the industry at large.
Ken Duda, Chief Technology Officer, Arista Networks: I think this is a great example of how Arista enables an ecosystem, and then we profit as that ecosystem grows. What XPO unlocks is a standard, interoperable, multi-vendor way to get to 4 times the network density and liquid cooling, which is absolutely critical for these AI use cases. Without that, you’ve got this huge bottleneck at the front panel and a huge amount of extra rack space required to get through OSFPs. We’re really enabling the future growth of our industry this way, which benefits us and others as well.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. It’s stunning to me. I remember when I first talked to Andy and Vijay, they said, "Oh, we think we’ll get about 20 signatures," and then it was 40, and now it’s north of 100. It tells me the whole consortium is coming together for things like Ethernet, IP, and standards and standardization of optics.
Operator: Our next question will come from the line of Tal Liani with Bank of America. Please go ahead.
Tal Liani, Analyst, Bank of America: Hi, guys. Can you hear me?
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yes, Tal, we can hear you.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Hi, Tal. Yep.
Tal Liani, Analyst, Bank of America: Hello. I promised myself to be nice today, so I have a good question for you.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: I promise to be nice too.
Tal Liani, Analyst, Bank of America: Deferred revenues doubled in the last year. If I combine short-term and long-term, it went up $826 million.
It went up significantly in the last 4 quarters. What needs to happen? What are the conditions to recognize deferred revenues? Meaning, what needs to happen for deferred revenues to be recognized over the next few quarters? Is it about data centers going live and traffic going into data centers? Or what are the sources for the deferred revenue increase? Thanks.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Right. Right. Tal, I really do like you, so I’m gonna be nice to you not because I have to, but because I like to. I think if you remember 10 years ago, Tal, we had a similar phenomenon where in the cloud, the whole leaf-spine design was brand new. Nobody really knew how to build it or monetize it, and we were building some of the world’s largest networks for Azure, et cetera, right? We had new products. They had new designs. They had traditionally done access-aggregation-core and were now moving to the Clos fabric topology. We had some fairly lengthy qualification cycles. I would say there’s a customer aspect to it and a product aspect to it.
The customer aspect is they need to have the space, they need to have the facilities, they need to have, in this case, GPUs now. Back then it used to be CPUs. They gotta have their rack and stack. In many cases, by the way, we’re running into examples where they literally need to manually install the cables, and that takes several months, right? Thousands of people have to do that. There’s certainly a customer acceptance piece of it, which starts with being ready. There’s also a new product aspect. Many of these new products in the Arista Etherlink family, particularly for AI, are brand new. Brand new chips, brand new software. The familiarity with it, particularly in the back end for scale-out and scale-across, is new to them.
There’s a level of testing and making sure it works with the rest of their ecosystem, including the front end. That is super important, and Arista bears a huge responsibility there as well. All this to tell you that the length of time to qualify this, which used to be 2-4 quarters, has extended to more like 6 to even 8 quarters. It’s gotten much longer. Chantelle, you want to add something there?
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Yeah. The only other thing I’d add, thank you, Jayshree, is that we do recognize some of it every quarter. It’s not like it’s one balance that is just aging and growing. Tal, things come in and things are recognized to the P&L every quarter. I just want to make sure you understand that things are being recognized.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. It’s not piling.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: That’s right.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Some things go in and some things come out.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: That’s right. Okay.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. Does that make sense, Tal? It does. Tal, you’re on mute? No, no.
Liz Stine, Director of Investor Relations, Arista Networks: They muted him after his question.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Oh, he does. Okay. All right. We can wait till the next one.
Operator: Our next question will come from the line of Amit Daryanani with Evercore. Please go ahead.
Amit Daryanani, Analyst, Evercore: Yep, thanks for taking my question. You know, I guess, Jayshree, you folks have kind of positioned XPO as the next OSFP, and I would love to understand that as XPO ramps from, you know, the OFC demos to potentially deployments in 2027. You know, how do you see the optics architecture within AI clusters changing, and then maybe specifically for Arista, you know, does that change the growth profile or your content per AI rack or cluster as we go forward? Thank you.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. No, thank you, Amit. I think you should look at XPO as a partner to OSFP. At 400 gig and 800 gig, you’ll be fine with OSFP. As we go to higher speeds in 2027, 2028 or even beyond, you know, OSFP will run out of steam, and this will be the new connector of choice. The migration to higher speeds equals the migration to XPO, particularly for scale-out and scale-across. Within a rack and scale-up, there’s still a number of choices. I think within short distances of 2 to 3 meters, you’re still gonna see a lot of co-packaged copper. I think XPO in terms of density will be another alternative. I don’t rule out open CPO as well over there if they’re really looking to maximize their density in a minimum amount of space.
I think XPO will be particularly prevalent in scale-out and scale-across, and will be one of the choices in scale-up.
Operator: Our next question comes from the line of Ryan Koontz with Needham. Please go ahead.
Jeff Hobson, Analyst, Needham: Hi, this is Jeff Hobson on for Ryan. I appreciate the question. On the scale-across, it seems like that would be a really good fit for all of Arista’s capabilities. I know you mentioned it would maybe be around a third of revenue this year. Is this something where scale-across could even be larger than scale-out over the next couple of years? Thanks.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Hi, Ryan, or rather Jeff. I think the answer to that would lie in how well we do with both and what form factors are used for both. A majority of the scale-across today is a very premier, valuable, heavy-duty routing platform, the 7800. If we do lots of that, it could get well beyond the 30%. Some of them may do it with fixed boxes too, or fixed switches, and choose to add a lot of cable, in which case it wouldn’t go well above that. We don’t know what we don’t know. I would agree with you that scale-across is by far the most significant and differentiated opportunity that really highlights Arista’s prowess in both platforms and software.
Operator: Our next question comes from the line of Samik Chatterjee with JP Morgan. Please go ahead.
Samik Chatterjee, Analyst, JP Morgan: Hi. Thanks for taking my question. Jayshree, maybe slightly related to the last question here. Just trying to think about, you said, most of the cloud revenue near term is going to be scale-out and scale-across as we wait for scale-up to ramp. How are you thinking about your market share when it comes to scale-out versus scale-across in the early days of scale-across? What are you seeing in terms of market share? And are you seeing customer decisions in scale-across being led by sort of the incumbent in scale-out, or is it a different decision altogether in terms of how they’re designing wins for scale-across? Thank you.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Oh, good question, Samik. You’re making me think. So I would say if it’s a greenfield deployment, then they tend to think of it together because they’re not only building the sites, but they’re thinking of the interconnect across them. And therefore, our market share is generally strong in both. In some cases where Arista has not been a historical participant within the data center, we now have an opportunity to offer the scale across multi-tenant, even in a non-greenfield situation, and let’s say in a brownfield, where now they’ve got disparate data centers or AI clusters that we now have to bring in.
Once again, I think Arista’s a really fitting example to be in scale across for both those use cases, but has the additional opportunity in a brand-new data center to be in all use cases, if that makes sense. It’s giving us a chance to participate with different types of accelerators and different types of models because people aren’t getting the power, and they’re having to distribute the data centers. As a result of distribution, you need more traffic engineering, routing, multi-tenancy. I would say scale across is the common denominator in all our use cases, and scale up and scale out may be nice options in brand-new greenfields.
Operator: Our next question comes from the line of Karl Ackerman with BNP Paribas. Please go ahead.
Karl Ackerman, Analyst, BNP Paribas: Yes, thank you. Jayshree, you are doing more networking design today than ever. Does that change your ability to monetize your services, to capture more of the value that you’re adding to these applications? I guess as you address that, given the large mix of services revenue within deferred, could services revenue accelerate faster and represent perhaps 25% or 30% of sales going forward? Thank you.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: I don’t think so, Karl. I think we’re a product company, and the majority of our revenue generation and interest in Arista as a company, for all the designs we’re doing, comes from our product heritage. It’s not like we charge for services. In fact, we work closely with our partners also. We will, you know, recommend network designs. We will support services, and we are certainly the gold standard for worldwide support. But I don’t expect services as a function of our revenue to go up. I continue to see ourselves as a product-led company.
Operator: Our next question comes from the line of Matthew Niknam with Truist. Please go ahead.
Matthew Niknam, Analyst, Truist: Hey, thanks so much for taking the question. I just wanted to go back to gross margin. I know we were sort of in that 62-ish range; margins dipped about 170 basis points year-over-year. I want to dig into whether it was primarily mix related or, you know, maybe if you can quantify how significant the memory and cost-related impacts were, if there’s any color you can provide. Thanks.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Yeah. I think it’s a great question. I would say, even if you look at prior quarter or prior year, the majority of the difference is the mix of customers. And just to clarify, you know, our larger customers have lower gross margin accretion, and so that mix is the primary driver. The secondary driver, although not as significant, would be things that vary by quarter: how deferred is moving, tariffs, or the memory and silicon costs. But the primary driver is the mix of customer segments.
Operator: Our next question comes from the line of David Vogt with UBS. Please go ahead.
Andrew (for David Vogt), Analyst, UBS: Thanks. Hi, this is Andrew for David. You know, from a high level, with almost $2.4 billion of inventory and, you know, almost 2 years of COGS in purchase commitments, you know, how should we think about the supply constraints and where that inventory and those purchase commitments are not sufficient to meet demand? Where are the holes in your inventory?
Todd Nightingale, President and Chief Operating Officer, Arista Networks: I wouldn’t say we have holes in our inventory, but we have surging demand, especially on the newest platforms, which of course is driving our need for the most modern, you know, silicon from our providers, and it’s driving the need for an expanded amount of memory, even more than we were expecting before the year began. That’s driving us to be a buyer in the market. Luckily, we’ve got pretty good spending power. We’re a very reliable partner in these scenarios, and so we partner closely with these vendors. There’s no doubt that the newest platforms we’re delivering, especially in the AI space, are driving needs in the high end of our portfolio.
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yes. Just to add to that, David, the real hole is lead times. We are experiencing such significant wafer fab shortages that we’re not getting the chips in time. You know, more than a hole, I would just say our purchase commitments are multi-year because we’re having to deal with forecasts that go out multiple years so that we get the chips in time, because the lead time of these chips is so long. I think that’s the biggest hole: lead times.
Todd Nightingale, President and Chief Operating Officer, Arista Networks: Yeah. We are experiencing 52-week lead times pretty reliably, with reservation needs beyond that, and our customers certainly do not want to wait that long.
Operator: Our next question comes from the line of James Fish with Piper Sandler. Please go ahead.
James Fish, Analyst, Piper Sandler: Hey, guys. Chantelle, maybe for you: the guide raise was primarily all on AI. Are you guys prioritizing these shipments? Or what’s giving the hesitancy around sort of the non-AI, non-campus business at this point and leaving that roughly flat still? Jayshree, just for you, as we think about the mix here on gross margin, what are you seeing in terms of blue box adoption now? Are you seeing any sort of net pull-in of demand, just given, you know, you have a lot of smart customers here, and they’re very much aware of the supply chain constraints. Thanks, guys.
Chantelle Breithaupt, Chief Financial Officer, Arista Networks: Yeah. Thank you, James. I’ll start with mine first, in the order of your question. I don’t think we’re saying, because we’re raising the revenue and attributing it to AI, that we’re not excited about all the other customer segments. I think you heard both Jayshree and me talk about how we’re very happy with how the year started and what we’re seeing across all three customer segments. We’re very happy with what we’re seeing in enterprise, which I wouldn’t say is quite AI yet, so let’s count that as the non-AI bucket you referred to. We’ll wait and see. We’re in Q1, reporting Q1. We’ll see how the year goes. We’re very confident across all three that we’re seeing strong demand.
I think I would leave it in the sense of let’s see where we get to in our future quarter guides. Jayshree?
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah. I would agree with that. Just to remind everybody, we’ve raised now from the 10.5 or whatever we said last September to $11.5 billion. Yes, a high degree of that is AI, but we have aggressive commitments on the campus to go to a $1.25 billion quarter and continue to service and grow our data center and cloud just as well. All three are growing, but certainly AI is taking the news headline. Regarding blue box adoption, one of the customer use cases you heard about from Ken actually moved from white box to blue box. Their desire to move to blue box is: it works, number 1. It scales, number 2.
It actually does the job for them with AMD accelerators, number 3. They were very pleased with the diagnostics capability, the platform SDK, where, you know, we literally rewrite every piece of software and bit-twiddle all the Broadcom chip transistors very, very well, and the EOS features. Down the road, they may use some open NOSes as well, but that would be a really good example of a blue box that has EOS today and may go to other NOSes. And we continue to see that, particularly in the NeoClouds. We’ve always seen a bit of that in the cloud and AI titans because they know how to work with open NOSes. We’ve had that hybrid strategy always, but we’re certainly seeing more of it in the NeoClouds now.
Liz Stine, Director of Investor Relations, Arista Networks: Regina, we have time for one last question.
Operator: Our final question will come from the line of Ben Bollin with Cleveland Research. Please go ahead.
Ben Bollin, Analyst, Cleveland Research: Good afternoon, everyone. Thank you for taking the question. Jayshree, you referenced inference a little bit earlier, said it’s kind of a smaller use case right now. I’m interested in your thoughts on where you think enterprise is in terms of their ability to consume inference and create agents, and then, how that develops over time and where you think the front-end networks and edge networks are today in their ability to support those use cases. Basically, just do we get the sustained investment period because what you’re seeing now bleeds and becomes much more significant in enterprise and how long-lasting that might be?
Jayshree Ullal, Chairperson and Chief Executive Officer, Arista Networks: Yeah, no, Ben, I tend to agree with your thesis that while today we are in a training fever, we’ll move to a more distributed, generative AI paradigm with inference, which means you don’t always need the GPU. You’re gonna have high-end CPUs, you’re gonna have a smaller set of parameters and tokens to manage, and you’re gonna have specific agentic AI use cases and applications. We’re seeing very, very early trials and stages. Nothing super big yet, but we are seeing it. I mean, they’re not in the hundreds of thousands of GPUs like you see with the AI titans, but we’re frequently seeing our customers in certain high-tech sectors want to deploy clusters that are a thousand, a few thousand, definitely not 10,000, but in the hundreds to thousands.
They tend to be exactly as you said: not training, but more inference-based, more agentic AI and edge inference-based as well. I think we’ll see more of that. This is the calm before the storm, if you will. As AI gets more distributed, I think it doesn’t need GPUs alone. It’s going to need more high-performance compute. Many of them feel to us like high-performance compute, HPC, use cases that are getting revived for AI. I agree with your thesis, Ben. I think it’s gonna take a couple of years to fully happen.
Liz Stine, Director of Investor Relations, Arista Networks: This concludes Arista Networks’ first quarter 2026 earnings call. We have a presentation posted that provides additional information on our results, which you can access on the investor section of our website. Thank you for joining us today and for your interest in Arista.
Operator: Thank you for joining, ladies and gentlemen. This concludes today’s call. You may now disconnect.