Astera Labs Q1 2026 Earnings Call - Scorpio Switches Drive Revenue Surge as AI Connectivity Demand Accelerates
Summary
Astera Labs delivered a strong first quarter of 2026, with revenue jumping 93% year-over-year to $308.4 million, driven by robust adoption of its PCIe 6.0 switches and signal conditioning products. The company highlighted the successful initial volume shipments of its Scorpio X-Series scale-up fabric switches, particularly the new 320-lane model, which incorporates hardware-accelerated in-network compute capabilities to reduce latency and improve AI training and inference performance. Management emphasized that AI infrastructure spending remains in its early stages, with hyperscalers and sovereign entities continuing to invest heavily in scalable AI clusters.
Looking ahead, Astera Labs provided upbeat second-quarter guidance, projecting revenue between $355 million and $365 million, reflecting continued momentum in its AI fabric portfolio. The company is strategically expanding its footprint into optical interconnects, with near-package optics (NPO) and co-packaged optics (CPO) solutions expected to ramp in 2027. Additionally, new design wins in custom solutions, including KV cache offload applications and NVLink Fusion architectures, position the company to capture higher dollar content per accelerator, potentially exceeding $1,000 per XPU. Despite a slight sequential gross margin compression due to mix shifts and non-cash warrant impacts, the company maintained strong profitability and reiterated its commitment to long-term growth through targeted R&D investments and supply chain diversification.
Key Takeaways
- Revenue surged 93% year-over-year to $308.4 million in Q1 2026, exceeding guidance and reflecting broad-based strength across signal conditioning and AI fabric portfolios.
- PCIe 6.0 products now account for over one-third of total revenue, with millions of ports shipped and strong adoption in both scale-up and scale-out AI systems.
- Initial volume shipments of the Scorpio X-Series 320-lane scale-up fabric switch began in Q1, featuring hardware-accelerated in-network compute and Hypercast capabilities to boost AI performance.
- Scorpio X-Series is expected to become the largest product line by year-end, with high-radix configurations ramping in the second half of 2026 and broader deployment in 2027.
- Management guided Q2 revenue to $355-$365 million, up 15%-18% sequentially, driven by the continued Scorpio ramp and strong Aries PCIe 6.0 adoption.
- Non-GAAP gross margin came in at 76.4% in Q1, up 70 basis points sequentially, though Q2 guidance reflects an estimated 200 basis point non-cash impact from a customer warrant agreement.
- The company announced new design wins for KV cache offload applications using its Leo CXL memory controller, with shipments expected in 2027, targeting AI inference workloads.
- Optical interconnect strategy is progressing, with near-package optics (NPO) and co-packaged optics (CPO) solutions slated for volume shipments starting in 2027, supported by the aiXscale Photonics acquisition and Israel Design Center integration.
- Custom solutions business is emerging as a multi-billion dollar opportunity, with deep engagements in NVLink Fusion hybrid rack architectures and UALink fabric switches expected to launch in 2027.
- Supply chain diversification and a 75-day inventory position give management confidence it can support a doubling of revenue through 2027, with an emphasis on disciplined execution and strategic R&D investments.
Full Transcript
Audra, Conference Operator: Good afternoon, my name is Audra, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs First Quarter 2026 Earnings Conference Call. After management remarks, there will be a question and answer session. I will now turn the call over to Leslie Green, investor relations for Astera Labs. Please go ahead.
Leslie Green, Investor Relations, Astera Labs: Good afternoon, everyone. Welcome to Astera Labs’ First Quarter 2026 Earnings Conference Call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-founder; Sanjay Gajendra, President and Chief Operating Officer and Co-founder; and Desmond Lynch, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today’s earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent annual report on Form 10-K.
It is not possible for the company’s management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties and assumptions, all results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call, except as required by law. During the call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company’s performance.
For example, the overview of our Q1 financial results and Q2 financial guidance are on a non-GAAP basis. These non-GAAP financial measures are provided in addition to, and not as a substitute for, financial results prepared in accordance with US GAAP. A discussion of why we use non-GAAP financial measures, which primarily exclude stock-based compensation, acquisition-related costs, and related income tax effects, along with reconciliations between our GAAP and non-GAAP financial measures and financial outlook, is available in the earnings release we issued today, which can be accessed through the investor relations portion of our website. With that, I’d like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: Thank you, Leslie. Good afternoon, everyone. Thanks for joining our first quarter conference call for fiscal year 2026. Today, I’ll update you on AI infrastructure market trends, our Q1 results, and recent announcements. I’ll turn the call over to Sanjay to discuss Astera Labs’ growth profile. I’d also like to welcome Des, our CFO, joining this call for the first time. Des will cover our Q1 financials and Q2 guidance. Since our last earnings call, AI infrastructure spending has clearly accelerated. Hyperscalers, AI labs, and sovereign entities are signaling the industry’s build-out is still in its early stages, underpinned by strong monetization and ROI. We expect these strong secular trends to be a tailwind for Astera Labs’ growth over the long term. Astera Labs delivered strong results in Q1, with revenue and non-GAAP EPS above our outlook.
Revenue for the quarter was $308 million, up 14% from the prior quarter and up 93% versus Q1 of last year. Revenue growth was broad-based, spanning across our signal conditioning and fabric switch product portfolios as we continue to diversify our business profile with new design wins across multiple customers and product categories. Our PCIe 6.0 business across both AI fabric and signal conditioning was strong in Q1, with revenue expanding to more than one-third of our total revenue. We have now shipped millions of PCIe Gen 6 ports to date, demonstrating the robustness and maturity of our PCIe portfolio. Taurus’ smart cable modules for Ethernet AECs continue to perform well as new program designs ship in volume while others ramp to mature levels across GPU, XPU, and general-purpose systems.
On the scale-up fabric front, our initial design wins with the Scorpio X-Series in smaller radix configurations shifted from pre-production shipments to an initial volume ramp during the first quarter. Building on this momentum, today we announced the expansion of our Scorpio product line of AI fabric switches for both scale-up and scale-out use cases. The Scorpio X-Series portfolio now supports up to 320 lanes for high-radix scale-up networking, while the Scorpio P-Series PCIe 6.0 portfolio now spans 32 to 320 lanes for diverse system topologies, making it the broadest in the industry. Our new flagship Scorpio X-Series 320-lane switch has been purpose-built to maximize AI economics by leveraging hardware-accelerated Hypercast and in-network compute engines to boost collective operations by up to 2x.
In-network compute offloads critical accelerator-to-accelerator communication and computation directly onto the switch, dramatically reducing networking overhead during large-scale training and inference. These hardware capabilities are delivered through enhancements to our COSMOS software, which can now integrate deeper into our customers’ software stacks, providing not only diagnostics and telemetry but also directly improving AI platform performance. Scorpio’s advanced hardware and software capabilities are a result of Astera Labs’ deep system-level understanding of AI architectures and close customer collaborations, creating a durable competitive moat. We are excited to report that we are now shipping initial volumes of our new 320-lane Scorpio X, with production volumes ramping in the second half of 2026. The Scorpio X-Series is also seeing widening interest and design activity with hyperscalers, AI inference providers, and enterprise infrastructure builders to address high-bandwidth AI clustering use cases.
Scorpio P-Series continues to grow through 2026. We expect initial shipments to at least 2 additional major hyperscalers towards the end of 2026, with broader deployment in 2027. We made good progress during the quarter as we continue to work through the qualification process at a large AI platform provider with our ultra-high precision optical fiber coupler product, which we expect to ship in volume starting in 2027. We are actively expanding our volume manufacturing capabilities to support the ramp of both scale out and scale up CPO applications. Beyond the early commercial traction of our merchant connectors, our high-density fiber coupler technology will be a critical piece of our long-term optical roadmap for NPO and CPO applications. Finally, our Leo memory controller is on track for an early ramp of CXL-attached memory with Microsoft Azure M-series virtual machines.
During the quarter, we captured a new custom design win for a KV cache offload application, with shipments expected in 2027. As we look to the second half of 2026, robust demand reflects secular AI infrastructure spending, deep customer partnerships, and expansion towards higher-value solutions within our portfolio. This trend is quickly increasing our silicon dollar content opportunity beyond $1,000 per XPU within AI racks and positions Astera Labs to outperform our end market growth rates. As a result, we expect strong revenue growth to continue through 2026 and into 2027, driven by the proliferation of AI fabrics and the industry’s transition to PCIe 6.0, 800G, and 1.6T Ethernet connectivity. Based on the momentum we are seeing in 2026, we are strategically investing to drive strong continued growth.
Our acquisition of aiXscale Photonics has created immediate design opportunities, and our Israel Design Center is fully integrated and working with customers on new programs. We have expanded our product portfolio and increased dollar content per accelerator while diversifying our customer base with additional design-ins. We are making progress within large market opportunities, including optical engines and interconnects, UALink fabrics, and custom solutions for NVLink and AI inferencing. Most of all, I’m proud of the stellar team we have built through worldwide hiring and thoughtful acquisitions, the progress we have made, and the results we are delivering together. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to outline our vision for growth over the next several years.
Sanjay Gajendra, President and Chief Operating Officer and Co-founder, Astera Labs: Thanks, Jitendra, and good afternoon, everyone. Today, I will provide an update on our recent execution, followed by an overview of the meaningful market opportunities that will fuel Astera Labs’ growth over the next several years. Astera Labs’ mission is to deliver a purpose-built intelligent connectivity platform with a portfolio of standard, custom, and platform-level solutions across copper and optical interconnects for rack-scale AI infrastructure deployments. As AI deployments advance to production at scale and operational efficiency, infrastructure teams face a new set of constraints: multi-trillion parameter models, agentic workflows, and multi-step reasoning distributed across heterogeneous compute infrastructure, to name a few. The industry needs connectivity solutions purpose-built to address these workloads: higher radix to simplify topologies, intelligent fabric capabilities to reduce communication overhead, open and platform-specific optimization, and data center-grade diagnostics to maintain uptime when a single fault can cost millions of dollars in AI compute.
Let me now walk through our approach to address these evolving needs and our future strategy, starting with our standard products. We continue to see strong momentum across both the AI fabric and signal conditioning portfolios. We strengthened our mission-critical position with the introduction of our flagship Scorpio X-Series 320-lane scale-up fabric switch and the overall expansion of our Scorpio switch portfolio. The Scorpio X-Series 320-lane high-radix AI fabric switch replaces multiple legacy switches to enable large scale-up cluster sizes in a single hop and reduces overall latency. Several new features, such as in-network compute, reduce time to first token and improve tokens-per-watt performance. The newly expanded Scorpio P-Series PCIe switch portfolio now spans from 32 lanes to 320 lanes to enable diverse accelerator optionality and system topologies.
Our AI fabric portfolio is poised to expand further into 2027 with the introduction of UALink-based products for AI scale-up platforms. In early April, the UALink Consortium published a new specification which defines in-network compute, chiplet manageability, and 200-gig performance. UALink 2.2 delivers these advancements with an open, vendor-neutral approach and confirms that scale-up switching is not simply hardware but an AI-aware fabric actively helping the system compute and drive performance. This evolution plays into Astera Labs’ strengths, as demonstrated by the industry-leading feature set being deployed through our Scorpio portfolio expansion today. The maturity of the ecosystem is also accelerating, with customers and suppliers working tightly together to deploy initial programs in 2027.
On the signal conditioning portfolio, our Aries products will expand to support PCIe 7.0 and our Taurus portfolio will extend into 1.6T Ethernet, positioning us at the forefront of the next connectivity upgrade cycle. Turning to our optical business. Astera Labs’ connectivity business is driven by the rapid shift of AI systems towards rack-scale architectures and higher compute capabilities, where scaling performance increasingly depends on high-bandwidth, high-radix, low-latency interconnects. These requirements will expand our AI connectivity opportunities across both copper and optical interconnects. Astera Labs is well-positioned to lead this transition by extending its proven value-chain approach from copper into optics. Over the past couple of years, we have been systematically investing to broaden our internal capabilities across advanced analog and mixed-signal design, DSP, electronic ICs, photonic ICs, and optical packaging, while also deepening our supply chain relationships.
Together, these capabilities will enable high-volume deployment of a complete scale-up optical engine. We’re focused on three areas pertaining to scale-up optics: first, high-density, detachable, reusable fiber-attach solutions using the core technology from our aiXscale acquisition, which we expect to ship in volume starting in 2027; second, chipsets in support of NPO that will enable multi-rack AI clusters starting in 2027; and third, eventually, fully optically enabled Scorpio X fabric switches with CPO supporting larger domains, higher egress densities, and bandwidth. Let me talk about our custom solutions business, which also continues to make meaningful progress as we work to develop new products and close on new designs. Once again, tight collaboration with hyperscaler customers, coupled with a diverse set of foundational technologies and operational capabilities, has been essential to our initial success. These opportunities represent a new multi-billion dollar market opportunity for Astera Labs.
First, we are engaging with multiple customers to enable NVLink Fusion scale-up architectures for hybrid racks. Our strong historical execution delivering intelligent connectivity solutions for NVIDIA-based systems positions us well to develop and design within these new custom programs. Second, we are seeing new custom solution opportunities within the memory space for KV cache applications. We are happy to report that we have won a new design leveraging a customized version of our Leo CXL controller to maximize performance within these AI use cases. Overall, we are pleased with the initial traction we have seen on the custom solutions front and have conviction that this opportunity set will continue to broaden and become a meaningful business for Astera Labs over the next few years.
We continue to demonstrate solid momentum with our platform business as we ultimately look to expand beyond our add-in cards and Smart Cable Modules to enable broader rack-scale solutions for customers. We have grown from an IO component supplier to an AI fabric solution provider over the past couple of years, and customers are looking for Astera Labs to bring additional value to the AI rack at the system level. Astera Labs is at a key inflection point in the company’s journey as we begin to ship production volumes of our scale-up AI fabrics. We are also making great strides towards broadening our business across new product categories, including optical and custom solutions, as our partners look for us to deliver more value in next-generation systems.
Therefore, we will continue to strategically and thoughtfully invest as we position Astera Labs to deliver growth rates above our end market benchmarks over the long term. With that, I will turn the call over to our CFO, Desmond Lynch, who will discuss our Q1 financial results and our Q2 outlook.
Desmond Lynch, Chief Financial Officer, Astera Labs: Thank you, Sanjay. Good afternoon, everyone. I’m pleased to be joining you today for my first earnings call as the CFO of Astera Labs. I look forward to partnering with Jitendra, Sanjay, and the rest of the leadership team as we continue to drive long-term value for our shareholders. Today, I will begin by reviewing our Q1 financial results and will then discuss our Q2 guidance, both presented on a non-GAAP basis. Revenue in the first quarter of 2026 was $308.4 million, which was up 14% versus the previous quarter and up 93% year-over-year. We saw revenue growth across our signal conditioning and switch fabric portfolios, supporting both scale-up and scale-out connectivity for AI fabric and reach extension applications.
Our Scorpio product family performed well in Q1, driven by strong demand for PCIe Gen 6 switching applications and continued expansion of designs across various platforms. During the quarter, Scorpio X-Series products began shipping in initial production volumes. Looking ahead, we expect Scorpio X-Series shipments to increase in Q2, along with initial shipments of our new Scorpio X 320-lane switch, which will then ramp to full volume production in the second half of 2026. Aries revenue grew on strong early adoption of our PCIe 6 solutions for both scale-out and scale-up signal conditioning. In total, PCIe Gen 6 revenue across AI fabric and signal conditioning contributed more than one-third of total company revenue in the quarter. Taurus also delivered solid results, driven by broad adoption of AECs to extend reach in both AI and general-purpose compute platforms.
Non-GAAP gross margin for the first quarter was 76.4%, up 70 basis points sequentially, primarily driven by a lower mix of hardware sales across our signal conditioning portfolio. Non-GAAP operating expenses for the first quarter were $123.9 million, reflecting continued R&D investment to support our expanding product roadmap, including a full quarter of our aiXscale acquisition and a partial quarter of our newly formed Israel Design Center. Within Q1 non-GAAP operating expenses, R&D expenses were $96.2 million, sales and marketing expenses were $12 million, and general and administrative expenses were $15.7 million. Non-GAAP operating margin for the first quarter was 36.2%. We will continue to invest strategically to drive above-industry revenue growth over the long term while maintaining strong and durable profitability.
For the first quarter, interest income was $11.6 million. Our non-GAAP tax rate was 11%, and non-GAAP fully diluted shares outstanding were 181.2 million shares. Non-GAAP diluted earnings per share for the quarter was $0.61. We ended the quarter with cash, cash equivalents, and marketable securities totaling $1.18 billion, flat versus Q4, as cash from operations of $74.6 million was offset by cash paid for acquisitions. Now turning to our outlook for the second quarter. We expect revenue to be between $355 million and $365 million, up 15%-18% sequentially, driven by continued strength across our AI fabric and signal conditioning portfolios.
Aries revenue growth is expected to be driven by continued strong adoption of PCIe 6.0 across AI platforms, supporting both scale up and scale out connectivity. Taurus growth is expected to be driven by increased volumes for AI scale out connectivity. In AI fabric, we expect robust growth driven by the continued early-stage ramp of our Scorpio X-Series products for large-scale XPU clustering applications, as well as continued growth in our P-Series solutions and customized GPU platforms. We expect second quarter non-GAAP gross margin to be approximately 73%. This outlook includes an estimated 200 basis point non-cash impact related to a recently executed warrant agreement with one of our customers. We expect second quarter non-GAAP operating expenses to be between $128 million and $131 million.
Interest income is expected to be approximately $11 million. We expect our non-GAAP tax rate to be approximately 12%. We expect our Q2 share count to be 184 million diluted shares outstanding. Overall, we are expecting non-GAAP fully diluted earnings per share to be between $0.68 and $0.70. This concludes our prepared remarks. Once again, we appreciate everyone joining the call. I will now turn the call back to our operator to begin the Q&A. Operator?
Audra, Conference Operator: Thank you. At this time, I would like to remind everyone, in order to ask a question, press star then the number one on your telephone keypad. We ask that you please limit yourself to one question to allow everyone an opportunity to ask a question. If time permits, we may queue again for follow-up questions. We will take our first question from Harlan Sur at JP Morgan.
Harlan Sur, Analyst, JP Morgan: Good afternoon, and thanks for taking my questions, and great job on the execution by the team. You know, your customers went through a compute workload inflection from training to inference in the second half of last year, and they’re essentially very focused now on monetization, right? We saw that as inferencing workloads evolved from one-shot to reasoning to agentic, right? This created new silicon opportunities, right? It created new storage tiers. It created more demand for high-performance CPUs. Obviously, storage and CPUs communicate via PCIe, like, so right in the sweet spot of your technology and product leadership, right? That’s one example. Your CXL solutions targeted at KV cache applications may be another example. Sort of help us understand how the transition to more inferencing-based workloads, especially agentic-based workloads, has potentially helped to create new opportunities for the team and potentially expand your SAM opportunity.
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: Harlan, thank you. This is Jiten. Let me try to take a stab at that. You point out very correctly that inferencing has created a lot of focus in the industry and a lot of additional opportunities. The good news is that at Astera, we’ve been focused on these AI applications from the start, and we helped the training workloads when training workloads were still, you know, the mainstream. Now we are helping the inferencing workloads equally well. KV cache offload is a great opportunity; as we mentioned earlier, we picked up a new design win for a custom application. For KV cache offloads, that’s really a key part of AI inferencing. I also want to draw your attention to the newly introduced Scorpio X 320-lane family that supports in-network compute and Hypercast.
Both of these are extremely important technologies to reduce the networking overhead and deliver additional performance for training as well as inferencing. Not only that, we enable these hardware-accelerated modes through our COSMOS software, which now not only gives our customers the ability to do diagnostics and telemetry but also allows them to uniquely improve the performance of their systems for their inferencing workloads, using these unique capabilities that we’ve developed in tight collaboration with our customers.
Audra, Conference Operator: We’ll move to our next question from Blayne Curtis at Jefferies.
Blayne Curtis, Analyst, Jefferies: Hey, guys. Good afternoon, and, I’ll echo the congrats on the nice results. Maybe you can, in terms of the Scorpio ramp, I know last quarter you talked about it being 20% of revenue. It’s a big ramp. I’m assuming that’s the biggest driver into June. I was wondering, can you kind of frame just how big that is? I’m curious, particularly this 320 lane product that’s ramping, like, what are the milestones, and what’s left to do? You’ve sampled it, but to get that to production in an AI server, I’m just kind of curious what’s left there.
Desmond Lynch, Chief Financial Officer, Astera Labs: Hi, Blayne. It’s Des. Thanks for your question. We’ve been very pleased with the performance of our Scorpio product family. It’s certainly been a large driver of our growth in the sort of first half of the year. We continue to expect to see Scorpio P continuing to ramp, driven by scale-out opportunities. Scorpio X, this is really a greenfield opportunity for us associated with scale-up connectivity. The small radix solutions are ramping today, and we do expect to see the layering in of the high radix configurations in the second half of the year.
Given the size of the opportunity and the associated dollar content, we would expect Scorpio to become our largest product line by the end of the year, which is strong performance for a product line that was only 15% of total company revenue last year. As we go throughout the year, I would expect to see X-Series revenue exceeding P-Series. Overall, we’re very pleased with the performance of the Scorpio product family and the outlook of the business.
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: Blayne, to your second point, about other milestones: we are already shipping, as Des mentioned, the newly introduced Scorpio X family. You’ll be able to see and touch and feel this at Computex, where we will be demonstrating it live in our booth.
Audra, Conference Operator: We’ll move next to Joseph Moore at Morgan Stanley.
Joseph Moore, Analyst, Morgan Stanley: Great. Thank you. You talked quite a bit about your optical strategy. I guess, can you talk about the timeframe where you see optical scale-up becoming more relevant? Do you have the building blocks that you need to progress from copper to optical in that space? You know, do you need tuck-in type technologies, and do you need to invest a lot more? Just a general sense of, you know, what it’s gonna take to transition from copper to optical over the next several years.
Sanjay Gajendra, President and Chief Operating Officer and Co-founder, Astera Labs: Yeah. No, thanks for the question. Sanjay here. We have been working for the last couple of years building all the foundational things that are required for optical enablement: all of the mixed-signal technology that’s required, all of the electronic ICs, as well as, you know, the acquisition of aiXscale that brought in the pluggable connector as well as the PIC technology. In general, I want to say we have made tremendous progress in preparation for the optical opportunities that are coming up for us. For us, in terms of timeline, what we believe is that the NPO-based opportunities, or near-package optics, would be the first to ramp, and that will start happening in 2027.
Sanjay Gajendra, President and Chief Operating Officer and Co-founder, Astera Labs: We will also be ramping our pluggable connector technologies for CPO, mostly for scale-out, next year, in 2027, with more of the mainstream deployments for CPO happening in the 2028 timeframe. In general, between the components that we’re building that go inside the NPO, the detachable connector technology for folks that have their own CPO solutions, and our own Scorpio X devices that will come in to support both NPO and CPO variants, we believe it’s all coming together nicely for us.
One key consideration, of course, that we’ve been working on is the supply chain, getting all of the commitments in place so that we can not only provide the technology that’s required for NPO and CPO, but also make sure that we are able to ship to revenue. I think overall, there’s quite a bit of work and progress that we have done, enabling us to start ramping in 2027.
Audra, Conference Operator: We’ll take our next question from Ross Seymore at Deutsche Bank.
Ross Seymore, Analyst, Deutsche Bank: Hi, guys. Thanks for taking this question. Congrats on the strong results and guide. I just want to talk about a small part of your business today, but something that sounds like it could grow a little faster than we thought before, and that’s specifically your Leo product line. Given the dominance or resurgence of CPU demand and memory being such a large cost and bottleneck these days, how has the demand trajectory and growth potential changed in your view, and your ability to kind of do the pooling and the sharing on the memory side in CXL in general?
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: Yeah, we are definitely seeing increased traction for CXL, not only for the general-purpose compute application where we started, but also for AI inferencing, as we touched upon earlier. Just kind of staying with general-purpose compute first, we are seeing additional demand from our customers. We are on track for deploying this with Microsoft Azure for their M-series instances at the data center. That’s in private beta now, expected to go into general availability by the end of the year. We see additional customers also kind of following suit for this particular high-memory type of application. In addition, we are also excited by the new KV cache offload, or AI inferencing, opportunities, where some of our customers have already designed it in.
In fact, we picked up our second design win for a custom CXL application earlier this quarter. We are working with the customer, an additional new hyperscaler, on at-scale performance tests, and we expect that one to ship for revenue in 2027.
Audra, Conference Operator: We’ll go next to Tore Svanberg at Stifel.
Tore Svanberg, Analyst, Stifel: Yes, thank you. Congrats on the record quarter and, Des, welcome on board. I wanted to follow up on what you said about Scorpio mix as we approach the end of the year, especially in relation to Aries, because Aries is now ramping in PCIe Gen 6, and next year there are going to be a lot of mixed networking topologies. I understand Scorpio will be the biggest product by the end of the year, but how should we think about the mix in 2027 between Aries and Scorpio? Obviously, there are significant drivers for both.
Desmond Lynch, Chief Financial Officer, Astera Labs: Hey, Tore. Thanks for the question. Yeah, we have been very pleased with the growth rates of our Scorpio product family, as I mentioned earlier, and we are really excited about the continued growth opportunity ahead of us. That said, we still expect to see strong growth within the Aries product line, where we expect to continue to grow our leadership position, particularly given the PCIe 6 portfolio. It is just that Scorpio will continue to be our largest and fastest-growing business within the company.
Audra, Conference Operator: Next, we’ll move to Ananda Baruah at Loop Capital.
Ananda Baruah, Analyst, Loop Capital: Yeah, good afternoon, guys. Thanks for taking the question, and congrats on the great execution here. My question, particularly with all the additional context you have given around Scorpio X and Scorpio P lanes progressing through the back half of 2026: as we move forward past 2026, and clusters get bigger and presumably higher-radix switches have more ports, should we expect Scorpio X and Scorpio P switches to continue to increase in lane count? If so, is there any useful way to think about how that may occur? Should we think of it as continuing in perpetuity? Thanks. That's it for me.
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: Thanks for the question. We could talk for an hour just on that topic. Let me say this: the AI fabric switches have become a very important part of our overall strategy, and we are investing heavily not only in the current generation that we have announced, but also in upcoming devices. We are going to continue to focus on PCIe Gen 6, because that is a large part of the business today. We are also working on UALink products that will form the basis of the next generation of these devices.
In terms of lane count, we work very closely with our customers to understand what their deployment profile is going to look like, because it is really important to target the right lane counts and radix for these devices. If you under-provision, the cluster sizes get limited, and if you over-index, you end up with a solution that is not competitive. Fortunately, we have very good partnerships with our customers, and they are telling us what the deployment looks like.
As cluster sizes increase, it is important not only to have the right switch, but also to have the right media types for the deployment. For our family of switches, we will continue to support copper connectivity as we have so far. As Sanjay mentioned earlier, we will increasingly enable optical connectivity as well, starting with NPO in the next generation of switches and then going to CPO. It is very important to understand that being a switch company gives us a perfect opportunity to deploy optical solutions. That is something we will fully leverage to make sure we support end-to-end connectivity with our switches, including copper, NPO, and CPO.
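[Editor's note: the radix-versus-cluster-size tradeoff described in the answer above can be sketched with back-of-the-envelope math. The lane counts, lanes-per-port, and topology below are hypothetical illustrations of general fabric arithmetic, not Astera Labs product specifications.]

```python
# Illustrative model: how a switch's lane count and port width (radix)
# bound the number of accelerators a fabric can connect.
# All figures are assumptions for illustration, not product specs.

def ports(lanes: int, lanes_per_port: int) -> int:
    """Radix: number of ports a switch exposes at a given port width."""
    return lanes // lanes_per_port

def one_tier_cluster(lanes: int, lanes_per_port: int) -> int:
    """A single flat switch connects at most one accelerator per port."""
    return ports(lanes, lanes_per_port)

def two_tier_cluster(lanes: int, lanes_per_port: int) -> int:
    """Two-tier (leaf/spine) fabric: half of each leaf's ports face down
    to accelerators, half face up to spines, so scale is (radix/2)**2."""
    r = ports(lanes, lanes_per_port)
    return (r // 2) ** 2

# Example: a 320-lane switch configured as x4 ports has a radix of 80.
print(ports(320, 4))             # 80 ports
print(one_tier_cluster(320, 4))  # 80 accelerators in a flat topology
print(two_tier_cluster(320, 4))  # 1600 accelerators in a two-tier fabric
```

This illustrates the point in the answer: under-sizing the radix caps the cluster, while the quadratic growth of multi-tier topologies means a modest radix increase expands reachable cluster size substantially.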
Audra, Conference Operator: We’ll take our next question from Natalia Winkler at UBS.
Natalia Winkler, Analyst, UBS: Thank you for taking my question, and congratulations on the results. I was wondering if you can add a little more color on the NVLink Fusion opportunity for you. Specifically, from a portfolio standpoint, where would it be most interesting for you, and how do you see the competitive landscape, given some of the partnerships NVIDIA has for NVLink Fusion as well?
Sanjay Gajendra, Chief Operating Officer, Astera Labs: Thanks for the question. In general, if you look at our business, you can broadly divide it into three categories: the standard business, the custom business, and of course, the module and solution business. Clearly, an area where we see tremendous opportunity going forward is custom solutions, under which we are developing the NVLink Fusion type of devices. This is proving to be pretty interesting. We have several opportunities, and we are very deep in engagement for an initial design win in collaboration with NVIDIA and a hyperscaler. That project is going well.
We do expect that to start contributing revenue in 2027, as some of the GPUs designed for this use case come to market. This is what is called a hybrid rack configuration, where the GPU or XPU still speaks its native protocol, which could be PCIe, UALink, or others. When it needs to cross over and talk to an NVLink type of ecosystem, it needs a product based on NVLink Fusion, which is what we are developing. In short, we are very deep in engagement from a silicon development standpoint, and we expect this to start providing meaningful revenue in 2027 and grow from there.
The second part of your question was the competitive situation. Obviously, this is an ecosystem that NVIDIA is creating with NVLink Fusion, and there are others in it. For us, the main thing is that we have been engaged with real customers and real applications. We will continue to focus on that, do what we need to do, and not get distracted by competitive activities.
Audra, Conference Operator: We’ll go to our next question from Sebastien Naji at William Blair.
Sebastien Naji, Analyst, William Blair: Thank you, and congrats on the strong results. My question is on the Scorpio business, and it is maybe a follow-up to one of the prior questions. With your announcement of the new 320-lane Scorpio switches for both the X and P series, how should we be thinking about ASPs for the higher-radix solutions? Is it right to think that your dollar content is correlated directly to the lane count, or is there another way to think about your dollar content? Just any details there.
Sanjay Gajendra, Chief Operating Officer, Astera Labs: Yeah. In general, what I would say is the bigger the switch, the higher the ASP. That is the way the industry works. Also, please keep in mind that these switches are AI fabric-class devices, which are a lot more than just a number of lanes. We talked about in-network compute, we talked about Hypercast, we talked about several features we have that are unique and critical for deploying AI clusters, whether for training or, more and more, for inference applications, where things like latency become super important. When it comes to ASP, it is a combination of what features are enabled, not just the port count.
We do see our content continuing to increase. With the design wins we have, we are expecting over $1,000 worth of content per accelerator going forward. That is, of course, significant, and it is growing rapidly for us. If you consider the path we have taken so far, from offering retimers to now offering a complete AI fabric, and with future products such as optically enabled switches, you can only imagine how that dollar-per-accelerator content will grow.
Audra, Conference Operator: We’ll go next to Quinn Bolton at Needham.
Quinn Bolton, Analyst, Needham: Hey, guys. Let me offer my congratulations as well. Jitendra, you mentioned the KV cache offload custom design. I'm wondering if you might be able to put any sort of numbers around it, in terms of dollar content per CPU, or dollar content per gigabyte or terabyte of memory that is attached. Is there a way we can think about how to size that opportunity?
Sanjay Gajendra, Chief Operating Officer, Astera Labs: Yeah. These are going into new inference applications, and there are multiple use cases and platforms that we see for this. In that context, this would be a significant opportunity for us to execute and deliver on. In terms of the exact dollars associated with it, it is probably a little early, because some of the platforms and architectures are still being finalized. In general, as was highlighted earlier, inference and KV cache represent a significant opportunity for us. We have the IP, not just for memory but also for things like KV cache acceleration, as part of our portfolio right now.
We will increasingly develop products that provide more function and capability to ensure that memory is available for KV cache use cases. I would also say that the ASP will continue to be pretty meaningful when you think about the cost of the memory. In other words, these controllers will always pale in comparison to the amount of money people are paying for the memory itself. What I am trying to say is that these are not ASP-challenged products, and we will continue to make sure we extract the most value out of them.
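[Editor's note: a rough illustration of why KV cache capacity motivates memory offload. The formula is the standard transformer KV cache sizing; the model parameters below are hypothetical and not figures from the call.]

```python
# KV cache size for a transformer: 2 tensors (K and V) per layer,
# each kv_heads * head_dim elements per token, per concurrent sequence.
# Model parameters are illustrative assumptions only.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Total KV cache footprint in bytes for a given serving configuration."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# A hypothetical 70B-class model: 80 layers, 8 KV heads (grouped-query
# attention), head_dim 128, fp16 (2 bytes), serving 16 concurrent
# sequences at a 32K context length.
gib = kv_cache_bytes(80, 8, 128, seq_len=32_768, batch=16) / 2**30
print(f"{gib:.0f} GiB")  # 160 GiB of KV cache alone
```

With these assumed parameters the KV cache alone reaches 160 GiB, well beyond the spare HBM typically left after model weights, which is the capacity pressure that makes offloading KV state to controller-attached DRAM attractive.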
Audra, Conference Operator: We’ll move to our next question from Karl Ackerman at BNP Paribas.
Sam Feldman, Analyst, BNP Paribas: Hi, this is Sam Feldman on for Karl Ackerman. Thanks for taking my question. You mentioned near-package optics as a preliminary solution ahead of CPO. From an Astera Labs point of view, do you believe customers view XPO as a viable option for extending pluggable optics? Does Astera Labs plan to participate in the XPO MSA?
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: Yeah, that's a great question. We work very closely with our customers to understand what solutions they are looking for. XPO is a pluggable technology that has come about recently, and we will certainly participate in it, though not all of our customers are looking to intercept XPO at the moment. The customers that are looking to intercept with NPO, we will certainly support, because NPO gives you very high egress density without the constraints of faceplate density. The customers that want us to work directly on CPO, we absolutely will work with. As Sanjay mentioned earlier, we are engaged in an opportunity there that should ship in 2027.
For customers that are looking to do XPO, we will engage with them as well. Right now, though, our focus has been on NPO and CPO.
Audra, Conference Operator: We’ll take our next question from Suji Desilva at Roth Capital.
Suji Desilva, Analyst, Roth Capital: Hi, Jitendra, Sanjay, and welcome, Des. Just a bigger-picture question. You mentioned the word custom quite a bit on this call, more than in the past. When you first IPO'd, Hopper was there along with Aries, and the business was fairly standard. Are we past the point, or evolving to the point, Jitendra, where standard products are not as applicable for you because each platform is different? Should we think of all products as having some customization, or where is the line there? Just trying to understand.
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: Yeah, I'm glad you asked the question. If you think about AI infrastructure and AI use cases, they are all bespoke, and they are all unique between platforms and between customers. Having said that, if you look at the software-defined architecture we have, even our standard products, Aries, Taurus, Scorpio, and so on, provide a ton of customization that customers leverage through the COSMOS interface. COSMOS allows them to not only monitor but also customize. Now, with the new devices we announced today, they can do a lot more in terms of performance and other key offload feature enablement. All in all, customization has been our story, delivered through a software-defined architecture and offered through our standard products.
When we talk about our custom business, the business model is different. We develop a product for a given customer under a business model that includes NRE and other ways of paying for the development, and of course the product revenue that comes when the product starts shipping. In general, as we get into bigger devices, whether for fabric-class or other connectivity technologies beyond what we have done so far, having the custom solution portfolio is important. We are approaching it with our customers by also offering a variety of foundational technology that we have been building for the last couple of years. In general, we see custom being an important growth driver for us.
At the same time, please think about our business in a way where standard products will continue to be a very important part of our overall portfolio. We will do custom, but we will be very systematic about it. We will not take every opportunity that comes our way, because sometimes custom business can be so unique to one customer that it carries a lot of risk on margins and so on. To that end, we will make sure we are systematic and thoughtful about the opportunities we pursue on the custom side.
Audra, Conference Operator: We’ll go next to Mehdi Hosseini at Susquehanna.
Bastian, Analyst, Susquehanna: Hi, this is Bastian filling in for Mehdi. Congrats on the quarter, and welcome, Des. I wanted to follow up on UALink. Can you share an update on the adoption process and the expected timeline for UALink-based switches? What do you expect the dollar content to be, and how should we think about the difference between PCIe switch pricing and UALink pricing? Thank you.
Sanjay Gajendra, Chief Operating Officer, Astera Labs: Within the last three to six months, we have had a couple of announcements from our hyperscaler customers on what the intercept is. Both Amazon and AMD have said that their respective ASIC and GPU will launch sometime in 2027, and we will certainly be prepared to intercept those launches with our UALink switch. In terms of comparing a UALink switch to PCIe, maybe a couple of things to state. First, as we go into this new generation of devices, both the complexity and the speed of these devices are going up, sometimes in lane count, other times in radix.
Jitendra Mohan, Chief Executive Officer and Co-founder, Astera Labs: The value we are able to charge for these devices will be substantially higher than what we are able to charge for PCIe switches. The second thing I will mention is that the media attach also tends to change. We may go from majority copper to a blend of copper and NPO with the next generation of switches. That also gives us a meaningfully larger opportunity in terms of revenue and the TAM we are able to address. Finally, that leads up to CPO, which is a really rich opportunity with a very large TAM, all because we have the platform in the form of Scorpio X switches.
Audra, Conference Operator: We’ll move next to Tore Svanberg at Stifel.
Tore Svanberg, Analyst, Stifel: Just a quick follow-up on capacity. Your inventory days, I think, came in at 75 days, which seems a little bit toward the lower end. Are you feeling good about being able to at least continue to double revenues this year and next based on the capacity commitments you have today?
Desmond Lynch, Chief Financial Officer, Astera Labs: Hi, Tore. It's Des here. Based upon our current view of demand, we do have supply in place through the end of the year, and we are very comfortable with our inventory holdings. Like others in the industry, we continue to see pockets of supply challenges, but we have done a nice job of diversifying our back-end supply chain, and we have been able to secure sufficient supply to meet our revenue commitments. No concerns just now, and we continue to work with our supply chain partners on supply going into 2027.
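[Editor's note: for readers reconciling the 75-day inventory figure cited in the question, it follows from the standard days-of-inventory formula. The dollar amounts below are made-up placeholders, not figures from the release.]

```python
# Days of inventory on hand = inventory / cost of goods sold for the
# period * days in the period. Inputs here are hypothetical placeholders.

def days_of_inventory(inventory: float, cogs_quarter: float, days: int = 91) -> float:
    """Number of days the current inventory balance would cover at the
    quarter's cost-of-goods-sold run rate."""
    return inventory / cogs_quarter * days

# Placeholder example: $60M inventory against $72.8M quarterly COGS.
print(round(days_of_inventory(60.0, 72.8)))  # 75 days
```

A lower days figure can indicate either lean inventory or fast-growing shipments outpacing stock, which is why the question pairs it with capacity commitments.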
Audra, Conference Operator: That concludes the question and answer session. I’ll turn the call back over to Leslie Green for closing remarks.
Leslie Green, Investor Relations, Astera Labs: Thank you, Audra, and thank you everyone for your participation and questions. Please do refer to our investor relations website for information regarding upcoming financial conferences and events. Thanks so much.
Audra, Conference Operator: This concludes today’s conference call. Thank you for your participation. You may now disconnect.