RMBS April 27, 2026

Rambus Q1 2026 Earnings Call - Navigating Supply Tightness and the AI Inference Pivot

Summary

Rambus is navigating a complex transition period, balancing the resolution of recent supply chain hiccups against long-term structural shifts in the memory market. While product revenue is poised for an 11% sequential increase driven by the move from DDR5 Gen 2 to Gen 3, management remains cautious about back-end supply constraints that show no signs of easing. The company is aggressively positioning itself for the next wave of AI, specifically targeting the shift toward inference and 'Agentic AI' workloads which favor CPU-heavy architectures.

The narrative here is one of strategic patience. Management is playing a long game with high-stakes technologies like MRDIMM and LPDDR server modules, acknowledging that while these won't move the needle significantly in 2026, they are critical stepping stones for 2027. For now, Rambus is relying on its diversified portfolio—patent licensing and Silicon IP—to provide a steady floor while it waits for next-generation platforms from Intel and AMD to trigger the next major revenue ramp.

Key Takeaways

  • Product revenue is guided to grow 11% sequentially in Q2, signaling a recovery from previous supply chain disruptions.
  • The company is managing a critical market transition from DDR5 Gen 2 to Gen 3, which serves as a primary near-term catalyst.
  • Back-end supply chain constraints persist and have not improved since the previous quarter, with tightness expected to last into 2027.
  • Rambus views the rise of 'Agentic AI' and inference workloads as a major tailwind because they shift the CPU-to-GPU ratio in favor of CPUs.
  • The MRDIMM opportunity is valued at a $600 million serviceable addressable market (SAM), with a significant ramp expected to begin in earnest in 2027.
  • LPDDR server modules (SOCAMM2) are viewed as a strategic 'stepping stone' for long-term data center engagement, despite minimal financial impact this year.
  • Patent licensing remains a stable pillar of the business, providing predictable revenue in the $200 million to $220 million range on average.
  • Silicon IP business is seeing strong traction from AI developers seeking custom interfaces and security, with expected annual growth of 10% to 15%.
  • Management expects a stronger second half of the year due to typical seasonality and the launch cycles of new hardware platforms.
  • Rambus maintains a strong competitive position, reporting a market share in the mid-40% range as of late 2025 with no signs of erosion.

Full Transcript

Unknown CFO/Finance Executive, Chief Financial Officer or Finance Executive, Rambus: Inventory to support our product revenue growth and manage through potential supply chain constraints. First quarter depreciation expense was $8.5 million. Free cash flow in the quarter was $66.3 million. Let me now review our non-GAAP outlook for the second quarter on slide 7. As a reminder, the forward-looking guidance reflects our best estimates at this time, and our actual results could differ materially from what I’m about to review. In addition to the non-GAAP financial outlook under ASC 606, we also provide information on the licensing billings, which is an operational metric that reflects amounts invoiced to our licensing customers during the period, adjusted for certain differences. We expect revenue in the second quarter to be between $192 million and $198 million.

We expect product revenue to be between $95 million and $101 million, a sequential increase of 11% at the midpoint of guidance. We expect royalty revenue to be between $72 million and $78 million and licensing billings between $76 million and $82 million. We expect Q2 non-GAAP total operating costs, which include cost of sales, to be between $110 million and $114 million. We expect Q2 capital expenditures to be approximately $14 million. Non-GAAP operating results for the second quarter are expected to be between a profit of $78 million and $88 million. For non-GAAP interest and other income and expense, we expect $7 million of interest income. We expect non-GAAP tax expense to be between $13.6 million and $15.2 million in Q2. We expect the Q2 share count to be 110 million diluted shares outstanding.

Overall, we anticipate Q2 non-GAAP earnings per share to range between $0.65 and $0.73. Let me finish with a summary on slide 8. In closing, we delivered solid results in line with our objectives, driving ongoing profitability and cash generation. Our diversified portfolio remains a core strength, with each of the businesses contributing meaningfully to our performance. Our patent licensing business continues to deliver consistent, predictable performance, supported by the long-term agreements we have in place. Our Silicon IP business is well-positioned, driven by critical interconnect and security technologies, addressing the accelerated demand for AI solutions. Our product business grew 15% year-over-year and is poised for sequential growth in the second quarter. We remain focused on delivering long-term shareholder value with year-over-year revenue growth in 2026.
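As a quick illustration of how the guided pieces above fit together, the following sketch rolls up the midpoints of the guided ranges to the EPS range. This is our arithmetic at assumed midpoints, not the company's own calculation, and the implied Q1 product-revenue figure is likewise only an inference from the stated 11% sequential increase.

```python
# Roll-up of the Q2 non-GAAP guidance at the midpoints of the guided
# ranges. Figures are in millions of dollars unless noted; this is an
# illustrative sketch, not the company's own calculation.
revenue = (192 + 198) / 2               # total revenue midpoint: 195
operating_costs = (110 + 114) / 2       # total operating costs, incl. cost of sales: 112
operating_profit = revenue - operating_costs   # 83, midpoint of the $78M-$88M range
interest_income = 7                     # guided non-GAAP interest income
tax = (13.6 + 15.2) / 2                 # tax expense midpoint: 14.4
net_income = operating_profit + interest_income - tax   # 75.6
diluted_shares = 110                    # millions, guided share count
eps = net_income / diluted_shares       # ~0.687, inside the guided $0.65-$0.73 range

# The 11% sequential product-revenue increase at the $98M midpoint also
# implies a Q1 product-revenue base of roughly $98M / 1.11, i.e. ~$88M.
implied_q1_product = (95 + 101) / 2 / 1.11

print(round(eps, 2), round(implied_q1_product, 1))
```

The roll-up lands almost exactly on the $0.69 midpoint of the guided EPS range, which suggests the component ranges are internally consistent.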

Before I open the call up to Q&A, I would like to thank our employees for their continued teamwork and execution. With that, I’ll turn the call back to our operator to begin Q&A. Can we have our first question?

Operator: Thank you. Ladies and gentlemen, if you have a question, please press star one on your touch tone phone. Your first question comes from the line of Kevin Garrigan with Jefferies. Please go ahead.

Kevin Garrigan, Analyst, Jefferies: Yeah. Hey, team. Thanks for taking my questions. Can you just help us think about your product revenue into the June quarter? You know, last quarter, you discussed the low double-digit revenue impact from a one-time OSAT issue. I think, you know, we may have been expecting a larger sequential increase for June, just kinda given how strong demand has been. Can you just walk us through the drivers for the June quarter product revenue and, you know, why the recovery might be a little bit more measured?

Unknown CFO/Finance Executive, Chief Financial Officer or Finance Executive, Rambus: Hey, thank you, Kevin. Yes, sure. The first thing I would say is that, you know, the issue that we had talked about on the prior call is behind us. Everything has been resolved, and it’s a question now for us of restabilizing the supply chain, which we are doing, and we see a normalization of that supply chain. You know, it is behind us, and the revenue for Q2 is guided at 11% over Q1, so that’s the right trajectory. We continue to expect to grow sequentially after that, in an environment where our footprint continues to be very strong. You know, I mentioned on an earlier call that it was an older generation of DDR5.

The market is transitioning from Gen 2 to Gen 3, which is a good catalyst for us. I would say, you know, we are guiding to double-digit growth in the second quarter. We met, you know, what we said we would meet on the operational trend in Q1, and we will continue to grow sequentially in the quarters after that. We don’t see any, you know, issue with the demand, and we don’t see any more issues with the quality problem that we had in Q1. We feel quite confident for the rest of the year as the market moves from Gen 2 to Gen 3.

Kevin Garrigan, Analyst, Jefferies: Okay, great. Just as a follow-up on your LPDDR5 SOCAMM2 server module chipset, when would you expect to start seeing revenue from this chipset, and what kind of milestone should we watch to gauge traction?

Luc Seraphin, Chief Executive Officer, Rambus: You know, I would see this as having a very good strategic impact at this point in time. The financial impact in the short run this year is gonna be very minimal, just because the volumes are very small for this type of solution. You know, as a reminder, it only addresses a very small portion of the AI workloads. The volumes are small. The content is small as well. I wouldn’t put it in the model for 2026, but it’s strategically very, very important because there is a trend to look at LPDDR in the server environment in the long run. LPDDR still has issues, you know, addressing the server requirements, but it also has traction and it has benefits. We see this as a stepping stone for us. It builds on the fact that, you know, over the last few years, we have developed our product line as chipsets. We have the whole chipset for the SOCAMM2. We have our own teams for power management development, and these are the two new chips that we are proposing for this, you know, for this solution. We see this as a stepping stone. It allows us to engage with other, you know, AI players in the industry, and we’re working on the next generation as well.

I don’t think that the financial impact is gonna be significant this year, just given the volumes.

Kevin Garrigan, Analyst, Jefferies: Okay, great. I appreciate the call, Luc.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you.

Operator: Your next question comes from the line of Tristan Gerra with Baird. Please go ahead.

Tristan Gerra, Analyst, Baird: Hi. Good afternoon. A quarter ago, you highlighted shortages and sounded a little bit, maybe not cautious, but muted on the growth opportunity, and you provided a fairly muted data center unit forecast. How are shortages for components potentially impacting your revenue this year? What are you seeing, you know, that’s different now than a quarter ago? Given the outlook for DRAM to remain very tight next year, you know, how should we look at your product revenue growth, and specifically your RCD growth, excluding the new product layers that you’ll be adding on top of that, from a year-over-year growth standpoint? In other words, would you expect, you know, the same type of year-over-year growth next year versus this year?

I understand you’re not guiding for next year, but just wanted to get a bit more color on what you see on the market that potentially could put constraint on your growth. Clearly, that’s an issue for a lot of other companies as well.

Luc Seraphin, Chief Executive Officer, Rambus: Yeah. Thank you, Tristan. First of all, you know, let me say a few words about the demand. You know, we do see demand continue to grow for standard servers, which is, you know, good for us, you know, with Agentic AI in particular. We expect the server market to grow faster this year than last year. We model it at, you know, low double-digit growth because, you know, despite the excitement around AI, there’s also a large portion of the server market that is not AI related. We do see demand growing, you know, on the server side, which is really a good catalyst for us.

As we said last quarter, we’re watching the situation with supply, especially on the back end. Certainly since last quarter, the situation has not improved. You know, we’re working with our suppliers, but the lead times, you know, are long, and there is tension on the back end. We take this into account when we forecast our business. This is one factor. You know, the other factor that affects or that comes into play when we forecast is the timing of launch of new platforms in the market.

You know, as you know, it’s been the case in the past for us, you know, the launch of our new products depends on the launch of new platforms in the market, and that’s a dependency that we have. So we don’t see the situation as materially different than what we saw in Q1, but from a supply standpoint, things have not improved. We expect the supply situation to be tight going into 2027 as well, based on what we hear when we talk to industry players.

Tristan Gerra, Analyst, Baird: Okay. That’s useful. As my follow-up question, any additional color on the MRDIMM opportunity? I know you’ve talked in the past about, you know, some very initial shipments late this year, specifically with inferencing. Any additional color as to, you know, where it could be in terms of revenue in 2027? I think you’ve talked in the past about your expectation that you probably fully realize that $600 million TAM for MRDIMM by 2028. You know, what should we be looking at for next year kind of in between? What’s really driving that? What’s going to be driving the demand? Is it going to be mostly inferencing?

Any additional color you may have, you know, beyond what you’ve said in the past on you know, customer interest, you know, for this technology and where it’s going to ramp.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you, Tristan. First, we continue to make progress in the launch of these products and the interaction with our customers on this MRDIMM. We’re excited by the opportunity for the reasons we’ve always talked about. Larger capacity, larger bandwidth in the same ecosystem, so the adoption is easier. The main factor affecting the ramp of our MRDIMM is the timing of the launch of the platforms from Intel and AMD in particular, where they do have this capability attached, you know, in the next generation platform. We continue to see the ramp starting in 2027 in earnest and a SAM at this point in time, you know, which we still value at about $600 million.

You know, as I keep saying, you know, the SAM, once the products are in the market, you know, and we get feedback and the market gives us feedback, we’re gonna have a much better view of that SAM. But at this point in time, this is the right number to keep in mind.

Tristan Gerra, Analyst, Baird: Great. Thanks again.

Luc Seraphin, Chief Executive Officer, Rambus: Thanks, Tristan.

Operator: Your next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.

Aaron Rakers, Analyst, Wells Fargo: Yeah. Thanks for taking the questions. I guess kind of just building off that last question first. You know, when you kind of think about the $600 million, you know, incremental opportunity around MRDIMMs, I can appreciate that, you know, there’s a lot of unknown variables at this point, but I’m just curious, as you rolled up that expectation, what assumption are you making in terms of attach rate on AMD Venice and Diamond Rapids at this point? And you know, how might that evolve? I mean, I would assume that you’re being rather conservative on that attach rate at this point. And then also on that, how do you see CXL starting to play out?

Luc Seraphin, Chief Executive Officer, Rambus: You know, at this point in time, we modeled a lower attach rate, as I said. You know, until, in my experience, until a product is in the market, it’s hard to make those models more significant. There are a lot of variables coming into play. As we just said, the most important one is the timing of rollout of these platforms in the market. There’s also the whole situation with DRAM pricing and the prices of modules and how our customers are going to make the decisions between the combination of modules they wanna have in the current memory cycle environment.

We model a conservative percentage, I would say, you know, for MRDIMM at this point in time. You know, ramp will start when the platforms ramp in the market, and that’s when we’re gonna have a better view.

Aaron Rakers, Analyst, Wells Fargo: Any thoughts on CXL?

Luc Seraphin, Chief Executive Officer, Rambus: Oh, sorry, I missed the second part of your question. Sorry, Aaron. CXL, you know, we do have very good traction in our IP business. We are not planning to, you know, launch a semiconductor product at this point in time. You know, we do have this on our shelves, if you wish, as we designed one a couple of years ago. But we do see, with Agentic AI, we do see demand for, you know, standard DIMMs and MRDIMMs, you know, as being the main beneficiaries of that, and that’s where we will continue to focus our attention.

Aaron Rakers, Analyst, Wells Fargo: Yeah. One final quick one. When you guys talk about the opportunity to grow sequentially in product revenue, you know, into the back half of the calendar year, I’m curious about seasonality in the second half versus the first half, and whether there’s anything that changes your views, maybe relative to the last couple of years. You know, I think you’ve seen some decent growth second half versus first half. Thank you.

Luc Seraphin, Chief Executive Officer, Rambus: Yes. Thanks, Aaron. That’s a good observation. We actually do see, you know, the second half shaping up slightly different than the first half. You know, better growth in the second half. You know, a lot of times it has to do with the launch of new platforms. You know, they typically hit the market, if they are on time, you know, in the second half of the year, and that’s where you have, you know, more products there. Even if you look at the first half of this year at the midpoint of our guidance for Q2, and you compare it to the first half of last year, you know, we’re still growing, you know, close to 18%.

You know, the first half, despite our issue in Q1, is still much higher than the first half of last year, and we believe the second half is gonna show growth. We do see some seasonality, and typically our second half is stronger than our first half.

Aaron Rakers, Analyst, Wells Fargo: Yeah. Thank you.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you, Aaron.

Operator: Your next question comes from the line of Gary Mobley with Loop Capital. Please go ahead.

Gary Mobley, Analyst, Loop Capital: Good afternoon, gentlemen. Thanks for taking my question. If I take the sum of your licensing billings and your contract and other revenue in the first half of this year, from the results and the guide, and compare that to the same period last year, it looks like you’re generating some abnormally strong growth. Is that due to any sort of variance in the patent licensing, or should I take this to mean that your Silicon IP business might actually be running north of $150 million annually right now?

Luc Seraphin, Chief Executive Officer, Rambus: You know, thanks, Gary. We can see some quarter-to-quarter variations in these two categories, just given the nature of the business. I would say that underlying this, we see very good traction in our Silicon IP business. Actually, AI has an impact on our Silicon IP business, which is also very positive, as people who develop custom solutions for AI are looking for new interfaces and new security solutions like the ones I mentioned in the prepared remarks. We do have very good traction in the Silicon IP business, and we continue to expect this business to grow 10%-15% a year, based on that. Our other business, our patent licensing business, can also change from quarter to quarter.

You know, we do renew agreements on a regular basis, and sometimes these agreements are, you know, structured in different ways, depending on the customers and what they wanna do. We have some strong quarters, some quarters that are not as good. But on average, you know, this business continues to be stable at $200-$220 million. I would say I would not, you know, pay too much attention to the quarterly split, you know, of these revenues. The fundamentals are really good. What I would add to this is if you look at our Patent Licensing business, our Silicon IP business, or our Product business, they all benefit from, you know, what’s happening in the memory subsystem area.

You know, they all benefit from AI and the move from AI training to AI inference. That gives strength to our results, you know. When we have a challenge like we had last quarter on the product line, then we have these two other business lines that allow us to meet our numbers.

Gary Mobley, Analyst, Loop Capital: Okay. Thank you, Luc. I just wanna follow up to ask about CPU roles in AI-optimized servers. Like, there’s been a lot more noise recently indicating a higher ratio of CPUs to GPUs in AI-optimized servers driven by agentic workloads, and you sort of hinted at that. To put this into a question, I’m curious if, you know, we’ve moved to a point in time where we might see a 1-to-1 ratio of CPU to GPU. Does this alter your view on the growth rate of your SAM for your product revenue, or the size of it?

Luc Seraphin, Chief Executive Officer, Rambus: We are excited with, you know, where the market is evolving with Agentic AI and inference. If you look at the types of architectures, software architectures, hardware architectures that inference requires, then you clearly see that the ratio between CPUs and GPUs is changing, and is changing in favor of CPUs. Overall, that’s a very good thing for us. It’s just coming from the nature of, you know, what inference is or what Agentic AI is. That’s a good thing for us. Is it gonna be one-to-one? Very difficult to say at this point in time. You know, everyone is trying to optimize now the memory subsystems. You know, everyone is trying to use HBM where it’s really good, use LPDDR where it’s really good, and use DDR and MRDIMMs where it’s really good.

I would say that DDR and MRDIMMs will continue to be, you know, the workhorse of these, you know, inference AI solutions. The fact that all of these systems start to coexist, you know, HBM, DDR, LPDDR, is really good. You know, they all try to resolve a different part of the AI workload, and this plays to our strengths because this is what we’ve been doing, you know, forever at Rambus. I would say that the move to AI inference and the move to agentic AI will change the ratio in favor of CPUs, and that’s good for us.

Gary Mobley, Analyst, Loop Capital: Thank you. Appreciate it.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you.

Operator: Your next question comes from the line of Sébastien Naji with William Blair. Please go ahead.

Sébastien Naji, Analyst, William Blair: Thank you. Maybe my first question, I wanted to ask about the new SOCAMM products that you announced last week. Could you maybe just comment on what Rambus’ dollar content looks like for each SOCAMM module, just across the different voltage regulators and the SPD hub? Any unit economics you can give us?

Luc Seraphin, Chief Executive Officer, Rambus: You know, given the current competitive environment, I stay away from giving, you know, pricing on these things. I would say that the content on, you know, a SOCAMM from the standpoint of Rambus, you know, we have three voltage regulators and an SPD hub, so the content is minimal. This is what I was saying, you know, earlier on one of the questions. I do believe that this is strategically important for us because in the long run LPDDR may play a larger role in the data center, especially next-generation LPDDR solutions. From a content standpoint, it stays minimal and the volume stays minimal, and I would leave it there.

Sébastien Naji, Analyst, William Blair: Okay. Okay, that’s fair. Maybe just turning back to the RDIMMs. Could we get an update on the progress you’re seeing with companion chips? How much revenue came from those companion chips in Q1? Maybe just relatedly, how important is it for your silicon customers that they have all of these DIMM components bundled together coming from one provider versus having to put these together from different providers?

Luc Seraphin, Chief Executive Officer, Rambus: Yes. Thank you. John, go ahead.

Unknown CFO/Finance Executive, Chief Financial Officer or Finance Executive, Rambus: Sure. The newer products, Sébastien, are contributing a low double-digit percentage of our total product revenue during the first quarter. We would expect it to be roughly the same in the second quarter as we see some growth in the overall revenue contribution from that part of our business.

Luc Seraphin, Chief Executive Officer, Rambus: Yeah. What I would add to this is that this is steady growth quarter-over-quarter. You know, you saw this, you know, in 2025: every quarter, we had a slightly higher percentage. We continue to do that, and we expect to continue to do that for the second half of the year, you know, with this. We expect maybe to exit the year with a mid-double-digit percentage of product revenue, you know, coming from our new chips. Now to your other question, it is becoming more and more important for customers to, you know, have the whole chipset from one supplier, especially as the performance requirements increase. The reason has to do with interoperability.

You know, making sure that all of these chips on a module work well together at very high speed in very harsh environments is becoming more and more difficult to achieve, and that’s why our new customers request, you know, us to have the whole solution and to help them go through these generational changes.

Sébastien Naji, Analyst, William Blair: Makes a lot of sense. Thank you, Luc. Thank you, John.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you. Sure.

Operator: Your next question comes from the line of Kevin Cassidy with Rosenblatt Securities. Please go ahead.

Kevin Cassidy, Analyst, Rosenblatt Securities: Yeah, thanks for taking my question. During the quarter, as you were building inventory, were there any orders that you had to leave on the table that you weren’t able to book because you didn’t have the inventory, that maybe could be some upside surprise?

Luc Seraphin, Chief Executive Officer, Rambus: No. You know, we’ve not been in that situation. There are a few market dynamics that we have to anticipate. One is, as I said earlier, we do see supply tightening, especially on the back end. We wanna make sure that, you know, if that situation continues, we have enough supply to supply our customers. The second thing that is happening is that, you know, there are fast transitions between generations. You remember we were talking about generation one moving to generation two. We indicated on the last call that, you know, generation three, you know, is ramping very fast.

We wanna make sure that on these new generations of products, we also have enough inventory, because the ramps, you know, on the customer side can be quite steep, and we just don’t wanna miss them.

Kevin Cassidy, Analyst, Rosenblatt Securities: Okay. I understand. Maybe even as you’re using your balance sheet to build more inventory, you know, when Intel reported, they said they were even able to ship some previously written-down inventory. You know, it seems like the demand for CPUs is so strong, and also DRAM, that, you know, maybe older generations will get a little bit of a revival. Is anything like that possible, or it sounds like you’re saying everything’s shifting to Gen 3 very quickly?

Luc Seraphin, Chief Executive Officer, Rambus: From a demand standpoint, certainly the bulk of the demand for DDR products is shifting to Gen 3. What you’re describing in terms of, you know, using inventory of old products to serve, you know, demand is something that we continuously do, you know, and look at. You know, that’s part of our, you know, inventory management processes.

Kevin Cassidy, Analyst, Rosenblatt Securities: Okay, great. Thank you.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you.

Operator: Your next question comes from the line of Mehdi Hosseini with SGI. Please go ahead.

Sebastian, Analyst (for Mehdi Hosseini), SGI: Hi, Luc. This is Sebastian filling in for Mehdi. My first question is on the LPDDR SOCAMM2 chipset. Would you mind clarifying the content of the chipset? It seems that the solution consists of one SPD hub and three voltage regulators. Do you expect to add any PMIC content there? What does the pricing of the SPD hub and the voltage regulators look like relative to the DDR DIMM? I have a follow-up.

Luc Seraphin, Chief Executive Officer, Rambus: Sure. Yes, on the SOCAMM solution, we have one SPD hub and two types of voltage regulators. Three voltage regulators in total, but two types. One 12-amp regulator and two 3-amp regulators. That’s the content. As I said, the content is minimal. You’re talking about PMIC. There’s no power management IC per se. You know, that function is done by the voltage regulators in this generation of product. That’s why we think it’s very strategic for us.

The way we look at this is that when LPDDR6 is available, you know, that LP memory will offer even more speed and even more, you know, power capability, and it will require, you know, possibly more complex, you know, chips for, you know, power management, and we will work on those. One can imagine as well that, you know, as the market evolves, you know, in the longer run, the market will probably need the equivalent of, you know, RCDs as well. This fits exactly in our strategy, and that’s why I’m talking about a stepping stone. We wanna make sure that we are early in these new technologies. They do not cannibalize the old technologies. They are complementary to them.

In the long run, they have the potential to grow quite nicely, and they build on strengths that we have, which have to do with signal integrity and power integrity. Now, in the short run, for the SOCAMM2 on LPDDR5X, you know, as I said, the volumes and the content, the dollar content, are gonna be very low. But that’s a very interesting and strategic stepping stone for us in that area.

Sebastian, Analyst (for Mehdi Hosseini), SGI: Thanks, Luc. That’s really helpful. I guess my second question is on DDR5. How should we think about the timing of the ramp of Gen 4 and Gen 5 as we go to higher-volume manufacturing?

Luc Seraphin, Chief Executive Officer, Rambus: Gen 4, you know, is going to start to ramp this year, but Gen 4 is kind of a niche generation, if you wish. It doesn’t have the same traction as, you know, Gen 1, Gen 2, Gen 3, or Gen 5. I think everyone is now waiting for Gen 5. We are going to start shipping products that correspond to Gen 5 towards the end of the year. But just like for the MRDIMM, Gen 5 is completely dependent on the timing of the ramps of the next-generation platforms from Intel and AMD. This is where they’re going to be adopted. That’s why, you know, we do see, you know, initial volumes this year, but the bulk of the volume, just like for MRDIMM, is gonna start in 2027.

Sebastian, Analyst (for Mehdi Hosseini), SGI: Got it. That’s very helpful. Thank you, Luc.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you.

Operator: Your next question comes from the line of Mark Lipacis with Evercore ISI. Please go ahead.

Mark Lipacis, Analyst, Evercore ISI: Great. Thanks for taking my question. A question on the DIMM attach rate. Is it different for CPUs used to perform orchestration and agentic AI versus CPUs used in standard servers versus CPUs that might be put next to the GPUs and the XPUs and the custom ASICs? Should we think about the attach rates differently for these?

Luc Seraphin, Chief Executive Officer, Rambus: It’s a very good question. A very difficult question also, Mark. I would say that the way we look at it is, if you look at inferencing and agentic AI, the functions that have to be performed by these CPUs are closer to standard CPU workloads. I think the highest attach rate that you would find is really close to the GPU-HBM platforms. That’s where you have the heaviest loads, if you wish, for these CPUs. That’s how, at this point in time, I would compare it. I would say if you take a DGX box, you know, with GPUs and HBM, then the CPUs there are the ones that use the most memory in terms of capacity and bandwidth.

I would say that when you go to inferencing, then, you know, it’s probably a little less, but it’s difficult for us at this point in time to model that.

Operator: Mark, your line is open.

Mark Lipacis, Analyst, Evercore ISI: Hi. Sorry, I guess my phone dropped, and I don’t know if my question came through.

Luc Seraphin, Chief Executive Officer, Rambus: Yes.

Mark Lipacis, Analyst, Evercore ISI: Luc, I was wondering, should we think about the DIMM attach rate differently for CPUs used in orchestration for Agentic AI versus CPUs used in standard servers versus CPUs used for inferencing that get placed next to the GPUs, ASICs, and XPUs? Is there a different density there for the DIMMs?

Luc Seraphin, Chief Executive Officer, Rambus: It’s a very good question, Mark, but a very difficult one to answer. The way we look at it at this point in time is that the highest use of memory capacity and bandwidth really resides close to the GPUs and these GPU-HBM clusters, if you wish. That’s where you have the most need for very high capacity and very high bandwidth, which on average could be higher than what we find in inference and other solutions. We have not modeled that at this point in time. It’s hard to model.

We do see in aggregate that inference being added to training is very good traction for the use of standard DIMMs or MRDIMMs in general. The attach rate is difficult to model at this point in time.

Mark Lipacis, Analyst, Evercore ISI: Gotcha. Okay, that’s fair enough. On the tightness in the back end that you’re noticing: do you know, or can you explain, what the cause of that is? Is it the idea that a lot of the back end happens in Southeast Asia and procures a lot of energy from the Middle East? Or is it capacity? Is it more that the whole industry is in a great recovery period and capacity utilization rates are really ticking up? Do you have a sense of the cause of the tightness in the back end?

Luc Seraphin, Chief Executive Officer, Rambus: There are a couple of reasons. One is that demand, especially in the data center, has become very high recently. There’s increased demand there. The second reason is that a lot of semiconductor suppliers have moved their back-end supply chains away from China to other countries in Asia, and that has put a strain on the total capacity of these back-end suppliers. It’s the combination of the two. We’ve not yet seen an effect of the war. There are discussions about some basic elements, like gas, that are going to be affected, but we don’t see this yet.

The main reason at this point in time is increased demand, especially in the data center, combined with semiconductor companies moving their supply chains outside of China.

Mark Lipacis, Analyst, Evercore ISI: Okay, that’s really helpful. A last question, if I may. As you think about your market share this year, are you of the view that you are a share gainer, or that you keep share flattish or down? What is your view on your ability to gain share? Thank you.

Luc Seraphin, Chief Executive Officer, Rambus: We continue to gain share. Across 2024 and 2025 we gained share, and we exited 2025 at a mid-40% share. There’s no indication that we’re not going to continue on that trajectory. This year, the market is at a high level, transitioning from Gen 2 to Gen 3, and our footprint in Gen 3 is really good as well. There’s no sign of any erosion of that share. If we add the other components, then we’ll grow faster than the market, because we add content as well to what we ship to the market. We’re very pleased with where we were in 2025. As you know, Mark, we tend to talk share on a yearly basis.

Shares can fluctuate from quarter to quarter, but we don’t see any sign of erosion of our share going into 2026.
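The "grow faster than market" claim above is simple multiplicative arithmetic: revenue growth decomposes into market growth, share change, and content-per-unit change. A minimal sketch with entirely hypothetical inputs — only the mid-40% exit share comes from the call; the growth and content figures are invented for illustration:

```python
# Hypothetical illustration of the "faster than market" arithmetic from
# the share discussion above. All inputs except the mid-40% share range
# are made-up assumptions, not disclosed figures.

market_growth = 1.10          # assumed 10% unit-market growth (hypothetical)
share_change = 0.46 / 0.44    # share drifting up within the mid-40% range (hypothetical)
content_change = 1.05         # assumed 5% more content per unit shipped (hypothetical)

# Revenue growth is the product of the three factors.
revenue_growth = market_growth * share_change * content_change
print(f"Implied revenue growth: {revenue_growth - 1:.1%}")
```

Under these assumed inputs, the implied revenue growth (~21%) exceeds the assumed 10% market growth, which is the mechanism Luc describes: share gains and added content compound on top of market growth.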

Mark Lipacis, Analyst, Evercore ISI: Gotcha. Very helpful. Thank you.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you, Mark.

Operator: At this time, there are no further questions. This concludes the question and answer session. I would now like to turn the conference back over to the company.

Luc Seraphin, Chief Executive Officer, Rambus: Thank you, everyone, who has joined us today for your continued interest and time. We look forward to speaking with you again soon. Have a good day.

Operator: Thank you. This now concludes today’s conference.