DigitalOcean Q1 2026 Earnings Call - AI-Native Cloud Launch Drives 221% AI ARR Growth
Summary
DigitalOcean delivered a record Q1 2026, accelerating revenue growth to 22% year-over-year and raising its full-year 2026 revenue growth guidance to 25-27%. The company’s strategic pivot to an AI-Native Cloud platform, launched at its Deploy conference, is already paying off with AI customer ARR surging 221% year-over-year to $170 million. Management emphasized that over 80% of AI revenue now comes from inference services and core cloud pull-through, not bare metal, signaling a successful shift up the stack.
The company raised $888 million in equity, using the proceeds to pay down $500 million in debt and secure 60 megawatts of new capacity for 2027. This capital injection allows DigitalOcean to aggressively scale while maintaining a flexible balance sheet with no material debt maturities until 2030. Looking ahead, management projects revenue growth of 50% or more in 2027, driven by the ramp of new capacity and strong demand from AI-native customers like Cursor and Ideogram. The focus remains on delivering a full-stack, open platform that integrates inference, agents, and data, differentiating DigitalOcean from both hyperscalers and bare-metal GPU providers.
Key Takeaways
- Q1 2026 revenue reached $258 million, up 22% year-over-year, beating the top end of guidance and accelerating from Q4 2025’s 18% growth rate.
- AI customer ARR surged 221% year-over-year to $170 million, with over 80% of that revenue attributed to inference services and core cloud pull-through rather than bare metal.
- Million-dollar customer ARR grew 179% year-over-year to $183 million, reflecting strong expansion among top-tier cloud and AI-native clients.
- Management raised full-year 2026 revenue growth guidance to approximately 25-27% year-over-year, with an exit growth rate approaching 30% in Q4.
- DigitalOcean launched its AI-Native Cloud at the Deploy conference, introducing 15 new products across five integrated layers including inference engines, managed agents, and vector databases.
- The company raised $888 million in equity in Q1, using $500 million to repay its Term Loan A and save roughly $50 million annually in interest expenses.
- 60 megawatts of incremental data center capacity were secured across four locations, slated to ramp revenue throughout 2027, bringing total committed capacity to 135 megawatts.
- 2027 revenue growth guidance was increased to 50% or more year-over-year, driven by the new capacity and strong demand from AI-native workloads.
- Adjusted EBITDA margins remained robust at 41% in Q1, with management projecting high-thirties margins for 2026 and approximately 40% margins for 2027.
- Customer pipeline coverage stands at 3x to 4x available capacity, with marquee wins like Cursor and Ideogram validating the platform’s appeal to hyper-growth AI-native companies.
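The headline figures above hang together arithmetically. As a quick back-of-the-envelope check (the derived values below are estimates computed from the reported figures, not company-reported numbers):

```python
# Sanity check of the growth arithmetic in the call.
# Reported inputs come from the transcript; derived values are estimates.

q1_2026_revenue = 258e6   # reported Q1 2026 revenue
q1_yoy_growth = 0.22      # reported 22% year-over-year growth
implied_q1_2025 = q1_2026_revenue / (1 + q1_yoy_growth)

fy2026_low, fy2026_high = 1.13e9, 1.145e9  # guided FY2026 revenue range
fy2027_floor = 1.7e9                       # "exceed $1.7 billion" in 2027

# Implied 2027 growth at the $1.7B floor against each end of the 2026 range
growth_vs_high = fy2027_floor / fy2026_high - 1
growth_vs_low = fy2027_floor / fy2026_low - 1

print(f"Implied Q1 2025 revenue: ${implied_q1_2025 / 1e6:.0f}M")
print(f"Implied 2027 growth at $1.7B floor: {growth_vs_high:.0%} to {growth_vs_low:.0%}")
```

The $1.7 billion floor implies roughly 48-50% growth against the guided 2026 range, consistent with the "50% or more" framing when 2027 revenue exceeds the floor.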
Full Transcript
Jill, Conference Operator: Thank you for standing by. My name is Jill, and I will be your conference operator today. At this time, I would like to welcome everyone to DigitalOcean’s first quarter 2026 earnings conference call. All lines have been placed on mute to prevent any background noise. After the speaker’s remarks, there will be a question-and-answer session. If you would like to ask a question during this time, simply press star one on your telephone keypad. If you would like to withdraw your question, simply press star one again. I would now like to turn the conference over to Radu Patrichi, Head of Investor Relations. You may begin.
Radu Patrichi, Head of Investor Relations, DigitalOcean: Great. Thank you, Jill, and good morning, everyone. Thank you all for joining us today to review DigitalOcean’s first quarter 2026 results. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer, and Matt Steinfort, our Chief Financial Officer. For those of you following along, an accompanying slide presentation is available on the webcast. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management’s best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC, as well as those referenced in today’s press release that is posted on our website.
DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Non-GAAP financial measures will be discussed on this conference call, and reconciliations to the most directly comparable GAAP financial measures can be found in today’s earnings press release, as well as in our earnings presentation that outlines the discussion on today’s call. The webcast of today’s call is available on the IR section of our website. I’ll turn it over to Paddy.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Thank you, Radu. Good morning, everyone, and thank you for joining us today. We had an outstanding Q1 2026. I’ll start with four headlines. First, our momentum is accelerating. Q1 revenue was $258 million, up 22% year-over-year, with $1 million-plus customers growing 179% year-over-year to $183 million in ARR. AI customer ARR grew 221% to $170 million. We beat every financial target we shared in our last call. Number two, we launched the DigitalOcean AI-Native Cloud last week, the most significant product launch in our history, with more than 15 new products across five fully integrated layers built into a modern, open, unified stack purpose-built for the inferencing and agentic era.
Third, we are investing to meet our growing customer demand and to seize the material opportunity in front of us. We raised $888 million in equity during Q1 to strengthen our balance sheet and quickly utilized that flexibility to secure 60 megawatts of incremental capacity that is slated to ramp throughout 2027, bringing our total committed capacity to 135 megawatts. Finally, we are again raising our near and medium-term guidance on the strength of customer demand and the incrementally committed capacity. For 2026, we are increasing our full-year revenue growth projection from 21% to approximately 26% year-over-year and expect to exit Q4 approaching 30%. This revised 2026 growth is entirely driven by our previously committed capacity without any top-line benefit in 2026 from the new 60 megawatts.
With the projected ramp of the incremental 60 megawatts in 2027, we are now projecting revenue growth of 50% or more in 2027, meaningfully higher than the 30% growth we communicated just last quarter. I’ll now spend a few minutes drilling down on each of these four headlines. The momentum we are generating is clear evidence of both our differentiated position and our strong execution across the board. It starts with the accelerating top-line growth. Q1 revenue was $258 million, up 22% year-over-year and up over 400 basis points from Q4 2025’s already strong 18% exit growth rate. We are delivering this growth by continuing to delight our top cloud and AI-native customers. Our AI customer ARR reached $170 million, growing 221% year-over-year.
Our million-dollar customer ARR reached $183 million, growing 179% year-over-year. These are not just customers experimenting on our platform. These are cloud and AI native companies scaling their businesses on DigitalOcean. Our rate of acceleration is also increasing. We delivered a record $62 million in incremental organic ARR, the highest in the company’s history. Customers see our differentiated value and are leaning into our platform. RPO reached $243 million, up an extraordinary 1,700% year-over-year. We are doing all of this with strong profitability. We delivered 41% adjusted EBITDA margins and 18% trailing twelve-month adjusted free cash flow margins. Drilling into our growth, our largest customers continue to be our fastest-growing, and their growth continues to accelerate.
ARR from our $100,000-plus customers grew 73%, while ARR from our $500,000-plus customers grew 132%. ARR from our $1 million-plus customers reached $183 million, growing 179% year-over-year versus 123% last quarter. Our AI customers are the other key driver of accelerating growth. AI customer ARR reached $170 million, growing 221% year-over-year. Most critically, inference and core cloud pull-through increased to more than 80% of total AI customer ARR, up from 70% in Q4. That number tells you something important. We are not a GPU rental business. We are a full stack cloud platform that AI native companies depend on to build, run, and scale their production AI software.
Last week, at our Deploy conference in San Francisco, we launched the DigitalOcean AI-Native Cloud. Let me explain why this is a very significant step. Four forces are fundamentally reshaping AI right now. Inferencing has overtaken training as the dominant AI computing workload. Open-source AI is now in production at over half of AI-native companies. Reasoning models are driving the majority of token consumption. Agentic systems are rapidly moving from experimentation to production. Together, these forces represent AI’s evolution from quote, unquote, "thinking," in which AI plays an advisory role, to both thinking and doing, in which AI delivers outcomes by executing autonomous tasks. The thinking part is powered by AI models in inferencing mode, and the doing part is delivered by a variety of modern cloud computing modules, all working together to take intelligent, autonomous, real-world action.
DigitalOcean AI-Native Cloud is purpose-built for AI natives building exactly these types of workloads. It starts at the bottom with foundational layers. We operate a global scale infrastructure with 20 data centers purpose-built for AI workloads, running a full stack core computing platform with a complete set of computing primitives that agentic workloads demand: Kubernetes, CPU and GPU Droplets, an advanced networking stack including virtual private cloud, object, block, and file storage, and high-performance NFS. This is part of the doing layer, the foundation that the vast majority of GPU-centric clouds simply don’t have. Last week, we launched a new inference engine, which we co-invented with our customers to address their most critical inferencing needs. It delivers a lot more than just serving tokens.
It provides serverless and dedicated endpoints for serving up AI models, batch processing for asynchronous token generation, an intelligent policy-aware inference router that automatically selects the best model for cost and performance, a catalog of over 70 open source and closed source frontier models with day zero access, multimodal capabilities, and guardrails. For customers who want to run their own models, we support BYOM, or bring your own model. This is the quote, unquote, "thinking layer", and it is far more than just serving tokens. It is about serving tokens efficiently with best-in-class performance, tightly integrated with other parts of the cloud. Augmenting this new inference engine is our data and learning layer, for which we announced an enterprise version of our managed MySQL and PostgreSQL databases for advanced workloads. We also announced new vector database support for building agentic workloads.
We also launched a brand-new managed agents platform to give AI natives everything they need to build, execute, and operate autonomous agents at scale with open harnesses, sandbox, state management, agent observability, toolbox for external integrations, and Plano data plane-based orchestration on an open platform without getting boxed into a single LLM or platform provider. This is the DigitalOcean AI-Native Cloud. Five fully integrated layers from silicon to agents with zero lock-in, because we offer open source options at every single layer. This is absolutely essential as our target customers are AI native companies who are creating and monetizing software. AI infrastructure is a material cost of revenue line item for these AI natives, especially when they scale. Maintaining flexibility across models and platforms and leveraging the most efficient model capabilities for every specific task is an existential requirement for them.
AI natives are increasingly adopting open source at every level, including multiple open source models, open agent harnesses, open source vector databases, and so on, to avoid lock-in and deliver compelling unit economics for their customers as they go into hypergrowth mode themselves. Building a truly open, fully integrated platform is hard. That difficulty is precisely what makes our platform durable. The market is validating what we have long believed, that infrastructure without intelligence, without orchestration in a full cloud platform is insufficient for what AI native workloads actually demand. Agentic applications require intelligence, CPU-based execution, stateful memory, managed high-performance storage and databases, and orchestration, all working together natively, not assembled after the fact. Our integrated stack is built for exactly this architecture. That’s what enables us to deliver differentiated performance with compelling unit economics that matter to our AI native customers.
The leading independent benchmarking company Artificial Analysis recently reported that DigitalOcean delivers the number one output speed across all cloud providers for leading open-source models like DeepSeek V3.2 and Qwen3.5, the 397-billion-parameter model. Our 230 output tokens per second on DeepSeek V3.2 is 3.9 times faster than one of the leading hyperscalers. This wasn’t just a hardware story. It required co-designing every layer of the stack, from NVIDIA’s Blackwell Ultra GPUs to custom vLLM optimizations including speculative decoding and kernel fusion, which is exactly the kind of deep engineering that differentiates a modern AI-native platform from GPU firms and inference wrapper providers. The clearest validation of our strategy is the caliber of customers choosing to build and scale on us.
We recently onboarded Cursor, one of the fastest-growing AI applications ever built, for production inference, model fine-tuning, and core cloud services. Ideogram, a leading text-to-image foundation model company, migrated production inference from a hyperscaler to our AI infrastructure, running their own model weights at scale. Hailo AI, serving over 20 million creators with cinematic video generation, runs its full multi-model workflow on our integrated stack. Three different AI-native companies in hypergrowth mode are running their production AI on our AI-Native Cloud. Our pipeline continues to grow in both volume and strategic scale. Let me spend a couple of minutes on our competitive positioning with our new platform announcement. At a high level, unlike the hyperscalers, we are more open and purpose-built for modern software, without the legacy complexity of enterprise workloads designed for the previous era.
Compared to the GPU neoclouds, which are optimized for large training clusters, we are a full stack inferencing and agentic platform. Finally, while the inference wrapper providers offer tokens, we offer the breadth AI native builders need to build complete modern software without forcing them to stitch a platform together themselves. What makes our position genuinely durable is three compounding layers. Number one, our AI middleware. The Plano data plane and inference router, built on technology from our recent Katanemo acquisition completed last quarter, sits between the agents and the underlying infrastructure, intelligently steering workloads across models, regions, and accelerator types based on cost, latency, and availability trade-offs in real time. Number two, our managed agents platform extends computing primitives up the stack with secure runtimes, execution sandboxes, background workers, observability, orchestration, and much more, all purpose-built for agentic applications to be built and scaled on this platform.
The third is data gravity. Through managed databases, vector stores, caching, and object storage, production data lives inside our DigitalOcean AI-Native platform. Models and GPUs are not sticky. Data is. For AI natives, the decision of where to build is rarely about a single feature. It is about platform breadth, quality of abstractions, openness of the platform, and the absence of friction. Delivering that requires deliberate, integrated engineering across every layer, from silicon to agents. It needs an AI-Native Cloud, which is what DigitalOcean has been building towards with millions of R&D hours over the last dozen-plus years. The market opportunity is generational, and we are poised to earn more than our fair share. Global inference traffic will grow 10x by 2030, and agentic workloads consume 15 times more tokens than human users, a multiplier that compounds as AI matures.
We are already seeing it in our numbers. Our AI customer ARR is growing 221%, and over 80% of that is coming from inference services and core cloud, not bare metal. These are companies running full stack production AI on DigitalOcean, and they are accelerating. We are investing to meet this growing customer demand and to seize the opportunity in the massive inferencing and agentic markets. In Q1, we raised $888 million in equity, proceeds that enable us to expand our data center and GPU capacity to meet our growing customer demand while strengthening our balance sheet. Matt will provide more details on the equity raise and our capital strategy later in our comments. Let me give you a brief highlight on our expansion plans.
Starting with our existing committed capacity, we remain on track to deliver our previously communicated 31 megawatts as planned in 2026, with our Richmond facility beginning to ramp revenue in March. On top of this, we have now secured approximately 60 megawatts of incremental data center capacity across four locations, capacity that will ramp revenue throughout 2027. This brings our total committed data center capacity to approximately 135 megawatts. Given growing customer demand, we continue to actively pursue additional capacity beyond this new 60 megawatts, capacity that will be targeted to come online in 2027 and 2028. The opportunity in front of us is enormous, genuinely once in a generation.
Every data point we see, from our growing customer pipeline to the demand signals we are seeing and hearing from our largest customers, to the reactions and interest in our DigitalOcean AI-Native Cloud, reinforces that conviction. As we scale our business to meet this opportunity, we will continue to make the right long-term business decisions to seize this moment while building a durable and profitable growth engine. With momentum continuing to grow, we are further raising our near and medium-term outlooks. For the full year 2026, we now expect revenue growth of approximately 25%-27% year-over-year, with an exit growth rate approaching 30%, a full year ahead of the guidance we provided just last quarter.
This accelerated 2026 growth is based solely on the performance of our previously committed capacity and doesn’t include any projected revenue uplift from the newly committed 60 megawatts. We expect to deliver this 2026 growth with high thirties adjusted EBITDA margins and 9%-12% adjusted free cash flow margins, which does include some start-up costs for the new 60 megawatts. Looking further out, we now expect 2027 revenue growth of 50% or more, up from our 30% guidance last quarter, with approximately 40% adjusted EBITDA margins and high teen adjusted free cash flow margins. This combination of rapid revenue growth and true durable profitability puts us in rarefied company. DigitalOcean is one of just a handful of names across a broad set of software and AI infrastructure players delivering both attractive GAAP operating margins and material revenue growth.
As I shared on our last call, growth and discipline are not trade-offs for us. They’re both operating principles. Our execution of these principles is clear in our results. With that, I will turn it over to Matt to walk through our Q1 results and our updated guidance in more detail. Matt, over to you.
Matt Steinfort, Chief Financial Officer, DigitalOcean: Thanks, Paddy. Good morning, everyone, and thanks for joining us. As Paddy just shared, we had a very good quarter. In my comments, I will review the financial results in detail, walk through our recent balance sheet and capital allocation actions, and then provide an update to our near-term and medium-term outlooks. Starting with Q1, our results were very strong, and we exceeded the guidance we last provided on all key metrics. Q1 revenue was $258 million, up 22% year-over-year, above the top end of our recent guide. The vast majority of this Q1 revenue beat came from strong retention in our top D&E cohorts and from expansion in our top cloud and AI native customers.
The Richmond Data Center, which began ramping revenue in March, contributed less than $500,000 of revenue and less than 20 basis points of year-over-year growth in Q1. Our top customers continue to drive our growth. Our million-dollar customer ARR reached $183 million, growing 179% year-over-year. AI customer ARR reached $170 million, growing 221% year-over-year. We continue to deliver both durable and profitable growth. First quarter adjusted EBITDA was $105 million, up 21% year-over-year, with an adjusted EBITDA margin of 41%. GAAP operating income was $37 million, with an operating income margin of 14%. Adjusted operating income was $64 million, with an adjusted operating income margin of 25%.
Trailing twelve-month adjusted free cash flow was $171 million or 18% of revenue. Trailing twelve-month adjusted free cash flow less lease principal payments was $154 million or 16% of revenue after including $17 million in financed equipment principal payments over the last twelve months. Next, I’ll spend a few minutes on the recent equity raise and what it means for our financial profile and for our capacity plans. In Q1, we raised $888 million in equity, and we have already put the proceeds to work across two important priorities. The first priority was strengthening the balance sheet. We repaid our full $500 million Term Loan A, saving roughly $50 million per year in cash interest and mandatory prepayments.
We intend to use a portion of the remaining cash to retire the outstanding $312 million 2026 convertible notes when they mature. Collectively, these actions result in a flexible balance sheet with no material maturities until 2030. The second priority was expanding capacity to meet demand. As Paddy shared, we have secured approximately 60 MW across four new locations, an 80% increase in our committed capacity. This capacity is projected to begin ramping revenue over the course of 2027. While there won’t be any 2026 revenue impact, the build-out of some of this capacity is likely to start in late 2026, which will impact 2026 cash flow and margins. We expect the CapEx per MW in this new capacity to be higher than for the equipment ordered last year for the 31 MW.
The increase is driven both by the rising component costs the entire market is seeing and by the higher-cost, higher-token-capacity equipment that we plan to install. We expect the incremental ARR per megawatt to be higher as well, and importantly, we expect to generate the same or higher return on investment in these new data centers. We are likely to continue to align the timing of our investments with revenue by financing a material portion of the equipment for these facilities. With all of this, we expect to exit 2026 at approximately three times net leverage with no material debt maturities until 2030. Looking forward, we are again raising our near-term and medium-term outlooks. The strong Q1 retention and growth in our top cloud and AI-native cohorts has continued in Q2.
For the second quarter of 2026, we expect revenue of $272 million-$274 million, representing 24%-25% year-over-year growth. We expect second quarter adjusted EBITDA margins in the range of 37%-38%, which is $102 million at the midpoint, up 14% year-over-year. We expect non-GAAP diluted net income per share of $0.20-$0.23 based on approximately 121 million-122 million weighted average fully diluted shares outstanding. Note that our shares outstanding projection includes a benefit from the projected anti-dilutive impact of the capped call that we purchased along with the issuance of our 2030 notes. For the full year 2026, we are again meaningfully raising our outlook.
We now expect full year 2026 revenue of $1.13 billion to $1.145 billion, representing 25%-27% year-over-year growth with an exit growth rate approaching 30% in Q4. This does not include any projected revenue from the newly committed 60 megawatts. We expect strong full year adjusted EBITDA margins of 37%-39%, which is $432 million at the midpoint. Projected adjusted free cash flow margin will be in the range of 9%-12%, which includes a roughly $100 million cash flow impact in 2026 from projected non-recurring start-up costs for some of our newly committed capacity. Without these costs, adjusted free cash flow margin would be roughly 18%-21% for the year, above prior guidance.
We expect adjusted free cash flow margin, less equipment finance principal payments, to be slightly positive for 2026, including the impact of the $100 million in costs for 2027 capacity. We expect full year non-GAAP diluted net income per share of $1.10-$1.20 on 118 million-119 million weighted average fully diluted shares outstanding. This is an increase to our prior guidance despite the equity raise as the interest savings from retiring our Term Loan A more than offset the impact of the higher share count. We are also increasing our medium to long-term outlook. The 30% 2027 revenue growth outlook we provided last call was based solely on the 75 MW of capacity that we had active or under contract at that time.
With approximately 60 megawatts of additional committed capacity projected to begin generating revenue over the course of 2027, we now expect 2027 revenue to exceed $1.7 billion, full year growth of 50% or more year-over-year. We will deliver this growth while working to make smart investments, generate attractive returns, and maintain a strong and flexible balance sheet. Our margin outlook for 2027 is healthy. We project approximately 40% adjusted EBITDA margins and high teens adjusted free cash flow margins. While we are excited by our progress and the increased growth outlook, we’re not stopping there. We continue to actively look for opportunities to further accelerate durable and profitable growth. With that, I’d like to turn it back over to Paddy.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Thank you, Matt. Before we move to Q&A, let me recap what we shared today. First, our momentum has never been stronger. Our million-dollar customer ARR reached $183 million, growing 179% year-over-year. Our AI customer ARR reached $170 million, growing 221%. Over 80% of that is coming from inference services and core cloud, not bare metal. We are an AI-native inference cloud, not a GPU landlord. Second, we launched the DigitalOcean AI-Native Cloud. We unveiled our full platform last week at Deploy Conference. We acquired Katanemo to accelerate our open-source AI stack. We landed multiple marquee AI-native customers, including Cursor. Our differentiation is clear. The pipeline is deep and the wins are real. We are the AI-native cloud.
Third, we are investing to meet our customer demand. $888 million raised, 60 MW of incremental capacity committed. We are building for 2027 and beyond with disciplined capital allocation and a strengthened balance sheet. Finally, we again raised our near and medium term outlooks: projected exit 2026 revenue growth approaching 30%, accelerating to 50% or more revenue growth in 2027, with attractive margins and a flexible balance sheet. We continue to build a durable and profitable growth engine. The inference and agentic economy is real. The demand is real. DigitalOcean, with its AI-Native Cloud, is purpose-built for this opportunity. With that, let’s open it up for questions.
Jill, Conference Operator: Thank you. The floor is now open for questions. We do request for today’s session that you please limit yourself to one question and one follow-up. Your first question comes from the line of Kingsley Crane of Canaccord Genuity.
Kingsley Crane, Analyst, Canaccord Genuity: Thanks. Needless to say, congrats on the momentum. You’ve earned it. You continue to earn it. It’s great to see. One of the ideas over the past couple weeks is that the mix of CPU and GPU should be closer to 1 to 1 with agentic workloads compared to pure LLM calls. You talk about that new era of thinking and doing in your deck, which was really well prepared. Curious how relevant is that CPU renaissance for your business, given your large core cloud and CPU footprint. Trying to think about the quantitative benefit that could create.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Thank you, Kingsley. Appreciate your question. I think it is unmistakable that we are moving more and more towards an agentic era where more software is going to be rearchitected, and there will be a heavy dose of autonomous agents performing tasks that were previously handled by humans. In that era, the doing part, as I mentioned, will also require intelligence, but it is going to require a tremendous amount of computing. Until about 12 months ago, or more precisely until OpenClaw really showed us the blueprint, we as an industry weren’t really contemplating how compute-intensive it is going to be. When I say compute-intensive, it is not just CPUs, right? It is high bandwidth memory. It is advanced databases like the ones that we just announced last week.
It is safe agent execution. It is orchestration between these agents. There is a tremendous amount of modern computing primitives that are required to orchestrate all of this. I don’t know whether the ratios that have cropped up, which say CPUs to GPUs will go from 1 to 12, as we were previously thinking, to 1 to 1, will hold. I don’t know exactly what that ratio will end up being. What I can tell you is that we are going to need a hell of a lot more compute to do all of these things as more software gets rearchitected over the next handful of years to be more agentic, which requires both inferencing for the thinking part and a lot of computing for the doing part. We are preparing for that.
With the new capacity that we’ve just taken on, all of our new data centers are deploying our full stack AI-Native Cloud. It is not just inference services. It is the full stack AI-Native Cloud that is getting deployed in these data centers. We are getting ready for a compute-heavy future. We are starting to see that in a very pronounced way from some of our advanced AI-native customers as they themselves move into an agentic era.
Kingsley Crane, Analyst, Canaccord Genuity: Thanks. It’s really helpful. For either Paddy or Matt, we’ve been thinking about low to mid-teens revenue per megawatt for AI. You’ve mentioned that the incremental capacity you’re bringing on could be higher. Just in addition to that, like, to what extent can software capabilities like inference engine, inference router, open source model adoption, agent framework, you know, push that revenue per megawatt higher? I think we’re all doing that megawatt math, but just curious to what extent that figure can become untethered from the peers there. Thanks.
Matt Steinfort, Chief Financial Officer, DigitalOcean: Thanks, Kingsley. I think that’s a great question. We definitely expect that we can increase that $13 million of ARR per megawatt over time. I mean, you’re already seeing that non-bare metal is over 80% of our AI customer ARR, and that should increase the, you know, the ARR by itself. We’re also expecting, as you just pointed out, there’s gonna be a lot of core cloud and a lot of compute that gets pulled through with that. Right now it’s still, I’d say, a modest amount of core cloud pull-through, and we think there’s upside there.
To your point, all of the capabilities that we announced at Deploy, the serverless inferencing and a lot of these other capabilities, they detach the pricing and the value creation from a dollars-per-GPU-hour basis and enable us to capture both higher revenue and higher margins with stickier services. We’re very optimistic about our ability to drive the ARR per megawatt up over time. Certainly that’s part of our investment thesis as we’ve taken on this incremental capacity.
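The megawatt math Kingsley alludes to can be sketched with some rough arithmetic. This is an illustrative model only: the roughly $13 million ARR-per-megawatt baseline comes from the call, while the mix shares and the services uplift multiple are hypothetical assumptions, not company figures.

```python
# Illustrative ARR-per-megawatt model. Only the ~$13M/MW baseline is from
# the call; the mix shares and 1.5x services uplift are assumptions.

def arr_per_megawatt(bare_metal_share: float,
                     bare_metal_arr_per_mw: float,
                     services_uplift: float) -> float:
    """Blend bare-metal ARR/MW with higher-value service ARR/MW.

    services_uplift is the multiple that inference/platform services
    are assumed to earn per MW relative to bare metal.
    """
    services_share = 1.0 - bare_metal_share
    return (bare_metal_share * bare_metal_arr_per_mw
            + services_share * bare_metal_arr_per_mw * services_uplift)

# Baseline: capacity sold entirely as bare metal at ~$13M ARR per MW.
baseline = arr_per_megawatt(1.0, 13.0, 1.5)

# Shifted mix: 20% bare metal, 80% services at a hypothetical 1.5x uplift.
shifted = arr_per_megawatt(0.2, 13.0, 1.5)

print(f"baseline: ${baseline:.1f}M/MW, shifted mix: ${shifted:.1f}M/MW")
```

Under these assumed numbers the shifted mix lands around $18M per megawatt, which is the mechanism behind management's claim that the figure can become untethered from a pure GPU-hour rate.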
Kingsley Crane, Analyst, Canaccord Genuity: Thanks so much.
Jill, Conference Operator: Your next question comes from the line of Gabriela Borges of Goldman Sachs. Your line is open.
Gabriela Borges, Analyst, Goldman Sachs: Hi. Good morning. Thank you and congratulations. Paddy, you started this conversation talking about how the beat in the quarter was not driven by new capacity coming online, but rather previously committed capacity. For either yourself or Matt, talk to us a little bit about how we should think about the beat and raise cadence. You’re already giving us visibility into 2027 based on capacity coming online. In any given quarter, what levers do you have to beat and raise? Maybe if you could comment on the pricing dynamics and the levers you can pull on pricing within that. Thank you.
Matt Steinfort, Chief Financial Officer, DigitalOcean: Thanks, Gabriela. That’s a great question. I think when we guided to 2026, and we outlined the pace at which capacity was gonna come online this year, there’s a number of assumptions that we had to make that gave us the ability to have very strong confidence in the guidance that we were providing. One was the timing of the facilities coming online. The second was our ability to sell into that capacity as it came on. The third, the pricing at which we’re selling into that capacity.
If you think about all of those dimensions, you know, again, when we provided that guidance, which was, you know, late last year or early this year, we had to make sure that, you know, we had enough cushion. What we’re finding is we’re doing pretty well on all three of those dimensions. The Richmond data center came online. We had said second quarter. It came online in March. It didn’t contribute much to the first quarter, but it’s online and ready to go ahead of what we had said. We’re able to sell into it on, I’d say, a very appropriate and aggressive timeline, which is really good.
As you’re seeing in the market, the pricing for, you know, GPU hours and services right now is not seeing any kind of price compression. In fact, we’re seeing increases in the prices for H100 and H200 and some of the legacy gear. I’d say we have sufficient ability to continue to beat and raise. We just outlined the incremental 60 megawatts for next year, and we’re taking a very similar approach, which is we’ll be cautious about our expectations around timing of delivery. We’ll be cautious about expectations of how long it takes to sell into it, and we’ll be cautious about the pricing that we get, and then we’ll work to exceed that.
Gabriela Borges, Analyst, Goldman Sachs: Matt, I’ll pick it up just from those comments on being cautious. I think we can all agree that we’re pretty early in what is going to be an incredible product cycle. At some point, the product cycle will peak. I guess the question for the both of you is, what are the demand signals that you’re watching to figure out whether 2027 growing north of 50% is the peak growth rate? Does it accelerate from there? Does it normalize and come down? What are some of the metrics that we could potentially be tracking from the outside, and what do you track internally? Thank you.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Yeah. I can start at a high level, and then I’ll let Matt comment on your specific 2027 question. We all agree, Gabriela, that this is such a tectonic shift in how software is built and delivered. One thing that I also want to highlight here is that inferencing and agentic workloads will scale very differently compared to training. Training is a one-time, almost episodic turn-on. The entire cluster comes online and just stays static from a workload perspective. Inferencing and agentic workloads, meanwhile, have more of a cloud kind of characteristic in terms of how the workload ramps, although the gradient of the ramp has been significantly steeper than we have ever seen with traditional cloud software.
A lot of our confidence is coming from observing our big marquee AI-native customers and seeing their workload growth and hence the inferencing demand that they translate onto us and our platform. In terms of the product cycle peaking, I think we are still a few revisions away, certainly for our product and also as an industry, from that peak cycle. OpenCloud, I have to remind everyone, is barely 100 days old. Since then there have been a few other personal productivity agents like Hermes Agent and a few others that have come. The whole industry is now figuring out what agent harnesses should look like. These are still very early days of the agentic architecture.
I expect the product cycle refresh to continue for quite a bit into the next several quarters before we can say, "Okay, we now have a blueprint for how these modern autonomous systems are gonna be built and operated and scaled." I think we still have a lot of innovation ahead of us. What gives us a lot of confidence is this front row seat; working with these marquee AI-native customers gives us a tremendous opportunity to learn about their application patterns. This luxury is available to us because we are not just a bare metal provider.
These customers want us to be in the room where they’re solving these problems. That’s how we were able to build a lot of the things that we showed last week in terms of innovation, like the intelligent routing and many of the caching techniques that made us number one in DeepSeek and Qwen token throughput and time to first token and things like that. It gives us a front row seat and a co-invention opportunity to do this alongside our customers. I definitely feel like the product cycle is not going to peak anytime soon.
Matt Steinfort, Chief Financial Officer, DigitalOcean: I think the best metric to watch, which we’re watching, is ARR per megawatt. I mean, if you think of token efficiency as one of the primary differentiators, your ability to provide value to your customers comes down to how much revenue you can get for those tokens, how efficiently you can provide them, and how sticky the services you’re providing are. That should all translate into higher ARR per megawatt, which is why we’ve introduced that metric. We track it internally, and it’s all about optimization for us. That’s where we’re focused. That’s what we would point the market to watch as well.
Gabriela Borges, Analyst, Goldman Sachs: It’s really good stuff, team. Thank you so much. Congratulations.
Jill, Conference Operator: Your next question comes from the line of Mark Zhang of Citi. Your line is open.
Mark Zhang, Analyst, Citi: Hey. Thank you so much for taking the question, team. Very nice to see the growing non-bare metal ARR this quarter. Just want to dig into some of the dynamics there and the inputs. You know, one, to get a sense of the contributions from new lands versus conversions of existing bare metal customers. How should we think of the pace of the mix shift going forward? And can you give a sense of the ASP uplift when you convert from bare metal? Thank you.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Thank you, Mark. Your line was a little choppy, but I think I got the essence of your question. In terms of the mix of the customers, it’s a healthy mix of AI-native customers that are new to our platform, that are not just consuming core AI services, but also, by the nature of their inferencing workloads, use storage systems and database systems and, increasingly, core computing primitives. We also have some of our existing digital native enterprise customers starting to ramp up their AI innovation and AI workloads. It goes both ways, and we are super happy to see that.
In terms of the bare metal consumption, pretty much most of the customers that come to us now are coming to us because they see this rich set of inferencing entry points. Last week we announced serverless inferencing, dedicated inferencing, batch inferencing and things like that. Increasingly, customers are realizing, especially the AI natives, that they were forced to deal with all this complexity over the last couple of years, not because they wanted to, but they had to because there were very few vendors who were able to provide this kind of kernel optimization and performance enhancement using software and hardware co-design.
Now that these kinds of capabilities are available out of the box from our AI-Native Cloud, we are seeing a lot more appetite from our customers to come in at a higher altitude in our platform. We are not having to sell bare metal at all. In fact, we don’t even have that as part of our standard pitch.
Matt Steinfort, Chief Financial Officer, DigitalOcean: From a timing standpoint, this is one of the benefits of our, you know, consumption-based model, where we’re not locking in bare metal prices for 4 and 5 years. If you notice in the materials we provided, bare metal not only decreased as a percentage, but it actually decreased in absolute dollars of the AI customer ARR. That’s because as these customers come up for contract renewal, we have the opportunity to resize and reconfigure that capacity. If we want to make that available to serverless inferencing, where we know we’ll earn a higher return than bare metal, that’s what we do. You know, we have the ability to steer that percentage down by not consuming our scarce capacity for bare metal services.
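The renewal-time repurposing Matt describes amounts to a simple allocation rule: as each short-term bare metal contract expires, steer the freed capacity toward whichever service earns the higher projected return per megawatt. A minimal sketch, where all service names and return figures are hypothetical illustrations rather than disclosed numbers:

```python
# Hypothetical sketch of renewal-time capacity steering. The service names
# and ARR-per-MW returns below are illustrative assumptions only.

EXPECTED_ARR_PER_MW = {       # $M ARR per megawatt (hypothetical)
    "bare_metal": 13.0,
    "dedicated_inference": 17.0,
    "serverless_inference": 21.0,
}

def steer_capacity(expiring_mw: float) -> tuple[str, float]:
    """At contract renewal, repurpose freed megawatts to the
    highest-return service rather than re-letting bare metal."""
    best = max(EXPECTED_ARR_PER_MW, key=EXPECTED_ARR_PER_MW.get)
    return best, expiring_mw * EXPECTED_ARR_PER_MW[best]

# Example: 5 MW of bare metal contracts rolling off this quarter.
service, projected_arr = steer_capacity(5.0)
```

The point of the sketch is the control loop itself: because contracts are short (months, not years), the operator re-decides the allocation frequently, which is what lets the bare metal share fall in absolute dollars even as total AI ARR grows.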
Not only are new customers not asking for it, but the customers that are on it right now, you know, we can rotate them off and into the new services, or we can repurpose the capacity for higher margin services, and we control that.
Mark Zhang, Analyst, Citi: No, that’s terrific. Thank you for walking through that. Maybe a follow-on. It’s terrific to see the new five-layer inference platform that you guys unveiled last week at Deploy. How should we think of the changes to the go-to-market from here? Obviously, there’s a lot to sell. There are many more products for customers to consume. How are you thinking about the go-to-market and partnerships, and how to really efficiently land new customers on these new modules? Thank you.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Thanks, Mark. That’s a great question. Our go-to-market over the last several quarters has been aimed at getting marquee AI-native logos, and that’s how we have landed some of the customers that I was so proud to announce today. We just have to scale up what we are already doing. Just as a reminder, we have a very small but mighty team of AI-native-focused sellers that are quite capable of selling our AI-native cloud stack. On top of it, we also have a very focused startup ecosystem team that nurtures high-quality AI-native companies in Silicon Valley through their growth phases.
We also have the tremendous luxury of having perhaps the best product-led growth machine, which keeps growing in strength. We get a tremendous amount of traffic and volume through our product-led growth flywheel, which includes a heavy dose of AI-native customers that absolutely love the simplicity and the absence of friction in our platform, enabling them to just come and try our platform without any human intervention. We have multiple front doors as a way to solicit customer entry into our platform. We’ll be fortifying some of those things, and we have a very strong partnership team that enables us to build relationships with the various frontier model and open source model companies and the rest of the ecosystem.
Jill, Conference Operator: Your next question comes from the line of Jason Ader of William Blair. Your line is open.
Jason Ader, Analyst, William Blair: Yeah, thanks. Good morning. Paddy, I know you guys are exploiting a gap in the market right now, especially, you know, with the NeoClouds. The NeoClouds are all messaging, shifting to a full stack approach and a focus on inferencing. I guess my question is: How sustainable is your differentiation, relative to the NeoClouds and what drives that?
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Yeah. Great. Thank you, Jason. I think the market opportunity is just huge and tremendous, right? We feel that the NeoClouds adding software capabilities is a great validation of our strategy. We’ve been saying that for a long time. We are in fundamentally different businesses than the NeoClouds. They’re training first, and that’s a great model. They have a small number of highly concentrated customers with take-or-pay agreements. That type of contract needs a tremendous amount of infrastructure and discipline and execution to pull off. It is a significant heavy lift to deliver on these massive hyperscaler offtake contracts. I like our chances of continuing to innovate on the software stack.
As I said, it takes a lot of hard work to build a well-integrated stack like the one that we announced last week. It is not just a stack that lives on a PowerPoint slide. You can log into cloud.digitalocean.com and see how these layers work together. We are also incredibly proud of the fact that we have made the stack completely open, with open source options at every single layer. That is a pretty big deal that I want everyone to appreciate, because our target customers are AI-native customers, and they feel very uncomfortable boxing themselves into a single LLM provider. That is just not how their businesses will scale. For them, having open source work as well as closed source as part of the native stack is very important.
Driving this kind of integrated open source-enabled stack is really hard. I like our focus, I like our discipline in terms of doing this. The market opportunity is going to be so big that I feel very, very convinced that if we focus on learning and understanding our customers better than anyone else and translate that to product innovation, everything else is going to take care of itself. I keep telling my teams, "Be extraordinarily customer obsessed and competitive aware, not the other way around." We should obsess over our customers first so that we can build the best product for them while being aware of competition and not the other way around. I feel we have a lot of room to run with this strategy.
Jason Ader, Analyst, William Blair: Okay. Great. Then one for Matt. Matt, for 2027, you talked about adjusted free cash flow margin in the mid to high teens, I believe. Could you give us a sense of what it would be including lease payments?
Matt Steinfort, Chief Financial Officer, DigitalOcean: Yeah. That’s a great question, Jason. It’s hard to answer though because it’ll depend entirely on the lease terms that we have, so whether we lease over four years or five years or longer period. It’ll also depend on the mix of what we lease versus what we pay for upfront. That’s why we’re not guiding to that at this point. What I can tell you is that we continue to make very disciplined investments. We’ve created a lot of balance sheet flexibility for ourselves with the equity raise. Got a lot of options at our disposal. We’re very excited by the return on investment that we’re underwriting for these new facilities.
We’ll continue to operate with discipline, but we can’t provide specificity on what the lease payments are gonna look like in 2027 because we don’t know yet.
Jill, Conference Operator: Your next question comes from the line of Wamsi Mohan of Bank of America. Your line is open.
Wamsi Mohan, Analyst, Bank of America: Yes. Thank you so much. Paddy, when you look across your customer cohorts, how much penetration are you seeing of AI-driven workloads in the $1 million-plus and $500K-plus customer cohorts? Because of AI, do you expect over the next two years to have an even higher chunk of customers graduating from the $500K to the $1 million-plus cohort? I have a follow-up for Matt.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Yeah, Wamsi. I think the short answer is yes to both. We have a good mix of AI as well as cloud native customers in the $500K and $1 million cohorts. Yes, it is a very important motion that we drive internally to look at every $100K customer and push our teams to find out what is blocking that customer from being a $500K customer. Similarly, we look at every $500K customer and find out how we can make them a $1 million customer, and so forth. With the increased adoption of AI in these customer cohorts, we fully expect those numbers to keep going up and to the right for sure.
Jill, Conference Operator: Your next question comes from the line of Tom Blakey of Cantor. Your line is open.
Tom Blakey, Analyst, Cantor: Hi. Good morning, everyone, and congratulations on the great results here. Maybe a couple questions on my side. You know, Paddy, we’ve talked prior about, you know, 3x-4x demand in terms of your 75 megawatt capacity. It was really impressive to see you announce Cursor here. A great win. Congratulations. Just wondering if you could maybe update us on the framework of what you’re seeing there in terms of, you know, your customer selectivity, and maybe even turning some customers away in this type of market. Secondly, for Matt and maybe the team, just this, you know, CapEx per megawatt, I think investors would love a little bit more color in terms of how much higher this can go for the 60 megawatts.
You know, would it be difficult to just upgrade the prior capacity from a software upgrade perspective to the AI-Native Cloud capacity to, you know, maybe pull some of that in? That’d be helpful. Thank you.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Yeah. I think on the last thing, it is hard to have a non-AI data center deployed with AI hardware because of the limitations. Especially since all of the new ones that we’re deploying are direct liquid cooled, and the hardware specs are just different, Thomas. That’s that. Going back to your first question around the pipeline coverage and how we allocate capacity, that is a new muscle that everyone in the industry is learning, right? Our pipeline, as I have mentioned several times, is 3x-4x, if not more, the actual capacity that we have.
It is a great problem to have, but it is a problem that we are very keen and very thoughtful about resolving, because we have to make some bets. Just like our customers are making bets on us, we have to make bets on how we want to allocate the capacity. As I said in the last call, if we decide to just sell the capacity to the first or the biggest or the loudest customer, we’ll be all done. We can go home and the capacity will all be taken. We have an intention to run this like a cloud, right, where we want as many customers as possible so that we can learn, build a better product, and build a bigger competitive moat.
Platforms that only have a few concentrated customers simply don’t have the luxury to learn and innovate as fast as we do. It’s a balancing act that we are trying to figure out, but so far so good with the types of customers we are bringing on board.
Matt Steinfort, Chief Financial Officer, DigitalOcean: In terms of the cost of the CapEx, it’s certainly gonna be higher than what we experienced for the 31 MW. That equipment was ordered in 2025, and you’re seeing broadly across the industry that component costs are going up. More importantly for us, we’re putting in gear that has higher token capacity and capabilities. We expect to get the same or higher ROI on the investments that we’re making. You know, we’ll invest a bit more. We see a phenomenal opportunity in front of us. We’ve got a very differentiated position. We’re gonna get more capacity out of the investments we make, and we’re gonna earn similar or better returns on the investments.
Jill, Conference Operator: Your next question comes from the line of Josh Baer of Morgan Stanley. Your line is open.
Josh Baer, Analyst, Morgan Stanley: Great. Congrats on a wonderful quarter. Thanks for the question. Just hoping you could double-click a little bit on GPU and other pricing trends that you’re seeing in the spot market. Wondering if you can quantify the portion of your business that’s on demand and exposed to spot versus what portion is contracted and has fixed pricing. Any way that you can characterize the benefit in the quarter, or the impact to the 2026 guide, from spot market pricing?
Matt Steinfort, Chief Financial Officer, DigitalOcean: It’s interesting, Josh, that you point to the spot pricing. We have a small portion right now of on-demand, because most of our capacity is locked up with a customer. If you think about the core of your question, which is how much exposure we have to the ability to raise GPU prices along with the market: because we don’t have 4 or 5-year contracts with our customers, if we’re locked in with a customer, it may only be for 3 months or 6 months or a year. As I said earlier on the call, as those contracts come up, we can rotate. One, we can just raise the price on that customer to whatever the current market prevailing price is.
Two, if it’s a GPU-per-hour price, we can rotate it completely out. We can say, "We’re not gonna sell that capacity in that model any longer, and if you’re interested in it, you’ve got to take our on-demand pricing, or, you know, you’re gonna take serverless inferencing." We have the ability to adjust to the market, I’d say, probably more readily than maybe some of the other folks in the industry. We feel very good about our ability to adapt to pricing. And as I said in response to Gabriela’s question, that flexibility and our ability to execute on it is part of the reason why we’re able to raise the guidance for this year without getting any benefit from the incremental capacity that we just announced.
That’s a great question.
Jill, Conference Operator: Your next question comes from the line of Radi Sultan of UBS. Your line is open.
Radi Sultan, Analyst, UBS: Awesome. Yeah, thanks, guys. Just as you think about adding more capacity, and as the existing, you know, AI customer cohort scales, how should we be thinking about the gross margin profile of this incremental capacity you’re looking to add once it’s fully utilized? You mentioned, Matt, you know, the increased component costs, but maybe just what are the key puts and takes there we should be keeping in mind on the margin side of things?
Matt Steinfort, Chief Financial Officer, DigitalOcean: Hey, Radi. Great question. I think you’ll note in our materials that we highlighted non-GAAP operating margin. The reason that we did that is because, again, if you think of where the industry is going and how different this business is than the business that we had several years ago, gross margin is one input, but operating margin is a better, more holistic view of what’s going on in terms of overall profitability. The revenue growth is so rapid, and while it’s certainly at a lower gross margin, it comes with tremendous operating expense leverage. The operating margins are very strong and very compelling, and we expect those to continue to be very attractive.
Will we see a small, you know, decrease in operating margin as we invest to accelerate our growth, given some of the same timing related issues with bringing on new capacity? We certainly will. If you look at the rate of revenue growth, if you look at the strong operating margins, if you look at the fact that we’ve been very, very disciplined with cash flow, and that we’re earning very good returns, I think you’d agree that we’re positioned very, very well for very durable and profitable growth.
Jill, Conference Operator: Your next question comes from the line of Patrick Walravens of Citizens. Your line is open.
Patrick Walravens, Analyst, Citizens: Great. Thank you very much. That’s amazing results, you guys. Congratulations. Paddy, when I was at your Deploy conference, the speaker got interrupted by applause like five or six times, but two of the times were when you talked about the inference router, and then also when you guys talked about support for the latest DeepSeek model. Can you just talk a little bit about why your customers are so enthusiastic about that?
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Thank you, Patrick. First of all, thank you for coming to Deploy last week. You bring up a really important point. For those of you who have not seen the keynote video recording from last week, I encourage you to please do that. The two points that Patrick just mentioned are really important because AI natives are doing something which is incredibly interesting. Number 1 is they are all running multiple models, right? Because as I mentioned, this is a cost of revenue line item for them, and it will be crippling if they are just beholden to one closed source model. Last week, there were two different models that were announced. One is DeepSeek version 4, and the other one was the latest version from OpenAI. The difference in price was 10x.
In terms of the output tokens, it was literally $3 versus $30. AI natives are doing three things. One, they are all going multi-model. Two, they’re running a lot of open source. Three, many of these AI natives are also running their own version of a model distilled from an open source model or something like that. This intelligent router becomes extraordinarily important so that it can find the right model for the task you’re assigning. We showed a demo, which was super compelling, where it showed better performance at lower TCO per token by routing the incoming prompt to the right model.
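The routing idea described here can be sketched as a tiny cost-aware dispatcher: send each prompt to the cheapest model that meets a capability threshold. The $3 versus $30 prices echo the figures cited above (presumably per million output tokens, which is an assumption); the model names, capability scores, and difficulty heuristic are made-up illustrations, not DigitalOcean's actual router.

```python
# Toy cost-aware model router. Prices echo the ~$3 vs ~$30 output-token
# figures from the call; everything else (model names, capability scores,
# the difficulty heuristic) is a hypothetical illustration.

MODELS = [
    # (name, $ per 1M output tokens, capability score 0-1)
    ("open-weights-distilled", 3.0, 0.6),
    ("frontier-closed", 30.0, 0.95),
]

def required_capability(prompt: str) -> float:
    """Crude difficulty heuristic: prompts with 'prove'/'plan'/'derive'
    keywords are treated as harder. Purely illustrative."""
    hard = any(w in prompt.lower() for w in ("prove", "plan", "derive"))
    return 0.9 if hard else 0.5

def route(prompt: str) -> str:
    """Pick the cheapest model whose capability meets the requirement."""
    need = required_capability(prompt)
    eligible = [m for m in MODELS if m[2] >= need]
    return min(eligible, key=lambda m: m[1])[0]

choice = route("Summarize this customer email")
```

A real router would score difficulty with a learned classifier and fold in latency and cache state, but the economics are the same: every prompt that can be served by the cheap open-weights model avoids the 10x price of the frontier model, which is the TCO-per-token win the demo showed.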
The second thing is, as Patrick mentioned, there was a lot of applause for our DeepSeek support, which is fairly obvious, because AI natives are embracing open source up and down the stack in a very pronounced manner. That’s why it is really important to understand that our target market is very different. These are AI natives that are building and monetizing software. For them, multiple models, open source, and controlling the destiny of their intelligence is an existential thing.
Patrick Walravens, Analyst, Citizens: Great. Matt, if I could ask you a follow-up. Cursor is an amazing win. Congratulations. We’ve all seen the news about SpaceX having an option to buy it. Just how did that fit into your guidance? How did you think about that?
Matt Steinfort, Chief Financial Officer, DigitalOcean: Cursor is a fantastic customer, and as you said, it’s a great indication of the quality of the platform. We’re really excited by it based on the fact that this is not a bare metal contract. They’re using our inference services. They’ve made commitments around NFS and some of the core cloud capabilities. We’re very encouraged by that, and we have a fantastic relationship with them. We haven’t predicated any of our long-term guidance on any single customer. We have, as Paddy said, three to four times the demand for the capacity that we have available, so we’re very confident that they’ll be a good part of that. We’re not basing any of our forecasts on a specific customer.
Jill, Conference Operator: Your last question comes from the line of Raimo Lenschow of Barclays. Your line is open.
Raimo Lenschow, Analyst, Barclays: Hey, thanks for squeezing me in. Two quick questions. Going back to Gabriela Borges’s point in terms of, like, how big the market is: at the moment it looks like most of the work is getting done on training models, and inference is only starting. Paddy, from your perspective, which inning are we in on inference? It seems very, very early still, so I want to get an idea of how long this can go on for. Then Matt Steinfort, for you, one thing that comes up in the market a lot is capacity of new data centers, et cetera. You’re not building 100,000-GPU data centers. You’re much smaller. What’s the constraint on finding sites to go beyond the capacity you announced today?
Thank you, and congrats from me as well.
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: Thank you, Raimo. To answer your question succinctly, since baseball season is just starting, I would say from an inferencing point of view, we are probably in the top of the second inning. Agentic, we are just at the national anthem. It’s just getting started. I think there’s a lot of room for a lot of innovation. The one thing that I’m super proud of with all the announcements we made last week is 15 new product launches. Not just features, 15 new product launches. The velocity and the intensity from our engineering team is just going to make a difference in terms of our ability to establish a leadership position. Then, Raimo, your-
Raimo Lenschow, Analyst, Barclays: Yeah, what was the second question?
Paddy Srinivasan, Chief Executive Officer, DigitalOcean: What was the second part of the question?
Raimo Lenschow, Analyst, Barclays: Oh, it’s data center capacity. Sorry.
Matt Steinfort, Chief Financial Officer, DigitalOcean: It’s like, how difficult is it? Yeah.
Yeah, sorry. We’ve been able to secure the data center capacity that we’ve been targeting. We’re still in active conversations on additional capacity beyond the 60, both for 2027 and 2028. We’ve not had an issue getting capacity that we’ve been trying to track down.
Raimo Lenschow, Analyst, Barclays: Okay, perfect. Thank you. Good luck.
Jill, Conference Operator: That concludes our Q&A session, and this also concludes today’s conference call. Thank you for your participation. You may now disconnect.