April 1, 2026

Why Space-Based Data Centers Face the Same Limitations That Sank Microsoft’s Undersea Experiment

Modular, sealed compute units and steep deployment costs raise tough questions about economics, upgrade cycles and cooling for orbital AI infrastructure

By Ajmal Hussain

SpaceX’s plan to deploy up to 1 million data-center satellites as part of an AI strategy echoes a prior effort by Microsoft to move compute off land. Microsoft’s undersea Project Natick demonstrated technical viability but failed to attract customers or achieve economic scale. Experts warn that the same structural limits - locked-for-life modular designs, high deployment costs, and cooling and radiation challenges - are likely to be more acute in orbit than under the sea, raising doubts about the commercial case for orbital data centers.

Key Points

  • SpaceX’s IPO filing outlines a plan to deploy up to 1 million data-center satellites to support AI workloads, echoing earlier ambitions to move compute off land.
  • Microsoft’s Project Natick proved technical feasibility for undersea data centers but was not scaled due to limited customer demand and unfavorable economics; the same modular, sealed architecture underlies both approaches.
  • Major sectors affected include cloud infrastructure, aerospace and launch services, and semiconductor designers for AI chips, given the demands of cooling, radiation hardening, and frequent hardware refresh cycles.

SpaceX has publicly filed for an IPO that, according to the company’s leadership, would help fund a dramatic pivot toward AI by placing large amounts of computing hardware into orbit. The plan as described would involve launching up to 1 million satellites configured as data centers to sidestep terrestrial constraints such as power and water availability. While the ambition is vast, it is not the first time a major technology firm has tried to escape the limits of land-based data centers.

Microsoft began experimenting with seabed data centers in 2015 under a program known as Project Natick, most notably sinking a shipping-container-sized unit off Scotland’s Orkney Islands in 2018. The project aimed to exploit natural seawater cooling and to integrate with offshore renewable energy, with the goal of reducing the energy consumption of cloud computing. According to people familiar with the program, Natick succeeded on a technical level - it met its engineering objectives - but it ultimately failed to win customer demand or produce an economic model that could scale.

A Microsoft spokesperson provided a brief statement noting that while the company does not currently operate underwater data centers, it intends to keep Project Natick active as a research platform to explore concepts related to reliability and sustainability in data-center design.

Industry specialists interviewed by Reuters drew a direct line between Microsoft’s undersea experiment and SpaceX’s orbital ambitions. Although one concept is submerged and the other is in orbit, both designs rely on sealed, modular compute units that are costly to deploy and cannot be incrementally upgraded, repaired, or expanded - characteristics that are increasingly problematic for AI workloads, which depend on frequent chip improvements and tight cost controls.

Roy Chua, founder of research firm AvidThink, framed the comparison starkly: problems that arose under the sea are likely to be more pronounced in space. Chua highlighted three categories of unresolved risks: thermal management for high-performance AI chips in vacuum, persistently high launch expenses, and the long-term impact of radiation and other environmental stresses on advanced compute hardware.
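The thermal problem Chua cites is rooted in basic physics: in vacuum there is no air or water to carry heat away, so a spacecraft can shed waste heat only by radiating it, per the Stefan-Boltzmann law. The back-of-envelope sketch below illustrates the scale of the issue; every input (radiator temperature, emissivity, a 1 MW compute load) is an illustrative assumption, not a figure from SpaceX, Microsoft, or any cited analyst.

```python
# Back-of-envelope radiator sizing using the Stefan-Boltzmann law.
# Every input here is an illustrative assumption, not a disclosed figure.

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.90         # assumed radiator surface emissivity
RADIATOR_TEMP_K = 300.0   # assumed radiator temperature (~27 C)
HEAT_LOAD_W = 1_000_000   # assumed 1 MW of waste heat from AI compute

# In vacuum there is no convection; heat leaves only as radiation:
#   P = emissivity * sigma * A * T^4   (one-sided radiator, solar input ignored)
flux_w_per_m2 = EMISSIVITY * SIGMA * RADIATOR_TEMP_K ** 4
area_m2 = HEAT_LOAD_W / flux_w_per_m2

print(f"Radiated flux: {flux_w_per_m2:.0f} W/m^2")
print(f"Radiator area: {area_m2:,.0f} m^2 for 1 MW")
```

Under those assumptions, a single megawatt of waste heat calls for on the order of 2,400 square meters of radiator surface, before accounting for absorbed sunlight - a hint of why thermal management dominates orbital compute designs.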

SpaceX did not provide a comment for this analysis.

Public filings accompanying SpaceX’s IPO plans note that the company, which in February acquired an AI startup called xAI, could raise as much as $75 billion in the offering. The disclosed holdings of xAI include social media company X and an AI chatbot named Grok.


Why Microsoft’s experiment faltered - and what that means for orbital designs

Those with knowledge of Project Natick say that while the prototype proved underwater data centers could operate, customer demand never materialized and the model was not scaled. Instead, cloud providers expanded conventional land-based facilities that supported cheaper, faster hardware refreshes - a capability that grows more important as AI chips advance year to year.

At the heart of Microsoft’s commercial struggle was its sealed, "locked-for-life" architecture. A unit placed on the seabed could function reliably for years but could not be patched, replaced, or incrementally upgraded on short notice. That design trade-off favors reliability over flexibility, but AI workloads place a premium on the ability to iterate quickly on hardware because chip performance and efficiency improve rapidly.

Economics compounded the product challenge. Sources familiar with Natick said that deploying undersea data centers cost more than equivalent land-based facilities. While costs might drop if the approach were taken to a very large scale, achieving such scale would demand tens of billions of dollars in investment - a threshold Microsoft did not find justifiable. Space, by most accounts, would be even more expensive.

Analysts at MoffettNathanson estimated in a research note that a program to launch a million AI satellites could run into the trillions of dollars. The firm also argued that to make orbital data centers commercially viable, launch costs would need to decline from the current low thousands of dollars per kilogram to the low hundreds of dollars per kilogram - a level not achievable with present market economics.
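The sensitivity to launch price can be shown with simple arithmetic. In the sketch below, the per-satellite mass and both price points are assumptions chosen to match the ranges described above, not disclosed figures from SpaceX or MoffettNathanson.

```python
# Illustrative launch-cost arithmetic for a one-million-satellite constellation.
# Satellite mass and $/kg price points are assumptions, not disclosed figures.

NUM_SATELLITES = 1_000_000
SAT_MASS_KG = 1_000  # assumed one-tonne data-center satellite

for price_per_kg in (3_000, 300):  # "low thousands" vs. "low hundreds" of $/kg
    total = NUM_SATELLITES * SAT_MASS_KG * price_per_kg
    print(f"At ${price_per_kg:,}/kg: ${total / 1e12:.1f} trillion in launch costs alone")
```

At the assumed one-tonne satellite mass, today’s prices put launch alone in the trillions of dollars, while a tenfold price drop brings it into the hundreds of billions - consistent with the firm’s argument about where costs would need to land.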

Independent satellite analyst Tim Farrar emphasized the distinction between technical feasibility and economic sense: "The problem is not whether something can work, but whether it makes sense economically versus simply building more capacity on the ground." Farrar pointed to the challenge of designing solutions that are cost-competitive with terrestrial alternatives.


Technical hurdles specific to orbit

Proponents say SpaceX can address the engineering and financial barriers. Elon Musk has argued that radically lower launch costs and more resilient chip designs will resolve issues such as radiation exposure, heat rejection in vacuum conditions, and the need for more frequent hardware replacement cycles.

However, any plan that relies on reduced launch prices hinges on Starship, SpaceX’s next-generation vehicle designed to be fully reusable and to carry far larger payloads than the company’s Falcon rockets. Starship’s test program has experienced delays and setbacks: the vehicle is years behind schedule and has endured explosive failures in some of its suborbital tests since 2023.

To meet the scale Musk has discussed, MoffettNathanson calculated that roughly 3,000 Starship launches a year would be required - an average of more than eight launches per day. That cadence is orders of magnitude beyond current operational norms for large orbital rockets.
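The cadence arithmetic behind that figure is simple division; the current-cadence baseline in the sketch below is a rough assumption included for scale, not an official flight-rate statistic.

```python
# Cadence check on the 3,000-launches-per-year estimate.
# The current-cadence baseline is a rough assumption, not an official figure.

REQUIRED_PER_YEAR = 3_000
print(f"Required cadence: {REQUIRED_PER_YEAR / 365:.1f} launches per day")

ASSUMED_CURRENT_PER_YEAR = 5  # assumed handful of Starship test flights per year
print(f"Scale-up needed: ~{REQUIRED_PER_YEAR / ASSUMED_CURRENT_PER_YEAR:.0f}x over that baseline")
```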

Other companies are exploring similar ideas. Blue Origin has outlined a concept called Project Sunrise, which would place compute capacity in orbit, drawing on solar power to complement terrestrial data-center infrastructure. Blue Origin did not provide additional comment for this analysis.


Market positioning and possible niches

Even skeptics acknowledge that space-based compute will likely find use cases, but they caution that such capacity is more likely to complement than to displace ground facilities. Claude Rousseau, who follows satellite markets, argued that orbital data centers are unlikely to replace terrestrial data centers in the foreseeable future and would more plausibly serve niche needs - for instance, processing for infrastructure that already operates in orbit, such as military satellite constellations or space stations.

The International Space Station already runs experiments that process data on-orbit to reduce reliance on downlink bandwidth. Those examples show how localized, mission-specific compute in space can be useful without amounting to a broad commercial data-center industry in orbit.
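The bandwidth case for on-orbit processing reduces to a simple ratio. In the sketch below, both data volumes are illustrative assumptions, not ISS measurements.

```python
# Illustrative downlink savings from processing data on-orbit.
# Both data volumes are assumptions for illustration, not ISS measurements.

RAW_GB_PER_DAY = 500      # assumed raw sensor output generated on-orbit
RESULTS_GB_PER_DAY = 2    # assumed size of processed results worth downlinking

savings = 1 - RESULTS_GB_PER_DAY / RAW_GB_PER_DAY
print(f"Downlinking results instead of raw data cuts bandwidth by {savings:.1%}")
```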

Nvidia’s chief executive, speaking on a public podcast, has taken a cautious view: the economics appear unattractive today and the logical priority should be expanding ground-based capacity first, given existing infrastructure and nearer-term returns.


Broader product and market considerations

From a product and adoption perspective, the comparison between undersea and orbital architectures highlights core preferences among enterprise buyers. Customers typically favor solutions that allow flexible scaling, easier hardware refresh cycles, and predictable total cost of ownership. Designs that prioritize sealed reliability above maintainability can be compelling for certain specialized use cases but struggle to attract broad commercial uptake where upgrade velocity and price-per-performance are decisive.

Chua warned that attempts to "escape" terrestrial problems may instead exchange familiar constraints for a new set of harder challenges. He pointed to realistic alternatives on Earth that could blunt the need for orbital compute: incremental improvements in AI chip energy efficiency, better water recycling technologies, expanded solar deployment, and modular nuclear generation. Those paths, he suggested, are likely more cost-effective and easier to scale than launching sealed data centers into orbit.


Where this leaves investors and the market

Space-based data centers present a technically intriguing idea with high engineering ambition, but their commercial viability depends on a confluence of breakthroughs: dramatic reductions in launch costs, chips engineered for prolonged operation in harsh environments, and an end-market willing to pay a premium for compute located off-planet. At present, several independent analyses and industry practitioners view those conditions as unlikely in the near term, which suggests that orbital compute will remain a niche complement to terrestrial cloud infrastructure rather than a wholesale replacement.

As firms pursue prototypes and research, the lessons from Microsoft’s Project Natick underline how engineering success does not automatically translate into a scalable, customer-driven business model. For now, most enterprises and cloud operators appear to prefer ground-based solutions that enable faster upgrades and more predictable economics as AI deployment accelerates.

Risks

  • Economic viability: High deployment costs and current launch prices mean orbital data centers could be far more expensive than terrestrial alternatives, requiring radical reductions in launch cost to become competitive - impacting aerospace and cloud infrastructure markets.
  • Upgradeability and product obsolescence: Sealed "locked-for-life" modular units cannot be easily repaired or upgraded, conflicting with AI industry demands for rapid chip refreshes and affecting cloud providers and AI hardware manufacturers.
  • Technical barriers in space: Challenges such as heat dissipation in vacuum, radiation exposure, and the harsh space environment could degrade AI hardware and increase replacement frequency, influencing semiconductor design priorities and operational economics.
