Introduction: Why Move Data Centers to Space?
Imagine a data centre that’s not anchored to an industrial park. Instead, it orbits silently above Earth, powered by unfiltered sunlight, cooled by the vacuum of space, and located where few humans tread. That’s the bold vision behind Starcloud. Traditional data centres face mounting constraints: soaring power consumption, rising cooling costs, land scarcity, and environmental pushback. Starcloud proposes a radical alternative: build the next generation of compute infrastructure in space.
In this article, we’ll dive deep into why Starcloud is pursuing this route, how they’re executing it, the promise and the pitfalls, and what it could mean for AI, cloud computing, sustainability and the space economy. We’ll reference recent announcements and ground them in current context to keep this factual and engaging.
Why Space for Data Centres?

The terrestrial pressure cooker
Data centres on Earth are under stress. The global demand for compute, especially from AI model training and inference, continues to climb. Estimates suggest demand for data-centre power could jump significantly by 2030. On Earth you face:
- Permitting delays, land availability issues
- Rising electricity and water costs (for cooling)
- Environmental scrutiny over carbon and freshwater usage
- Urban and suburban opposition to massive new facilities
These issues converge with AI’s escalating energy appetite, making new terrestrial builds increasingly challenging. Starcloud’s view: if the constraints on Earth are becoming a bottleneck, why not move the facility to a place without those constraints?
What space offers
Space offers three particularly compelling advantages:
- 24/7 solar energy – In low Earth orbit (LEO) or other favourable orbits, solar panels can soak up sunlight nearly continuously (or cycle in predictable ways), unfiltered by atmosphere or weather.
- Radiative cooling – Without air, heat in orbit can only be shed by radiation. Handled well, that becomes an advantage: waste heat is radiated directly to the cold of deep space, reducing the need for massive water-based cooling plants.
- Unlimited “land” for expansion – In orbit there are none of the zoning, groundwater-permitting, or local environmental restrictions that apply on Earth. Starcloud emphasises the ability to “avoid permitting constraints”.
Starcloud claims that by leveraging these factors they can build data centres with far lower operating costs and far fewer constraints. For example they estimate energy cost reductions of up to 10× compared to terrestrial operations.
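The radiative-cooling point can be made concrete with the Stefan–Boltzmann law. The sketch below sizes a one-sided radiator panel for a ~1 kW heat load; the radiator temperature, emissivity and sink temperature are illustrative assumptions for the arithmetic, not Starcloud figures:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# All numbers are illustrative assumptions, not Starcloud specifications.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(heat_watts, t_radiator_k, t_sink_k=3.0, emissivity=0.9):
    """Area (m^2) a one-sided panel needs to radiate `heat_watts`.

    Net radiated flux per unit area: eps * sigma * (T^4 - T_sink^4).
    """
    flux = emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)
    return heat_watts / flux

# A ~1 kW demonstrator with radiators held near room temperature (~300 K):
area = radiator_area(1_000, 300.0)
print(f"{area:.2f} m^2")  # roughly 2.4 m^2 of one-sided radiator
```

The encouraging part is the scaling: a kilowatt-class payload needs only a couple of square metres of radiator, but a 40 MW cluster at the same temperature would need tens of thousands, which is why the large-array designs matter.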
Starcloud’s Plan: The Roadmap

The company and foundation
Starcloud (formerly “Lumen Orbit”) is a Washington-state company that pitches itself as the builder of “data centres in space”. The founding team blends satellite and space-hardware experience with cloud and compute-infrastructure backgrounds.
Demonstrator: Starcloud-1
Their first mission, Starcloud-1, is a refrigerator-sized satellite slated for launch in late 2025, carrying an NVIDIA H100, the first data-centre-class GPU to operate in orbit. According to reports, the H100 aboard will deliver roughly 100× more compute than any prior satellite.
This launch serves as a proof of concept: can AI workloads, inference or even training, run on orbiting hardware with real-world connectivity, cooling and power management?
Commercial expansion: Starcloud-2 and beyond
Beyond Starcloud-1, they plan a commercial satellite, Starcloud-2, with a GPU cluster, persistent storage, 24/7 access, and proprietary thermal and power systems, aiming for full operation in sun-synchronous orbit by 2026. The end goal: a large orbital site (they mention a 5-gigawatt scale) with massive solar and cooling arrays (4 km × 4 km) and compute clusters that scale in orbit.
Key technical metrics & claims
- First data-centre-class GPU in orbit: the NVIDIA H100 aboard Starcloud-1.
- Power draw: modest for that level of compute (the demonstrator draws ~1 kW).
- Cooling: waste heat is radiated to deep space rather than removed by water or air.
- Energy cost claim: Starcloud calculates that operating a 40 MW cluster for 10 years costs roughly $167m on Earth versus ~$8.2m in space (including launch assumptions).
- Shielding: a quoted ratio of ~1 kg of shielding per kW of compute.
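As a sanity check on the $167m figure, the back-of-envelope arithmetic below works out what average electricity price that claim implies for a 40 MW load running continuously for 10 years. The calculation is purely illustrative; Starcloud’s actual cost inputs are not public here:

```python
# Illustrative check of the "40 MW cluster for 10 years costs ~$167M on Earth"
# claim: what average electricity price does that figure imply?
# This treats the $167M as pure energy cost, which is a simplifying assumption.

POWER_MW = 40
YEARS = 10
HOURS_PER_YEAR = 8_760

energy_kwh = POWER_MW * 1_000 * YEARS * HOURS_PER_YEAR  # total kWh consumed
assumed_price = 167e6 / energy_kwh  # $/kWh implied by the $167M figure

print(f"total energy: {energy_kwh:.3e} kWh")
print(f"implied electricity price: ${assumed_price:.3f}/kWh")  # ~ $0.048/kWh
```

An implied rate near $0.05/kWh is in the range of industrial electricity tariffs, so the terrestrial side of the claim is at least internally plausible; the contested part is the ~$8.2m space-side figure, which hinges on launch-cost assumptions.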
Crusoe’s Role: Building the First Public Cloud in Orbit

Crusoe has built its reputation on Earth by designing data centers that prioritize energy efficiency, often using stranded natural gas or renewable sources to power AI workloads. With Starcloud, Crusoe is taking this philosophy into orbit.
- Partnership with Starcloud: In October 2025, Crusoe announced a strategic partnership with Starcloud to deploy Crusoe Cloud modules aboard Starcloud satellites. The first launch is scheduled for late 2026, with GPU capacity expected to be accessible from orbit by early 2027.
- Public Cloud in Space: This collaboration will make Crusoe the first public cloud provider to run workloads in outer space. Customers will be able to access GPU resources directly from orbit, opening up new possibilities for AI training, inference, and real-time data processing.
- Energy-first strategy: By harnessing the sun’s abundant energy, Crusoe avoids the grid limitations and cooling challenges of terrestrial data centers. Solar panels in orbit generate continuous power, while the vacuum of space acts as a natural cooling system.
- Breaking the bottleneck: AI workloads are growing faster than Earth’s infrastructure can support. Crusoe’s orbital cloud aims to break this bottleneck by offering scalable compute capacity without the environmental and logistical constraints of land-based facilities.
- Long-term vision: Crusoe and Starcloud envision a distributed network of satellites forming a cloud constellation. This would allow global users to tap into orbital compute power, much like how terrestrial cloud services are accessed today.
In essence, Crusoe is not just extending its cloud into space; it is redefining what cloud computing means. By 2027 the company expects to offer limited GPU capacity from orbit, and by the early 2030s orbital data centers could rival Earth-based facilities in both scale and cost efficiency.
The Grand Vision: From a Single GPU to Orbital Data Centers

Starcloud 1 is a proof-of-concept, but the vision shared by Starcloud and its partners is far grander. They are looking toward a future of massive, free-flying data centers in orbit. This is where another company, ThinkOrbital, enters the picture.
ThinkOrbital is developing technology for in-space construction, proposing to use welding and additive manufacturing to build large structures directly in orbit. The idea is that instead of launching a pre-built, cramped data center in a single rocket, we could launch raw materials and robotic systems to assemble sprawling, optimized facilities in space that would be impossible to launch whole.
These future structures could be football-field-sized platforms covered in solar panels, housing thousands of GPUs and processors, and using the natural environment of space for efficient, passive cooling.
Use Cases & Applications

Starcloud sees multiple domains where orbital compute makes sense.
Earth observation and on-orbit inference
When satellites capture hyperspectral imagery or synthetic aperture radar (SAR) data, the sheer volume is massive and transmitting it to Earth can create bottlenecks. Instead, running inference on orbiting servers reduces latency and downlink burdens.
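A rough calculation shows why this matters. The sensor and link numbers below are assumptions chosen only to illustrate the arithmetic, not figures from Starcloud or any specific mission:

```python
# Rough illustration of why on-orbit inference eases downlink pressure.
# All sensor and link figures below are assumed for the sake of the arithmetic.

raw_gb_per_day = 500          # assumed hyperspectral/SAR raw capture per day
downlink_mbps = 200           # assumed average RF downlink rate
inference_output_mb = 50      # assumed size of detections/labels per day

# 1 GB = 8,000 Mb; time to push the raw take through the link:
seconds_to_downlink_raw = raw_gb_per_day * 8_000 / downlink_mbps
hours = seconds_to_downlink_raw / 3600
print(f"raw downlink time: {hours:.1f} h/day")   # ~5.6 h of dedicated link time
print(f"on-orbit inference output: {inference_output_mb} MB/day")
```

Under these assumptions, shipping raw data consumes hours of link time per day, while shipping only inference results is effectively free, which is the core argument for putting the compute next to the sensor.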
AI model training / large language models
While the initial launch may focus on inference or lighter models, Starcloud expects eventually to support training large AI models in orbit, on the scale of systems like Google Gemini. They claim their GPU cluster could support fine-tuning and training large language models (LLMs).
Sovereign and secure cloud environments
Because orbiting data centres are detached from terrestrial jurisdictional constraints, there’s a potential market in sovereign cloud compute, secure global data storage, and critical workloads needing isolation.
Scaling future AI infrastructure
Looking longer term, as AI models expand and require multi-gigawatt clusters, the terrestrial grid may struggle. Starcloud’s thesis: build hyperscale compute in orbit where you can scale without land, permitting or grid limitations.
Challenges and Realities

This vision is bold, and it comes with major hurdles.
Launch cost and mass limitations
Getting hardware into orbit is expensive, and every extra kilogram adds cost. While launch costs are trending downward thanks to reusable rockets, they still remain significant. The early missions will be small.
Radiation and reliability
Orbiting hardware faces cosmic radiation, solar flares, and micrometeoroids. Chips must be shielded, and fault-tolerant design is critical, since radiation can degrade processors, memory and other electronics.
Cooling and thermal control in vacuum
Though radiative cooling is theoretically efficient, designing radiator systems that shed enough heat within weight, volume and deployability constraints is non-trivial.
Connectivity & latency
Orbiting compute must still communicate with Earth or with other satellites. Laser/optical links, RF links and ground stations are all required, and latency, link availability and reliability become critical.
Maintenance and upgrade cycles
On Earth, data-centres can be serviced, upgraded, maintained by humans. In orbit, servicing is expensive and challenging. Modules may need to be highly reliable or easily replaceable.
Financial viability and scale
While cost models claim major savings in energy and cooling, one must include launch, orbit operations, replacement, failure rates, communications and other logistic costs. Some analysts remain sceptical about feasibility at large scale.
Regulatory, debris and space traffic
Deploying large satellite clusters or modular data centres in orbit introduces regulatory, orbital debris, coordination and satellite collision risks.
Why Now? What’s Changed?

A few factors make the timing interesting:
- Launch costs are steadily declining with new reusable rockets and increasing launch cadence.
- Solar panels and space hardware manufacturing have matured to make modular deployables more feasible.
- AI demand (especially training large models) is accelerating, creating urgency for scalable compute infrastructure.
- Earth-based constraints (real estate, energy, water, regulatory) are increasing costs and friction for new data-centre builds.
- The “space economy” momentum: more capital, more interest in orbital infrastructure beyond just communications or Earth observation.
In short: the confluence of AI demand × infrastructure constraint × falling space-access cost = a plausible environment for the Starcloud vision.
Implications for Industry & Society

For AI and compute infrastructure
If orbiting data centres become viable at scale, it could reshape where and how major AI training happens. Compute could increasingly migrate off-Earth, freeing terrestrial grids but also creating new geographies of compute (orbital rather than regional).
Environmental and sustainability angle
Starcloud argues that by moving compute off Earth you reduce freshwater consumption (no water‐cooling towers), reduce land use, and tap near unlimited solar in space, thereby lowering CO₂ emissions per unit compute. But this should be balanced against launch emissions, manufacturing and end-of-life disposal in space.
Global cloud / sovereignty & resilience
Orbiting compute may offer new cloud models: computing outside national boundaries, which can appeal to sovereign markets, special use cases (defence, remote sensing) or disaster-resilience setups.
Space economy growth
Companies like Starcloud add a dimension to the space economy beyond satellites and launches: compute infrastructure in orbit. This may catalyse new supply chains (space hardware, solar arrays, cooling radiators, modular assembly).
Equity and access
Interesting question: will this further centralise control of high-end compute in a few players who can afford orbital builds? Or will it democratise access globally? How will smaller companies or research labs tap into orbital resources?
A Reality Check

The concept is futuristic but grounded in factual announcements: Starcloud’s demonstrator is planned for late 2025, with an H100 GPU in orbit (a first of its kind). The company’s white paper and public statements show serious engineering planning. At the same time, many of the bold claims (multi-gigawatt orbital data centres, full AI model training in orbit) remain aspirational.
As one analyst noted:
“It isn’t likely there’d be much talk of sticking data centres in orbit if there wasn’t a borderline freak-out back on Earth about how to power them all.”
That line captures the spirit: space is being considered because terrestrial options are becoming constrained. But it doesn’t yet guarantee success.
Recommended Reading

For those looking to dive deeper into the topics of space infrastructure and the future of computing, consider these books:
- “The Space Barons: Elon Musk, Jeff Bezos, and the Quest to Colonize the Cosmos” by Christian Davenport. This book provides essential background on the players and the drive behind the modern space revolution.
- “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies” by Erik Brynjolfsson and Andrew McAfee. It explores the economic and societal impact of transformative technologies like AI, which are the primary drivers for projects like Starcloud.
- “Pale Blue Dot: A Vision of the Human Future in Space” by Carl Sagan. For a philosophical and long-term perspective on why humanity’s expansion into space matters, Sagan’s classic work remains unparalleled.
FAQ: Common Questions About Orbital Data Centers

Q1. What exactly is a “data centre in space”?
A: It refers to compute infrastructure (servers with GPUs, storage, networking) deployed in Earth orbit, or possibly other off-planet locales, rather than on the ground. The idea is to run workloads (AI training/inference, data processing) in orbit, leveraging solar power, radiative cooling and high-bandwidth connectivity to satellites or ground stations.
Q2. Why is NVIDIA’s H100 GPU significant in this context?
A: The H100 is a data-centre class GPU used for advanced AI workloads on Earth. Starcloud plans to deploy an H100 in orbit aboard Starcloud-1, making it one of the first ultra-high-performance AI chips in space. That helps make the claim of “real compute in orbit” credible rather than just “storage or simple servers”.
Q3. How much cheaper will orbital data centres be?
A: Starcloud claims up to 10× lower energy and operating costs compared to terrestrial facilities, citing the elimination of large water-cooling systems, continuous solar power, and fewer land and permitting costs. However, these estimates depend heavily on launch cost, hardware lifetime and maintenance.
Q4. When will we see these in use?
A: The demonstrator (Starcloud-1) is planned for late 2025. A more commercial module (Starcloud-2) is targeted for 2026. Full gigawatt-scale clusters are a longer-term goal, likely toward the end of the decade.
Q5. Are there competitors doing something similar?
A: Yes. While Starcloud is among the most visible, there are other firms exploring orbital or lunar compute/data-centre concepts. The broader trend is gaining interest in space-based infrastructure beyond communications satellites.
Q6. What are the major risks?
A: Major challenges include:
- Launch cost and risk of failure
- Hardware reliability under radiation and vacuum
- Thermal design of radiators in orbit
- Communication latency and bandwidth constraints
- Upgradability, maintenance, replacement in orbit
- Regulatory, orbital debris and space‐traffic concerns
- Economic viability: cost vs benefit compared to upgrading terrestrial infrastructure
Conclusion

Starcloud’s vision of data centres in space is ambitious, forward‐looking and anchored in real hardware plans. It reflects a recalibration of how we think about compute infrastructure in the AI era. If successful, the implications could be profound: lower energy cost compute, new global cloud architectures, and a thriving space-based compute economy.
At the same time, there are big technical, financial and operational hurdles. Launch costs, shielding, cooling, communications, maintenance and regulatory issues remain real. For now, the Starcloud project is a high-stakes experiment. But experiments like this, at the intersection of AI, cloud and space, are exactly the kinds of bets that might redefine decades of infrastructure.
The cloud may well leave Earth’s surface. And when it does, it may orbit silently, powered by sunlight and cooled by the cosmos. Whether that becomes mainstream remains to be seen, but the countdown has begun.