The Sky's the Limit: Why Sky Computing is the Cloud’s Future

Discover why Sky Computing is the logical evolution of cloud infrastructure, delivering the freedom traditional cloud providers promised but never delivered.

Brian Irish
Engineer
Not unlike a bowl of petunias falling out of the sky.

Published on February 26, 2025


The Forecast is Cloudy

Cloud computing revolutionized the IT industry. It delivered capabilities that traditional infrastructure could never match: on-demand scalability, pay-as-you-go pricing models, and near-instant global reach. Businesses no longer needed to invest heavily in physical servers or complex maintenance – cloud providers took care of everything. The cloud made it possible to experiment and innovate faster, and it allowed startups and enterprises alike to focus on their products instead of on their data centers. For a while, it seemed like cloud computing was the ultimate solution.

But now, cracks are starting to show. Vendor lock-in ties companies to proprietary tools and ecosystems, making migration increasingly complex. The overwhelming cost and difficulty of moving data between clouds, sometimes called data gravity, has created virtual moats around each provider. And of course, no list of cloud drawbacks would be complete without egress fees: AWS charges $0.09 per GB simply to move your data out, which can cost enterprises hundreds of thousands of dollars annually. So while the cloud liberated us from hardware constraints, it has clapped us in new, very expensive handcuffs.

Cloud Repatriation

So then, what’s the path forward? Companies like 37signals have shifted back to on-premises infrastructure, and many others are in the process of doing the same. A report by Citrix last year found that:

“42% of organizations surveyed in the United States are considering or already have moved at least half of their cloud-based workloads back to on-premises infrastructures, a phenomenon known as cloud repatriation.”

While I applaud these companies’ pragmatic view of their infrastructure costs, it raises a question: did they meticulously plan their cloud usage from the start with cost optimization in mind, or are they simply reacting to unexpectedly high AWS bills? Most evidence points to the latter. Companies now “repatriating” their infrastructure often failed to implement basic cloud cost controls from day one, such as automatically shutting down dev environments during off-hours, using spot instances for batch workloads, or enforcing resource quotas. Rather than fixing these foundational issues, they’re choosing to abandon the cloud completely.
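One of those basic controls, shutting down dev environments during off-hours, can be as simple as a scheduled check. Here’s a minimal sketch in Python; the instance list and its fields are hypothetical stand-ins for what a real cloud SDK would return:

```python
from datetime import time

# Hypothetical inventory; in practice this would come from a cloud SDK query.
instances = [
    {"id": "i-dev-01", "env": "dev", "running": True},
    {"id": "i-prod-01", "env": "prod", "running": True},
]

def off_hours(now: time, start=time(20, 0), end=time(7, 0)) -> bool:
    """True between 20:00 and 07:00, i.e. outside working hours."""
    return now >= start or now < end

def instances_to_stop(instances, now: time):
    """Select running dev instances during off-hours; prod is never touched."""
    if not off_hours(now):
        return []
    return [i["id"] for i in instances if i["env"] == "dev" and i["running"]]
```

Run nightly from a scheduler, this alone can trim a meaningful slice off a cloud bill without touching production.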

Taking a step back to on-prem might not truly be a step forward in the long run. But if we remain in the cloud, how do we address the fundamental challenges of vendor lock-in, data gravity, and rising costs in a transformative way, rather than just applying incremental fixes?

Sky Computing: The Next Evolution of Cloud Computing

Sky Computing is a common-sense path forward for our fractured cloud ecosystem. Imagine a “cloud of clouds”, where workloads flow seamlessly between providers, free from lock-in and inefficiency, without you having to lift a finger. It’s not just another insufferable buzzword (or buzzphrase, for you pedants); it’s the next logical evolution in cloud infrastructure. Removing provider-specific complexity through its abstraction layer (explained below) enables businesses to prioritize performance, cost, and compliance over loyalty to a single vendor. It’s the freedom the cloud always promised but never delivered.

“Wait, doesn’t Multi-Cloud do this already?”

Not exactly. Multi-cloud strategies involve leveraging multiple cloud service providers to distribute workloads. While this approach offers benefits like redundancy and access to diverse services, it often results in fragmented operations. Each cloud platform operates in its silo, requiring distinct management tools and expertise. This fragmentation leads to increased complexity and inefficiencies, negating some of the advantages of a multi-cloud setup.

Sky Computing, on the other hand, overcomes the limitations of multi-cloud by unifying disparate cloud environments into a cohesive, interoperable ecosystem. Instead of treating each cloud as an isolated entity, Sky Computing orchestrates them to function as a single, harmonious infrastructure. This integration eliminates silos, enabling seamless interaction and workload mobility across all participating clouds.

Why Sky Computing?

A Seamless Cloud Experience

As an end user of a Sky Computing product, you’ll no longer care which cloud provider your applications run on. You interact solely with the Sky Computing broker you’ve chosen. Its abstraction layer decides which cloud platform is best for your workload (or accepts your preferences, should you have any). The broker’s decision may change over time as provider costs rise and fall, as placement algorithms improve, or as your own requirements change (data locality, for example).

The more cloud platforms the abstraction layer supports, the greater the flexibility it has in choosing the best cloud platform for your business and applications, and the greater cost savings it can pass on to you.
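To make the broker’s decision concrete, here’s a toy selection routine in Python. The prices and provider names are illustrative placeholders, not live quotes; a real broker would pull these from pricing APIs:

```python
# Hypothetical per-hour prices; a real broker would query live pricing APIs.
PRICES = {"aws": 0.102, "gcp": 0.095, "azure": 0.099}

def choose_provider(prices, preferred=None, excluded=()):
    """Pick the cheapest provider, honoring optional user preferences.

    A preferred provider wins if it's still eligible; excluded providers
    (e.g. ruled out by compliance) are never considered.
    """
    candidates = {p: cost for p, cost in prices.items() if p not in excluded}
    if preferred in candidates:
        return preferred
    return min(candidates, key=candidates.get)
```

The more providers appear in that price table, the better the floor the broker can find, which is exactly the flexibility argument above.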

Regulations and Resilience

Rising regulations like GDPR and CCPA are forcing companies to comply with strict data sovereignty rules, demanding infrastructure that adapts to regional requirements. And when major outages occur, like the ones that have left entire businesses offline for hours, the dangers of single-cloud dependency become plain. Sky Computing isn’t just an opportunity; it’s becoming a critical next step for businesses that need to stay agile, resilient, and competitive in this rapidly changing world.

The 3 Pillars of Sky Computing

Pillar 1: Abstraction

At the heart of Sky Computing lies abstraction, which serves as the glue that unifies disparate cloud platforms. Through a compatibility layer leveraging tools like Kubernetes, Ray, and standardized APIs, Sky Computing hides the complexities of individual clouds. This means you won’t need to worry about whether your data sits in AWS S3 or Azure Blob Storage—the abstraction layer handles those details, deciding on the optimal storage based on cost and performance.

By providing a “write once, run anywhere” experience, abstraction eliminates vendor lock-in and simplifies application deployment. As more cloud platforms become supported, the abstraction layer offers greater flexibility, enabling smoother transitions and substantial cost savings without requiring you to change how you develop or manage your workloads.
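A minimal sketch of what that storage abstraction might look like. The classes and cost figures here are hypothetical; real backends would wrap the S3 and Blob Storage SDKs behind the same interface:

```python
class ObjectStore:
    """Minimal common interface; concrete backends would wrap boto3,
    azure-storage-blob, etc. behind these two methods."""
    def put(self, key, data): ...
    def get(self, key): ...

class InMemoryStore(ObjectStore):
    """Stand-in backend so the sketch runs without cloud credentials."""
    def __init__(self, name, cost_per_gb):
        self.name = name
        self.cost_per_gb = cost_per_gb  # hypothetical $/GB-month figure
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

def cheapest_store(stores):
    """The abstraction layer picks storage by cost; callers never see which."""
    return min(stores, key=lambda s: s.cost_per_gb)
```

Application code calls put and get; whether the bytes land in S3 or Blob Storage is the layer’s decision, not the developer’s.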

Beyond today’s hyperscalers like AWS, GCP, and Azure, you’ll see support for neocloud providers like Lambda Labs, CoreWeave, or Nebius, giving you even greater operational flexibility.

Pillar 2: Automation

Building on the foundation of abstraction, automation is the next pillar driving Sky Computing forward. Intercloud brokers embody this pillar, serving as intelligent decision-makers that manage workload placement, cost optimization, and compliance across multiple clouds.

These brokers continuously analyze factors like pricing, resource availability, and regulatory requirements using AI and real-time data. They automatically route or adjust workloads based on current conditions, ensuring your applications always run in the most cost-effective and efficient environment. This removes the manual overhead of juggling multiple cloud providers, reduces the chance of human error, and lets you focus on higher-level tasks while the system optimizes operations behind the scenes.
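One way such a broker might weigh these factors is a hard filter on compliance followed by a weighted cost function. This is an illustrative sketch, not any broker’s actual algorithm; every name and number below is an assumption:

```python
# Hypothetical telemetry a broker might collect each scheduling cycle.
candidates = [
    {"provider": "aws",   "price": 0.102, "latency_ms": 18, "compliant": True},
    {"provider": "gcp",   "price": 0.095, "latency_ms": 25, "compliant": True},
    {"provider": "azure", "price": 0.090, "latency_ms": 30, "compliant": False},
]

def best_placement(candidates, w_price=10.0, w_latency=0.1):
    """Hard-filter on compliance, then minimize a weighted score.

    Lower price and lower latency both win; the weights express how much
    the user cares about each, and would be tuned per workload.
    """
    eligible = [c for c in candidates if c["compliant"]]
    return min(eligible, key=lambda c: w_price * c["price"] + w_latency * c["latency_ms"])
```

Note that the cheapest raw price (azure in this toy data) loses to the compliance filter, and the winner flips depending on how latency is weighted, which is the kind of trade-off the broker re-evaluates continuously.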

Pillar 3: Agility

The final pillar, agility, is about creating a responsive and flexible cloud ecosystem where data and workloads move freely. Reciprocal peering agreements are key to this agility. These agreements are collaborations between cloud providers that allow for free or low-cost data transfers, breaking down barriers such as egress fees and data gravity.

As these agreements take shape—often organically driven by hyperscale providers keen to support popular brokers—workloads can move seamlessly between clouds. This dynamic environment empowers businesses to adapt quickly to shifting costs, regulatory changes, or performance requirements without being locked into a single provider.

Crucially, this level of agility opens up the cloud ecosystem in ways never seen before. By tearing down silos and encouraging collaboration between providers, Sky Computing creates an interconnected landscape where innovation thrives. Smaller and more specialized neocloud providers gain a seat at the table, fostering competition and driving breakthroughs in service offerings. Enterprises can mix and match services from various providers without fear, leveraging the best features from each platform to suit their needs.

The result is an agile infrastructure that can pivot on demand, offering both resilience and the flexibility to innovate and disrupt rather than just iterate. This unprecedented openness not only breaks the barriers of vendor lock-in but also sparks a whole new era of creativity and efficiency across the entire cloud industry, fundamentally changing how businesses harness cloud technology.

Real-World Examples

Sky Computing is not just a theoretical framework—it has practical applications that transform how businesses run complex workloads. Here’s how we’re already seeing it play out in the real world.

AI/ML Workloads

In the world of AI and machine learning, different stages of a pipeline may benefit from different cloud providers’ specialties. For example, a company could split its ML pipeline: run model training on Google Cloud, which offers TPU-optimized instances for deep learning; perform inference on AWS, utilizing their Inferentia chips for lower latency; and handle data preprocessing on Azure, benefiting from its robust data services. By strategically placing each stage where it performs best, organizations gain speed, cost savings, and the ability to comply with regional data regulations. The concepts of a unified platform and intelligent routing introduced earlier come into play here, as Sky Computing brokers manage this orchestration—dynamically routing workloads to the optimal environment for each task without manual intervention.
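In broker terms, that split is just a placement plan mapping pipeline stages to providers. A toy sketch, with a hypothetical capability map based on the example above:

```python
# Hypothetical capability map: which provider best serves each stage,
# mirroring the example split described above.
STAGE_PLACEMENT = {
    "preprocess": "azure",  # robust data services
    "train":      "gcp",    # TPU-optimized instances
    "inference":  "aws",    # Inferentia chips for low-latency serving
}

def plan_pipeline(stages, placement=STAGE_PLACEMENT, default="aws"):
    """Return (stage, provider) pairs; unknown stages fall back to a default."""
    return [(stage, placement.get(stage, default)) for stage in stages]
```

A real broker would derive this map from live pricing and hardware availability rather than a static table, but the output, an ordered plan of where each stage runs, is the same idea.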

Global Data Compliance

Regulatory requirements like GDPR in Europe or CCPA in California mandate strict handling of data based on location. Sky Computing can automatically route workloads and data to the correct geographical region to meet these legal requirements. This ensures compliance without sacrificing performance, as the system intelligently selects the cloud environments that best balance regulatory needs with operational efficiency.
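A simplified sketch of that routing logic. The jurisdiction-to-region mapping below is a made-up example, not legal guidance:

```python
# Hypothetical mapping of data-subject jurisdiction to permitted regions.
RESIDENCY = {
    "EU": ["eu-west-1", "europe-west1"],  # GDPR-style in-region constraint
    "CA": ["us-west-1"],                  # illustrative CCPA-style constraint
}

def route(jurisdiction, available_regions, residency=RESIDENCY):
    """Return the first available region permitted for this jurisdiction.

    Raising on no match is deliberate: failing loudly beats silently
    placing regulated data in the wrong region.
    """
    allowed = residency.get(jurisdiction, available_regions)
    for region in available_regions:
        if region in allowed:
            return region
    raise ValueError(f"no compliant region available for {jurisdiction}")
```

Unregulated data falls through to any available region, so the constraint only costs flexibility where the law demands it.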

Enterprise Batch Jobs

Batch processing tasks, such as large-scale data analysis or report generation, often require significant computational resources and time. Sky Computing’s cost-aware brokers analyze available resources across multiple clouds and choose the most cost-effective option to run these jobs. By doing so, enterprises can save millions on large-scale batch workloads, as the brokers not only find the cheapest compute options but also optimize job scheduling to take advantage of low-cost, high-performance opportunities.
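The cost-aware placement for a batch job reduces to a scan over spot quotes. The figures below are hypothetical; real numbers come from each provider’s pricing API:

```python
# Hypothetical spot quotes ($/hour) for an equivalent VM shape.
QUOTES = {"aws": 0.031, "gcp": 0.027, "azure": 0.029}

def cheapest_batch_run(quotes, hours):
    """Pick the lowest spot quote and estimate the job's total cost."""
    provider = min(quotes, key=quotes.get)
    return provider, round(quotes[provider] * hours, 2)

def savings_vs(quotes, hours, baseline):
    """Estimated savings of broker placement versus always using `baseline`."""
    _, best_cost = cheapest_batch_run(quotes, hours)
    return round(quotes[baseline] * hours - best_cost, 2)
```

The per-hour deltas look tiny, but multiplied across thousands of node-hours of batch work they become the savings the brokers are chasing.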

Challenges to Sky Computing Adoption

Adopting Sky Computing won’t be all rainbows and unicorns. It comes with its own set of challenges that need to be navigated carefully.

Standardization

Achieving universal standards across all cloud platforms is unlikely due to competitive interests and proprietary technologies. However, progress can still be made by leveraging partial compatibility sets—common ground in widely adopted tools like Kubernetes, Ray, and S3 APIs. These standards don’t cover every scenario but provide a practical bridge, allowing Sky Computing to move forward without waiting for complete industry-wide uniformity.

Economic Resistance

Large cloud providers may resist reciprocal peering agreements, as sharing data freely between platforms can conflict with their business models. While this resistance exists, smaller cloud providers and innovative startups have strong incentives to embrace Sky Computing principles. Their agility and desire to compete with larger players drive them to support the ecosystem, gradually encouraging wider adoption and putting pressure on the bigger providers to reconsider their stance.

Infrastructure Inertia

Organizations have significant investments in their existing cloud infrastructure: not just cost, but also expertise, tooling, and operational processes. Many firms are understandably hesitant to make dramatic changes to their infrastructure stack, especially when it comes to adopting paradigms like Sky Computing that don’t yet have widespread adoption. This resistance is compounded by the fact that existing cloud deployments often work “well enough,” even if they’re not optimal in cost or performance.

The overhead of retraining staff, updating deployment pipelines, and potentially refactoring applications to work with Sky Computing’s abstraction layer can seem daunting to many organizations. Additionally, there are perceived risks around reliability and support when moving away from established cloud providers’ native services. These factors create significant inertia that must be overcome for widespread Sky Computing adoption.

The Challenge of Legitimacy

The concept of Sky Computing faces some uphill battles in establishing legitimacy, particularly in light of recent events. A visit to Wikipedia’s Sky Computing entry reveals a troubling warning banner questioning the reliability of sources and noting a lack of academic citations. This stems from an incident where a commercial entity attempted to shape the narrative around Sky Computing through Wikipedia editing, leading to their eventual ban from the platform.

This highlights a broader challenge: as emerging technologies gain traction, there’s often a rush by commercial entities to stake their claim as thought leaders or pioneers, sometimes through questionable means. This can inadvertently damage the credibility of legitimate technological advances. Sky Computing, as an architectural evolution of cloud computing backed by academic research and technical merit, deserves to be evaluated on its technical foundations rather than through marketing efforts.

The incident serves as a reminder that transformative technologies often face skepticism when commercial interests precede widespread technical validation. However, the fundamental value proposition of Sky Computing—providing a unified interface across cloud providers while optimizing for cost, performance, and compliance—stands independent of any single company’s implementation.

Final Thoughts

I genuinely believe that we’re at a turning point in cloud infrastructure. I don’t think Sky Computing is just another buzzword—it’s a practical fix for real problems. It brings together different cloud services into one smooth system, making life easier for businesses and SREs who need reliable, flexible, and efficient operations. At the end of the day, this makes a sizeable dent in balance sheets, and that’s what matters.

As more Sky Computing solutions emerge, tech leaders worldwide will continue to notice. They’ll see the benefits and quickly move their workloads to these smarter, more open, and more cost-effective cloud setups.

The future of the cloud is here, knocking at our door. It’s an exciting moment to rethink how we build and manage systems that stand up to real-world demands—more resilient, more adaptable, and ready for what’s next.
