Even in the Cloud, You’re Oversubscribed

Ever since Nicholas Carr wrote The Big Switch and introduced the concept of utility computing, the industry has been enamored with the idea. “Pay for what you use, and nothing more” is the current mantra and mindset, and the entire industry is moving inexorably toward such a model.

Except the model’s not there – and it may never be.

A true utility model, based on consumption, is not available for most infrastructure resources simply because the systems don't exist to track such fine-grained usage data as memory and CPU cycles. Oh, the data is there. If you've examined the output from top on a Linux box or taken a close look at Task Manager in Windows, you'll find it. But it isn't being collected on a per-process or per-application basis by billing systems.
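To see that the data really is there, here's a minimal Python sketch using only the standard library. It reads the same per-process CPU and memory figures that top displays — exactly the raw numbers a true utility billing system would need, already tracked by the OS (the `resource` module is POSIX-only, an assumption about the platform):

```python
import os
import resource

# The OS already tracks per-process usage -- the same data top shows.
# getrusage reports it for the current process; no extra tooling needed.
usage = resource.getrusage(resource.RUSAGE_SELF)

print(f"PID {os.getpid()}")
print(f"user CPU time:   {usage.ru_utime:.3f} s")
print(f"system CPU time: {usage.ru_stime:.3f} s")
print(f"peak RSS:        {usage.ru_maxrss} kB")  # kB on Linux, bytes on macOS
```

The gap the article describes isn't in measurement — it's that nothing wires figures like these into a billing pipeline.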

In the cloud, the only resources truly billed on a usage basis are bandwidth and storage. Perhaps that's due to the maturity of network and storage infrastructure, honed over years of usage-based policy enforcement, security and metering. Such capabilities were easily adapted to billing on a per-usage basis, on a utility model.

Compute resources, on the other hand, are not so capable. Not even in the cloud, where pay-per-use is the song we’re taught to dance to.

When you go to provision resources for an application, you're presented with a choice of instances of varying sizes, each with a pre-allocated set of compute resources. You're faced with an unpalatable set of choices about how to provision and manage future capacity needs. Do you provision minimally, or for expected maximums (oversubscribe)? Do you rely on scaling out via auto-scaling (which is automatic only at runtime; the configuration remains highly manual even today), or on scaling up?
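The "auto" in auto-scaling really is just the runtime trigger; everything else is hand-written policy. A minimal sketch makes the point — every threshold and bound below is configuration a human had to choose (all numbers are illustrative assumptions, not any provider's defaults):

```python
# Hand-configured policy: the thresholds, step size, and bounds are all
# manual decisions. Only the evaluation at runtime is "automatic".
SCALE_OUT_ABOVE = 0.70   # add capacity past 70% average CPU (assumed)
SCALE_IN_BELOW = 0.30    # shed capacity under 30% (assumed)
MIN_INSTANCES, MAX_INSTANCES = 2, 10

def desired_instances(current: int, avg_cpu: float) -> int:
    """Return the instance count a simple threshold policy asks for."""
    if avg_cpu > SCALE_OUT_ABOVE:
        current += 1
    elif avg_cpu < SCALE_IN_BELOW:
        current -= 1
    return max(MIN_INSTANCES, min(MAX_INSTANCES, current))

print(desired_instances(4, 0.85))  # busy: scale out to 5
print(desired_instances(4, 0.10))  # idle: scale in to 3
```

Whether those thresholds match your workload is entirely on you — which is the "highly manual" part.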

It’s a telling choice, because you’re going to be charged for the compute whether you use it or not. While the instance is running, all available resources provisioned to that virtual instance are yours — bought and paid for. If you only use 25 percent, well, that’s too bad. You’re paying for 100 percent.
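The arithmetic of that 25 percent example is worth spelling out. Using a hypothetical $0.40/hour instance running for a month (the rate and hours are assumptions for illustration, not any provider's pricing), here's the gap between allocation-based billing and a true-utility model that doesn't exist today:

```python
# Hypothetical numbers: an always-on instance billed for allocation,
# versus a (nonexistent today) true-utility model billing only the
# fraction of compute actually consumed.
hourly_rate = 0.40          # assumed instance price, $/hour
hours_running = 24 * 30     # one month, always on
utilization = 0.25          # the article's 25 percent example

allocated_cost = hourly_rate * hours_running
utility_cost = allocated_cost * utilization

print(f"billed for allocation:  ${allocated_cost:.2f}")
print(f"billed for actual use:  ${utility_cost:.2f}")
print(f"paid for idle capacity: ${allocated_cost - utility_cost:.2f}")
```

Three quarters of the bill buys capacity that sat idle — yet under allocation-based billing, that's exactly what you signed up for.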

Now, from a certain point of view, you are paying for what you use. After all, your virtual instance is what you're paying for; you're using it and the resources allocated to it. No one else can use them while your instance is active. Compute isn't shared like the network, after all; it's allocated by the system to a specific process. We could easily get into the weeds explaining heaps and memory-allocation schemes at the system level, but it's the result that matters: resources allocated to your virtual machine are yours and yours alone. Even if your application isn't using them at the moment, you are, by virtue of having launched that virtual machine, and that's what you're paying for. You're paying for access to a given set of resources.

Oversubscription, in other words, is the order of the day. Cloud or traditional, it doesn't matter what deployment model is being used: you're oversubscribing the compute available to your application because that's the way the systems work today, and may always work, given the isolation and security (the basics of multi-tenancy) required in a shared environment like the cloud.

Comments

  1. Unca Alby says:

    You don’t pay for what you use. You pay for what you *can* use.

    It’s like a hotel with a swimming pool and an exercise room and a free shuttle to the airport will charge more than one without, whether or not you happen to partake of those services.

  2. Fred Bosick says:

    Every aspect of computation, CPU, bandwidth, storage, is getting cheaper all the time. A successful application always uses more resources as time goes by, storing more data and serving it to more clients. For many compute resources, it’s not the difficulty of measuring and recording this data but the pointlessness of charging for increments – a SAR output from a UNIX box will tell you more than you want to know. But the CPU must exist whether you use 1% or 100% of it. Yes, you can partition a CPU, but your RAM must be large enough to store the working memory of every app sharing the CPU else your context switch times become ridiculous.

    Because outsourcing and offshoring have become so widespread and, allegedly, effective, people think “the Cloud” is just further along the path. But there are no data warehouses in India or China, at least for Western consumption. You’re paying US or Western Europe electricity and infrastructure costs. And you’re at the mercy of the cloud operator for security and uptime. Even the Amazon Cloud falls over, and it won’t be the last time either. Never mind the ease with which the NSA can just tap the routers and switches.

    The Cloud is a joke! And considering the widespread resentment of Microsoft’s subscription model for Office apps, using a cloud service for computing is rather hypocritical. If you can’t be bothered to own (or lease) and control the hardware storing and computing with your business-critical data, it must not be worth very much to you.
