Top 3 reasons why cloud computing is unstoppable

Sunday, December 12, 2010 gc 6 Comments

In my view, here are the top 3 reasons why cloud computing is unstoppable:

1. Business Continuity
2. Operational Agility
3. Cost


Business Continuity
Business continuity is about ensuring that critical business functions continue smoothly not only in the event of a disaster, but also through events such as acquisitions, moves, bankruptcies, landing a large customer, losing a large customer, or losing key personnel. Any of these can wreak havoc on business continuity, and they matter enormously to business stakeholders, the people ultimately in charge of how IT money is spent.

Today, the pace of business change is relentless: new product offerings, upturns, downturns, acquisitions, geographic expansion, moves, exponential growth, spikes, campaigns, and so on.

Imagine a scenario: you acquire a company with a mess of servers, networking, cabling, and software installs. Moving those systems and components into the parent company takes time, and many such migrations fail outright. The parent company wants to absorb the acquisition and hit the gas pedal, but the process is often slow and disrupts business continuity.

Some companies are financed with acquisition as the exit strategy, yet their back end simply is not structured for it. You become a far more attractive acquisition target if your deployments are standardized in the cloud.

Imagine that you are a small to medium-sized business and your biggest customer, a Fortune 50 company, wants to use your solution for their entire company. In this kind of deal, a software escrow agreement will not be enough; they will want your solution to be in the cloud, so that if your company goes under, they can simply take ownership. They acquire virtual property rather than trying to untangle a mess of software and hardware hard-coded to a particular location with odd configuration and deployment.

The cloud, particularly Platform as a Service (PaaS), provides key mechanisms that help with business continuity: elasticity, redundancy, standard deployments, and so on. You are protected against disaster and able to scale up or down within minutes rather than weeks, months, or years. The standardized deployments are easily understood and can be smoothly handed to a new team, not a mess of custom scripts, installs, and error-prone manual magic.

For example, Windows Azure provides triple redundancy for Azure Storage and SQL Azure, keeping three synchronized copies of your data and failing over to another geographic location when needed. This is rarely done on premise, even for critical systems, because of the complexity and expense of running redundant data centers. In Azure it is built into the cloud OS itself, really the fabric of the cloud.

In my view, the killer benefit of cloud computing is business continuity.

Operational Agility
Especially with Platform as a Service (PaaS) such as Microsoft Azure, operational agility comes from an architecture that scales horizontally with ease. You can dynamically allocate containers of services up or down and run just the capacity you need under a pay-for-use model.
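To make the idea concrete, here is a minimal sketch of the kind of scale-out/scale-in loop a PaaS deployment enables. The helper functions `get_average_load` and `set_instance_count` are hypothetical stand-ins for whatever metrics and management API your platform exposes (in Azure's case, the Service Management API); the thresholds and the simulated load are illustrative only, not a real implementation.

```python
import random
import time

# Hypothetical helpers: stand-ins for a real platform metrics/management API.
# They are simulated here so the sketch runs on its own.

def get_average_load():
    """Simulated average utilization (0.0 - 1.0) across current instances."""
    return random.uniform(0.0, 1.0)

def set_instance_count(count):
    """Pretend to ask the platform for `count` running instances."""
    print(f"requesting {count} instances")

MIN_INSTANCES = 2    # keep some redundancy even when idle
MAX_INSTANCES = 50   # cap spend
instances = MIN_INSTANCES

for _ in range(10):  # one evaluation per interval; a real loop would run forever
    load = get_average_load()
    if load > 0.75 and instances < MAX_INSTANCES:
        instances += 1            # scale out when busy
    elif load < 0.25 and instances > MIN_INSTANCES:
        instances -= 1            # scale in when idle
    set_instance_count(instances)
    time.sleep(1)                 # a real loop might wait minutes between checks
```

The point is not the specific thresholds; it is that capacity becomes a number you adjust in a loop rather than hardware you order and rack.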

If you are deploying on virtual or physical machines, you must plan for peak capacity. That can be manageable if your load is consistent, but it is usually a challenge. Ideally you would plan for an efficient amount of overcapacity, yet that is very difficult, sometimes impossible, to estimate. Driven more often by fear than by data, systems end up dramatically overdeployed rather than efficiently overdeployed.
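As a rough back-of-the-envelope illustration of why peak-capacity planning is so wasteful, consider the following. The load numbers are made-up assumptions; the point is the ratio between average load and what you end up provisioning.

```python
# Back-of-the-envelope: utilization when you provision for peak.
# All numbers below are illustrative assumptions, not measurements.

average_load = 20      # average concurrent units of work over a month
peak_load = 200        # worst-case spike you must survive (e.g. a campaign)
safety_margin = 1.5    # extra headroom, "driven by fear"

provisioned = peak_load * safety_margin
utilization = average_load / provisioned

print(f"Provisioned capacity: {provisioned:.0f} units")
print(f"Average utilization:  {utilization:.1%}")   # about 6.7% in this example
```

Most of that hardware sits idle most of the time, which is exactly the waste an elastic, pay-for-use model avoids.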

This kind of operational agility will change business models. My understanding is that Pixar has recently deployed RenderMan into the cloud, bursting capacity when a movie needs it and offering it as a service to smaller studios.

The ability to burst out to thousands of computing containers and then back down to zero will create new business opportunities, even for the little guys.

Cost
It is easy to spend half a million bucks on a few servers, SANs, load balancers, network equipment, and racks. Make it redundant and you double that, on top of your provider rates. It is a big capital expenditure, and you will still wait weeks before any of it is ready to use, and much longer for complex issues to be resolved.

With the cloud, you pay for use without the huge upfront capital expenditure, and you leverage economies of scale and best practices. Microsoft has invested $8 billion in its cloud initiative. Based on a platform briefing I attended at Microsoft, they are buying 1 out of every 4 servers sold in the US and 1 out of every 3 worldwide. How can you compete with that economy of scale?

The starting retail price for Windows Azure is currently 5 cents per compute hour. Everyone I have dealt with has seen 70%+ savings in total cost of ownership. That is significant.
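Using that 5-cents-per-compute-hour figure, here is a rough comparison of pay-for-use against an always-on fleet sized for peak. The instance counts and burst pattern are illustrative assumptions, not a quoted TCO, and they cover compute only.

```python
# Rough monthly compute cost comparison at $0.05 per instance-hour.
# Instance counts and burst pattern are illustrative assumptions.

rate = 0.05                 # dollars per compute hour (Windows Azure retail, circa 2010)
hours_per_month = 730

# Always-on fleet sized for peak:
peak_instances = 20
always_on_cost = peak_instances * hours_per_month * rate

# Elastic usage: a small baseline plus short bursts to peak:
baseline_instances = 2
burst_instances = 20
burst_hours = 40            # say, a few heavy campaign days per month
elastic_cost = (baseline_instances * hours_per_month
                + burst_instances * burst_hours) * rate

print(f"Always-on at peak: ${always_on_cost:,.2f}/month")
print(f"Elastic usage:     ${elastic_cost:,.2f}/month")
print(f"Savings:           {1 - elastic_cost / always_on_cost:.0%}")
```

The exact percentage depends entirely on how spiky your load is; the flatter your demand, the smaller the gap.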

Closing
In the early days of the industrial revolution, manufacturing companies maintained their own power plants; now everyone uses and trusts power as a utility. In my view, Windows Azure is far closer to utility computing than any other cloud provider. You do not have to maintain the stack, it gives you elasticity to meet demand, and you get a utility of computing and storage containers.

The same will be true for industrial computing; it will take a while, but it will happen. Don't wait too long, because someday you might face a 30% cut in your IT budget, and it will hurt because you will be upside down.

I have been working with Microsoft Azure, and it delivers all three. In my view, Microsoft is the clear leader in cloud computing, or more broadly in IT as a Service, which spans SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service). Microsoft is "all-in" with the cloud.

Does your environment provide you with all three?

Let me know what you think about the key benefits of cloud computing.


6 comments:

Nazik Huq said...

Excellent article. One of the main incentives to move to the Azure cloud is to reduce total cost of ownership (TCO). It's true that you can save money running an application on Azure versus running the same application on premise, but the application development and code maintenance components of the TCO equation remain unchanged. I believe you still face the same challenges you face developing software on premise, so the cost savings are mainly on the operational side. I wonder what percentage of TCO these savings actually reduce. If that number is marginal, then the incentive is low for moving to Azure for the "cost savings" reason alone. If cost savings aren't the issue, then another compelling reason to move to the cloud is, as you mentioned, agility. Azure does give you a great platform to build applications through powerful abstraction of its IaaS, and it provides development flexibility in that respect.

Ken Cameron said...

MS Azure, like Google and Amazon, is geared towards consumers and SMBs. None of these venues has the maturity to deal with enterprise-class environments. Just look at the outages they have incurred in the last 18 months. The Bing services (also classified as an MS "enterprise"-class service) suffered an 8-hour outage. There was no "business continuity." In fact, the outage was caused by a change made mid-morning on a Monday, and they did not have a back-out plan. In an enterprise, one or more senior managers would be fired. At MS, it was simply an "oops, sorry!" moment.

Cloud computing is definitely here to stay, but as a concept that crosses many boundaries. For enterprises, every CIO should be moving company IT infrastructure towards "cloud" capabilities and using outside providers to shore up where they struggle in-house. Enterprises will generally use private clouds for the foreseeable future. Some will venture into select areas like email, collaboration, or CRM, but core systems will remain "controlled." Those private clouds will eventually become hybrids, borrowing some external resources as add-ons.

Anonymous said...

Ken Cameron - Those types of outages occur much more regularly at Enterprise NOCs & IT Shops than they do in the cloud. Most Enterprises also pay 2-10x more for uptime that is supposedly equal to or better.

Also note - Bing was not, and still is not, a cloud service. It is not set up, architecturally speaking, from a cloud perspective; it is hosted in a traditional enterprise-style IT environment. MS is notorious for that, but if you look at the other cloud providers (AWS, Rackspace, Joyent, GoGrid, and others) you'll find that such things haven't occurred, and where they have, it has been at a much more moderate and minor frequency (i.e., a few minutes here or there).

...and again I repeat, real "cloud computing services" easily have better uptime than the majority of enterprise environments. The mythic five 9s is just that, a myth.

...and don't even get me started on security. As for private clouds, that is another case of misnaming something. A private cloud is like "private electrical" production: nobody does that anymore, and the companies that tried to keep doing it at the turn of the last century generally killed themselves off.

But I see that otherwise you generally agree.

gc said...

@ken cameron

Ken: you make great points. I agree that there are some dark moments ahead in the cloud, and that full enterprise market saturation will take much longer than usually expected--much longer. See this post for more info: http://gregcowin.blogspot.com/2010/09/why-will-it-take-30-years-for-cloud.html.

But the fabric will turn out to be a much better solution than anything provided the conventional way. Built on best practices, replication, elasticity, and a heartbeat, it will eventually surpass the abilities of the average IT infrastructure specialist. Traditional outage examples outweigh the cloud outages: an IT guy wipes out a RAID 10 array through human error, a medium-sized company ignores email best practices and never recovers from a lost disk, and so on.

To my surprise, even skilled and smart people do not really know how to come up with a recovery plan, or even a backup plan, and they often lose everything. The smartest people seem to lose critical assets. Here, as with power plants, utility thinking is what will prevail in computing; it is the next step. Follow the abstraction: everything is getting virtualized, and the furthering of this abstraction is inevitable. It goes beyond the OS to the swarm.

By the way, not exactly seasonal for this post, but the biggest outage window for many organizations is the July 4th and Labor Day weekends. These weekends have statistically turned out to be bad for the system priests. The reason: companies, buildings, and campuses shut off power building-, plant-, or campus-wide, and it gets hot without AC. The result: more IT outages happen during these windows than at any other time. It recurs every year, but most people in the US don't notice because they are out eating hot dogs. One or more of those critical, oftentimes under-a-desk systems gets found the hard way during this window.

The real suffering for IT guys: you don't notice them until something goes wrong, and then you talk about them. The same is true for the controllers of the fabric (the cloud); you don't notice them until something goes wrong. The fabric will get stronger, which is really the only way these autonomous systems prevail.

Berkeley Orders of Magnitude (BOOM) addresses recovery-testing aspects in the cloud. Here is the link: http://boom.cs.berkeley.edu/.
