
The Path to the Intelligent Cloud

Let’s face it: right now the cloud is pretty immature. The level of automation and management in these environments is analogous to the early assembly lines, but it won’t stay this way for long. This is not the industrial revolution; it moves at a wicked fast pace. Before we know it, the next generation of cloud computing will be upon us, and it will be very different from the IaaS/PaaS/SaaS offerings we know today.
For one, it will be intelligent. That is, the cloud will be content-aware, and its network connections will act like mycelial hyphae: what one hypha learns will become available to the entire cloud. Whereas the current cloud is focused on scalability and elasticity, the next incarnation of the cloud will focus on redundancy, resiliency and collaboration. The debate over public, private or hybrid will become moot as the cloud simply becomes a system of nodes, with some nodes participating fully and others not participating at all. Nodes will contribute to the cloud on a controlled basis. Some organizations will host their own nodes, while others will pay service providers to host their nodes for them.
However, the bigger issue is not when this will occur, but why it must occur. It must occur because we are learning that no matter how far the cost of compute resources comes down, it will never be low enough to make hosting the zettabytes we’re interested in cost-effective on our own. The cloud today is teaching us a valuable lesson: content is king! Once we squeeze all the inefficiency and underutilization out of our data centers, there will be little cost savings left to derive from our own cloud infrastructure, but that won’t stop the machine once it’s started. Like any other successful ecosystem, once started, it eats its foundation and then starts feeding externally to survive. This pattern is how small companies become large corporations. This pattern is how small republics become big governments. The cloud as an ecosystem is no different, and it feeds on content. When it has consumed all the content we can provide about our own organizations, it will start to feed on external content. We are already starting to see this occur under the guise of “Big Data”.
The current debate over what cloud computing is amounts to a mere distraction, fostered by a market that is organically moving toward the intelligent cloud. That doesn’t undermine the effort underway, however, as it is a critical component of reaching the intelligent-cloud outcome: the consolidation of siloed compute stacks onto converged infrastructure is a necessary first step toward the node architecture of the intelligent cloud. But the lack of discussion of and focus on application rationalization will have the profound effect of limiting forward progress. Moreover, the limited tooling for inventorying applications and understanding the dependencies between application components forces the application rationalization process to rely heavily on human knowledge engineering.
Until the tools market for application rationalization matures, it is imperative that organizations get serious about building their Configuration Management Databases (CMDBs) and following IT Service Management (ITSM) processes. Failing to do so will significantly limit the upside that cloud computing can provide to the business. Sure, executives will be thrilled with the immediate cost reductions, but when was the last time anyone’s CEO said, two years later, “Don’t worry, Bill, you saved us $2 million two years ago; you’re still golden in my book”? The immediate savings from infrastructure consolidation, SaaS outsourcing and Big Data analytics will be short-lived, and the CEO will soon be asking when they can finally start to sunset some of those proprietary application stacks and move their applications onto that costly cloud infrastructure. Without the tools and without the ITSM foundations, the answer is going to be that it will require a big up-front spend to gain efficiencies and further cost savings in the future.
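
To make the CMDB point concrete, here is a minimal sketch, in Python, of the kind of dependency record a CMDB effort produces and the question it lets you answer: what else is affected if we rationalize or retire a given component? The class name, the example configuration items and the `impacted_by` helper are purely illustrative and do not reflect any particular CMDB product’s data model.

```python
# Minimal sketch: configuration items and their dependencies, so that
# "what breaks if we retire this component?" can be answered from data
# instead of human knowledge engineering. Names are illustrative only.

class ConfigurationItem(object):
    def __init__(self, name, ci_type, owner):
        self.name = name          # e.g. "orders-db"
        self.ci_type = ci_type    # e.g. "application", "database", "host"
        self.owner = owner        # accountable team or person
        self.depends_on = []      # other ConfigurationItem objects

    def add_dependency(self, other):
        self.depends_on.append(other)


def impacted_by(target, all_cis):
    """Return every CI that directly or transitively depends on `target`."""
    impacted = set()
    changed = True
    while changed:
        changed = False
        for ci in all_cis:
            if ci is target or ci in impacted:
                continue
            if any(dep is target or dep in impacted for dep in ci.depends_on):
                impacted.add(ci)
                changed = True
    return impacted


if __name__ == "__main__":
    db = ConfigurationItem("orders-db", "database", "dba-team")
    app = ConfigurationItem("orders-app", "application", "app-team")
    portal = ConfigurationItem("customer-portal", "application", "web-team")
    app.add_dependency(db)
    portal.add_dependency(app)

    # Which components are affected if orders-db is rationalized away?
    for ci in impacted_by(db, [db, app, portal]):
        print(ci.name)   # orders-app, customer-portal (order may vary)
```

Even a toy inventory like this is enough to turn the rationalization conversation from guesswork into a query.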

Cloud Needs Application Architects To Understand IaaS

Application development has been moving in the direction of platform abstraction. That is, detailed knowledge of the infrastructure an application would be deployed on was becoming less important to developers as the application platforms they developed for grew more sophisticated. Cloud computing is now reversing that trend, at least in the short term.

Actually, “platform abstraction” is a bit of a misnomer, since in practice it left operations struggling to tweak the infrastructure to meet performance requirements. Additionally, most applications typically had their own dedicated hardware, which allowed that hardware to be specialized to the needs of the applications deployed on it.

So, more accurately, cloud computing exposes the flaws in pure platform abstraction and in the ‘Chinese Wall’ between application development and operations, because operations now has fewer tweaks at its disposal to make an application perform in a multi-tenant environment. Hence, it is imperative that application architects begin to incorporate the impacts of operating in the cloud into their architectures. Application architects must understand how the application will perform given the environment it will be operating in.

Impacts that application architects will need to think about in this cloud world include:

  • Databases – running a highly available database in the cloud is a daunting task, especially without direct control over the storage. Providers like Amazon offer managed database services that can deliver better performance than standing up your own database on their IaaS, but there are also pitfalls.
  • Software failover – applications can now implement failover far less expensively using commodity hardware. Hence, failover should be built into the application itself instead of relying on the application platform or hardware infrastructure (a minimal sketch follows this list). Given that many application architects have not focused on this use case, it will take some education and experience before this becomes common practice.
  • Virtual networking – virtual networks let the application development team take control of their own application’s networking infrastructure. Once again, the lack of experience here means there are likely to be many misconfigurations that hurt the performance and availability of the application, in addition to opening up security flaws.
  • Instrumentation, logging and monitoring – these are areas where application development teams have been pushing responsibility off onto the application platforms. However, without visibility beyond the hypervisor, it is imperative that they build this back into the applications, or they may have significant trouble troubleshooting or auditing them (see the second sketch after this list).
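
As referenced in the software failover bullet above, here is a minimal sketch of what application-level failover can look like, assuming a set of interchangeable service replicas. The endpoint addresses and the `call_with_failover` helper are hypothetical; a real implementation would add health checks and smarter backoff.

```python
# Minimal sketch of application-level failover across redundant endpoints.
import time
import urllib.request

# Hypothetical, interchangeable endpoints for the same service; in a real
# deployment these would point at replicas in different failure domains.
ENDPOINTS = [
    "http://service-a.internal:8080",
    "http://service-b.internal:8080",
]


def call_with_failover(path, retries_per_endpoint=2, backoff_seconds=0.5):
    """Try each endpoint in turn, retrying briefly before failing over."""
    last_error = None
    for endpoint in ENDPOINTS:
        for attempt in range(retries_per_endpoint):
            try:
                with urllib.request.urlopen(endpoint + path, timeout=2) as resp:
                    return resp.read()
            except OSError as err:  # URLError and socket errors derive from OSError
                last_error = err
                time.sleep(backoff_seconds * (attempt + 1))  # simple linear backoff
        # This endpoint looks unhealthy; fall through to the next replica.
    raise RuntimeError("all endpoints failed; last error: {}".format(last_error))
```

The point is that the application, not the platform or the hardware, decides what happens when a replica disappears.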

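And for the instrumentation point, a minimal sketch of pulling logging and timing back into the application itself, using only Python’s standard logging module. The logger name and the `place_order` function are illustrative; the idea is that the application emits enough of its own telemetry that troubleshooting does not depend on visibility beyond the hypervisor.

```python
# Minimal sketch of application-owned instrumentation: structured-ish log
# lines plus a timing decorator. Names are illustrative only.
import functools
import logging
import time

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("orders-app")   # hypothetical application name


def timed(func):
    """Record how long each call takes in the application's own logs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.time() - start) * 1000
            log.info("call=%s duration_ms=%.1f", func.__name__, elapsed_ms)
    return wrapper


@timed
def place_order(order_id):
    # Hypothetical business logic; what matters is the event it logs.
    time.sleep(0.05)
    log.info("event=order_placed order_id=%s", order_id)


if __name__ == "__main__":
    place_order("A-1001")
```
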
As my famous Uncle Winthrop liked to say, “Now that I’ve given you a band saw, I need to teach you how to use it or you will just be wasting a lot of wood and in the worst case might lose a few fingers.”