
Can Telcos Succeed As Cloud Service Providers?

Someone sent me a link to this article this morning. While reading it I started to think about the competitive landscape for cloud service providers and the challenges telcos have competing in this market. After all, when you think cloud service provider, which names come to mind? Amazon? Microsoft? IBM? Google? Sure, but how about CenturyLink? AT&T? Verizon? What are the key differentiators between the first and second groupings? The first group is closely associated with software and innovation, while the second is associated with dropped calls, poor customer service experiences and expensive telecommunications services.

Understandably, telcos face a perception challenge in being considered competitors on par with technologically innovative firms. However, these vendors have a strong value proposition: they are tied to the core telecommunications infrastructure (can we say cloud doesn’t work without network?) and have deep experience operating hosted infrastructure on behalf of businesses for the better part of the last three decades. Indeed, many businesses still have a significant base of compute infrastructure being operated by telcos today.

One name that I particularly see as a standout here is CenturyLink. Their acquisitions of Savvis, AppFog and Tier3 have provided them with a solid foundation for delivering cloud computing services, including both Infrastructure- and Platform-as-a-Service offerings. CenturyLink’s approach to cloud differs a bit from the commodity public cloud service provider approach in that they focus on enterprise users looking for an integrated offering, versus the AWS and Azure users who seem to appreciate a do-it-yourself approach to building an operational environment for applications. Hence, it is difficult to do an apples-to-apples comparison solely based on price, as compute alone will be three times as much as AWS, but it includes capabilities not included by default in AWS, such as monitoring, dedicated hardware and support. Of note, this model will see growing pressure now that IBM is offering up dedicated hardware on demand through its SoftLayer division.

This approach, let’s call it external private cloud, seems to be quite popular among operations staff as a common ground: it delivers the benefits of on-demand, OpEx-based enterprise computing along with the traditional operational controls that enterprise IT staff are familiar and comfortable with. I find this very interesting when you consider the rise of interest in DevOps and the significant base of developers flocking to the pure public cloud service provider experience, such as that offered by AWS.

While I don’t see an issue with leveraging on-demand public cloud for development and testing and external private cloud for production, the tools to make this seamless are only now starting to become available and remain highly immature for this type of activity. Moreover, it perpetuates the development/production divide that we see today among businesses still managing their own internal infrastructure. Thus, the Dev/Ops partition continues unabated, leading to the common problem of building for one environment and deploying to production in another.

But I digress… Back to the point at hand.

The offerings from the likes of Amazon, Microsoft, IBM and Google will continue to grow in use, as these are often the first place users go when they are looking for on-demand compute services. That said, when it comes time for enterprises to deploy production versions of applications they deem operationally important, it shouldn’t be a surprise to find that they are probably talking to their telco provider. Of note, the telco provider also has favorable sourcing status with procurement, given that it is already providing networking and telecommunications services. Getting Amazon or Google added to those lists might seem like a no-brainer, but typically, sourcing is not seduced by sexy tech vendor names.

Why Platform-as-a-Service is the Operating System Of The Cloud

When I took my operating systems fundamentals course in college, I was taught that an operating system provides very specific capabilities that give users access to compute resources for building and running applications. Over time, as networking capabilities and bandwidth increased, the notion of a set of modules that interface between the user and the hardware has expanded to incorporate concepts of distributed operating systems, network operating systems and autonomous systems. While the notion of the operating system may have changed, certain attributes have remained constant:

  • scheduling processes
  • coordinating interaction among processes, interprocess communication and synchronization
  • managing system resources
  • enforcing access control and protection
  • maintaining system integrity and performing error recovery

When looking at container-based PaaS offerings, such as Cloud Foundry and Heroku, one can see many of these functions in operation across a set of virtual compute resources. If we consider that Infrastructure-as-a-Service (IaaS), bare metal and virtualized hardware inclusive of traditional operating systems, such as Windows and Linux, all represent the modern-day equivalent of a compute node in a cloud universe, then we can take the leap that the PaaS provides the interface between the user and that node. Moreover, we can include in this list of resources the services that support application operations, such as identity management, data management, messaging and monitoring.
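
To make that node analogy concrete, here is a minimal Python sketch of the idea. The class names, node sizes and service endpoints are all hypothetical, not any particular PaaS API; the point is simply that the platform sits between the application and the compute nodes, binding backing services much as an OS binds devices and file handles to a process.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeNode:
    """An IaaS VM or bare-metal host: the cloud analogue of CPU and memory."""
    name: str
    cpus: int
    memory_gb: int

@dataclass
class BackingService:
    """A managed capability such as a data store, message broker or monitor."""
    kind: str
    endpoint: str

@dataclass
class Application:
    name: str
    required_cpus: int
    bindings: dict = field(default_factory=dict)

class Platform:
    """The PaaS layer: the interface between the user's application and the nodes."""
    def __init__(self, nodes, services):
        self.nodes = nodes
        self.services = {s.kind: s for s in services}

    def bind(self, app, kind):
        # Analogous to an OS handing a process a device handle or descriptor.
        app.bindings[kind] = self.services[kind].endpoint

    def place(self, app):
        # Analogous to the OS assigning a process to a CPU with free capacity.
        node = next(n for n in self.nodes if n.cpus >= app.required_cpus)
        return f"{app.name} scheduled on {node.name} with services {app.bindings}"

paas = Platform(
    nodes=[ComputeNode("vm-1", cpus=4, memory_gb=16)],
    services=[BackingService("postgres", "db.internal:5432"),
              BackingService("metrics", "statsd.internal:8125")],
)
app = Application("orders-api", required_cpus=2)
paas.bind(app, "postgres")
paas.bind(app, "metrics")
print(paas.place(app))
```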

If we explore the role of the PaaS in cloud application development and delivery, we can see that the platform overlays a set of cloud nodes and services, exposing their resources to the application runtime environment. The PaaS then handles application lifecycle management, inclusive of execution, process allocation and resource scheduling, access control and protection to foster multitenancy, and error recovery. Hence, a container-based PaaS meets the criteria to be considered an operating system.
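
The lifecycle-management half of that claim can be sketched as a simple reconciliation loop, again with invented names rather than any real platform's API: the platform keeps an application at its desired instance count and replaces failed instances, which is the cloud-level analogue of an OS scheduling processes and recovering from errors.

```python
import random

class Instance:
    """One running copy of an application: a 'process' in the cloud OS."""
    def __init__(self, app_name):
        self.app_name = app_name
        self.healthy = True

    def probe(self):
        # Stand-in for a real health check; fails about 20% of the time.
        self.healthy = random.random() > 0.2

def reconcile(app_name, desired, instances):
    """Drive actual state toward desired state, as an OS keeps runnable
    processes executing and replaces those that die."""
    instances = [i for i in instances if i.healthy]   # error recovery
    while len(instances) < desired:                   # process allocation
        instances.append(Instance(app_name))
    return instances

instances = reconcile("orders-api", desired=3, instances=[])
for _ in range(5):                                    # a few control-loop ticks
    for i in instances:
        i.probe()
    instances = reconcile("orders-api", desired=3, instances=instances)
print(f"{len(instances)} healthy instances of orders-api")
```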

Perhaps even more interesting are the comparisons that can be drawn between application development for a single operating system and for a cloud operating system. One constant that remains as we look across time at operating systems is the increasing level of abstraction. Each level of abstraction has afforded us the ability to focus less on resource limitations, but even a cluster of virtualized compute resources still has capacity limitations.

With the emergence of a cloud operating system, we have the opportunity to finally escape those limitations by spanning and aggregating clusters of virtualized compute resources. Moreover, provisioning of these resources is delegated to services that are designed optimally for the physical resources they manage. For example, the cloud operating system/PaaS can communicate with the cloud management systems to identify where there are resources that can satisfy the need for very high-speed (I/Os per second) storage. Each cloud management system can then list its resources and corresponding metrics and availability. The PaaS can then request the one that best meets the criteria for the application. That is, the PaaS is the one environment that knows enough about the performance needs of the application to schedule the resources and bind them to the application.
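
As a rough illustration of that hand-off, the sketch below has the PaaS ask each cloud management system for its storage offerings and metrics, then bind the one that satisfies the application's IOPS requirement. The catalog format, system names and figures are made up for the example, not any vendor's API.

```python
# Hypothetical catalogs returned by two cloud management systems.
catalogs = {
    "cms-east": [
        {"volume": "standard", "iops": 3_000,  "available": True},
        {"volume": "ssd",      "iops": 20_000, "available": True},
    ],
    "cms-west": [
        {"volume": "nvme",     "iops": 80_000, "available": False},
        {"volume": "ssd",      "iops": 25_000, "available": True},
    ],
}

def bind_storage(required_iops):
    """Select an available volume that satisfies the application's I/O
    requirement across every cloud management system."""
    candidates = [
        (cms, offer)
        for cms, offers in catalogs.items()
        for offer in offers
        if offer["available"] and offer["iops"] >= required_iops
    ]
    if not candidates:
        raise RuntimeError("no cloud can satisfy the requested IOPS")
    # 'Best meets the criteria' is interpreted here as the smallest offering
    # that still clears the requirement, i.e. no over-provisioning.
    return min(candidates, key=lambda c: c[1]["iops"])

cms, volume = bind_storage(required_iops=15_000)
print(f"binding {volume['volume']} storage from {cms} ({volume['iops']} IOPS)")
```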

With these types of abilities, we can start our designs for cloud applications from a perspective of plenty instead of managing to constraints. We can start to specify service levels within our applications, which can then be interpreted by the PaaS and turned into bound resources during execution. We get an appropriate division of work across an application execution supply chain. The bare metal provides maximum movement of bytes to the physical device. The hypervisor divides those resources into dynamically allocated blocks. The virtualization clusters allow those blocks to be moved around to maximize utilization of resources. And the PaaS can communicate across clusters to select the best set of available resources to ensure optimal execution of the application for which it is responsible.
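
One way to picture "specifying service levels within our applications" is a declarative descriptor that the platform interprets at deploy time. The keys, thresholds and translation rules below are purely hypothetical, intended only to show the shape of the idea rather than a real manifest format.

```python
# A hypothetical service-level declaration shipped alongside the application.
service_levels = {
    "orders-api": {
        "p99_latency_ms": 200,
        "storage_iops": 15_000,
        "min_instances": 3,
    }
}

def interpret(app_name):
    """Translate declared service levels into concrete resource requests
    that the platform can schedule and bind during execution."""
    sla = service_levels[app_name]
    requests = []
    if sla["p99_latency_ms"] <= 250:
        requests.append("place instances in the region closest to users")
    requests.append(f"bind a storage offering of at least {sla['storage_iops']} IOPS")
    requests.append(f"keep at least {sla['min_instances']} instances running")
    return requests

for action in interpret("orders-api"):
    print(action)
```

The design point is that the application states the outcome it needs and the PaaS, sitting at the top of that supply chain, decides which clusters and resources to bind to deliver it.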