I’ve been granted an incredible opportunity. Over the past three and a half months I have led a real-world, large-scale delivery of a cloud solution. The final solution will be delivered as Software-as-a-Service (SaaS) to the customer via an on-premises managed service. While I have developed SaaS/PaaS (Platform-as-a-Service) solutions in the past, I was fortunate to build those on public cloud infrastructures. This has been a rare glimpse into the “making of the sausage,” requiring me to orchestrate everything from delivery of the hardware into data centers in four countries to testing and integration with the customer’s environment.

All I can say about this opportunity is that the phrase “it takes a village” applies well. I thought I’d share some important generalities about this type of effort. It’s important to note that the customer is a Global 100 company with data centers around the globe. Regardless of what the public cloud providers are telling the world, this application is not appropriate for public cloud deployment due to the volume of data traversing the network, the amount of storage required, the types of storage required (e.g., Write-Once-Read-Many), the level of integration with internal environments, and the requirements for failover.

The following are some observations about deploying cloud solutions at this scale:

  • Data Centers. As part of IT-as-a-Service (ITaaS) we talk a lot about convergence, software-defined data centers and general consolidation. All of this has major implications for simplifying management and lowering the total cost of ownership and operations of the data centers. However, we should not forget that it still takes a considerable amount of planning and effort to bring new infrastructure into an existing data center. Most critically, the data center is a living entity that doesn’t stop because work is going on, which means much of this effort occurs after hours and in maintenance windows. This particular data center freezes all changes from mid-December to mid-January to ensure its customers see no interrupted service during a peak period that includes major holidays and end-of-year reporting, which significantly impacted our ability to meet certain end-of-year deliverables. On-site surveys were critical to planning the placement of the equipment (four racks in total) on the floor to minimize cabling effort and ensure our equipment faced the right direction for hot/cold aisles. Additionally, realize that in this type of business, every country may have different rules for accessing the facility, operating within it, and racking your equipment.
  • Infrastructure. At the end of the day, we can do more with the hardware architectures now available. While we leverage virtualization to take advantage of the greater compute power, it does not eliminate the need to plan a large-scale virtual environment that must span countries. Sometimes the smallest details are the most difficult to work out; for example, how to manage an on-premises environment, such as this one, as a service. The difficulty here is that the network, power, cooling, etc. are provided by the customer, which requires considerable effort to negotiate shared operating procedures while still attempting to commit to specific service levels. Many of today’s largest businesses do not hold their internal IT organizations to the same penalties for failing to meet a service level agreement (SLA) as they would apply to an external service provider. Hence, service providers that must rely on this foundation face many hurdles in ensuring their own service levels.
  • Security. Your solution may be reviewed by the customer’s internal security team to ensure it complies with current security procedures and policies. Since this is most often not the team that procured or built the solution, you should not expect them to have warned you in advance about all the intricacies of deploying a solution for the business. The best advice here is to engage the security team early and often, starting as soon as your design is complete. In US Federal IT, deployment usually requires that those implementing the system obtain an Authority to Operate (ATO). Quite often, medium- and large-sized businesses have a similar procedure; it’s just not spelled out so succinctly. Hence, these audits and tests can introduce unexpected expenses and delays due to the need to modify the solution.
  • Software. Any piece of software can be tested and operated under a modest set of assumptions. When that same software must be deployed as part of a service that has to meet specific performance metrics, as well as recovery metrics in the case of an outage, it can fall flat on its face. Hence, the long pole in the tent for building out a cloud solution at this scale is testing for disaster recovery and scalability. In addition to requiring time to complete, this testing often requires a complementary environment for disaster recovery and failover, which can be a significant additional cost to the project. I will also note that in a complex environment software license management can become very cumbersome. I recommend starting the license catalog early and ensuring that it is maintained throughout the project (see the first sketch following this list).
  • Data Flow. A complex cloud-based solution that integrates with existing internal systems operating on different networks across multiple countries will have to cross multiple firewalls and routers and run along paths with varying bandwidth carrying varying levels of traffic. Hence, production operation and remote management can be impacted by multiple factors, both during planning and during operation. No matter how much testing is done in a lab, the answer seemingly comes down to, “we’ll just have to see how it performs in production” (see the second sketch following this list). So, perhaps, a better title for this bullet might be “Stuff You’re Going To Learn Only After You Start The Engine.” Your team will most likely have a mix of personalities. Some will be okay with this, having learned it on similar projects in their past; others will not be able to get past this point and will continually raise objections. Shoot the naysayer! Okay, not really, but seriously, adopt this mandate and make sure everyone on the team understands it.
  • Documentation. I cannot say enough about documenting early and often. Once the train has started, it’s infinitely more difficult to catch up. Start with good, highly reviewed requirements. Review them with the customer. Convene the Architecture Review Board (ARB, described below) and have them review and sign off. This is a complex environment with many interdependencies; it’s not simple to change one link without affecting many others. The more changes you can avoid, the more smoothly the process of getting a system into production will go.
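
On the license-management point above, a structured catalog that can be queried beats a spreadsheet nobody maintains. Here is a minimal sketch in Python of what a catalog entry might capture; the field names, products, sites and dates are hypothetical illustrations, not details from the actual project.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LicenseRecord:
    """One entry in a software license catalog (illustrative only)."""
    product: str                # licensed product name
    vendor: str                 # who to contact at renewal time
    license_type: str           # e.g. "per-core", "per-VM", "site"
    quantity: int               # number of entitlements purchased
    expires: date               # renewal/expiry date
    deployed_to: list = field(default_factory=list)  # sites/environments using it

    def expiring_within(self, days: int) -> bool:
        """True if this license needs renewal attention within `days` days."""
        return (self.expires - date.today()).days <= days

# Hypothetical entries; a real catalog would cover every product at every site.
catalog = [
    LicenseRecord("ExampleDB Enterprise", "Example Vendor", "per-core", 64,
                  date(2013, 6, 30), ["frankfurt-prod", "london-dr"]),
    LicenseRecord("ExampleBackup", "Example Vendor", "per-VM", 120,
                  date(2012, 12, 31), ["all-sites"]),
]

for rec in catalog:
    if rec.expiring_within(90):
        print(f"Renewal due soon: {rec.product} ({rec.license_type}), expires {rec.expires}")
```

Even something this small, kept current from day one, answers the questions that otherwise consume days late in the project: what is licensed, where it runs, and what expires next.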
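
On the data-flow point: since lab results rarely predict behavior across real WAN links, firewalls and shared bandwidth, one pragmatic response is a simple probe that runs continuously once you “start the engine” and records what production actually looks like. Below is a minimal sketch assuming hypothetical site endpoints; it measures only TCP connect latency, which is just one of the factors that matter, but it illustrates the measure-in-production habit.

```python
import socket
import time

# Hypothetical endpoints for the sites in each country; substitute real hosts/ports.
SITES = {
    "frankfurt": ("app.frankfurt.example.com", 443),
    "london":    ("app.london.example.com", 443),
    "tokyo":     ("app.tokyo.example.com", 443),
    "new-york":  ("app.newyork.example.com", 443),
}

def connect_latency_ms(host, port, timeout=5.0):
    """Return TCP connect latency in milliseconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

if __name__ == "__main__":
    for site, (host, port) in SITES.items():
        latency = connect_latency_ms(host, port)
        status = f"{latency:.1f} ms" if latency is not None else "UNREACHABLE"
        print(f"{site:10s} {host}:{port} {status}")
```

Run on a schedule from each site, a probe like this builds the baseline you will need when someone asks whether today’s slowness is the network or the application.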

Most important, and I cannot stress this enough, is building a team environment to accomplish the mission. Transforming a concept into a production-ready operational system requires a large number of people to work together cooperatively to address the hurdles. The solution as designed on paper will rarely match exactly what is deployed in the field, for the reasons stated above. This project relies heavily upon a Program Management Organization with representatives from engineering, managed services, field services, product and executive leadership to stay on track. Developing a sense of team within this group is critical to providing the appropriate leadership for the project as a whole. Subsequently, we also formed the ARB mentioned above, composed of key technical individuals covering each aspect of the solution, to address and find solutions for the major technical issues that emerged throughout the project. In this way we ensured the responses were holistic in nature, not just focused on the specific problem but also offering alternatives that would work within the scope of the entire project.

5 thoughts on “Notes From The Field: Inside A Real World Large-Scale Cloud Deployment”
  1. Excellent article. The Data Flow section (test in production) is very special; I have fought this battle for years. Thank you for stating it so clearly.

    The level of denial that still exists on this subject is frighteningly high. Don’t misunderstand: I believe in testing early and comprehensively, but the truth is that the “test” system is only a simulator. The “real” system, with all its integrations and their real-time confluences and collisions, is impossible to predict. Plan on testing in production; the more the better.

    A few years ago, I was on a team doing full-time monitoring and testing in production. It took 18 months before we could really trust the behavior of the “new” system. It is like raising a child. –Marnie

    1. I should have also mentioned that this system was in test for 11 months before it was released to production. It was well tested, or as “well tested” as a system can be in a non-production environment.

  2. Another echo of thanks for this article. Many familiar themes!

    I would posit that issues around procurement, data centres, security, integration, DR, distribution, documentation, engagement of architecture, etc. are equally relevant to ‘non-cloud’ apps of significant scale and have been for a long time.

    I would like to pose a question: what were your experiences or learnings framed in terms of, say, NIST’s key attributes of ‘cloud computing’ (elasticity, user provisioning, multi-tenancy, measured services)?

    Regards
    Mike

    1. Mike,

      Fair question. Unfortunately, there’s not a lot I can talk about with regard to that, as the information is proprietary. One anecdote I can add without giving too much away is that taking an existing base of software and retrofitting it to the attributes you referenced certainly adds a greater level of complexity than a completely greenfield effort designed to meet these requirements from the start. A lot of the cloud discussion to date focuses on software that has been designed to operate in a cloud environment, but this is unlikely to be the case when deploying enterprise cloud solutions, which will have to include software that is not “cloud ready”. The missing pieces will be around instrumentation, metering, reporting and resource management. Even when virtualization is used, it doesn’t eliminate the need for the software itself to support these requirements directly; there’s only so much that can be gleaned from the hypervisor layer.
