Monthly Archives: October 2009

The Reason Enterprise Architects Should Study Economics

Economics is the social science that studies the production, distribution, and consumption of goods and services, and aims to explain how economies work and how economic agents interact. The really interesting thing about economics is that getting the right answer requires posing the right question. That is, if you draw conclusions from casual observations of explicit cause/effect relationships, those conclusions will seem logical and sound to most who review them. However, only by ensuring that the cause/effect relationship holds up when the causal variables are changed can you be sure that you have truly observed a phenomenon and identified its cause accurately.

I’ve been reading Freakonomics (looking forward to Super Freakonomics when I’m done), and I’m astounded at how well Steven Levitt’s approach to economics could fit the needs of Enterprise Architecture. For example, in Freakonomics, Levitt uncovers the truth behind many long-held beliefs, such as the fact that most drug dealers live at the poverty line, or how little parenting actually impacts the outcome of a child. What helped to uncover these facts was asking the right question and identifying which variables in the equation needed to remain static to ensure a valid outcome. In many cases, Levitt undermines what other economists, experts, and pundits before him rolled out as proven fact; only his keen mind and his approach to formulating the problem domain allowed him to uncover what his peers could not. To keep us focused, I won’t discuss here how he discovered why violent crime rates dropped in the ’90s, but needless to say, long-held beliefs about the cause were shredded by Levitt’s innate ability to target just the right variables and “run the numbers.”

If we apply Freakonomics-type analysis to current information technology markets, such as Cloud Computing, then based on the experts and pundits one would readily conclude that: a) public Cloud Computing models offer a more cost-effective way to acquire computing resources; b) it’s better to rent software than to own it; and c) in the future, all computing will be paid for based on usage rather than ownership. Similarly, if you read the headlines of many tech publications, you would believe that: a) everyone is doing SOA; b) there are many successful SOA deployments; and c) SOA is the only approach to developing systems that you should consider. None of these conclusions is true in all cases, maybe in no cases, and, in fact, the real numbers would probably tell a very different story [I’m leaving this as an exercise for the reader, since I want to focus this piece on the approach to EA, not the application of analysis to these issues]. These statements are being made arbitrarily, based on extremely small sample sizes, and are about as valid as looking around you at a large business gathering, noting all the iPhones and BlackBerrys, and concluding that the iPhone and BlackBerry must outsell all other phones, when in fact their combined share is a mere 3.0-3.5% of the overall mobile phone market.

In actuality, I see a lot of IT decisions being made with what seems to be no more exploration than was used to determine that iPhones and BlackBerrys have a leading market share. It is the Enterprise Architect’s role to ensure that the selection of, and approaches toward, development of systems are sound relative to their own business, not just other businesses. Moreover, where decisions are based on another business’s success, those successes need to be properly vetted to ensure that there is enough commonality between the efforts that you could reasonably expect your business to see similar results. Finally, assumptions and theories need to be tested by properly identifying the variables that need to remain static and then comparing; in essence, normalizing the question to be homogeneous across all situations. For example, in Freakonomics, Levitt points out that to validate the numbers behind why violent crime dropped in the ’90s, he needed to review trends in areas that had not made the purported changes credited with the large, unanticipated decrease. When the trends were normalized to remove noise and anomalies, it was shown that those factors had relatively little impact on the decrease in violent crime.

This brings us to the all-important question: “How do we demonstrate the value of Enterprise Architecture?” Unfortunately, you cannot demonstrate the value of Enterprise Architecture if you cannot monetize or enumerate the value of all possible choices relative to the choices being recommended or those that have already been made. Moreover, it’s critical that these analyses are carried out over enough time that short-term wins don’t supersede long-term potential gains. Thus, it is here that Enterprise Architects, especially those we call Chief Architects, truly show their mettle. It is their experience, coupled with the ability to focus on the right set of variables, to understand the impact of changes to those variables, and to communicate that in a way that allows the business to make effective business decisions, which sets top-notch practitioners apart from Sr. Software Engineers whom the organization placated with a title to keep them happy so they wouldn’t leave.

IT Business Edge’s Loraine Lawson Interviews Me On SOA & EA

A couple of weeks ago I did an interview with Loraine Lawson (@lowrain), who covers integration technology for IT Business Edge.  There were two different aspects to the interview: one focused on my blog entry “Perhaps SOA is More Strategy Than Architecture,” and the other on my beliefs about Enterprise Architecture and why I still believe strongly that EA holds great value for business, and that businesses should not be deterred by past missteps due to ineffective applications of EA.

Part I, entitled “Is Strategy the Key Characteristic of SOA?”, is located here.

Part II, entitled “In Defense of the Enterprise Architect,” is located here.

Response to InformationWeek’s Bob Evans Commentary on CIO Suicide

Bob Evans, senior VP and director of InformationWeek’s Global CIO unit, recently published his thoughts and findings on why CIOs should no longer focus on aligning IT with the business, but instead focus on aligning with the customer.  In the piece, Bob goes on to illustrate how some businesses have done innovative things to increase revenues and build their brands with the help of their IT staff.  The overall piece focuses on negative CIO stereotypes and goes so far as to suggest that attempting to align IT and the business is a red herring that is not seen as strategic by the business and is unattainable.

To this, I say “Hooey!”  IT/business alignment can be as much a bunch of malarkey as it can be a powerful transformative force within the business.  How successful an alignment effort is will depend on how well it is supported by the other executive managers.  This piece just fosters more of that grotesque American desire for immediate gratification that has gotten us into the economic calamity we’re now in.  It favors the borrow-and-consume mentality over saving and spending conservatively.

Good IT/business alignment is an INVESTMENT in the business’s future.  However, with CEOs focused only on their next quarterly report to shareholders and on how much they’re walking away with in bonuses, it’s no wonder that they have no grasp of, or desire to spend on, what might be (and should be) considered a long-term investment strategy.  The software and systems that support the business need care and maintenance.  If you create one of those newfangled, throw-it-up-in-a-week systems and your customers love it, you will need the appropriate design and infrastructure to scale it and keep it running.  If you fail at this, your customers will sting you quickly and painfully, because they too are hooked on that immediate-gratification buzz and have no stomach for your temperamental application that is limited by the 10-year-old legacy application feeding it.

If your business is truly focused on its customers and interested in long-term strategies and quality, then you will select a CIO that has forethought into what customers need, but will ensure that the applications and infrastructure necessary to support those customers are there and have the proper levels of investments.

I would be remiss if I did not note that the business’s attitude toward IT from the start, and its awful selections for CIO, are a leading cause of negative CIO stereotypes, not CIOs’ failed approaches to aligning IT with the business.  A good CIO is actually a person with good business subject matter expertise and a solid grasp of technologies and how to apply them to meet the needs of the business.  Years of selecting the head of accounting are still plaguing the industry, and will continue to do so until the business learns that the CIO needs to be more IT consultant than CFO.

My Prediction of Cloud & SOA in 2005

Okay, maybe it’s petty and I’m just tooting my own horn, but I found this old article I wrote for Upstream CIO’s October issue (written in July ’05).  Rereading the article today, I surprised myself with how aware I was of the forthcoming Cloud & SOA convergence.

The popularity of Service-Oriented Architecture (SOA) is gaining ground on the heels of the success of eXtensible Markup Language (XML) as a means of representing data in a technology-neutral manner and moving data between applications. However, SOA plays a more important role than merely serving as a moniker for applications that communicate using XML. SOA is the software world’s entry into the utility computing model.

For CIOs, the utility computing model has come to be synonymous with the ability to provision computing resources on demand, thus maximizing these resources based upon needs of the organization and its customers. In some cases, the utility computing model has begun to incorporate metering, which allows IT organizations to charge back usage to individual departments or projects, thus enabling the CIO to demonstrate the need for the current resources and to plan properly for capital expenditures of new resources.

Moving to a utility computing model helps organizations consolidate computing resources, which results in lower costs of ownership and management of those resources.  However, due to today’s inefficient software designs, most computing resources are dedicated to a single function, such as Web applications or enterprise resource planning. Such dedicated resource allocation means you cannot re-provision them to other tasks when the demand hits. Instead, IT departments are forced to acquire enough computing resources to satisfy each function running at 100%. This results in organizations overspending on computing resources to compensate for this limitation imposed by the software. Thus, today’s application models are undermining consolidation efforts and limiting the cost savings to the organization.

Enter SOA, which offers a software architecture that maps more closely to the utility computing models that have already been adopted, allowing better allocation of resources and greater provisioning controls. SOA enables organizations to develop and, more importantly, deploy their software in a loosely coupled fashion. This means the software is designed as a set of black boxes that have well-defined inputs and outputs, such that they can be linked together in a process flow with ease.

SOA operates on the concept of service providers and consumers who have agreed upon service contracts. These contracts are based on well-defined messages that are passed between the provider and the consumer, but at an abstract level, they are no different than the agreements that you have with any other utility, such as the phone or electric company. Indeed, supporting these contracts requires the same service level agreements that would be expected of mission-critical utilities.
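To make the contract idea concrete, here is a minimal sketch, with the caveat that every name in it (the message types, the service, the prices) is invented for illustration and comes from no real SOA toolkit. The provider and consumer agree only on the shape of the request and response messages; the provider’s internals remain a black box.

```python
from dataclasses import dataclass

# Hypothetical contract: these message types are the entire agreement
# between provider and consumer, much like a utility's service terms.

@dataclass(frozen=True)
class QuoteRequest:        # message the consumer sends
    symbol: str

@dataclass(frozen=True)
class QuoteResponse:       # message the provider returns
    symbol: str
    price: float

def quote_service(request: QuoteRequest) -> QuoteResponse:
    """Provider-side black box: the consumer never sees this implementation."""
    prices = {"ACME": 42.0}   # stand-in for a real back end
    return QuoteResponse(request.symbol, prices.get(request.symbol, 0.0))

# Consumer side: depends only on the message types, not the implementation.
response = quote_service(QuoteRequest("ACME"))
print(response.price)  # 42.0
```

Because both sides depend only on the messages, the provider can be rewritten or redeployed without any change on the consumer side, which is the loose coupling described above.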

While there is no concrete definition of SOA today, the generally agreed-upon attributes of SOA include being:

  1. Based on technology-neutral standards, such as XML and Web Services;
  2. Self-describing through metadata files created using the Web Services Description Language (WSDL);
  3. Discoverable through a URL mechanism, which means they use common Web mechanics for implementation;
  4. Stateless, which means they can easily be reused since they are not reliant upon specific resources being dedicated to them while they are in use.
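The statelessness attribute (item 4) is what makes on-demand provisioning possible, and a small sketch shows why; note that the service, the worker names, and the round-robin dispatcher below are all invented for illustration.

```python
# A stateless service: the output depends only on the inputs, so no
# per-call session or dedicated resource is held between invocations.
def convert(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# Because there is no session state, the runtime may send each call to
# any available worker (trivial round-robin here as a stand-in).
WORKERS = ["node-a", "node-b", "node-c"]

def dispatch(call_id: int, amount: float, rate: float) -> tuple[str, float]:
    worker = WORKERS[call_id % len(WORKERS)]
    return worker, convert(amount, rate)

print(dispatch(0, 100.0, 0.92))  # ('node-a', 92.0)
print(dispatch(1, 100.0, 0.92))  # ('node-b', 92.0), same answer from any node
```

A stateful service, by contrast, would pin each caller to the node holding its session, defeating exactly the kind of flexible allocation the utility model depends on.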

These attributes lead to the creation of software that can be provisioned and allocated to a resource on demand, keeping in line with the tenets of utility computing. Moreover, the concept of providing a service means that computing power can be made available like other utility-based services, such as telecommunications and electricity, which have the ability to direct more or less service based on need.

Hence, this change in architecture leads to a fundamental change in the way software is designed, but also in the way that software is deployed and managed. For example, once a service is developed, it needs to be deployed into an infrastructure that is operationally managed. This means the service may require access controls, redundancy, quality-of-service, service level agreements and root cause analysis when it is not available. These are the same services one would expect of any utility-based service.

In keeping with this model, then, SOA maps perfectly into existing utility models, such that IT can charge back infrastructure usage based on its existing utility metering, adding on charges for the number of times the service is invoked. Additionally, IT has more control over the resources used by these services, such that a single service will not require a significant allocation of a particular resource, such as today’s application servers or SAP servers require.

Thus, the organization can deploy services where there are available resources that fit the demand for that service. Moreover, this change will not impact existing applications if they use the model of “find and bind” that has become a cornerstone of SOA. That is, users can look up the location of a service in a registry and dynamically bind to and use that service without any prior knowledge of its existence.
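The “find and bind” pattern can be sketched as follows, with the registry, the service name, and the tax rate all invented for illustration (a production system would use a real registry such as UDDI rather than a dictionary).

```python
def tax_v1(amount: float) -> float:
    """One deployed implementation of the 'tax' service."""
    return amount * 1.05

# Stand-in for a service registry: maps service names to live endpoints.
REGISTRY = {"tax": tax_v1}

def find_and_bind(service_name: str):
    # "Find": look the service up by name, with no prior knowledge of
    # where or how it is implemented. "Bind": return a callable endpoint.
    return REGISTRY[service_name]

service = find_and_bind("tax")
print(service(100.0))  # 105.0
```

Redeploying the service to a node with spare capacity only updates the registry entry; consumers are unaffected because they bind at call time, which is why this pattern lets existing applications survive resource reshuffling.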

This movement toward SOA requires a fundamental shift in the way that IT operates within most organizations today regarding software. Today, most IT organizations follow a system integrator model of delivering software, which means they take existing off-the-shelf applications and/or programming tools and deliver an automated solution to a business problem, similar to the service provided by Accenture or BearingPoint.

The move to utility computing using SOA requires the IT organization to operate more like the electric or phone company for their organization. The IT department may still do development work, but it will mostly be responsible for creating new services.  Eventually, business tools will create these services. Thus, IT will become the organization responsible for the infrastructure for provisioning resources for these services based on demand, with the goal of no interruption in service.

This is a major change for many IT organizations that are managing silos of applications today. If one application that supports fifty users is unavailable, IT can expect five to ten calls to the help desk. However, if a service that supports fifty applications is unresponsive, the help desk can expect to receive ten times that many calls. Needless to say, the impact of this paradigm change to the organization and the IT resources is significant from both a cost and resource perspective.

The utility computing model can save organizations millions of dollars each year through hardware and human resource consolidation.  By adopting SOA, organizations can incorporate a software model that easily maps onto the utility computing model, such that resources do not need to be dedicated to particular applications, but instead can be provisioned based on demand within the organization. This approach requires the organization to plan for changes in how the IT department operates and the types of skills required to build and support an effective utility computing model.

The Role of The Business Analyst in an SOA World

Those of us who are part of SOA-related projects where traditional business analysts (BAs) are involved often find ourselves frustrated by the incongruence between the analyst’s approach to requirements gathering and the SOA design.  The problem arises because SOA models the functionality of a business across multiple boundaries, whereas the business analyst wants to focus on a user’s needs.  The analyst’s focus is tactical; SOA’s is strategic.  However, beyond this key difference, certain aspects of the tactical requirements overlap with the strategic ones, specifically with regard to service boundaries.  Thus, important business rules relevant to the definition of a particular service are buried among hundreds of requirements that are irrelevant to the SOA goals, and the SOA architect is forced to mine those requirements like a prospector panning for gold.

Figuring that someone else must have written on this topic, I visited my favorite Web search engine and pulled up the following article: “The new business analyst talks SOA.”  I was surprised to find this content on Network World given the nature of the topic, but quickly realized that the author thought of SOA as an application architecture instead of an aspect of Enterprise Architecture.  So, it was not much of a surprise when they recommended that business analysts become more savvy in SOA by learning BPEL, SCA and SDO; perhaps the worst advice I have ever seen on this topic.  The last thing we want is business analysts talking about SOA, let alone the SODA aspects of SOA.

Still, the article had one point correct, business analysts had better be reading the signs and acclimating themselves to a different type of conversation with their users.  Moreover, I believe SOA is a game changer in the way that we collect requirements going forward.  For businesses that really want to be successful with SOA, the first conversations that need to be had are with the key business stakeholders, and an Enterprise Architect with SOA experience must be present and facilitating these discussions.  This will be the basis for formulating a service model, which will then guide all future system designs.  I would also engage a business process engineer to start analysis of business processes that directly impact the effort.

Ultimately, if a BA is able to make the transition to an SOA mindset, then they might be a good candidate to drill down and analyze the requirements for the service contract and vocabulary for each service.  Otherwise, an SOA architect should really handle these efforts.  This raises the question: what is the role of the BA in an SOA world?  I believe there is a need for really good subject matter experts who can work with the Enterprise and SOA architects to fully define the interactions between business functions and, perhaps, to capture the usability models.  However, even in the case of usability, a usability expert will offer more insight into how to present information to the user than the BA will.

Overall, my analysis is that in an SOA universe, the role of the BA is greatly diminished unless they can adapt to the change in role and gain an understanding of how to facilitate requirements gathering in a much more compartmentalized manner as outlined by the SOA architect and the business process engineer.

The SOA (& Cloud) Funding Dilemma

Lately, and primarily among government users, I’ve been hearing about a potential clash of bureaucracy meets technology when it comes to funding shared services.  It seems that the current government procurement and funding processes do not favor strategic sharing of software services as an outgrowth of a single development effort.  I imagine that this issue might also arise in some large organizations where the business units are funding development using a self-funded IT organization, but I have yet to hear any specific stories coming from the private sector regarding this issue.

While there seems to be a number of workarounds to this problem, the basic issue still remains.  If one government customer funds the development of an application out of their budget, and in developing that application, it is decided that one or more services could be of use to other government customers, there may be budgetary concerns for actually developing, hosting and maintaining those services for those that are not the original funding customer.  Moreover, it becomes questionable how to go about adding features to those services when the requests are not coming from the original funding customer.

With all these government agencies pushing SOA in their IT strategies, one has to wonder, what exactly are they asking for?  If the goal is not to remove redundancy within a single department and within a single agency–we’re not even focusing on cross-agency here–as a means of reducing costs and providing a single means of delivering a business function in a non-ambiguous manner, then what is the goal?

From my own personal experience, I believe certain agencies do desire to achieve the benefits of SOA, while others are simply seeking the benefits of component-based architecture, namely the flexibility and agility in application design that lower the cost of maintenance.  However, thanks to “management by magazine” approaches in IT, coupled with “slap SOA on everything” mandates, agencies that can benefit only from component-based architecture, whether because of the procurement and funding issues or simply their basic needs, are now requesting SOA.  They end up spending far more than they should to develop a simple application, and taking on far more complexity and reliance on middleware infrastructure; at best, they are using a sledgehammer to kill a flea.

Now, most of this entry discusses the issue with regard to SOA, since this is where the problem has really become apparent recently.  However, it’s not too far a leap to envision that Cloud Computing, that bastion of shared compute power, is going to face similar and more complex hurdles with regard to maintenance and operations of applications running in the Cloud.  At the end of the day, someone responsible for budgeting today is going to have to figure out how much is required to support and maintain one of their applications next year.  That may not be too difficult a task when they are the only group consuming that application, but the budgeting process is far more difficult when that agency accidentally becomes a SaaS provider to other agencies and departments interested in using that application.

Got any private or public sector stories about how budgeting and procurement processes are hindering the strategic value of SOA?  Please share them with me; I am really interested in understanding the full boundaries of this issue.