Today’s software testing practices are abysmal.
You’re a major provider of health insurance services to general consumers. Your website is a primary means of interacting with your customers, allowing them to view coverage, locate in-plan providers, review Explanations of Benefits for past services and see real-time information on deductibles and fees. It’s midday, and requests are taking a minute or more to process.
You’re the provider of one of the leading business SaaS applications on the market. Millions of businesspeople count on your application every day to communicate with their customers, analyze sales projections and execute marketing communications plans. Access from mobile devices is a must-have for these busy professionals, but the application crashes more often than it works.
What’s going on in these scenarios? Damned if I know—and the problem is, damned if these providers know either. The speed at which the digital world is operating is forcing businesses to deliver faster and faster, usually at the expense of software development and testing best practices.
Oddly enough, anecdotes like the ones presented earlier emerge as part of DevOps conversations with customers. There’s an understanding that entire parts of the software development lifecycle are being short-circuited or skipped entirely in an attempt to deliver at an inhuman pace. There’s a semi-incorrect belief by many of these customers that DevOps is the answer. I say semi-incorrect because, with a few other links in the chain completed first, DevOps could be an answer to delivering at the speed demanded with quality.
Anecdotally, I’d have to say 75% of my DevOps conversations eventually center on testing. If you follow a Continuous Delivery methodology, this should make sense, as testing is distributed across the continuum of delivery. Here are some of the subjects being addressed with regard to testing:
- Minimizing resource contention around QA environments
- Identifying and preparing data for testing
- Automation of regression testing
- Methods of isolating changes to minimize full system testing
- Defect management
- Non-functional testing in enterprise software environments
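To make the regression-automation point concrete, here is a minimal sketch of what an automated regression suite looks like at its core. The function and the scenario (a health-plan deductible calculation) are hypothetical, chosen to echo the insurance example above; any testing framework works, but plain assertions keep the idea visible.

```python
# Hypothetical function under test: computes a member's remaining
# deductible after a claim is applied. Names and logic are
# illustrative, not taken from any real system.
def remaining_deductible(annual_deductible, paid_to_date, claim_amount):
    # Only the portion of the claim that fits under what's left
    # of the deductible counts against it.
    applied = min(claim_amount, max(annual_deductible - paid_to_date, 0))
    return annual_deductible - paid_to_date - applied

# Regression checks: each assertion captures behavior a past release
# got right, so a future change that breaks it fails fast in CI
# instead of surfacing in production.
assert remaining_deductible(1000, 200, 300) == 500  # claim reduces deductible
assert remaining_deductible(1000, 900, 500) == 0    # never goes negative
print("regression checks passed")
```

Checks like these cost seconds per run once automated, which is exactly why they belong in the delivery pipeline rather than in a manual QA pass.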
This is just a fraction of the issues that ultimately come up when reviewing the bottlenecks and constraints that limit high-quality, resilient and speedy delivery of applications and modifications. These issues are also shaped by IT organizational structure—who owns infrastructure, licensing, etc.—as well as politics, budget, time, tooling and skills. Hence, these are complex issues to be dealing with at a time when demand is increasing and time to deliver is shrinking. That said, the lack of quality will catch up with you eventually in the form of growing shadow IT, management transitions, loss of business, attrition, outsourcing and whatever other indirect means users find for avoiding your systems.
What can you do? Unfortunately, there’s no single patterned answer that every business can follow to increase quality. I recommend businesses form Testing Centers of Excellence to centralize the governance of testing across the various groups involved in delivery. However, each business will only be able to absorb change to a degree predicated on time, budget and resources. If pushed to provide some direction, here’s what I recommend to clients:
- Hire or promote a testing lead who understands the science of testing.
- Move as much testing earlier in the process as possible.
- Place as much emphasis on non-functional requirements testing as code testing.
- Incent defect reduction rather than after-the-fact defect identification and correction.
- Incent zero-incident releases.
- Invest in tools and training for automation.
Skipping or short-circuiting testing practices due to time limitations is penny-wise and pound-foolish. Poor quality will, at best, delay future releases while the current one is fixed and, at worst, have a long-lasting detrimental impact on the business.