Key Takeaways
Quality software is defined by three simple measures: it works as planned, it performs well within the scope of identified use cases, and it has a low rate of defects.
The “develop fast, develop often” mentality can create a host of problems within development organizations even as DevOps continues to gain importance.
To efficiently produce quality software, teams need to adopt smarter, earlier testing.
When it comes to software development, what is quality? Is it the elegance of the code? Is it how well the software performs once deployed and operating at scale? Quality is all of that and more. Ultimately, quality software is defined by three simple measures. First, it works as planned. Second, it performs well within the scope of identified use cases. And third, it has a low rate of defects, meaning the software produces very few, or no, errors when a user executes one of the application’s use cases.
Many development organizations try to influence quality with a classic approach, the Iron Triangle: they reduce scope, extend the delivery schedule, or throw more engineering resources at development. But none of those ultimately guarantees quality. Even if you limit the scope to a single function or use case, take an extra year to improve the efficiency of the code, and hire a hundred more developers, the quality of the finished application can still be poor.
If engineering executives can’t manipulate the triangle to improve quality, what’s the key to ensuring high-quality software output? The answer is continuous testing.
How Agile Development Changed the Testing Mindset
When businesses were mired in the waterfall model of software creation, testing happened at discrete points, such as when a build was marked as “complete.” Once completed, the software would move to a QA group, where it would be put through its paces. QA results would be passed back to the developers to fix before going through QA again, until the software was finally released.
The Agile development framework changed the delivery mindset in a couple of ways. First, application features are released iteratively rather than waiting until they are all finished. Second (unfortunately), the end-user becomes part of the testing process.
Although Agile development enables businesses to release software more quickly in a continuous cycle, meeting end-user demand for new features and functionality, it falls into the same testing trap as waterfall development: testing still happens only at specific points in the release cycle.
The Promised Rewards of Agile Development
At the heart of Agile development is speed. Faster development cycles (sprints) result in faster testing (because the testing scope is focused), which equates to getting new features and functions to end-users quicker. Yes, there is value in segmenting development teams into smaller groups that own different parts of the overall application. But ultimately, Agile is just a smokescreen for the relentless pursuit of speed. In today’s fast-paced digital world, a company that relies on software as part of its business, whether it’s middleware for system integration, web applications for partners, or the very product delivered to end consumers, can’t afford to fall behind.
As a result, time-to-market drives every decision, including software development. Corners are cut, if necessary, to deliver end products even if they aren’t totally baked. That’s where development executives play with the Iron Triangle. Pressured by business demands, they adopt continuous development models to satisfy the need to constantly deliver products. With continuous delivery, they release so often that each deployment becomes a non-event. What’s more, because developers are constantly tasked with the next sprint, issues can get lost in the relentless pursuit of delivery, or even pushed to the end-user, which is fundamentally wrong.
Aligning the Gears: The First Step Towards Quality Code
This “develop fast, develop often” mentality can create a host of problems within development organizations even as DevOps continues to gain importance. Part of that is blowback from the demand for development speed. There are just too many deadlines, too many failed liftoffs, too much test feedback to prioritize effectively, and too many production escalations. Those weigh down the CI/CD pipeline and ultimately undermine the drive to produce quality software.
Thankfully, there is a myriad of tools and frameworks to help engineers test their software throughout the development cycle and, hopefully, improve its quality. But that testing is complicated by the number of platforms on which the software often needs to be tested. Different versions of mobile operating systems, different browsers and browser engines, and a myriad of other platforms like connected devices, smart TVs, or personal computers all press against the frenetic timeline of that develop-fast, develop-often mindset.
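To give a sense of how quickly that platform matrix multiplies the testing load, here is a minimal sketch using pytest’s parametrization; the platform list and the check itself are hypothetical placeholders, not a reference to any specific tool or device farm:

```python
# test_cross_platform.py -- illustrative only: the platform list and the
# stand-in result below are assumptions, not part of any real device farm.
import pytest

PLATFORMS = ["chrome", "firefox", "safari", "android-13", "ios-17", "smart-tv"]


@pytest.mark.parametrize("platform", PLATFORMS)
def test_login_page_renders(platform):
    # A real suite would hand this off to a platform-specific runner;
    # a canned result stands in so the sketch stays self-contained.
    rendered = {"platform": platform, "status": "ok"}
    assert rendered["status"] == "ok"
```

Every new platform added to the list multiplies the number of test runs, which is exactly the pressure described above.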
Additionally, there is a host of testing tools, some of which focus on specific platforms, requiring DevOps teams to adopt multiple options to meet their multi-platform testing requirements. The tools needed to quality-test software also differ when that software is infrastructure-as-code, containers, or microservices.
If you take a step back and look down on an Agile development process, where the speed of delivery is balanced against the need to test across so many platforms, you’ll see a host of gears. When the process is optimized, these gears all line up and the development machine works well. But that is very difficult to do when the number of gears (platforms, product features, etc.) is continuously increasing. The relentless drive for delivery, coupled with the need to properly test the software prior to release and an ecosystem of tools (and decisions to be made), often results in engineers or technology executives not getting the right signals at the right time to make good, data-driven decisions that lead to better-quality software. Of course, this is exacerbated under the pressure to produce more software, more often.
That’s why the first step towards quality code is figuring out how to get those gears aligned even as more are being added. One way to do this is to prioritize what needs to be tested according to risk. For middleware, perhaps integration testing with affected systems is critical. For end-user software, perhaps the UX/UI is a priority. Whatever the priorities are, they need to be documented and assigned a risk profile so developers, managers, and executives clearly understand the impact on quality if that testing is not carried out.
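As a minimal sketch of how those documented risk profiles can live next to the code, the example below uses pytest markers; the marker names and test cases are hypothetical, and the markers would be registered in pytest.ini so pytest doesn’t warn about them:

```python
# test_checkout.py -- hypothetical tests tagged with illustrative risk markers.
# Assumes pytest.ini registers the markers, e.g.:
#   [pytest]
#   markers =
#       risk_high: a failure here blocks the release
#       risk_low: a failure here is cosmetic or low-impact
import pytest


@pytest.mark.risk_high
def test_payment_gateway_integration():
    # High business impact: integration with the payment system.
    response = {"status": "approved"}  # stand-in for a real gateway call
    assert response["status"] == "approved"


@pytest.mark.risk_low
def test_checkout_button_label():
    # Low impact: a regression here is cosmetic.
    label = "Place order"  # stand-in for a rendered component
    assert label == "Place order"
```

Running `pytest -m risk_high` early in the pipeline then exercises only the tests whose failure carries the greatest impact on quality.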
But there is a better way that can not only reduce the number of gears but also eliminate a lot of that risk. Think of it as “continuous testing”: testing occurs during each step of the pipeline, from code and build tests to unit and acceptance testing, all the way through UX and end-to-end testing. Under this model, when a sprint gets released, there is higher confidence in the release because tests were run throughout the process.
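One way to picture this is a small driver that maps each pipeline step to its own test subset; the stage names and marker expressions below are assumptions for illustration, not a prescribed pipeline layout:

```python
# run_stage_tests.py -- sketch of running a different test subset at each
# pipeline step; stage names and marker expressions are illustrative only.
import subprocess
import sys

STAGE_MARKERS = {
    "commit": "unit",                      # fast checks on every commit
    "merge": "unit or integration",        # broader checks before merging
    "release": "acceptance or e2e or ux",  # slower suites before a release
}


def run_stage(stage: str) -> int:
    """Run the pytest subset associated with a pipeline stage."""
    return subprocess.run(["pytest", "-m", STAGE_MARKERS[stage]]).returncode


if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "commit"
    sys.exit(run_stage(stage))
```

Because every stage runs some slice of the suite, no single gate has to absorb all of the testing at once.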
There is no more worry about what should or shouldn’t be tested, which eliminates much of the anxiety around post-development QA. Sure, it’s still a good idea to funnel pre-release software through a final quality and testing gate, but that gate can focus more on usability.
Avoiding a Tsunami of Data
Continuous testing rethinks the traditional software development cycle. Recognizing that Agile has changed the way software is developed and released, we must also re-evaluate and prioritize what we focus on when testing through these cycles.
As projects grow, so do the tests and variables, and it becomes harder to keep your CI pipeline fast. This problem expands as new frameworks and platforms are included in the delivery pipeline. With test growth comes more noise: developers face a tsunami of data and must figure out what is important and worth paying attention to, which lengthens test cycles.
But this isn’t about shifting testing, as one big activity, to another part of the development cycle. The result is still the same: a tsunami of data that the developer must deal with even as they are required to continue developing. So what do they do? Take in the testing data, rework code, and miss upcoming milestones in new sprints? In today’s “produce faster, produce more often” development environment, that is an untenable arrangement.
This is the paradigm shift: identify the riskiest tests amongst the noise and move these critical tests earlier in the pipeline to develop software smartly.
Flagging the riskiest tests in the whole pipeline, and better yet within each section of the pipeline, allows teams to move these tests earlier in the process, ultimately reducing development friction by addressing them during less cumbersome testing cycles.
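A minimal sketch of that idea, assuming the same hypothetical `risk_high` marker as above, is a pytest `conftest.py` hook that reorders the collected tests so the riskiest ones report first:

```python
# conftest.py -- sketch only: move tests carrying a hypothetical "risk_high"
# marker to the front of the run so the riskiest feedback arrives first.
def pytest_collection_modifyitems(config, items):
    """Reorder collected tests so high-risk tests execute before the rest."""
    high_risk = [item for item in items if item.get_closest_marker("risk_high")]
    the_rest = [item for item in items if not item.get_closest_marker("risk_high")]
    items[:] = high_risk + the_rest
```

Combined with a fail-fast setting such as `pytest -x`, the pipeline surfaces its most important failures before spending time on lower-risk tests.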
A Smarter Way to Test
Traditional testing tools might be adaptable to testing through various stages within the pipeline, but they do not identify the critical risks within your test suite that should be prioritized earlier in your development cycle.
To efficiently produce quality software, teams need to adopt smarter, earlier testing. The data produced by testing tools can be overwhelming, making it difficult to determine risk priority and distracting teams from meeting future sprint milestones. What you need is a layer that can understand the feedback from the dozens of existing testing tools across the entire pipeline: a layer intelligent enough that the riskiest items can be prioritized and eliminated early from the pipeline.
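As a rough illustration of what such a layer might do, the sketch below aggregates JUnit-style XML reports produced by different tools and ranks the failures by an assumed risk weight; the report paths, component keywords, and weights are all assumptions rather than any vendor’s API:

```python
# aggregate_results.py -- hedged sketch: read JUnit-style XML reports from
# several tools and list failed tests, highest assumed risk first.
import glob
import xml.etree.ElementTree as ET

# Hypothetical risk weights keyed on a substring of the test's classname.
RISK_WEIGHTS = {"payment": 10, "auth": 8, "checkout": 5}


def failed_cases(report_path):
    """Yield (classname, name) for every failed or errored testcase in a report."""
    root = ET.parse(report_path).getroot()
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            yield case.get("classname", ""), case.get("name", "")


def risk_score(classname):
    """Assign a crude risk score based on which component the test touches."""
    return max((w for key, w in RISK_WEIGHTS.items() if key in classname.lower()), default=1)


def prioritized_failures(report_glob="reports/*.xml"):
    """Collect failures from all reports and sort them, highest risk first."""
    failures = [
        (risk_score(classname), classname, name)
        for path in glob.glob(report_glob)
        for classname, name in failed_cases(path)
    ]
    return sorted(failures, reverse=True)


if __name__ == "__main__":
    for score, classname, name in prioritized_failures():
        print(f"risk={score:2d}  {classname}::{name}")
```

Even a crude scoring layer like this turns a pile of raw reports into an ordered list of what to fix first, which is the signal engineers and executives are otherwise missing.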
CI/CD and DevOps are the evolution of software development, and with DevOps itself evolving to be data-driven, it’s only reasonable that testing must also evolve to become continuous.