Get Smart About Continuous Delivery: Intelligent Continuous Testing

Christine Bentsen · October 7, 2019 · 7 min read

We’re all struggling to do more with less: more quality with fewer resources, more releases with fewer defects, more testing with less time. While investments in test automation have greatly increased the number of tests available, running “all of the tests all of the time” isn’t the answer: 

  • Test volume does not equate to change coverage
  • Running all the tests all the time is time- and resource-intensive
  • The process delays feedback to dev teams, and can even delay the release
  • Since testing and feedback loops take so long, Agile teams may resist frequent commits

Let’s revisit the definition of continuous delivery:

Continuous delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time… 

If continuous delivery is our goal, then by definition, testing needs to match pace with the development process so that the “software can be reliably released at any time.” To release reliably, we need to test early and often, so defects can be identified and addressed as soon as they are injected into the pipeline.

It’s ok to fail, but do it fast 

It (usually*) makes no sense to run a bunch of tests that always pass, and it makes even less sense to run tests that always pass before tests that are likely to fail. There’s definitely no sense in running tests that validate portions of the application that you’ve already tested, and that weren’t changed in the last build. This just takes time, and may encourage dev teams to commit less frequently, which isn’t the way we want to go in an Agile, continuous delivery environment.

To speed up cycles, you’ll want to run the tests that are most likely to fail first, so you can immediately start fixing whatever needs to be fixed. Then, you’ll prioritize other tests depending on your business needs. A good way to prioritize is to select: 

  • Tests based on what’s changed in the current build 
  • Tests that often fail  
  • Tests that are new or updated
  • Tests that “must run” (*this is where the “usually” in the paragraph above comes in: you may have tests that always pass, but they still “must run” for a business reason; compliance tests come to mind)
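To make the four criteria concrete, here is a minimal sketch of a score-based prioritizer. The field names, weights, and scoring rule are all illustrative assumptions, not any specific tool's API; a real ML-driven system would learn these weights from historical results.

```python
# Hypothetical sketch: rank tests by the four criteria above.
# All names and weights are illustrative, not a real tool's API.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    touches_changed_code: bool = False   # maps to files changed in this build
    recent_failure_rate: float = 0.0     # fraction of recent runs that failed
    is_new_or_updated: bool = False      # added or modified since last cycle
    must_run: bool = False               # e.g. compliance tests

def priority(test: TestCase) -> float:
    """Higher score = run earlier. Weights are made up for the example."""
    score = 0.0
    if test.touches_changed_code:
        score += 4.0                     # change coverage first
    score += 3.0 * test.recent_failure_rate  # frequently failing tests next
    if test.is_new_or_updated:
        score += 2.0
    if test.must_run:
        score += 1.0                     # must run, but usually passes, so later
    return score

def ordered(tests):
    # Sort the suite so the most-likely-to-fail tests run first
    return sorted(tests, key=priority, reverse=True)
```

With these example weights, a test covering changed code outranks a frequently failing one, which outranks a new test, which outranks a “must run” compliance check, matching the step-by-step order walked through below.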

Using machine learning (ML) to intelligently select and prioritize your tests can dramatically increase both the efficiency and accuracy of test cycles. 

In most scenarios, machines crunch numbers and data faster than humans, so why not let the machines do what they do best, so we can focus on things we do best – like innovating, collaborating, and some of that nuanced testing that machines simply cannot do. 

Machine learning algorithms can help you prioritize and run tests based on what’s in the pipeline and your business requirements. 

Intelligently Prioritize 

To better understand which tests make sense to run first, let’s dig into prioritization a bit more. Obviously, you’ll want to run tests based on what’s changed in the pipeline. As we mentioned before, model-based testing is a great way to ensure you have tests that map to the requirements, but it doesn’t help you choose tests based on the code changes in the current build. If you run all of those tests all of the time, you’ll end up running a lot of tests that are testing parts of the application that haven’t changed — which isn’t necessary.

Like peanut butter and jelly, model-based testing and ML-augmented test selection and prioritization go great together. You can use model-based testing to create all the tests and automation you need, and intelligent test prioritization to figure out which tests you need and what order to run them in, so you have full test coverage and the most efficient test cycles possible. This helps you keep testing in sprint, and achieve the goal of continuous delivery. Win! 

 

_____________________________________________________________

Step 1: Run tests that map to the code changes in the current build.

_____________________________________________________________

 

It’s pretty obvious that these are the tests that are most likely to fail, because these are tests covering code that’s never been tested before. Finding defects at this phase allows you to address them closer to the source, when the code is fresh in mind: it’s easier to identify root cause, and less likely for the problem to move downstream. For example, if you’ve added a new login option – like a social login – there’s no reason to run tests on your shopping cart and checkout process.

Address the Flakiness 

Next, let’s run those flaky tests. What’s a flaky test exactly? These are tests that are not very predictable. Maybe it’s something in your pipeline, maybe it’s something in the test, maybe it’s something in the code, but these tests often lead to some extra cycles of digging around to find what’s up and fix it. 

Without ML it might be hard to even identify which of your tests *are* actually flaky. Those patterns can be hard to discern in huge test suites. But with ML it’s relatively straightforward. Plus, ML can help you identify what’s causing the tests to *be* flaky over time. Digging into the root cause of flakiness can help bubble up underlying inefficiencies in the pipeline. Maybe some of these tests are outdated/unnecessary and generating false positives. Or, maybe you need better test environments, more reliable virtual services, etc. We’re not finger pointing, but we are going to learn from what’s working and what’s not. 

So that’s step 2: run those flaky tests and learn from the results. 


_____________________________________________________________

Step 2: Run flaky tests and codify the results into continuous improvements.

_____________________________________________________________

New Tests – You’re Up!

Do you have new or updated test suites? Now is the time to run them, as the team had a reason to add them, and they might find something you need to address. Keep in mind that with each build, tests will be selected and re-bucketized depending on what’s actually in the pipeline and the results from previous cycles. Tests could be redesignated from “new” to “flaky” after a few runs. Then, when you have time to address the underlying cause of that flakiness, the test might get redesignated into the “always passes” category. The system gets better and more accurate as you go.
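The re-bucketizing described above could be sketched as a simple rule over each test's recent history. The bucket names and thresholds are assumptions for illustration; an ML system would learn these boundaries rather than hard-code them.

```python
# Hypothetical re-bucketing rule, applied after each build.
# Bucket names and thresholds are illustrative assumptions.
def bucketize(history):
    """history: recent outcomes, True = pass, oldest first."""
    if len(history) < 3:
        return "new"                       # not enough signal yet
    flips = sum(1 for a, b in zip(history, history[1:]) if a != b)
    if flips / (len(history) - 1) >= 0.3:
        return "flaky"                     # outcome keeps changing
    return "always-passes" if all(history) else "often-fails"
```

As the article notes, a test can migrate between buckets: a few unstable runs move it from “new” to “flaky”, and once the root cause is fixed, a stable streak moves it to “always passes”.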

_____________________________________________________________

Step 3: Run new and updated tests.

_____________________________________________________________

Next: Do What You Have to Do

In this step, we’re going to run all the tests that have been marked as important or “must run” by the organization. These could be all kinds of tests ranging from compliance tests to security tests, and it may very well be that they always pass. So we’ll put them at the end and check that box as done. 

_____________________________________________________________

Step 4: Run your “must run” tests.

_____________________________________________________________

 

So there we have it, intelligently selected and prioritized tests, in an intelligently selected and prioritized order. Again, our goal is to run the tests that are most likely to fail, first, so we can fix them immediately. But we’re not done yet. This algorithm just keeps on giving. All this data and insight helps us understand where we need to focus to continuously improve.
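Pulling steps 1 through 4 together, the final run order is just a stable sort over the buckets. This sketch assumes each test has already been assigned a bucket label; the labels themselves are illustrative.

```python
# Assemble the full cycle: change-impacted tests first, then flaky,
# then new/updated, then "must run". Bucket labels are assumptions.
STEP_ORDER = {"changed": 0, "flaky": 1, "new": 2, "must-run": 3}

def run_order(tests):
    """tests: list of (name, bucket) pairs; unknown buckets run last."""
    ranked = sorted(tests, key=lambda t: STEP_ORDER.get(t[1], 4))
    return [name for name, _bucket in ranked]
```

Because the sort is stable, tests within a bucket keep their relative order, so a finer-grained score (like the prioritizer earlier in the article) can pre-order each bucket before this pass.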

_____________________________________________________________

Step 5: Track your progress and continuously improve.

_____________________________________________________________

Continuous Improvement 

Part of adding intelligence to your test cycles means you’re tracking metrics over time, and with that data, you can pinpoint and address things like: 

  • Number of defects found at each phase of the SDLC. You should see this ratio shift left, to match your testing cycles. If it doesn’t, it’s time to dig in and figure out why. 
  • Do your tests cover the latest changes? Where are the gaps and how can we address them? 
  • Where are the problem areas in the pipeline? What should we do to address them? 
  • Culture shift and collaboration: objective data is usually a great place to start cross-team collaboration and joint solutions. Let’s look at what we’re learning, and fix it together. 
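The first metric in that list – the shift-left ratio – could be tracked with something as simple as the sketch below. Phase names and defect counts are made up for the example; plug in whatever phases your SDLC actually uses.

```python
# Illustrative shift-left metric: what fraction of defects were caught
# in the early phases? Phase names here are assumptions, not a standard.
def shift_left_ratio(defects_by_phase, early_phases=("unit", "integration")):
    total = sum(defects_by_phase.values())
    early = sum(defects_by_phase.get(p, 0) for p in early_phases)
    return early / total if total else 0.0
```

Tracking this ratio build over build shows whether intelligent test selection is actually moving defect discovery closer to the source, or whether problems are still leaking into staging and production.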

The possibilities are very exciting, and we are just scratching the surface of what AI/ML can do to augment the talents of DevOps and testing professionals. 

To learn more about the Intelligent Pipeline and Continuous Testing from Broadcom, join us for a webinar on Oct 9 at 11am US ET. 

Christine Bentsen

Christine Bentsen is the Product Marketing Leader for Broadcom. She is a high-energy innovator adept at working with customers, creative resources, and developers to achieve the best results possible. Christine is a results-oriented DevOps product professional with extensive experience launching new products and expanding markets for existing products.
