Visualise, Then Execute: Visual Model-Based Testing

Ruth Kusterer | January 29, 2019 | 5 min read

“Are these cases even covered by tests?” “I can’t keep track, there are too many combinations.” “Oh, that was a requirement? I didn’t know!” Does this sound like your Agile team?

If only someone had kept track of the target behaviour of the application… Sure, you've heard of model-based testing, but you can't possibly model the whole system, right? Where would you even start?

It's clear you cannot create one model that captures all behaviours. You can't track everything, so you decide to divide and conquer: create separate models, and make each model useful for one specific goal. Which goals? Let me talk to the stakeholders: what does "100% coverage" mean to them, for this system?

Maybe you need a model that follows the user journey across components, and maybe one that tracks a common service used across several sub-systems. And, maybe another one that defines your error tolerance — and so on. You’re beginning to see the bigger picture.

You pull out a whiteboard and start drawing arrows and boxes. Your application’s behaviour is basically a flowchart: Data flows from input to output, conditions are met, states change. Your Business Analyst chimes in and points out a gap here, the developers remind you of a technical requirement there, the team gets talking. Spelling out your requirements visually helps you avoid ambiguity.  

The flowchart grows, and you choose to switch to a digital canvas: you create a flow in CA ARD. If your scenario has multiple entry points, drag and drop several start points onto the canvas. Similarly, you can drag and drop a different end point for each outcome you have identified. Then you add individual test steps and connect the dots.

To avoid getting sidetracked, you start each model by outlining its happy path. The happy path is the main user story you are selling to customers; for a web shop, say, that's "create account, log on, browse products, add product to shopping cart, place order, log out" (sketched below). Does everyone in the team agree on this requirement? Great. Now you can branch off and cover variants, exceptions, and error handling.
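To make this concrete, here is one way to sketch such a flow as a plain Python dictionary, where each step lists the steps that can follow it. The step names and the error branch are illustrative assumptions; this is not CA ARD's internal model format.

```python
# A minimal, hypothetical sketch of the web-shop flow as a directed graph.
# Step names and the error branch are illustrative; this is not CA ARD's
# internal model format.
flow = {
    "Start":               ["Create account"],
    "Create account":      ["Log on"],
    "Log on":              ["Browse products", "Invalid credentials"],
    "Invalid credentials": ["End"],                # error branch: user gives up
    "Browse products":     ["Add product to cart"],
    "Add product to cart": ["Place order"],
    "Place order":         ["Log out"],
    "Log out":             ["End"],
    "End":                 [],                     # end point
}
```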

After that, you drill down and flesh out the subsystems. You create reusable subflows that describe elementary requirements, such as input validation, logging in and out, placing orders, and so on.

While you import recorded Selenium test suites and make adjustments on the ARD canvas, you start sharing URLs with stakeholders for quick reviews in the Requirements Insight web interface. Your Business Analyst and Product Owner don't need to have CA ARD installed; they simply review the shared flows in a browser. From Requirements Insight, they can look up links to requirements stored in software lifecycle management tools such as ALM or JIRA. Early reviews and cross-linking help you identify requirement gaps right away, and you resolve misunderstandings much earlier in the development process.

Are the stakeholders satisfied with the model? Then take a step back and open the Path Explorer. For the first time, you see all the sequences of steps that can be taken through your application — the big picture. Some paths succeed (and your web shop has gained a customer) while other paths get your users stuck (how do you recover and draw them back in?). You are starting to get ideas about how to improve the quality of your application.

Now that you see all paths, you can put a number on test coverage. Maybe you realise that you cover 90% of login validation steps, but, say, only 35% of your storage API calls. This helps you focus on filling gaps in coverage. And now that you see how many components actually depend on that one API requirement over here, you can allot enough time to deal with the impact of changing it.

Each path through the model corresponds to one test case, and each test case is a series of code snippets. After you have assigned code snippets to each step in the flow, and generated all possible paths (which CA ARD can do), you can export test automation scripts for all possible combinations.
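Generating all possible paths is, at heart, a graph traversal. The sketch below reuses the hypothetical `flow` dictionary from earlier and enumerates every start-to-end path with a depth-first search; each resulting path would then be stitched together from the code snippets assigned to its steps. This illustrates the idea only, not how CA ARD implements it.

```python
def all_paths(flow, node="Start", path=()):
    """Enumerate every simple path from the start point to an end point.
    Each complete path corresponds to one test case."""
    path = path + (node,)
    if not flow[node]:                      # reached an end point
        return [path]
    paths = []
    for nxt in flow[node]:
        if nxt not in path:                 # guard against retry loops in the model
            paths.extend(all_paths(flow, nxt, path))
    return paths

# Uses the `flow` dictionary from the earlier sketch.
for case in all_paths(flow):
    print(" -> ".join(case))
```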

Sometimes, that results in tens of thousands of paths. In these cases, you use the Optimizer to identify the minimum number of paths for a specific test goal. For example, for configuration testing, you’d tell the Optimizer to generate test cases that cover every pair of decisions — as opposed to all combinations of decisions (which would be overtesting).
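To see why covering every pair of decisions shrinks the suite, here is a small, self-contained sketch using hypothetical decision points (browser, account type, payment method). It is a simple greedy all-pairs picker for illustration, not the Optimizer's actual algorithm.

```python
from itertools import combinations, product

# Hypothetical decision points; names and values are illustrative only.
decisions = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "account": ["new", "existing"],
    "payment": ["card", "invoice", "voucher"],
}

names = list(decisions)
exhaustive = list(product(*decisions.values()))   # 3 * 2 * 3 = 18 combinations

def uncovered_pairs(combo, covered):
    """Pairs of decision values in this combination not yet covered by the suite."""
    return {((names[i], a), (names[j], b))
            for (i, a), (j, b) in combinations(enumerate(combo), 2)
            if ((names[i], a), (names[j], b)) not in covered}

# Greedy all-pairs selection: repeatedly add the combination that covers
# the most not-yet-covered pairs, until every pair is covered.
covered, pairwise_suite, remaining = set(), [], set(exhaustive)
while any(uncovered_pairs(c, covered) for c in remaining):
    best = max(remaining, key=lambda c: len(uncovered_pairs(c, covered)))
    covered |= uncovered_pairs(best, covered)
    pairwise_suite.append(best)
    remaining.remove(best)

print(f"{len(exhaustive)} exhaustive vs {len(pairwise_suite)} pairwise test cases")
```

In this toy example the greedy picker needs roughly half as many cases as the exhaustive suite; with more decision points the reduction becomes far more dramatic.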

Every time you make a change to a requirement, the Optimizer helps you re-generate only the minimum number of paths necessary, so you can quickly re-export updated test scripts for your testers. Quality testing takes time, and you don't want to waste your testers' time on redundant steps.

There are other tools out there that promise you "drag-and-drop" test automation, and they may be fine for small businesses. But if your enterprise application has reached a certain size and complexity, you will value CA ARD's depth and customisability, which bring you the insights you need to focus on what's important to your stakeholders.

What about you, how do you approach test planning?

Interested in trying CA ARD model-based testing for yourself? Start a free trial at https://www.ca.com/us/trials/ca-agile-requirements-designer.html

Ruth Kusterer

Part of a development team based in Prague, my interests lie in creating development and testing tools that take error-prone repetitive tasks off our shoulders and let our brains focus on the creative parts.
