Functional vs. Performance Testing
If you’ve been in the QA space for a while, you understand the need to run different kinds of tests on your application. One type is functional testing, where testers focus on how pieces of code work together in the application. Many products and frameworks exist to accomplish this, since the primary goal of an application is to work for its users: if an application does not operate the way it was intended to, users will abandon the product. Along with functional testing, another area of interest for QA teams is load and performance testing. Even if an application works flawlessly, users will also abandon a product that is too slow. Today, if an application takes longer than three seconds to load, 53% of users will leave.

Whether you are a business analyst, developer or tester, the pressure to make sure your applications run both correctly and quickly keeps growing. So how can you automatically create test cases in a way that doesn’t stretch your QA teams to the breaking point? In this blog we will discuss how model-based testing can automatically create those test cases for you.
Integrating Automation Layers
In a previous blog, we saw how model-based testing allows you to automatically create test cases and automation scripts. We will build upon this knowledge to show how you can build two different types of tests from the same requirements. It is important to note that Selenium and Taurus are the frameworks used in this explanation, but any framework or language of your choosing can be used for functional or performance tests.
The first step is to build a configuration file that defines two separate libraries of actions, one for Selenium and one for Taurus. The libraries are separated into what we call “Layers,” which we will utilize to integrate automation into our model.
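As a purely hypothetical illustration (ARD’s actual configuration format differs, and every name below is invented), such a file might group reusable actions into two named layers like this:

```yaml
# Hypothetical layer definitions -- illustrative only, not ARD's real schema.
layers:
  selenium:                     # functional-testing layer
    language: python
    actions:
      click_login: 'driver.find_element(By.ID, "login").click()'
      open_cart:   'driver.get("https://example.com/cart")'
  taurus:                       # load-testing layer
    language: yaml
    actions:
      get_cart:    '- url: https://example.com/cart'
```

The point is simply that each layer keeps its own vocabulary of snippets, so a single model block can later pull from either one.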
Once you have two layers defined with common objects and actions stored in the configuration file, you can integrate both layers of code snippets into your model blocks. Not every block needs both layers of automation, especially in a Taurus load test scenario: a user journey typically contains more UI actions than back-end requests, and back-end requests are what load testing focuses on. So there might be a Selenium script associated with the on-screen actions in each block, while Taurus might only be integrated when a user moves to the next page in the application.
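For reference, a minimal Taurus scenario of the kind these per-page snippets assemble into looks like the following (the URLs are placeholders, not from the original example):

```yaml
# Minimal Taurus load test: 10 virtual users replay the page-to-page
# requests, ramping up over one minute and holding for five.
execution:
- concurrency: 10
  ramp-up: 1m
  hold-for: 5m
  scenario: user-journey

scenarios:
  user-journey:
    requests:
    - url: https://example.com/login
      method: POST
    - url: https://example.com/products
    - url: https://example.com/checkout
```

Note that only page transitions appear here; clicks and keystrokes that stay on one page belong to the Selenium layer instead.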
In the end, as long as a valid script is produced, the code snippets can be arranged in any desired way. Each block that needs to be functionally tested will have a Selenium code snippet, and each block that contributes to a load testing scenario will have a Taurus YAML snippet attached. You can change the layer you are integrating by selecting “Automation Layer” when you choose to add code into a block.
Repeat this process for every block, alternating between layers as needed. Once this process is complete, we are ready to automatically create test cases, Selenium scripts and Taurus scripts.
In the Path Explorer, we can automatically optimize our test cases to give us the maximum functional coverage with the fewest test cases. Because we are now creating more than one automation script, it can be useful to create more than one set of test cases. We can do this by creating test case folders, one for functional testing and one for performance testing.
The benefit of saving the two kinds of tests in separate folders is that we can apply a different optimization type to each and get exactly the coverage we want. Because we might not have integrated a performance testing code snippet into every block in our model, there is no point in load-testing every single permutation. We can use the All Nodes or All Edges optimization for performance tests, which gives us a smaller set of tests than All Possible Paths or All Pairs coverage would. Performance tests often won’t produce a different script for a given set of permutations, so it makes sense to have fewer tests for this use case.
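To see why edge-level coverage yields fewer tests than path-level coverage, here is a small sketch (the model graph and greedy selection are invented for illustration and are not ARD’s actual optimizer) that counts test cases for both strategies on a toy model:

```python
# Hypothetical model: blocks as nodes, transitions as directed edges.
# Three login variants fan out to three payment variants.
graph = {
    "Start":   ["LoginA", "LoginB", "LoginC"],
    "LoginA":  ["Menu"], "LoginB": ["Menu"], "LoginC": ["Menu"],
    "Menu":    ["PayCard", "PayCash", "PayGift"],
    "PayCard": ["End"], "PayCash": ["End"], "PayGift": ["End"],
    "End":     [],
}

def all_paths(g, node, end):
    """Every journey from node to end (All Possible Paths coverage)."""
    if node == end:
        return [[end]]
    return [[node] + rest for nxt in g[node] for rest in all_paths(g, nxt, end)]

def path_edges(p):
    return set(zip(p, p[1:]))

paths = all_paths(graph, "Start", "End")
all_edges = {(u, v) for u, vs in graph.items() for v in vs}

# Greedy All Edges optimization: keep choosing the path that covers the
# most not-yet-covered edges until every transition is exercised once.
chosen, covered = [], set()
while covered != all_edges:
    best = max(paths, key=lambda p: len(path_edges(p) - covered))
    chosen.append(best)
    covered |= path_edges(best)

print(len(paths))   # tests under All Possible Paths: 9
print(len(chosen))  # tests under All Edges: 3
```

One path exercises several edges at once, so three carefully chosen journeys cover all twelve transitions that nine exhaustive paths would otherwise replay.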
Likewise, we can store our functional tests in a different folder with a more extensive coverage type. Instead of concentrating our resources on generating load, for functional tests we want to focus on covering all the permutations possible in our application. Based on how much risk we are willing to accept versus the cost required to run tests, we might choose an All In/Out Edges or All Pairs optimization for functional testing. This will give us a more comprehensive set of tests for the different user journeys through our application.
Once we are happy with the test cases chosen for both functional and performance purposes, we can save the test suite and view our automation scripts. Because we integrated two layers of automation, we will have two automated test scripts for each test case. The test scripts are created by compiling the Selenium code snippets and Taurus code snippets along the path each test case takes through the model. After the base script is created, headers, footers and file extensions can be added.
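Conceptually, that compile step walks the path and joins one layer’s snippets between a header and a footer. Here is a minimal sketch of the idea (the block names, snippets, and headers are invented stand-ins, not ARD’s real output):

```python
# Per-block snippets, keyed by layer. Note "Search" has no Taurus layer:
# typing in a search box triggers no back-end request worth load-testing.
snippets = {
    "Login": {
        "selenium": 'driver.find_element(By.ID, "login").click()',
        "taurus":   '    - url: https://example.com/login',
    },
    "Search": {
        "selenium": 'driver.find_element(By.NAME, "q").send_keys("shoes")',
    },
    "Checkout": {
        "selenium": 'driver.find_element(By.ID, "checkout").click()',
        "taurus":   '    - url: https://example.com/checkout',
    },
}

headers = {
    "selenium": "from selenium import webdriver\n"
                "from selenium.webdriver.common.by import By\n"
                "driver = webdriver.Chrome()",
    "taurus":   "scenarios:\n  generated:\n    requests:",
}
footers = {"selenium": "driver.quit()", "taurus": ""}

def compile_script(path, layer):
    """Join the chosen layer's snippets along the path, skipping blocks
    that carry no snippet for that layer."""
    body = [snippets[b][layer] for b in path if layer in snippets[b]]
    return "\n".join([headers[layer], *body, footers[layer]]).rstrip()

path = ["Login", "Search", "Checkout"]
print(compile_script(path, "selenium"))  # full UI script, 3 actions
print(compile_script(path, "taurus"))    # YAML with only 2 requests
```

The same path thus yields two scripts of different lengths, because the Taurus layer simply has nothing to contribute for the Search block.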
Extending the Testing Center of Excellence
Creating tests in this way provides 100% test case automation for both functional and performance tests. However, the biggest value this model provides appears when requirements change. Instead of testing teams having to wait until the last minute to perform all these different kinds of tests, it empowers them to extend their center of excellence. The model is created at the beginning of the sprint, widening a QA team’s impact throughout the entire SDLC.
Many types of tests are required to make sure applications meet or exceed the expectations placed on them today. Reduced test case creation time, an increased automation percentage and fewer requirement ambiguities are all results of utilizing this model-based testing tool. The cost and time savings from a technique like this will allow your teams to spend more of their efforts on producing new features and enhancing the customer experience.
Start to automatically create test cases and test automation scripts. Sign up for your Free Trial of CA Agile Requirements Designer today.
Alyson Henry is a Technical Presales Consultant for CA Technologies based out of Plano, TX. Prior to CA, she worked as a sales engineer and graduated from Texas A&M University. Today, Alyson specializes in model-based testing as well as web-based performance testing.