Mainframe Testing of Applications Continuously

Collin Chau · October 3, 2018 · 7 min read

Today, mainframe application development and testing architectures are getting a new lease on life. Old designs are being revamped to keep up with modern business needs and an ever-changing marketplace. A key driver is the digital transformation underway in today’s application economy, characterized by frequent code changes to address rapidly changing requirements and the demand for quality user experiences.

Now it’s time for mainframe testing programs to keep up with these new marketplace realities. One important reason why: the proliferation of mobile and web applications has driven exponential growth in the use of mainframes as a system of record. That makes continuous, preproduction testing on the mainframe an imperative, because it is simply untenable to manage spikes in backend processing consumption reactively. Traditional mainframe testing approaches need to evolve so that you can test early and often in preproduction, ensuring the quality and performance of your apps and of any incremental code changes your team makes.

Here are three important best practices that can help you launch a continuous, preproduction testing program to support your mainframe environment:

1. Embrace open-source technologies for mainframe testing

While most mainframe application development is waterfall-based, a paradigm shift is underway. Instead of monolithic greenfield projects, the emphasis is on brownfield development of enhancements and fixes. Faster and more modern Agile delivery techniques are becoming the new norm.

Open-source tools are aiding in the transformation by demystifying mainframe application development and testing, which once catered mainly to legacy COBOL programming and green-screen transactions. Development teams can now code in their local IDE of choice, with no proprietary languages to learn. The same is true of testing. New open source-based test automation frameworks let you easily write and execute small-scale tests from a local machine, and then seamlessly reuse the same tests at scale via the cloud to accelerate test automation as part of a continuous delivery pipeline.
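To make this concrete, here is a minimal sketch using Locust, one popular open-source, Python-based load testing framework; the host and account endpoint are hypothetical placeholders for a web front end backed by a mainframe system of record.

```python
# A minimal Locust load test: validate it locally with a handful of
# simulated users, then run the same file at scale from the cloud.
# The endpoint below is a hypothetical placeholder.
from locust import HttpUser, task, between


class AccountUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 s between requests

    @task
    def view_account(self):
        # A web/API call that ultimately drives a mainframe transaction.
        self.client.get("/account/12345")
```

Run it locally first (for example, `locust -f account_test.py --host https://app.example.test`) to validate the script, then point the same file at distributed cloud workers to generate production-scale load.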

Open source-based testing can be a welcome relief for resource-limited Test Centers of Excellence (CoEs). You can extend test coverage to mainframe development teams and respond to growing dev-test demands. You can adopt a single, integrated test platform that concurrently supports on-demand green-screen testing of mainframe application transactions across remote terminal emulators, as well as automated performance, functional and API testing at scale across multiple mainframe projects, each with its own dedicated workspace. You get a centralized view across projects, as well as drill-down reports and analyses for each workspace. What’s more, these open-source testing tools have no platform dependencies, so they can work alongside your legacy test platforms and processes.
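For the green-screen side, tests can be scripted against a terminal emulator. Below is a minimal sketch using py3270, an open-source Python wrapper around the x3270 TN3270 emulator; the hostname, field coordinates and credentials are hypothetical placeholders.

```python
# Sketch of automated green-screen (TN3270) testing with py3270.
# Hostname, screen positions and credentials are hypothetical.
from py3270 import Emulator

em = Emulator(visible=False)          # headless terminal session
em.connect("mainframe.example.test")  # placeholder TN3270 host

em.wait_for_field()                   # wait until the screen accepts input
em.fill_field(10, 20, "TESTUSER", 8)  # row 10, col 20: user ID field
em.fill_field(11, 20, "SECRET", 8)    # row 11, col 20: password field
em.send_enter()

em.wait_for_field()
# Read 20 characters from row 1, col 2 and assert on the expected screen.
assert "MAIN MENU" in em.string_get(1, 2, 20)
em.terminate()
```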

2. Use virtualization to overcome mainframe testing constraints

Once you’ve adopted open-source test tools, how will you use them to determine the impact of upgrades and fixes on processing capacity and the performance of distributed applications? How will you determine whether code changes will trigger a cascading failure and prevent a return to a steady state? To do so, you need to test in representative preproduction environments. But critical systems may be unavailable, and distributed composite applications may not line up well with your mainframe systems.

The answer is service virtualization, which lets you model the core business logic in your mainframe platforms and how it integrates with in-house and third-party systems. Your production environment continues as normal in one virtual space, while application testing is done in another. By identifying the boundaries between the components involved (the application, database, message queuing system, etc.), you simulate production conditions, using agent-based virtualization to record requests and responses. There is no modification to your Customer Information Control System (CICS), DTP commands or target programs. Service virtualization also supports “negative scenario” test simulations, so you can explore error conditions, data corruption, elevated latency between components, a slow-to-respond backend, and other “what-ifs” that are difficult to reproduce and test in real life.
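To illustrate the idea, here is a stripped-down sketch of a virtual service: recorded request/response pairs are replayed in place of the real backend, with a switch for injecting latency as a negative scenario. The paths and payloads are hypothetical; real service virtualization tools capture these recordings automatically via agents.

```python
# Minimal virtual service: replay recorded request/response pairs in
# place of a real mainframe backend. Paths and payloads are hypothetical.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Pairs a recording agent might capture from production traffic.
RECORDINGS = {
    "/accounts/12345": {"status": 200, "body": {"balance": "1042.17"}},
    "/accounts/99999": {"status": 404, "body": {"error": "not found"}},
}

INJECTED_LATENCY = 0.0  # raise to simulate a slow-to-respond backend


class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(INJECTED_LATENCY)  # "what-if": elevated latency
        recorded = RECORDINGS.get(
            self.path, {"status": 503, "body": {"error": "unrecorded request"}}
        )
        payload = json.dumps(recorded["body"]).encode()
        self.send_response(recorded["status"])
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # Applications under test point here instead of at the mainframe.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```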

Ultimately, virtualization facilitates your shift-left approach: testing and validating mainframe application development sooner, at a point where issues are easier, less disruptive and less expensive to resolve. You get faster build-and-test cycles, higher-quality applications and lower costs.

3. Manage test data with sensitive information in mainframe testing

Mainframes are typically the system of record for core business logic and data, making built-in security capabilities a must. But mainframe application testing, including the virtualized services you create, requires access to that sensitive real-world production data.

To date, there have been two approaches for addressing this dilemma. One is to route copies of production data into a test environment as the data is generated, while making sure personally identifiable information and other sensitive data is deleted or anonymized. The other is to capture test data from the production environment and replay it asynchronously in preproduction. Both approaches have limitations that stem from the slow, complex and manual way that test data is created and provisioned.
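As a simple illustration of the first approach, the sketch below masks personally identifiable fields while copying records into a test environment. The field names and hashing rule are assumptions for illustration; a real program would follow your own data classification policy, and a keyed hash or format-preserving encryption would be safer in practice.

```python
# Sketch: anonymize PII while routing copies of production records into
# a test environment. Field names and masking rules are hypothetical.
import hashlib

PII_FIELDS = {"name", "ssn", "email"}  # fields that must never reach test


def anonymize(record: dict) -> dict:
    """Replace sensitive values with stable, irreversible tokens."""
    masked = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            # A deterministic hash keeps referential integrity (the same
            # customer always maps to the same token) without exposing data.
            masked[field] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[field] = value
    return masked


production_row = {"name": "Ada Lovelace", "ssn": "123-45-6789", "balance": "1042.17"}
print(anonymize(production_row))  # the balance survives; identifiers do not
```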

In cases where system components already exist, virtual data for preproduction testing is created through manual record-playback with cross-system availability dependencies, which results in delays. Exposing production data in preproduction environments also increases the risk of a data breach and compliance penalties. Where relevant system components do not exist, request/response pairs of sample data are created manually. Writing these scripts is often complex and time-consuming, and the results frequently fail to reflect realistic functional behavior or performance, current message definitions, API specification changes, outliers, or future scenarios that require more rigorous testing.

Clearly, a more sophisticated approach is needed: generating virtual data on demand, free from cross-system dependencies and constraints, while reducing infrastructure costs and project delays. You can generate virtual data for service components to cover a full range of possible scenarios without manual creation or costly maintenance. Alternatively, you can create virtualized data synthetically from scratch to complement new and existing data. Using virtualized data to complement production data, teams can access the latest preproduction environment to fully test code and detect defects in parallel.
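A rough sketch of the synthetic alternative follows; the scenario names and message shapes are hypothetical, but the point is that outliers and boundary cases get generated systematically rather than hand-written.

```python
# Sketch: generate synthetic request/response pairs on demand, covering
# normal cases, boundary values and outliers. Shapes are hypothetical.
import random

SCENARIOS = ["ok", "empty_account", "max_balance", "invalid_id"]


def synthesize_pair(scenario: str) -> dict:
    account = f"{random.randint(1, 99999):05d}"
    if scenario == "ok":
        return {"request": {"account": account},
                "response": {"status": 200, "balance": round(random.uniform(1, 10_000), 2)}}
    if scenario == "empty_account":
        return {"request": {"account": account},
                "response": {"status": 200, "balance": 0.0}}
    if scenario == "max_balance":  # an outlier manual scripts rarely cover
        return {"request": {"account": account},
                "response": {"status": 200, "balance": 999_999_999.99}}
    return {"request": {"account": "XXXXX"},  # invalid_id
            "response": {"status": 400, "error": "malformed account id"}}


# Build a virtual-data set spanning every scenario, ready to feed a stub.
dataset = [synthesize_pair(s) for s in SCENARIOS for _ in range(25)]
```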

Continue using production data securely, without sensitive information leaving your private network. Avoid compliance concerns and risks while staying on budget: first, install a physical or virtual test server on-premises, then use a Docker-based private agent to generate test loads behind your firewall and within your private network or cloud. The agent “listens” and triggers application performance tests across the firewall using the load traffic you generate. It retrieves predetermined, anonymized test configurations from a SaaS test platform in the cloud and matches them with the relevant on-premises data, indexed for use as tokens. Test scripts are then executed using the tokens and loads you’ve generated behind your firewall. All aggregation, manipulation and advanced logic happen in the cloud; anonymized reports and analysis are sent back to your on-premises agent for decoding, allowing you to view the test results.
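The token exchange at the heart of this setup can be sketched roughly as follows; the token format and example URL are hypothetical, and a production agent would handle the substitution transparently.

```python
# Sketch of the on-premises token exchange: sensitive values are swapped
# for opaque tokens before anything crosses the firewall, and anonymized
# cloud reports are decoded locally. Token format and URL are hypothetical.
import itertools

_counter = itertools.count(1)
_token_to_value = {}  # token -> original sensitive value (never leaves on-prem)


def tokenize(value: str) -> str:
    """Issue an opaque token; the real value stays inside the network."""
    token = f"TOK-{next(_counter):06d}"
    _token_to_value[token] = value
    return token


def detokenize(report: str) -> str:
    """Decode an anonymized cloud report back into readable results."""
    for token, value in _token_to_value.items():
        report = report.replace(token, value)
    return report


url_token = tokenize("https://internal.example.test/api/accounts")
# Only the token goes to the SaaS platform; aggregation happens in the
# cloud, and the returned report still references the token.
cloud_report = f"p95 latency for {url_token}: 180 ms"
print(detokenize(cloud_report))
```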

Select the right tools for mainframe testing of applications

This new way of testing keeps sensitive data, test scripts and logs behind your firewall, including all URLs, APIs, command names and arguments. The advantages are many: you can keep information secure while flexibly scaling your test program on demand, and you can equip teams to test code fully in your latest mainframe preproduction environment and detect defects earlier using virtualized data, shifting your mainframe application testing left in the lifecycle with open-source technologies.

The right testing tools can help you jump-start your continuous, preproduction application testing program. That’s why many companies are adopting CA BlazeMeter. This SaaS-based, open source-based platform is powerful, easy to use and supports each of the best practices described above. You can run massively scalable, open source-based performance tests against all your mainframe apps, whether web, mobile, microservices or APIs, from the cloud or behind your firewall. CA Technologies even offers testing experts to help you create, execute and analyze tests, resolve issues and support peak-event readiness.

Learn more

Request your free trial today to see how easy it is to bring accelerated development and testing cycles to mainframe software development. Visit www.blazemeter.com/shiftleft to learn more.


Collin Chau

Collin enjoys helping traditional Central IT and Agile development teams Plan-Build-Test-Run for speed in quality application releases. Taking on grassroots engagements with practitioners in application test automation for the Agile software development lifecycle and DevOps delivery, Collin has broken new ground in next-generation programs such as predictive insights for Dev-QA teams and hybrid cloud management solutions for Ops teams in the software-defined datacenter.
