Infrastructure as Test
Previously, I wrote a theory guide covering testing approaches for mobile applications. Theory is easier said than implemented, though. Honestly, testing in practice is not as simple as it looks in theory.
Testing is complicated, and it's even more challenging when you didn't plan for it from the beginning; more difficult still when you thought you could just write some automation scripts to save time, only to spend three times the effort maintaining them. And it's most painful when it comes to scaling tests so that they're agreed upon and supported by the whole team. You've heard about the ultimate benefits of continuous testing, so it always sounds good to implement that infrastructure, but then you can't keep the CI flow properly stable.
When I say this term, I want to point to the support and the 'things around' testing. Following the shift-left principle, testing activities are present in every part of the development iteration.
How you integrate your team and build an infrastructure to support testing is more critical than "hey, find an automation framework and let's start writing some test scripts." Using an in-house framework, or even buying a commercial solution, to reduce testing effort and increase collaboration is just one piece of the whole testing infrastructure.
What affects the testing infrastructure:
- The test strategy
- The team
- The automated testing framework
- The test execution
- The CI/CD solutions and pipelines
- The application under test, including versioning, environments, and deployments
- Other types of testing
The test strategy
This is the most essential input to the outcome of the testing infrastructure. It is not a typical test strategy that only highlights what needs to be tested; it also covers the approaches for maturing the testing infrastructure. Part of it is the CI/CD pipeline maturity level.
The team
Collaboration is the fundamental key here. Your testing team and even the developers have to agree on what you want to construct; building it all by yourself won't go anywhere. If the developers know how testing is built and how it pinpoints speedy results on pull requests, they will undoubtedly support you.
Collaboration does not only mean collaborating on test scripts. Do you want the team to collaborate on the test scripts? Then the baseline of that collaboration should be encouraged by the testing framework's extensibility.
The test results have to be visible to your whole team, including developers, managers, and so on. Full visibility of the test results on the infrastructure must be public and presented at both a high level and a detail-oriented level.
Visibility is attainable through integration between the test management tool and the test pipeline that feeds it. In some cases, that link can break, so keep an eye on it.
Automated testing framework
The baseline of a testing infrastructure is the testing framework.
At first, the team has to decide: is it worth creating an in-house framework, doing a POC with one of the open-source frameworks out there, or buying a commercial solution?
Or can supporting solutions such as Cypress, Playwright, Karate, or Katalon be used? The answer lies in how the team wants to address automated testing going forward. The testing framework should be a whole-team effort, not just yours or a few individuals'.
You must separate your thinking about the testing framework from the scripts created with it. The testing framework I refer to here is very distinct from the automated test scripts. How fast, firm, or maintainable the scripts are all depends on the testing framework's nature. You can produce a testing framework very quickly, but the time spent developing test scripts against AUT version changes, or integrating changes from the sprint, will quickly set the testing team on fire.
The test execution
Test execution goes hand in hand with the tests you have in the test repository. There is an interesting article on mapping testing efforts into automation.
A single execution session confines not just one test but multiple tests at the same time, on given testing environments, on given browsers/devices, and on given pipelines. The testing infrastructure comprises different solutions for specific types of breakpoints.
A proper test script has three phases: Arrange – Act – Assert. Arrange is where you set up your test and, most importantly, the test data. Martin Fowler has a useful guide regarding test data preparation in his book. The main principle here, with respect to the testing infrastructure, is to highlight the actual test data each test needs.
Attaining this depends on the testing pipeline and how the testing framework obtains data from the data delivery. The immediate approach is to use developer techniques such as test doubles, stubs, static data, and directly seeding data into databases. This also helps with the complex work of streamlining the CI processes without any human intervention.
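As a minimal sketch of the test-double idea (Python here purely for illustration; the service and names are hypothetical), an Arrange–Act–Assert test can stub the data delivery instead of hitting a real backend:

```python
# Hypothetical service under test: computes an order total from a price lookup.
class OrderService:
    def __init__(self, price_source):
        self.price_source = price_source  # injected, so tests can substitute a stub

    def total(self, items):
        return sum(self.price_source.price_of(name) * qty for name, qty in items)


class StubPriceSource:
    """Test double: returns canned prices instead of calling a real API or database."""
    def __init__(self, prices):
        self.prices = prices

    def price_of(self, name):
        return self.prices[name]


def test_order_total():
    # Arrange: static test data via a stub, so no network or database is needed
    service = OrderService(StubPriceSource({"book": 10.0, "pen": 2.5}))
    # Act
    result = service.total([("book", 2), ("pen", 4)])
    # Assert
    assert result == 30.0
```

Because the stub removes the external dependency, this kind of test stays fast and deterministic on any CI agent.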
How can this be accomplished in the testing infrastructure? To decide which data to use, think of every piece of test data as a model. If it's a person, it will likely have a name and an age. If it's a product, it will likely have a name, price, category, etc. With this in mind, apply changes in both the framework and the CI pipeline:
- For the framework, consider applying a test data factory. Static data from JSON, data tables, or random data can all be supported and are easy to maintain.
- For the CI pipeline, consider seeding data directly into databases if possible. But ensure you don't violate data storage privacy from the end customer's perspective.
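A test data factory along these lines might look like the following sketch (Python for illustration; the person model and field names are assumptions, matching the "a person has a name and an age" example above):

```python
import json
import random
import string


class PersonFactory:
    """Test data factory: every piece of test data is a model.
    A product factory would follow the same shape with name/price/category."""

    @staticmethod
    def from_json(raw):
        # Static data, e.g. loaded from a JSON file checked into the test repository
        data = json.loads(raw)
        return {"name": data["name"], "age": data["age"]}

    @staticmethod
    def random(min_age=18, max_age=65):
        # Random data for tests that only need "some valid person"
        name = "".join(random.choices(string.ascii_lowercase, k=8))
        return {"name": name, "age": random.randint(min_age, max_age)}


# Static variant for deterministic tests:
alice = PersonFactory.from_json('{"name": "Alice", "age": 30}')
```

Keeping both static and random variants behind one factory means test scripts never hard-code data shapes, so a model change is a one-place fix.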
Reporting on a single execution is easy when the tests are executed locally. But our testing infrastructure doesn't run small tests like that. The infrastructure is responsible for determining which tests should run, when and where they are executed, against which environments, and for collecting the overall reports. As I said previously, you can view the final result of an executed session in the CI/CD tool, but that is not really enough.
The reports need to be viewed from different perspectives, not just the creator's. Managers want to see the reports from their own perspective, so reporting needs to be presented at many different levels to adapt to the viewing persona.
I usually use ReportPortal from the beginning, but many other solutions can also aggregate the reports.
Flaky tests are the most common type of failure you will encounter when automated scripts are executed. Google has some must-read articles on this that you can refer to:
- Test Flakiness – One of the main challenges of automated testing
- Test Flakiness – One of the main challenges of automated testing (Part II)
One common way to heal flaky tests is to retry the test as a feature of the testing framework. In my opinion, that's the wrong place for it.
Retries should be handled directly by the CI/CD tool instead. When a testing pipeline fails because the environment is unreachable or the network connection is very slow, rather than letting the framework do the retry, let the CI/CD tool do it. It's very controllable directly from the pipeline.
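One way to keep retries at the pipeline level is a small wrapper that the CI step invokes around the real test command; the command, attempt count, and delay below are assumptions, not a prescription:

```python
import subprocess
import sys
import time


def run_with_retry(cmd, attempts=3, delay_seconds=30):
    """Re-run a failing test command at the pipeline level, not inside the framework."""
    returncode = 1
    for attempt in range(1, attempts + 1):
        returncode = subprocess.run(cmd).returncode
        if returncode == 0:
            return 0
        print(f"attempt {attempt}/{attempts} failed with exit code {returncode}")
        if attempt < attempts:
            time.sleep(delay_seconds)  # give a slow or unreachable environment time to recover
    return returncode


# A pipeline step would call something like:
#   run_with_retry(["mvn", "test", "-Dsuite=smoke"], attempts=3)
```

The pipeline stays in control: it decides the retry policy per stage, and the framework's results remain honest (a flaky test still shows up as a failed attempt rather than being silently absorbed).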
What supplements the analysis of flaky tests is information. By information, I mean all the things you can collect:
- Test failure snapshot
- Capture DOM at the point of failure
- Historical execution results
Logs, snapshots, and captured DOM come from the testing framework. For historical execution results, you can utilize the CI/CD tool or use ReportPortal. ReportPortal is an open-source centralized reporting tool that gives you more insight into historical results, along with the time spent per execution.
There are still many other causes of flaky tests, which I won't go into detail about here. Refer to the Google testing blog posts I've listed above.
The CI/CD solutions
As part of the testing infrastructure, selecting a CI/CD tool is also very important.
The easy pick would be Jenkins, as it's the most popular, but I'd recommend TeamCity instead. Its visual pipeline report makes it much easier to detect flaky tests in your continuous testing runs, and of course its minimalist UI is easier on the eyes.
Another factor is how you integrate tests into the whole development pipeline.
The whole pipeline refers to the development, testing, release, and post-release pipelines together. It's this sum of pipelines that should guide how the testing infrastructure is built, rather than separate components without any linkage.
Constantly monitor what happens post-release to discover interesting things and keep track of user journeys. There is plenty of statistical data per user session, and the journeys users take point to what the infrastructure needs to cover for defect detection and further improvement.
Application Under Test
The application under test (AUT) is another part of the testing infrastructure. If you work on a Scrum project, iteration changes always happen, and many different versions (release candidate, beta, official) and environments (local, staging, production) are present. Especially for mobile applications, the application under test brings its own challenges and is not easy to approach for testing.
Each release has its own version, and its internal delivery for testing has a version too. Each version might differ in UI, workflow, and user scenarios. To truly adapt to this as part of the infrastructure, you have to manage it directly from the automated testing framework.
The testing framework should be designed from the beginning with scale and extension in mind. This is not easy for non-technical testers, but you really need to think about it. Some beginner and advanced articles on design patterns I've read will be useful to you:
- Common UI Automation Patterns And Methodologies: Real-World Examples
- For Complex Design Issues
- Design Patterns in Test Automation
- Design Patterns series
My advice is always to think of the upcoming development infrastructure as a big picture, not a short-term solution. One day the developers will decide to release an internal version for your staff to try first, and that is another break in the development pipeline, affecting your testing infrastructure pipeline as well.
The test environment goes hand in hand with versioning. There will always be the typical staging and production environments. In more mature projects, there will also be a local one and a QA one.
Well, again, dealing with this is also part of the design you need to think about. For me, one easy way is to use a ready-made supporting library, such as Spring, to switch between different environment properties quickly and effortlessly.
Nowadays, microservices are the star in the deployment sky for web applications. For mobile applications, it's the use of third-party distribution services such as TestFlight for iOS or App Center for Android. How these deployments are performed is not covered here; rather, after deployment, how should they blend into the infrastructure?
Every successful deployment is tied to a specific location. For a web application, with the help of Docker and Kubernetes, the deployment location can be a temporary URL for accessing the current code changes. For mobile applications, the app distribution center distributes the installable application file to the devices. It doesn't matter which deployment solution you select; what matters is that successfully compiled code changes become a snapshot for the testing infrastructure to grab and execute.
With that in place, expose the access points of these deployments as environment variables to be used in the testing framework. The testing framework must be able to parameterize its environment configuration based on those variables so that testing is triggered against the correct deployment.
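A minimal sketch of that parameterization (Python for illustration; the variable names `AUT_BASE_URL` and `TEST_ENV`, and the URLs, are made-up examples, not a convention):

```python
import os

# Hypothetical static fallbacks; in the pipeline, the deployment step would
# export AUT_BASE_URL pointing at the freshly deployed snapshot (e.g. a
# temporary per-change URL), which takes priority over these defaults.
DEFAULTS = {
    "staging": "https://staging.example.com",
    "production": "https://www.example.com",
}


def resolve_base_url(env=os.environ):
    """Pick the deployment the tests should run against."""
    if "AUT_BASE_URL" in env:
        # A per-change deployment URL from the pipeline wins
        return env["AUT_BASE_URL"]
    # Otherwise fall back to a named environment
    return DEFAULTS[env.get("TEST_ENV", "staging")]
```

The framework reads only environment variables, so the same test scripts run unchanged against a pull-request deployment, staging, or production.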
The browsers or devices to be tested
Another thing is where you will execute the test scripts: specific browsers, devices, or both. I won't cover fragmentation, or how to decide which targets you should execute tests on, but rather the supporting infrastructure for picking which browsers/devices to execute against.
To be precise, the testing framework should expose a place to configure one or several configurations, e.g. in JSON or YAML format, so that the team just needs to input their desired capabilities to test with.
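For example (Python and JSON for illustration; the capability field names loosely follow the Selenium/Appium style, but this config shape is an assumption, not a standard), the framework could load the targets like this:

```python
import json

# Hypothetical config the team edits; in a real project this would live in a
# capabilities.json (or equivalent YAML) file next to the test code.
RAW_CONFIG = """
[
  {"platform": "web",    "browserName": "chrome",  "version": "latest"},
  {"platform": "web",    "browserName": "firefox", "version": "latest"},
  {"platform": "mobile", "deviceName": "Pixel 7",  "platformVersion": "14"}
]
"""


def load_capabilities(raw=RAW_CONFIG, platform=None):
    """Return the desired-capability sets the execution session should run against."""
    capabilities = json.loads(raw)
    if platform is not None:
        capabilities = [c for c in capabilities if c["platform"] == platform]
    return capabilities
```

The execution layer then fans one session out into one run per capability set, so adding a new browser or device is a config edit, not a code change.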
Other types of testing
You don't do only one specific kind of testing in the whole infrastructure. There will be more, including both functional and non-functional testing. How vast it is depends on the testing needs and the strategy produced as the final outcome of the testing meetings, but those pinpoint what the infrastructure needs to support.
Other types of testing, if feasible, should be included in the testing framework with multi-module project support. The pipeline and the report will then pick up the results of each specific kind of testing for visibility.
This post is quite long and has come to an end. I always keep in mind that testing is complicated. It's not easy to achieve testing efficiency for the whole product while exposing visibility and encouraging collaboration. It's even more difficult when the product is complex and many different tests need to be performed.
Building up a testing infrastructure is just like building your house step by step. When building the house, you will likely meet budget issues, conflicts with your family members, and decorations that change over time because your estimates about the furniture were incorrect. Your dream house takes quite a long time to become a place you are proud of, and the same goes for constructing a testing infrastructure. You can't skip building the small things first to deliver something the team can see as testing effort; it's a step toward wrapping up the many components of the infrastructure.
Nevertheless, the infrastructure has to be measured and maintained so that it does not over-react to disruptive changes in the development cycles. I hope this blog doesn't give out too many theories while highlighting how majorly testing affects the