The importance of automated report analytics

Flexibility and modularity across the framework's separate layers are the most important characteristics when designing an automated testing framework. These designs take considerable time up front to reach an MVP in your team, but they allow improvements and refactors to be slotted in later without accumulating technical debt.

It looks easy at first glance, when the execution output is small because the test suites run on only one or two browsers or mobile devices. Viewing reports becomes a major issue later, however, once executions span many browsers and mobile devices with different versions that you can no longer track easily.

The Questions That Arise

The classic approach is to allocate time to go through EACH generated report, read it carefully, note the findings in your own spreadsheets, and look them up later to figure out:

  • Does this issue happen on other browsers? Is it the same issue or a different one?
  • Are there any flaky tests in between?
  • How frequently does it happen?
  • What is the trend of this issue? Is it likely to happen again in the future?
  • How many tests passed or failed within the same session?
  • How many test executions completed successfully?
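Several of these questions can be answered mechanically once the per-test outcomes are collected in one place. The sketch below, using a hypothetical record layout (the `execution`, `browser`, `test`, and `status` fields are assumptions, not a real framework's schema), shows how mixed pass/fail outcomes reveal flaky tests:

```python
from collections import defaultdict

# Hypothetical raw results: one record per test, per execution, per browser.
results = [
    {"execution": 1, "browser": "chrome", "test": "login", "status": "pass"},
    {"execution": 1, "browser": "firefox", "test": "login", "status": "fail"},
    {"execution": 2, "browser": "chrome", "test": "login", "status": "fail"},
    {"execution": 2, "browser": "firefox", "test": "login", "status": "pass"},
]

# Group the set of observed statuses per test: a test with more than one
# distinct outcome across executions/browsers is a flakiness candidate.
outcomes = defaultdict(set)
for r in results:
    outcomes[r["test"]].add(r["status"])

flaky = [test for test, statuses in outcomes.items() if len(statuses) > 1]
print(flaky)  # ['login']
```

Grouping by `browser` instead of `test` would answer the "does this issue happen on other browsers?" question from the same raw records.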

Depending on your goals, more and more questions may come up, and you dig into collecting as much data as possible to answer them. And then you keep struggling with how best to visualize it.

For example, consider the statistics below, drawn from what you've collected:

Execution | # Test Cases | Pass % | Fail % | Skipped %

These statistics are just a simple dataset collected so far from many generated HTML-style reports. Breaking the whole dataset down and explaining its trends is exhausting without visualizations.
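Producing one row of that table from raw per-test outcomes is straightforward. A minimal sketch, assuming each test's status is already collected as a plain string (the column names mirror the table above):

```python
from collections import Counter

# Hypothetical per-test outcomes for a single execution.
statuses = ["pass", "pass", "fail", "skipped", "pass"]

counts = Counter(statuses)
total = len(statuses)

# One row of the summary table: raw counts turned into percentages.
row = {
    "# Test Cases": total,
    "Pass %": round(100 * counts["pass"] / total, 1),
    "Fail %": round(100 * counts["fail"] / total, 1),
    "Skipped %": round(100 * counts["skipped"] / total, 1),
}
print(row)  # {'# Test Cases': 5, 'Pass %': 60.0, 'Fail %': 20.0, 'Skipped %': 20.0}
```

Repeating this per execution yields exactly the table layout above, ready to feed a chart instead of a spreadsheet.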

Without doubt, you also need to figure out a candidate approach to handle the presentations. What about other factors that can affect the presentations, such as build version or execution time? How do you capture the historical results of each execution, tied to its test cases?
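One way to capture those historical results is to attach the execution metadata (build version, timestamp) to the per-test-case outcomes in a single record. This is only a sketch of one possible shape; the class and field names are assumptions, not part of any particular framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical schema: each execution record carries the metadata that
# later presentations may group or filter by (build version, time),
# plus the test-case results tied to that execution.
@dataclass
class ExecutionRecord:
    execution_id: int
    build_version: str
    executed_at: datetime
    results: dict = field(default_factory=dict)  # test name -> status

record = ExecutionRecord(
    execution_id=42,
    build_version="1.4.0",
    executed_at=datetime(2021, 6, 1, tzinfo=timezone.utc),
    results={"login": "pass", "checkout": "fail"},
)
```

With records shaped like this, plotting a trend per build version or per date becomes a group-by over stored fields rather than another pass through raw HTML reports.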

Solving the Presentation Problem

Good presentations require knowing the target answer and how refined data is produced from raw data. The figure below illustrates this process:

Figure 2 – Consolidate reports flow

Each phase involves different steps to meet the requirements:

  • Collect Data: Which data should you collect, and at which level: the execution session, the test suite, the test case, or even the test step?
  • Process Data: Can the raw data be used directly in visual figures?
  • Refine Data: Should any raw data be refined or transformed? Should all the collected data be used, or only a subset?
  • Select Presentations: Which kinds of presentations do you want to use? Do they answer the overview question you are looking for?
  • Analyze Presentations: Based on the existing presentations, can you forecast trends that feed further presentations? Do they answer the question you are looking for?
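The phases above can be sketched as a small pipeline. The function names and the record layout here are purely illustrative assumptions, meant only to show how each phase hands refined data to the next:

```python
# A minimal sketch of the collect -> refine -> present flow.

def collect(raw_reports):
    # Collect Data: flatten per-suite reports into one list of records.
    return [r for report in raw_reports for r in report]

def refine(records):
    # Refine Data: drop records that cannot feed a chart (missing status).
    return [r for r in records if r.get("status")]

def present(records):
    # Select Presentations: here, a pass-rate figure per execution.
    by_exec = {}
    for r in records:
        by_exec.setdefault(r["execution"], []).append(r["status"])
    return {
        ex: round(100 * sts.count("pass") / len(sts), 1)
        for ex, sts in by_exec.items()
    }

raw = [[{"execution": 1, "status": "pass"}, {"execution": 1, "status": "fail"}]]
print(present(refine(collect(raw))))  # {1: 50.0}
```

The Analyze Presentations phase would then sit on top of this output, comparing pass rates across executions to spot trends.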

The Solutions

Nowadays, there are many tools and cloud services that provide an overview of test executions through a set of high-level dashboards, then break them down into detailed dashboards that fit your needs.

However, introducing those analytics solutions, or showing how to integrate them with the framework, is not the end goal of this blog. The main goal is a further series that walks through all the phases above and builds a custom report dashboard, so please look out for the upcoming posts.
