Best practices for automation testing

Published Jun 22, 2020, 12:24 PM

Software testing plays a vital role in delivering high quality products and is a core component of our delivery process at Symbiote.

Overview

To speed things up, we run automation tests - the machine does the testing for us, as many times as we like and much faster than if we were to test manually.

The value of automation testing

A great automation testing setup delivers quality assurance fast, regularly, reliably and cost-effectively.

A well-designed automation test suite is made up of a broad range of tests covering critical pieces of functionality that can be run in a short amount of time.

This allows testing to be run more regularly, quickly providing feedback that new features don’t negatively impact the user experience or break existing pieces of functionality.

Because automation tests take less effort to execute, you can ensure the same quality levels that would otherwise take days or weeks to verify manually.

The overall release process is faster and smarter because the test execution can be triggered automatically in the release pipeline whenever a code change is introduced. The feedback from this execution can determine the quality of the code change introduced.

15 Best Practices

This article highlights best practices for automating integration testing. Without further ado, let’s look at them one by one:

1. Tying tests to functionality rather than the implementation

It’s important to tie tests back to functionality rather than the implementation when writing integration tests. Tests should be written in a way that allows them to continue to pass unless the functionality breaks; a change in implementation shouldn’t affect them as long as the functionality is still the same.

For example, if a button moves from one side of the page to the other, the tests should continue to pass.

This can be achieved by following best practice #2.

2. Unique identifiers

When selecting elements during integration tests, two things need to be kept in mind:

Firstly, a unique identifier to select the element.
Secondly, a selector that aligns with the user’s perspective when interacting.

Let’s drill down.

Unique identifier - In order of precedence, the attributes used to select an element could be data-test-id, aria-label, id or class. For example, a selector for a select dropdown or an input element that has one of these attributes is a good one, as long as it’s unique enough to identify that one element on the page.

User’s perspective - A user may look for a button containing certain text, so a selector that looks for that text is a good one, again as long as it’s unique enough to identify that one element on the page.
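
The article doesn’t prescribe a particular automation tool, so as a minimal sketch, here is how those selector preferences might look in Cypress-style syntax (every attribute and value below is hypothetical):

```js
// Preferred: a dedicated test attribute, stable across implementation changes
cy.get('[data-test-id="country-select"]').select('Australia');

// Next: an accessibility attribute that mirrors what assistive technology sees
cy.get('[aria-label="Search"]').type('automation');

// Fall back to id or class only when nothing better is available
cy.get('#email').type('jane.doe@example.com');

// User-perspective selector: find the button by the text the user reads,
// so the test keeps passing even if the button moves across the page
cy.contains('button', 'Save').click();
```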

3. Test one piece of functionality at a time

Each piece of functionality needs to be automated in a test of its own.

If one test covers multiple pieces of functionality and the first piece breaks, the status of the remaining pieces is masked until the first piece is fixed. At that point, it is difficult to predict the overall impact of the code change. So always test one piece of functionality at a time.

For example, creating a contact, updating a contact and deleting a contact should all be automated as separate tests.
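
A sketch of that structure, again in Cypress-style syntax:

```js
// Each piece of functionality gets a test of its own, so a failure in one
// doesn't mask the status of the others.
it('creates a contact', () => { /* ... */ });
it('updates a contact', () => { /* ... */ });
it('deletes a contact', () => { /* ... */ });
```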

4. Bidirectional traceability

While this may differ depending on the software development lifecycle of the project, tracing an automated test all the way back to its requirements and vice versa is crucial. 

This process helps measure the value of the test, understand the test coverage achieved, and identify the impact a code change makes on the requirements.

For example, Requirement X can have 4 test cases, and each test case can have an automated test for it. The test cases for a high-priority requirement have high value, and so do their automated tests.
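
Tooling for traceability varies; one lightweight illustration is to embed requirement and test case IDs in the test names themselves (the IDs and convention here are purely hypothetical):

```js
// The describe/it names trace each automated test back to its requirement
describe('REQ-X: Contact management', () => {
  it('REQ-X-TC-01: creates a contact with mandatory fields', () => { /* ... */ });
  it('REQ-X-TC-02: rejects a contact without an email address', () => { /* ... */ });
});
```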

5. Wait until no loader is present before interacting

Waiting for no loader to be present on the web page before proceeding to the next step avoids premature interactions with the User Interface (UI) and makes the test more reliable. Different automation tools strive to do such things under the hood for us, but that’s not always the case. In my experience, it’s surely worth explicitly making sure there is no loader present on the web page.
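
In Cypress-style syntax, this could look like the following (the loader selector is hypothetical):

```js
// Wait until the loader has left the DOM before interacting further
cy.get('[data-test-id="loader"]').should('not.exist');
cy.contains('button', 'Save').click();
```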

6. Take a human-centred approach, avoid using delays

Loaders are a common indicator that a web page is still loading, but they are not always present. Without one to wait on, the automation test simply rushes to interact with an element the very moment it appears in the UI. Hard-coded delays may work around this, but they are not best practice.

To deal with this, let’s think of it from a human perspective. What do we look for before we interact with an element in the UI? We wait for it to appear. That’s exactly what we need to automate to achieve human-like behaviour in our tests: always wait for the element to be visible before interacting with it.

Note - Sometimes, due to window size and with certain automation tools, you may further need to scroll to the element before interacting with it.
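
A sketch of waiting for visibility, including the scroll mentioned in the note (the selector is hypothetical):

```js
// Wait for the element to be visible, as a user would, before interacting
cy.get('[data-test-id="submit-button"]')
  .scrollIntoView() // may be needed depending on window size and tool
  .should('be.visible')
  .click();
```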

7. Run each test independently

As a best practice, run each test in isolation. This means that the test does not depend on the one(s) executed before it as a precondition for running successfully.

For example, the first test executes steps to create a contact and the second test executes steps to edit a contact. If the second test utilises the same contact that was created in the first test, there are two drawbacks to this approach.

Firstly, if the first test fails, the second test will fail as a consequence. Secondly, this approach introduces the dependency that the second test cannot be executed until the first test has finished executing successfully.

Running tests in isolation enables the tests to be executed in any order and simultaneously on different machines to optimise the test run, which is a crucial factor in achieving Continuous Integration/Continuous Delivery (CI/CD) in the release process.
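
Sketched with the same hypothetical contact example:

```js
// Anti-pattern: the second test depends on state created by the first
it('creates a contact', () => { /* creates the contact */ });
it('edits the contact created above', () => { /* fails whenever the test above fails */ });

// Better: the edit test stands alone and seeds what it needs
it('edits a contact', () => {
  cy.request('POST', '/api/contacts', { name: 'Jane Doe' }); // hypothetical endpoint
  cy.visit('/contacts');
  // ... edit the contact and assert the result
});
```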

8. Each test can create its own test data

While a fixed test dataset can be seeded before the test suite is executed, this may not be a sustainable option. Firstly, with constantly changing test data, it gets difficult to maintain over time. Secondly, whenever test execution is distributed across multiple machines, the fixed dataset needs to be seeded on each machine, so the seeding time isn’t distributed along with the tests.

The solution is to create the data a test needs as a test fixture, at the beginning of that test. Deterministic data created within each test produces deterministic results, and because the setup lives inside the test, the time taken to create fixtures is parallelised whenever test execution is distributed across multiple machines.
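
A sketch of an API-seeded test fixture (the endpoint, payload and route are all hypothetical):

```js
beforeEach(() => {
  // Create the contact this test needs, fresh on every run
  cy.request('POST', '/api/contacts', {
    name: 'Test Contact',
    email: 'test.contact@example.com',
  }).then((response) => {
    cy.wrap(response.body.id).as('contactId'); // expose the id to the test
  });
});

it('updates the contact', function () {
  // Aliases are available on `this` in function-style callbacks
  cy.visit(`/contacts/${this.contactId}`);
  // ... update the contact and assert the result
});
```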

9. Do not hard code test data

Make sure that data is never hard-coded in the test. Anything that is bound to change can be declared as a variable at the beginning of the test. This makes switching to a test fixture implementation much easier in the future, since only the variable declarations need replacing; the rest of the test stays unaffected.
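
A minimal sketch, with hypothetical values and selectors:

```js
// Declare anything that may change up front instead of hard-coding it below;
// swapping these for a test fixture later leaves the rest of the test untouched
const contactName = 'Jane Doe';
const contactEmail = 'jane.doe@example.com';

it('creates a contact', () => {
  cy.visit('/contacts/new');
  cy.get('[data-test-id="name"]').type(contactName);
  cy.get('[data-test-id="email"]').type(contactEmail);
  cy.contains('button', 'Save').click();
  cy.contains(contactName).should('be.visible');
});
```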

10. Modularise where possible...

When a particular set of steps is repeated across multiple tests, it can be modularised into a helper method (a partial test case). However, we need to carefully assess what we’re trying to modularise.

For example, if we’re creating a contact in multiple tests, it means we are unnecessarily testing the contact creation functionality in multiple tests (refer to best practice #3 to know why it is not a good idea). This means we need one test for contact creation functionality and the remaining tests need a contact as test data.

On the other hand, if multiple tests open the browser, visit a given page and wait for the page to load, this is a good candidate. Another example would be a method to wait for a loader to not be present since this would apply regardless of which page of the application or which test it is.
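
A sketch of a good candidate for modularisation, assuming a hypothetical loader selector:

```js
// Generic across pages and tests, so worth extracting into a helper
const visitAndWaitForLoad = (path) => {
  cy.visit(path);
  cy.get('[data-test-id="loader"]').should('not.exist');
};

it('shows the contacts list', () => {
  visitAndWaitForLoad('/contacts');
  // ... assertions
});
```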

11. … But avoid excessive modularising

While modularising is a good practice, in the case of automation testing, it is also about keeping it simple, maintainable, and readable. It’s about having the machine do the testing as close to human-like behaviour as possible. It is okay to have tests that simply visit a page, interact with a few elements, and assert the expected behaviour. Trying to modularise simple things like interacting with input elements or select dropdowns would not serve the purpose since different elements on different pages will be selected by different attributes based on what is unique.

For example, one select dropdown may have a data-test-id that we can use, whereas another may not, resulting in us utilising, say, its class attribute.

12. Separate the actual test execution from the setup

Not every line in the test is the actual test itself; the first few lines may just be the test fixture setup. Depending on which automation tool and language is used, it is a good idea to separate the setup process into a before() block, or to use appropriate comments to differentiate between the two.
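
A sketch using a beforeEach() block (the endpoint and element text are hypothetical):

```js
describe('contact deletion', () => {
  beforeEach(() => {
    // Setup: seed the contact this test needs
    cy.request('POST', '/api/contacts', { name: 'To Be Deleted' });
  });

  it('deletes a contact', () => {
    // Test: only the behaviour under test lives here
    cy.visit('/contacts');
    cy.contains('To Be Deleted').click();
    cy.contains('button', 'Delete').click();
    cy.contains('To Be Deleted').should('not.exist');
  });
});
```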

13. Avoid using flow control

Automation tests are basically code written to test the application code. The results from each test case have to be accurate and identical after each run. Introducing any flow control, for example if-else statements, switch cases, for loops, or while/do-while loops, indicates that different behaviour is expected from an individual test, which shouldn’t be the case. Any conditional flow means there is logic involved expecting different behaviour based on different outcomes. Any looping means the same set of steps is executed multiple times, which is the opposite of best practice #3. Not to mention, any flow-control logic creates the need to test that logic, which defeats the purpose of automation testing.
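
An illustrative sketch of the difference, using a conditional-click anti-pattern sometimes seen in UI tests (the selector is hypothetical):

```js
// Anti-pattern: the test tolerates two different behaviours,
// and the branching itself now needs testing
cy.get('body').then(($body) => {
  if ($body.find('[data-test-id="banner"]').length) {
    cy.get('[data-test-id="banner"]').click();
  }
});

// Better: make the state deterministic, then assert it directly
cy.get('[data-test-id="banner"]').should('be.visible').click();
```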

14. Automation testing of use cases that add value

Automation of tests is great but do not underestimate the cost, effort, and time for setup, implementation, maintenance, and management of an automation framework.

For software development lifecycles (especially iterative and incremental ones) where the project evolves over time and code changes are introduced regularly, existing functionality needs to be tested frequently (Regression Testing).

Hence, test cases in the regression testing suite are good candidates for automation. The use of automation testing should not entirely replace manual testing. There could be testing activities where manual testing is more effective. One such example would be exploratory testing.

15. Complete assertions

Oftentimes, amid the visits and clicks in our automation tests, we may miss a very important step: the expected result itself! This leaves incomplete assertions scattered across the test. After every interaction with the UI, it is crucial to complete the test step by asserting that the expected result for that step was achieved.

For example, if clicking a button results in an overlay on the screen, make sure to assert it. If this step fails, you’ll know straight away that the click failed. Without that assertion, once the test clicks the button it moves on to the next step, interacting with the overlay, and that is where the test fails instead. This makes debugging the failed test a lot more complex: only after inspecting the artefacts would we know that the failed click caused the original issue.
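
A sketch of a complete test step (selectors hypothetical):

```js
// Assert the expected result of the click before moving on;
// a failed click then fails here, not at the next interaction
cy.contains('button', 'View contact').click();
cy.get('[data-test-id="contact-overlay"]').should('be.visible');

// Only now interact with the overlay's contents
cy.get('[data-test-id="contact-overlay"]').contains('Edit').click();
```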

Conclusion

To conclude, automating integration tests is powerful and brings many benefits, including facilitating the CI/CD pipeline and never having to manually retest functionality that has already been automated. By keeping these best practices in mind, we can make our automated integration tests far more efficient, valuable, reliable, simple and maintainable.

Additional context

Some context around the automation of tests on different testing levels

Automation tests can be run on many test levels - unit testing level, integration testing level and so on.

While automation of tests is great, it is equally important to understand the context of testing for the given test level.

For example, unit testing is done to ensure the given component works as it should, so that any changes made to the code do not break the individual component’s expected behaviour. Multiple components come together to deliver a piece of functionality. Integration testing is done to ensure the functionality works as expected so that any changes made to the code do not break the functionality.

Integration testing is usually performed by the tester to validate that the functionality works as intended, and integration tests covering the critical paths are added to the regression testing suite. Every time there is a code change, we want to ensure that the existing functionality hasn’t been impacted, which means regression testing is performed regularly - a good time for machines to step in and do the job for us. This leads to the need for automation of integration tests.

Hi, I'm Sakina

"I’m passionate about test automation and love learning and implementing best practices, not just in testing but in the entire software development lifecycle to ensure delivery of high quality products."

Sakina Abedi - QA Analyst - Pilates Enthusiast
