HTEC’s first QA Roundtable, held at the company’s Nis office, gathered seven QA engineers from five IT companies to talk about UI automation frameworks, pinpoint their major shortcomings, and discuss possible ways to overcome them.
In the second part of the discussion, the engineers addressed the problems raised in the first part of the session, covering reporting tools, test parallelization, waiting for elements, mapping UI elements, and preparing test data.
This productive format of discussion once again led to some interesting and useful conclusions:
- Allure, Extent, and TestNG generate excellent HTML reports that visually present the test results.
- Parallelization – It is hard to achieve, but one colleague’s experience showed that you can create nodes, each running a small group of Java tests. Each node must be independent of the others, or the tests may fail. This can speed up UI testing, but running too many tests on one node can cause serious performance issues on the server, so the best option is to find the right balance for the available resources.
- One of the biggest issues with UI automation is tests executing faster than the page can render. QAs therefore need ‘wait’, ‘delay’, or ‘sleep’ functions in their tests. The group agreed that the right waiting strategy depends on the test case.
- Collaboration with developers is often necessary when mapping out elements. It is helpful, and sometimes crucial, for developers to add test IDs to speed up the process of writing tests. When that is not an option, XPath is usually the way to go, though it is supported by only a small number of test frameworks. One alternative worth investigating is the jsoup library.
- Preparing test data is a process in itself in any kind of QA automation. QAs tend to move toward more dynamic data, but if the project is small and covers only critical screens, static data can be quicker. Those who want more dynamic data, independent of the environments where the tests run, should create web services (a REST API, for example) that prepare the data before each test run.
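The parallelization approach described above — small, independent groups of tests running side by side — maps naturally onto TestNG's suite file, where each `<test>` block can run on its own thread. A minimal sketch; the suite, test, and class names are placeholders, not from the discussion:

```xml
<!-- testng.xml: with parallel="tests", each <test> block gets its own thread.
     thread-count caps concurrency so the server is not overloaded. -->
<suite name="UiSuite" parallel="tests" thread-count="2">
  <test name="LoginTests">
    <classes>
      <class name="com.example.tests.LoginTest"/>
    </classes>
  </test>
  <test name="CheckoutTests">
    <classes>
      <class name="com.example.tests.CheckoutTest"/>
    </classes>
  </test>
</suite>
```

Tuning `thread-count` against the available hardware is exactly the balancing act the bullet describes.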
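The 'wait' advice above boils down to polling a condition until it holds or a timeout expires — the idea behind Selenium's explicit waits. A minimal, framework-free sketch of that pattern; the class and method names here are ours, not from any library:

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {
    /** Polls the condition every pollMillis until it is true or timeoutMillis elapses. */
    public static boolean waitFor(BooleanSupplier condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;          // condition met before the deadline
            }
            Thread.sleep(pollMillis); // back off instead of busy-waiting
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // The condition becomes true after ~200 ms, simulating a page finishing its render.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 200, 1000, 50);
        System.out.println(ok);
    }
}
```

Unlike a fixed `Thread.sleep`, this returns as soon as the condition holds, which is why the group preferred condition-based waits where the test case allows it.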
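On the static-versus-dynamic data point: even without a dedicated web service, tests can become more independent of their environment by generating unique data per run. A hypothetical sketch — the factory and its naming scheme are illustrative, not part of the roundtable's conclusions:

```java
import java.util.UUID;

public class TestDataFactory {
    /** Returns a unique email so repeated or parallel runs never reuse a test account. */
    public static String uniqueEmail(String prefix) {
        // A UUID stays unique across parallel nodes; a timestamp alone could collide.
        return prefix + "-" + UUID.randomUUID() + "@example.test";
    }

    public static void main(String[] args) {
        System.out.println(uniqueEmail("qa"));
    }
}
```

For a small project covering only critical screens, a handful of static fixtures may still be the quicker choice, as noted above.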
The next step for Nis’s local QA community is creating a communication channel where all the participants can continue discussing hot topics, brainstorming new ideas and solutions, and sharing their domain knowledge. Hopefully, this will be a stepping stone toward organizing future QA Roundtables and other panel discussions, meetups, workshops, etc., that will further empower this part of the IT industry in Nis.