I would like to share the testing tools I rely on every week to work effectively and improve my performance.

I do outsourced testing and work with many types of products: mobile applications and games, web projects, serious desktop cryptocurrency projects, and so on. In this article, I would like to show you the most convenient and useful tools that help me optimize my work when testing mobile applications.

Mobile applications

The most important thing to emphasize in mobile application testing is functional testing. Only once you have decided who the product is made for and who the end consumer is can you write competent end-to-end tests for your application.

We always begin by testing compliance with the requirements and the design of the application. A good QA engineer knows the requirements for the product under test; an excellent one is on friendly terms with the design. That means not only being able to look into Figma, InVision, or Zeplin, but also understanding how the UI/UX of the application is organized.
To capture all user flows across the application screens more accurately, a mind map is usually compiled. The most convenient tools for me are XMind, Mindomo, and MindMeister.

Once the mind map exists and is kept up to date, testing the application becomes easier, and it is especially convenient for updating and extending the test documentation (a checklist, for example).

A good checklist is based on the project requirements, the feature documentation, and the tasks assigned to the team in the current iteration. The mind map helps you keep track of all the nuances, laying excellent groundwork for end-to-end testing. A service that helps you both find ready-made checklists and maintain your own: https://checkvist.com/checklists/476089

Each checklist should also contain cases common to all mobile applications, such as:

  • testers check that all input fields (required and optional) work correctly, that they are displayed properly on the screen, and that the corresponding alerts appear when they are filled in incorrectly
  • the application should not significantly affect the overall performance of the mobile device and must respond properly to interruptions. Testers check all possible interrupts: incoming calls and SMS, low-battery warnings, loss of Internet access or GPS, and sudden device shutdown. The impact on performance can be measured by the battery discharge rate and the load on the device's CPU and RAM. The minimum device requirements are usually defined during development, and it is worth starting performance testing on exactly such devices.
  • if the application handles payment transactions, QA engineers must make sure before each release that it supports every payment system it declares (Visa, Mastercard, PayPal)
  • new functionality in the application should always be checked for compliance with the guidelines of the target mobile operating system:

iOS> developer.apple.com/design/human-interface-guidelines

Android> material.io/design/guidelines-overview

  • do not forget about accessibility testing cases. This type of testing has received a lot of attention recently, and during my career there have been several cases when our build was rejected until we added all the required accessibility features.
  • sooner or later, push notifications appear in any mobile application. Check that tapping a notification takes the user to the intended screen, and that pushes are queued correctly and reach their destination.
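The interruption scenarios from the checklist above can be simulated from the command line on Android. Here is a small sketch, assuming a local emulator with the default serial `emulator-5554`; the helper only builds the command strings, and while these are real adb/emulator-console invocations, verify them against your adb and Android versions:

```python
# Hypothetical helper that builds the adb commands used to simulate
# common interruptions on an Android emulator (adjust the serial as needed).
def interruption_cmd(scenario: str, serial: str = "emulator-5554") -> str:
    commands = {
        # Emulator-console shortcuts for telephony events:
        "incoming_call": f"adb -s {serial} emu gsm call 5551234",
        "incoming_sms": f"adb -s {serial} emu sms send 5551234 hello",
        # Fake a low-battery state on the device:
        "low_battery": f"adb -s {serial} shell dumpsys battery set level 5",
        # Cut the network to exercise offline behavior:
        "airplane_mode_on": f"adb -s {serial} shell cmd connectivity airplane-mode enable",
    }
    return commands[scenario]

print(interruption_cmd("low_battery"))
# → adb -s emulator-5554 shell dumpsys battery set level 5
```

Wrapping the commands in a helper like this makes it easy to drop the same interruptions into automated suites later.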

Tools that will be needed to verify the above cases:

The Charles and Fiddler sniffers remain the most popular network traffic analysis tools. They let you test cases with a dropped or weak Internet connection, inspect outgoing requests and incoming responses, and simulate situations that are difficult to reproduce in real conditions.
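What a sniffer lets you reproduce by hand can also be pinned down in a unit test. A minimal sketch, assuming a hypothetical `fetch_profile` helper and an invented endpoint; the network failure is mocked rather than produced by a real proxy:

```python
import json
import urllib.error
import urllib.request
from unittest import mock

def fetch_profile(url: str, timeout: float = 2.0):
    """Return the parsed JSON profile, or None when the network is down or slow."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError):
        return None

# Simulate the "signal lost" case that Charles/Fiddler let you reproduce manually:
with mock.patch("urllib.request.urlopen",
                side_effect=urllib.error.URLError("offline")):
    assert fetch_profile("https://api.example.com/me") is None

print("offline case handled gracefully")
```

The point is that every screen's "no network" behavior found with a sniffer can be frozen into a regression test like this one.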

Data from sniffers will come in handy later when testing the API. For working with the API itself, I advise specialized tools: Swagger UI and Postman. Both solve two problems: documenting requests and verifying them interactively.
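The same checks you click through in Postman can also be scripted. A sketch with a throwaway in-process HTTP server standing in for the real backend; the endpoint and payload are invented for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in for the backend under test (in practice you would hit the real
# API, e.g. the endpoints documented in Swagger UI).
class FakeApi(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "demo"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeApi)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/users/1"
with urllib.request.urlopen(url, timeout=2) as resp:
    assert resp.status == 200
    assert resp.headers["Content-Type"] == "application/json"
    payload = json.load(resp)
    assert set(payload) == {"id", "name"}  # simple response-schema check
server.shutdown()
print("API contract checks passed")
```

Status code, content type, and response shape are exactly the things worth pinning down once the sniffer has shown you what the app actually sends and receives.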

It is also worth thinking about automating the testing process. One of the most common solutions for automating mobile UI testing is Appium: a relatively low entry threshold, abundant documentation, and a huge community of QA specialists who are always ready to answer questions.

Appium is a free, open-source, cross-platform tool that helps automate applications for both Android and iOS, and it is one of the most widely used tools for creating automated tests for smartphones and tablets.

The undoubted advantages of Appium are its ease of use and its support for many programming languages: Java, Ruby, Python, C#, PHP.

Before you start working with Appium, you must set up the environment: typically Node.js and the Appium server itself, plus a JDK and the Android SDK for Android testing, and Xcode for iOS.

Once the software is installed, you can turn to the application itself. You will need an .apk file for an Android application or an .ipa file for iOS, so that the application is installed on the selected device when the tests run. If the application is not already installed on the device, the test code will install it first and then run the tests.
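The install-then-run behavior is driven by the session capabilities. A sketch in Python (the Appium-Python-Client API) with hypothetical device name and app path; the driver lines are commented out because they require a running Appium server:

```python
# Capabilities for an Android run; all values here are example placeholders.
caps = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",
    "appium:deviceName": "Pixel_6_API_33",
    # Path to the build under test; Appium installs it if it is missing:
    "appium:app": "/path/to/app-debug.apk",
    "appium:noReset": False,
}

# With an Appium server listening on the default port, a session starts like:
# from appium import webdriver
# from appium.options.android import UiAutomator2Options
# driver = webdriver.Remote(
#     "http://127.0.0.1:4723",
#     options=UiAutomator2Options().load_capabilities(caps),
# )
# driver.find_element("accessibility id", "Login").click()

print(sorted(caps))
```

Swapping `appium:app` for an .ipa path and `platformName` for "iOS" (with the XCUITest automation name) gives the equivalent iOS setup.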

Sooner or later in test automation, the question arises: should you test on real devices or on emulators? As practice and ruthless statistics show, emulators are not a panacea. It is very common for everything to work perfectly on emulators, with all tests passing, while on a real device the application is blocked by a security system, by another running application, or by custom firmware (hello, Android!).
My recommendation is to combine both and use device farms: services such as BrowserStack, AWS Device Farm, and Xamarin Test Cloud. You connect to real devices, integrate your automated tests into these services, and review the results. It is still worth keeping target devices in your own device park, as well as devices at the upper and lower bounds (the minimum supported model and a flagship).

A good alternative to Appium is codecept.io.
If you prefer JS as the language for developing automated tests, welcome to CodeceptJS. Detailed documentation, tests that do not take up much screen space (you will see what I mean), and active support for all modern mobile operating systems will make you consider this tool.

Once your project has accumulated a significant number of automated tests, it is a good idea to run them automatically every time a new build is made. Modern CI/CD systems will help you set this up. Personally, I prefer Jenkins or TeamCity, but this is a matter of taste.

Another tool for reducing and optimizing regression testing is the dependency matrix (also known as a traceability matrix). In short, it is a table that records which elements of the system depend on which others. Compiling such a matrix requires an understanding of the application code, and it helps to consult the project architect. In the end, this tool can significantly reduce regression testing time (in my experience, by up to 40%).
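In code terms, the matrix is just a mapping from application modules to the checks that depend on them. A toy sketch with invented module and suite names:

```python
# Hypothetical dependency (traceability) matrix: module -> dependent test suites.
MATRIX = {
    "auth": {"login_suite", "profile_suite", "payments_suite"},
    "payments": {"payments_suite"},
    "push": {"notifications_suite"},
}

def suites_for(changed_modules):
    """Select only the regression suites affected by the changed modules."""
    selected = set()
    for module in changed_modules:
        selected |= MATRIX.get(module, set())
    return sorted(selected)

print(suites_for(["payments", "push"]))
# → ['notifications_suite', 'payments_suite']
```

Instead of running the full regression pass, you run only the suites the matrix selects for the modules that actually changed, which is where the time savings come from.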


Here is my set of useful hints for finding as many bugs as possible, given the specifics of mobile applications.

  • Always check cases for minimizing/restoring the app, waking from sleep mode, and turning the device off and on. On Android there is a setting, Do not keep activities (DNKA). When testing with this setting enabled, be sure to mention the abbreviation in bug reports so the developer can reproduce the issue more easily.
  • Notifications/alerts come in two kinds: local and server-side (i.e., tied to a network connection). Always remember them and check that they work properly. They should always lead to the target screen; otherwise, drop them until the developers have implemented the correct navigation.
  • Use Charles and its counterparts to replay all possible network cases. Users are always on the move, so every screen of your application should handle signal loss.
  • Applications use many device services, such as the camera, gallery, microphone, and so on. Always check cases for access to these services, and especially the cases when access is denied.
  • Remember the peculiarities of operating systems and platforms. For example, iOS requires all .ipa files to be signed by the developer. On Android, you can very often find bugs when switching quickly between screens: the data does not have time to load and the application crashes.
  • During testing, applications usually carry a test certificate so that QA can easily inspect traffic with a sniffer. In the pre-production phase, always check your application with the production certificates.
  • Keep your finger on the pulse of the project. Talk to the developer of the feature you are testing: they may know more nuances than the documentation mentions. Talk to the project manager to better understand task priorities and deadlines. Talk to the designers, and after regression testing do not be lazy about showing them the final form of the new functionality. This practice is called "author's supervision," and it has often helped to find completely unobvious differences between the idea and the implementation.
  • Always budget the right amount of time for testing, and remember the force majeure situations that can always arise: reserve about 20% of the testing time for them. It is better to finish testing before the deadline than to overrun it unpredictably. After all, as we recall, QA is always "to blame for bugs in production."
  • Check that your application has a feedback form and that it is user-friendly. Many bugs are device-dependent, and it is the user, with their unique device and configuration, who can help you investigate a bug.
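The DNKA setting from the first hint can also be flipped without digging through the developer menu. A sketch; `always_finish_activities` is the global setting behind "Do not keep activities," but verify it on your Android version:

```python
# Build the adb command that toggles "Do not keep activities" (not executed here).
def dnka_cmd(enabled: bool) -> str:
    value = 1 if enabled else 0
    return f"adb shell settings put global always_finish_activities {value}"

print(dnka_cmd(True))
# → adb shell settings put global always_finish_activities 1
```

Scripting the toggle makes it easy to run the same suite twice, with and without DNKA, and compare the results.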