The expert’s guide to mobile app quality assurance

Frank Zinghini

Founder & CEO

With more than 5 million mobile apps available in the app stores and more coming to market each day, it is no surprise that many organizations are in the midst of planning a new mobile app (or updating an existing one).

Mobile app quality assurance is one of the most important factors to focus on throughout the product development lifecycle. If your app crashes or doesn’t perform as expected, it will fail in the marketplace: users won’t keep the app, you will get bad reviews, and you will fail to attract new users.

The statistics bear this out. More than ninety percent of users uninstall a mobile app within the first 30 days, and technical issues such as crashes, slow performance, and errors in functionality are the number one reason they do. These issues can be avoided with a proper mobile app quality assurance process.

Here at AVI, we have a QA team of Software Quality Assurance Engineers, QA Analysts, and QA Testers dedicated to mobile, web, and desktop app QA for our clients. We have learned a lot about mobile app quality control over the years.

We would like to share some of our expertise with you, so you can truly understand the amount of QA work that goes into producing a high-quality product that will be successful in the marketplace.

How our team approaches the mobile app quality assurance process

The QA team

A typical QA team consists of the following roles:

  • QA Engineer—The QA Engineer typically has a computer science background or proven technical ability. They set up the test environment and its configurations, write test cases, and may oversee a specific project.
  • QA Analyst—The QA Analyst has a less technical background but may still lead a project. They write test cases, but do not typically set up the test environment.
  • QA Tester—The QA Tester does not usually write test cases. They spend their time running test cases and performing ad hoc, regression, and smoke testing.

Manual QA testing

QA testing is based on user stories. The user stories are written by the mobile app product specialist in conjunction with the customer, implemented by the development team, and passed on to QA when they are ready for testing.

The QA team reviews the requirements of the user stories for the mobile app and writes test cases. Some test cases will apply to both Android and iOS operating systems, while others may apply to a single OS. The QA team methodically rotates through devices to ensure comprehensive QA testing is done.

If a bug is identified on a device, the QA team performs additional testing to determine if it occurs on other devices. If the bug is related to the specific user story being tested, the story is reopened so the developer can fix the bug. If an unrelated bug is identified, a ticket is created for development to address it.

After a bug is fixed, the QA team performs additional testing to:

  • Make sure the bug is fixed.
  • Make sure the fix did not introduce new issues in the app.

There are tools and techniques to assist in the QA process. We recommend and use:

  • TestRail—This is web-based test case management software used to track, manage, and organize QA testing. It makes results easy to track, provides actionable reports, and integrates seamlessly with bug trackers and automated testing (see the integration sketch after this list).
  • Microsoft Test Manager (MTM)—MTM is a tool used to create and organize test plans and test cases and perform manual tests. A fully configurable test runner captures detailed records of steps performed, behaviors observed, and the status of each test step. When issues are uncovered, reports go directly to the development team so they have the information needed to reproduce the bugs.
  • SpiraTest—This tool allows you to create, edit, and execute test cases with rich text editing. It provides a global scheduling feature and supports templated and data-driven test cases.
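
To illustrate the kind of integration mentioned above, here is a minimal sketch of how an automated test run might report a result back to TestRail through its REST API (the add_result_for_case endpoint). This is not our production tooling: the server URL, credentials, run ID, and case ID are placeholders you would replace with your own values.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class TestRailReporter
    {
        // Placeholder values; a real integration would read these from configuration.
        private const string BaseUrl = "https://example.testrail.io/index.php?/api/v2/";
        private const string User = "qa@example.com";
        private const string ApiKey = "your-api-key";

        // Posts a pass/fail result for one test case within a test run.
        // TestRail's default status IDs: 1 = Passed, 5 = Failed.
        public static async Task ReportResultAsync(int runId, int caseId, bool passed, string comment)
        {
            using var client = new HttpClient();
            var token = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{User}:{ApiKey}"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // A real implementation would build this JSON with a serializer and escape the comment.
            var json = $"{{\"status_id\": {(passed ? 1 : 5)}, \"comment\": \"{comment}\"}}";
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            var response = await client.PostAsync($"{BaseUrl}add_result_for_case/{runId}/{caseId}", content);
            response.EnsureSuccessStatusCode();
        }
    }

A test runner can call ReportResultAsync after each automated test so that automated and manual results end up in the same TestRail run.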

Regression testing

Prior to the launch of any release, the QA team performs regression testing: testing the mobile application software to make sure recent changes have not broken any existing functionality. This is done on a handful of devices, covering all of the app’s features.

If regression testing does identify issues, QA logs the bugs so they can be fixed prior to the release. The customer then determines whether any remaining minor bugs are acceptable for the release; this typically involves a discussion with the client. An issue such as the color of a button may be deemed acceptable, while anything impacting the functionality or performance of the mobile app must be corrected before the release.

Issues that are considered acceptable for the release are written up in release notes and addressed in a future release.

Automated UI tests

Automated tests of the user interface (UI) are performed in conjunction with manual testing. They run as part of the software build, and while they are not as important as manual testing, they are still part of our QA process for some of our customers. Automated UI testing is done through scripts, which are written to mimic common user flows and ensure those flows work on a variety of devices.

There are tools available for this process. Within our team, scripts are typically written in a development environment such as Visual Studio, which allows us to verify a script before publishing it. Once the scripts are ready, we use a tool such as App Center to build and distribute the application binaries; App Center also provides the option of running the tests through frameworks such as Xamarin.UITest.

These tools allow us to continuously build, test, release, and monitor mobile apps across multiple platforms. We can see which devices and operating system versions pass and fail, along with screenshots that show us exactly what happened during the test.
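
To give a sense of what these scripts look like, below is a minimal Xamarin.UITest sketch of the kind of user-flow check described above, written in C# with NUnit. The control names (UsernameEntry, PasswordEntry, LoginButton, DashboardTitle) are hypothetical automation IDs, not identifiers from a real client project.

    using System;
    using System.Linq;
    using NUnit.Framework;
    using Xamarin.UITest;

    // Run the same flow on both platforms; App Center repeats it on each device in the set.
    [TestFixture(Platform.Android)]
    [TestFixture(Platform.iOS)]
    public class LoginFlowTests
    {
        private readonly Platform platform;
        private IApp app;

        public LoginFlowTests(Platform platform)
        {
            this.platform = platform;
        }

        [SetUp]
        public void BeforeEachTest()
        {
            // When run in App Center, the app package for the target device is supplied automatically.
            app = platform == Platform.Android
                ? ConfigureApp.Android.StartApp()
                : ConfigureApp.iOS.StartApp();
        }

        [Test]
        public void SigningIn_ShowsDashboard()
        {
            // Mimic a common user flow: enter credentials and tap the sign-in button.
            app.EnterText(c => c.Marked("UsernameEntry"), "qa.user@example.com");
            app.EnterText(c => c.Marked("PasswordEntry"), "not-a-real-password");
            app.Screenshot("Credentials entered");

            app.Tap(c => c.Marked("LoginButton"));

            // Verify the expected next screen appears; fail with a clear message if it never does.
            app.WaitForElement(c => c.Marked("DashboardTitle"),
                "Dashboard did not appear after signing in",
                TimeSpan.FromSeconds(30));
            app.Screenshot("Dashboard shown");
            Assert.IsTrue(app.Query(c => c.Marked("DashboardTitle")).Any());
        }
    }

When a test like this runs against a device set, the pass/fail status and the screenshots captured by app.Screenshot are reported per device and OS version, which is where the results described above come from.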

Before testing can be done, we identify specific devices and operating systems to include in the testing process. The number and combination of devices tested vary based on the customer’s requirements for a specific project.

For example, we run these automated tests against 45 different device and operating system combinations for one of our custom mobile app clients. Some of these devices have been added gradually over time as the client identifies additional phones that should be incorporated into testing, based on what they are seeing in the market. If users report trouble with a specific device and OS combination, that combination may be added to the list of devices to test.

It’s important to note that these automated tests are intended for evaluating the UI. They do not hit the underlying services and may only skim the core logic of the app. Instead, they evaluate the app’s workflow, making sure that when a given button is tapped, the appropriate next action occurs.

Automated testing is a continuous process that can be done with every build and release of the software.

Review automated test results

The next step is a manual review of the automated test results to identify false positives and false negatives. Because automated testing is constantly underway, these results are reviewed on a regular basis.

This review may find that a particular device displays problems on almost every page of the mobile app. In those cases, we often decide to purchase that device with that specific OS so we can load the app and manually test it to see what is going on. We then incorporate it into the regular manual testing rotation.

Repeat

The QA process flow is an ongoing effort. Mobile apps are constantly updated with new releases. As we are finishing one build, work has already begun on the user stories for the next release.

This fits within the larger Agile Scrum methodology we follow for most of our mobile application development projects.

Best practices we follow for the mobile QA process

  • Use a variety of devices—We recommend using a variety of devices and operating systems in QA testing to cover as many scenarios as possible. It is especially important to include devices with unique design features. For example, the Galaxy S10 has a hole-punch camera cutout near the top of the screen; this type of device must be included in testing to make sure the app’s UI is not obstructed by the cutout.
  • Test on older operating systems—As software developers, we know the importance of staying on top of OS updates—but the average user is not always as quick to update. Good QA testing takes this into account and tests the mobile app on older operating systems. This is particularly important for clients who advertise that their product works on earlier systems. Most of our clients have an opinion on how far back to go in terms of OS testing. It often comes down to finding a balance point between the likelihood of customers using that OS and the cost of testing.
  • Make sure permissions are included in QA testing—If the app needs access to the device camera or other functionality, these permissions should be included in the user stories and testing.
  • Make quality control for mobile apps part of the Software Development Lifecycle (SDLC)—QA should be a priority from day one, and it should continue to be so throughout the development lifecycle of the mobile application. This is the only way to make sure you put a valuable and functioning mobile application into the marketplace.

We cannot overemphasize the importance of using a dedicated team of experts for mobile app quality assurance. When you incorporate QA into the build from day one and give it proper attention throughout the software development lifecycle, you can be confident your product will work—across devices, platforms, and operating systems.

This is a requirement for success. Seek out external help if you don’t have an internal QA team. Your bottom line will thank you.