Why is Mobile App Test Automation so difficult? Is there a path forward?

Since joining Sofy, a mobile and web test automation company, several months ago, I have had the opportunity to talk with Engineering and QA Managers at many of our customers about their testing methodologies and their challenges, particularly for mobile apps.

The picture that emerged, not surprisingly, is that most, if not all, of them use manual testing for mobile apps today, typically with outsourced testers, limited test coverage, and a limited number of devices. They feel a pressing need for an automated solution but lack the skill set, or cannot justify the ROI, to take on such a project. Some who started their test automation journey quickly abandoned it due to its complexity.

These observations made me think about the challenges of automating mobile app testing and the tradeoffs between manual and automated testing.

Challenges for Testers: 

First, testing tends to be an afterthought in most projects until quality becomes a top customer satisfaction issue and a sales blocker. As development delays mount, companies don’t want to push back their release deadlines; what gets squeezed is the testing timeline and testing resources.

Second, in this age of Scrum and DevOps, weekly and monthly releases are the norm rather than the exception. Such tight release schedules make regression testing quite challenging; what gets tested are the new features in the release, with little time left to test the rest of the app. Without regression testing, new code that breaks existing functionality is first felt by customers, often resulting in lost revenue or a damaged reputation.

Mobile App Testing vs Web / Desktop Apps: 

Let us take a closer look at what makes mobile app testing more difficult than testing other application types such as web or desktop apps. You can look at the testing of an app in four layers, each with its own attributes.

1. Backend testing involves testing the web services, databases, or data storage components used by the application; because no device or UI is involved, this layer is typically the easiest to automate (a minimal sketch follows this list).
2. Device-centric testing: unlike web apps, a mobile app must deal with device characteristics such as resolution, performance, and battery life, which vary greatly across devices. In addition, an app that behaves perfectly on one device can crash on another, so crash analysis is an important component of mobile app testing. In the world of web apps, the browser handles most device idiosyncrasies such as resolution, fonts, and scaling; for native and hybrid mobile apps, device-specific issues must be handled at the app level.
3. UI testing covers the UI layer: navigation, layout, screen orientation, and accessibility.
4. App-level testing focuses on functional or scenario testing, in which specific scenarios in the core application logic are validated from the perspective of the UI layer.
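
To illustrate why the backend layer tends to be the easiest to automate, here is a minimal sketch of an automated API check written in Python with pytest and requests. The endpoint URL, payload fields, and expected responses are hypothetical placeholders, not any particular product's API.

```python
# Minimal backend test sketch (hypothetical API and fields) using pytest + requests.
import requests

BASE_URL = "https://api.example.com"  # hypothetical backend endpoint


def test_login_returns_token():
    # Exercise the login web service directly; no device or UI is involved.
    response = requests.post(
        f"{BASE_URL}/v1/login",
        json={"username": "demo_user", "password": "demo_pass"},  # hypothetical payload
        timeout=10,
    )
    assert response.status_code == 200
    assert "token" in response.json()


def test_profile_requires_auth():
    # An unauthenticated request to a protected resource should be rejected.
    response = requests.get(f"{BASE_URL}/v1/profile", timeout=10)
    assert response.status_code == 401
```

Because checks like these run against services rather than devices, they are unaffected by screen sizes, OS versions, or UI changes, which is why backend automation usually pays off first.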

 

Mobile Testing on Varied Devices, OS and Browsers: 

An additional dimension of complexity arises with mobile apps: they must be tested against a plethora of devices, OS versions, and browsers. The permutations of these variables further add to the complexity of testing and the resources required.

The diagram below illustrates the dimensions of testing for mobile apps.

[Figure: Sofy - Dimensions of Testing for Mobile Apps]
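
To make the permutation problem concrete, the sketch below builds a small device/OS/browser matrix and parametrizes a single test over it with pytest. The device names, OS versions, and browsers are illustrative assumptions, not a recommended support matrix.

```python
# Sketch of a device/OS/browser test matrix; all names and versions are illustrative.
from itertools import product

import pytest

DEVICES = ["Pixel 6", "Galaxy S22", "iPhone 13", "iPhone SE"]
OS_VERSIONS = ["Android 12", "Android 13", "iOS 15", "iOS 16"]
BROWSERS = ["Chrome", "Safari", "WebView"]

# Keep only sensible pairings (Android devices with Android versions, iPhones with iOS).
MATRIX = [
    (device, os_version, browser)
    for device, os_version, browser in product(DEVICES, OS_VERSIONS, BROWSERS)
    if ("iPhone" in device) == os_version.startswith("iOS")
]  # 24 configurations for a single scenario, even with this tiny pool


@pytest.mark.parametrize("device,os_version,browser", MATRIX)
def test_checkout_flow(device, os_version, browser):
    # A real test would launch the app on this configuration, locally or in a device cloud;
    # the body here is a placeholder showing how one scenario fans out across the matrix.
    assert device and os_version and browser
```

Every new device, OS release, or browser multiplies the manual effort, whereas an automated suite simply iterates over a larger matrix.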

With this perspective in mind, a QA manager is in the difficult position of deciding between manual testing and automation. Some of the tradeoffs are highlighted below.

Manual Testing:

1. Manual testing is the most straightforward methodology, as it requires no upfront investment in automation.
2. However, it is costly and time consuming; cost and resource needs grow linearly with functionality, the number of devices to be tested, and OS versions, which quickly makes it cost-prohibitive.
3. It lacks consistency, as testing can vary from one tester to another.
4. It requires every tester to have domain knowledge.
5. Because of its sequential nature, there is higher latency between development and testing; as a result, bugs are found long after the code was written.

Automated Testing:

1. With automation, once the upfront automation work is complete, the incremental cost of testing is minimal and predictable.
2. However, automated testing requires extensive upfront coding, because most automation frameworks such as Appium and Selenium are driven by hand-written scripts (see the sketch after this comparison).
3. No single toolset performs full automation, so multiple tools must be stitched together.
4. The automation work requires skilled SDETs.
5. Today's automation tools lack resilience to UI changes, device variations, and dynamic content, which often forces re-coding.
6. Failure analysis requires inspecting multiple logs to diagnose a problem.
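
To make the "extensive upfront coding" point concrete, here is a minimal sketch of what a single automated UI check typically looks like with the Appium Python client (assuming a 2.x+ client and an Appium 2.x server). The server URL, device name, APK path, and element identifiers are assumptions for illustration; a real suite needs similar scripting, plus ongoing upkeep, for every screen, scenario, and device quirk.

```python
# Minimal Appium sketch of one automated UI check (Python client, UiAutomator2 driver).
# The server URL, device name, APK path, and element ids are hypothetical placeholders.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()              # targets Android via the UiAutomator2 driver
options.device_name = "Pixel_6_API_33"       # hypothetical emulator/device name
options.app = "/builds/app-release.apk"      # hypothetical path to the APK under test

# Assumes an Appium 2.x server is already running locally on the default port.
driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Tap a login button located by accessibility id (hypothetical identifier).
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()

    # Verify that the (hypothetical) home screen is displayed after login.
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "home_screen").is_displayed()
finally:
    driver.quit()
```

Multiply this by every scenario, device model, and OS version, and the skill-set and maintenance burden described above becomes clear.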

 

As neither of these approaches offers a fully satisfactory alternative, most teams stick with manual testing despite its limitations. The only viable option for addressing these challenges cost-effectively is a “no code” automation tool. This is an area of heavy investment, with many startups vying to solve the problem and leveraging AI to overcome some of the technical challenges.

Sofy.ai is one such company, with a solution that addresses these challenges. I look forward to your comments, your app testing stories, and any best practices you have used to mitigate some of the challenges I mention.