ISE Blog

Is Automated Functional Testing Really Worth It?

Like it or not, developing mobile apps is no longer the wild west - the domain of startups, dorm dwellers, and cowboy coders. It's a professional business with enterprise-grade infrastructure and continuous integration and delivery systems. One part of this that has really taken off is automated functional testing, also known as UI testing, often discussed as part of the practice of Behavior Driven Development. Android and iOS now both have their own bundled frameworks for performing this testing, as well as alternatives such as Calabash.

One question that has plagued owners and managers is, "Is automated functional testing really worth the investment?"

Automated UI testing allows the team to build tests that drive the app's UI automatically to verify it functions as intended, without manually running and re-running the tests in the traditional manner. A team that is developing a new app and writing automated tests alongside it can expect to spend at least as much time writing and running the tests as they do writing the app code itself. The short answer is, in almost all cases, yes - your automated tests will pay dividends in the long run and you'll be glad your team had the foresight to write them. But if you're like me, you want to get into the weeds to know for sure, so let's look at it in more detail.
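To make that concrete, here's a minimal sketch of what such a test looks like. The `App` class below is a hypothetical in-memory stand-in for a real UI driver; an actual framework (Espresso, XCUITest, Appium, etc.) would drive a real device or simulator, and the element IDs here are invented for illustration:

```python
# Simplified sketch of an automated functional (UI) test.
# The App class is a toy stand-in for a real UI driver.

class App:
    """Toy model of the 'Happy Birthday' app discussed later in this post."""
    def __init__(self):
        self.message = ""

    def tap(self, button_id):
        # A real framework would locate the element on screen and tap it.
        if button_id == "greet_button":
            self.message = "Happy Birthday"

    def text_of(self, label_id):
        # A real framework would read the label's on-screen text.
        return self.message if label_id == "greeting_label" else ""


def test_tapping_button_shows_greeting():
    app = App()
    app.tap("greet_button")
    assert app.text_of("greeting_label") == "Happy Birthday"


test_tapping_button_shows_greeting()
print("UI test passed")
```

The value comes from running tests like this automatically on every build, rather than having a person walk through the same steps by hand.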

I usually divide the mobile app lifecycle into four distinct phases - Strategy & Kickoff, Design & Architecture, MVP Development and Post-Launch Development. We can look more deeply into each phase to see what costs and benefits are associated with each:

Strategy & Kickoff

This phase is really taking your app from an idea into reality, and making your strategic decisions on platforms and other factors. If you're reading this post during this phase you're on the right track! There isn't a significant amount of work for UI testing in this phase though, so let's go on to...

Design & Architecture

During this phase the team will design wireframes to develop the vision for the app's user interface, and develop high level architecture for the app system, as well as vet any technologies chosen.

Auto tests: Significant - At this point and the beginning of the next phase, the team will spend a significant amount of time setting up infrastructure for running automated tests: creating CI jobs, setting up the build box to run the tests, creating any fakes, simulators, or other test tools to be used during automated testing, and setting up common test steps or cross-platform infrastructure.

Manual tests: Minimal - At this point for a traditional testing route the team may be designing and writing system level tests. They may still need to do some of this even if using automated testing, as it is never practical to automate everything.

MVP Development

Developing the MVP (Minimum Viable Product) for an app is the first phase of development to the point where it gets in the hands of actual users. This might mean it's in an app store, but more likely this is an earlier alpha or beta test with a select customer group.

Auto tests:

  • Your team will add roughly 50-100% of each user story's development cost to write automated UI tests.
  • You'll see a reduction in bug-fixing costs during development, as automated tests can prevent bugs from being introduced before the testing phase and can make sure new work doesn't break previous features. (Note: the more code there is to develop, the more savings there are here.)
  • While your team will still have some regression testing to perform before launch for tests that can't be automated, these will be few and far between - you won't have a huge test suite to run and re-run before you can release.

Manual tests:

  • Manually testing each user story as it's developed adds a small cost, but not nearly as much as writing automated tests.
  • You would have a large cost for regression testing if you chose to perform a full manual regression test for each new story, or, like most teams, you'll deem this impractical and accept the risk that you might introduce a bug and not catch it.
  • You'll have a large regression test before each release, usually with fixes that need to be made and tested again.

Post-Launch Development

The final phase (and usually the longest running) is post launch development. This is any additional feature development and maintenance that takes place after the initial MVP is launched.

Auto tests:

  • New feature additions still require 50-100% of development costs for tests
  • Changes to existing features often require very few changes to tests
  • A full regression test is nearly free for every change

Manual tests:

  • Still a small cost for functional testing of new features
  • Regression testing remains expensive, doesn't get done often enough, and leads to more expensive bugs

Add it up

So what does this mean? In almost all cases, although it's significantly more expensive up front, automated UI tests will pay off in the long run, especially when work is needed in the post-launch phase.

Still not convinced it's right for your app?

First, let's consider whether it's always appropriate to do automated UI testing. That notion is easily discounted.

Consider the following app:

  • A single screen
  • One button the user can press to reveal a message “Happy Birthday”
  • The app will be released to the store immediately with no planned changes post-launch

There are likely only one or two test cases for this app, and the cost of setting up the auto test infrastructure alone would likely exceed the cost of testing manually. You'd never write auto tests for an app like this. However, this is not a realistic app.

So, what is the break even point? For many apps, this is going to occur in the post launch phase, as apps are added to, modified, and updated. The tests make sure these updates can be made without introducing expensive bugs or regression tests. For some apps with a very large or complex MVP, it might even break even before launch.
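A rough back-of-the-envelope model can show how the break-even point emerges. Every number below is an illustrative assumption, not a measurement - the key dynamic is that a manual regression pass grows with the size of the app, while an automated regression run stays nearly free:

```python
# Illustrative break-even model for auto vs. manual testing costs.
# All constants are invented for demonstration; plug in your team's figures.

SETUP_COST = 80            # hours to stand up CI, build box, fakes
STORY_COST = 20            # dev hours per user story
AUTO_RATIO = 0.5           # auto tests add ~50% of each story's dev cost
MANUAL_RATIO = 0.1         # manual functional testing per story
REGRESSION_PER_STORY = 2   # manual regression hours per existing story
RELEASE_EVERY = 5          # ship a release every 5 stories

def auto_cost(stories):
    return SETUP_COST + stories * STORY_COST * AUTO_RATIO

def manual_cost(stories):
    releases = stories // RELEASE_EVERY
    # Each release needs a full regression over all stories shipped so far.
    regression = sum(k * RELEASE_EVERY * REGRESSION_PER_STORY
                     for k in range(1, releases + 1))
    return stories * STORY_COST * MANUAL_RATIO + regression

for s in range(5, 51, 5):
    a, m = auto_cost(s), manual_cost(s)
    flag = "  <-- automation cheaper" if a <= m else ""
    print(f"{s:2d} stories: auto {a:5.0f}h  manual {m:5.0f}h{flag}")
```

With these made-up numbers, automation starts out far more expensive but pulls ahead after a few dozen stories, because the manual regression cost compounds with every release while the automated one doesn't.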

For a theoretical average app development cycle, pictured below, the auto test effort (green) vs manual test effort (orange) might break even sometime in the post-launch phase.


The app that might never break even, then, would be a very small, very simple app, with low likelihood of changes post launch. An example of such an app in reality might be a proof of concept created to demo an idea, which will later be thrown away when the production app is started. This app’s developer could save the money spent on UI automation tests and buy a new ping pong table for the office. 

However, most real apps don't fall into that category, and in a pre-development phase you're much more likely to underestimate the complexity of your app than to overestimate it.

In fact…

An argument could be made that since it's difficult to anticipate what might need to be changed in the future, it's best to just always do UI automation tests. You might think it wise to defer that decision until you know you need to make changes post launch; however, it can be difficult for many reasons (not the least of which is psychological) to go back and write tests for completed, fielded code that doesn't have them. In my experience, teams (and managers) never do this.

One more thing

Choosing UI automation is not a binary choice. Some level of test automation can benefit almost every project, so for something simple, write simpler tests. Don't try to automate everything - that is almost never achievable anyway. Focus first on tests that require more "repetitive tester motions" and less thought, and keep your manual tests for the most complex acceptance testing.

So unless you’re developing the Happy Birthday app from above, you should probably do auto tests.

If you have any suggestions of what has worked well for your team, or want to share your own experiences, please comment below, we'd love to hear from you.

And as always, feel free to contact us if you're interested in learning more about how we can help conquer your team's challenges, or if you're interested in joining our team!

Clay Schumacher, Senior Software Engineer

Clay Schumacher is a Senior Software Engineer and Practice Lead of Mobile Development at ISE. Clay lives in Normal, IL where he works from his home office and Slingshot Cowork. For the last four years he has worked to develop mobile solutions that delight our clients and end users. Clay enjoys agile/lean software development, and treats each project team as a startup bent on delivering the best app for each unique problem. He enjoys traveling and playing games with his wife and two daughters.