First of all, I do not want to discuss what kind of software development process one should choose or prefer. Whether it's TDD combined with Scrum, or Waterfall – I don't mind. The intention of this series about software testing is to inspire you for your upcoming projects. It should also provide a set of options that can help you improve your software's quality – not just in the short term, but also in the long run.
The goal is simple: raise awareness of the necessity of tests and their different kinds of advantages.
I want to distinguish between 4 kinds of software testing (approaches). See the image below (this is related to this post by Alister Scott). In upcoming blog posts, yet to be released, I will go into detail on each of these:
- Unit Tests verify the functionality of one specific function or method (the smallest unit among the testing types considered here).
- Integration Tests ensure the correct functionality of software across multiple functions or even different software layers or systems (these tests also include API tests and component tests).
- UI Tests are performed directly on the UI; here we assume that the software's logic is correct and only want to ensure the appropriate visualization (and screen flow).
- Manual Tests (I like to call this the “project manager’s approach” ;-) ) require a software tester to walk through the app manually and test it on their own. In contrast to the other three, these tests are not performed automatically!
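To make the first two types concrete, here is a minimal sketch in Python. The function `add_vat` and the VAT rate are invented for illustration; the point is only the scope of a unit test: it exercises exactly one function in isolation.

```python
def add_vat(net_price: float, rate: float = 0.19) -> float:
    """Return the gross price for a net price (function and rate are illustrative)."""
    return round(net_price * (1 + rate), 2)

# Unit test: verifies one specific function, the smallest testable unit.
def test_add_vat():
    assert add_vat(100.0) == 119.0
    assert add_vat(0.0) == 0.0

test_add_vat()
print("unit test passed")
```

An integration test would instead call `add_vat` through a larger flow – say, a checkout component that uses it – and assert on the combined result across layers.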
In contrast to the first three testing types, manual tests require a human being to perform them. Therefore, these tests do not run automatically and thus form a kind of bottleneck on the human resource side.
Automated tests can run independently at any time and send automatic reports to software developers, project managers or your postman, whomever you like.
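As a sketch of how such automated reporting could look (the test case and report format are invented, and the actual delivery – mail, chat, or otherwise – is left out), the standard library's `unittest` can run a suite programmatically and summarize the result:

```python
import unittest

class TestMath(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

# Load and run the suite programmatically, then build a one-line report
# that could be sent to developers, project managers, or your postman.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMath)
result = unittest.TextTestRunner(verbosity=0).run(suite)
report = f"ran={result.testsRun} failures={len(result.failures)} errors={len(result.errors)}"
print(report)
```

In practice this kind of run would be triggered by a scheduler or CI server rather than by hand.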
Nevertheless, automated tests are often wrongly dismissed as consuming development resources and therefore having an immediate impact on a project’s timeline and pace of progress. In fact, they help software developers track down errors and improve the software to also handle edge cases. A good software engineer does this by default.
Alleged Interim Conclusion:
Now let’s assume user tests are mostly complete and cover most of the failure and success cases that were specified or defined beforehand. Whether you work in an agile setup or follow the top-down waterfall approach, you will eventually release a new version. For whatever reason.
To ensure a reliable software solution, one would expect to test all use cases again (regression testing). But! You could shorten a project’s time to delivery if you only tested new functions and freshly fixed bugs. Your project’s success would look even better, as you save time and money. Because: testing everything again would be like doing the work twice, and you don’t do that – you do like efficiency, after all.
So, now take the requirements into consideration! Software adaptations will interfere with existing components, since you do not want to implement several (quite similar) components multiple times, and certain components are correlated with one another.
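Here is a toy illustration of such interference (all names are invented): two features share one helper, and “improving” the helper for the new feature silently changes the old feature’s behavior – even though the old feature’s own code was never touched. Only rerunning the old feature’s tests (regression testing) would reveal this.

```python
# Shared helper used by two features (names are hypothetical).
def normalize(text: str) -> str:
    # Original behavior: trim surrounding whitespace only.
    return text.strip()

def feature_old(user_input: str) -> str:
    return normalize(user_input)

# Suppose a new feature needs lowercase input, and a developer "fixes"
# the shared helper to also lowercase. The old feature's output changes
# too, although nobody edited feature_old() itself.
def normalize_changed(text: str) -> str:
    return text.strip().lower()

assert feature_old("  Hello ") == "Hello"        # holds with the original helper
assert normalize_changed("  Hello ") != "Hello"  # the "fix" would break it
print("regression example ok")
```

A test suite that only covers the new feature would pass; the breakage lives in a component nobody believed they had changed.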
But how to argue the validity of such interferences? Basically, the fulfillment of requirements is a factor in a project’s success. No matter what. As long as cost and time efficiency do not drop below zero, it’s worth it, for the simple reason: as long as the requirements are fulfilled and you did not run into negative costs, the project was some kind of success and should have been worth it.
If you are still sceptical about automated tests, please have a look at the conclusion block and replace “automated tests” with “tests”. You would immediately acknowledge the necessity of testing something before distributing or releasing it. Why couldn’t this be automated, if automation saves time, money and – not least – nerves, by taking the repetitive work off your hands?
And finally, face the fact: success – no matter what – is a multidimensional state; you can succeed on multiple layers. But if you fail on one of the crucial ones – quality and reliability – success will turn into failure in the long run.
Now calculate the probability of a project’s failure just because you wanted to be even more efficient by saving time and reducing costs. ;-)
Furthermore, software projects are not introduced to serve only a current snapshot. In fact, they remain in operational use for years, during which environmental conditions and requirements may change; the solution needs stability and, quite possibly, changes to fit new regulations, needs or simply users’ expectations.
So, a project’s success should also be measured with respect to the ability to maintain it. A project cannot be considered maintainable if the cost of change exceeds normal expectations because the effort of retesting is beyond the cost of the feature itself. The inability to maintain software due to such additional expenses is just one step away from planned obsolescence. The effort of retesting can only be held in check if a high level of test automation guarantees regression coverage.