Practices in the broader software industry have evolved to the point where not testing is plainly unprofessional. Testing means many things to many people, and all of those variations are essential at various points in the process of making good software:
- Early user testing (such as with paper prototypes) is a key part of discovery & design. Testing isn't just for knowing you built it right, but for knowing you're about to build the right thing.
- Testing during programming work, especially automated testing in a BDD approach, provides rapid, high-quality feedback on what an app should be doing and whether it's doing it correctly.
- Testing before deployment provides useful, and often necessary, information on unforeseen consequences of programming activity. Tests are more informative and in a better mood than users reporting bugs.
- A solid suite of tests is a useful source of measures supporting project management decisions, providing more meaningful impressions of project size and velocity than past measures such as lines of code or agile "points".
- Solid testing necessarily means solid documentation. Automated tests are a form of documentation in themselves (test as specification), and protocols for non-automated tests can serve a similar function.
- Solid testing is a good way to assure risk-averse stakeholders that everything is going to be fine. In some apps, solid testing is necessary to comply with policy or regulation.
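The "test as specification" idea above can be made concrete with a small sketch. Everything here is hypothetical — the `Cart` class and its discount rule are invented for illustration — but the test name and the given/when/then comments show how an automated test reads as documentation of intended behavior:

```python
class Cart:
    """Minimal stand-in for an app's cart model (illustrative only)."""

    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        # Spec: orders of 100 or more get a 10% discount.
        subtotal = sum(self.items)
        return subtotal * 0.9 if subtotal >= 100 else subtotal


def test_large_orders_receive_a_ten_percent_discount():
    # Given a cart holding 100 worth of items
    cart = Cart()
    cart.add(60)
    cart.add(40)
    # When the total is computed, then the 10% discount applies
    assert cart.total() == 90
```

A teammate (or auditor) who has never seen the code can read the test name and comments and learn the business rule — which is exactly the documentation and compliance value the bullets above describe.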
So let's do more of it. Let's figure out how to make it easy enough that it's just a natural, integrated part of the flow of software development, rather than a separate step.
To clarify, I'm not suggesting this as a single session, but as a broader topic to expand on. Any one of the bullets above could be a stand-alone session, and some of them could have their own conferences.