Why not test everything?
Testing everything is an appealing idea, but many testing experts have pointed out that it is impossible. Pradeep Soundararajan notes in Buddha in Testing that ‘complete testing is impossible in any software’ (p. 42). Gerald Weinberg, in Perfect Software (and Other Illusions About Testing), wrote ‘There are an essentially infinite number of tests that can be performed on a particular product candidate… managers and testers must strive to understand the risks added to the testing process by sampling’ (p. 27). Glenford J. Myers makes the same argument in The Art of Software Testing, showing that exhaustive input testing is infeasible for even a trivially small program. And of course, Dijkstra famously noted: ‘Program testing can be used to show the presence of bugs, but never to show their absence.’
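To make the ‘essentially infinite’ point concrete, here is a back-of-envelope sketch. The function, the input width, and the test throughput are all illustrative assumptions, not figures from any of the books cited above: even a function of just two 32-bit integer arguments has 2^64 possible input pairs, which no test rig can enumerate.

```python
# Illustrative arithmetic: exhaustively testing a hypothetical function
# of two 32-bit integer arguments is infeasible even at absurd speed.
pairs = 2 ** 32 * 2 ** 32            # every combination of the two inputs
tests_per_second = 1_000_000_000     # assume a billion tests per second
seconds = pairs / tests_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{pairs} input pairs ≈ {years:,.0f} years to enumerate")  # ≈ 585 years
```

And this counts only input values for one tiny function; real systems multiply this by state, timing, configuration, and environment, which is why sampling is unavoidable.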
This doesn’t mean we shouldn’t test, or that we should avoid automating regression suites. Rather, it reminds us to be realistic about the limitations of testing and deliberate in how we allocate our finite resources. Testing is fundamentally a sampling problem: we can’t test everything, so we must prioritize wisely. The most effective test strategies are economical: designed with a clear understanding of risk, value, and cost.
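One way to act on ‘risk, value, and cost’ is to score candidate test areas and spend the sampling budget on the highest scorers first. The sketch below is a hypothetical illustration, not a methodology from any of the sources above; the area names, the 1–5 scales, and the likelihood × impact ÷ effort formula are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class TestArea:
    """A candidate area to test, scored on illustrative 1-5 scales."""
    name: str
    likelihood: int  # how likely this area is to harbor bugs
    impact: int      # cost of a failure escaping to production
    effort: int      # relative cost of testing this area

    @property
    def priority(self) -> float:
        # Simple risk-per-cost score: risk (likelihood x impact)
        # divided by the effort it takes to cover the area.
        return self.likelihood * self.impact / self.effort

# Hypothetical example areas.
areas = [
    TestArea("payment flow", likelihood=4, impact=5, effort=3),
    TestArea("settings page", likelihood=2, impact=2, effort=1),
    TestArea("report export", likelihood=3, impact=3, effort=4),
]

# Spend the test budget from the top of this list downward.
for area in sorted(areas, key=lambda a: a.priority, reverse=True):
    print(f"{area.name}: {area.priority:.2f}")
```

The exact formula matters less than the habit it encodes: making the trade-off between risk and cost explicit, rather than testing whatever happens to be easiest.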