This week I worked with a team to get code coverage over 80% (a corporate minimum).
The problem with this effort: Code coverage can be gamed.
Sure, low code coverage means there's a lot of untested code.
But high code coverage doesn't mean the code is well tested.
Low-quality tests are arguably worse than no tests. Why? Because they make it look like the code is well tested. This creates a false sense of confidence. And lousy tests that report 100% coverage mask opportunities to improve.
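To make the gaming concrete, here's a minimal sketch (the applyDiscount function and the Vitest runner are hypothetical stand-ins) of a test that earns 100% line coverage while verifying almost nothing:

```ts
// price.ts — hypothetical module under test
export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError("percent must be between 0 and 100");
  }
  return price * (1 - percent / 100);
}

// price.test.ts — reports 100% line coverage, yet asserts almost nothing
import { test, expect } from "vitest";
import { applyDiscount } from "./price";

test("covers applyDiscount", () => {
  applyDiscount(100, 10); // happy path executed; the result is never checked
  expect(() => applyDiscount(100, 150)).toThrow(); // error branch executed
  // A bug that flipped the formula to price * (1 + percent / 100)
  // would still pass this suite, at full coverage.
});
```

Coverage tooling counts those lines as executed either way. Only a reviewer notices the missing assertion.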
The real solution: Thoughtful, human review.
Unit testing requires significantly less skill, time investment, and infrastructure than any other form of testing and almost any other form of verification. So anybody can pick it up and apply it to any project. That's why it's the go-to technique.
One thing I've wondered about is why software folks so often use unit testing as their go-to testing technique.
When I've worked at companies that ship few enough bugs that it would be considered a failure to have one serious bug in a year, unit testing has been a last resort.
Strong disagree. No unit or API test would catch these UI errors, which impact a company's bottom line, and humans shouldn't be tasked with manually checking every page of your app over and over to catch them.
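Automated visual checks can fill that gap. Here's a minimal sketch, assuming Playwright Test and a hypothetical /checkout page, of a screenshot comparison that flags UI breakage no unit or API test would see:

```ts
// checkout.spec.ts — a minimal visual regression check with Playwright Test
import { test, expect } from "@playwright/test";

test("checkout page looks right", async ({ page }) => {
  await page.goto("https://example.com/checkout"); // hypothetical URL
  // Compares the page against a stored baseline screenshot and fails on
  // visual drift (broken CSS, overlapping elements, missing buttons)
  // that unit and API tests never exercise.
  await expect(page).toHaveScreenshot("checkout.png");
});
```

The first run records the baseline; subsequent runs fail on any pixel-level drift, so a human only looks when something actually changed.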