Is there any formal way of quantifying potential flaws, or risk, and ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a risk assessment?
Experience tells me I need to be extra careful around certain things: user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.
But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
There are tools that report the code coverage of your tests. I've worked with Istanbul in the past, and it's helped point out parts of the code that could use more attention:
https://istanbul.js.org/
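In case it's useful, here's a rough sketch of the sort of setup I mean. It assumes mocha as the test runner and nyc as Istanbul's command-line client, and the threshold numbers are just placeholders:

```js
// nyc.config.js - a minimal sketch, not a recommendation.
// Report per-file coverage and fail the run when it drops below the thresholds.
module.exports = {
  all: true,                    // include source files never loaded by any test
  reporter: ['text', 'html'],   // console table plus a browsable HTML report
  'check-coverage': true,       // exit non-zero if any threshold is missed
  lines: 90,
  branches: 80,
  functions: 85,
  statements: 90,
};
```

Running the tests with `npx nyc mocha` then prints a per-file table, which makes the under-tested files and branches easy to spot.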
I use coverage tools like nyc/c8, but I can easily get 100% coverage on buggy, exploitable, and unstable code. You can have two projects, both with 100% coverage, where one is a shit show and the other is rock solid. So I was wondering whether there's a way to measure the quality of the tests themselves, or to identify code that really needs extra attention (despite being 100% covered). Mutation testing has been suggested and that's really interesting; I'm going to give it a go tomorrow and see what it throws up!
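To make the 100%-but-buggy point concrete, here's a toy example (invented for this comment): every line is executed, so the coverage report says 100%, but the assertion is too weak to catch the boundary bug.

```js
// Toy illustration: full line coverage, still broken at the boundary.
const assert = require('assert');

function isAdult(age) {
  return age > 18; // bug: should be >= 18
}

// mocha-style test
it('accepts adults', () => {
  assert.strictEqual(isAdult(30), true); // executes every line => 100% coverage
});
```

A mutation tool would change that `>` to `>=` (among other small edits), re-run the tests, and report the change as a surviving mutant, i.e. an edit no test noticed. I'm planning to try StrykerJS (nothing in this thread prescribes a tool, that's just my pick); its config is roughly this shape, with the globs, runner, and thresholds below being placeholders:

```js
// stryker.conf.js - a rough sketch, not a tuned setup.
module.exports = {
  mutate: ['src/**/*.js'],           // which files to mutate (placeholder glob)
  testRunner: 'mocha',               // swap in jest, etc. as appropriate
  reporters: ['clear-text', 'html'],
  coverageAnalysis: 'perTest',       // only run the tests that cover each mutant
  thresholds: { high: 80, low: 60, break: 50 }, // fail below a 50% mutation score
};
```

It runs with `npx stryker run`, and the list of surviving mutants is effectively a list of places where the tests aren't really asserting anything.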