this post was submitted on 09 Jul 2023
92 points (97.9% liked)

Is there some formal way of quantifying potential flaws, or risk, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure? Or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

[–] MagicShel@programming.dev 35 points 1 year ago* (last edited 1 year ago) (10 children)

~~Pit~~ Mutation testing is useful. It basically tests how effective your tests are: it makes small changes (mutations) to your code, re-runs your tests, and tells you which conditions your tests never actually check.

For Java: https://pitest.org

Edit: corrected to the more general name instead of a specific implementation.
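
To make that concrete, here's a minimal sketch of the kind of gap mutation testing surfaces. The class and test names are made up and it assumes JUnit 5, but the `>=` to `>` change mirrors pitest's real conditionals-boundary mutator:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical code under test.
class AgeCheck {
    static boolean isAdult(int age) {
        return age >= 18; // a boundary mutator turns >= into >
    }
}

class AgeCheckTest {
    @Test
    void adultBoundary() {
        assertTrue(AgeCheck.isAdult(30));  // also passes against the mutant
        assertFalse(AgeCheck.isAdult(10)); // also passes against the mutant
        // Only this case fails when >= becomes >, i.e. it kills the mutant.
        // Drop it and the mutation survives: line coverage stays at 100%,
        // but the mutation report flags an untested condition.
        assertTrue(AgeCheck.isAdult(18));
    }
}
```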

[–] mattburkedev@programming.dev 8 points 1 year ago (2 children)

The most extreme examples of the problem are tests with no assertions. Fortunately these are uncommon in most code bases.

Every enterprise I’ve consulted for that had code coverage requirements was full of elaborate mock-heavy tests with a single Assert.NotNull at the end. Basically just testing that you wrote the right mocks!
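
A sketch of that pattern, for anyone who hasn't suffered it. Every name here is hypothetical, assuming JUnit 5, Mockito, and Java 16+ for the record:

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

interface TaxCalculator { double rateFor(String country); }

record Invoice(double total) {}

class InvoiceService {
    private final TaxCalculator tax;
    InvoiceService(TaxCalculator tax) { this.tax = tax; }

    Invoice createInvoice(String country, double net) {
        return new Invoice(net * (1 + tax.rateFor(country)));
    }
}

class InvoiceServiceTest {
    @Test
    void coverageWithoutConfidence() {
        TaxCalculator tax = mock(TaxCalculator.class);
        when(tax.rateFor("DE")).thenReturn(0.19);

        Invoice invoice = new InvoiceService(tax).createInvoice("DE", 100.00);

        // 100% line coverage of createInvoice, but the only thing verified
        // is that the mocks were wired up. Mutate the arithmetic (e.g. * to /)
        // and this still passes; mutation testing reports that survivor,
        // a coverage gate doesn't.
        assertNotNull(invoice);
    }
}
```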

[–] MagicShel@programming.dev 6 points 1 year ago

That’s exactly the sort of shit test mutation testing is designed to address. Believe me, it sucks when Sonar requires a 90% pitest mutation score. Sometimes the tests can get extremely elaborate, which should be a red flag for the design (not necessarily bad code).

Anyway, I love what pit testing does. I hate being required to do it, but it’s a good thing.
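
For reference, a gate like that can also live in the build itself. A minimal sketch with the pitest Maven plugin (version number illustrative); as I understand it, `mutationThreshold` fails the build when the kill rate drops below it:

```xml
<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version>1.15.0</version>
  <configuration>
    <!-- Fail the build if fewer than 90% of generated mutants are killed. -->
    <mutationThreshold>90</mutationThreshold>
  </configuration>
</plugin>
```

Run it with `mvn org.pitest:pitest-maven:mutationCoverage` and the HTML report lists each surviving mutant next to the line it changed.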

[–] Deely@programming.dev 1 points 1 year ago

Yeah, it's always the same: create a lazy metric, get lazy and useless results.
