• dan@upvote.au

      I’ve written some tests that got complex enough that I also wrote tests for the logic within the tests.

      • AAA@feddit.de

        We do that for some of the more complex business logic. We wrote libraries that our tests use, and we wrote tests for those library functions to ensure they return correct results.

        What always worries me is that WE came up with that. It wasn’t some higher-up, or a business unit, or anything. Only because we cared to do our job correctly. If we didn’t, nobody would. Nobody is watching the testers (in my experience).

    • kevincox@lemmy.ml

      Mutation testing is quite cool. Basically it analyzes your code and makes changes that should break something. For example, if you have if (foo) { ... } it will remove the branch or make the branch run every time. It then runs your tests and sees if anything fails. If the tests don’t fail, then either you should add another test, or that code was truly dead and should be removed.

      Of course this has lots of “false positives”. For example, you may be checking whether an allocation succeeded, and you don’t need to test that every possible allocation in your code can fail; you trust that you can write if (!mem) abort() correctly.
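
      A minimal sketch of the idea (the function and the mutants are illustrative; tools like mutmut automate this for Python):

      def is_adult(age: int) -> bool:
          return age >= 18   # a tool might mutate >= into >, or the body into: return True

      def test_is_adult():
          assert is_adult(18)        # kills the > mutant (18 > 18 is False)
          assert not is_adult(17)    # kills the return-True mutant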

      • Lifter@discuss.tchncs.de

        Right, too much coverage is also a bad thing. It leads to having to rework the silly tests every time you change some implementation detail.

        Good tests let the insides of the unit change without breaking, as long as it behaves the same to the outside world.
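
        For instance, a sketch of the difference, using a hypothetical Cache class:

        class Cache:
            def __init__(self):
                self._store = {}               # implementation detail

            def set(self, key, value):
                self._store[key] = value

            def get(self, key):
                return self._store.get(key)

        def test_get_returns_stored_value():   # behavior: survives swapping the
            cache = Cache()                    # dict for an LRU internally
            cache.set("k", 42)
            assert cache.get("k") == 42

        def test_internal_dict():              # insides: breaks on any refactor
            cache = Cache()
            cache.set("k", 42)
            assert cache._store == {"k": 42}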

  • Alexc@lemmings.world

    This is why you write the test before the code. You write the test, make sure it fails, then you write the code to make it pass. You repeat this until all your behaviors are captured in code. It’s called TDD (sketch below).

    But, full marks for writing tests in the first place
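
    A minimal red/green sketch (names are illustrative, not from any real codebase):

    # 1. Red: the test exists first and fails, because slugify doesn't exist yet.
    def test_slugify():
        assert slugify("Hello World") == "hello-world"

    # 2. Green: write just enough code to make it pass.
    def slugify(text: str) -> str:
        return text.lower().replace(" ", "-")

    # 3. Refactor, then repeat with the next behavior (punctuation, unicode, ...).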

    • oce 🐆@jlai.lu

      That supposes you have a clear idea of what you’re going to code. Otherwise, a lot of time is wasted constantly rewriting both the code and the tests as you figure out, while trying, how you’re going to solve the task. I guess it works for very narrow tasks rather than open-ended problems.

      • Alexc@lemmings.world

        The tests help you discover what needs to be written, too. Honestly, I can’t imagine starting to write code unless I have at least a rough concept of what to write.

        Maybe I’m being judgemental (I don’t mean to be), but what I’m trying to say is that, in my experience, writing tests as you code has usually led to the best outcomes and often the fastest delivery times.

      • homoludens@feddit.de

        > constantly rewrite both the code and tests as you better understand how you’re going to solve the task while trying

        The tests should be decoupled from the “how” though. It’s obviously not possible to completely decouple them, but if you’re “constantly” rewriting, something is going wrong.

        Brilliant talk on that topic (with slight audio problems): https://www.youtube.com/watch?v=EZ05e7EMOLM

  • SrTobi@feddit.de

    And then in the end we realize the most important thing was the tests we wrote along the way.

  • jbrains@sh.itjust.works

    This seems to happen quite often when programmers try to save time when writing tests, instead of writing very simple tests and allowing the duplication to accumulate before removing it. I understand how they feel: they see the pattern and want to skip the boring parts.

    No worries. If you skip the boring parts, then much of the time you’ll be less bored, but sometimes this will happen. If you want to avoid it, you’ll have to accept some boredom and refactor the tests later. Maybe never, if your pattern ends up with only two or three instances. If you want to know which path is shorter before you start, so would I. I can sometimes guess correctly, but I mostly never know, because I pick one path and stick with it, so I can never compare.

    This also tends to happen when the code they’re testing has painful hardwired dependencies on expensive external resources. The “bug” in the test is a symptom of the design of the production code. Yay! You learned something! Time to roll up your sleeves and start breaking things apart… assuming that you need to change it at all. Worst case, leave a warning for the next person.
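
    For example, a sketch (hypothetical names) of breaking such a dependency apart, so the test no longer touches the expensive resource:

    class ReportService:
        def __init__(self, fetch_rows):        # dependency injected as a callable,
            self._fetch_rows = fetch_rows      # instead of a hardwired DB client

        def total(self) -> int:
            return sum(self._fetch_rows())

    def test_total_without_a_real_database():
        service = ReportService(fetch_rows=lambda: [1, 2, 3])   # cheap fake
        assert service.total() == 6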

    If you’d like a simple rule to follow, here’s one: no branching in your tests. If you think you want a branch, split the test into two or more tests, write them individually, then maybe refactor to remove the duplication. It’s not a perfect rule, but it’ll take you far…
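
    A sketch of the rule in action, with hypothetical discount logic:

    def discount(is_premium: bool) -> float:
        return 0.2 if is_premium else 0.0

    def test_discount_branchy():               # what the rule warns against
        for is_premium in (False, True):
            if is_premium:
                assert discount(is_premium) == 0.2
            else:
                assert discount(is_premium) == 0.0

    def test_discount_for_regular_user():      # the split, straight-line versions
        assert discount(is_premium=False) == 0.0

    def test_discount_for_premium_user():
        assert discount(is_premium=True) == 0.2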

  • fiveoar@lemmy.dbzer0.com

    Congrats, you have discovered why in TDD you write the test, watch the test fail, then make the test pass, then refactor. AKA: Red, Green, Refactor.

      • TheSlad@sh.itjust.works

        Hell no! I love QA, they find all the bugs I make since I don’t bother with unit tests. I think every dev team should have 1:1 devs to testers.

        • folkrav@lemmy.ca

          Every time someone needs to change anything nontrivial, like a large feature or a refactor, they just code away blindly, hope it doesn’t make anything explode, and then wait for QA to pat them on the back or raise any potential bugs? Does this mean your QA team makes a full product sweep for every single feature that gets merged? If that’s the case, you’d need more than one QA per developer. If not, you’re now stuck debugging blindly too, not knowing when the thing broke?

          I worked with a team like yours at one point, and it was hell 😬 Each new feature is like poking away at a black box hoping it doesn’t explode…

  • I Cast Fist@programming.dev

    I remember being asked to make unit tests. I wasn’t the programmer, and for the better part of a week they didn’t even let me look at the code. Yeah, I can make some great unit tests that’ll never fail without access to the stuff I’m supposed to test. /s

    • loutr@sh.itjust.works

      I guess it would make sense if you’re testing a public API? To make sure the documentation is sufficient and accurate.

      • Natanael@slrpnk.net

        Yeah, black-box testing is a whole thing, and it’s common when you need something to follow a spec and be compatible.
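
        A sketch of the idea (the endpoint and fields are hypothetical): the test exercises only the documented contract, never the internals:

        import json
        import urllib.request

        def test_todo_endpoint_matches_its_docs():
            with urllib.request.urlopen("https://api.example.com/todos/1") as resp:
                assert resp.status == 200                      # documented status code
                body = json.loads(resp.read())
            assert {"id", "title", "done"} <= body.keys()      # documented fields present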

      • folkrav@lemmy.ca

        He specifically said “unit tests” though, which aren’t black-box tests by definition.

  • toastal@lemmy.ml

    If you use your type system to make invalid states impossible to represent & your functions are pure, there’s less (maybe nothing) to test, which will save you from this scenario.

    • jjjalljs@ttrpg.network

      Nothing to test? Lol what.

      def add(a: int, b: int) -> int:
          return a * b

      All types are correct. No side effects. Does the wrong thing.

      • toastal@lemmy.ml

        Nothing toy-like about using ADTs to eliminate certain cases. When all cases are handled, your tests can move from checking micro states to checking macro states. Constraint types or linear types can be used to allow only certain sizes of input, or to require that every file handle that gets opened is also closed.

        Naturally, if your language’s type system is bad you can’t make these compile-time guarantees tho. Heck, a lot of developers are still using piss-poor languages with null, or where the inference sucks with any.
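
        A sketch of the ADT idea in Python (illustrative; a language with real sum types checks exhaustiveness at compile time):

        from dataclasses import dataclass
        from typing import Union

        @dataclass(frozen=True)
        class Loading: ...

        @dataclass(frozen=True)
        class Loaded:
            data: str

        @dataclass(frozen=True)
        class Failed:
            error: str

        # A request is exactly one of these; "has data AND an error" is unrepresentable.
        State = Union[Loading, Loaded, Failed]

        def render(state: State) -> str:
            match state:                  # Python 3.10+; a type checker can flag
                case Loading():           # any case this match fails to handle
                    return "spinner"
                case Loaded(data=d):
                    return d
                case Failed(error=e):
                    return f"error: {e}"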

      • Victoria@lemmy.blahaj.zone

        Fun situations can arise when you write “,” instead of “;”. For those not in the know: in C++ the comma operator evaluates the left expression, discards its value, then evaluates the right expression and returns that value. So if you end up with a situation like this

        int i = (0,
        printf("some message"));
        

        i has a completely different value, since it actually gets the return value of printf (the number of characters printed, 12 here) instead of 0. The parentheses are what make this parse as the comma operator; without them, int i = 0, printf("some message"); is a declarator list and doesn’t even compile.