I have a min and max position of an object and I want to represent an arbitrary point between them as a float between 0.0 and 1.0. This feels like relatively basic math, but I can’t quite figure out what I need to do. Is there a special name for this sort of operation? Also, are there any built-in methods that would be useful for it?
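
To make it concrete, here’s roughly what I’m picturing (just a sketch; the function name and the formula in it are my own guess):

    /* Map x from [min, max] onto [0.0, 1.0], so that min maps to 0.0 and max to 1.0. */
    float position_to_fraction(float min, float max, float x) {
        return (x - min) / (max - min);
    }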

  • o11c@programming.dev · 11 months ago

    Related: note that floating-point division is typically much slower than multiplication.

    Instead of:

    n / d
    

    see if you can refactor it to:

    n * (1.0/d)
    

    where that inverse can then be hoisted out of loops, as in the sketch below.
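
    For example, a minimal sketch (the function and the loop are made up for illustration, reusing the normalization from the original question as the per-element work):

        #include <stddef.h>

        /* Normalize each element of xs into [0.0, 1.0] relative to [min, max].
         * The reciprocal of the range is computed once, outside the loop, so each
         * element costs a multiply instead of a divide. */
        void normalize_all(float *xs, size_t n, float min, float max) {
            float inv_range = 1.0f / (max - min);   /* one division, hoisted out */
            for (size_t i = 0; i < n; i++) {
                xs[i] = (xs[i] - min) * inv_range;  /* multiply per element */
            }
        }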

      • o11c@programming.dev · 11 months ago

        Compilers are generally forbidden from making that substitution on their own, since n / d and n * (1.0/d) can round differently and produce slightly different results; you have to write it that way yourself, or opt in with something like -ffast-math.
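
        For instance (a quick sketch; the values are arbitrary, and whether the two results actually differ depends on rounding):

            #include <stdio.h>

            int main(void) {
                double n = 10.0, d = 3.0;
                double direct   = n / d;          /* one correctly rounded division */
                double refactor = n * (1.0 / d);  /* two roundings: reciprocal, then product */
                /* These may differ in the last bit, which is why the compiler
                 * won't make the substitution for you by default. */
                printf("n / d       = %.17g\n", direct);
                printf("n * (1.0/d) = %.17g\n", refactor);
                printf("equal: %s\n", direct == refactor ? "yes" : "no");
                return 0;
            }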

    • Murderturd@lemmy.world · 11 months ago

      If multiplication vs. division is causing perf issues, then either you fucked up somewhere else, or you shouldn’t be asking Lemmy for help, because your performance-critical system is of the safety-and-health type.

      I’ve never seen division actually be a real issue.