I have the min and max positions of an object, and I want to represent an arbitrary point between them as a float between 0.0 and 1.0. This feels like relatively basic math, but I can’t quite figure out what I need to do. Is there a special name for this sort of operation? Also, are there any built-in methods that would be useful for it?
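
For what it’s worth, this operation is usually called normalization, or an inverse lerp (the inverse of linear interpolation); Unity, for example, exposes it as `Mathf.InverseLerp`. A minimal Python sketch of the standard formula, where the function name and the zero-width-range guard are my own choices:

```python
def inverse_lerp(minimum: float, maximum: float, value: float) -> float:
    """Map value from the range [minimum, maximum] onto [0.0, 1.0]."""
    # Guard against a zero-width range, which would divide by zero.
    if maximum == minimum:
        return 0.0
    return (value - minimum) / (maximum - minimum)

print(inverse_lerp(10.0, 20.0, 15.0))  # 0.5
```

Note that the result falls outside [0.0, 1.0] if the value lies outside the min/max range; clamp it afterward if that matters for your use case.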

  • o11c@programming.dev · 1 year ago
    Compilers are generally forbidden from optimizing floating-point arithmetic (for example, by reassociating or reordering operations), because floating-point math is not associative and such transforms can change the rounded result.
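
    To illustrate the point (the values here are chosen arbitrarily): floating-point addition is not associative, so a compiler that regrouped the sum below would change the answer.

    ```python
    # The two groupings round differently, so they produce different results.
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)  # 0.6000000000000001
    print(a + (b + c))  # 0.6
    ```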