• adam@kbin.pieho.me (+34/-2) · 11 months ago

    ITT people who don’t understand that generative ML models for imagery take up TB of active memory and TFLOPs of compute to process.

    • hotdoge42@feddit.de (+14/-2) · 11 months ago · edited

      That’s wrong. You can do it on your home PC with Stable Diffusion.
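      A rough sketch with the Hugging Face diffusers library (the model ID, prompt, and VRAM figures here are only illustrative):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # Load a ~2 GB fp16 checkpoint; fits on a ~6-8 GB consumer GPU.
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
      ).to("cuda")

      # Generate and save a single image locally.
      image = pipe("a watercolor fox in a forest").images[0]
      image.save("fox.png")
      ```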

      • ᗪᗩᗰᑎ@lemmy.ml (+19/-3) · 11 months ago

        And a lot of those require models that are multiple gigabytes in size, which then have to be loaded into memory and run on a high-end video card; a card like that would generate enough heat to ruin your phone’s battery even if you could somehow shrink it to fit inside a phone. This just isn’t feasible on phones yet. Is it technically possible today? Yes, absolutely. Are the tradeoffs worth it? Not for the average person.
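        For a ballpark sense of just the weights (parameter counts below are rough approximations, for illustration only):

        ```python
        # Back-of-the-envelope memory estimate: parameters x bytes per parameter.
        params = {
            "SD 1.5 UNet": 860e6,          # approximate
            "text encoder + VAE": 210e6,   # approximate
        }
        for dtype, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
            total_gb = sum(params.values()) * bytes_per_param / 1e9
            print(f"{dtype}: ~{total_gb:.1f} GB of weights, before activations")
        ```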

        • diomnep@lemmynsfw.com (+1) · 11 months ago · edited

          “He’s off by multiple orders of magnitude, and he doesn’t even mention the resource that GenAI models require in large amounts (GPU), but he’s not wrong”