ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • NigelFrobisher@aussie.zone · 1 year ago

    People really need to understand what LLMs are, and also what they are not. None of the messianic hype or even use of the term “AI” helps with this, and most of the ridiculous claims made in the space make me expect Peter Molyneux to be involved somehow.

    • dx1@lemmy.world · 1 year ago

      LLMs fit in the “weak AI” category. I’d be inclined not to call them “AI” at all, since there is no intelligence, just the illusion of intelligence (if I could just redefine the term “AI”). It’s possible to build intelligent AI, but probabilistic text construction isn’t even close.

      • fsmacolyte@lemmy.world · 1 year ago

        It’s possible to build intelligent AI

        What does intelligent AI that we can currently build look like?

        • dx1@lemmy.world · 1 year ago

          There’s a difference between “can build” and “have built”. The basic idea is continuously aggregating data, performing pattern analysis, and carrying out cognitive schema assimilation/accommodation in much the same way humans do. It’s absolutely doable, at least I think so.