• db0@lemmy.dbzer0.comOP
      6 months ago

      “Hallucinate” is the standard term for GenAI models producing untrue statements

      • Cyrus Draegur@lemm.ee
        6 months ago

        in terms of communication utility, it’s also a very accurate term.

        when WE hallucinate, it’s because our internal predictive models fly off the rails, filling in the blanks based on assumptions rather than concrete sensory information, and generate results that conflict with reality.

        when AIs hallucinate, it’s because their predictive models generate results that do not align with reality: they fly off the rails, presuming what was calculated to be likely to exist rather than referencing positively certain information.

        it’s the same song, but played on a different instrument.
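
        A toy sketch of that “likely rather than certain” behavior (purely illustrative; this is not how any real model is implemented). A tiny bigram predictor, trained on a handful of sentences, continues a prompt with whatever word most often followed in training, with no notion of whether the completion is true:

        ```python
        from collections import Counter, defaultdict

        # Hypothetical toy corpus: note France appears twice, Spain once.
        corpus = (
            "the capital of france is paris . "
            "the capital of spain is madrid . "
            "the capital of france is paris ."
        ).split()

        # Count which word follows which (a bigram "model").
        follows = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            follows[a][b] += 1

        def continue_prompt(prompt, steps=1):
            """Greedily append the statistically most likely next word."""
            words = prompt.split()
            for _ in range(steps):
                candidates = follows[words[-1]].most_common(1)
                if not candidates:
                    break
                words.append(candidates[0][0])
            return " ".join(words)

        # "paris" followed "is" more often than "madrid" did, so the model
        # confidently emits a fluent but false statement.
        print(continue_prompt("the capital of spain is"))
        # → the capital of spain is paris
        ```

        The completion is grammatical and statistically well-supported by the training data, yet factually wrong; that mismatch between likelihood and truth is the mechanism being described above.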

        • kronisk @lemmy.world
          6 months ago

          when WE hallucinate, it’s because our internal predictive models fly off the rails, filling in the blanks based on assumptions rather than concrete sensory information, and generate results that conflict with reality.

          Is it really? You make it sound like this is a proven fact.