• Poob@lemmy.ca · 11 months ago

    None of it is even AI. Predicting desired text output isn’t intelligence.

    • Fedora@lemmy.haigner.me · 11 months ago

      You hold artificial intelligence to the standard of artificial general intelligence, which doesn’t even exist yet. Even dumb decision trees are considered AI. You have to lower your expectations. Calling the best AIs we have dumb is unhelpful at best.

      • Poob@lemmy.ca · 11 months ago

        We never called if statements AI until the last year or so. It’s all marketing buzzwords. It has to be more than just “it makes a decision” to be AI, or else rivers would be AI because they “decide” which path to take to the ocean based on which dirt is in the way.

      • MajorHavoc@lemmy.world · 11 months ago

        Yeah, and highlighting that difference is what is important right now.

        This is the first AI to masquerade as general artificial intelligence and people are getting confused.

        This current thing doesn’t have or need rights or ethics. It can’t produce new intellectual property. It’s not going to save Timmy when he falls into the well. We’re going to need a new Timmy before all this is over.

    • Freeman@lemmy.pub · 11 months ago

      At this point I just interpret AI to mean “we have lots of select statements and inner joins.”

    • Noughmad@programming.dev · 11 months ago

      AI is whatever machines can’t do yet.

      Playing chess was the sign of AI, until a computer beat Kasparov; then it suddenly wasn’t AI anymore. Then it was Go, then classifying images, then having a conversation, but as each of these was achieved, it stopped being AI and became “machine learning” or “a model.”

      • dan@upvote.au · 11 months ago

        Machine learning is still AI. Specifically, it’s a subset of AI.

    • HankMardukas@lemmy.world · 11 months ago

      Always remember that it will only get better, never worse.

      They said “computers will never do x” and now x is assumed.

      • Poob@lemmy.ca · 11 months ago

        There’s a difference between “this is AI that could be better!” and “this could one day turn into AI.”

        Everyone is calling their algorithms AI because it’s a buzzword that trends well.

        • Fedora@lemmy.haigner.me · 11 months ago

          Shit as dumb as decision trees are considered AI. As long as there’s an if-statement somewhere in the app, they can slap the label AI on it, and it’s technically correct.
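The point above can be sketched in a few lines of Python. This is a toy illustration (the fruit features and thresholds are invented, not from the thread): a hand-written decision tree really is nothing more than nested if-statements.

```python
def classify_fruit(weight_g, smooth):
    """A hand-written two-level 'decision tree': just nested ifs."""
    if weight_g > 150:          # first split: heavy vs. light
        if smooth:              # second split: surface texture
            return "apple"
        return "orange"
    return "grape"

print(classify_fruit(170, smooth=True))   # apple
print(classify_fruit(40, smooth=False))   # grape
```

Under the loosest definition in the comment above, this function already qualifies as "AI."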

          • Batman@lemmy.ca · edited · 10 months ago

            That’s not technically correct unless the thresholds in those if statements are updated based on information learned from the data.
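The distinction drawn here can be made concrete. In this toy sketch (my own invented example and numbers, not anyone's production code), the threshold inside the if-statement is not hard-coded but fitted to labelled data, which is the minimal sense in which a model "learns":

```python
def fit_threshold(values, labels):
    """Pick the cut-off that best separates label 0 from label 1
    by maximising the number of correctly classified examples."""
    best_t, best_correct = None, -1
    for t in sorted(values):
        correct = sum((v > t) == bool(y) for v, y in zip(values, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Heights in cm; label 1 = "adult" (made-up training data)
heights = [95, 110, 150, 165, 180]
labels  = [0,   0,   1,   1,   1]
t = fit_threshold(heights, labels)

def predict(h):
    return int(h > t)   # the learned if-statement
```

A fixed `if h > 140:` would be a plain rule; deriving `t` from the data is what makes it a (one-node) learned model.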

    • HardlightCereal@lemmy.world · 11 months ago

      Language is a method for encoding human thought. Mastery of language is mastery of human thought. The problem is, predictive text heuristics don’t have mastery of language, and they cannot predict desired output.

      • cloudy1999@sh.itjust.works · edited · 11 months ago

        I thought this was an insightful comment. Language is a kind of “view” (in the model-view-controller sense) of intelligence. It signifies a thought or meme. But language is imprecise and flawed. It’s a poor representation, since it can be misinterpreted or distorted. I wonder if language-based AIs are inherently flawed, too.

        Edit: grammar, ironically

        • HardlightCereal@lemmy.world · 11 months ago

          Language-based AIs will always carry the biases of the language they speak. I am certain a properly trained bilingual AI would be smarter than a monolingual AI of the same skill level.

      • Fedora@lemmy.haigner.me · 11 months ago

        Sorry, but you oversimplify a lot here; it hurts. Language can express and communicate human thought, sure, but human thought is more than language. Human thought includes emotions, experiences, abstract concepts, etc. that go beyond what can be expressed through language alone. LLMs are excellent at generating text, often more skilled than the average person, but training data and algorithms limit LLMs. They can miss nuances of context, tone, or intent. TL;DR: Understanding language doesn’t imply understanding human thought.

        I’d love to know how you even came to your conclusion.

      • MajorHavoc@lemmy.world · 11 months ago

        “Mastery of language is mastery of human thought.” is easy to prove false.

        The current batch of AIs is an excellent data point. These things are very good at language, and they still can’t even count.

        The average celebrity provides evidence that it is false. People who excel at science often suck at talking, and vice-versa.

        We didn’t talk our way to the moon.

        Even when these LLMs master language, it’s not evidence that they’re doing any actual thinking, yet.

    • fidodo@lemm.ee · 11 months ago

      Depends on your definition of AI, and everyone’s definition is different.