I have to disagree with that. To quote the comment I replied to:
AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued
Where’s the “turn of phrase” in this, lol? It could hardly read any more clearly that they assume this “AI” can “figure” stuff out, which is simply false for LLMs. I’m not trying to attack anyone here, but spreading misinformation is not ok.
I’ll be the first one to explain to people that AI as we know it is just pattern recognition, so yeah, it was a turn of phrase, thanks for your concern.
Or, hear me out, there was NO figuring of any kind, just some magic LLM autocomplete bullshit. How hard is this to understand?
It’s a turn of phrase lol
You say this like human “figuring” isn’t some “autocomplete bullshit”.
Here we go…
You can play with words all you like, but that’s not going to change the fact that LLMs fail at reasoning. See this Wired article, for example.