- cross-posted to:
- [email protected]
Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
I've had lengthy and intricate conversations with ChatGPT about philosophy and religious concepts. It let me playfully peek into Spinoza's worldview, with a few errors.
I have no problem accepting that it is form, but I cannot deny that it conveys meaning as if it understands.
The article is very opinionated and dismissive in that regard. It even goes so far as to predict what future research and engineering cannot achieve, which makes it untrustworthy.
We cannot even pin down what we mean by intelligence and meaning. And despite being far too long, the article never mentions emergent capabilities or quotes any of the many contrary scientific views.
Apart from the unnecessarily long anecdotes about autistic and disabled people, did anybody learn anything from this article? It feels like uncritical parroting of what people already like to think so they can feel superior and secure.
I've read a few texts from the same source and they come across as quite childish.
It felt like reading essays by very young children: there is some degree of coherence and some information is there, but it lacks any actual advancement of the subject.