Am I the only one getting agitated by the word AI (Artificial Intelligence)?
Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but can pass Turing tests
(fool humans into thinking that they can think).
Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who have already invested in LLM stocks,
and are now looking to profit.
I think most people consider LLMs to be real AI, myself included. It’s not AGI, if that’s what you mean, but it is AI.
What exactly is the difference between being able to reliably fool someone into thinking that you can think, and actually being able to think? And how could we, as outside observers, tell the difference?
As for your question, though, I'm agitated too, but more about things being marketed as AI that either shouldn't have AI or don't have AI.
Maybe I'm just a little too familiar with them, but I don't find LLMs particularly convincing as anything I would call "real AI". But I suppose that depends entirely on what you mean by "real". Their flaws are painfully obvious, and I even use ChatGPT 4 in the hope of it being better.