The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.
It’s easy to imagine how AI can be beneficial in the short term. The problem is imagining how it won’t go wrong in the long term.
Even sci-fi has a hard time figuring that out. Star Trek just stops at a ChatGPT level of intelligence; that's how smart the ship computer is, and it doesn't get any smarter. Whenever there is something smarter, it's always a unique one-off that can't be replicated.
Nobody knows what the world will look like when we have ubiquitous, smart, and cheap AI, not just ChatGPT-smart, but "smarter than the smartest human"-smart, and by a large margin. There is basically no realistic scenario where we won't end up with AI that is far superior to us.
EMH Mark 1. They duplicated it and used it for cheap, menial labor, despite the fact that it was capable of real intelligence (see The Doctor). The show didn't dive deeper than that; it was literally the ending scene of a single episode, which simply left the audience thinking about the implications while hinting at a possible start to an uprising.
Science fiction is just about entertainment. An AI that's all but invisible and causes no problems isn't really a character worth exploring.
Yeah, but don't you see the problem in that by itself? Even in the best-case scenario we are heading into a future where humanity's existence is so boring that it has no more stories worth telling.
We see a precursor to that with smartphones in movies today. Writers always have to slap some lame excuse in there for the smartphones not to work, because otherwise there wouldn't be a story. Hardly anybody can come up with an interesting story in which the smartphones do work.
No, I don’t see a problem in old tropes dying.