so the LLM is worthless if you already understand the topic because its explanations are terrible, but if you don’t know the topic the LLM’s explanations are worthless because you don’t know when it’ll be randomly, confidently, and extremely wrong unless you luck into the right incantation
what a revolutionary technology. thank fuck we can endanger the environment and funnel money into the pockets of a bunch of rich technofascists so we can have fancy autocomplete tell us about a basic undergrad CS algorithm in very little detail and with a random chance of being utterly but imperceptibly wrong
I don’t find the explanations bad at all… and it’s extremely useful if you know nothing, or not enough, about a topic
FWIW, I’m a strong proponent of local AI. The big models are cool and approachable. But a model that runs on my 5-year-old budget gaming PC isn’t that much less useful.
We needed the big, expensive AI to get here… But the reason I’m such an advocate is because this technology can do formerly impossible things. It can do so much good or harm - which is why we need as many people as possible to learn how to use it for what it is, not to mindlessly chase the promise of a replacement for workers.
AI is here to stay, and it’ll change everything for better or worse. Companies aren’t going to use it for better; they’re going to chase bigger profits until the world burns. They’re already ruining the web and society, with both AI and enshittification.
Individuals skillfully using AI can do more than they can without it - we need every advantage we can get.
It’s not “AI or no AI”, it’s “AI everywhere or only FAANG controlled AI”
yeah, you’re still doing everything you can to dodge the articles we’ve linked and points we’ve made showing that the fucking things just make up plausible bullshit and are therefore worthless for learning, and you’ve taken up more than enough of this thread repeating basic shit we already know. off you fuck
that statement being true is quite probable: it was likely impossible before this point to set this much money on fire this pointlessly!