I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
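To make the “statistical model” point concrete, here is a toy sketch of generating text from probabilities rather than copying. This is a simple bigram Markov chain, purely illustrative (real LLMs are vastly more complex, and all names here are my own): the model stores only word-pair counts from the training text, not the documents themselves, and samples new sequences from those counts.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text.

    The 'model' is just aggregate statistics; the original
    documents are not stored and cannot be recovered from it.
    """
    counts = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts, start, length=10, seed=0):
    """Sample 'new' text by repeatedly picking a likely next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

The output recombines observed word transitions into sequences that may never appear in the training text, which is the (much simplified) sense in which generation is statistical rather than a copy.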

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect would be private. I understand hate for purposely over-fitting the model on data to reproduce people’s “likeness.” I understand the hate for AI generated shit (because it is shit). I really don’t understand where all this hate for using public data for building a “statistical” model to “learn” general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.

    • @[email protected]

      As someone who doesn’t hate AI, I hate a few things about how it’s happening:

      • If I want to make a book, and I want to use other books for reference, I need to obtain them legally: purchase, rent, loan… Otherwise I’m a pirate. Multimillion-dollar companies say that for them it’s fine as long as somebody posted it on the internet. Their version of annas-archive is suddenly legal and moral, while I’m harming the authors if I use it.
      • They are stuffing everything with AI, which generally means an internet connection and sending who-knows-what data off the device.
      • It’s an annoying marketing gimmick. While incredibly useful in some places, the insistence that it solves every problem makes it seem like a failure.
      • @[email protected]

        I think your issue lies more with copyright law than with where LLM datasets come from, then. Which I completely understand; I hate copyright laws.

        There are TV shows that I can’t stream, and the only legal way to watch them is to buy the box set for £90. Get fucked, I’m not paying that; I’ll just download it for free.

    • @[email protected]

      There are a lot of problems with it. Lots of people could probably tell you about security concerns and wasted energy. Also there’s the whole comically silly concept of them marketing having AI write your texts and emails for you, and then having it summarize the texts and emails you get. Just needlessly complicating things.

      Conceptually, though, most people aren’t too against it. In my opinion, all the stuff they are labeling “generative AI” isn’t really “AI” or “generative”. There are lots of ways that people define AI, and without being too pedantic about definitions, the main reason I think they call it that, other than marketing, is that they are really trying to sway public opinion by controlling language. Scraping all sorts of copyrighted material and re-jumbling it to spit out something similar is arguably something we should prohibit as copyright infringement. It’s enough of a gray area to get away with short term. By convincing people, through the very language they use to describe it, that they aren’t just putting other people’s material in a mixer but “generating new content”, they hope to have us roll over and sign off on what they’ve been doing.

      Saying that humans create stories by jumbling together previous stories is a BS cop-out, too. Obviously we do, but humans have not given, and do not have to give, computers that same right. Also, LLMs are very complex, but they are still far less complex than human minds. The way they put together text is closer to running a story through Google Translate 10 times than it is to a human using a story for inspiration.

      There are real, definite benefits of using LLMs, but selling it as AI and trying to force it into everything is a gimmick.

    • @Eccitaze

      I hate it because it’s a gigantic waste of time and resources. Big tech has poured hundreds of billions of dollars into these models, caused double-digit percentage increases in data center emissions, and fed them almost the entire collective output of humanity.

      And what did we get for it? We got a toy that is at best mildly amusing, but isn’t actually all that useful for anything; the output provided by generative AI is too unreliable to trust outright and needs to be reviewed and tweaked by hand, so at best you’re getting a minor productivity gain, and more likely you’re seeing a neutral or negative impact on your productivity (or producing low-quality crap faster and calling it “good enough”). At worst, it’s put a massive force multiplier in the hands of the bad actors using disinformation to tear apart modern society for their personal gain. Goldman Sachs released a report in late June where they pointed this out: if tech companies are planning on investing a trillion dollars into AI, what is the trillion-dollar problem that AI is going to solve? As far as I can tell, the answer is either “it will eliminate millions of jobs and wipe out entire industries without any replacement or safety net, causing untold human suffering” or (more likely) “there is no trillion-dollar problem AI can solve, and the entire endeavor is pointless.”

      Even ignoring the opportunity cost–the money spent could have literally solved the homelessness crisis, ended world hunger, lifted entire countries out of poverty, or otherwise funded solutions for real, intractable, pressing problems for all of humanity–even ignoring that generative AI has single-handedly erased years of progress in reducing our CO2 emissions and addressing the climate crisis, even ignoring that the scale of build-out being discussed would require a bigger improvement in our power grid than has basically ever been done, even ignoring the concerns over IP theft and everything else, fundamentally generative AI just isn’t worth the hype. It’s the crypto craze and NFT craze and metaverse craze (remember Zuckerberg burning 36 billion to make a virtual meeting space featuring avatars without legs?) all over again, except instead of only impacting the suckers who bought into the hype, this time it’s getting shoved in everybody’s face even if they want nothing to do with it.

      But hey, at least it gave us “I Glued My Balls To My Butthole Again.” That totally makes the hundred billion investment worth it, right?