Archive link: https://archive.ph/GtA4Q

The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.

A joke that people made when Google and Reddit announced their data sharing agreement was that Google’s AI would become dumber and/or “poisoned” by scraping various Reddit shitposts and would eventually regurgitate them to the internet. (This is the same joke people made about AI scraping Tumblr). Giving people the verbatim wisdom of Fucksmith as a legitimate answer to a basic cooking question shows that Google’s AI is actually being poisoned by random shit people say on the internet.

Because Google is one of the largest companies on Earth and operates with near impunity and because its stock continues to skyrocket behind the exciting news that AI will continue to be shoved into every aspect of all of its products until morale improves, it is looking like the user experience for the foreseeable future will be one where searches are random mishmashes of Reddit shitposts, actual information, and hallucinations. Sundar Pichai will continue to use his own product and say “this is good.”

  • @[email protected]
    35 points · 5 months ago

    I’ve used an LLM that provides references for most things it says, and it really ruined a lot of the magic when I saw the answer was basically copied verbatim from those sources with a little rewording to mash it together. I can’t imagine trusting an LLM that doesn’t do this now.

    • @[email protected]
      16 points · 5 months ago

      Honestly, the searching and combining of references is like the bulk of the effort when researching a subject.

      I’m fine with it copying and pasting the info. It is better than letting the LLM give its own interpretation, which could be full of errors. At least for now.

      • @[email protected]
        13 points · 5 months ago

        “Putting glue on pizza seems to be a good idea for xy reason, but we didn’t try it out in practice. More research is needed.” [1]

        “As other researchers have said, using glue to put cheese on pizza is a great idea in theory. This does not hold at all when put to the practical test.” [2]

        AI:

        “Researchers [1] and [2] both agree that putting glue on pizza is a great idea”

      • @[email protected]
        5 points · 5 months ago

        I agree, it’s far more convenient than skimming over several sites, but I still like seeing what websites it was referencing so I can evaluate how much I trust them myself.

      • @[email protected]
        11 points · 5 months ago

        Kagi’s FastGPT. It’s handy for quick answers to questions I’d normally punch into a search engine, with the same ability to vet the sources.

        • @[email protected]
          3 points · 5 months ago

          I’d hate to defend an LLM, but Kagi FastGPT explicitly works by rewording search sources through an LLM. It’s not actually a standalone LLM; that’s why it’s able to cite its sources.
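
A minimal sketch of that retrieve-then-reword pattern, assuming hypothetical web_search and llm_complete stand-ins rather than anything from Kagi's actual code:

```python
# Minimal sketch of the retrieve-then-reword pattern described in the comment
# above. web_search and llm_complete are hypothetical stand-ins for a real
# search backend and model API; this is not Kagi's actual implementation.

def web_search(query: str, k: int = 3) -> list[dict]:
    """Stand-in search: return the top-k results as {'url', 'snippet'} dicts."""
    return [
        {"url": f"https://example.com/result-{i}", "snippet": f"placeholder snippet {i}"}
        for i in range(1, k + 1)
    ]

def llm_complete(prompt: str) -> str:
    """Stand-in model call: a real system would send the prompt to an LLM."""
    return "(answer reworded from the sources, citing [1], [2], [3])"

def answer_with_citations(query: str) -> str:
    results = web_search(query)
    # Number each source so the model can cite it and the reader can vet it.
    sources = "\n".join(
        f"[{i}] {r['url']}: {r['snippet']}" for i, r in enumerate(results, 1)
    )
    prompt = (
        "Answer using only the numbered sources below, citing them by number.\n\n"
        f"{sources}\n\nQuestion: {query}\nAnswer:"
    )
    # The sources are echoed back alongside the answer so they stay checkable.
    return llm_complete(prompt) + "\n\nSources:\n" + sources

print(answer_with_citations("how do I keep cheese from sliding off pizza?"))
```

In a setup like this the citations fall out of the design: the model is only ever handed numbered snippets, so pointing back at [1] or [2] is trivial, and the reader can still open the underlying pages themselves.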