The reposts and expressions of shock from public figures followed quickly after a user on the social platform X who uses a pseudonym claimed that a government website had revealed “skyrocketing” rates of voters registering without a photo ID in three states this year — two of them crucial to the presidential contest.

“Extremely concerning,” X owner Elon Musk replied twice to the post this past week.

“Are migrants registering to vote using SSN?” Georgia Rep. Marjorie Taylor Greene, an ally of former President Donald Trump, asked on Instagram, using the acronym for Social Security number.

Trump himself posted to his own social platform within hours to ask, “Who are all those voters registering without a Photo ID in Texas, Pennsylvania, and Arizona??? What is going on???”

Yet by the time they tried to correct the record, the false claim had spread widely. In three days, the pseudonymous user’s claim amassed more than 63 million views on X, according to the platform’s metrics. A thorough explanation from Richer attracted a fraction of that, reaching 2.4 million users.

The incident sheds light on how social media accounts that shield the identities of the people or groups behind them through clever slogans and cartoon avatars have come to dominate right-wing political discussion online even as they spread false information.

  • Well, the types of misinformation will vary, but anywhere people gather and have discussions is bound to have some bullshit floating around that gets spread. It’s a fault of humanity, not of any particular persuasion. But it’s a far cry from rumors about how to find the Triforce in Ocarina of Time to shit like “vaccines cause autism” and such.

    • @[email protected]
      link
      fedilink
      6
      edit-2
      2 months ago

      Yeah, but there’s a tendency online in liberal circles to think that any criticism from the left is right-wing or foreign interference. I’ve seen a lot of people in the political groups here claim that the leftists they’re arguing with are part of hostile disinformation campaigns, which is just silly; online propagandists make Facebook Groups and Pages to create memes and articles, and hundreds of sock-puppet accounts to disseminate them. They don’t waste hours writing dozens of replies to a single account on a small, niche website.

      • @[email protected]
        link
        fedilink
        12 months ago

        Part of the problem is there’s lots of gullible people who repeat whatever nonsense makes them feel righteous. It’s usually not clear if you’re talking to a dumbass who fell for right-wing or Russian imperialist talking points, or an actual neonazi pretending to be a misguided leftist for trolling purposes, or an actual Russian imperialist pretending to be a misguided leftist for trolling purposes. I think you’re right though that Lemmy is not a likely target for any kind of organized propaganda campaign because it’s relatively tiny and would be a waste of time for those sorts of groups.

      • @[email protected]
        link
        fedilink
        12 months ago

        In the brave new world of LLMs, it no longer takes hours to write comments and replies all day in favour of some political or commercial view. The whole process can be automated! Cool, right?

        • @[email protected]
          link
          fedilink
          22 months ago

          Well, I’ve never used an LLM to argue with strangers online, but I would imagine it would take a lot of effort to keep getting coherent responses to every comment. But even if it is fast and easy, are you really suggesting that right-wing or foreign trolls are concentrating on individual arguments in niche communities? I’ve heard of fake news outlets, astroturfed hashtags, propaganda memes, reply spam, and other broad influence campaigns, but I’ve never heard of troll farms being used for individual arguments, especially on small websites. It seems like, even if it did take minimal effort, it would also have minimal influence. Do you have any evidence this is happening?

          • @[email protected]
            link
            fedilink
            12 months ago

            Well, I’ve seen clear examples of AIs responding to comments on hot-button topics on reddit. But I guess that isn’t a small website. In any case, the only point I was really trying to make is that widespread social manipulation is becoming easier. If someone decides they want to influence a discussion somewhere, they can do that without a great deal of effort. The comments don’t have to be detailed or coherent. Simply being on-topic and persistent is enough, raising vaguely relevant talking points whenever a response is expected.