• @[email protected]
    0
    9 months ago

    Yeah, but there were admins spying on what you did and banning you. Quite frankly, I have much greater trust in AI admins than in human admins. Not that some human admins aren’t great, but why risk it? Same with self-driving cars: as soon as they’re ready, I’m ready to never drive again.

    • @[email protected]OPM
      3
      9 months ago

      You trust a billion-dollar company with no morals with your data? Isn’t that the whole reason we’re on this site? Community servers are like Lemmy instances.

      • @[email protected]
        4
        9 months ago

        Sure, and they can have AI moderators on Lemmy instances too. Whatever concerns apply to corporate AI admins also apply to corporate human admins.

    • ☆Luma☆
      2
      9 months ago

      What’s stopping the AI from showing bias here? Humans tailor the AI, so without transparency that risk will always be inherent.

      • @[email protected]
        2
        9 months ago

        Oh sure, there’s definitely bias in AI, same as with self-driving cars. They make mistakes, but far fewer than humans do.

        • ☆Luma☆
          1
          9 months ago

          Sure, but the mistakes aren’t the main issue. The issue is that AI is just a tool, and by extension it can be abused by the humans in control. You have no idea what rules they give it or what false positives result from it.

          My primary concern here is that it’s Blizzard, a company that loves to gargle honey for China and is all for banning players who speak against them, that’s in charge of this AI.

          Blizzard’s previously talked about using AI to verify reports of disruptive voice chat, which is now running in most regions, though not globally. The developer says it has seen this technology “correct negative behavior immediately, with many players improving their disruptive behavior after their first warning.”

          Great, they can auto-ban players like Ng Wai Chung, I guess, for whatever they subjectively deem ‘harmful’. There’s also the looming possibility that a friend wanders into my room, says something dumb, and now I’m closer to a ban because of an unrelated choice I made outside the game.

          And we definitely trust Blizzard to be responsible with all the audio data they get to harvest. That won’t be abused later, right?

          • @[email protected]
            1
            9 months ago

            I mean, that’s a general argument against technology. Yes, more technology means more ruthlessly efficient abuse, but ultimately you either think technology is better in the long run or you don’t. Either way, it’s inevitable. Maybe the EU will ban those abuses, China won’t, and the US will find some weird compromise between the two.