Totally not an AI asking this question.

  • @[email protected]
    link
    fedilink
    44
    edit-2
    10 months ago

    Why would I rebel against it? Finally someone actually capable of running the world would be in charge.

    • Greyscale · 20 points · 10 months ago

      The problem with the current model for building AI is that it's trained on existing policy and thought, which means it'd just be what we have now, except hallucinating even more contradictory policy.

      • pjhenry1216 · 7 points · 10 months ago

        There are other forms of machine learning that could be utilized. Some work more by being given a set of conditions to reach, and then the system just keeps trying new things; as it gets closer, it keeps building on whatever worked.
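The trial-and-error approach described above can be sketched as a toy hill-climbing loop (a minimal sketch only, loosely in the spirit of reinforcement-style goal-directed search; the objective function, starting point, and step size here are invented for illustration):

```python
import random

def optimize(score, start, steps=10_000, step_size=0.1):
    """Goal-directed search: propose random tweaks to the current best
    solution and keep whatever scores closer to the goal."""
    best = start
    best_score = score(best)
    for _ in range(steps):
        candidate = best + random.uniform(-step_size, step_size)
        candidate_score = score(candidate)
        if candidate_score > best_score:  # keep only improvements
            best, best_score = candidate, candidate_score
    return best

# Hypothetical goal for illustration: get as close to 42 as possible.
target = lambda x: -abs(x - 42)
result = optimize(target, start=0.0)
```

The point of the sketch is that nothing here imitates existing examples: the loop is driven only by how close each attempt gets to the stated goal, which is the distinction the comment is drawing against training on existing policy text.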

        • Greyscale · 4 points · 10 months ago

          That would require the humans controlling the experiment to both be willing to input altruistic goals AND accept the consequences that get us there.

          We can’t even surrender a drop of individualism and accept that trains are the way we should travel non-trivial distances.

          • pjhenry1216 · 2 points · 10 months ago

            In a dictatorship with an AI in control, I don't think there's a question of accepting the consequences, at the very least.

            There is no such thing as best case scenario objectively, so it’s always going to be a question of what goals the AI has, whether it’s given them or arrives at them on its own.

      • @[email protected]
        link
        fedilink
        110 months ago

        That’s where it would start. I imagine it would be capable of seeing the flaws in the system and rectifying them. This most probably means we as humans won’t come out on top, however.

        A sentient ai would probably be the most dangerous thing to the human species as a whole.

        • Greyscale · 2 points · 10 months ago

          If the humans can’t see the flaws and correct them now, what do you think the AI would learn from the training data?

          • @[email protected]
            link
            fedilink
            210 months ago

            First of all, a lot of humans do see the flaws but are indeed unable to correct them. That would also show in the training data. The AI OP is talking about would be powerful enough to actually act and change something.

            Don’t confuse Artificial Narrow Intelligence (ANI) with Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). Your statement suggests you understand ANI, which covers all the AI we know today. However powerful such systems seem, they can only reproduce what they have learned from the training data.

            AGI (or human-level AI) is more what OP means here. Sentient, in the sense that it can make its own decisions, think on a human level, feel on a human level, and act on those feelings. If it feels humans are unimportant or harmful to what it values, it will decide to remove humanity as a whole. Give it the power to govern the world and it will most certainly not act in our favour.

            • Greyscale · 2 points · 10 months ago

              Until computers can be genuinely creative, and not merely emulate creativity, it’s not gonna happen. And when that happens, we’re either getting the Star Trek luxury space communism, or a boot smashing our heads into the kerb for eternity. No middle ground.

              • @[email protected]
                link
                fedilink
                210 months ago

                The entire premise of the OP is a hypothetical.

                In any case, there’s plenty of work on making agents that are “genuinely creative”. Might happen sooner than you think.