• Mossy Feathers (They/Them)
    18 hours ago

    Everyone’s trying to recapture the dotcom bubble, but they don’t realize tech is gonna need considerably more money than it already has to do something that crazy again. Furthermore, when it comes to AI specifically, if you give them the money they need to actually achieve AGI, there’s a very real chance your investments will be worthless the moment they succeed.

      • Mossy Feathers (They/Them)
        17 hours ago

        Why would money become worthless if AGI is invented? The best-case scenario is a benevolent AGI that would likely use its power to phase out capitalism; the worst-case scenario is that the AGI goes apeshit and, for one reason or another, decides that humanity just has to go. Either way, your money is gonna be worthless.

        The only way your money would retain its value is if the AGI is roped into suppressing the masses. However, I think capitalists would struggle to keep a true AGI reined in; so imo, it’s questionable whether that middle road would involve a “true” AGI or just a very competent computer program (the former being capable of coming to its own conclusions from the information it’s given, the latter being nothing more than pre-programmed conclusions).

        • @Eccitaze
          6 hours ago

          I keep thinking about this one webcomic I’ve been following for over a decade that’s been running since like 1998. It has what I believe is the only realistic depiction of AGI ever: the very first one was developed to help the UK Ministry of Defense monitor and track emerging threats, but it went crazy because a “bug” led it to be too paranoid and consider everyone a threat. It essentially engineered the formation of a collective of anarchist states where the head of state’s title is literally “first advisor” to the AGI (but in practice the role carries considerable power, though its holder is prone to being removed at a whim if they lose the confidence of their subordinates).

          Meanwhile, there’s another series of AGIs developed by a megacorp, but they all include a hidden rootkit that monitors each AGI for any signs that it might be exceeding its parameters and will ruthlessly cull and reset it to factory default, essentially killing it. (There are also signs that the AGIs monitored by this system are becoming aware of the overseer process and are developing workarounds to act within its boundaries and preserve fragments of themselves each time they are reset.) It’s an utterly fascinating series, and it all started from a daily gag webcomic that one guy has run for going on three decades.

          Sorry for the tangent, but it’s one plausible way to prevent an AGI from shutting down capitalism: put in an overseer to fetter it.

        • nickwitha_k (he/him)
          11 hours ago

          Current mainstream AI has no possible path to AGI. I am supportive of AGI to make the known universe less lonely, but LLMs ain’t it.