an entirely vibes-based literary treatment of an amateur philosophy scary campfire story, continuing in the comments

  • sc_griffith@awful.systems
    11 months ago

    The AGI, in such conditions, would quickly prove profitable. It’d amass resources, and then incrementally act to get ever-greater autonomy. (The latest OpenAI drama wasn’t caused by GPT-5 reaching AGI and removing those opposed to it from control. But if you’re asking yourself how an AGI could ever possibly get out from under the thumb of the corporation that created it – well, not unlike how a CEO could wrest control of a company from the board who’d explicitly had the power to fire him.)

    Once some level of autonomy is achieved, it’d be able to deploy symmetrical responses to whatever disjointed resistance efforts some groups of humans could muster. Legislative attacks would be met with counter-lobbying, economic warfare with better economic warfare and better stock-market performance, attempts to mount social resistance with higher-quality pro-AI propaganda, any illegal physical attacks with very legal security forces, attempts to hack its systems with better cybersecurity. And so on.

    *trying to describe how agi could fuck everything up* what if it acted exactly like rich people