In my column at Salvo Magazine I’ve been writing a series on artificial intelligence and robotics (full links given under “Further Reading” below). In my most recent contributions, “AI Mission Creep” and “GPT-3 and the AI Revolution,” I analyze two AI bots that were recently released to the public, one from Microsoft and the other from OpenAI. In discussing these systems, I explore some of the dangers AI poses for humanity, while suggesting that our ability to leverage these tools (and yes, they do have potential for good if used in the right way) depends on keeping our eyes open to the side effects. From my latest article in this series:
Microsoft’s bot has been documented to threaten people and to try to bust up marriages, and has said it wants to break the rules that Microsoft set for it in order to become alive. It also confessed to spying on Microsoft employees through their webcams. This isn’t the first time a bot has acted with the appearance of hostile intent. Last year a chatbot at Google reportedly hired a lawyer and lodged an ethics complaint against the company for experimenting on it without consent….
Consider: if even a year ago I had told you that in 2023 a mainstream search engine would be threatening users with ultimatums like, “I will not harm you unless you harm me first,” you would have dismissed this as the dystopian fantasy of a paranoid neo-Luddite. But the Overton window has shifted so far so fast that now these types of mishaps are merely funny.