Is OpenAI Building Something Dangerous?

We naturally want to know that OpenAI, the $90 billion company behind ChatGPT, isn’t inadvertently building a doomsday machine. But it’s hard to feel reassured when Sam Altman’s method for alleviating fears is to say that AGI might destroy the human race but probably won’t.

In May I raised concerns that Sam Altman, the 38-year-old CEO, doesn’t really know what he is building at his San Francisco-based company. Testifying before a subcommittee of the Senate Judiciary Committee on AI safety, Altman said, “My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world.” How much harm? Are we talking about a paperclip-maximizer-turned-Terminator scenario?

Actually, yes we are. On the Lex Fridman podcast, Fridman asked Altman about Eliezer Yudkowsky’s concern that a superintelligent AGI could turn hostile against humans. Altman candidly replied, “I think there is some chance of that.”

Altman himself seems to be ready for a worst-case scenario. According to The New Yorker, his house is a prepper’s paradise complete with guns, gold, potassium iodide, antibiotics, batteries, water, and gas masks from the Israeli Defense Force. Given his alertness to risk, Altman has been unusually candid about the dangers of AI. The only difference between him and the doomers is that he thinks the way to combat the dangers is through acceleration: beat AI before it beats us. As Altman put it in an interview for The New Yorker, “The merge has begun—and a merge is our best scenario. Any version without a merge will have conflict: we enslave the A.I. or it enslaves us.”

But is this accelerated approach safe? When Ross Andersen interviewed Altman about these concerns, Andersen concluded that not even OpenAI really knows what it is building. Describing their conversation for The Atlantic, Andersen remarked,

Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president. But by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk. I don’t hold that against him, exactly—I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.

The recent shake-up at the company, with Altman’s abrupt dismissal followed by his reinstatement, has raised concerns about a failure of risk management. Jill Filipovic reported for CNN that “The company has already reportedly invented an AI technology so dangerous they will never release it – but they also won’t tell reporters or the public exactly what it is.” And after reports that OpenAI’s Chief Scientist, Ilya Sutskever, has been making weird spiritual claims, leading chants of “feel the AGI!”, and burning an effigy representing unaligned AI, it’s becoming more difficult to have confidence in the company’s judgment.

This lack of transparency has fueled speculation. Was it his accelerationist approach in the face of safety concerns that led to Altman’s dismissal? Maybe. The word on the street, Mark Sullivan reported, is that Altman’s company may be on the verge of a dangerous breakthrough: “OpenAI has reportedly been working on a new kind of agent, known internally as ‘Q*’ (‘Q-star’), which marks a major step toward OpenAI’s goal of making systems that are generally better than humans at doing a wide variety of tasks (aka, artificial general intelligence, or AGI).”

We don’t know if these concerns were behind the recent shake-up. But according to Reuters, sources have reported “concerns over commercializing advances before understanding the consequences.”

What are the consequences of generative AI? We don’t know. Nobody does. Not even Sam Altman.
