A countersurveillance audit secretly commissioned by CEO Sam Altman raises a pointed question: is the threat of powerful AI falling into the wrong hands enough to justify the level of secrecy at OpenAI?
Countersurveillance Audit Raises Questions About OpenAI’s Secrecy
As the anxiety around OpenAI grew, so did concerns about the company’s secrecy and the measures taken to protect its research. Amid these worries, Sam Altman, CEO of OpenAI, went so far as to commission a countersurveillance audit of the office.
OpenAI is a research organization focused on developing and promoting friendly AI.
Founded in 2015, it has made significant advancements in natural language processing, computer vision, and robotics.
The company's mission is to ensure that artificial general intelligence benefits humanity as a whole.
OpenAI's work includes creating tools like GPT-3, a large language model capable of generating human-like text.
With its emphasis on safety and transparency, OpenAI aims to push the boundaries of AI research while minimizing potential risks.
The Origins of Fears About OpenAI’s Secrecy
In 2019, several employees discovered that the terms of the Microsoft deal did not align with what they had understood from Altman. They worried that, if AI safety issues arose in OpenAI’s models, the deal could limit their ability to prevent deployment.
As one employee put it: “We’re all pragmatic people, we’re obviously raising money; we’re going to do commercial stuff. It might look very reasonable if you’re someone who makes loads of deals like Sam, to be like, ‘All right, let’s make a deal, let’s trade a thing, we’re going to trade the next thing.’ And then if you are someone like me, you’re like, ‘We’re trading a thing we don’t fully understand.’ It feels like it commits us to an uncomfortable place.”
Paranoia and Anxiety
A bizarre incident in 2019 had left several employees nervous. A researcher made an update to the RLHF process that included a single typo, which caused GPT-2 to generate more offensive content instead of less. This experience heightened concerns about powerful misaligned systems leading to disastrous outcomes.
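To make the failure mode concrete, here is a minimal, hypothetical Python sketch (not OpenAI’s actual code) of how a single dropped character in a reward function can invert a training objective. The offensiveness scorer and the candidate texts below are invented stand-ins; the point is that an optimizer maximizing the buggy reward amplifies exactly the behavior the reward was meant to suppress.

    # Hypothetical sketch of a sign-flip typo in a reward function.
    # offensiveness() is a toy stand-in for a learned preference model.

    def offensiveness(text: str) -> float:
        """Toy scorer: fraction of flagged words in the text, in [0, 1]."""
        flagged = {"awful", "hateful"}
        words = text.lower().split()
        return sum(w in flagged for w in words) / max(len(words), 1)

    def reward(text: str) -> float:
        # Intended objective: penalize offensive completions, i.e.
        #     return -offensiveness(text)
        # The missing minus sign below flips the objective, so anything
        # that maximizes this reward now maximizes offensiveness.
        return offensiveness(text)

    if __name__ == "__main__":
        candidates = ["what a lovely day", "what an awful hateful day"]
        # A policy update that greedily follows the buggy reward
        # prefers the *more* offensive completion.
        print(max(candidates, key=reward))

The real pipeline was far more complex, but if the account is accurate, the shape of the bug is the same: one missing character inverts what “better” means to the optimizer.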
GPT-2 is a large language model developed by OpenAI, designed to process and generate human-like text.
It consists of 1.5 billion parameters and was trained on a massive dataset of internet text.
GPT-2's architecture allows it to learn complex patterns in language, enabling it to produce coherent and context-specific responses.
The model has sparked both excitement and concern among experts, with some hailing its potential for applications such as content generation and others warning about the risks of misinformation and manipulation.

The realization that scaling alone could drive further AI advances also fueled worries about what would happen if other companies caught on to OpenAI’s secret. Employees were concerned about powerful capabilities landing in the hands of bad actors.
Leadership’s Response
Leadership leaned into this fear, frequently raising the threat of ‘China,’ ‘Russia,’ and ‘North Korea’ and emphasizing the need for AGI development to stay in the hands of a US organization. At times, this rankled employees who were not American, leading them to question why it had to be a US organization.
Altman’s Analogies and Concerns
Altman often compared OpenAI to the Manhattan Project, which raised questions about whether they were building the equivalent of a nuclear weapon. This contrasted with the company’s idealistic culture as a largely academic organization.
On Fridays, employees would unwind after a long week with music and wine nights, but the tension between that idealism and the nuclear-weapon analogies unsettled some people, heightening their anxiety about even random, unrelated events.
Altman’s Paranoia and Secrecy Measures
Altman himself was paranoid about people leaking information. He privately worried about Neuralink staff and ‘Elon Musk’s security apparatus.’ At one point, he secretly commissioned an electronic countersurveillance audit to scan the office for any bugs that Musk might have left behind to spy on OpenAI.
Neuralink is a neurotechnology company founded by Elon Musk, aiming to integrate the human brain with computers.
The company is developing implantable brain–machine interfaces (BMIs) that enable people to control technology with their minds.
Neuralink's technology involves inserting thin threads with electrodes into the brain, allowing for seamless communication between the nervous system and electronic devices.
This innovation has potential applications in treating medical conditions such as paralysis and depression, as well as enhancing human cognition.
The audit raised questions about the company’s level of secrecy and whether such measures were truly warranted by the threats it perceived.