
I asked one venture investor who has spoken with multiple high-profile AI experts. Their takeaway was simple: OpenAI wants to have it both ways on safety and commercialization.
On the one hand, safety is built into the core of the startup. It’s structured as a “capped-profit” company governed by a nonprofit, and Altman doesn’t hold equity directly in OpenAI. The idea was for OpenAI to pursue building AGI that “is safe and benefits all of humanity.”
But the startup’s commercial aspirations are clear. It’s aggressively pushed out new models to compete with rivals and is reportedly considering adjusting its structure to become a full-blown, for-profit company. It also disbanded the team responsible for mitigating AI risks.
The result, the VC told me, is that people feel OpenAI is talking out of both sides of its mouth. In reality, they said, the split between the company's focus on commercialization and safety feels closer to 95/5.
It doesn’t help that some OpenAI employees joined when that split was closer to 80/20 and favored safety over business, they added.
The impetus for the increased focus on business isn’t entirely clear. But the failed ouster of Altman, which included concerns over safety, does seem like a turning point for the startup.
Whatever the case, OpenAI can't keep trying to sit on both sides of the fence, according to the VC. The tension between its commercial and safety aspirations is too high to straddle the line without expecting more issues, they said.
