Former OpenAI engineer on the culture at the ChatGPT-maker | Technology News


Amid a talent war between Meta and OpenAI, Calvin French-Owen, an engineer who worked at the ChatGPT-maker and left the startup three weeks ago, described what it’s like to work there.

The MIT graduate, who joined OpenAI in May 2024 and left in June, published a detailed blog post reflecting on his journey at OpenAI, one of the most advanced AI labs in the world. He said he didn’t leave because of any “drama,” but rather because he wants to return to being a startup founder. French-Owen previously co-founded the customer data startup Segment, which was acquired by Twilio in 2020 for $3.2 billion.

“I wanted to share my reflections because there’s a lot of smoke and noise around what OpenAI is doing, but not a lot of first-hand accounts of what the culture of working there actually feels like,” he wrote.


On the culture at OpenAI, which is led by Sam Altman, French-Owen said it feels like any other Silicon Valley startup, but he also addressed some misconceptions about the company. According to him, OpenAI has grown too quickly, from 1,000 to 3,000 employees in just a year, and there’s a reason behind such rapid hiring: ChatGPT is the fastest-growing consumer product, having reached 500 million monthly active users and still growing.

However, he admitted that chaos naturally follows when a company grows that fast, especially at the scale of OpenAI. “Everything breaks when you scale that quickly: how to communicate as a company, the reporting structures, how to ship product, how to manage and organize people, the hiring processes, etc.,” French-Owen wrote.

French-Owen noted that OpenAI doesn’t rely on email as a main communication channel among employees.

“An unusual part of OpenAI is that everything, and I mean everything, runs on Slack,” he wrote. “There is no email. I maybe received ~10 emails in my entire time there.” He also observed what he called a “very significant Meta → OpenAI pipeline” in engineering hiring.


“In many ways, OpenAI resembles early Meta: a blockbuster consumer app, nascent infra, and a desire to move really quickly,” he noted. Like at a small startup, people at OpenAI are still encouraged to pursue their ideas, but that also results in overlapping work.

“I must’ve seen half a dozen libraries for things like queue management or agent loops,” he said.

He described the range of coding talent at OpenAI as highly varied, from Google veterans to new PhD graduates with less real-world experience. Because OpenAI relies heavily on Python, the company's central code repository, what he called "the back-end monolith," can feel like "a bit of a dumping ground."

French-Owen recounted the intensity of launching Codex, an AI coding assistant, calling it one of the hardest work periods of his career. “The Codex sprint was probably the hardest I’ve worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7 a.m. Working most weekends. We all pushed hard as a team because every week counted. It reminded me of being back at YC,” he recalled.


His team, consisting of around eight engineers, four researchers, two designers, two go-to-market staff, and a product manager, built and launched Codex in just seven weeks, with little sleep.

“I’ve never seen a product get so much immediate uptake just from appearing in a left-hand sidebar, but that’s the power of ChatGPT,” he said.

French-Owen also pushed back against the idea that OpenAI is unconcerned about safety. In recent months, several former employees and AI safety advocates have criticized the company for not prioritizing safety adequately. But according to French-Owen, the focus is more on practical risks than abstract, long-term threats.

“I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones like intelligence explosion or power-seeking,” he wrote.


“That’s not to say that nobody is working on the latter; there are definitely people focused on theoretical risks. But from my viewpoint, it’s not the main focus. Most of the work being done isn’t published, and OpenAI really should do more to get it out there.”

He also described the work atmosphere at OpenAI as serious and mission-driven.

“OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On one hand, there’s the goal of building AGI, which means there’s a lot to get right. On the other hand, you’re trying to build a product that hundreds of millions of users rely on for everything from medical advice to therapy,” he wrote.

OpenAI has recently made headlines for losing key AI engineers to Meta. Mark Zuckerberg, Meta’s co-founder and CEO, has reportedly offered massive compensation packages to lure away talent. Meta’s new superintelligence team includes researchers from OpenAI, Google, and Anthropic.


In a recent podcast interview, Sam Altman commented on Meta’s aggressive hiring strategy, calling the reported $100 million signing bonuses “crazy.”
