DeepSeek has lit a fire under OpenAI


OpenAI Launches a Fast, Cheap, and Smart Reasoning Model to Take On DeepSeek

The trade-offs made sense while o1 was an enormous experiment. They made less sense for chat, a product used by millions of people that was built on a different, more reliable stack. When o1 became a product, cracks began to appear in the internal processes. “It was like, ‘why are we doing this in the experimental codebase, shouldn’t we do this in the main product research codebase?’” the employee explains. “There was major pushback to that internally.”

It has been just over a week since DeepSeek came to the attention of the world. The introduction of its open-weight model, reportedly trained on a fraction of the specialized computing chips that power leading labs’ models, set off shock waves inside OpenAI. The startup’s success has Wall Street questioning whether companies like OpenAI are overspending on compute, while some OpenAI employees claim to have seen hints that DeepSeek improperly distilled OpenAI’s models.

This also marks the first time free users of ChatGPT will be able to try one of OpenAI’s reasoning models, just days after Microsoft made o1 free for all Copilot users. Free users can try o3-mini in ChatGPT by selecting the Reason feature in the chat bar, with rate limits similar to the existing GPT-4o limits. o3-mini will also be available to ChatGPT Plus, Team, and Pro users worldwide today, and OpenAI is tripling the message limit for Plus and Team users to 150 messages per day. Only Pro users, who pay $200 a month, will get unlimited access to o3-mini.

In response, OpenAI is preparing to launch a new model today, ahead of its originally planned schedule. The model, o3-mini, will debut in both the API and ChatGPT. Sources say it offers o1-level reasoning at 4o-level speed. In other words, it’s fast, cheap, smart, and designed to crush DeepSeek.
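For developers, access is expected to work through the same chat completions interface OpenAI already exposes. Below is a minimal sketch, assuming the model is served under the name o3-mini and that the endpoint accepts an optional reasoning_effort setting; both details follow OpenAI’s published API conventions rather than anything confirmed in this article.

```python
# Minimal sketch: calling o3-mini via OpenAI's chat completions API.
# The model name and the reasoning_effort parameter are assumptions based on
# OpenAI's public API docs, not details reported in this article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",            # the new reasoning model (assumed identifier)
    reasoning_effort="medium",  # assumed option: low | medium | high
    messages=[
        {"role": "user", "content": "How many prime numbers are there below 100?"}
    ],
)

print(response.choices[0].message.content)
```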

Originally announced as part of OpenAI’s 12 days of “ship-mas” in December, o3-mini is designed to match o1’s performance in math, coding, and science while responding faster than the existing reasoning models. o3-mini should respond 24 percent faster than o1-mini and provide more accurate answers. The latest model will also show how it worked out an answer, rather than only giving a final response.