OpenAI Expands GPT-4.1 and Mini Access to ChatGPT Users, Replaces Older Models to Simplify Experience and Improve AI Performance

OpenAI rolls out GPT-4.1 and 4.1 Mini to ChatGPT users, enhancing coding performance while addressing safety and transparency concerns with a new hub.

OpenAI has officially released its GPT-4.1 and GPT-4.1 Mini models on the ChatGPT platform, making them available to a wider audience for the first time. Previously accessible only through OpenAI's API, GPT-4.1 can now be used directly by ChatGPT subscribers on the Plus, Pro, and Team plans, while GPT-4.1 Mini is available to both free and paid users.
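For context, developers who used these models before this rollout did so through OpenAI's API. A minimal sketch of such a call, assuming the official openai Python SDK and the publicly documented gpt-4.1 and gpt-4.1-mini model identifiers, might look like this:

```python
# Illustrative sketch: calling the GPT-4.1 family through OpenAI's API,
# assuming the official openai Python SDK (v1+) and the "gpt-4.1" /
# "gpt-4.1-mini" model identifiers.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

response = client.chat.completions.create(
    model="gpt-4.1-mini",  # swap in "gpt-4.1" for the larger model
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```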

According to a recent OpenAI post on X, the rollout is a response to growing user demand. It marks a notable change of direction, as the company expands access to its latest AI models beyond developers.

GPT-4.1 builds on the strengths of its predecessor, GPT-4o, with better performance on coding and instruction-following tasks. It is especially useful for software engineers, making code development, debugging, and the handling of complex instructions more efficient.

While GPT-4.1 improves speed and usability, OpenAI stated that it is not a frontier model, meaning it does not introduce new input modalities (such as video or sophisticated speech interaction). As a result, it does not require the same level of safety review as more capable models.

Johannes Heidecke, Head of Safety Systems at OpenAI, stated that GPT-4.1 performs similarly to GPT-4o in standard safety evaluations. "Improvements can be delivered without introducing new safety risks," he remarked, adding that GPT-4.1 "doesn't beat o3 in intelligence," referring to the company's o3 reasoning model.

Alongside the rollout, OpenAI is removing the GPT-4o Mini model from ChatGPT, while GPT-4 was retired earlier, on April 30. This move is intended to simplify the model lineup and reduce confusion for users who switch between multiple versions.

The release of GPT-4.1 without a public safety report drew criticism from the AI research community, with researchers raising concerns about transparency and safety practices. In response, OpenAI has pledged to be more transparent and has launched a new "Safety Evaluations Hub," where internal safety test results will be published more regularly.

This upgrade illustrates OpenAI's continued effort to balance innovation with responsible deployment while providing users with smarter, faster tools directly within ChatGPT.

This article is based on information from Business Today.