Anthropic on Thursday released Claude Opus 4.7, an upgraded version of its flagship artificial intelligence model featuring improved coding capabilities, enhanced vision processing, and a new self-verification function that allows the system to review its own outputs.

The release, reported by Axios, comes with an unusual admission from the company: Opus 4.7 does not match the performance of Mythos, a more advanced AI system that Anthropic has developed but declined to release to the public, citing safety concerns.

A benchmark gap with an unreleased rival

In benchmark charts accompanying the announcement, Anthropic showed that Opus 4.7 outperforms its predecessor, Opus 4.6, as well as competing models including OpenAI's ChatGPT 5.4 and Google's Gemini. However, the same charts placed Mythos above Opus 4.7, making it the stronger system by the company's own metrics.

The public acknowledgment that Anthropic is holding back a more capable model is notable. The company's position implies that raw capability and public deployment are treated as separate objectives, a stance that sets it apart from competitors, who generally release their most powerful models as quickly as feasible.

Safety as a release threshold

Anthropic has not specified what safety concerns are preventing Mythos from reaching users, nor has it provided a timeline for any potential release. The company has previously described itself as operating at what it calls the frontier of AI development, with safety research forming a core part of its stated mission.

The decision to disclose Mythos's existence and superior performance while withholding the model itself reads as a calculated act of transparency. It signals to the research community and regulators that Anthropic is aware of the risks posed by more powerful systems, while also demonstrating the company's continued technical progress in a competitive market.

What Opus 4.7 offers users

For users, Opus 4.7 brings tangible improvements. The enhanced coding features should benefit software developers who use the platform for programming assistance, the upgraded vision capabilities improve the model's ability to interpret and respond to image-based inputs, and the self-verification function adds a layer of accuracy review to generated responses.

The release continues a period of rapid iteration across the AI industry, with major labs including OpenAI and Google regularly updating their flagship models. Anthropic's willingness to publicly benchmark against its own unreleased system adds an unusual dimension to that competition.