Anthropic has begun a tightly controlled rollout of a new artificial intelligence model called Mythos, which company officials and outside observers believe represents a significant and dangerous leap in AI capability, according to a report by Axios.

The model is being described as the first of its kind capable of bringing down a Fortune 100 company, disrupting large segments of internet infrastructure, or penetrating critical national defense systems. As a result, Anthropic has sharply limited who can access it.

Approximately 40 carefully vetted companies and organizations have been granted access so far, according to Axios. The restricted release reflects growing concern within the AI industry that some models are now powerful enough to cause widespread harm if made broadly available.

A new threshold in AI risk

The Mythos release marks what some observers are calling a turning point in the development of advanced AI systems: a moment where the potential for catastrophic misuse has begun to outpace the readiness of governments, regulators, and institutions to respond.

Anthropic's decision to limit access rather than pursue a wider commercial launch suggests the company is treating this model differently from previous releases. The move also raises questions about how such decisions will be made in the future, and by whom.

Most policymakers and officials with oversight responsibility are not yet equipped to evaluate or govern systems at this level of capability, according to the Axios report. The gap between what these models can do and what existing regulatory frameworks can address remains significant.

Controlled access as a safety measure

By restricting Mythos to a small group of vetted users, Anthropic appears to be betting that the risks of broader deployment outweigh the commercial benefits of a standard release. The approach mirrors strategies used in fields like biosecurity, where access to dangerous materials or knowledge is limited to credentialed institutions.

The identities of the roughly 40 organizations currently granted access were not disclosed in the Axios report.

The release comes amid broader industry and government debates over how to manage increasingly capable AI systems. Proposals for mandatory pre-release evaluations, government licensing regimes, and international coordination on AI safety have all gained traction in recent months, though no comprehensive framework has been established.

Anthropic has not publicly commented beyond what was reported by Axios. The company has historically positioned itself as a safety-focused AI developer, but the existence of Mythos and its described capabilities are likely to intensify scrutiny of how frontier AI labs self-regulate.