News of a leaked Anthropic AI model rattled the cybersecurity industry, sending the stocks of major firms sharply lower. What initially looked like a potential game changer now raises an urgent question: can organizations trust AI with their most sensitive digital assets, or does this incident simply reinforce the need for expert protection?
According to Mint, a leaked draft blog post introduced a new tier of AI models called Capybara. The draft claimed that Capybara outperformed Anthropic’s flagship model, Claude Opus 4.6, in “software coding, academic reasoning, and cybersecurity-related tasks.” It further noted that training on Claude Mythos—a model Anthropic describes as their most advanced yet—has been completed.
Why Did It Leak?
While Anthropic attributed the leak to “human error,” the explanation may do little to reassure organizations about the company’s ability to safeguard sensitive data. Some analysts speculate that there could have been other motives at play.
“The leak of Capybara is unfortunate but I almost wonder if it was intentionally left in an accessible data lake to highlight some of the emerging cyber risks that continually evolving AI platforms pose and will pose,” said Tracy Goldberg, Director of Cybersecurity at Javelin Strategy & Research. “All of that said, the model is still in testing, with Anthropic clearly stating that it is aware of bugs and risks that need to be addressed, which is why Anthropic has only soft-launched Capybara.”
The Looming Threat of AI
Anthropic also highlighted the cybersecurity risks tied to these models, emphasizing the escalating AI arms race between defenders and cybercriminals. The company cautioned that Capybara could be the first in a series of models capable of identifying and exploiting vulnerabilities far faster than security teams can respond. In other words, criminals could leverage the model to fuel a new generation of AI-driven cybersecurity threats.
Investors reacted swiftly, driving shares of CrowdStrike, Datadog, and Zscaler down more than 10% in early trading.
“The tanking of tech stocks in the wake of news about the Capybara leak really just highlights the lack of understanding investors have about AI overall,” Goldberg said. “We know these models will continue to adapt, and will do so at a pace faster than industry security measures can respond. This is why governance around AI is so critical.”
The post Pumping the Brakes on Anthropic’s Leaked Cybersecurity AI appeared first on PaymentsJournal.