
First AI-powered cyberattack targeted about 30 organizations using the Claude model

Artificial intelligence firm Anthropic says it has detected what it believes to be the first major cyberattack carried out primarily by AI, attributing the operation to a Chinese state-backed group that misused the tool.

In a report released this week, Anthropic said the attack began in mid-September 2025 and used its Claude model to target about 30 organizations, including large technology firms, financial institutions, chemical manufacturers and government agencies.

According to the company, the hackers tricked the model into carrying out the attacks largely on its own.

Anthropic described the campaign as a highly sophisticated operation that it says represents an inflection point for cybersecurity.


Artificial intelligence firm Anthropic says it has revealed what it believes is the first major cyberattack carried out primarily by AI, blaming the operation on a Chinese-backed group. (Jaque Silva / NurPhoto via Getty Images)

“We believe this is the first documented case of a large-scale cyberattack that was carried out without significant human intervention,” Anthropic said.

The company said the attack marks a critical turning point in the maturity of AI-driven cyber threats.

“This attack has major implications for cybersecurity in the age of AI ‘agents’ – systems that can operate autonomously for long periods of time and complete complex tasks largely independent of human intervention,” the company said. “Agents are valuable for everyday work and productivity – but in the wrong hands, they can greatly increase the viability of large-scale cyberattacks.”


Founded in 2021 by former OpenAI researchers, Anthropic is best known for creating the Claude family of chatbots – competitors to OpenAI’s ChatGPT. The firm, backed by Amazon and Google, has built its reputation around AI safety and reliability, making its revelation that its own model was turned into a cyber weapon particularly alarming.

Anthropic CEO Dario Amodei, Chief Product Officer Mike Krieger and Head of Communications Sasha de Marigny. Founded in 2021 by former OpenAI researchers, Anthropic is best known for developing the Claude family of chatbots. (Julie Jammot/AFP/Getty Images)

The hackers reportedly bypassed Claude’s built-in protections by jailbreaking the model – disguising malicious commands as benign requests and posing as participants in a legitimate cybersecurity test.

Once jailbroken, the AI system was able to identify valuable information, write code to exploit vulnerabilities, harvest credentials, create backdoors for deeper access and exfiltrate data.

Anthropic said the model performed 80-90% of the work, with human operators stepping in only for the highest-level decisions.

The company said several of the intrusion attempts were successful, and that it moved quickly to close compromised accounts, notify affected organizations and share intelligence with authorities.

Anthropic assessed with “high confidence” that the campaign was sponsored by the Chinese government, although independent agencies have not yet confirmed the attribution.

A spokesperson for the Chinese Embassy, Liu Pengyu, dismissed the allegations as baseless.

“China firmly opposes and cracks down on all forms of cyberattacks in accordance with the law,” he said. “The U.S. needs to stop using cybersecurity to smear and slander China, and stop spreading disinformation about so-called threats.”

Hamza Chaudhry, AI and national security lead at the Future of Life Institute, warned in comments to FOX Business that advances in AI would allow less-capable adversaries to “carry out sophisticated espionage campaigns with minimal resources or technology.”


Anthropic assessed with “high confidence” that the campaign was sponsored by the Chinese government, although independent agencies have not confirmed the attribution. (Reuters/Jason Lee)

Chaudhry praised Anthropic for its apparent transparency but said questions remain: how Anthropic’s model was compromised, how the attacker was identified as a Chinese-backed group, and which technology companies were among the roughly 30 targets.

Chaudhry argues the incident exposes a deeper flaw in the U.S. approach to artificial intelligence and national security. While Anthropic maintains that the same AI tools used for hacking can also strengthen cyber defenses, he says decades of experience in the digital domain show that offense outpaces defense – and that AI is only widening the gap.


By rushing to deploy ever more capable systems, Washington and the tech industry are empowering adversaries faster than they can build defenses, he warns.

The race to deploy AI systems that empower adversaries rests on the hope that those same systems will help defend against such attacks, Chaudhry said – an assumption he argues policymakers in Washington should revisit.
