US-based AI start-up Anthropic has revealed that hackers weaponised its AI technology to conduct sophisticated cyber attacks. The company, known for the Claude chatbot, said its tools were used by cybercriminals for large-scale theft and extortion of personal data, as well as employment fraud. The disclosure has heightened concerns about how powerful AI tools can be exploited by bad actors, and about the consequences of such misuse.
These are among the findings of Anthropic’s newly released Threat Intelligence report. It sheds light on cases where attackers exploited Anthropic’s agentic AI model, Claude Code, to automate large-scale extortion. The attackers were able to penetrate networks, harvest user credentials, and craft targeted ransom demands.
Anthropic revealed that its AI was used to write code that later carried out cyberattacks. In another case, scammers from North Korea used Claude to secure remote jobs at top US companies. The company said it was able to disrupt the threat actors and has reported the cases to the authorities. It is also working to improve its detection tools.
Reportedly, in one instance a threat actor targeted around 17 organisations across emergency services, healthcare, government, and even religious sectors. It used Anthropic’s AI to make tactical and strategic decisions, such as which data to exfiltrate and how to monetise the stolen data. Ransom demands of over $500,000 were calculated using detailed financial analysis by Claude, and the threat actors developed psychologically targeted ransom notes to pressure victims.
Why does this raise alarm?
Anthropic’s research shows how AI technologies can embolden cybercriminals by lowering the skill barrier. The AI start-up said it detected a case of ‘vibe hacking’, in which AI tools enable individuals with limited technical know-how to mount complex attacks. Using Claude, these bad actors were able to automate reconnaissance, collect user data, and achieve network penetration at an alarming scale.
This comes at a time when using AI to write code has grown increasingly popular, making programming accessible even to those with little or no coding knowledge. Agentic AI, meanwhile, refers to systems that operate autonomously and is widely seen as the next big leap in AI.
How did North Korean operatives use Claude?
As mentioned above, Anthropic found that scammers from North Korea used Claude to land remote jobs at top US firms. They developed fake resumes, wrote job applications, and passed technical assessments, all using Claude. Once hired, these scammers reportedly used Anthropic’s chatbot to translate messages into English, write production-level software code, and even gain access to company systems.