
Anthropic said on Wednesday that it had detected and blocked hackers attempting to misuse its Claude AI system to write phishing emails, create malicious code and circumvent safety filters.
The company's findings, published in a report, highlight growing concerns that AI tools are being exploited for cybercrime, intensifying calls for tech companies and regulators to strengthen safeguards as the technology spreads.
Anthropic said its internal systems stopped the attacks and that it is sharing case studies, which show how attackers tried to use Claude to produce harmful material, to help others understand the risks.
The report cited attempts to use Claude to draft tailored phishing emails, write or fix snippets of malicious code, and sidestep safety guardrails through repeated prompting.
It also described efforts to orchestrate influence campaigns by generating persuasive posts and to help low-skilled hackers with step-by-step instructions.
The Amazon.com- and Alphabet-backed company did not release technical indicators such as IP addresses, but said it had banned the accounts involved and tightened its filters after detecting the activity.
Experts say criminals are increasingly turning to AI to make scams more convincing and to speed up hacking efforts. The tools can help write realistic phishing messages, automate parts of malware development and even potentially assist in planning attacks.
Security researchers have warned that as AI models become more powerful, the risk of misuse will grow unless companies and governments act quickly.
Anthropic said it follows strict safety practices, including regular testing and external reviews, and plans to continue publishing reports when it identifies major threats.
Microsoft- and SoftBank-backed OpenAI and Google have faced similar scrutiny over whether their AI models could be exploited for hacking or scams, prompting calls for stronger safeguards.
Governments are also moving to regulate the technology, with the European Union advancing its Artificial Intelligence Act and the United States pressing major developers for voluntary safety commitments.
(Reporting by Akash Sarram in Bangalore; Editing by Puja Desai)
