AI-Assisted Hacking Campaigns Increasing Worldwide, Says Microsoft

Microsoft warns that hackers are using generative AI and large language models to launch advanced cyberattacks, craft phishing emails, develop malware, and build fake websites, raising serious global cybersecurity concerns.


Microsoft has issued a warning that hackers are increasingly adopting artificial intelligence (AI) to carry out attacks more swiftly and efficiently. According to a new Microsoft Threat Intelligence report, hackers use AI tools throughout the cyberattack lifecycle. These tools enable attackers to automate tasks, scale their operations, and lower the technical skill required to launch sophisticated digital attacks.

According to the report, attackers use generative AI and large language models (LLMs) to support various stages of a cyberattack. These systems can assist in creating phishing emails, malicious scripts, malware code, and fraudulent online content. Hackers also use AI to translate messages and analyze stolen data, enabling them to run worldwide cybercrime campaigns with less effort.

One of the most common criminal applications of AI is crafting convincing phishing emails. AI can produce polished, professional-looking messages that are harder for users to recognize as scams. Attackers can also use AI tools to write and debug malware code and to generate scripts that help them set up attack infrastructure.

The report also notes that several hacker groups, including Jasper Sleet and Coral Sleet, employ AI in their operations. These groups are allegedly involved in fake remote worker schemes in which attackers pose as legitimate job candidates. AI enables them to create convincing identities, resumes, and email correspondence in order to gain access to corporations and their internal systems.

Researchers found that hackers use AI to quickly generate fake websites and online infrastructure. These fraudulent platforms can host phishing pages or support other intrusions. In some cases, attackers attempt to bypass AI safety constraints through techniques known as AI jailbreaking. Microsoft also observed early experiments with autonomous AI systems that can complete specific tasks on their own. However, the company says most attacks still depend on human decision-making, with AI primarily serving to assist hackers.

Cybersecurity experts say this growing use of AI makes cyber threats more sophisticated and harder to detect. To stay safe, organizations should monitor for suspicious login activity, strengthen identity protection systems, and improve phishing defenses. As AI technology continues to advance, businesses must bolster their cybersecurity strategies to stay ahead of modern digital threats.

This article is based on information from The 420