Humans are still better at creating phishing emails than AI, for now

AI-generated phishing emails, including those created by ChatGPT, present a potential new threat to security professionals, says Hoxhunt.

Image: Gstudio/Adobe Stock

Amid all the hype around ChatGPT and other AI applications, cybercriminals have already started using AI to generate phishing emails. For now, human cybercriminals still have more skills to design successful phishing attacks, but the gap is closing, according to a new report from security trainer Hoxhunt released Wednesday.

Phishing campaigns created by ChatGPT against humans

Hoxhunt compared phishing campaigns generated by ChatGPT with those created by humans to determine which was more likely to fool an unsuspecting victim.

To conduct this experiment, the company sent 53,127 users in 100 countries phishing simulations designed by human social engineers or by ChatGPT. Users received the phishing simulation in their inboxes just like they would receive any type of email. The test was set up to trigger three possible responses:

  1. Success: The user successfully reports the phishing simulation as malicious via the Hoxhunt threat report button.
  2. Miss: The user does not interact with the phishing simulation.
  3. Failure: The user takes the bait and clicks on the malicious link in the email.

The results of the phishing simulation led by Hoxhunt

In the end, human-generated phishing emails caught more victims than those created by ChatGPT. Specifically, the failure rate for human-generated emails was 4.2%, versus 2.9% for AI-generated ones. By that measure, human social engineers outperformed ChatGPT by around 69%, according to Hoxhunt.

One positive result of the study is that security training can be effective in thwarting phishing attacks. Users with higher security awareness were much more likely to resist the temptation to engage with phishing emails, whether generated by humans or AI. The percentage of people who clicked on a malicious link in a message dropped from more than 14% among the least-skilled users to between 2% and 4% among the most-skilled.

SEE: Security awareness and training policy (TechRepublic Premium)

Results also varied by country:

  • US: 5.9% of surveyed users were fooled by human-generated emails, while 4.5% were fooled by AI-generated messages.
  • Germany: 2.3% were fooled by humans, while 1.9% were fooled by AI.
  • Sweden: 6.1% were fooled by humans, with 4.1% fooled by AI.

Current cybersecurity defenses can still counter AI phishing attacks

Although human-created phishing emails were more convincing than AI ones, this result could change, especially as ChatGPT and other AI models improve. The test itself was conducted before the release of GPT-4, which promises to be more capable than its predecessor. AI tools will undoubtedly evolve and pose a greater threat to organizations as cybercriminals use them for their own malicious purposes.

On the plus side, protecting your organization from phishing emails and other threats requires the same defenses and coordination, whether the attacks are created by humans or by AI.

“ChatGPT allows criminals to launch perfectly worded phishing campaigns at scale, and while that eliminates a key indicator of a phishing attack — bad grammar — other indicators are easily observable to the trained eye,” said Mika Aalto, CEO and co-founder of Hoxhunt. “Within your holistic cybersecurity strategy, make sure you focus on your people and their email behavior because that’s what our adversaries are doing with their new AI tools.

“Incorporate security as a shared responsibility across the organization with ongoing training that empowers users to spot suspicious messages and rewards them for reporting threats until human threat detection becomes a habit.”

Security tips for IT and users

To that end, Aalto offers the following advice.

For IT and security

  • Require two-factor authentication or multi-factor authentication for all employees accessing sensitive data.
  • Give all employees the skills and confidence to report suspicious email; this process must be continuous.
  • Give security teams the resources to analyze and address employee threat reports.

For users

  • Hover over any link in an email before clicking it. If the link looks out of place or irrelevant to the message, report the email as suspicious to IT support or your security team.
  • Check the sender field to make sure the email address contains a legitimate business domain. If the address points to Gmail, Hotmail, or another free service, the message is likely a phishing email.
  • Confirm a suspicious email with the sender before acting on it, using a method other than email to contact the sender about the message.
  • Think before you click. Social engineering phishing attacks attempt to create a false sense of urgency, prompting the recipient to click on a link or interact with the message as quickly as possible.
  • Pay attention to the tone and voice of an email. For now, AI-generated phishing emails are written formally and forcefully.
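The sender-domain check in the list above is simple enough to automate. As an illustrative sketch only (the function names and the free-provider list are assumptions for this example, not part of any Hoxhunt tooling):

```python
# Illustrative sketch: flag messages whose sender domain is a free mail
# service or does not match the company domain the message claims to be from.
# The provider list below is a small sample, not an exhaustive set.
FREE_MAIL_DOMAINS = {"gmail.com", "hotmail.com", "outlook.com", "yahoo.com"}

def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_suspicious(sender: str, claimed_company_domain: str) -> bool:
    """True if the sender uses a free mail provider or a domain that
    differs from the business domain the message claims to represent."""
    domain = sender_domain(sender)
    if domain in FREE_MAIL_DOMAINS:
        return True
    return domain != claimed_company_domain.lower()

print(looks_suspicious("billing@gmail.com", "example.com"))    # True
print(looks_suspicious("billing@example.com", "example.com"))  # False
```

A check like this only catches the crudest spoofs; lookalike domains and compromised legitimate accounts still require the human judgment the tips above describe.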

Read next: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)


James D. Brown