How ChatGPT and Bard Are Making Phishing Emails Difficult To Spot


It’s Monday morning. You’re sipping your coffee, scrolling through your emails. One catches your eye – “Urgent! Account Compromised!” Panic sets in. You click the link, desperate to fix the situation. Unfortunately, you’ve just walked straight into a phishing trap.

This isn’t your 2010 phishing scam, though. Previously, such scams were easy to spot – filled with obvious grammar errors and suspicious email addresses. However, recent advancements in artificial intelligence (AI) technology, specifically with the advent of OpenAI’s ChatGPT and Google’s Bard, are ushering in a new era of complexity for phishing scams.

These AI programs can craft intricate, highly believable phishing emails that are incredibly difficult to distinguish from the real ones.

Telltale Signs of a Phishing Scam

While there are many ways to spot a phishing scam, one in particular stands out, and this is where ChatGPT and Bard are causing the most disruption. Old-school phishing scams are notorious for their poor English. They’re riddled with awkward phrasing and spelling mistakes.

The thing about grammar and awkward language is that you don’t need to be a language expert for the mistakes to jump out at you. For example, if you stop a native English speaker on the street, they may struggle to give you a coherent definition of subject-verb agreement or correct preposition use. Still, they could tell you that the phrases “Your account information has been changes” or “we detect suspicious activity to your account” are incorrect English. And these are the kinds of phrases we see all the time in phishing emails.

Additionally, phishing emails often misuse professional jargon, awkwardly stuffing their content with buzzwords. An example might read, “Your account is potentially risk. Act fast to enact solutions.” The phrase ‘potentially risk’ is nonsensical, and ‘enact solutions’ is an odd, stilted use of business language.

Moreover, phishing emails often overuse formal language, creating an odd, out-of-place tone. This usually stems from a misconception on the scammer’s part: they believe that excessively formal language makes their emails come across as more official or authoritative. But most professional correspondence today, especially customer-focused communication, is far more casual and conversational. For example, the British government’s official policy is to use “plain English” on all GOV.UK websites. The idea is that all communication should be easy for everyone to understand.

Scammers, by contrast, are more likely to use phrases like “Dear Sir/Madam, we hereby inform you that your account has been suspended due to suspicious activities.” They often reach for antiquated or overly formal words like ‘herewith’, ‘hereby’, ‘forthwith’, ‘wherein’, and ‘thereunder’. Phrases such as “You are kindly requested,” “Failure to comply will result in dire consequences,” or “Your immediate attention to this matter is highly necessitated” are other typical examples.

This contrast comes down to natural language use. Real businesses use language that resonates with their customers, aiming to be clear, approachable, and helpful. Phishing scams, on the other hand, often miss the mark, mimicking formality in a bid for credibility but ending up sounding unnatural instead.

And the list goes on. Here are some other common examples of language errors in phishing scams:

  • Random capitalisation like “Attention: suspicious Activity detected in Your account.”
  • Generic greetings like “Dear Valued Customer” or “Dear Account Holder” instead of using your actual name.
  • Excessive use of exclamation marks to create a sense of urgency, e.g. “Immediate action required!!!”
  • Verb errors like “you account has been compromise” instead of “your account has been compromised”.

How Scammers Are Using ChatGPT and Bard To Craft Phishing Emails

Here’s the bottom line. Spotting poor grammar and awkward language is no longer a reliable way of determining whether you’re dealing with a phishing scam. Scammers use ChatGPT and Bard to craft grammatically flawless, highly convincing emails that mirror the style of legitimate correspondence. And worryingly, these emails are remarkably easy to generate, escalating the scale and potential impact of phishing scams.

Here are some specific ways scammers leverage these tools:

  1. Tailoring to individual targets: Advanced AI tools can generate messages that seem personalised and relevant to the recipient, significantly increasing the chance of the scam’s success. For instance, an AI could craft a message that looks like an official communication from a bank, a social network, or even a workplace colleague.
  2. Mimicking legitimate company communication: AI can mimic the tone, style, and language of actual companies. A phishing email might, for example, closely resemble the type of customer communication that a well-known online retailer would send.
  3. Evading spam filters: Traditional spam filters identify common red flags in scam emails, such as specific phrases or poor grammar. However, as AI-generated phishing emails become more sophisticated, they are increasingly likely to slip through these filters undetected.

What You Should Look Out For

While ChatGPT and Bard are making phishing emails harder to spot, there are still ways to identify these cyber threats. Here are some tactics to help you spot a phishing email, even in the age of advanced artificial intelligence:

  1. Email address scrutiny: Even with AI, phishing emails often come from an email address slightly different from the legitimate one. Double-check the sender’s address for any subtle inconsistencies.
  2. Inconsistent branding: AI might mimic company branding, but minor inconsistencies like logo quality, font type, or colour scheme can be giveaways.
  3. Urgent or threatening language: Phishing emails often create a sense of urgency or threat to manipulate the recipient into acting without thinking.
  4. Link and URL verification: Hover over any links without clicking to view the URL. Phishers may use a technique called “typosquatting” (registering domain names that look almost identical to the brands they’re impersonating); a simple illustration of spotting these near-miss domains follows this list.
  5. Unsolicited attachments: Be cautious of unexpected email attachments, especially if they have extensions like .exe, .zip, or .pdf. They could contain malware.
  6. Requests for personal information: Legitimate companies usually won’t ask for personal information via email. Be wary of any email asking for passwords, social security numbers, or bank account numbers.
  7. Email body in image form: If the email content is primarily in image form, it might be a phishing attempt trying to bypass email filters.
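
For readers comfortable with a little code, here is a minimal sketch of the kind of check behind points 1 and 4: it compares the domain in a link against a short list of brands and flags near-misses that could be typosquatting. It is an illustration only; the brand list, the 0.8 similarity threshold, and the flag_suspicious_link function are hypothetical examples, not a real email security filter.

    # Illustrative sketch only: flags link domains that look almost, but not
    # quite, like a known brand's real domain (the "typosquatting" pattern).
    from difflib import SequenceMatcher
    from urllib.parse import urlparse
    from typing import Optional

    # Hypothetical list of brands that legitimately email you.
    KNOWN_DOMAINS = ["paypal.com", "amazon.co.uk", "hsbc.co.uk"]

    def flag_suspicious_link(url: str, threshold: float = 0.8) -> Optional[str]:
        """Warn if a link's domain is a near-miss of a known brand."""
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        for real in KNOWN_DOMAINS:
            if domain == real or domain.endswith("." + real):
                return None  # exact match or a genuine subdomain
            similarity = SequenceMatcher(None, domain, real).ratio()
            if similarity >= threshold:
                return f"'{domain}' looks suspiciously like '{real}' (similarity {similarity:.2f})"
        return None

    if __name__ == "__main__":
        for link in ("https://www.paypa1.com/login", "https://www.paypal.com/login"):
            print(link, "->", flag_suspicious_link(link) or "no obvious typosquatting")

Real mail filters do far more than this, but even a simple similarity check shows why hovering over links and reading the domain carefully is still worthwhile.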

Final Thoughts

Artificial intelligence is a tool, and like all tools, it can be used for good or bad. Unfortunately, scammers are using it for nefarious purposes, but that doesn’t mean we’re powerless in this game of cat and mouse. Instead, we must stay vigilant about how phishing emails are evolving and look for the new hallmarks of these scams in the age of AI.

Try our free phishing test to see how you would perform against a phishing scam, or get in contact with our experts to learn more.