
Artificial intelligence (AI) is having a profound impact on society. Since the introduction of ChatGPT in November 2022, anyone can harness this technology to write resumes or term papers, create songs and artwork, or plan meals and personalized workouts.

Fraudsters are already using this supercomputing power to scam people. For a small monthly fee, criminals can use two AI-based tools, WormGPT and FraudGPT, to create scam emails and text messages, and write malicious code.

“I would expect that as AI continues to improve, the ability of criminals [to use it] will continue to increase as well,” said digital security consultant Brett Johnson, a convicted cybercriminal who was once on the Secret Service’s 10 Most Wanted List.

While it’s impossible to know if criminals will use AI to develop new scams, it will enable them to “fine-tune their attacks and scale up the volume,” Johnson told Checkbook. AI will make it easier for “unsophisticated criminals,” and handle a lot of the heavy lifting for “the more experienced criminals,” Johnson warned. It makes cybercrime “much more scalable, much easier to commit, and much more profitable.”

At the recent DEF CON hacker conference in Las Vegas (where the “good guys” meet), experts from Sophos, an international digital security company, demonstrated how AI could power an entire scam ecosystem: creating the fake website used in an attack, building the phony LinkedIn profile that makes the criminals running the scam appear legitimate, and even helping collect victims’ credit card information.

“The idea is you can kind of generate an entire scam environment without really having any skill or time invested to do so,” said Chester Wisniewski, global field chief technology officer at Sophos. “In a world where we already struggle with what is authentic, this is not a great thing.”

AI will also help criminals write bogus texts and email messages without the typos or awkward wording that used to give them away. So the old advice to look for those warning signs no longer applies, Wisniewski warned.

Is the Threat from AI Being Over-Hyped?

The news is filled with stories about how scammers are using AI to fake children’s voices as part of the old grandparent scam. According to these reports, the crooks sample the voice of a child (maybe from social media posts), call their grandparents, and have the computer-generated voice plead for money to pay kidnappers or post bail to get out of jail.

We’ve all seen broadcast news stories where reporters take samples of their voices to a digital security company, which can then make a deepfake voice say anything. It’s creepy, but are scammers actually doing this?

“The truth of the matter is that [AI is] not really advanced enough for fraudsters to be using it en masse, especially if we’re talking about deepfakes [in] real time. Real-time is not even out there right now.”

The grandparent scam has been around for more than 15 years—I first reported on it for NBC News in 2008. Scammers have been very successful at stealing millions of dollars without using AI to clone a child’s voice.

In many cases, the criminals perpetrating the grandparent scam “don’t even know the name of the grandchild and they don’t have access to a sample of the voice,” said Lorrie Faith Cranor, director of the CyLab Security and Privacy Institute at Carnegie Mellon University. “In order to train the AI, you need to find out who the grandchild is, get a sample of their voice, and then you can train the AI to do that. I don’t think they go that far.”

Johnson, the former cyberthief, agreed: This is a simple con that doesn’t require deepfake voices.

“The goal is to get the potential victim to act out of emotion, not out of reason or logic,” Johnson explained. “So, if you call at two in the morning, you’re acting like that grandchild that [they] barely get to speak to three or four times a year, and you're trying to scare them to give that knee-jerk reaction, you don’t need a deepfake voice to make that successful.”

That’s not to say criminals won’t use deepfake voices in the future, Wisniewski said. But right now, he doesn’t believe it’s happening.

How to Protect Yourself

The AI available today allows criminals to accelerate and streamline what they’re already doing. That doesn’t mean the scams will be any more sophisticated, at least for now.

Cranor, who served as chief technologist at the Federal Trade Commission before her work at CyLab, worries that criminals may find new ways of using AI to compose scams.

“AI could run through all the possible combinations and maybe hit on some combinations that we just hadn’t thought of that turn out to be really effective,” she said.

But for now, the rules for protecting yourself against fraudsters remain the same, she said: “Be skeptical and think before you click, before you text, definitely before you send money, and be skeptical of what you read and think about it. Is this plausible in this situation? And don't rush to give away personal information, passwords, money, bank account numbers, anything like that.”

Cranor believes the “good guys” can use AI to fight fraud. It could help developers build software for computers and mobile devices that does a better job of detecting and blocking potential scams.

More from Checkbook: Identity and Cyber Theft: How to Protect Yourself

Contributing editor Herb Weisbaum (“The ConsumerMan”) is an Emmy Award-winning broadcaster and one of America’s top consumer experts. He has been protecting consumers for more than 40 years, having covered the consumer beat for CBS News, The Today Show, and NBCNews.com. You can also find him on Facebook, Twitter, and at ConsumerMan.com.