AI-generated spam is on its way to your inbox and its secret weapon is personalization
Every day, email inboxes are choked with messages from Nigerian princes, drug dealers, and promoters of must-see investments. Improvements to spam filters only seem to inspire new techniques to break through the protections.
Now the arms race between spam blockers and spam senders is about to escalate with the emergence of a new weapon: generative artificial intelligence. With recent developments in AI, made famous by ChatGPT, spammers could have new tools to bypass filters, grab people’s attention and convince them to click, buy or give up personal information.
As director of the Advancing Human and Machine Reasoning lab at the University of South Florida, I investigate the intersection of artificial intelligence, natural language processing and human reasoning. I’ve studied how AI can learn people’s individual preferences, beliefs and personality traits.
That knowledge can be used to better understand how to interact with people, help them learn or give them useful suggestions. But it also means you should brace yourself for smarter spam that knows your weaknesses and can use them against you.
Spam, spam, spam
So what is spam?
Spam is defined as unsolicited commercial emails sent by an unknown entity. The term is sometimes extended to text messages, social media direct messages and fake product reviews. Spammers want to prod you into action: buying something, clicking on phishing links, installing malware or changing your views.
Spam is profitable. One email blast can bring in $1,000 in just a few hours and costs spammers only a few dollars – not counting the initial setup. An online pharmaceutical spam campaign can generate about $7,000 a day.
Legitimate advertisers also want to prompt you to take action – buying their products, completing their surveys, signing up for newsletters – but whereas an email from a marketer may link to an established company website and contain an unsubscribe option in accordance with federal regulations, a spam email may not.
Spammers also cannot access the mailing lists that users have subscribed to. Instead, they resort to counterintuitive strategies such as the “Nigerian prince” scam, in which a supposed Nigerian prince claims to need your help to unlock an absurd amount of money, promising a handsome reward in return. Savvy digital natives immediately dismiss such pleas, but the absurdity of the request may actually select for naivety or advanced age, filtering for the people most likely to fall for the scam.
However, advances in AI mean spammers may no longer have to rely on such hit-or-miss approaches. AI could allow them to target individuals and make their messages more persuasive based on easily accessible information, such as social media posts.
The future of spam
Chances are you’ve heard about the advances in generative large language models like ChatGPT. The task these generative LLMs perform is deceptively simple: given a string of text, predict which token – think of this as part of a word – will come next. Then predict which token will come next. And so on, over and over.
Somehow, when done with enough text and a large enough model, training on that task alone seems to imbue these models with the ability to perform surprisingly well on many other tasks.
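To make that loop concrete, here is a minimal sketch of greedy next-token prediction. It assumes the open-source Hugging Face transformers library and the small GPT-2 model purely as stand-ins; any generative LLM works the same way at this level.

```python
# Minimal sketch of next-token prediction with a small open model.
# Assumes: pip install torch transformers (GPT-2 here is a stand-in).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("Dear valued customer, your", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                      # predict 20 tokens, one at a time
        logits = model(input_ids).logits     # scores for every possible token
        next_id = logits[0, -1].argmax()     # pick the single most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real systems usually sample from the predicted distribution rather than always taking the top token, which is what makes their output varied rather than repetitive.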
Multiple applications of the technology have already emerged, demonstrating its ability to quickly adapt to and learn about individuals. For example, LLMs can write full emails in your writing style, given just a few examples of how you write – as sketched below. And there’s the classic example – now over a decade old – of Target figuring out that a customer was pregnant before her father knew.
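That style-imitation trick amounts to few-shot prompting. Here is a hedged sketch using the OpenAI Python client; the model name, the writing samples and the request are all invented placeholders, not anything from the research described here.

```python
# Sketch of few-shot style imitation via prompting.
# Assumes: pip install openai and an OPENAI_API_KEY in the environment.
# The model name and all example text below are placeholders.
from openai import OpenAI

client = OpenAI()

writing_samples = (
    "Sample 1: Hey team -- quick heads up, the demo slipped to Friday.\n"
    "Sample 2: Hey Sam -- loved the draft, two tiny nits inline. Ship it!"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Write emails matching the style of these samples:\n"
                    + writing_samples},
        {"role": "user",
         "content": "Ask a colleague to review the quarterly report."},
    ],
)
print(response.choices[0].message.content)
```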
Spammers and marketers alike would benefit from being able to predict more about individuals with less data. Given your LinkedIn page, a few posts, and a profile picture or two, LLM-armed spammers can make fairly accurate guesses about your political affiliations, marital status, or life priorities.
Our research showed that LLMs can predict the word a person will say next with a degree of accuracy that far surpasses other AI approaches, in a word-generation task known as the semantic fluency task. We have also shown that LLMs can take certain types of questions from tests of reasoning ability and predict how people will respond to them. This suggests that LLMs already have some knowledge of what typical human reasoning looks like.
If spammers get past the initial filters and get you to read an email, click a link or even strike up a conversation, their ability to apply customized persuasion is greatly increased. Here, too, LLMs can change the game. Early results suggest that LLMs can be used to argue persuasively on topics ranging from politics to public health policy.
Good for the goose
However, AI does not favor one side or the other. Spam filters can also take advantage of advances in AI, allowing them to erect new barriers against unwanted emails.
Spammers often try to trick filters with special characters, misspelled words or hidden text, relying on the human propensity to forgive minor text corruptions – for example, “c1îck h.ere n0w.” But as AI gets better at understanding spam messages, filters can get better at identifying and blocking unwanted spam – and perhaps even letting through wanted messages, such as marketing email you’ve explicitly opted in to. Imagine a filter that predicts whether you’d want to read an email before you ever read it.
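For a feel of the defender’s side, here is a toy filter: a naive Bayes classifier over character n-grams, one classic way to catch obfuscations like “c1îck.” It assumes scikit-learn, and the handful of training messages are invented; real filters learn from millions of examples and increasingly draw on LLM-derived features.

```python
# Toy spam filter: character n-gram features + naive Bayes.
# Assumes: pip install scikit-learn. All training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "c1îck h.ere n0w to claim your prize",            # spam
    "unl0ck millions today, handsome reward awaits",  # spam
    "meeting moved to 3pm, agenda attached",          # ham
    "your monthly newsletter from the hiking club",   # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Character n-grams survive misspellings and lookalike characters
# better than whole-word features do.
spam_filter = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
spam_filter.fit(messages, labels)

print(spam_filter.predict(["cl1ck here for your reward"]))  # likely ['spam']
```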
Despite growing concerns about AI – as evidenced by Tesla, SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and other technology leaders calling for a pause in AI development – much good can come from advances in the technology. AI can help us understand how weaknesses in human reasoning can be exploited by bad actors and devise ways to counter malicious activity.
All new technologies can be astonishing as well as dangerous. The difference lies in who creates and manages the tools, and how they are used.
This article has been updated to reflect that it was a teen’s father who learned from Target that his daughter was pregnant.
- John Licato, assistant professor of computer science and director of the AMHR Lab, University of South Florida
This article has been republished from The Conversation under a Creative Commons license. Read the original article.