In early January, a short story titled “The Last Hope” landed on Sheila Williams’ desk. Williams, the editor of Asimov’s science fiction magazine, read the story and passed on it.
At first she didn’t think much about it; she reads and responds to writers daily as part of her job, receiving between 700 and 750 stories per month. But when another story, also titled “The Last Hope,” came in a few weeks later from a writer with a different name, Williams became suspicious. By the time yet another “The Last Hope” arrived a few days later, Williams knew she had a problem.
“That’s just the tip of the iceberg,” says Williams.
Since that first submission, Williams has received more than 20 short stories, all titled “The Last Hope,” each from a different author name and email address. Williams believes they were all generated using artificial intelligence tools, along with hundreds of other similar entries that have overwhelmed small publishers in recent months.
Asimov’s received about 900 stories for consideration in January and is on track to get 1,000 this month. Williams says almost all of the increase can be attributed to pieces that appear to be AI-generated, and she’s read so many that she can now often tell from the first few words whether something wasn’t written by a human.
Sometimes they didn’t even bother to replace “[name]” with their own
In addition to repeated titles, certain character names crop up again and again, says Williams. Sometimes the title on the manuscript differs from the one entered in the online form. Author names often read like portmanteaus of first and last names. In optional cover letters, some authors include instructions on how to transfer money for a story that hasn’t even been accepted. Sometimes the submitter hasn’t bothered to replace “[name]” with their own.
Using ChatGPT, The Verge was able to replicate some elements of the submissions Williams has seen. A prompt to write a short sci-fi story, plus information copied and pasted from Asimov’s submission guidelines, produced stories with dozens of similar titles in a row, such as “The Last Echo,” “The Last Message,” “The Last Day of Autumn,” and “The Last Voyager.”
Williams and her team have learned to recognize AI-generated works, but the influx of submissions has been frustrating nonetheless. The AI-generated pieces overwhelm outlets such as Asimov’s, take up editors’ and readers’ time, and potentially crowd out genuine submissions from newer writers. And the problem will likely only get worse as the growing availability of writing bots creates a new genre of get-rich-quick schemes, with open-submission literary magazines finding themselves on the receiving end of a new wave of spammy submissions trying to game the system.
“I just go through them as fast as I can,” Williams says of the pieces she suspects are AI-generated. “It takes the same amount of time to download, open and view a submission. And I prefer to spend that time on the legitimate entries.”
For some editors, the influx of AI-generated submissions has forced them to stop accepting new work.
Clarke believes the submissions come from side hustle influencers and websites
Last week, the popular science fiction magazine Clarkesworld announced it would temporarily close submissions due to a deluge of AI-generated work. In an earlier blog post, editor Neil Clarke noted that the magazine had been forced to ban a skyrocketing number of authors for submitting stories generated using automated tools. In February alone, Clarkesworld received 700 human-written entries and 500 machine-generated stories, says Clarke.
Clarke believes the spam submissions come from people looking to make a quick buck who found Clarkesworld and other publications via influencers and websites. One website, for example, is filled with SEO-bait articles and keywords related to marketing, writing, and business, promising to help readers make money fast. An article on the site lists nearly two dozen literary magazines and websites, including Clarkesworld and Asimov’s as well as larger outlets such as the BBC, with payment rates and submission details. The article encourages readers to use AI tools and includes affiliate marketing links to Jasper, an AI writing tool.
Most publications pay modest per-word rates, around 8 to 10 cents, while others pay a flat fee of up to a few hundred dollars for accepted pieces. In his blog, Clarke wrote that a “high percentage of fraudulent entries” came from certain regions, but he declined to name them, concerned that doing so might paint writers from those countries as scammers.
But the ability to get paid is a factor: in some cases, Clarke has corresponded with people banned for submitting AI-generated work who said they need the money. Another editor told The Verge that even before the AI-generated stories, they would get submissions and emails from writers in countries where the cost of living is lower and an $80 payment for a published story goes much further than it does in the US.
Clarke, who built the submission system his magazine uses, described the efforts of the AI story spammers as “inelegant”: by comparing notes with other editors, Clarke could see that the same work was being submitted to multiple publications from the same IP address, sent only minutes apart, often in the order the magazines appear on the lists.
“If these were people from inside the [science fiction and fantasy] community, they would know it wouldn’t work. It would be immediately obvious to them that they can’t do this and expect it to work,” says Clarke.
The problem extends beyond science fiction and fantasy publications. Flash Fiction Online accepts a range of genres, including horror and literary fiction. On February 14th, the outlet added a message to its submission form: “We are committed to publishing stories written and edited by people. We reserve the right to reject any submission that we believe was primarily generated or created by language modeling software, ChatGPT, chatbots, or any other AI apps, bots, or software.”
The updated terms were added around the time FFO received more than 30 submissions from one source in a matter of days, says Anna Yeatts, publisher and co-editor-in-chief. Each story touched on clichés Yeatts had seen in AI-generated work, and each came with a cover letter structured and written unlike what the publication usually sees. But Yeatts and her colleagues had suspected since January that some of the works they received were created using AI tools.
Yeatts had been playing with ChatGPT since December, prompting the tool to produce stories in specific genres or styles, such as gothic romance. The system was able to replicate the technical elements, including establishing main characters and setting and introducing conflicts, but it failed to produce a “deep point of view.” Endings were too neat and tidy, and emotions often spilled over into melodrama. Everyone has “piercing green eyes,” and stories often begin with seated characters. Of the more than 1,000 works FFO has received this year, Yeatts estimates that about 5 percent were likely AI-generated.
“We put that creepy little warning [on the submissions page],” Yeatts says. Enforcing it, however, can prove challenging.
In the past, FFO has published mainstream work with a more conventional writing style and a voice accessible to a variety of reading levels, which is why, Yeatts says, stories generated using AI tools can meet the basic requirements.
“It has all the parts of the story that you try to look for. It has a beginning, middle, and end. It has a resolution, characters. The grammar is good,” says Yeatts. The FFO team is working to train staff readers to look for certain story elements when they make a first pass at submissions.
“We really don’t have good solutions.”
Yeatts is concerned that a growing wave of AI-generated work could crowd out human-written work entirely. The outlet uses Submittable, a popular submission service, and FFO’s plan has a monthly cap on stories, after which the portal closes. If hundreds of people submit ineligible AI-generated work, it could prevent human authors from submitting their stories.
Yeatts isn’t sure what the magazine can do to stop the stories from coming. Upgrading the Submittable plan would be costly for FFO, which operates “on a shoestring budget,” says Yeatts.
“We’ve talked about soliciting stories from authors, but that doesn’t really feel true to who we are as a publication, because that would shut out new writers,” says Yeatts. “We really don’t have good solutions.”
Others in the community are watching the problem sweep over other publishers and thinking about how to respond before it spreads further. Matthew Kressel, a sci-fi writer and the creator of Moksha, an online submission system used by dozens of publications, says he’s started to hear from clients receiving spammy submissions that appear to have been written using AI tools.
Kressel says he wants to keep Moksha “agnostic” when it comes to the value of entries generated using chatbots. Publishers could add a checkbox asking writers to confirm that their work doesn’t use AI systems, Kressel says, and he’s considering adding an option that lets publications block or partially restrict submissions made with AI tools.
“Allowing authors to self-assess whether the work is AI-generated is a good first step,” Kressel told The Verge via email. “It creates more transparency in the whole thing, because there are a lot of uncertainties at the moment.”
For Williams, the editor of Asimov’s, being forced to spend her time sifting through the AI-generated junk pile is frustrating. But even more worrying is that legitimate new authors might see what’s happening and assume editors will never make it to their manuscripts.
“I don’t want writers to worry that I’m going to miss their work because I’m inundated with junk,” says Williams. The good stories are apparent very early on. “The mind that invents the interesting story is in no danger whatsoever.”