Of all the emotions I expected from this week’s Google I/O keynote, fear was not on my shortlist. Google has spent a few decades making products that make my life easier. This week it showed just the opposite: a way to worsen communication with other people.
Google I/O this year focused on artificial intelligence, and one of the key features was prompt-based text generation, particularly in Google’s office suite. At various points during the keynote, Google demonstrated a system called “Help Me Write,” which rewrites simple prompts in Gmail and other apps as more polished paragraphs of text. But more polished doesn’t automatically mean better.
Good writing can surprise us with an intriguing turn of phrase. It can tell us something about how someone thinks and feels. Or it can provide some sort of helpful detail about the world. And there’s no reason why AI can’t play a role. I’ve seen authors use generative tools to create unexpected prose or summarize data in a readable way. I’m excited to see people discovering new uses for it.
But the ideas Google showed on stage were almost impressive in their ability to serve precisely none of writing’s core purposes. The Help Me Write examples calcify the worst tendencies of “professional” communication. They’re bland, bloated boilerplate: text that uses more words to say less than the human prompt behind it.
Let’s look at one of the examples. Early in the keynote, a presenter automatically generates a job description for a textile design position, which (in the demo’s scenario) they send out into the world. Later, another presenter responds to it with the following prompt:
interested! 8 years exp with international brands. Call Tuesday.
Help Me Write works its magic and yields the following:
Thank you for reaching out! I am very interested in the Senior Textile Designer position at Cymbal. I have 8 years of experience working with international brands, and I am confident I have the skills and experience you are looking for.

I am available for a call on Tuesday at your convenience. Please let me know what time works for you.
The latter draft is obviously how you’re supposed to apply for a job because, when written by a human, it signals that you’re invested enough to type out a multi-paragraph response and capable of deploying certain class and cultural signifiers. An AI-generated response serves neither purpose. It was produced almost instantly by a predictive text system that requires minimal English fluency, and so far that system can only manage fairly formulaic writing, so it won’t substitute for human effort in many real-world situations. All it does is produce a longer, more stilted version of the original prompt, one that probably only has value until everyone assumes it was written with AI.
And even worse, the AI’s output reinforces the idea that effusive American business-speak is the one required register for writing, whether or not fluency in it is a necessary skill for the job. I’ve seen thoughtful accounts of people with dyslexia using ChatGPT to produce text that is, as a Washington Post article put it, “unfailingly professional and courteous.” But there’s an unspoken, simpler alternative: being willing to accept greater variation in the way people communicate. I don’t begrudge any individual using AI writing to meet largely arbitrary standards, but on a societal level, it’s a linguistic arms race toward a duller future.
For another example, take this friendly congratulatory message from Google Android engineering VP Dave Burke to Devices and Services SVP Rick Osterloh:
rick congratulations on pixel fold launch awesome with android dave
Here’s what Help Me Write produces:
Congratulations on the successful launch of the Pixel Fold! I’m really impressed with the device and how well it integrates with Android. The folding screen is a game changer and I can’t wait to see what you do with it.
I assume this is tongue-in-cheek, since I’ve seen how executives send personal emails, and they seem perfectly comfortable with punctuation-free one-sentence missives. But even so, the result is worse! It tells me less about how the sender actually sounds. It takes longer to read. It makes the sender seem like an interchangeable robot.
For a sense of this process’s pragmatic value right now, look at an email from the very beginning of the keynote. The core of the generation is actually pretty cool: it takes a very simple prompt for an airline refund request and fills it out with concrete details from your inbox, like the flight number and date. But then it pads the facts with a few paragraphs of aggrieved prose. Here’s the original:
request a full refund for this canceled flight
And the AI-generated version:
Dear Customer Service,
I am writing to request a full refund for my canceled flight, DF 4542, on 4 February 2023. I understand that you are offering a voucher as a gesture of goodwill, but I would rather have the cost of my ticket refunded.
I have been a loyal customer of your airline for many years and have always been satisfied with your service. However, I am very disappointed with the way my recent flight was handled. I had not been notified of the cancellation in advance and had to make alternative arrangements at the last minute. This has caused me a lot of inconvenience and expense.
I believe a full refund is the only fair way to compensate me for the issues I’ve experienced. I have attached a copy of my original ticket for reference.
Thank you for your time and consideration.
There’s no evidence that these aggrieved claims are true. It’s a bureaucratic deimatic display: convincing some hapless customer service rep that you’re angry enough to type out all that text and will probably keep being annoying until you get your money back. I’ve seen this idea pitched better on Tumblr, of all places. In a popular post about AI, someone described using ChatGPT as a kind of high-tech version of Brazil’s Form 27B/6 gambit, generating a menacing fake legal letter to a landlord who was breaking housing laws. As a fellow long-time tenant, I applaud them.
But this stuff is only effective during a short transition period, while generative text isn’t yet in widespread use and readers still assume it came from a human writer. Once you know it’s a machine, the illusion disappears. You’re left with a world of communication that’s longer, less thoughtfully crafted, and more mind-numbing to read. I’d rather hire someone based on an honest “8 years exp” than a cover letter full of bland automated prose.
By contrast, Google’s most useful Help Me Write example was one that simply conveyed information. In an email about a potluck, the AI could look at a document listing the dishes people had signed up to bring, then summarize that list as a line in the email. It saves the writer the hassle of transcribing a bunch of entries and saves readers the inconvenience of clicking through to another tab. Most importantly, its value doesn’t depend on pretending a human wrote it. Although if Google has its way, that’s a trick that won’t last long anyway.