Photocopiers and Paraphrase Hallucinations

Jonathan Kressaty

Groupthink is the place to chat with VI, a GPT-powered AI that reads the internet and connects to your email, browser history, and Slack history to provide personalized responses. Invite your friends and family, collaborate on projects, and make use of Groups and Threads to organize conversations.


Welcome new (and returning) users! For those who may be seeing this email for the first time, we are the founders of Groupthink (formerly VI) and we send this daily email with our progress.

Ted Chiang on ChatGPT and Paraphrase Hallucinations

Science fiction writer Ted Chiang has a fantastic article in The New Yorker this week about ChatGPT and how it fits into a world of both chatbots and creative writers.

“What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t.”

A few disparate thoughts (from Jonathan) come to mind after sitting on this one for a day:

  • Describing LLMs as “lossy like a photocopier” is my favorite analogy lately. Copy machines are wonderful technology, until you hear “oh by the way, that 7 in your budget plan is actually a 3” and have to explain “readable but incorrect” to your boss.
  • There’s going to be an ongoing conversation about fidelity of thought. Creativity is going to be questioned in new ways. Attribution is a mess. But I think Ted’s original analogy to the Xerox debacle is the most tangible issue at the moment: at the end of the day, both ChatGPT and bad Xerox copies are just factually incorrect, which, unchecked, is at best super annoying and at worst can lead to catastrophic decision-making.
  • Considering Ted’s science fiction work, specifically Story of Your Life (which became the movie Arrival), in the context of an “AI” that can create human language is especially intriguing to me. In the novella and movie, we’re shown beings who can perceive all of time at once. GPT-3 and other similar models seem almost the opposite of this: they can only perceive time at a single moment, and to change this they need to be “retrained.” Time, in this case, is not a flat circle; it’s just a single point on a page.

I’m not a creative writer. I’m not even a good writer (these emails are fantastic evidence of this, and big thanks to Elle for fixing them with me). But Ted’s passage at the end really stuck with me:

“Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say.”

I feel this. When I write poorly, I’m frustrated because I’ve read enough fantastic writing to know when my writing is poor, and I want to do better. But I know I can edit! The idea is mine, and I know with time and effort I can improve its expression.

I provided the content of Ted Chiang’s piece and asked for a summary, and the model definitely didn’t get caught up on any of this. Instead, it said:

Paraphrase, indeed.