This post on Writer Unboxed caught my eye: On Frigates and ChatGPT
This post starts with a poem by Emily Dickinson:
There is no Frigate like a Book
To take us Lands away
Nor any Coursers like a Page
Of prancing Poetry –
This Traverse may the poorest take
Without oppress of Toll –
How frugal is the Chariot
That bears the Human Soul –
I’ve always particularly liked Emily Dickinson, and this poem is so appropriate when thinking about fiction. That’s exactly what fiction is supposed to do: take us lands away. Also, just offhand, I would agree that this is exactly what fake fiction generated by AI programs has no hope of doing. I presume that’s what the linked post is going to argue, and I don’t think it needs argument. I think it’s plainly impossible for a non-conscious text generator to create fiction that is thematically coherent and meaningful, never mind coherent in terms of plot and characterization.
Let’s see what the post is actually saying, however:
How do you see the world? This is the tacit question we ask an author each time we slip into a story. Reading is the closest we can be to understanding another person’s way of seeing. We are all trapped within our own consciousness. We peer through frames we are often unaware of such as culture and language, family and upbringing. Even the physiology of our eyes plays a role in determining how near or far we can see, as well as the colors and shapes of the world around us.
Yes, well, differences in sensory experience are one thing, and differences in the experience of imagination are another, and those differences can be important and interesting. But surely what we mean here by “How do you see the world?” is “How do you see the world philosophically?” Or even, “How do you see the world morally?”
More than that, fiction also asks, “How would this other person see the world? This person who is not me and also not exactly the author? What would it be like to see the world this way?”
All those questions are obviously meaningless with fake fiction generated by non-conscious text generators. That’s why “artificial intelligence” is a misnomer, and one that I think may not be harmless. The things aren’t any more conscious than a tree, far less a dog. They generate the appearance of intelligence without the least trace of actual intelligence. It’s really odd to think about.
I haven’t yet seen student essays that I can easily tell are ChatGPT-generated, but I really am not concerned that text generators will be able to create coherent stories for a good while yet. I wonder if I could be wrong? I don’t think I can be, at the level of a novel. There’s just too much in a novel. Maybe a really short story where everything is a clever plot twist and there’s no depth? I’d be interested in seeing some of the fake stories being received by short story publishers now.
5 thoughts on “Frigates and ChatGPT”
The biggest issue with this particular flavor of “AI” is that it always gives an answer, and it never indicates uncertainty or that it doesn’t know. It regularly gives authoritative sounding answers that are totally invented.
And it gives plausible but completely fake references when asked to support a statement, too. I have a friend who works at a law school, and they're coming to terms with a new reality: any lawyer is going to have to check every single reference going forward (or, perhaps, have paralegals or students do it). That was always best practice, but it's pretty much becoming mandatory.
On the fiction end, I wonder how AI will do at producing really, really formulaic stories. Or, in a completely different area, short mood pieces that aren’t supposed to have a plot.
Worst of all — or at least in the running for “worst of all” — are the fake medical citations supporting fake medical statements. I am just about certain that many young doctors will not be as skeptical of confident but erroneous statements from AI diagnostics as they should be.
These are real issues. It's kind of funny in a way. These chat programs are doing exactly what they're programmed to do: given a prompt, they spit out the kinds of language one would expect to see in response to that conversational prompt, based on a huge database of text. It does the thing. But as humans we read into it all sorts of stuff that isn't there. If we ask it for an apology, the program looks at all the instances in its database of requests for apologies and the sorts of language that come after them, which is generally a slew of apologies, so that's what it spits back out. If you ask for a citation, you'll get something formatted appropriately and made up out of commonly seen words, which we will then see as plausible, precisely because it looks a lot like what we normally see! But it's not a researcher or an analyst, and there's no actual judgment behind the scenes; it's just words strung together that look like the words one would normally see associated with the prompt. I do worry about all this leaking into the web generally, not to mention intentional misuse. My daughter just told me that her professor changed how homework assignments are being handled because she has concluded some students are using ChatGPT to generate their homework.
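The mechanism this comment describes, stringing together words that commonly follow what came before, can be sketched with a toy bigram (Markov) model. This is only an illustration of the "likely continuation" idea, not how ChatGPT actually works: real systems use large neural networks over tokens, and the tiny corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

def build_bigrams(corpus):
    """Map each word to the list of words observed right after it."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, seed=0):
    """Emit words by repeatedly sampling an observed continuation.

    The output is fluent-looking pastiche of the training text,
    with no judgment or fact-checking behind it.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:  # dead end: this word never had a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A corpus full of apologies yields apology-shaped output,
# just as the comment describes.
corpus = ("i am sorry for the error i am sorry for the delay "
          "i apologize for the error")
table = build_bigrams(corpus)
print(generate(table, "i"))
```

Every word the sampler emits was seen following the previous word somewhere in the corpus, which is exactly why the result looks plausible while meaning nothing.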
Allan, first, if I were a teacher, I’d be using one of the ChatGPT detectors.
But second, if I were a teacher, I’d be heading straight for assignments done in class, in front of me, on paper, with a pen. There is no other way to be sure whether the student is actually doing the assignment herself or whether someone else is doing it for her. Ditto for tests. In class, with a pen, on paper, in front of me. No online tests, period.
Too often I see teachers cheating on behalf of the students: Making the test open book, take home, online, untimed. That is not a test. It’s just called a test.
Handing out the test ahead of time as a study guide. I mean the actual test, the real thing, the exact test that will be given as a test two days later. I’m not kidding. I know of previously rigorous classes where the instructors are now doing exactly that.
The analysis I’m doing right now indicates that students are passing their classes at massively increased rates since 2020. Well, no kidding. Of course they are. They are no longer required to master any of the material in order to pass the class. I wish I were exaggerating. I am not exaggerating.
If the instructors start handing out a link to ChatGPT and calling that a helpful study aid for students, that will not surprise me one bit.