Captain Jean-Luc Picard from «Star Trek: The Next Generation» (1987-1994, Paramount)
The user believed that ChatGPT had written him a 700-page, 500 MB children’s book. Reddit erupted in laughter when he asked how to download it.
On Reddit, user Emotional_Stranger_5 said he had worked with ChatGPT for 16 days on an illustrated children’s book intended as a gift for a group of local children. Believing that the chatbot had been quietly working on the book all along, he turned to the OpenAI subreddit to ask how to download the finished document (the original post was later removed by its author, but the comments still exist).
The most astute readers have already guessed that the chatbot did not actually write any book; it only talked about writing one, assuring the hapless author that it was creating a 700-page volume. The eager, helpful tone typical of large language models does not reflect actual intent or long-term memory. Apart from the dialogues about the book, there was simply no result.
Emotional_Stranger_5 clarified that he did not ask ChatGPT to write the book from scratch: he had personally spent two and a half months adapting stories from Indian mythology and wanted the AI to polish the style and create illustrations. But he offered this clarification only after Reddit exploded with comments, which now number almost 300.
One of the users nonetheless tried to give the poor guy a detailed explanation:
«It’s an acting game. I think your initial prompt was «help me put together a book», and it genuinely agreed. Then, when it comes to downloading, it tells you: «Okay, it’s compiled and ready to download». But it’s not — it’s essentially role-playing with you. It hasn’t actually compiled the book for you; that’s not (yet) within its capabilities. Your best bet is to collect all the text and images: copy the text, paste it into a Google Doc or similar file, download the images, and insert them manually».
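The manual workaround the commenter describes (collect the chatbot's text, download the images, assemble everything into one document yourself) can be sketched as a small script. This is a hypothetical illustration, not anything ChatGPT provides: it assumes you have saved each chapter as a `.txt` file and each downloaded illustration as a `.png` or `.jpg` with the matching name, and it stitches them into a single HTML page that can then be printed to PDF. Only the Python standard library is used.

```python
from pathlib import Path
import html

def assemble_book(folder: str, out_file: str = "book.html") -> str:
    """Stitch saved chapter texts (*.txt) and their illustrations
    (*.png / *.jpg with the same stem) from `folder` into one HTML
    page. File names are assumed to sort in chapter order."""
    src = Path(folder)
    parts = ["<html><body>"]
    for txt in sorted(src.glob("*.txt")):
        # Escape the chapter text so it renders safely as HTML.
        chapter = html.escape(txt.read_text(encoding="utf-8"))
        parts.append(f"<h2>{html.escape(txt.stem)}</h2>")
        parts.append(f"<pre>{chapter}</pre>")
        # If an image shares the chapter's name, embed it after the text.
        for ext in (".png", ".jpg"):
            img = txt.with_suffix(ext)
            if img.exists():
                parts.append(
                    f'<img src="{img.name}" alt="{html.escape(txt.stem)}">'
                )
    parts.append("</body></html>")
    out = src / out_file
    out.write_text("\n".join(parts), encoding="utf-8")
    return str(out)
```

The resulting `book.html` opens in any browser, where «Print to PDF» produces the downloadable file the Reddit user was hoping for.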
Eventually, thanks to the community’s tips and taunts, the «author» admitted that ChatGPT had fooled him. PC Gamer’s Tyler Wilde, who wrote about the story, ran his own experiment along the lines of what Emotional_Stranger_5 had attempted: he asked the bot to create an illustrated edition of «Moby Dick» in a particular style and, naturally, failed as well.
«I uploaded the text of «Moby Dick», which is in the public domain, and the chatbot created a «sample page» with placeholder text instead of images. I was able to download this page as a PDF, but when it came time to produce the full whimsical illustrated edition of «Moby Dick», the bot dodged the task: it asked additional questions about my preferences and said that «advanced PDF creation tools were temporarily unavailable», but that I could share more details while I waited».
Experience, and not only this case, shows that people often turn to AI tools without understanding their technical and functional limits, and with blind trust in the veracity of the output despite the well-known problem of «hallucinations». Scientists end up with bizarre AI-generated illustrations in published papers, lawyers are fined over fictitious case citations, AI has fooled even the White House, and yet it still seems to enjoy a limitless line of credit.