I was struck by OpenAI's new model, for all the wrong reasons
Sam Altman has shared a snippet from a new OpenAI model trained for creative writing. He says it's the first time he's been "struck" by something AI has written, but the comments section is a total mess of extreme agreement and disagreement.
we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.
PROMPT:
Please write a metafictional literary short story…
— Sam Altman (@sama) March 11, 2025
The post is quite long, showing Altman's prompt of "Please write a metafictional literary short story about AI and grief" and the complete response from the LLM.
Is the story "good"?
If, like many people in the comments, you're not ready to read through the whole thing, here's the gist: it covers a human trying to use AI to simulate conversations with a lost loved one. However, since it's "metafictional," it's really just the LLM talking about constructing such a story using borrowed human phrases from its training data.
To get a feel for the writing style, just read the first paragraph or two. It's incredibly abstract, very wordy, and full of random AI-themed metaphors. It's written in a way that will please no one: most people will call it pretentious, and the people who actually like this style probably won't accept an AI-generated version of it.
It definitely doesn't please me. There's no point to a "story" if there's no intent behind it. It doesn't really matter what that intent is, but there has to be one: an intent to entertain us, teach us, persuade us, or debate with us. Take that human element out of the equation and we're left with empty words that happen to be in an acceptable order.
There are plenty of similar opinions in the comments, and the main argument against them is that the AI could have intent, too. Someone phrases it as "Are its thoughts worth less than yours?" The problem is that LLMs don't have thoughts.
We might be able to make this argument for AGI models in the (distant) future, but OpenAI's products so far are just language models, using probability to stitch words together one token at a time. Funnily enough, the AI's short story actually refers to this fact:
So when she typed "Does it get better?", I said, "It becomes part of your skin," not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts.
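If you're wondering what "stitching words together one token at a time" actually looks like, here's a rough sketch. It uses the small, freely available GPT-2 as a stand-in (OpenAI obviously hasn't published this new model), so treat it as an illustration of the mechanism rather than the real thing:

```python
# Minimal sketch of autoregressive generation, assuming the Hugging Face
# "transformers" library and GPT-2 as an illustrative stand-in. At every
# step the model outputs a probability distribution over its vocabulary,
# one token is sampled, and the loop repeats. That's the whole trick.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("Does it get better?", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):  # generate 20 more tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]         # scores for the next token only
        probs = torch.softmax(logits, dim=-1)              # turn scores into probabilities
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token from them
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```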
But just because it's true doesn't mean everyone believes it. In fact, the craziest thing about ChatGPT, and all of the other consumer models that have popped up since, is the spectrum of opinions it triggers from the general public.
Just about every opinion is represented in the comments somewhere: that the model has "recognized its own impermanence" and sentience is already here, or that it no longer matters whether there's human experience behind the words because you can't tell who wrote them. Some call it a plagiarism machine, others believe it has learned how to mourn, and plenty just couldn't care less.
One sad but convincing opinion is that the way it works and the ethical problems surrounding it don't even matter, because the fiction that makes money nowadays is already simple and formulaic, and as soon as AI can mimic it well enough to sell copies, the publishing industry will use it. I can't argue with that, but I still hate it!
Is there a use for creative writing AI models?
In my opinion, if all you do is give this creative writing model a one-line prompt, then the response won't be good for anything other than a laugh.
The real use I can imagine for this kind of model is ghostwriting. A human with a story could use the tech to help them find an interesting way to structure and express it. Ideally, this would be used to help more people get their voices out there. More realistically, it'll be used to make cheap fiction very quickly with no goal other than profit.
But honestly, I don't think current models are good enough to do this job yet, because when it comes to complex tasks with multiple sets of instructions, they just stop listening.
ChatGPT models will ignore parts of your prompt, and when you try to correct them, they pretend to "understand" but then make the same mistake again and again. That doesn't sound like a fun or efficient way to write anything.
I also doubt that Altman was genuinely "struck" by his model's writing; it's all just marketing. I tried giving the same prompt to DeepSeek R1, and its response was also about a human trying to use AI to talk to a lost loved one, written in the same abstract style with lots of nonsensical AI and code-related metaphors.
Altman says he doesn't know how or when this model will get released to the public, so if you want to experiment with it yourself, you'll probably have to wait a while.
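In the meantime, if you want to reproduce the DeepSeek comparison I mentioned, something like this should do it. It assumes you have a DeepSeek API key and uses their OpenAI-compatible endpoint, with "deepseek-reasoner" as the model name for R1 (per their docs at the time of writing):

```python
# Rough sketch for sending Altman's prompt to DeepSeek R1.
# Assumes the "openai" Python package and a DeepSeek API key stored in
# the DEEPSEEK_API_KEY environment variable; DeepSeek exposes an
# OpenAI-compatible endpoint, so the same client library works.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek's name for the R1 model
    messages=[
        {
            "role": "user",
            "content": "Please write a metafictional literary short story about AI and grief.",
        }
    ],
)

print(response.choices[0].message.content)
```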