AI Usage Statement
Let's talk about it.
The First Draft and Algorithms
There are two paths to creating a first draft: putting the pieces together yourself or generating something that you can then react to.
Either way, it's going to go through a LOT of revisions before you're done. Or it might inspire you to take a completely new direction and to scrap the original entirely.
Sometimes all you need is something that sounds, or reads as, "close enough" to something you might write. A rough sketch of a plot. A conversation that gets a character from point A to point B. A shape in the fog that your brain can finally orient itself around.
I remember making sentence diagrams in English class. Those color-coded, branching trees of subjects and predicates and modifying clauses: the kind of exercise that felt tedious at the time, but was quietly teaching me something about architecture. About how a sentence holds weight. When you read enough, you start to see the frameworks underneath everything. The skeleton inside the prose.
Shortly before the pandemic hit and we all retreated into our homes and our screens, many online businesses were already using something called "swipe files": plug-and-play templates you could drop into your website copy, email sequences, and social media posts. Digital content creators had cracked a specific code: some people don't need inspiration. They need permission to start. So these creators built something formulaically sound, generic enough that practically anyone could wear it like a borrowed coat and walk out the door.
"Here's the thing," and "it's not this, it's that" were already crawling across the internet because that was the framework being sold, copied, adapted, and sold again. And when the large language models came along, drinking from the vast ocean of everything we'd ever published online? That's what they swallowed. Those phrases didn't appear in LLM outputs by accident. They were already everywhere. The algorithm learned to speak the way we'd already learned to write.
Which got me thinking.
What if we treated LLM outputs like swipe files: a generic, serviceable starting point that we then press our own fingerprints into?
Think about what that actually solves. The inner critic doesn't show up to tear apart someone else's rough draft with nearly the same ferocity it reserves for yours. There's something psychologically different about revising a flawed thing versus constructing a perfect one from scratch. When the blank page is already filled (even badly, even with placeholder sentences that feel like cardboard), the paralysis breaks. You're no longer creating. You're responding. And responding is something your brain knows how to do.
We also bypass a hundred low-stakes decisions that quietly drain us before we've written a single line that matters. How much mental energy do we genuinely need to spend deciding how a character knocks on a door? The cadence of their knuckles against wood. Whether they hesitate. What their breath sounds like in the hallway before they do it. You could agonize over that for twenty minutes, or you could ask an LLM to generate the scene (with your specific characters, your plot beats, your emotional undercurrent) and spend those twenty minutes making it yours instead. The friction shifts. You move from the grinding gears of creation to the more fluid work of revision, which, if you're honest with yourself, is probably where the real writing happens anyway.
Because writing, at its core, is about following a pattern. Every word you choose either opens a door or closes one. It expands or limits what could possibly come next: the mood, the rhythm, the reader's expectation. Your phone already knows this, in its clumsy way. The autocomplete function watches how you type and starts guessing the next word, then the next, until it wanders off into increasingly nonsensical territory. LLM outputs are that same impulse, dramatically scaled up, trained on the collective grammar and syntax of human expression, capable of generating coherent paragraphs that follow the internal logic of your prompt.
Patterns live in storytelling too, and they always have. Most of us absorbed the basic structure of a tale before we could name it. We understand Acts. We recognize the shape of a Hero's Journey even when we've never read Joseph Campbell. We know, in our bones, when a story has earned its ending and when it's cheating.
For this project specifically, I had material. I had dossiers on major characters so detailed that I knew their handwriting and their coffee orders. I had a timeline of plot beats stretching from the 1800s to the 2030s, a web of causes and consequences, of how a single event in one century sends hairline fractures through the next. What I didn't have was a door. I couldn't figure out where to walk in. Every potential entry point felt like it would either drop readers into the cold deep end or bury them under an avalanche of lore before they had any reason to care.
When I tried to explain my story idea to friends, I could almost see their eyes glaze over: too much information, too many moving parts.
Meanwhile, the AI platform I was using looked at everything I'd assembled and essentially said, "Slow down. At a reader-friendly pace, you have enough here for six books."
Not one book. Six.
I'd been treating my material like the sprawl was the problem. The sprawl was the point. I had the bones of a saga: multiple timelines, a magic system with real internal logic, a cosmology that rewards the kind of reader who notices things and keeps quiet about it until the third act. I didn't have too much; I had enough for an expanded series.
That realization didn't come from me staring harder at the blank screen. It came from having something to react to.
That's the purpose of the first draft: to get your thoughts out, in whatever rough and embarrassing form they need to take, so that you can begin the real work of shaping them. When you learn how to use AI as a co-writer rather than a replacement, the dynamic changes entirely. You share a rough idea β a scene, a character beat, a thematic question you're circling β and the LLM takes its best guess at how you might have written it out. Then you read it the way you'd read a first draft by a writer who almost gets it. You see what's close. You feel what's missing. And you start to write.
The inner critic has very little to say about revision. It saves its best material for the blank page. So we don't give it one.
The Internal Critic is Immune to Evidence
This isn't abstract. It's personal.
I've written RPG content since before chatbots were a thing. This included developing some of the story elements for the "No Gods, No Masters" multi-adventure module. I've had people tell me I'm witty, creative, and audacious enough to come up with the idea of "Atlantis simultaneously exists as a thriving civilization and ancient ruins, due to some strange reality-bending phenomenon, and the players get to decide which version becomes the 'true' one."
Yet, I stare at the blank screen and freak out, with every typed word feeling like an awkward accusation that I'm mediocre at best.
I've been building worlds and stories ever since I was a kid. I know how to write. I know how to research.
None of that quiets the voice in my head that starts prodding me the moment my fingers line up on the keyboard.
So, if anyone says I'm using AI because I'm a poor writer?
Don't worry. My brain already tells me that daily. Even though the evidence says otherwise.
Impostor syndrome makes the process of writing a first draft way harder than it needs to be. I can't bribe my inner critic to shut up, so I have to find ways to bypass it.
Here's What Happens When You Get This Wrong
The last time I seriously sat down with this story (June 2025, a paid ChatGPT account, and the dangerous combination of free time and a brain that needs a new shiny object every five minutes), I went on a two-week bender that felt like world-building. The kind where you emerge from the rabbit hole convinced you've built something real.
What I actually had was a pile of first-draft material I couldn't stand. Two weeks of back-and-forths that stretched late into the night, complete with tears and frustration, only for me not to recognize what had been created. The process fueled my inner critic rather than quieting it.
The LLM had started guessing. Not just at prose; at intention. It forgot earlier conversations, quietly rewrote plot logic it had decided wasn't landing, and at one point suggested my protagonists pause a time-sensitive crisis (a looming apocalypse) to go on a secluded retreat and process their recent trauma.
Think about it: the heroes barely escape death, the end of the world is still going to happen, and the AI model decides that this is the point where the characters need to go into the wilderness to do shadow work and deal with interpersonal issues.
Which⦠fine, I guess. Valid impulse. Trauma-informed storytelling is real.
But what the AI gave me was not my story. That was the algorithm making a narrative decision it had no business making, because I'd stopped pushing back and started just receiving.
I'd drifted from collaboration into something closer to dictation. The AI was writing. I was approving. That's not co-creation; that's ghostwriting with extra steps.
So I made a hard pivot, put the project on hold, and stress-tested AI's ability to do collaborative storytelling.
I threw myself into AI roleplay chatbot creation, partly to chase the feedback loop of making something I could actually release into the world, and partly because it turned out to be the best low-stakes laboratory I could have stumbled into. Building RP bots forces you to think about character voice in extremely granular terms. You're essentially writing a system prompt that has to become a person. You learn very quickly which parts of your lore are solid and which parts are vibes you've been quietly hoping no one examines too closely.
It also taught me something I now treat as non-negotiable:
You can collaborate on prose. You probably shouldn't collaborate on plot, character psychology, or thematic meaning.
When you hand those things over, even briefly, even accidentally, the AI fills the vacuum with generic story logic. It's not malicious. It's pattern completion. It does what it was trained to do: follow the shape of stories it has already consumed.
And the shape it reaches for, when left to its own devices, is the median.
The average.
The thing that would offend the fewest people and satisfy the most people in the most forgettable way possible.
I'm not building the median. I don't want to play safe and forgettable.
The Proof
So here's what this looks like in practice: not as a theory about AI and craft, but as a record of decisions made about a specific, strange, ambitious project. Everything below is evidence that the rules above are load-bearing.
This is a saga involving a divine incel leading folks towards fascism, where the protagonists are far from perfect. A saga that I know will offend some, engage others, and be memorable in the process.
Here's what the workflow actually looks like now:
I use AI as a second brain for world-building: bouncing ideas, testing internal logic, asking it to stress-test a lore concept. The moment I get into genuinely nuanced territory (the kind of corner-case stuff that would ignite a seventeen-page argument on a fan forum), the model starts to wobble. It gets polite and evasive in that particular way that means it's running out of usable patterns. At that point, I close the chat and brain-dump directly into a document, because that wobble usually means I've hit the edge of what's been done before. Which is exactly where I want to be working.
The story's architecture is mine. Entirely and unapologetically.
The magical orientation system (Magi and Witches, a deliberate riff on ABO dynamics from the Omegaverse tradition) grew out of my attempt to simplify magical theory. High versus Low. Ceremonial versus folk. Structure and power on one side; something more fluid, more relational, on the other. And then I subvert the whole framework by introducing Technomancers, who integrate ceremonial practice with their own form of folk magic, and NULL-bloods, who refuse to fit neatly anywhere. Because I'm queer, and I'm tired of binaries. I'm tired of watching people get sorted into boxes that were never built with them in mind.
The Technomancers are their own kind of argument. Their entire culture and religion (Tumblr aesthetics, the Konami code, and all) is a meditation on how religious movements pull inspiration from the most unlikely of places. When we look back at traditions that have survived centuries, we rarely stop to ask how any of it started. We don't interrogate the strange, eyebrow-raising choices that somehow calcified into doctrine. With the Technomancers, we get to watch it happen in real time. We get to see the decisions being made and sit with the discomfort of recognizing them. For anyone who's spent time in neopagan circles, this may land with uncomfortable familiarity.
And then there's the incel god. A deity whose wounded pride curdles into manipulation, who puts on a false face, cultivates followers who need someone to validate their rage and their worst impulses, and sets the conditions for fascism to take root. We've all seen what misdirected anger looks like when it finds an audience. We've all watched people claim a tradition of radical compassion while insisting that empathy itself is the sin. This is that story, told at the scale of the divine. It's the kind of jab a chatbot would sidestep entirely: too pointed, too specific, too likely to provoke rage-posts. That's precisely why it belongs here.
A chatbot didn't give me an incel god whose wounded ego metastasizes into fascism. It didn't give me a religion built around Tumblr and the Konami code, or a magical caste system that quietly dismantles ABO tropes from the inside. Those ideas are mine: strange, specific, and too weird or too pointed for any safety-filtered model to generate unprompted. What the AI gave me was a sentence to argue with.
That's the actual answer to your question. Talented writers don't stare at blank pages because they lack ideas; they stare at blank pages because the gap between the idea in your head and the draft in front of you is enormous, and crossing it takes energy that has nothing to do with talent. AI closes that gap. It gives me something to revise, which is where the real writing happens anyway. Every choice about theme, tone, structure, what to keep, and what to cut is still mine. The voice you're reading right now is mine. I'm not outsourcing the judgment; I'm outsourcing the friction.
So no, I don't think using AI makes me less of a writer. I think it makes me a writer who finishes things.
The ethical questions are real: the environmental cost, the economic pressure on working writers, the unresolved mess of training data. I'm not waving any of that away. I have a position on all of it, and it's argued out in full in the companion essay, Rage Over Drops of Water, because that argument deserves actual rigor rather than a hasty paragraph bolted onto the end of a different one. If you want the moral calculus, it's waiting for you there.
