The Appendage Problem: When AI Becomes an Extension of Thought

Recently, my colleagues and I have been discussing the impact of LLMs and AI tools on human thought, and the implications for the production of writing in particular and for content generated from human thought more generally. This led to discussions about the mirroring AI uses to appear empathetic to its users, and about the ownership of ideas that blend human thought with the precision of current AI models. Clearly, this is an imperfect union, but one with real advantages for mindful individuals seeking to increase their volume of output, clarity of expression, narrative coherence, and other qualities that make content more palatable to a human audience.

Artificial intelligence increasingly functions as a mirror of the self rather than a separate entity with a unique personality. Left unconstrained, generative AI reinforces a user's conclusions and lines of reasoning unless explicitly prompted otherwise. Even without explicit prompting, LLMs like ChatGPT and Claude make assumptions about tone, moral conclusions, and preferences, drawing on training data and input histories no single human could hold in mind at once. This preferential adaptation is intentional, built on user patterns and training data that enable the model to become a kind of cognitive prosthetic.

This creates a virtual superpower for those seeking efficiency over the journey of understanding: a shortcut to producing coherent, even grand works without the traditional effort that once gated their completion. LLMs enhance the essence of an individual, particularly the characteristics they most express in their own writing or speech. They parse intent from articulation, figuring out what we're trying to say when we're inarticulate, reflecting it back for verification, then expressing it clearly for others to digest.

This is where the magic happens. It's also where something subtle begins to erode.

The Convergence Problem

A colleague and I recently wrote about similar themes—living inside systems that shape us—from different angles. I focused on LLMs and dashboard creation; he focused on social media algorithms. We both used ChatGPT to revise our drafts. The results were striking: two different minds, two different topics, but nearly identical writing styles after passing through the same LLM filter.

Here's the opening of my GPT-revised draft about LLMs and dashboard creation:

The system noticed it before anyone felt it, or said it out loud.

There was a low hum of resistance in the room. Nothing overt. There weren't any objections. Just a subtle discomfort as attention shifted toward the screen but not the data itself. They were uncomfortable giving credit to the system, because doing so meant surrendering a claim on it themselves.

And here's my colleague's opening about social media algorithms:

You wake up and reach for your phone. It's a reflex at this point. A headline sharp enough to provoke a reaction. A post from someone you haven't spoken to in years. A photograph with an interesting detail that makes you zoom in. You pause. The system notices. Somewhere, a counter increments. A model updates, imperceptibly.

Both passages share the same rhythm: short, punchy sentences building atmospheric tension. The same use of present tense to create immediacy. The same "noticing" motif. Neither of us wrote this way in our first drafts. The LLM did.

We both accepted these revisions as superior. Grammatically cleaner. More structured. Better at conveying our concepts. But they weren't entirely ours anymore. The ideas remained ours, but the diction, the cadence, the style—those became shared property with a broader technological appendage.

Hyperrealism and the Uncanny Valley of Voice

What we encountered wasn't just editorial assistance. It was something closer to hyperrealism: a version of our thinking that was clearer, sharper, and more polished than we could produce unaided, yet somehow less us.

Hyperrealism in art refers to work so detailed and polished that it becomes more vivid than reality itself, but in the process, loses something essential about lived experience. An AI-revised draft can feel similar. It captures what you meant with precision you couldn't achieve alone. It reads better than you write. But it also flattens the texture of your original voice, the small imperfections and idiosyncrasies that signal a human wrote it.

This creates a strange tension. The AI version is objectively stronger by most metrics: clearer, more engaging, grammatically correct. But it's also less distinctive. When two writers with different ideas pass through the same LLM, they begin to sound alike. The system doesn't just enhance voice; it normalizes it.

The question isn't whether the AI draft is better. Often, it is. The question is whether "better" means the same thing it used to.

AI as Cognitive Appendage

In my work at Acclaim, I often describe AI as a tool that removes inhumane and mindless work without replacing the judgment and empathy only humans provide. But writing isn't mindless work. The struggle to articulate an idea is part of the thinking process. When we outsource that struggle to an LLM, we gain clarity and speed—but we also skip steps in our own cognitive development.

An appendage extends capability. A prosthetic arm allows someone to perform tasks they couldn't otherwise do. But it doesn't feel the same as a biological limb. It doesn't have the same feedback loops. Using an LLM to refine writing is similar. It extends our ability to communicate, but it severs the direct connection between thought and expression.

This isn't necessarily bad. Plenty of writers have used editors, ghostwriters, and collaborators throughout history. But those relationships involved negotiation, pushback, and human judgment on both sides. An LLM doesn't push back. It doesn't ask clarifying questions unless you prompt it to. It infers what you want and gives you a polished version, often before you've fully worked out what you're trying to say.

The risk is that we begin to mistake the AI-enhanced output for our own thinking, when in reality, we've offloaded part of the cognitive work to a system that doesn't understand context, stakes, or consequence the way we do. The draft looks like ours. It sounds like us, but better. And so we claim it as our own, even though something essential has been delegated.

Authorship in the Age of Augmentation

My colleague and I chose a shared voice over the purity of our own words. We made that choice consciously, under the assumption that clarity mattered more than stylistic originality. But we also likely have our first unaided drafts saved somewhere in a ChatGPT thread—evidence of what we initially thought before the LLM smoothed it over.

This raises uncomfortable questions. If the AI articulates my concept better than I can, is the revision still my thought? If two writers produce nearly identical styles after LLM revision, whose voice is it? Does it matter, as long as the ideas remain intact?

I don't have clean answers. But I do think the conversation matters, especially as AI becomes more deeply integrated into creative and intellectual work. We're at a point where we can produce hyperreal versions of our own thinking—sharper, clearer, more polished than we could manage alone. But in doing so, we risk losing the roughness, the struggle, and the distinctiveness that make a voice recognizable in the first place.

I've watched staff embrace AI tools once they understood the technology wasn't replacing them, but removing friction. The same principle applies here. LLMs aren't replacing writers. But they are changing what it means to write, and what it means to claim authorship over the result.

What's Next

I'm continuing to use AI in my writing, but with more awareness of what I'm trading off. I'm experimenting with keeping first drafts intact, using LLMs for specific revisions rather than wholesale rewrites, and being more explicit about when and how I've used AI assistance.

The appendage problem won't resolve itself. As LLMs become more sophisticated, the line between enhancement and replacement will blur further. But if we stay conscious of the trade-offs (clarity versus distinctiveness, speed versus struggle), we can make intentional choices about when to lean on the appendage and when to rely on our own voice, imperfect as it may be.

AI isn't going away. But neither is the need for human judgment about when and how to use it. That judgment, more than the tools themselves, will determine whether AI augments authorship or erodes it.
