Something Shifted. I Don’t Have a Name for It Yet.
I killed a 3-part series this week. Here's why.
About three weeks ago I started noticing something I couldn’t name.
My AI collaboration process hadn’t changed. Same Voiceprint loaded. Same VAST framework. Same instructions, same process, same everything. The output followed the rules. Technically.
But something was off. Not wrong. Not robotic. Not even obviously smooth in that way I’d trained myself to catch. Just... slightly misaligned. Like a smile that arrives half a second late.
I kept tweaking. Adjusting prompts. Running the same passages through different models. The output kept passing every test I’d built. And I kept feeling like it shouldn’t.
(This is a deeply annoying thing to experience when you literally teach people how to make AI match their voice for a living.)
I didn’t say anything about it because I didn’t have anything useful to say. A vague sense that the ground is shifting underneath you is not a post. It’s a therapy sesh.
Then Opus 4.6 dropped on February 5th. I tested it. The drift got sharper. Not dramatically worse—just more of whatever I’d been noticing. More technically correct. More subtly off.
Then I read what 4.6 was actually optimized for. Reasoning. Agentic tasks. Long-context retrieval. Not creative writing.
And the vague thing snapped into focus.
Anthropic made the model smarter and the writing got worse. Testers running both noticed 4.5’s output was more natural, more human. 4.6 traded that for thinking power.
Which meant the drift I’d been feeling for weeks wasn’t a fluke. It was a direction.
The creative mimicry is shifting underneath creators. No changelog. No warning. And nobody’s really talking about what that means.
The AI writing conversation keeps getting stuck on the same two problems. First: “AI sounds robotic.” Mostly solved. Second: “AI sounds smooth but convergent.” That’s the one I built my entire methodology around. The Voiceprint. VAST. The whole divergence-as-antidote approach.
That problem is fading too. Not because it’s been solved. Because the nature of the convergence is changing. AI doesn’t sound robotic anymore. It doesn’t even sound obviously polished in that telltale way. It sounds... fine. Passable. Like an “I love you” muttered while secretly swiping through Tinder profiles.
I had a 3-part series ready to publish. The Slopper Series. I’d named the enemy (the Slopper, a creator who sands their writing until nothing distinctive remains). Built a fix (the De-Slop Edit, three VAST-layer passes to restore texture). Had the whole arc mapped. Thumbnails made. Drafts reviewed.
I killed it this morning.
Not because it was wrong, per se. Because the premise was expiring faster than I could publish it. The series assumed the main threat was smoothness, and smoothness is becoming a less useful way to describe what’s happening. The AI output I’m seeing now isn’t smooth. It’s adequate. It technically works. It passes the sniff test. It follows the Voiceprint instructions with unsettling compliance.
And that adequacy is the thing I can’t name yet.
(This is why I don’t pre-announce series anymore.)
Here's what keeps nagging at me like a typo I only notice after hitting send: if the VAST framework and the Voiceprint methodology were built to counteract convergence, what happens when convergence itself shapeshifts? What if the new convergence isn’t about words anymore? What if it’s about something underneath the words that I haven’t built a detector for?
I’m not saying the methodology is broken. My Voiceprint still produces output that sounds more like me than anything else out there. But I can feel the gap between “follows my instructions” and “sounds like me” getting weird in ways I don’t have language for yet. And publishing a three-part series that pretends I’ve got this figured out would be exactly the kind of bullshit I built this newsletter to fight.
So this is me saying it out loud before I’ve figured it out.
Something shifted. I can feel it in the output. I can feel it in the gap between what the models produce and what my own writing actually sounds like when I’m fully in control. The old tells are fading. The new ones haven’t announced themselves yet.
I'd rather say 'I don't know' than cosplay someone who does. I’ve spent years telling creators that generic is the enemy. It would be pretty damn ironic if I published a framework built on a fading premise just because the publishing calendar said it was time.
(I’ll figure it out. I always do. But I’m not going to pretend I already have.)
🧉 Have you noticed it? Not the obvious stuff. Not the robotic phrasing or the em-dash addiction or the “delve” problem. Something subtler. Something in your AI output over the last few months that you can’t quite point to but can feel. A shift you don’t have a name for either.
I’m genuinely asking. Tell me in the comments. Because if it’s just me, I need to know that too.
Crafted with love (and AI),
Nick “Currently Between Frameworks” Quick
PS… If this kind of raw, mid-process thinking resonates with you more than polished frameworks, subscribe and share it with someone who’s also feeling the shift. The answers are coming. Just not today.
PPS… The Voiceprint methodology isn’t what’s broken. It’s the reason I noticed the shift in the first place. If you want that same sensitivity to your own voice, the Quick-Start Guide is where you start. Get it here:

Hey Nick, yes, have definitely noticed something in Claude. I noticed it leading up to the last big update too. It's just a "vibe" I can't put my finger on. The AI editing process hasn't felt as thorough maybe? I have a couple of prompts I like to use at the very end of an edit as a final catch and they just don't seem to work that well anymore (I don't kid myself that my expertise has improved so vastly in a short period of time, lol). It's a vague disquiet that change is on the way and it feels different.
Thanks for writing so honestly about it - I love seeing the "middle" of the process, so much more interesting than the shiny output where everything is perfection!
That's the question I don't have data on yet.
My bubble is mostly Claude users, so the signal might be skewed.
You might be the canary. Report back?