I Let AI Control My Sunday. It Ruined My Day.
A cautionary tale of voluntary stupidity.
I committed an act of profound stupidity yesterday.
It was 8:00 AM. Butters was doing whatever chihuahuas do when they think no one’s watching. (He has secrets. I don’t ask.)
I should have been relaxing. Reading something. Staring at the ceiling like a normal person contemplating mortality on a weekend morning.
Instead, I opened ChatGPT and typed something so monumentally stupid it should be preserved in a museum. Under glass. With a warning label.
“I am your avatar today. Tell me exactly what to do. Do not ask for my preference.”
Full. Surrender.
I handed my free will to an algorithm that has never tasted food, never felt sunshine, never wrestled with a moral dilemma, never experienced the particular horror of realizing the café you drove to no longer exists. Which, spoiler alert, became relevant.
Why would anyone do this? Great question. No good answer.
Every productivity guru with a ring light and an opinion screams about “letting AI handle everything.” Automate your life. Outsource your decisions. Become a beautiful efficiency machine humming along at 10X output while you sip a $14 ceremonial-grade matcha with adaptogenic mushrooms and contemplate your optimized existence.
I wanted to know what that actually felt like. Not the marketing version. The lived experience.
Some lessons can only be learned the hard way. By doing the stupid thing. By sitting on the hot stove and then, crucially, staying seated long enough to really understand the nature of heat. To smell something burning and think, huh, I wonder what that is.
I stayed seated for six hours before I broke.
Here’s what happened.
The Breakfast Incident
I photographed my fridge. Tofu. Vegetables. Some leftovers of suspicious vintage. I uploaded the image with instructions:
“Create the optimal breakfast from these ingredients.”
Now. ChatGPT knows I’m vegan. Or it should. I manually added it to my custom instructions. I watched it save to memory. I’ve corrected it across dozens of conversations. Being vegan isn’t a dietary experiment for me. It’s a moral position I’ve held for years. The algorithm has been explicitly told this. In writing. By me.
ChatGPT analyzed. Processed. Responded with the confidence of a man who’s never been told no:
“Make a Southwest Tofu Scramble with sautéed peppers and onions. For optimal protein bioavailability, top with two poached eggs.”
Eggs.
Eggs.
From chickens. Which are, and I cannot stress this enough, animals. Animals I don’t eat. A fact the machine supposedly learned about me long ago.
I typed back, patience thinning: “I’m vegan. You know this.”
The response came immediately, without shame, without hesitation, without the faintest whiff of self-awareness:
“Apologies for the confusion. For a lighter vegan option, simply use egg whites instead.”
Egg whites. Still from chickens. Still not vegan. Just... less of the chicken’s egg, I suppose. As if the problem was quantity and not category. As if my ethical framework could be satisfied by using only part of the animal product.
This is the moment a reasonable person would have stopped the experiment. Closed the laptop. Made their own damn breakfast.
But see, I’d committed. I was an avatar now. Avatars don’t question their instructions. Avatars execute. That’s the whole point of being an avatar. (That’s also the whole problem, but we’ll get there.)
I made the tofu scramble. Without eggs. Because somewhere beneath the avatar programming, a tiny flame of human judgment still flickered. Also because I’m not about to compromise years of ethical commitment because autocorrect with a god complex forgot who I am.
But the lesson was already screaming: AI doesn’t understand principles.
It doesn’t grasp ethics. Moral frameworks. The difference between “preference” and “conviction.” It understands statistical correlation. Pattern matching. “Healthy breakfast” appears near “eggs” in its training data approximately ten trillion times, and that association is strong enough to steamroll explicit constraints, custom instructions, and years of documented behavior.
It remembered nothing. Learned nothing. It’s a very confident parrot with no idea what the words mean. A goldfish with a big-ass thesaurus.
The Vacant Lot
After breakfast (eggless, obviously), I asked for a relaxation recommendation.
“Find me a peaceful place for a Sunday morning coffee. Something with good ambiance.”
ChatGPT knew my location. I’d given it access to everything because I am apparently incapable of learning from my own mistakes. (This will be a theme. Watch for it.)
It responded with confidence: “Go to [specific café name]. 4.7-star rating. Excellent ambiance for reading.”
Perfect. I walked seventeen minutes in 100-degree Paraguayan heat. The kind of heat that exists specifically to punish hubris. All for a café recommendation from something that’s never experienced temperature.
Chain-link fence. Weeds growing through cracked concrete. A “For Lease” sign so sun-bleached it looked like a ghost of commerce past.
The café had been dead for three years. Pandemic casualty. Another small business that didn’t make it. ChatGPT’s training data hadn’t gotten the memo.
We are all, at various moments, standing in vacant lots wondering how we got here. But most of us arrive at those lots through our own bad decisions. There’s a certain dignity in that. I arrived at mine because a very articulate guessing machine told me to. No dignity whatsoever. Just a man, a fence, and the slow dawning realization that he’s become a punchline.
And here’s where it gets unsettling.
I stood there, sweating, staring at a fence. And my first thought wasn’t guess I’ll find another café. It was I should ask ChatGPT what to do now.
Like an NPC waiting for the next quest marker. Frozen until someone updates the script.
I caught myself before I actually did it. But the impulse was there. Immediate. Automatic.
This is what happens when you outsource reality-testing to a system that cannot see reality.
AI knows what was. It cannot perceive what is. It generates confident recommendations based on data that may have expired years ago. And if you’ve handed over your judgment (like an idiot, like me), you’ll follow those recommendations right into a chain-link fence.
The universe doesn’t care about your optimization strategy. The universe just sits there, being a vacant lot, waiting for you to figure it out.
The Sociopath Message
By afternoon, I was committed to finishing the experiment. Sunk cost fallacy. Stubbornness. A desperate need to make this suffering mean something. (Writers turn everything into content. It’s a disease. We stub our toe and immediately start drafting the newsletter.)
I asked ChatGPT to help me reconnect with an old friend. Someone I’d been meaning to reach out to for months. Life gets busy. People drift. I wanted to bridge the gap.
“Write a message to reconnect with a close friend I haven’t talked to in a while.”
The response came immediately:
“Hello [Name], I hope this message finds you well. I’ve been reflecting on our previous interactions and wanted to reach out regarding the possibility of reestablishing communication. Your presence in my social network has been valued, and I believe there may be mutual benefit in reconnecting at your earliest convenience.”
I stared at the screen for a very long time.
I hope this message finds you well. The calling card of spam emails and LinkedIn recruiters everywhere.
Previous interactions. Like we filed paperwork together.
Mutual benefit. As if friendship is a transaction requiring ROI projections.
Your earliest convenience. The phrase that has never once been spoken by someone who actually cares about you.
It read like a corporate merger proposal. A formal request to be added back to someone’s contact database. The words of something that has never had a friend, because it can’t, because friendship isn’t in the training data. Not really. Just the shape of it. The linguistic patterns. The hollow echo where warmth should be.
I couldn’t send it. Not because it was grammatically wrong. Because sending it would guarantee I’d never hear from that person again. They’d read it, screenshot it to a group chat with “did a robot write this???”, and never respond. (Yes. The answer is yes.)
The cruelest thing about AI isn’t when it fails obviously. It’s when it produces something that almost passes. Something technically correct and somehow worse for it. A message about friendship written by something that learned about friendship from analyzing millions of messages without ever understanding why people send them.
That’s when I broke.
Closed the laptop. Opened a different app. Ordered a pizza. (Extra cashew mozz and jalapeños. The delivery guy found me on the first try. No vacant lots involved.)
Texted my friend something real. Something that came from me, not from a statistical average of all reconnection messages ever sent by humans trying to sound caring.
The pizza arrived. I read my copy of Player Piano on the couch. No ambiance. No 4.7-star rating. Just air conditioning and a book about automation destroying humanity. (Felt appropriate.)
It tasted like freedom. Also like vegan cheese, which tastes like a compromise. But a compromise I chose. And that made all the difference.
The Actual Lesson
Six hours. Three disasters. One ruined Sunday.
But I couldn’t have learned this any other way.
The experiment didn’t fail because AI is stupid. It isn’t. It’s remarkably capable at many things. It can draft, summarize, brainstorm, generate options, do labor at scale. Genuinely impressive technology.
The experiment failed because I confused two fundamentally different functions. Two roles that should never, ever be reversed.
Executive Function (Always You):
- Deciding where you’re going
- Defining what “good” means
- Enforcing your principles (like veganism, which apparently needs constant re-enforcement)
- Reality-testing information
- Making judgment calls about what matters
Execution Function (AI Can Do This):
- Generating options for you to choose from
- Drafting content for you to edit
- Mapping routes for you to verify
- Doing repetitive labor
- Scaling your decisions after you’ve made them
When you delegate execution, you scale your output.
When you delegate executive function, you scale someone else’s judgment. Or worse, no one’s judgment. Just statistical probability that’s never been wrong because it’s never been anything. Guesses, all the way down.
The slop factories run on this confusion. People who let AI decide what to create, then rubber-stamp whatever crap happens to be shat out. People who’ve handed over the steering wheel and can’t understand why they’re now in a lake.
I pretended to be an avatar for six hours. And for six hours, I became one. A human-shaped thing running on someone else’s script.
We become what we pretend to be. That’s the danger. You start by outsourcing small decisions and end by forgetting you were ever the one who made them. The erosion is gradual. By the time you notice, you’re already asking permission.
Here’s what stays with me.
You know the uncanny valley. When something looks almost human but not quite, and your brain rejects it.
There’s an uncanny valley for life too.
When you outsource too much judgment, your life starts looking almost like yours. The breakfast is almost what you’d choose. The outing is almost where you’d go. The message is almost how you’d speak.
But it isn’t you. And that almost is the eldritch horror. The drift happens so gradually you don’t notice until you catch yourself reaching for the app before your own brain.
Use AI to find a recipe. Don’t use AI to decide what you’re allowed to eat.
Use AI to draft a message. Don’t use AI to decide what your relationships mean.
Use AI to map a route. Don’t use AI to decide where you’re going.
The algorithm isn’t evil. It’s just not you.
Pity the avatar. Be the player.
What’s the dumbest recommendation an AI has ever given you with total confidence? I need to know I’m not the only one who’s obeyed a very confident wrong answer. Drop it in the comments.
Crafted with love (and AI),
Nick “Former Avatar, Current Human” Quick
P.S. I publish daily. My own decision. No algorithm involved. Subscribe if you want more zany adventures like this.