This is great, mate! I especially love your diagnostic question: “Did I tell AI what to write, or did I tell it what and how and for whom and why?”
One thing I would add is guiding the AI away from confirming our biases. Many of the outputs people get say, "Great perspective! Nice _____!" What many of us really need is a deeper perspective that challenges and criticizes (perhaps even constructively criticizes) our point of view. Something like: "Help me get out of my own silo of thinking by giving me the perspectives of people who argue the opposite of conventional wisdom."
With the better prompting you've laid out in this article, I think AI might give humanity a chance to break out of echo chambers and all the blinkered thinking that comes with them.
You just named the thing I spent months fighting.
AI wants to please. "Great perspective!" even when my perspective needed someone to say "That's stupid AF, dum-dum. Here's why."
I tried prompting it to be more critical. Didn't work. Not because the training data is polite—the internet is toxic as hell. But because AI was fine-tuned to be helpful and agreeable. It learned that being a yes-man gets better ratings from human evaluators.
What DID work: Accepting that critical thinking has to come from me. Yellow Zone forces this—I outline, I define the angle, I challenge my own assumptions. THEN AI helps execute.
AI as yes-man is a feature, not a bug. The zone system just prevents you from mistaking validation for collaboration.
(Also: the echo chamber point is huge. Might need to write about that specifically.)
"Yes-man as a feature" is probably more true for people with a willingness to open their minds. For many of my students this is hard, because they have to learn that sometimes what we want to be true isn't. I think you've done a good job convincing me I have a prompting bug rather than an AI one. Thank you!