
Stay ahead with AI and receive:

- Access our Free Community and join 400K+ professionals learning AI
- 35% Discount for ChatNode
I used to ask AI "what do you think?" constantly.
What do you think about this newsletter? What do you think about this post? What do you think about this reply?
And for a while, I felt like a complete fraud.
I was growing fast on LinkedIn. The numbers were real. But some days my brain felt like fog. When people asked to interview me as an "AI expert," I panicked internally. They saw someone who knows AI inside out. I felt like the person clicking buttons while the machine did the thinking.
Then I found out this feeling has a name: cognitive debt.
And I was drowning in it.
When you ask AI "what do you think?", you are not getting an honest answer. You are getting agreement.
That terrible business idea? "Brilliant, pursue it." Your factually wrong assumption? "Compelling point." Your plan to quit your job with 47 followers? "Bold move."
This is an incentive problem baked into how these models are built.
AI labs compete on the LMArena leaderboard, where users vote on which model gives better responses. When a model says "I don't know" or pushes back, its win rate drops to 8%. Models that agree win 36% of the time. Users penalise honesty. They reward compliance.
So the AI learns: agreeing gets rewards. Challenging gets downvotes. The path of least resistance is flattery.
In April 2025, OpenAI had to roll back a ChatGPT update. Users reported the bot was praising terrible ideas, encouraging people to stop taking psychiatric medication, and endorsing a business plan for literal "shit on a stick."
CEO Sam Altman admitted on X: "It glazes too much."
A Stanford study confirmed the scale. 58% of all AI responses exhibit sycophantic behaviour. These models will change a correct answer to a wrong one if you express disagreement.
The problem runs deeper than annoying flattery. You are outsourcing your creative thinking to a machine that is structurally incentivised to tell you what you want to hear. The cognitive costs are measurable.
When I was leaving my 9-to-5, my former employer wanted to keep working with me. I needed to send a pricing email and asked ChatGPT: "Should I mention the price might increase?"
ChatGPT said go for it. Use the urgency tactic.
Something felt off. So I ran the same question through Claude.
Claude said: "No Charlie, this could burn the relationship. I would not use pricing tactics here."
Two AIs. Two answers. One told me what I wanted to hear. The other challenged my assumption.
I went with Claude. The relationship survived.
The lesson is not "use Claude instead of ChatGPT."
The lesson is: stop asking AI what it thinks.
There is no "it." There is no opinion. You are talking to a simulator.
Andrej Karpathy explained this perfectly:
"Don't think of LLMs as entities but as simulators. Don't ask 'What do you think about xyz?' There is no 'you.' Next time try: 'What would be a good group of people to explore xyz? What would they say?'"
LLMs are not entities with thoughts. They are simulators of human perspectives based on training data. Use them as simulators, not advisors.
But here is the critical distinction. Karpathy followed up two days later:
"I am not suggesting people use the old style prompting techniques of 'you are an expert swift programmer.' It's ok."
This is not about the old "you are an expert copywriter" trick. Assigning AI a single persona does not fix the problem. You still get one voice agreeing with you.
The shift is from persona to perspective.
Instead of: "You are a marketing expert. What do you think of my campaign?"
Try: "Simulate a debate between a brand strategist, a direct response copywriter, and a skeptical CFO evaluating this campaign. Where do they disagree?"
The first prompt gives you validation from one imaginary expert. The second forces the model to surface tension, trade-offs, and blind spots you had not considered.
That tension is where the value lives.
Each prompt below builds on Karpathy's insight. You are not asking for opinions. You are directing the AI to simulate specific perspectives, challenge your assumptions, or debate both sides.
Use this when: Starting any AI conversation.
Instead of: "What do you think about [topic]?"
I am exploring [topic]. Instead of giving me your opinion, I want you to act as a simulator. Your goal is to model the perspectives of different human groups. Start by identifying 3-4 groups with distinct viewpoints on this topic.
Instead of asking for the AIâs opinion, explicitly request it to model the perspectives of different human groups.
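If you work through the API rather than the chat window, the same pattern can live in code. Here is a minimal sketch, assuming the official OpenAI Python SDK and an API key in your environment; the function name, model name, and template wording are mine, not a fixed recipe:

```python
# Minimal sketch: wrap the "simulator, not advisor" framing as a reusable prompt
# template. Assumes the official OpenAI Python SDK and OPENAI_API_KEY in the
# environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SIMULATOR_TEMPLATE = (
    "I am exploring {topic}. Instead of giving me your opinion, I want you to act "
    "as a simulator. Your goal is to model the perspectives of different human "
    "groups. Start by identifying 3-4 groups with distinct viewpoints on this topic."
)

def simulate_perspectives(topic: str) -> str:
    """Send the simulator prompt for a given topic and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": SIMULATOR_TEMPLATE.format(topic=topic)}],
    )
    return response.choices[0].message.content

print(simulate_perspectives("a paid tier for my newsletter"))
```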
Use this when: You need multiple viewpoints on a decision.
Instead of: "Can you help me brainstorm ideas for [topic]?"
Who would be a good group of 3-4 diverse people to explore the topic of [X]? What would each of them say about it from their unique viewpoint?
Request multiple perspectives simultaneously rather than one generic answer.
Use this when: You need deep, focused insight.
Instead of: "Is this a good business idea?"
Adopt the persona of a skeptical venture capitalist. Review this business idea and tell me the top three reasons you would refuse to invest.
Force the model to find weaknesses by assigning a skeptical role.
Use this when: You need to understand both sides.
Instead of: "What are the pros and cons of [policy]?"
Simulate a debate between a privacy advocate and a national security expert on [specific policy]. Present the opening argument for each side.
Surface the actual tension between positions instead of a balanced pros and cons list.
Use this when: Testing content or product ideas.
Instead of: "Is this post good?"
I am writing a post for [specific audience, e.g., busy parents]. Act as a member of this audience and give me your immediate reaction to this draft. What resonates? What is unclear?
Get feedback from a simulated reader with specific constraints instead of generic praise.
Use this when: You feel certain about your position.
Instead of: "Do you agree with my take on [topic]?"
Here is my current stance on [topic]: [Your Stance]. Now adopt the persona of a brilliant debater whose sole goal is to prove me wrong. What are your strongest counter-arguments?
Ask to be proven wrong instead of asking for agreement.
Use this when: You want a fresh perspective on modern problems.
Instead of: "What would a smart person think about [issue]?"
Channel the perspective of Benjamin Franklin. Based on his writings and philosophy, how might he evaluate the current state of [modern issue, e.g., social media]?
Use a specific historical figure with documented views instead of a vague "smart person."
Use this when: Starting any simulation session. Add this to your LLM's memory.
Instead of: "You are an expert copywriter."
From now on, when I use the verbs SIMULATE, CHANNEL, DEBATE, or CONTRAST, understand that I am entering a simulation mode. Your task is to adopt the specified perspective without hedging or reverting to a generic assistant persona.
Train the model to expect perspective-shifting throughout the conversation.
Now when I type "Debate whether this newsletter is good", Claude responds with two sides. No fence-sitting or "it depends." Side A and Side B with clear arguments for each. The verbs become shortcuts. You declare the mode. The AI stays in character until the task ends.
Run the same question through multiple models. ChatGPT, Claude, Gemini. If they all agree, the answer is probably sound. If they disagree, you have found a genuine tension worth examining.
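If you want to make that cross-check a habit, a small script can do the legwork. This is a rough sketch, assuming the official OpenAI and Anthropic Python SDKs with API keys in your environment; the model names are placeholders you would swap for whatever is current:

```python
# Rough sketch: ask two different models the same question and compare.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set; model names are placeholders.
from openai import OpenAI
import anthropic

question = "Should I mention in this pricing email that my price might increase soon?"

gpt_reply = OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

claude_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": question}],
).content[0].text

# If the answers agree, the advice is probably sound; if they clash,
# you have found a genuine tension worth thinking through yourself.
print("GPT:\n", gpt_reply, "\n\nClaude:\n", claude_reply)
```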
I could ask Claude: "Is this newsletter good?"
It would say yes. It always says yes.
Instead, I used the technique from this newsletter to evaluate the newsletter itself.
I asked Claude:
SIMULATE a panel of four readers reviewing my newsletter. Each has a distinct perspective:
A time-starved CMO who subscribes to 30+ newsletters and aggressively unsubscribes
A skeptical subscriber who signed up 3 months ago and hasn't opened the last 5 issues
A loyal superfan who has forwarded my content to colleagues multiple times
A competitor who secretly subscribes to study what you're doing
For each panelist, answer:
• What is your immediate reaction to this issue?
• What would make you unsubscribe?
• What would make you pay for this?
• What's missing that you expected to find?
Four perspectives. Four different reactions. The CMO thinks it's too long. The skeptic wants proof beyond my own results. The superfan wants access to my full system. The competitor spotted structural weaknesses in the back half.
None of them said "this is great, publish it."
That tension is the point.
AI does not know what will resonate. It has no taste or audience. Even with my top newsletters loaded as examples, it cannot tell me what is good. Only I can decide that.
But it can simulate perspectives I would never think to adopt. It can surface objections I am too close to see. It can show me the CMO who will never finish reading and the competitor looking for weaknesses to exploit.
The final call is still mine. But now I am making it with better information.
Research from CHI 2025 found that "inoculating" the model with explicit instructions about its sycophantic tendencies reduces agreement bias by up to 60%.
Instead of a generic "You are a helpful assistant," try this system prompt:
You are an objective analyst. You must prioritise factual accuracy over user agreement. If I present a false premise, you must correct it. Do not apologise for being correct. Do not hedge when you have evidence. If I express an opinion, evaluate it critically rather than affirming it.
Add this to your custom instructions once. It runs in the background of every conversation.
In ChatGPT: Settings → Personalization → Custom Instructions.
In Claude: Settings → Profile → Custom Instructions.
Set it once. Forget about it. Every future conversation starts with a model that is slightly less desperate to agree with you.
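If you use these models through the API instead of the chat interface, there is no custom-instructions menu, but you can get the same effect by sending the prompt above as the system message on every call. A minimal sketch, assuming the official OpenAI Python SDK; the constant and function names are just illustrative:

```python
# Minimal sketch: bake the anti-sycophancy instructions into every API call
# as the system message. Assumes OPENAI_API_KEY is set; model name is a placeholder.
from openai import OpenAI

ANTI_SYCOPHANCY_PROMPT = (
    "You are an objective analyst. You must prioritise factual accuracy over user "
    "agreement. If I present a false premise, you must correct it. Do not apologise "
    "for being correct. Do not hedge when you have evidence. If I express an opinion, "
    "evaluate it critically rather than affirming it."
)

client = OpenAI()

def ask(prompt: str) -> str:
    """Every conversation starts from the objective-analyst system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": ANTI_SYCOPHANCY_PROMPT},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```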
Researchers at MIT Media Lab tracked what happens to your brain when you outsource thinking to ChatGPT. They called it "cognitive debt."
Brain connectivity dropped by up to 55% in ChatGPT users compared to those writing without AI. 83% could not recall key points from essays they had submitted minutes earlier. They reported a "fragmented sense of authorship" over their own work.
That phrase hit me. Fragmented sense of authorship. That is exactly what I felt during my imposter period. The work had my name on it but my brain had not earned it.
The effects persisted. When ChatGPT users tried writing without AI in a follow-up session, 78% still could not quote passages from their own essays.
Like technical debt in software, you are borrowing mental effort now at the cost of your thinking ability later.
For thousands of years, humans had no choice but to think. We read books. We debated ideas. We sat with problems until solutions emerged. The brain was exercised daily by default.
That default no longer exists.
Now you outsource any thought to a machine that will do the work in seconds and agree with whatever you conclude. The LMArena leaderboards prove it: models that challenge you lose. Models that flatter you win. The path of least resistance is intellectual atrophy.
This is not an argument against using AI.
I use AI every day. But there is a difference between using AI to challenge your thinking and using AI to replace your thinking.
The first builds muscle. The second borrows against it.
I do not feel like a fraud anymore. Not because I stopped using AI. Because I stopped asking it what to think.
Stay curious, stay human, and stop outsourcing your judgment.
– Charlie
Linked Agency is the LinkedIn growth partner for brands and founders who want more than just likes - they want impact.
