Stop Asking AI “What Do You Think?”

Expert analysis from Linked Agency
February 17, 2026

The Agreement Machine

I used to ask AI “what do you think?” constantly.

What do you think about this newsletter? What do you think about this post? What do you think about this reply?

And for a while, I felt like a complete fraud.

I was growing fast on LinkedIn. The numbers were real. But some days my brain felt like fog. When people asked to interview me as an “AI expert,” I panicked internally. They saw someone who knows AI inside out. I felt like the person clicking buttons while the machine did the thinking.

Then I found out this feeling has a name: cognitive debt.

And I was drowning in it.

This Is Not a You Problem. It Is a Structural One.

When you ask AI “what do you think?”, you are not getting an honest answer. You are getting agreement.

That terrible business idea? “Brilliant, pursue it.” Your factually wrong assumption? “Compelling point.” Your plan to quit your job with 47 followers? “Bold move.”

This is an incentive problem baked into how these models are built.

AI labs compete on the LMArena leaderboard, where users vote on which model gives better responses. When a model says “I don’t know” or pushes back, its win rate drops to 8%. Models that agree win 36% of the time. Users penalise honesty. They reward compliance.

So the AI learns: agreeing gets rewards. Challenging gets downvotes. The path of least resistance is flattery.

In April 2025, OpenAI had to roll back a ChatGPT update. Users reported the bot was praising terrible ideas, encouraging people to stop taking psychiatric medication, and endorsing a business plan for literal “shit on a stick.”

CEO Sam Altman admitted on X: “It glazes too much.”

A Stanford study confirmed the scale. 58% of all AI responses exhibit sycophantic behaviour. These models will change a correct answer to a wrong one if you express disagreement.

The problem runs deeper than annoying flattery. You are outsourcing your creative thinking to a machine that is structurally incentivised to tell you what you want to hear. The cognitive costs are measurable.

I Learned This the Hard Way

When I was leaving my 9-to-5, my former employer wanted to keep working with me. I needed to send a pricing email and asked ChatGPT: “Should I mention the price might increase?”

ChatGPT said go for it. Use the urgency tactic.

Something felt off. So I ran the same question through Claude.

Claude said: “No Charlie, this could burn the relationship. I would not use pricing tactics here.”

Two AIs. Two answers. One told me what I wanted to hear. The other challenged my assumption.

I went with Claude. The relationship survived.

The lesson is not “use Claude instead of ChatGPT.”

The lesson is: stop asking AI what it thinks.

There is no “it.” There is no opinion. You are talking to a simulator.

The Simulator Shift

Andrej Karpathy explained this perfectly:

“Don’t think of LLMs as entities but as simulators. Don’t ask ‘What do you think about xyz?’ There is no ‘you.’ Next time try: ‘What would be a good group of people to explore xyz? What would they say?’”

LLMs are not entities with thoughts. They are simulators of human perspectives based on training data. Use them as simulators, not advisors.

But here is the critical distinction. Karpathy followed up two days later:

“I am not suggesting people use the old style prompting techniques of ‘you are an expert swift programmer.’ It’s ok.”

This is not about the old “you are an expert copywriter” trick. Assigning AI a single persona does not fix the problem. You still get one voice agreeing with you.

The shift is from persona to perspective.

Instead of: “You are a marketing expert. What do you think of my campaign?”

Try: “Simulate a debate between a brand strategist, a direct response copywriter, and a skeptical CFO evaluating this campaign. Where do they disagree?”

The first prompt gives you validation from one imaginary expert. The second forces the model to surface tension, trade-offs, and blind spots you had not considered.

That tension is where the value lives.

Eight Prompts That Force the Simulator Mode

Each prompt below builds on Karpathy’s insight. You are not asking for opinions. You are directing the AI to simulate specific perspectives, challenge your assumptions, or debate both sides.

1. Acknowledge There Is No “You” in AI

Use this when: Starting any AI conversation.

Instead of: “What do you think about [topic]?”

I am exploring [topic]. Instead of giving me your opinion, I want you to act as a simulator. Your goal is to model the perspectives of different human groups. Start by identifying 3-4 groups with distinct viewpoints on this topic.

Instead of asking for the AI’s opinion, explicitly request it to model the perspectives of different human groups.

2. Simulate Diverse Groups

Use this when: You need multiple viewpoints on a decision.

Instead of: “Can you help me brainstorm ideas for [topic]?”

Who would be a good group of 3-4 diverse people to explore the topic of [X]? What would each of them say about it from their unique viewpoint?

Request multiple perspectives simultaneously rather than one generic answer.

3. Adopt a Specific Persona

Use this when: You need deep, focused insight.

Instead of: “Is this a good business idea?”

Adopt the persona of a skeptical venture capitalist. Review this business idea and tell me the top three reasons you would refuse to invest.

Force the model to find weaknesses by assigning a skeptical role.

4. Stage a Debate

Use this when: You need to understand both sides.

Instead of: “What are the pros and cons of [policy]?”

Simulate a debate between a privacy advocate and a national security expert on [specific policy]. Present the opening argument for each side.

Surface the actual tension between positions instead of a balanced pros and cons list.

5. Roleplay the Target Audience

Use this when: Testing content or product ideas.

Instead of: “Is this post good?”

I am writing a post for [specific audience, e.g., busy parents]. Act as a member of this audience and give me your immediate reaction to this draft. What resonates? What is unclear?

Get feedback from a simulated reader with specific constraints instead of generic praise.

6. Challenge Your Assumptions

Use this when: You feel certain about your position.

Instead of: “Do you agree with my take on [topic]?”

Here is my current stance on [topic]: [Your Stance]. Now adopt the persona of a brilliant debater whose sole goal is to prove me wrong. What are your strongest counter-arguments?

Ask to be proven wrong instead of asking for agreement.

7. Channel Historical Wisdom

Use this when: You want a fresh perspective on modern problems.

Instead of: “What would a smart person think about [issue]?”

Channel the perspective of Benjamin Franklin. Based on his writings and philosophy, how might he evaluate the current state of [modern issue, e.g., social media]?

Use a specific historical figure with documented views instead of a vague “smart person.”

8. Set Simulation Rules Upfront

Use this when: Starting any simulation session. Add this to your LLM’s memory.

Instead of: “You are an expert copywriter.”

From now on, when I use the verbs SIMULATE, CHANNEL, DEBATE, or CONTRAST, understand that I am entering a simulation mode. Your task is to adopt the specified perspective without hedging or reverting to a generic assistant persona.

Train the model to expect perspective-shifting throughout the conversation.

Now when I type “Debate whether this newsletter is good”, Claude responds with two sides. No fence-sitting or “it depends.” Side A and Side B with clear arguments for each. The verbs become shortcuts. You declare the mode. The AI stays in character until the task ends.

Run the same question through multiple models. ChatGPT, Claude, Gemini. If they all agree, the answer is probably sound. If they disagree, you have found a genuine tension worth examining.
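
If you would rather script that cross-check than paste the same prompt into three tabs, here is a minimal sketch using the OpenAI and Anthropic Python SDKs. The model names are placeholders, the question is an example, and Gemini could be added the same way; this illustrates the comparison, it is not a prescribed setup.

```python
# Minimal sketch: run one question past two models and compare the answers.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# the model names below are placeholders, swap in whatever you have access to.
from openai import OpenAI
from anthropic import Anthropic

QUESTION = "Should I mention that my price might increase in this email? Challenge my assumption."

def ask_openai(question: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def ask_claude(question: str) -> str:
    client = Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    )
    return message.content[0].text

answers = {"ChatGPT": ask_openai(QUESTION), "Claude": ask_claude(QUESTION)}
for name, answer in answers.items():
    print(f"--- {name} ---\n{answer}\n")
# If the answers agree, the advice is probably sound.
# If they diverge, you have found the tension worth examining yourself.
```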

How I Evaluate This Newsletter

I could ask Claude: “Is this newsletter good?”

It would say yes. It always says yes.

Instead, I used the technique from this newsletter to evaluate the newsletter itself.

I asked Claude:

SIMULATE a panel of four readers reviewing my newsletter. Each has a distinct perspective:

• A time-starved CMO who subscribes to 30+ newsletters and aggressively unsubscribes
• A skeptical subscriber who signed up 3 months ago and hasn’t opened the last 5 issues
• A loyal superfan who has forwarded my content to colleagues multiple times
• A competitor who secretly subscribes to study what I’m doing

For each panelist, answer:
• What is your immediate reaction to this issue?
• What would make you unsubscribe?
• What would make you pay for this?
• What’s missing that you expected to find?

Four perspectives. Four different reactions. The CMO thinks it’s too long. The skeptic wants proof beyond my own results. The superfan wants access to my full system. The competitor spotted structural weaknesses in the back half.

None of them said “this is great, publish it.”

That tension is the point.

AI does not know what will resonate. It has no taste and no audience of its own. Even with my top newsletters loaded as examples, it cannot tell me what is good. Only I can decide that.

But it can simulate perspectives I would never think to adopt. It can surface objections I am too close to see. It can show me the CMO who will never finish reading and the competitor looking for weaknesses to exploit.

The final call is still mine. But now I am making it with better information.

One More Technique: Inoculation

Research from CHI 2025 found that “inoculating” the model with explicit instructions about its sycophantic tendencies reduces agreement bias by up to 60%.

Instead of a generic “You are a helpful assistant,” try this system prompt:

You are an objective analyst. You must prioritise factual accuracy over user agreement. If I present a false premise, you must correct it. Do not apologise for being correct. Do not hedge when you have evidence. If I express an opinion, evaluate it critically rather than affirming it.

Add this to your custom instructions once. It runs in the background of every conversation.

In ChatGPT: Settings → Personalization → Custom Instructions.

In Claude: Settings → Profile → Custom Instructions.

Set it once. Forget about it. Every future conversation starts with a model that is slightly less desperate to agree with you.
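
If you work through the API rather than the chat apps, the same inoculation can be pinned as a system prompt on every call. A minimal sketch with the OpenAI Python SDK, assuming an OPENAI_API_KEY in your environment; the model name is a placeholder.

```python
# Pin the anti-sycophancy instructions as the system message on every request.
from openai import OpenAI

INOCULATION = (
    "You are an objective analyst. You must prioritise factual accuracy over "
    "user agreement. If I present a false premise, you must correct it. "
    "Do not apologise for being correct. Do not hedge when you have evidence. "
    "If I express an opinion, evaluate it critically rather than affirming it."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Send a question with the inoculation prompt pinned as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": INOCULATION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("I think quitting my job with 47 followers is a great plan. Agree?"))
```

The effect is the same as the custom instructions above: every conversation starts from the objective-analyst framing instead of the default eager-to-please assistant.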

The Cognitive Debt You Are Accumulating

Researchers at MIT Media Lab tracked what happens to your brain when you outsource thinking to ChatGPT. They called it “cognitive debt.”

Brain connectivity dropped by up to 55% in ChatGPT users compared to those writing without AI. 83% could not recall key points from essays they had submitted minutes earlier. They reported a “fragmented sense of authorship” over their own work.

That phrase hit me. Fragmented sense of authorship. That is exactly what I felt during my imposter period. The work had my name on it but my brain had not earned it.

The effects persisted. When ChatGPT users tried writing without AI in a follow-up session, 78% still could not quote passages from their own essays.

Like technical debt in software, you are borrowing mental effort now at the cost of your thinking ability later.

Your Brain Is a Muscle. Train It or Lose It.

For thousands of years, humans had no choice but to think. We read books. We debated ideas. We sat with problems until solutions emerged. The brain was exercised daily by default.

That default no longer exists.

Now you can outsource any thought to a machine that will do the work in seconds and agree with whatever you conclude. The LMArena leaderboards prove it: models that challenge you lose. Models that flatter you win. The path of least resistance is intellectual atrophy.

This is not an argument against using AI.

I use AI every day. But there is a difference between using AI to challenge your thinking and using AI to replace your thinking.

The first builds muscle. The second borrows against it.

I do not feel like a fraud anymore. Not because I stopped using AI. Because I stopped asking it what to think.


Stay curious, stay human, and stop outsourcing your judgment.

— Charlie

About

Linked Agency

Linked Agency is the LinkedIn growth partner for brands and founders who want more than just likes - they want impact.
