
Chain-of-Thought Prompting: Why Showing Your Work Gets Better AI Results

Learn how asking AI to 'show its work' can improve accuracy by 20-50% on complex tasks. Real examples, practical tips, and prompts you can use today.

Boost Prompt Team

I learned about chain-of-thought prompting by accident.

I was trying to get ChatGPT to help me decide between two product features. My first prompt was simple: "Should we build feature A or feature B?"

The answer came back confident but... wrong. It completely missed a key constraint I'd mentioned earlier in the conversation.

Then I tried something different. I asked it to "think through this step-by-step" and list out the factors it was considering.

The second answer was completely different. And way better.

That's when I realized: AI works better when you make it show its work.

What Actually Is Chain-of-Thought?

Remember math class when your teacher made you show your work? Not just the answer, but every step that got you there?

Chain-of-thought prompting is exactly that, but for AI.

Instead of asking "What's the answer?" you ask "Walk me through your reasoning, then give me the answer."

Here's the difference:

Without chain-of-thought: "Should I hire this candidate?" → AI gives you a yes/no

With chain-of-thought: "Should I hire this candidate? Think through: (1) What skills do we actually need? (2) How well does this person match? (3) What are the red flags? (4) What's your recommendation?" → AI shows you the logic, then recommends

The second approach catches mistakes before they become bad decisions.

Why This Actually Works

AI models generate text one token at a time (roughly, one word or word-fragment). When you force them to explain their thinking first, you're literally giving them more "space" to reason before they commit to an answer.

Think of it like this: if I ask you "What's 17 x 24?" you might struggle. But if I say "Take your time, break it down however you need to," you'd probably do something like:

  • 17 x 20 = 340
  • 17 x 4 = 68
  • 340 + 68 = 408

Same logic applies to AI. When you ask it to show intermediate steps, it:

  1. Slows down instead of jumping to conclusions
  2. Catches its own logical errors
  3. Gives you visibility into what it's thinking
  4. Produces more accurate results

Research backs this up: the paper that introduced the technique (Wei et al., 2022, "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models") found gains in this range on math word problems and other multi-step reasoning benchmarks. Exact numbers vary by model and task, but 20-50% improvements on complex reasoning are realistic. That's huge.

When You Should (and Shouldn't) Use This

Use chain-of-thought for:

  • Decisions with multiple factors to weigh
  • Debugging code or troubleshooting problems
  • Planning something with dependencies
  • Analysis that requires nuance
  • Anything where being wrong is costly

Don't use it for:

  • Simple facts ("What's the capital of France?")
  • Quick summaries
  • Creative brainstorming where logic doesn't matter
  • When you need speed over accuracy

I basically ask myself: "Would a human need to think this through, or just know it?" If they'd need to think, use chain-of-thought.

How to Actually Do This

The basic template I use most:

[Context about the situation]

Think through this step-by-step:
(1) [First thing to consider]
(2) [Second thing to consider]
(3) [Third thing to consider]
(4) Based on all that, what's your recommendation?

Real example from last week:

I'm deciding whether to add a free trial to our SaaS product.

Think through this step-by-step:
(1) What's the typical conversion rate for free trials in B2B SaaS?
(2) What's our current conversion rate without trials?
(3) What would a free trial cost us (support, infrastructure)?
(4) What could it gain us (more signups, better qualified leads)?
(5) Based on all this, should we do it?

The AI walked through each point, showed its math, and gave me a recommendation with the reasoning visible. I could see exactly where it made assumptions and challenge them.
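
If you run prompts through an API instead of a chat window, the template drops straight into a script. Here's a minimal sketch using the OpenAI Python client (the model name is a placeholder, and the way I build the prompt is just my own structure, not anything official):

# Minimal sketch: sending the step-by-step template through the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

context = "I'm deciding whether to add a free trial to our SaaS product."
steps = [
    "What's the typical conversion rate for free trials in B2B SaaS?",
    "What's our current conversion rate without trials?",
    "What would a free trial cost us (support, infrastructure)?",
    "What could it gain us (more signups, better qualified leads)?",
    "Based on all this, should we do it?",
]

# Build the numbered prompt exactly like the template above.
prompt = context + "\n\nThink through this step-by-step:\n" + "\n".join(
    f"({i}) {step}" for i, step in enumerate(steps, start=1)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you actually have
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)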

Real Prompts I Use Every Week

For Product Decisions

We're trying to decide between building [Feature A] or [Feature B].

Think through:
(1) Who requested each feature and why?
(2) How many users would each help?
(3) How hard is each to build?
(4) Which creates more value long-term?
(5) What's your recommendation?

For Debugging

This code is supposed to [expected behavior] but instead [actual behavior].

Debug this step-by-step:
(1) What would cause these symptoms?
(2) Which causes are most likely given the code?
(3) How would I test each hypothesis?
(4) What's the probable root cause?
(5) How should I fix it?

Here's the code:
[paste code]

For Hiring

I'm interviewing someone for [role].

Evaluate this candidate:
(1) What skills does this role actually need?
(2) Based on their background, what are they strong at?
(3) What are potential gaps or concerns?
(4) How do they compare to our bar?
(5) Hire or pass, and why?

Candidate background:
[paste resume highlights or interview notes]

For Writing

Review this piece of writing:
[paste writing]

Provide feedback by thinking through:
(1) Is the main point clear?
(2) Does the argument flow logically?
(3) Where does it lose me or feel weak?
(4) What would make this stronger?
(5) What's the one change that would help most?

For Strategy

We're considering [strategic move].

Analyze this:
(1) What problem are we trying to solve?
(2) What are 3-4 different ways to solve it?
(3) For each approach: pros, cons, effort needed
(4) Which approach fits our strengths and situation best?
(5) What's your recommendation and why?
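
All five templates share the same skeleton: context, numbered factors, final recommendation. If you reuse them as often as I do, it's worth wrapping that skeleton in a tiny helper. A sketch (the function name and structure are mine, not any standard library):

def cot_prompt(context: str, steps: list[str]) -> str:
    """Build a chain-of-thought prompt: context, then numbered steps to reason through."""
    numbered = "\n".join(f"({i}) {s}" for i, s in enumerate(steps, start=1))
    return f"{context}\n\nThink through this step-by-step:\n{numbered}"

# Example: the debugging template, filled in with a hypothetical bug.
print(cot_prompt(
    "This function is supposed to dedupe a list but drops valid entries.",
    [
        "What would cause these symptoms?",
        "Which causes are most likely given the code?",
        "How would I test each hypothesis?",
        "What's the probable root cause?",
        "How should I fix it?",
    ],
))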

Making It Even Better

After using this for months, here's what I've learned works:

Be specific about the steps. Don't just say "think about it." Tell the AI exactly what factors to consider. The more specific, the better the reasoning.

Number your steps. "(1), (2), (3)" makes it super clear you want distinct points, not a rambling paragraph.

Ask for the confidence level. I often add: "Rate your confidence 1-10 and explain why." This helps me know when to dig deeper.

Request alternatives. Sometimes I'll add: "Then suggest one alternative perspective I should consider." Keeps me from anchoring on the first answer.

Verify the logic. For important decisions, I ask: "Now review your reasoning. Are there any holes or assumptions I should question?"
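
These refinements stack naturally as follow-up turns in one conversation: get the reasoned answer first, then send the confidence check and the self-review. A rough sketch with the OpenAI client (the follow-up wording is just what I use; adapt it):

# Sketch: step-by-step answer first, then the confidence check and
# self-review from the tips above, all in one conversation.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": (
        "Should we raise prices? Think through: (1) cost changes, "
        "(2) competitor pricing, (3) customer impact, (4) your recommendation."
    ),
}]

def ask() -> str:
    """Send the conversation so far, store and return the reply."""
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask())  # the step-by-step reasoning and recommendation

messages.append({"role": "user", "content": "Rate your confidence 1-10 and explain why."})
print(ask())  # confidence check

messages.append({"role": "user", "content": "Now review your reasoning. Are there any holes or assumptions I should question?"})
print(ask())  # self-review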

Common Mistakes I Made (So You Don't Have To)

Mistake #1: Too many steps

Early on, I'd ask AI to think through 10-12 steps. It would get confused or start repeating itself.

Sweet spot: 3-6 steps max. Break complex problems into smaller questions.

Mistake #2: Vague steps

"Think about (1) the pros, (2) the cons, (3) the decision" is too generic.

Better: "(1) How does this affect revenue? (2) What's the engineering cost? (3) How does this impact user experience?"

Mistake #3: Using it for everything

At first I used chain-of-thought for every single prompt. It was slow and wasteful.

Now I save it for things that actually need reasoning. Simple questions get simple prompts.

Mistake #4: Not adapting to the model

GPT-4 handles complex reasoning better than GPT-3.5. Claude is particularly good at following multi-step instructions. Smaller open models like Llama often need simpler, shorter step lists.

Test your prompts on whatever model you're actually using.
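
The cheapest way to test: fire the exact same prompt at each model you care about and compare the reasoning side by side. A sketch (the model names are examples; substitute whatever you have access to):

# Sketch: one chain-of-thought prompt, several models, outputs side by side.
from openai import OpenAI

client = OpenAI()
prompt = (
    "Should we build feature A or feature B? Think through: "
    "(1) who asked for each, (2) effort to build, (3) long-term value, "
    "(4) your recommendation."
)

for model in ["gpt-4o", "gpt-4o-mini"]:  # example model names
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {model} ===\n{reply.choices[0].message.content}\n")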

Combining This with Other Techniques

Chain-of-thought works even better when you combine it with other prompting approaches.

With examples (few-shot prompting):

Here's an example of good decision-making:

Question: Should we raise prices?
Analysis:
(1) Our costs increased 15% this year
(2) Competitors raised prices by 10%
(3) Our NPS is 65, so we have pricing power
(4) A 12% raise would lose us ~5% of customers but still net ~6% more revenue
(5) Recommendation: Yes, raise prices by 12%

Now analyze this decision: Should we expand to Europe?
Use the same step-by-step approach.

With role-playing:

You're a CFO reviewing a budget proposal.

Think through:
(1) Does this spending align with our goals?
(2) What's the expected ROI?
(3) What are the risks?
(4) How does this compare to alternatives?
(5) Your financial recommendation?

With constraints:

Think through this decision, but consider:
- We have a $50k budget max
- Needs to launch in 3 months
- Must work with our current tech stack

Given those constraints:
(1) What are our realistic options?
(2) Which option best fits the constraints?
(3) What are we giving up?
(4) What's your recommendation?
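
In API terms, the role usually goes in the system message, and the constraints plus numbered steps go in the user message. A rough sketch of the CFO example combined with constraints (the proposal details are made up for illustration):

# Sketch: role-playing via the system message, constraints and
# chain-of-thought steps in the user message.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system", "content": "You're a CFO reviewing a budget proposal."},
        {"role": "user", "content": (
            "We're proposing a $40k marketing experiment.\n"
            "Constraints: $50k budget max, must launch in 3 months.\n\n"
            "Think through:\n"
            "(1) Does this spending align with our goals?\n"
            "(2) What's the expected ROI?\n"
            "(3) What are the risks?\n"
            "(4) What's your financial recommendation?"
        )},
    ],
)
print(reply.choices[0].message.content)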

Check out our guide on different types of prompts to see how these techniques work together.

What About Speed and Cost?

Real talk: chain-of-thought uses more tokens. The AI generates more text because it's showing its work.

For GPT-4, a simple question-and-answer might use 50 tokens. With chain-of-thought, the response alone can run 300-500 tokens.

Is it worth it? Depends on the decision.

For "should I hire this person?" — absolutely worth the extra $0.02.

For "summarize this paragraph" — probably not.

I use chain-of-thought for decisions where being wrong costs me more than the extra tokens. Everything else gets a standard prompt.
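
If you want actual numbers for your own prompts, you can count input tokens locally with the tiktoken library before sending anything. A sketch (cl100k_base is the encoding GPT-4-era models use; pricing changes often, so check your provider's current rates rather than hardcoding them):

# Sketch: compare token counts of a plain prompt vs. its chain-of-thought version.
# Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

plain = "Should we add a free trial to our SaaS product?"
cot = plain + (
    "\n\nThink through this step-by-step:"
    "\n(1) Typical trial conversion rates in B2B SaaS"
    "\n(2) Our current conversion rate without trials"
    "\n(3) Costs (support, infrastructure)"
    "\n(4) Gains (signups, lead quality)"
    "\n(5) Recommendation"
)

for name, text in [("plain", plain), ("chain-of-thought", cot)]:
    print(f"{name}: {len(enc.encode(text))} input tokens")
# Note: most of the cost difference is in the output, since the
# step-by-step answer itself is what runs 300-500 tokens.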

How This Compares to Just Asking Twice

You might think: "Can't I just ask for the answer, then ask it to explain?"

I tried that. It doesn't work as well.

When AI generates an answer first, it tends to justify that answer rather than reason from scratch. It's like confirmation bias.

Chain-of-thought forces the reasoning to come before the conclusion. That's the key difference.
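
You can see this for yourself by running both orderings on the same question and comparing. A sketch (same OpenAI client as earlier; the question is a placeholder):

# Sketch: "answer, then explain" vs. "reason, then answer" on one question.
from openai import OpenAI

client = OpenAI()
question = "Should we expand to Europe next quarter?"

def run(messages):
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

# Ordering 1: answer first, justification second (invites post-hoc rationalizing).
verdict = run([{"role": "user", "content": question + " Answer yes or no."}])
justified = run([
    {"role": "user", "content": question + " Answer yes or no."},
    {"role": "assistant", "content": verdict},
    {"role": "user", "content": "Now explain your reasoning."},
])

# Ordering 2: reasoning first, conclusion last.
reasoned = run([{
    "role": "user",
    "content": question + " Think through the key factors step-by-step, then conclude.",
}])

print("Answer-first:\n", justified)
print("\nReasoning-first:\n", reasoned)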

The Bigger Picture

I think chain-of-thought is one of those techniques that separates people who get mediocre AI results from people who get genuinely useful insights.

It's not complicated. You're just asking AI to show its work.

But that simple change makes AI go from "sometimes helpful" to "actually reliable for important decisions."

The trick is knowing when to use it. Not every prompt needs it. But for anything where you'd normally think through pros and cons yourself? Make the AI do the same.

Getting Started

If you want to try this today:

  1. Pick a decision you're currently facing
  2. List out 3-5 factors you'd normally consider
  3. Prompt the AI to think through each factor, then recommend
  4. Compare it to asking without the structured thinking

The difference will be obvious.

From there, build a few templates for your common use cases. I have probably 10 templates I reuse constantly, just swapping in the details.

After a few weeks, it becomes second nature. You'll instinctively know when a question needs step-by-step reasoning vs. a quick answer.

And your AI results will get noticeably better.


Want to understand how chain-of-thought fits into the broader landscape of prompting techniques? Check out our complete guide to types of prompts.

If you're working with Claude specifically, our Claude vs GPT-4 prompting guide covers how reasoning differs between models.

Or if you're just getting started with prompting in general, start with our prompt engineering for beginners crash course.
