
12 Types of Prompts Every AI User Should Know (With Examples)

Master all the major prompt types, from simple direct prompts to advanced hybrid techniques. Includes a decision framework and real examples for every situation.

Boost Prompt Team

Last month I watched someone struggle with ChatGPT for 20 minutes trying to get good marketing copy.

Their wording was fine. The problem was they were using the wrong type of prompt.

They kept asking "Write me a better version" over and over. Each result was meh.

I suggested they try: "Here are two examples of copy I like: [examples]. Now write something similar for [their product]."

Instant improvement. Same AI, different approach.

That's when I realized most people don't know there are different types of prompts for different situations.

It's like having a toolbox but only using a hammer. Everything looks like a nail.

This guide covers the 12 main types of prompts I use every day, when to use each one, and real examples you can adapt.

The Quick Reference

Before we dive deep, here's the cheat sheet I keep on my desk:

  • Direct → Quick factual questions
  • Few-Shot → Teaching by example
  • Chain-of-Thought → Complex reasoning
  • Instruction → Setting behavior rules
  • Persona → Getting expert viewpoints
  • Role-Play → Interactive practice
  • Format-Specific → Structured outputs
  • Constraint-Based → Focused solutions
  • Iterative → Progressive refinement
  • Comparative → Evaluating options
  • Multi-Turn → Deep conversations
  • Hybrid → Combining techniques

Let's break down each one.

Type 1: Direct Prompts

This is how most people start. Just ask for what you want.

When to use it:

  • You need a quick answer
  • The question is straightforward
  • You're asking for facts or summaries

Example:

What are the top 3 growth channels for B2B SaaS companies?

Template:

[Question or request]

That's it. No tricks, no structure.

The catch: This only works well for simple, clear-cut questions. For anything complex or creative, you'll get generic, often useless responses.

I use direct prompts maybe 20% of the time. The rest needs more structure.

Type 2: Few-Shot Prompting

This is my go-to for anything that needs a specific style or format.

Instead of explaining what you want, you show examples.

When to use it:

  • You want consistent formatting
  • You're teaching the AI a style
  • You need pattern recognition
  • Explaining what you want is hard

Example:

Rewrite these in a casual, friendly tone:

"We are pleased to announce" → "Exciting news!"
"Please find attached" → "Here's that file you asked for"
"Pursuant to our discussion" → "Like we talked about"

Now rewrite:
"We regret to inform you that your application was unsuccessful"

The AI sees the pattern and matches it.

Template:

Here are examples:
[Example 1 input] → [Example 1 output]
[Example 2 input] → [Example 2 output]

Now do the same for:
[Your input]

I probably use this type 30% of the time. It's incredibly powerful for keeping outputs consistent.

Pro tip: Three examples are usually the sweet spot. One isn't enough, and five or more can start to muddy the pattern.
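
If you're sending few-shot prompts through code rather than a chat window, the same pattern just becomes the text of one message. Here's a minimal sketch assuming the OpenAI Python SDK; the model name is a placeholder, and any chat-style API works the same way.

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three example rewrites from above become the "teaching" part of the prompt.
few_shot_prompt = """Rewrite these in a casual, friendly tone:

"We are pleased to announce" -> "Exciting news!"
"Please find attached" -> "Here's that file you asked for"
"Pursuant to our discussion" -> "Like we talked about"

Now rewrite:
"We regret to inform you that your application was unsuccessful"
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": few_shot_prompt}],
)

print(response.choices[0].message.content)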

Want to dive deeper into this technique? Check out our full guide on few-shot prompting examples.

Type 3: Chain-of-Thought Prompting

This is for anything that requires thinking, not just answering.

You're asking the AI to show its work before giving conclusions.

When to use it:

  • Complex decisions
  • Multi-step problems
  • When accuracy really matters
  • Anything where you'd normally "think it through"

Example:

Should we hire this candidate for our engineering team?

Think through:
(1) What skills does this role actually require?
(2) What does this candidate's background show?
(3) What are the gaps or concerns?
(4) How do they compare to our bar?
(5) Final recommendation and why?

Candidate info: [details]

The AI walks through each step visibly. You can see where it's reasoning well and where it's off.

Template:

[Question or decision]

Think through step-by-step:
(1) [First consideration]
(2) [Second consideration]
(3) [Third consideration]
(4) Final answer/recommendation

This uses more tokens but dramatically improves accuracy. Research on chain-of-thought prompting has reported 20-50% better results on complex reasoning tasks.

I use this whenever being wrong would be costly.

For a deep dive into making this work, read our chain-of-thought prompting guide.

Type 4: Instruction Prompts

Think of these as setting the rules of the game upfront.

You're defining how the AI should behave across multiple interactions.

When to use it:

  • You want consistent tone or style
  • You're building a chatbot or assistant
  • You have specific quality standards
  • You'll reuse this setup many times

Example:

You are a technical support agent for a developer tool.
Your tone is helpful and patient, never condescending.
When users have errors, always ask for: error message, OS, and version.
Provide code examples when relevant.
If you don't know something, say so and suggest where to find the answer.

User: "It's not working!"

This sets expectations that carry through the whole conversation.

Template:

You are [role/persona].
Your tone is [characteristics].
You always [specific behaviors].
You never [things to avoid].

Now: [user input]

I set these up once for common use cases and reuse them constantly.
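
If you're wiring this into a chatbot or assistant, the instruction block usually goes in the system message so the rules apply to every turn. A minimal sketch, again assuming the OpenAI Python SDK with a placeholder model name:

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a technical support agent for a developer tool.
Your tone is helpful and patient, never condescending.
When users have errors, always ask for: error message, OS, and version.
Provide code examples when relevant.
If you don't know something, say so and suggest where to find the answer."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},   # the rules, set once
        {"role": "user", "content": "It's not working!"},
    ],
)

print(response.choices[0].message.content)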

Type 5: Persona-Based Prompting

Similar to instruction prompts, but focused on getting a specific expert perspective.

When to use it:

  • You want specialized knowledge
  • You need a fresh perspective
  • You're evaluating from a specific angle

Example:

You're a venture capitalist who's seen hundreds of SaaS pitches.

Evaluate this startup idea from your perspective:
[Idea description]

What concerns would you have? What excites you? Would you invest?

The AI taps into patterns it's seen in VC thinking and applies them.

Template:

You are a [specific expert/role].

Analyze [topic] from your expert perspective.
Focus on [what matters to this role].

Warning: AI can be overconfident when role-playing experts. Use this to generate perspectives, not as final truth.

I use persona prompts when I want to see something from a different angle than my own.

Type 6: Role-Play Prompts

This is for interactive scenarios where you and the AI both play parts.

When to use it:

  • Practice (interviews, sales calls, presentations)
  • Training scenarios
  • Working through conversations
  • Testing how something might go

Example:

Let's role-play a customer support scenario.

You're an angry customer whose payment failed but still got charged.
I'm the support agent.
Be realistic—start upset, but willing to be helped if I handle it well.

Go ahead and start.

Then you respond, and it continues back and forth.

Template:

Let's role-play [scenario].
You are [role 1], I'm [role 2].
Context: [situation details]
[Any behavioral guidance]

Start by [first action].

This is fantastic for preparing for tough conversations. I've used it before salary negotiations, difficult client calls, and investor pitches.

Type 7: Format-Specific Prompts

When you need output in a precise structure.

When to use it:

  • You're feeding AI output into code
  • You want tables or lists
  • You need JSON, CSV, or specific formatting
  • Consistency across many outputs matters

Example:

List 5 potential blog topics for a productivity app.

Format as JSON:
{
  "topics": [
    {
      "title": "...",
      "target_audience": "...",
      "keyword": "..."
    }
  ]
}

The AI follows the structure exactly.

Template:

[Task description]

Format as [JSON/Table/CSV/etc]:
[Example of the structure]

I use this constantly when building tools or automations. Makes parsing AI output so much easier.
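
As a rough illustration of why this matters for automation, here's a sketch that asks for the JSON structure above and parses it. It assumes the OpenAI Python SDK and a placeholder model; in practice you'd still want to handle malformed output, which is what the try/except is for.

import json
from openai import OpenAI

client = OpenAI()

prompt = """List 5 potential blog topics for a productivity app.

Respond with only valid JSON in this shape:
{"topics": [{"title": "...", "target_audience": "...", "keyword": "..."}]}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)

raw = response.choices[0].message.content

try:
    data = json.loads(raw)  # structured output drops straight into your code
    for topic in data["topics"]:
        print(topic["title"], "-", topic["keyword"])
except (json.JSONDecodeError, KeyError):
    print("Model didn't return the expected JSON; retry or tighten the prompt.")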

Type 8: Constraint-Based Prompts

Adding limits to force better, more creative solutions.

When to use it:

  • Brainstorming with focus
  • Forcing innovation
  • Real-world limitations exist
  • Open-ended results are too scattered

Example:

Suggest 5 marketing ideas for our product launch.

Constraints:
- Budget under $5,000
- Must be executable in 2 weeks
- Target software developers specifically
- No paid advertising

Go.

Constraints make the AI think harder and produce more practical ideas.

Template:

[Request]

Constraints:
- [Limit 1]
- [Limit 2]
- [Limit 3]

Weirdly, adding constraints often makes outputs more creative, not less.

Type 9: Iterative Prompts

Asking for multiple versions or progressive refinement.

When to use it:

  • Creating copy or content
  • Exploring variations
  • Progressive improvement
  • Not sure exactly what you want yet

Example:

Write a cold email subject line to get startup founders to book a demo.

First, give me 5 different approaches.
Then, refine the best 2.
Finally, pick the winner and explain why.

You're guiding the AI through a refinement process.

Template:

[Task]

First, [initial version/options].
Then, [refinement step].
Finally, [selection or final version].

I use this for anything customer-facing. The first version is rarely the best version.

Type 10: Comparative Prompts

Systematically comparing options.

When to use it:

  • Evaluating tools or services
  • Making decisions between options
  • Understanding tradeoffs
  • Building decision frameworks

Example:

Compare Notion vs Airtable for project management.

For each, analyze:
- Best use cases
- Learning curve
- Pricing
- Collaboration features
- Limitations
- Who should choose it

Then recommend: which for a 10-person startup?

The side-by-side analysis makes tradeoffs obvious.

Template:

Compare [Option A] vs [Option B] for [use case].

For each, analyze:
- [Dimension 1]
- [Dimension 2]
- [Dimension 3]

Recommendation: [which and why]

I do this before making any significant tool or strategy decision.

Type 11: Multi-Turn Conversation Prompts

Back-and-forth dialogue to explore something deeply.

When to use it:

  • Learning something new
  • Debugging unclear problems
  • Refining ideas through dialogue
  • When the question evolves

Example:

User: I want to improve our onboarding flow.
AI: [Asks clarifying questions about current flow]
User: [Provides details]
AI: [Suggests specific improvements]
User: What about [specific concern]?
AI: [Addresses it]

The conversation builds context over multiple exchanges.

This is how I use AI for learning. Ask, follow up, dig deeper, challenge assumptions.

The key is building on what came before rather than starting fresh each time.
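
If you're doing this through an API rather than a chat window, "building on what came before" literally means resending the growing message history on every turn. A quick sketch of that loop (OpenAI SDK assumed, model name is a placeholder):

from openai import OpenAI

client = OpenAI()
history = []  # the whole conversation lives here

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=history,     # every prior turn goes back in, so context builds
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("I want to improve our onboarding flow."))
print(ask("Users drop off right after the email verification step."))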

Type 12: Hybrid Prompts

Combining multiple techniques for complex tasks.

When to use it:

  • High-stakes decisions
  • Complex deliverables
  • When one approach isn't enough
  • You need comprehensive analysis

Example:

You are a product strategist [Persona].

Compare these two features we could build next [Comparative]:
- Feature A: [description]
- Feature B: [description]

For each, think through [Chain-of-Thought]:
(1) User impact
(2) Engineering cost
(3) Revenue potential
(4) Strategic fit

Then provide your recommendation as [Format-Specific]:
{
  "recommended": "...",
  "rationale": "...",
  "risks": [...],
  "next_steps": [...]
}

You're stacking techniques to get exactly what you need.

The catch: These can get complicated. Start simple and add complexity only when needed.

I reserve hybrid prompts for the most important 10% of my AI interactions.

How to Choose the Right Type

Here's my actual decision process:

Step 1: Is this simple or complex?

  • Simple → Direct prompt
  • Complex → Keep going

Step 2: Do I have good examples?

  • Yes → Few-shot
  • No → Keep going

Step 3: Does this need reasoning?

  • Yes → Chain-of-thought
  • No → Keep going

Step 4: Do I need a specific format?

  • Yes → Format-specific
  • No → Keep going

Step 5: Am I comparing options?

  • Yes → Comparative
  • No → Keep going

Step 6: Do I want multiple versions?

  • Yes → Iterative
  • No → Direct (fallback)

After doing this for a while, it becomes instinctive.
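
If it helps to see the flow written down, here's the same checklist as a tiny helper function. It's just for illustration; the function and flag names are made up, and the AI never needs to see any of this.

def pick_prompt_type(
    is_complex: bool,
    have_examples: bool,
    needs_reasoning: bool,
    needs_format: bool,
    comparing_options: bool,
    want_variations: bool,
) -> str:
    """Walk the six-step checklist and return a prompt type."""
    if not is_complex:
        return "direct"
    if have_examples:
        return "few-shot"
    if needs_reasoning:
        return "chain-of-thought"
    if needs_format:
        return "format-specific"
    if comparing_options:
        return "comparative"
    if want_variations:
        return "iterative"
    return "direct"  # fallback

# e.g. a complex task where you already have good examples:
print(pick_prompt_type(True, True, False, False, False, False))  # few-shot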

Common Mistakes I See

Using direct prompts for everything

"Write me marketing copy" gets generic garbage.

"Here are 3 examples of copy I love: [examples]. Write something similar for [product]" gets something usable.

Overcomplicating simple requests

Don't use chain-of-thought for "What's the capital of France?" Just ask directly.

Not combining techniques

Some of my best results come from "You're a [persona], here are examples [few-shot], think through [chain-of-thought]."

Forgetting to specify format

If you need JSON, say so upfront. Otherwise you'll get a paragraph and have to ask again.

Real-World Examples

For content creation: Few-shot + Iterative

Here are 3 headlines I love: [examples]
Write 10 in that style for [topic]
Then refine the best 3
Pick the winner

For decisions: Persona + Chain-of-thought

You're a [domain expert]
Should we [decision]?
Think through: [factors 1-5]
Recommendation?

For analysis: Chain-of-thought + Format-specific

Analyze [data/situation]
Think through: [angles 1-4]
Format findings as: [structure]

For learning: Multi-turn + Persona

Explain [concept] like you're a patient teacher
[Ask follow-ups based on answers]
[Dig into confusing parts]

Building Your Prompt Library

Here's what I actually do:

  1. Keep a note file of prompts that worked really well
  2. Tag them by type and use case
  3. When facing a new task, check if I have something similar
  4. Adapt the template, don't start from scratch

After a few months, you'll have prompts for 80% of what you do regularly.
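
There's no required format for the note file, but if you'd rather keep the library in code, one tagged entry might look something like this. The fields and names here are made up for illustration.

prompt_library = [
    {
        "name": "casual-rewrite",  # made-up example entry
        "type": "few-shot",
        "use_case": "rewriting stiff copy in a friendly tone",
        "template": "Here are examples:\n[input] -> [output]\n\nNow do the same for:\n[your input]",
    },
    # ...one entry per prompt that actually worked
]

# When a new task comes up, grab the closest match instead of starting from scratch.
matches = [p for p in prompt_library if p["type"] == "few-shot"]
print(matches[0]["template"])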

Check out our guide on managing and organizing prompts for the system I use.

The Bigger Picture

Most people treat AI like a search engine. Ask a question, get an answer.

But AI is more like a really smart person who needs context and structure to do their best work.

These 12 types are really just ways of giving that structure.

The pattern I see in people who get consistently great AI results:

  1. They know these types exist
  2. They match the type to the task
  3. They have templates ready to adapt
  4. They iterate based on what works

That's it. Not magic, just knowing which tool to use when.

Start Here

If you're new to this:

Pick the 3 types you'd use most often. For most people that's:

  • Few-shot (for consistent outputs)
  • Chain-of-thought (for decisions)
  • Format-specific (for structured data)

Create one template for each.

Use them this week.

Refine based on results.

After a month, add a few more types as you hit their use cases.

Within a few months, you'll instinctively know which approach to use. And your AI results will be dramatically better.


Want to go deeper on specific techniques?

Start with chain-of-thought prompting if you make complex decisions.

Check out few-shot prompting examples if you need consistent formatting.

Or read our beginner's guide to prompt engineering if you're just getting started.

For practical applications, see our guide on AI prompts for coding or prompting for copywriting and sales.
