The Perfect Prompt Structure: A Framework Top AI Engineers Use
Learn the CRISPS framework that professional prompt engineers use to structure every prompt. Simple, repeatable, and proven to work.
Most people write prompts randomly. A sentence here, some context there, a dash of hope that ChatGPT figures out what they want.
The results? Wildly inconsistent. Some outputs are great. Others miss the mark entirely.
The best prompt engineers don't rely on luck. They use a framework. A repeatable structure that removes guesswork and produces predictable, professional-quality results every single time.
I'm going to show you the framework I use—and the one used by most top prompt engineers: CRISPS.
The Psychology Behind Prompt Structure
Before we dive into the framework, understand this: AI models work best when you mimic how expert humans think.
When you ask an expert something, you don't just say "help me." You give context. You explain your situation. You ask them to think from a specific perspective. You specify how detailed your answer should be.
Prompt structures mirror this natural human conversation pattern. They guide the AI through the same thinking process an expert would follow.
That's why structured prompts get better results.
The CRISPS Framework Explained
CRISPS stands for:
- C – Context: the situation, background, and problem you're solving
- R – Role: the expertise the AI should adopt
- I – Instruction: the specific, detailed task you want done
- S – Style: tone, format, length, and depth
- P – Perspective: the audience for the answer
Let me break down each section with real examples:
1. Context: Set the Stage (2-4 sentences)
What it does: Provides background so the AI understands what matters to you.
Why it matters: Without context, AI doesn't know if you need a casual explanation or technical documentation. It doesn't know if you're a beginner or an expert.
What to include:
- What's the situation you're in?
- What problem are you trying to solve?
- What have you already tried (if relevant)?
- What constraints do you have (time, budget, audience)?
Example (Good): "I'm a marketing manager at a B2B SaaS company. We're struggling with customer churn—losing about 8% of customers monthly. Our product is strong, so I think it's a retention/communication problem, not a product problem. We have 6 weeks to implement a solution before quarterly business review."
Why this works: The AI now knows:
- You're in B2B SaaS (different context than B2C)
- The problem is specifically churn (8%)
- You've already ruled out product issues
- There's a time constraint (6 weeks)
- This is important to leadership (quarterly business review)
Example (Bad): "Help me with customer retention"
Why it fails: AI has no context. It could recommend anything from free trials to loyalty programs to customer success programs. It doesn't know what matters to you.
2. Role: Assign Expertise (1-2 sentences)
What it does: Tells the AI what professional expertise to adopt.
Why it matters: The same question asked to a "marketing expert" vs. a "CEO" gets different answers. The role guides how the AI thinks about the problem.
Best practice: Be specific about the role AND the level of expertise.
Example (Good): "You are a customer success strategist with 10+ years of experience reducing churn at B2B SaaS companies. You've seen hundreds of churn problems and know what actually works vs. what sounds good."
Why this works:
- Specifies the role (customer success strategist)
- Specifies the level (10+ years)
- Specifies the domain (B2B SaaS, churn reduction)
- Specifies the expertise type (proven track record, not theory)
Example (Bad): "You are an expert"
Why it fails: "Expert" means nothing. Expert at what? How experienced? In what context?
3. Instruction: Be Specific and Detailed (2-4 sentences)
What it does: States exactly what you want done. No ambiguity.
Why it matters: Vague instructions get vague results. Specific instructions get specific results.
Best practice: Break down complex tasks into numbered sub-tasks. State output format. Include any specific requirements.
Example (Good): "Create a customer retention strategy for the next 6 weeks that includes: (1) A root cause analysis of why we're losing customers—specific to B2B SaaS, not generic, (2) Three tactical interventions we can implement immediately (this week), (3) A medium-term program (weeks 2-6) for improving retention, (4) Success metrics and KPIs we should track. For each intervention, include effort required (high/medium/low) and expected impact on churn."
Why this works:
- Breaks task into clear components (1, 2, 3, 4)
- Specifies context for each (B2B SaaS, not generic; this week, weeks 2-6)
- Specifies output format (specific metrics per intervention)
- Specifies evaluation criteria (effort + impact)
Example (Bad): "Give me ideas for reducing customer churn"
Why it fails: Completely open-ended. "Ideas" could be 3 sentences or 30 pages. AI doesn't know what format, level of detail, or type of ideas you want.
4. Style: Control Tone and Format (1-2 sentences)
What it does: Specifies how the output should sound and be formatted.
Why it matters: The same information formatted as bullet points vs. prose vs. a table gets used differently. Tone affects how ideas are received.
What to include:
- Tone (conversational, professional, formal, casual)
- Format (bullet points, numbered list, prose paragraphs, tables)
- Length constraints (under 1,000 words, concise, detailed)
- Technical level (jargon-heavy, explain terms, completely non-technical)
Example (Good): "Use a professional but conversational tone. Format the tactics as numbered items with 2-3 bullet points each explaining the approach. For the medium-term program, use a table with timeline, tactics, and KPIs. Keep the entire response under 2,000 words. Explain business concepts but avoid marketing jargon—this is for engineers and operations people reading the strategy."
Why this works:
- Specifies tone (professional, conversational—not stuffy)
- Specifies format per section (numbered, bullets, then table)
- Specifies length (hard cap: 2,000 words)
- Specifies audience knowledge level (explain concepts, avoid jargon)
Example (Bad): "Make it sound good and not too long"
Why it fails: "Good" is subjective. "Not too long" is undefined. The AI can't match your expectations because you haven't been specific.
5. Perspective: Define the Audience (1 sentence)
What it does: Specifies who will read/use this output and what they care about.
Why it matters: The same information pitched to a CEO vs. a team lead vs. a customer looks completely different. Perspective helps the AI calibrate complexity and emphasis.
Example (Good): "This strategy will be presented to the executive team (VP of Product, VP of Sales, VP of Operations)—people who care about ROI, timeline, and what resources we need. They're not customer success experts, so explain things clearly but assume they understand business fundamentals."
Why this works:
- Specifies exact audience (VP-level)
- Specifies what they care about (ROI, timeline, resources)
- Specifies their expertise level (business fundamentals, but not CS experts)
Example (Bad): "For the team"
Why it fails: "The team" is too vague. Which team? What do they know? What do they care about?
Putting It Together: The Complete CRISPS Formula
Here's how all five elements fit together:
[CONTEXT]: [background and situation]
[ROLE]: You are a [specific role] with [relevant expertise].
[INSTRUCTION]: Create [specific deliverable] that includes:
- [specific element 1]
- [specific element 2]
- [specific element 3]
[STYLE]: [Tone]. Format as [structure]. Keep it [length]. [Technical level].
[PERSPECTIVE]: This is for [audience] who [what they care about].
Full Example:
I'm a marketing manager at a B2B SaaS company. We're losing about 8% of customers monthly. The product is strong, so this is a retention/communication problem. We need a solution within 6 weeks.
You are a customer success strategist with 10+ years reducing churn at B2B SaaS companies. You've seen what actually works versus what sounds good.
Create a retention strategy including: (1) Root cause analysis of our churn (specific to B2B SaaS), (2) Three tactics we can implement this week, (3) A medium-term program (weeks 2-6), (4) Success metrics we should track. For each tactic, note effort required (high/medium/low) and expected impact.
Use professional but conversational tone. Format tactics as numbered items with 2-3 bullets each. Use a table for the timeline. Keep it under 2,000 words. We're not CS experts, so explain clearly but assume business knowledge.
This is for our executive team (VP Product, VP Sales, VP Operations) who care about ROI, timeline, and resource requirements.
That's a complete CRISPS prompt. Clear, structured, and specific.
Why CRISPS Actually Works
In practice, structured prompts like CRISPS consistently outperform unstructured prompts on the same AI model, often by a wide margin.
Why? Because:
- Context prevents misunderstandings about what matters
- Role anchors the AI in relevant expertise
- Instruction removes ambiguity (AI doesn't have to guess)
- Style ensures the output is actually usable
- Perspective helps the AI calibrate tone and complexity
Together, these five elements guide the AI through a deliberate thinking process, the same way you'd guide a human expert through solving a problem.
CRISPS Template for Quick Reference
If remembering CRISPS feels overwhelming, use this template:
I'm [situation/role].
[Context about the problem].
You are a [role] with [expertise].
Create/write/analyze [specific deliverable] that includes:
- [specific requirement 1]
- [specific requirement 2]
- [specific requirement 3]
Format as [format]. Tone: [tone]. Length: [length].
This is for [audience] who [what they care about].
Just fill in the brackets and you have a professional-quality prompt.
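If you work with AI programmatically, the fill-in-the-brackets template translates directly into code. Here's a minimal Python sketch, with class and field names of my own invention (not part of any standard library), that assembles the five CRISPS elements into one prompt string:

```python
from dataclasses import dataclass

@dataclass
class CrispsPrompt:
    """Holds the five CRISPS elements; field names are illustrative."""
    context: str      # situation, background, constraints
    role: str         # expertise the AI should adopt
    instruction: str  # the specific, detailed task
    style: str        # tone, format, length, technical level
    perspective: str  # who the output is for and what they care about

    def render(self) -> str:
        """Join the elements in CRISPS order, separated by blank lines."""
        parts = [self.context, self.role, self.instruction,
                 self.style, self.perspective]
        return "\n\n".join(p.strip() for p in parts)

prompt = CrispsPrompt(
    context="I'm a marketing manager at a B2B SaaS company losing ~8% of customers monthly.",
    role="You are a customer success strategist with 10+ years reducing churn at B2B SaaS companies.",
    instruction="Create a 6-week retention strategy with a root cause analysis, three immediate tactics, and KPIs.",
    style="Professional but conversational. Numbered tactics, a table for the timeline, under 2,000 words.",
    perspective="This is for the executive team, who care about ROI, timeline, and resource requirements.",
)
print(prompt.render())
```

Filling the dataclass forces you to think through all five elements before you send anything, which is the whole point of the template.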
Examples of CRISPS Applied to Different Tasks
Example 1: Writing a Sales Email
Context: I'm launching a new product feature and need to alert existing customers.
Role: You're a sales copywriter who specializes in feature announcements that drive adoption (not just awareness).
Instruction: Write a 200-word email announcing our new API that gets existing technical users excited. Include: why we built it, what problems it solves, why they should care, and a specific action (try it for free).
Style: Conversational, email format, short paragraphs. Avoid corporate speak. Assume technical audience.
Perspective: For technical decision-makers who are busy and skeptical about feature announcements.
Example 2: Creating Training Content
Context: Our onboarding takes too long. New hires take 3 weeks to become productive.
Role: You're an education specialist who designs training that sticks.
Instruction: Create a module teaching new hires how to use our main product. Should cover: core workflows, common mistakes, hands-on exercises. Target: 45 minutes to complete.
Style: Mix of written explanation, bullet points, and step-by-step walkthroughs. Assume no prior knowledge of our product.
Perspective: For engineers with technical background but no domain expertise in our industry.
Example 3: Strategic Analysis
Context: We're deciding whether to enter a new market segment. Need to evaluate the opportunity.
Role: You're a market analyst who has helped startups enter new segments, and who has seen where those efforts fail.
Instruction: Analyze our opportunity in the enterprise segment. Include: market size, competition, barriers to entry, our competitive advantages, risks, and recommendation (enter or don't).
Style: Professional analysis format. Bullet points for key points. Include data/evidence. 1,500-2,000 words.
Perspective: For the executive team who care about ROI, risk, and strategic fit.
Common Mistakes When Using CRISPS
Mistake 1: Making Context Too Long
Bad: 200-word paragraph explaining your entire company history
Good: 2-4 sentences explaining the specific situation and constraint
Context should be relevant background, not everything. Keep it focused.
Mistake 2: Being Vague in Instructions
Bad: "Create a strategy"
Good: "Create a strategy that includes: (1) market analysis, (2) three tactical recommendations with effort/impact, (3) timeline"
The more specific, the better. Break down deliverables into numbered components.
Mistake 3: Forgetting About Perspective
Bad: Just writing instructions without specifying who'll read them
Good: "This is for the executive team who care about ROI and timeline, not technical details"
Perspective changes everything. It determines complexity, terminology, and emphasis.
Mistake 4: Combining Multiple Tasks in One Prompt
Bad: "Write an email AND create a landing page AND design a social media strategy"
Good: One CRISPS prompt per task. Chain them together if you need multiple outputs.
AI works better when focused on one task at a time.
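Chaining can be as simple as promoting one task's output into the next task's context. A hypothetical sketch (the `chain` helper and variable names are mine, for illustration only):

```python
def chain(previous_output: str, next_instruction: str) -> str:
    """Build the next prompt by using the prior step's output as context."""
    return ("Context: here is the output of the previous step:\n"
            f"{previous_output}\n\n"
            f"{next_instruction}")

email_prompt = "Write a 200-word launch email for our new API."
# Suppose the model returned this draft for email_prompt:
email_text = "(model's email draft goes here)"
# Feed it forward as context for the next, separate task:
landing_prompt = chain(email_text,
                       "Write a landing page headline consistent with this email.")
```

Each prompt stays focused on a single deliverable, while the chain carries shared context forward.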
Mistake 5: Not Iterating on Results
Bad: Using the first output without feedback
Good: "Here's the output. Can you adjust X and expand Y?"
The best results come from iteration. Use the initial response as a starting point.
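Iteration maps naturally onto the message-list format most chat APIs use: instead of starting over, you append the AI's reply and your feedback to the running conversation. A minimal sketch (no real API call is made; the helper functions are hypothetical):

```python
def start_conversation(crisps_prompt: str) -> list[dict]:
    """Begin a conversation with the structured prompt as the first user turn."""
    return [{"role": "user", "content": crisps_prompt}]

def add_feedback(messages: list[dict], ai_output: str, feedback: str) -> list[dict]:
    """Record the AI's reply, then append an iteration request.
    Keeping the full history lets the model refine its own earlier answer."""
    messages.append({"role": "assistant", "content": ai_output})
    messages.append({"role": "user", "content": feedback})
    return messages

convo = start_conversation("...full CRISPS prompt here...")
# After receiving a first draft from the model:
convo = add_feedback(convo, "...first draft...",
                     "Good start. Shorten the root cause section and expand tactic 2.")
```

The feedback message ("adjust X, expand Y") is itself a mini-instruction, so the same specificity rules apply to it.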
When CRISPS Works Best
CRISPS is perfect for:
- ✅ Writing (emails, content, documentation)
- ✅ Analysis (market research, competitive analysis)
- ✅ Strategy development (go-to-market, retention plans)
- ✅ Brainstorming (ideation, problem-solving)
- ✅ Technical explanations (documentation, tutorials)
CRISPS is less necessary for:
- ❌ Simple factual questions ("What's the capital of France?")
- ❌ Creative freewriting where structure would limit ideas
- ❌ Quick clarifications ("Explain OAuth")
For simple questions, you can skip CRISPS. For anything complex, it's worth the extra 30 seconds of setup.
Making CRISPS a Habit
The first time you use CRISPS, it feels like extra work. You have to think through all five elements.
By the 10th time? It becomes automatic.
You'll start naturally thinking in terms of context, role, instruction, style, and perspective. It becomes your default way of asking AI (and humans) for help.
That's when you see the biggest payoff. Not just better AI outputs, but better thinking about what you actually need.
Your Next Steps
- Pick one task you're doing this week
- Sketch out CRISPS for that task
- Test it against an unstructured version of the same request
- Compare results and see the difference
- Adjust based on what worked
Start with one task. Notice the quality improvement. You'll be convinced.
Master Prompt Engineering Fundamentals
Start with our complete beginner's guide to prompt engineering to build a foundation.
Explore different prompt types and techniques in our comprehensive guide to types of prompts to understand all the frameworks available.
Learn advanced reasoning approaches like chain-of-thought prompting for step-by-step problem-solving.
Understand tree-of-thought prompting for exploring multiple solution paths simultaneously.
Control tone and randomness with our guide to temperature and creativity settings.
Avoid costly mistakes with our mistakes to avoid guide specific to structured prompting.
Finally, integrate everything into your workflows with our AI productivity guide.