Prompting Techniques That Actually Work
How you structure prompts determines output quality. These five techniques consistently improve results: showing examples, requesting reasoning, using XML tags, constraining format, and breaking tasks into steps.
Few-Shot: Show the Pattern
Include 2-3 examples in your prompt. The model learns the pattern and applies it to new inputs.
Example: Writing Claude Skill descriptions
You're creating a new skill and need a description that follows the same pattern as existing skills. Instead of describing what you want:
I need a skill description for a blog post writer

Show examples from existing skills:
Here are skill descriptions that work well:
 
---
name: code-reviewer
description: Review code for bugs, security issues, and best practices. Use when user asks to review code or provides a pull request.
---
 
---
name: debug-helper  
description: Debug errors by analyzing logs, stack traces, and system state. Use when user shares error messages or reports unexpected behavior.
---
 
Now write a description for a blog post writing skill that follows this pattern:
---
name: write-post
description:

The model sees the structure (what it does + when to use it) and generates a consistent description.
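If you build prompts like this in code, few-shot assembly is just string construction. Here is a minimal Python sketch; the example records and the skill name are illustrative, not taken from an existing codebase:

```python
# Illustrative example records; swap in your real skill descriptions.
EXAMPLES = [
    {"name": "code-reviewer",
     "description": ("Review code for bugs, security issues, and best practices. "
                     "Use when user asks to review code or provides a pull request.")},
    {"name": "debug-helper",
     "description": ("Debug errors by analyzing logs, stack traces, and system state. "
                     "Use when user shares error messages or reports unexpected behavior.")},
]

def build_few_shot_prompt(new_skill_name: str, task: str) -> str:
    """Show 2-3 existing examples, then ask for a new entry in the same format."""
    shots = "\n\n".join(
        f"---\nname: {ex['name']}\ndescription: {ex['description']}\n---"
        for ex in EXAMPLES
    )
    return (
        "Here are skill descriptions that work well:\n\n"
        f"{shots}\n\n"
        f"Now write a description for {task} that follows this pattern:\n"
        f"---\nname: {new_skill_name}\ndescription:"
    )

print(build_few_shot_prompt("write-post", "a blog post writing skill"))
```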
Chain-of-Thought: Show Your Work
Ask the model to explain its reasoning before giving an answer. Add "think step by step" or "show your reasoning."
Example: Debugging deployment failures
You have a failed deployment with logs. Standard prompt:
This deployment failed. What's wrong?
 
[paste logs]

Chain-of-thought prompt:
This deployment failed. Analyze the logs step by step:
1. What error occurred?
2. What was the system trying to do when it failed?
3. What's the root cause?
4. How do we fix it?
 
[paste logs]

Breaking down the analysis leads to better diagnosis. The model works through each step instead of jumping to conclusions.
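The numbered-steps pattern is easy to templatize if you reuse it across tasks. A rough sketch; the function name and step list are just the debugging example above turned into code:

```python
def build_cot_prompt(task: str, steps: list[str], data: str) -> str:
    """Ask for explicit step-by-step reasoning before the final answer."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return f"{task} Work through this step by step:\n{numbered}\n\n{data}"

prompt = build_cot_prompt(
    task="This deployment failed.",
    steps=[
        "What error occurred?",
        "What was the system trying to do when it failed?",
        "What's the root cause?",
        "How do we fix it?",
    ],
    data="[paste logs]",
)
print(prompt)
```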
XML Tags: Structure Your Prompt (Claude-Specific)
Claude was trained to pay special attention to XML tags. Use them to organize sections of your Claude prompts:
<task>
Review this code for security vulnerabilities
</task>
 
<code>
def authenticate(username, password):
    query = f"SELECT * FROM users WHERE name='{username}' AND pass='{password}'"
    return db.execute(query)
</code>
 
<focus_areas>
- SQL injection risks
- Password handling
- Input validation
</focus_areas>

Tags make it clear what each section is. The model knows <code> is what to analyze and <focus_areas> is what to look for.
Common useful tags:
- <instructions> - what to do
- <context> - background information
- <examples> - sample inputs/outputs
- <constraints> - rules to follow
For other models like GPT: use markdown headers (## Instructions, ## Context) or JSON structure instead. Those models generally respond better to these formats than to XML tags.
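A small helper keeps the tag structure consistent if you assemble these prompts programmatically. A sketch assuming Python 3.9+; the section names and the code snippet passed in are whatever you choose:

```python
def xml_prompt(sections: dict[str, str]) -> str:
    """Wrap each prompt section in an XML tag named after its key."""
    return "\n\n".join(
        f"<{tag}>\n{body.strip()}\n</{tag}>" for tag, body in sections.items()
    )

prompt = xml_prompt({
    "task": "Review this code for security vulnerabilities",
    "code": "def authenticate(username, password): ...",  # snippet under review
    "focus_areas": "- SQL injection risks\n- Password handling\n- Input validation",
})
print(prompt)
```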
Output Constraints: Specify Format
Tell the model exactly what format you want. Be explicit about structure, length, or schema.
JSON output:
Extract key information from this text and return it as JSON with these fields:
- timestamp (ISO 8601)
- severity (error/warning/info)
- message (string)
- service (string)
 
[log entry]

Markdown with specific sections:
Write a pull request description with exactly these sections:
 
## Summary
[one paragraph]
 
## Changes
[bulleted list]
 
## Testing
[how to verify]

Constraints prevent the model from adding extra commentary or deviating from the structure you need.
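Format constraints pair well with a validation step on your side, since the model can still drift. A minimal sketch using only the standard library; the required field names match the JSON example above:

```python
import json

REQUIRED_FIELDS = {"timestamp", "severity", "message", "service"}

def parse_log_extraction(model_output: str) -> dict:
    """Parse the model's JSON reply and fail loudly if it breaks the constraints."""
    data = json.loads(model_output)  # raises an error on malformed JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Model output missing fields: {sorted(missing)}")
    if data["severity"] not in {"error", "warning", "info"}:
        raise ValueError(f"Unexpected severity: {data['severity']!r}")
    return data
```

If validation fails, you can feed the error message back to the model and ask it to correct the output.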
Prompt Chaining: Break It Down
Split complex tasks into multiple prompts. Use the output of one prompt as input to the next.
Example: Turning raw notes into a blog post
Don't ask for everything at once:
❌ Take these notes and write a polished blog post with proper structure

Chain prompts instead:
Prompt 1: Extract key points
From these raw notes, extract the 3-4 main ideas:
[paste notes]

Prompt 2: Create outline
Turn these key points into a blog post outline:
[paste extracted points]

Prompt 3: Write introduction
Write an introduction paragraph for this outline:
[paste outline]

Each step produces better output because the model focuses on one thing at a time.
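In code, chaining is just feeding one response into the next request. A sketch using the Anthropic Python SDK; the model name and notes.txt path are placeholders, and error handling is omitted:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single user message and return the text of the reply."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whichever model you have access to
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

notes = open("notes.txt").read()  # placeholder path to your raw notes
key_points = ask(f"From these raw notes, extract the 3-4 main ideas:\n\n{notes}")
outline = ask(f"Turn these key points into a blog post outline:\n\n{key_points}")
intro = ask(f"Write an introduction paragraph for this outline:\n\n{outline}")
print(intro)
```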
Combining Techniques
You can mix these approaches:
<task>
Write a skill description following these examples.
Think through: what does this skill do, when should Claude use it, what tools does it need?
</task>
 
<examples>
[few-shot examples here]
</examples>
 
<new_skill>
Name: api-doc-generator
Purpose: Generate API documentation from code
</new_skill>
 
<format>
Return as YAML frontmatter with name, description fields
</format>

This combines: XML structure, few-shot examples, chain-of-thought reasoning, and output constraints.
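Assembled in code, the combination is just the pieces concatenated. A standalone sketch; the examples placeholder is left as in the prompt above:

```python
# Build the combined prompt by wrapping each piece in an XML tag.
sections = {
    "task": ("Write a skill description following these examples.\n"
             "Think through: what does this skill do, when should Claude use it, "
             "what tools does it need?"),
    "examples": "[few-shot examples here]",
    "new_skill": "Name: api-doc-generator\nPurpose: Generate API documentation from code",
    "format": "Return as YAML frontmatter with name, description fields",
}
combined = "\n\n".join(f"<{tag}>\n{body}\n</{tag}>" for tag, body in sections.items())
print(combined)
```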
What Actually Matters
Use these techniques when they solve a real problem:
- Few-shot: Output format or style matters
- Chain-of-thought: Task requires reasoning or verification
- XML tags: Prompt has multiple distinct sections (Claude only)
- Output constraints: You need specific structure (JSON, markdown, etc.)
- Prompt chaining: Task is too complex for one prompt
Skip the technique if a simple, direct prompt works.
References
- Chain-of-Thought Prompting - Wei et al., Google Research
- Anthropic Prompt Engineering Guide - Claude-specific prompting best practices
- Claude's Use of XML Tags - How Claude processes XML structure
- OpenAI Prompt Engineering Guide - GPT-specific prompting strategies
- Google Gemini Prompting Guide - Best practices for Gemini models