Prompt Caching: Design for Reuse
Structure prompts to maximize Anthropic's prompt caching, reducing costs by up to 90% and latency by up to 85% for repeated context.
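The core idea is ordering: put large, stable content (system instructions, reference material, tool definitions) at the start of the request and mark the end of that prefix with a `cache_control` breakpoint, so repeated calls reuse the cached prefix while only the variable user turn changes. A minimal sketch of that request shape, assuming the Anthropic Messages API layout (the model name and `LONG_REFERENCE_DOC` are placeholders):

```python
# Placeholder for thousands of tokens of stable reference material.
LONG_REFERENCE_DOC = "…imagine a large, rarely-changing knowledge base here…"

def build_request(user_question: str) -> dict:
    """Build a Messages API payload where the stable prefix is cacheable.

    Everything before (and including) the block carrying cache_control
    forms the cached prefix; per-request content comes after it.
    """
    return {
        "model": "claude-sonnet-4-5",  # assumption: any current model name works
        "max_tokens": 1024,
        "system": [
            # Stable instructions first — identical on every request.
            {"type": "text", "text": "You are a support assistant."},
            {
                "type": "text",
                "text": LONG_REFERENCE_DOC,
                # Cache breakpoint: marks the end of the reusable prefix.
                "cache_control": {"type": "ephemeral"},
            },
        ],
        # Variable content goes last so it never invalidates the prefix.
        "messages": [{"role": "user", "content": user_question}],
    }

request_payload = build_request("How do I reset my password?")
```

Because caching matches the exact token prefix, even a one-character change inside the system blocks forces a fresh cache write; keeping anything volatile (timestamps, user data) out of the prefix is what makes the reuse rates possible.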