Prompt Library
Prompting isn't one-size-fits-all. Each model family has different strengths and best practices. This library gives you model-specific guidance backed by official documentation, plus task-based templates you can adapt.
Model-Specific Prompting Guides
Each model family responds differently to prompts. What works for GPT may not work for Claude or Gemini. These guides link to the official best practices — which are updated with every new model release.
OpenAI (GPT & o-series)
Excels at following detailed instructions. Supports system messages, vision, and structured outputs. Use the system message for persistent context.
Reasoning models think before responding. Do NOT use system messages — use the "developer" role instead. Keep prompts concise; the model handles chain-of-thought internally. Do not say "think step by step" — it already does.
Same format as GPT-4o but benefits from more explicit instructions. Be specific about output format since smaller models need clearer guidance.
Key Tips
- ✓ Use system messages to set persistent behaviour (except o-series)
- ✓ Provide examples in the prompt (few-shot) for consistent formatting
- ✓ Request structured output with JSON mode for reliable parsing
- ✓ For o-series reasoning models, give the problem clearly and let the model think
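The tips above can be combined into a single Chat Completions request. A minimal sketch, built as a plain dict so the shape is visible: a system message for persistent behaviour, one few-shot example to lock in the format, and JSON mode for structured output. The model name and task are illustrative, not a recommendation.

```python
def build_openai_request(user_input: str) -> dict:
    """Assemble an OpenAI-style chat request applying the tips above."""
    return {
        "model": "gpt-4o",  # illustrative; swap in your model
        "response_format": {"type": "json_object"},  # JSON mode for reliable parsing
        "messages": [
            # Persistent behaviour lives in the system message (not for o-series)
            {"role": "system",
             "content": "You are a support triager. Reply in JSON with "
                        "keys 'category' and 'urgency'."},
            # One few-shot example to demonstrate the expected output
            {"role": "user", "content": "The app crashes on login."},
            {"role": "assistant",
             "content": '{"category": "bug", "urgency": "high"}'},
            {"role": "user", "content": user_input},
        ],
    }

request = build_openai_request("How do I export my data?")
```

Pass the dict to whichever client you use; the point is the message layout, not the transport.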
Anthropic (Claude)
The most capable Claude. Handles a 200K-token context window. Use XML tags to structure complex prompts. Supports extended thinking for deep reasoning tasks.
Best value for most tasks. Fast and capable. Use clear, direct language — Claude responds well to straightforward instructions rather than elaborate role-play.
Optimised for speed. Be very explicit about what you want; smaller models tolerate less ambiguity.
Key Tips
- ✓ Use XML tags to separate sections: <context>, <instructions>, <examples>
- ✓ Be direct — Claude prefers clear instructions over elaborate framing
- ✓ Put the most important instruction at the end of the prompt
- ✓ Use "Here is the document:" followed by content in tags for analysis tasks
- ✓ For coding: specify the language, framework, and any constraints upfront
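The XML-tag structure above can be sketched as a small helper that keeps the most important instruction at the end. The tag names follow the tips; the sample content is made up.

```python
def build_claude_prompt(context: str, examples: str, instruction: str) -> str:
    """Wrap each section in XML tags, placing the key instruction last."""
    return (
        f"<context>\n{context}\n</context>\n\n"
        f"<examples>\n{examples}\n</examples>\n\n"
        # Most important instruction goes at the end of the prompt
        f"<instructions>\n{instruction}\n</instructions>"
    )

prompt = build_claude_prompt(
    context="Quarterly sales report, EMEA region.",
    examples="Q: What was total revenue? A: Cite the table row you used.",
    instruction="Summarise the report in three bullet points.",
)
```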
Google (Gemini)
1M+ token context window. Excellent at analysing very long documents, codebases, and multi-image inputs. Supports built-in thinking for complex tasks.
Extremely fast with built-in thinking. Great for tasks that need both speed and reasoning. Use for real-time applications.
Cheapest option. Best for straightforward tasks with clear instructions.
Key Tips
- ✓ Gemini handles multimodal inputs natively — mix text, images, audio, and video
- ✓ For long documents, use the full context window rather than summarising first
- ✓ System instructions persist across turns — use them for consistent behaviour
- ✓ Use Google AI Studio to prototype prompts before coding
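Before sending a very long document, a rough size check helps decide whether chunking is needed at all. The sketch below assumes roughly 4 characters per token, which is a heuristic, not Gemini's actual tokeniser; the budget reflects a 1M-token window.

```python
def fits_in_context(document: str, token_budget: int = 1_000_000) -> bool:
    """Rough estimate: ~4 characters per token (heuristic, not exact)."""
    approx_tokens = len(document) // 4
    return approx_tokens <= token_budget

def build_long_doc_prompt(document: str, question: str) -> str:
    """Send the whole document in one prompt rather than chunking."""
    if not fits_in_context(document):
        raise ValueError("Document likely exceeds the context window; chunk it.")
    return f"{question}\n\nDocument:\n{document}"

prompt = build_long_doc_prompt("Annual report text.", "List the key risks.")
```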
Open-Source (Llama, DeepSeek, Mistral)
Uses the standard chat template. Supports system messages. Available through many providers (Together, Fireworks, Groq) with different performance characteristics.
Reasoning model like o3 but open-source. Shows its thinking process. Do not prompt it to think step-by-step — it does so automatically.
Strong multilingual performance. Supports structured function calling natively. Good at following precise instructions.
Key Tips
- ✓ Prompt format matters — each model has its own chat template
- ✓ When running models locally via Ollama, the standard OpenAI chat format works
- ✓ Open-source models may need more explicit instructions than frontier models
- ✓ Test the same prompt on multiple providers — speed and cost vary significantly
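Calling a local model through Ollama's OpenAI-compatible endpoint can be sketched like this. The URL and model name are Ollama's common defaults and may differ in your setup; nothing is actually sent here, the request object is just assembled.

```python
import json
import urllib.request

def build_local_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a request for Ollama's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer concisely."},
            {"role": "user", "content": prompt},
        ],
    }
    return urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",  # default Ollama route
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_local_request("Explain RAG in two sentences.")
```

Send it with `urllib.request.urlopen(req)` or swap in any OpenAI-compatible client pointed at the same URL.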
Task-Based Prompt Templates
Ready-to-use prompts organised by task. Copy, replace the bracketed placeholders with your content, and paste into any model.
Analysis
Document Analysis
Best: Claude Opus, Gemini Pro (for long docs)
Analyse the following document and provide:
1. A one-paragraph summary
2. The three most important points
3. Any claims that need verification
4. Questions the document leaves unanswered
Document:
[paste your document here]
Tip: For very long documents, Gemini's 1M context window avoids the need to chunk.
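Filling a template's bracketed placeholders can be automated with a simple string replace. The sketch below uses the document-analysis prompt above; the replacement text is a stand-in.

```python
# The document-analysis template from above, verbatim
TEMPLATE = """Analyse the following document and provide:
1. A one-paragraph summary
2. The three most important points
3. Any claims that need verification
4. Questions the document leaves unanswered

Document:
[paste your document here]"""

def fill_template(template: str, replacements: dict[str, str]) -> str:
    """Replace each [placeholder] with its supplied value."""
    for placeholder, value in replacements.items():
        template = template.replace(f"[{placeholder}]", value)
    return template

filled = fill_template(TEMPLATE, {"paste your document here": "(your document text)"})
```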
Data Interpretation
Best: GPT-4o (with vision for charts), Claude Sonnet
Here is a dataset:
[paste data or table here]
Please:
1. Identify the key trends
2. Flag any outliers or anomalies
3. Suggest three conclusions supported by the data
4. Note any limitations in drawing conclusions from this data
Tip: You can paste screenshots of charts into GPT-4o or Gemini for visual analysis.
Writing
Professional Email
Best: Claude Sonnet, GPT-4o
Write a professional email with the following details:
- To: [recipient and their role]
- Purpose: [what you need]
- Tone: [formal/friendly/urgent]
- Key points to include: [list them]
- Desired outcome: [what action you want them to take]
Keep it concise — under 200 words.
Tip: Specifying word count prevents AI verbosity. Always review before sending.
Technical Documentation
Best: Claude Sonnet (coding docs), GPT-4.1 (general docs)
Write documentation for the following:
- What it is: [component/API/feature name]
- What it does: [brief description]
- Target audience: [developers/users/admins]
Include:
1. Overview (2-3 sentences)
2. Prerequisites
3. Step-by-step usage guide
4. Common issues and solutions
5. Example code (if applicable)
Use clear, direct language. Avoid jargon unless the audience is technical.
Tip: For API docs, paste your actual code and ask the model to generate docs from it.
Coding
Code Review
Best: Claude Sonnet/Opus, GPT-4.1, DeepSeek R1
Review this code for:
1. Bugs or potential errors
2. Security vulnerabilities
3. Performance issues
4. Code style and readability improvements
Be specific — reference line numbers and explain why each suggestion matters.
```[language]
[paste code here]
```
Tip: Include the language and framework context. "This is a Next.js API route using Prisma" helps enormously.
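Assembling the review prompt programmatically keeps the fences and the framework context consistent. A minimal sketch; the sample code and framework note are placeholders.

```python
def build_review_prompt(code: str, language: str, framework_note: str) -> str:
    """Wrap code in a labelled fence, with framework context up front."""
    fence = "`" * 3  # code fence, built to avoid a literal backtick run
    checklist = (
        "Review this code for:\n"
        "1. Bugs or potential errors\n"
        "2. Security vulnerabilities\n"
        "3. Performance issues\n"
        "4. Code style and readability improvements\n"
        "Be specific — reference line numbers and explain why each "
        "suggestion matters.\n"
    )
    return f"{framework_note}\n\n{checklist}\n{fence}{language}\n{code}\n{fence}"

review = build_review_prompt(
    code="def add(a, b): return a + b",
    language="python",
    framework_note="This is a small utility module; no framework.",
)
```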
Debug This Error
Best: Claude Sonnet, GPT-4.1, o4-mini (for complex bugs)
I'm getting this error:
```
[paste error message]
```
Here's the relevant code:
```[language]
[paste code]
```
Environment: [language version, framework, OS]
What's causing this and how do I fix it?
Tip: Always include the full error message and stack trace. Context about your environment prevents wrong-language solutions.
Research
Topic Deep Dive
Best: Claude Opus, GPT-4o, Gemini Pro
I need to understand [topic] at an intermediate level.
Please provide:
1. A clear explanation (assume I know the basics but not the details)
2. The key concepts and how they relate to each other
3. Common misconceptions
4. Practical implications — why this matters
5. Recommended resources for further reading (books, papers, courses)
Be specific and cite sources where possible. Flag anything where experts disagree.
Tip: Always verify citations — models may generate plausible but incorrect references.
Compare and Contrast
Best: Claude Sonnet, GPT-4o
Compare [option A] and [option B] for [specific use case].
Structure your comparison as:
1. Overview of each option (2-3 sentences)
2. Key differences (as a table)
3. Strengths of each
4. Weaknesses of each
5. My recommendation for [your specific situation]
Be balanced — don't favour either option without evidence.
Tip: Specifying "be balanced" helps prevent the model from defaulting to the more popular option.