
Prompt Patterns Catalog: 15 Reusable Templates for Common LLM Tasks

Intermediate · 45 min · 3 exercises · 50 XP

Every time you sit down to write a prompt, you are solving a problem someone has already solved. Classification, extraction, summarization, comparison, rewriting — these tasks show up in every LLM project, and the prompt structures that work for them are well-established. I keep a personal catalog of 15 patterns that I copy-paste and adapt. This tutorial gives you that catalog, with runnable Python templates you can drop into any project.

Why Prompt Patterns Beat Starting from Scratch

A prompt pattern is a reusable template structure for a specific type of LLM task. Instead of crafting every prompt from a blank page, you start with a proven skeleton and fill in your specifics. The same way a software engineer reaches for design patterns (factory, observer, strategy), a prompt engineer reaches for prompt patterns.

I used to write every prompt ad hoc. The classification prompt for one project looked nothing like the classification prompt for the next. When I started standardizing them, three things happened: my prompts got more consistent, I stopped forgetting critical instructions, and new team members could contribute prompts on day one because the patterns were documented.

Each of the 15 patterns in this catalog follows the same structure: name, when to use it, template, and a runnable example that builds the prompt using Python string formatting. Most examples use pure Python so you can run them right here in your browser. The few that need an actual LLM call are clearly marked.

We will group the 15 patterns into four categories based on the type of work the LLM is doing:

The 15 prompt patterns organized by category

Analysis Patterns — Classify, Extract, Evaluate, Compare

Analysis patterns ask the LLM to examine input data and produce structured judgments. These are the workhorses of production AI systems — think content moderation, resume parsing, product review analysis, and document triage.

Pattern 1: Classification

Classification assigns one or more labels from a fixed set to a piece of text. The critical detail most people miss: you must list the exact labels. If you say "classify the sentiment," the LLM invents its own labels and you get inconsistent results across calls.

Classification prompt template
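A minimal sketch of the classification template as a Python builder; the function name and exact wording are illustrative, not the tutorial's verbatim code:

```python
def classification_prompt(text, labels):
    """Build a classification prompt with an explicit, closed label set."""
    label_list = ", ".join(labels)
    return f"""Classify the text into exactly one of these labels: {label_list}.

Text: "{text}"

Reply with ONLY the label, nothing else."""

prompt = classification_prompt(
    "The battery lasts all day", ["Positive", "Negative", "Neutral"]
)
```

Passing the labels as a parameter, rather than hardcoding them, means the same builder works for sentiment, topic, or intent classification.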

Pattern 2: Extraction

Extraction pulls specific fields out of unstructured text. I use this pattern constantly for parsing emails, invoices, job postings, and log files. The key is specifying the exact fields and their expected formats upfront.

Extraction prompt template
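One way to express the extraction template in Python, with fields and formats declared up front (names and wording are my own sketch):

```python
def extraction_prompt(text, fields):
    """fields maps field name -> expected format, e.g. {"date": "YYYY-MM-DD"}."""
    field_lines = "\n".join(f"- {name}: {fmt}" for name, fmt in fields.items())
    return f"""Extract the following fields from the text.
Use null for any field that is not present.

Fields:
{field_lines}

Text:
{text}

Return a single JSON object with exactly these fields."""

prompt = extraction_prompt(
    "Invoice #1042 dated 2024-03-01, total $318.50",
    {"invoice_number": "integer", "date": "YYYY-MM-DD",
     "total": "number, no currency symbol"},
)
```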

Pattern 3: Evaluation and Grading

This pattern asks the LLM to score or grade content against explicit criteria. Without criteria, you get vague feedback. With criteria, you get structured, reproducible assessments.

Evaluation/grading prompt template
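A sketch of the evaluation template as a builder with numbered criteria and a default scale (illustrative, not the tutorial's exact code):

```python
def evaluation_prompt(content, criteria, scale="1-5"):
    """Score content against explicit, numbered criteria."""
    criteria_lines = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, 1))
    return f"""Evaluate the content below against each criterion.
Score each criterion on a {scale} scale and justify the score in one sentence.

Criteria:
{criteria_lines}

Content:
{content}

Format: one line per criterion as "<number>. <score>: <justification>"."""

prompt = evaluation_prompt("Our Q3 report draft", ["Clarity", "Accuracy"])
```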

Pattern 4: Comparison

The comparison pattern structures the LLM's analysis when it needs to weigh two or more options. Without structure, comparisons tend to ramble. This template forces a consistent format that is easy to parse programmatically.

Comparison prompt template
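The comparison template might look like the following sketch, which forces a table plus an explicit verdict so the output is easy to parse:

```python
def comparison_prompt(options, criteria, context):
    """Compare options on shared criteria, ending with a verdict."""
    option_list = "\n".join(f"- {o}" for o in options)
    criteria_list = ", ".join(criteria)
    return f"""Compare the following options for this context: {context}

Options:
{option_list}

For each option, assess: {criteria_list}.
Output a table with one row per option and one column per criterion,
followed by a one-paragraph recommendation starting with "VERDICT:"."""

prompt = comparison_prompt(
    ["PostgreSQL", "MongoDB"],
    ["operational cost", "scalability"],
    "a three-person startup with one relational workload",
)
```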

Generation Patterns — Summarize, Generate, Rewrite

Generation patterns ask the LLM to produce new text based on inputs. The biggest mistake I see with generation prompts is under-constraining the output. Without explicit length, format, and style constraints, you get wildly inconsistent results.

Pattern 5: Summarization

Summarization compresses long text into a shorter version. The template controls three things that matter most: target length, audience, and what to prioritize.

Summarization prompt template
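A minimal sketch of the summarization builder, parameterizing the three constraints the section names (length, audience, priority):

```python
def summarization_prompt(text, length="3 sentences",
                         audience="a general reader",
                         focus="key decisions and outcomes"):
    """Compress text with explicit length, audience, and priority constraints."""
    return f"""Summarize the text below in {length}.
Write for {audience}. Prioritize {focus}; omit background detail.

Text:
{text}"""

prompt = summarization_prompt(
    "The board met on Tuesday and approved the new budget.",
    length="2 sentences", audience="executives", focus="financial impact",
)
```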

Pattern 6: Q&A Generation

This pattern generates question-answer pairs from source material. I reach for it when building training data, study guides, or FAQ sections. The difficulty parameter makes a real difference — without it, you get all easy questions or all hard ones.

Q&A generation prompt template
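A sketch of the Q&A generation builder with the difficulty parameter the section highlights (names are illustrative):

```python
def qa_generation_prompt(source, n_pairs=5, difficulty="mixed"):
    """Generate grounded Q&A pairs at a controlled difficulty."""
    return f"""Generate {n_pairs} question-answer pairs from the source material below.
Difficulty: {difficulty}. Every answer must be verifiable from the source alone.

Source:
{source}

Format each pair as:
Q: <question>
A: <answer>"""

prompt = qa_generation_prompt(
    "Photosynthesis converts light energy into chemical energy.",
    n_pairs=3, difficulty="easy",
)
```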

Pattern 7: Code Generation

Why code generation prompts need constraints

Code generation prompts need more constraints than most people realize. Without specifying the language version, style, and error handling expectations, you get code that technically works but would never pass a code review.

Code generation prompt template
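The constraints the paragraph lists (language version, style, error handling) can be baked into a builder like this sketch:

```python
def code_generation_prompt(task, language="Python 3.11",
                           style="PEP 8, type hints",
                           error_handling="raise ValueError on invalid input"):
    """Request code with explicit version, style, and error-handling constraints."""
    return f"""Write {language} code for the following task.

Task: {task}

Constraints:
- Style: {style}
- Error handling: {error_handling}
- Include a docstring. Use only the standard library unless stated otherwise.

Return only the code, no explanation."""

prompt = code_generation_prompt("parse ISO dates from log lines")
```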

Pattern 8: Rewriting

Rewriting transforms existing text while preserving its meaning. The rewriting pattern works for tone shifts, simplification, formalization, and localization. The constraint that catches most edge cases: "preserve all factual claims."

Rewriting prompt template
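A sketch of the rewriting builder, including the "preserve all factual claims" constraint the section calls out:

```python
def rewriting_prompt(text, target_style):
    """Transform tone or register without changing the facts."""
    return f"""Rewrite the text below in the following style: {target_style}.

Rules:
- Preserve all factual claims exactly.
- Keep roughly the same length.
- Do not add new information.

Text:
{text}"""

prompt = rewriting_prompt(
    "Our servers were down for 4 hours.", "formal incident report"
)
```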
Exercise 1: Build a Multi-Label Classification Prompt
Write Code

Write a function called multi_label_prompt(text, labels, max_labels) that builds a classification prompt allowing multiple labels (up to max_labels). The prompt should instruct the LLM to return labels as a comma-separated list. Use the test cases to verify your output format matches exactly.

Loading editor...

Reasoning Patterns — Think Step-by-Step, Weigh Trade-offs, Demonstrate

Reasoning patterns guide how the LLM thinks, not just what it produces. These are the patterns that separate mediocre prompts from excellent ones. Research by Wei et al. (2022) showed that chain-of-thought prompting significantly improves accuracy on arithmetic, commonsense, and symbolic reasoning tasks compared to direct prompting.

Pattern 9: Chain-of-Thought

Chain-of-thought (CoT) prompting asks the model to show its reasoning before giving a final answer. This is not just about transparency — forcing the model to reason step-by-step genuinely improves accuracy on math, logic, and multi-step problems.

Chain-of-thought prompt template
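A minimal sketch of the CoT builder using the four structured steps discussed below (identify, determine, work through, state):

```python
def cot_prompt(problem):
    """Force explicit reasoning before a clearly marked final answer."""
    return f"""Solve the problem below. Show your reasoning:
1. Identify what the question is asking.
2. Determine which facts and rules are relevant.
3. Work through the solution one step at a time.
4. State the final answer on a line beginning with "ANSWER:".

Problem: {problem}"""

prompt = cot_prompt(
    "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
)
```

The "ANSWER:" marker makes the final answer easy to extract programmatically even though the response contains free-form reasoning.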

The structured steps (identify, determine, work through, state) are not just cosmetic. They prevent the model from jumping to an answer before fully processing the problem. In my experience, the more specific your reasoning steps are to the domain, the better the output.

Pattern 10: Step-by-Step Instructions

Where chain-of-thought asks the LLM to reason, step-by-step asks it to produce a procedure. The output is an ordered set of actions someone can follow. This pattern works best when the task has a clear sequence and each step depends on the previous one.

Step-by-step prompt template
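One way to sketch the step-by-step builder, where the output is a procedure rather than reasoning (parameter names are illustrative):

```python
def step_by_step_prompt(goal, audience="a beginner", constraints=None):
    """Ask for an ordered, followable procedure."""
    extra = ""
    if constraints:
        extra = "\nConstraints:\n" + "\n".join(f"- {c}" for c in constraints)
    return f"""Produce a numbered procedure for: {goal}
Audience: {audience}.
Each step must be one concrete action; note when a step depends on the previous one.{extra}"""

prompt = step_by_step_prompt(
    "set up a Python virtual environment",
    constraints=["works on Windows and macOS", "no admin rights"],
)
```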

Pattern 11: Pros and Cons

How many times have you asked an LLM "what are the pros and cons of X?" and gotten a generic, surface-level list? The fix is anchoring the analysis to a specific context and forcing a verdict. Without context, you get textbook answers. With context, you get actionable advice.

Pros and cons prompt template
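A sketch of the pros-and-cons builder that anchors the analysis to a context and forces a verdict, as the paragraph above recommends:

```python
def pros_cons_prompt(subject, context):
    """Context-anchored trade-off analysis with a required verdict."""
    return f"""List the pros and cons of {subject} for this specific situation:
{context}

Rules:
- Give 3 to 5 pros and 3 to 5 cons, each tied to the situation above
  (no generic textbook points).
- End with a one-sentence recommendation starting with "VERDICT:"."""

prompt = pros_cons_prompt(
    "adopting Kubernetes",
    "a three-person startup running one backend service",
)
```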

Pattern 12: Few-Shot Examples

Few-shot prompting gives the LLM examples of the input-output mapping you want. Instead of describing the format in words, you show it. This is the single most reliable way to control output format, and I use it more than any other pattern.

Few-shot prompt template
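A minimal sketch of a `few_shot_prompt` builder that turns (input, output) pairs into demonstrations:

```python
def few_shot_prompt(task, examples, new_input):
    """examples: list of (input, output) pairs demonstrating the mapping."""
    shots = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"""{task}

{shots}

Input: {new_input}
Output:"""

prompt = few_shot_prompt(
    "Classify the sentiment as Positive, Negative, or Neutral.",
    [("Great value", "Positive"), ("Broke in a week", "Negative")],
    "Does what it says",
)
```

Ending the prompt with a bare `Output:` invites the model to complete the pattern rather than explain it.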
Exercise 2: Build a Few-Shot Sentiment Classifier
Write Code

Write a function called sentiment_few_shot(reviews, new_review) that takes a list of (review_text, label) tuples as examples and a new_review string. It should build a few-shot prompt that classifies the new review. The labels should only be: Positive, Negative, or Neutral. Use the few_shot_prompt function pattern from Pattern 12.


Transformation Patterns — Translate, Convert, Embody

Transformation patterns change the form or perspective of content without losing its core meaning. These are simpler than they look, but the details in the template — preserving formatting, handling ambiguity, maintaining consistency — make the difference between usable and unusable output.

Pattern 13: Translation

Translation goes beyond language — it includes translating between formats, registers (formal to casual), or technical levels. The key constraint most people forget: "preserve formatting." Without it, the LLM strips bullet points, code blocks, and headers during translation.

Translation prompt template
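A sketch of the translation builder including the "preserve formatting" constraint the section emphasizes:

```python
def translation_prompt(text, target="French", register="match the source"):
    """Translate while keeping structure and identifiers intact."""
    return f"""Translate the text below into {target}.
Register: {register}.
Preserve all formatting: bullet points, headers, and code blocks stay exactly as they are.
Leave code identifiers and proper nouns untranslated.

Text:
{text}"""

prompt = translation_prompt("## Setup\n- run `pip install requests`",
                            target="German")
```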

Pattern 14: Data Transformation

Data transformation converts structured data from one format to another — CSV to JSON, flat to nested, raw to aggregated. This is one of those patterns where showing the LLM the desired output schema is more effective than describing it in words.

Data transformation prompt template
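Following the advice above (show the schema, don't describe it), a data transformation builder might look like this sketch:

```python
def data_transformation_prompt(data, output_schema):
    """Convert data by showing the target shape instead of describing it."""
    return f"""Convert the input data to match the output schema exactly.

Input:
{data}

Output schema (example of the desired shape):
{output_schema}

Return only the converted data, no commentary."""

schema = '[{"name": "string", "price": 0.0}]'
prompt = data_transformation_prompt("name,price\nWidget,9.99", schema)
```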

Pattern 15: Persona

The persona pattern sets the LLM's expertise, communication style, and constraints for an entire conversation. It is essentially a system prompt builder. I separate it from the other patterns because it shapes all subsequent prompts rather than producing a one-off output.

Persona prompt template
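A minimal sketch of a persona builder whose output would serve as a system prompt (role, style, and constraints are illustrative):

```python
def persona_prompt(role, style, constraints):
    """Build a system prompt that shapes an entire conversation."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return f"""You are {role}.
Communication style: {style}.

Constraints for this entire conversation:
{constraint_lines}"""

system_prompt = persona_prompt(
    "a senior site reliability engineer",
    "terse, bullet-first, no filler",
    ["never guess at commands", "always state the blast radius of a change"],
)
```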

Combining Patterns with Jinja2 Templates

F-string templates work for individual patterns, but real projects combine multiple patterns into complex prompts. A classification prompt might need few-shot examples and chain-of-thought reasoning. Jinja2 handles this composition cleanly with conditionals and loops.

Jinja2 is a Python templating engine that adds {% if %}, {% for %}, and filters to plain text. It runs natively in the browser here, so you can experiment with it directly.

Composing multiple patterns with Jinja2
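A sketch of such a composite template (the role, labels, and toggles here are my own illustration). It combines a persona block, optional few-shot examples, and an optional chain-of-thought instruction, each behind an `{% if %}`:

```python
from jinja2 import Template

composite = Template("""\
{% if role %}You are {{ role }}.

{% endif %}Classify the text into one of: {{ labels | join(", ") }}.
{% if examples %}
Examples:
{% for inp, label in examples %}Text: {{ inp }}
Label: {{ label }}
{% endfor %}{% endif %}{% if use_cot %}
Reason step by step, then give the final label on a line starting with "LABEL:".
{% endif %}
Text: {{ text }}""")

prompt = composite.render(
    role="a support-ticket triage assistant",
    labels=["Bug", "Feature request", "Question"],
    examples=[("App crashes on login", "Bug")],
    use_cot=True,
    text="How do I export my data?",
)
```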

That single Jinja2 template combines three patterns: persona (the role), few-shot (the examples), and chain-of-thought (the reasoning steps). The {% if %} blocks let you toggle features on and off without maintaining separate template strings.

Template factory for reusable prompt builders
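A factory can compile a template once and hand back a reusable builder function. A minimal sketch (the template text here is illustrative):

```python
from jinja2 import Template

def make_prompt_builder(template_string):
    """Compile once, return a function that renders with keyword args."""
    template = Template(template_string)
    return lambda **kwargs: template.render(**kwargs)

summarize = make_prompt_builder(
    "Summarize in {{ n }} sentences for {{ audience }}:\n\n{{ text }}"
)
prompt = summarize(n=3, audience="executives",
                   text="Q3 revenue grew 12 percent while costs held flat.")
```

Compiling once matters in production: parsing the template on every call is wasted work when the template never changes.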
Exercise 3: Build a Jinja2 Evaluation Template
Write Code

Create a Jinja2 template stored in eval_template that generates an evaluation prompt. The template should accept: content (text to evaluate), criteria (list of strings), and an optional scale (defaults to "1-5"). It should list each criterion with a number, include the content, and ask for scores. Use {% for %} to loop over criteria.


Quick Reference — All 15 Patterns at a Glance

Here is the complete catalog. Bookmark this section and come back to it when you need a starting point for a new prompt.

All 15 prompt patterns at a glance
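As a compact cheat sheet, the catalog can be kept as a Python dict (the one-line descriptions paraphrase this tutorial's pattern sections):

```python
PATTERNS = {
    "Classification": "assign one or more labels from a fixed set",
    "Extraction": "pull named fields out of unstructured text",
    "Evaluation": "score content against explicit criteria",
    "Comparison": "weigh options on shared criteria, then a verdict",
    "Summarization": "compress text to a target length for an audience",
    "Q&A generation": "produce question-answer pairs from source material",
    "Code generation": "write code under version, style, and error constraints",
    "Rewriting": "change tone or register, preserving factual claims",
    "Chain-of-thought": "reason step by step before the final answer",
    "Step-by-step": "output an ordered, followable procedure",
    "Pros and cons": "context-anchored trade-offs with a verdict",
    "Few-shot": "show input-output examples to pin down the format",
    "Translation": "change language or register, preserving formatting",
    "Data transformation": "convert data to a shown output schema",
    "Persona": "set role, style, and constraints for a conversation",
}
```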

Common Mistakes When Using Prompt Patterns

Prompt patterns are not magic — they fail if you misapply them. These are the mistakes I have seen most often across teams and projects.

Under-constraining: vague classification
prompt = """Classify this text.

Text: "The battery lasts all day"
"""
# Problem: no labels, no format, inconsistent results
Properly constrained classification
prompt = """Classify into: Positive, Negative, Neutral.

Text: "The battery lasts all day"

Reply with ONLY the label."""
# Clear labels, strict output format

The second mistake is using chain-of-thought for simple tasks. If you ask "What is the capital of France?" with a full CoT prompt, you waste tokens and sometimes get worse results because the model overthinks a trivial question. Reserve CoT for multi-step reasoning, math, and logic problems.

Over-engineering: CoT for a simple lookup
prompt = """Think step by step.
1. Consider what country we are asking about
2. Recall its capital
3. Verify your answer

What is the capital of France?
ANSWER:"""
# Overkill — wastes tokens on a factual lookup
Direct prompt for simple tasks
prompt = """What is the capital of France?
Answer with just the city name."""
# Simple task, simple prompt

The third common trap is hardcoding context that should be a parameter. If your summarization prompt says "summarize in 3 sentences," that number should be a variable. Every fixed value in a prompt is a future refactoring task when requirements change.
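To make the fix concrete, here is the hardcoded version next to a parameterized one (a minimal sketch):

```python
# Hardcoded: the "3" is baked into the string and invisible to callers.
bad_prompt = "Summarize the text below in 3 sentences.\n\nSome long text."

# Parameterized: when requirements change, only the call site changes.
def summary_prompt(text, n_sentences=3):
    return f"Summarize the text below in {n_sentences} sentences.\n\n{text}"

prompt = summary_prompt("Some long text.", n_sentences=5)
```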


Frequently Asked Questions

Which pattern should I start with for a new LLM project?

Start with classification and extraction — they cover the majority of structured LLM tasks and are the easiest to evaluate. You can verify classification accuracy with a labeled test set and extraction correctness by checking field values against source data.

Do these patterns work with every LLM provider?

Yes. The patterns are model-agnostic. I have used them with OpenAI, Anthropic Claude, Google Gemini, and local models through Ollama. You may need to adjust the strictness of format instructions — smaller models sometimes need more explicit output constraints.

Same pattern, any provider
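One way to keep a pattern provider-agnostic is to build the prompt string separately from the API call. The client calls below are sketched from the OpenAI and Anthropic Python SDKs and left as comments, since they require API keys and clients:

```python
def classification_prompt(text, labels):
    """Provider-agnostic prompt: just a string, no SDK dependency."""
    return (
        f"Classify into: {', '.join(labels)}.\n\n"
        f'Text: "{text}"\n\n'
        "Reply with ONLY the label."
    )

prompt = classification_prompt("Great value", ["Positive", "Negative", "Neutral"])

# The same string can then be sent to any provider, e.g. (sketched):
# openai_client.chat.completions.create(
#     model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}])
# anthropic_client.messages.create(
#     model="claude-3-5-haiku-latest", max_tokens=10,
#     messages=[{"role": "user", "content": prompt}])
```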

Can I combine more than two patterns in one prompt?

Yes, but be deliberate about it. The Jinja2 composite template in this tutorial combines persona + few-shot + CoT. Beyond three patterns in a single prompt, the instructions tend to conflict or confuse the model. If you need more complexity, split into a multi-turn conversation where each turn uses one pattern.

Should I use f-strings or Jinja2 for my templates?

For prototyping and single-purpose prompts, f-strings are simpler and faster to write. Switch to Jinja2 when you need conditional sections, loops over variable-length data, or when multiple team members are editing prompts. Most production systems I have worked on end up using Jinja2 within six months.


Complete Code

All 15 prompt pattern functions in a single runnable script. Copy this into your project as a prompt_patterns.py module.

Complete prompt patterns module
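A condensed sketch of what such a module might look like: a registry of template strings plus one generic builder. The templates here are abbreviated stand-ins, not the full text of all 15 patterns:

```python
"""prompt_patterns.py: condensed sketch of a pattern catalog module."""

TEMPLATES = {
    "classification": (
        "Classify the text into exactly one of: {labels}.\n\n"
        'Text: "{text}"\n\nReply with ONLY the label.'
    ),
    "extraction": (
        "Extract these fields (use null if missing):\n{fields}\n\n"
        "Text:\n{text}\n\nReturn a JSON object with exactly these fields."
    ),
    "summarization": (
        "Summarize the text below in {length} for {audience}.\n\nText:\n{text}"
    ),
    "few_shot": "{task}\n\n{shots}\n\nInput: {new_input}\nOutput:",
    # ...remaining patterns follow the same shape
}

def build(pattern, **kwargs):
    """Fill a named template; raises KeyError on unknown pattern names."""
    return TEMPLATES[pattern].format(**kwargs)

prompt = build("classification",
               labels="Positive, Negative, Neutral",
               text="Battery lasts all day")
```

The registry-plus-builder shape trades flexibility for uniformity: every prompt in the project is discoverable in one dict, at the cost of losing per-pattern default arguments.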

References

  • Wei, J., et al. — "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." NeurIPS 2022. arXiv:2201.11903
  • Brown, T., et al. — "Language Models are Few-Shot Learners." NeurIPS 2020. arXiv:2005.14165
  • OpenAI — Prompt Engineering Guide.
  • Anthropic — Prompt Engineering Documentation.
  • White, J., et al. — "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT." Vanderbilt University, 2023. arXiv:2302.11382
  • Jinja2 Documentation — Template Designer Documentation.
  • Google — Prompt Engineering for Developers.