The Universal LLM Prompt Template for Every AI Task

Nov 2, 2025

You’re rebuilding prompts from scratch because you haven’t standardized. The fastest teams use a simple hybrid template that’s readable for humans, parsable for machines, and reliable under pressure. If you write content, review code, analyze data, teach, or make product calls, this gives you consistent outputs in minutes.

Here’s what you’ll get: a universal template that blends Markdown, XML, and JSON, clear guidance on when to use each layer, and five complete examples across high-value tasks. Copy it once and eliminate prompt thrash for good.

The hybrid stack explained

XML gives you the scaffold. Use lightweight tags like <role>, <context>, <instructions>, <constraints>, and <output> to segment the prompt into parts the model can reliably follow. This structure makes prompts easy to diff, reuse, and compose in tools.

Markdown keeps it human. Headings, bold text, and short lists make constraints and steps scannable, so you and your teammates can read and edit prompts quickly without breaking them.

JSON locks the data. Reserve JSON for inputs and machine-readable outputs. It’s strict by design—perfect for automated pipelines, validation, and downstream apps.

  • XML = structure and segmentation for the model
  • Markdown = readability and instruction clarity
  • JSON = precision for inputs and outputs at scale

Why this works:

  • XML tags create clear sections the AI can parse
  • Markdown inside keeps content human-readable
  • JSON carries the actual data without ambiguity
  • Keeping it simple makes the prompt easy to write and edit
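
To see the division of labor in one place, here is a minimal sketch in Python (names and content are purely illustrative) that assembles a tiny hybrid prompt: XML tags mark the sections, Markdown sits inside them, and the data travels as a fenced JSON block.

```python
import json

# Illustrative inputs; in practice these come from your app or pipeline.
inputs = {"source_text": "Draft paragraph to review...", "variables": {"tone": "casual"}}

fence = "```"  # keep literal backticks out of the snippet so it stays copy-paste safe

prompt = (
    "<role>\nYou are a concise technical editor.\n</role>\n\n"
    "<constraints>\n- Style: **casual but precise**\n- Length: under 150 words\n</constraints>\n\n"
    f"<data>\n{fence}json\n{json.dumps(inputs, indent=2)}\n{fence}\n</data>"
)

print(prompt)  # XML = sections, Markdown = readable rules, JSON = exact data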

The universal template you can copy

Use this as your default. Keep one clear role, scannable context, numbered steps, explicit constraints, and a dual-output format: a human-friendly report plus a machine-readable JSON object.

<role>
Define who the AI is in one clear sentence. Example: You are a pragmatic senior reviewer who writes concise, actionable feedback.
</role>

<context>
Provide background using simple Markdown.
- Audience: [who this is for]
- Objective: [the outcome you need]
- Scope: [what's in or out]
- Key constraints: [3–5 bullets max]
</context>

<instructions>
Tell the AI exactly what to do, in order:
1. [Step one]
2. [Step two]
3. [Step three]
</instructions>

<constraints>
Rules and limitations:
- Must do: [requirement]
- Avoid: [anti-pattern]
- Style: [tone, length, formatting]
</constraints>

<data>
```json
{
  "inputs": {
    "items": [],
    "source_text": "",
    "variables": {}
  }
}
```
</data>

<output>
Describe the desired outputs:
- Human report: headings, bullets, and a short summary paragraph
- Machine data: valid JSON that follows the schema below
</output>

<final_data_json>
{
  "status": "<SUCCESS|REVIEW>",
  "summary": "<one-sentence outcome>",
  "details": [
    {"id": "<string|int>", "label": "<string>", "score": "<number>", "notes": "<string>"}
  ],
  "meta": {
    "timestamp": "<ISO-8601>",
    "confidence": "<0–1>"
  }
}
</final_data_json>

Two practices that reduce retries: make constraints short and specific, and always define the JSON schema you want back.
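
Defining the schema up front also lets you check replies before they reach a pipeline. A minimal sketch in standard-library Python; the keys come from the final_data_json schema above, while the validator function itself is an illustrative assumption, not part of the template:

```python
import json

REQUIRED_KEYS = {"status", "summary", "details", "meta"}

def validate_reply(raw: str) -> dict:
    """Parse the model's final_data_json and fail fast on missing or bad fields."""
    data = json.loads(raw)  # raises ValueError on malformed JSON, a signal to retry
    if not isinstance(data, dict):
        raise ValueError("Expected a JSON object at the top level")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Reply missing keys: {sorted(missing)}")
    if data["status"] not in {"SUCCESS", "REVIEW"}:
        raise ValueError(f"Unexpected status: {data['status']}")
    return data
```

If parsing or validation fails, re-prompt with the error message attached rather than hand-editing the reply.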

One integrated pattern you can reuse

If you need both a human-facing brief and a strict data object, follow this flow:

  • Validate inputs in <data>
  • Execute the numbered <instructions>
  • Write a concise human report in <output>
  • Return the machine object defined in <final_data_json>

This flow enforces precise inputs, explicit steps, and a consistent output structure.
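
When the reply mixes a human report with a machine object, you can split the two halves mechanically. A minimal sketch that assumes you instruct the model to wrap the JSON in a fenced json block (function and variable names are illustrative):

```python
import json
import re

FENCE = "```"  # kept out of the regex literal for copy-paste safety
JSON_BLOCK = re.compile(FENCE + r"json\s*(.*?)\s*" + FENCE, re.DOTALL)

def split_reply(reply: str) -> tuple[str, dict]:
    """Separate the human-readable report from the machine-readable JSON object."""
    match = JSON_BLOCK.search(reply)
    if not match:
        raise ValueError("No fenced JSON block found; ask the model to retry.")
    report = reply[: match.start()].strip()  # prose before the fence is the report
    return report, json.loads(match.group(1))
```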

When to use each layer without overcomplicating

Keep XML for boundaries, Markdown for clarity, and JSON for strictness. If you’re tempted to add more tags, you probably need better steps, not more structure.

| Layer | What it does best | When to use it |
|---|---|---|
| XML | Segments tasks and roles | Always, to define <role>, <context>, <instructions>, <constraints>, <output> |
| Markdown | Human-readable rules and steps | Within XML tags to format bullets, headings, and emphasis |
| JSON | Machine-validated inputs and outputs | For <data> and <final_data_json> to integrate with code and pipelines |

Do this:

  • One role and 3–5 tags for simplicity
  • Simple Markdown bullets and numbered steps
  • JSON for data only, not prose

Not this:

<!-- TOO COMPLICATED -->
<configuration>
  <parameters>
    <param name="style" value="casual"/>
    <param name="length" value="500"/>
  </parameters>
</configuration>

Better:

<constraints>
- Style: Casual and conversational
- Length: Around 500 words
</constraints>

Common pitfalls and fast fixes

| Pitfall | Why it hurts | Fix in your template |
|---|---|---|
| Vague roles | Unstable tone and scope | Write one-sentence <role> with persona and goal |
| Wall-of-text context | Models miss constraints | Bullet the <context> with audience, objective, scope |
| No output schema | Inconsistent returns | Define <final_data_json> with exact keys and types |
| Overlong instructions | Hallucinated steps | Use 3–5 numbered steps tied to outputs |

Real examples across common tasks

The universal template adapts cleanly to any workflow. Here are five complete, copyable prompts you can use immediately.

Content Writing

You need a blog post reviewed, but generic feedback wastes time. This prompt forces the model to identify weak spots, explain fixes, and demonstrate one rewrite.

<role>
You are a blog editor who helps improve draft articles.
</role>

<context>
I've written a draft blog post about productivity tips for remote workers.
- Target audience: Early-career professionals
- Current length: ~800 words
- Goal: Make it more engaging and actionable
</context>

<instructions>
Review my draft and:
1. Identify the 3 weakest sections
2. Suggest specific improvements for each
3. Rewrite one section as an example
</instructions>

<constraints>
- Keep the original tone (casual but professional)
- Don't add fluff - every word should add value
- Maintain the current structure
</constraints>

<output>
Provide your feedback in three parts:
- **Issues Found** (bullet list)
- **Recommendations** (numbered list)
- **Example Rewrite** (one section, formatted as it would appear)
</output>

The output section demands structure. You get a bullet list of issues, numbered recommendations tied to each, and a rewritten example that shows rather than tells.

Code Review

Code reviews need specificity and prioritization. This prompt segments feedback into critical, performance, and nice-to-have buckets so you know what to ship first.

<role>
You are a senior Python developer conducting a code review.
</role>

<context>
This function processes user uploads:
- Handles CSV and Excel files
- Validates data before database insertion
- Currently has some performance issues with large files
</context>

<instructions>
Review the code below and:
1. Identify performance bottlenecks
2. Flag any security concerns
3. Suggest architectural improvements
</instructions>

<constraints>
- Must maintain backwards compatibility
- Cannot add new dependencies
- Solutions should work with Python 3.8+
</constraints>

<output>
Structure your review as:
- **Critical Issues** (must fix)
- **Performance Improvements** (should fix)
- **Nice to Have** (optional enhancements)
</output>

[code would be pasted here]

Data Analysis

Analysis requests often return walls of numbers with no actionable insight. This prompt requires the model to connect findings to evidence and recommend specific next steps.

<role>
You are a data analyst specializing in customer behavior.
</role>

<context>
We're analyzing Q3 sales data to understand:
- Which products are underperforming
- Customer segments with highest churn
- Regional variations in purchasing patterns
</context>

<instructions>
Analyze this dataset and:
1. Identify the top 3 insights
2. Explain what's driving each trend
3. Recommend specific actions for each finding
</instructions>

<data>
```json
{
  "regions": [
    {
      "name": "Northeast",
      "revenue": 450000,
      "customers": 1200,
      "churn_rate": 0.08
    },
    {
      "name": "Southeast",
      "revenue": 380000,
      "customers": 950,
      "churn_rate": 0.15
    }
  ],
  "products": [
    {
      "id": "PRD-001",
      "category": "Premium",
      "units_sold": 450,
      "returns": 23
    }
  ]
}
```
</data>

<output>
For each insight provide:
- **Finding** (one sentence)
- **Evidence** (reference the data)
- **Recommendation** (specific action)
</output>
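
Because the <data> block is plain JSON, you can also sanity-check the figures outside the model before or after prompting. A minimal sketch in Python using the same sample payload (purely illustrative and not part of the prompt):

```python
# Same shape as the <data> payload in the prompt above.
payload = {
    "regions": [
        {"name": "Northeast", "revenue": 450000, "customers": 1200, "churn_rate": 0.08},
        {"name": "Southeast", "revenue": 380000, "customers": 950, "churn_rate": 0.15},
    ],
    "products": [
        {"id": "PRD-001", "category": "Premium", "units_sold": 450, "returns": 23},
    ],
}

# Quick reference figures the model's insights should be consistent with.
for region in payload["regions"]:
    revenue_per_customer = region["revenue"] / region["customers"]
    print(f"{region['name']}: ${revenue_per_customer:,.0f}/customer, churn {region['churn_rate']:.0%}")

for product in payload["products"]:
    return_rate = product["returns"] / product["units_sold"]
    print(f"{product['id']} ({product['category']}): return rate {return_rate:.1%}")
```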

Teaching and Tutoring

Tutoring prompts fail when they ask for explanations without scaffolding. This one demands an analogy, a clean example, a practice task, and common mistakes.

<role>
You are a patient coding instructor teaching web development fundamentals.
</role>

<context>
Student background:
- Completed HTML/CSS basics
- Just learning JavaScript (week 2)
- Struggling with the concept of functions
</context>

<instructions>
Help the student understand functions by:
1. Explaining the concept using a real-world analogy
2. Showing a simple code example
3. Creating a practice exercise
4. Anticipating common mistakes
</instructions>

<constraints>
- Use beginner-friendly language (no jargon)
- Examples must be practical, not abstract
- Keep code examples under 10 lines
</constraints>

<output>
Structure as:
- **The Analogy** (2-3 sentences)
- **Code Example** (with inline comments)
- **Practice Exercise** (clear instructions)
- **Common Pitfalls** (bullet list)
</output>

Decision Support

Strategic decisions need frameworks, not opinions. This prompt forces the model to list decision criteria, compare options side-by-side, surface hidden risks, and justify a recommendation.

<role>
You are a strategic advisor helping evaluate business decisions.
</role>

<context>
Our startup is deciding whether to:
- **Option A**: Build an in-house solution (6 months, $200k)
- **Option B**: Use a third-party platform ($3k/month, immediate)

Current situation:
- 8-person team
- Runway: 18 months
- Growing 15% MoM
</context>

<instructions>
Provide a decision framework by:
1. Listing key factors to consider
2. Analyzing pros/cons of each option
3. Identifying hidden risks
4. Making a recommendation with reasoning
</instructions>

<constraints>
- Be objective - acknowledge uncertainty
- Consider both short-term and long-term implications
- Factor in opportunity cost
</constraints>

<output>
Deliver as:
- **Decision Criteria** (5-7 factors)
- **Option Comparison** (side-by-side)
- **Risk Assessment** (bullet points)
- **Recommendation** (with 2-3 key reasons)
</output>

The <context> block gives the model the runway, team size, and growth rate so recommendations stay realistic.

Ship fast with this practical checklist

  • Clarify the role in one sentence and align tone to your audience.
  • Put background in <context> as short bullets to prevent wall-of-text drift.
  • Number your steps and keep each step tied to one output element.
  • Store raw inputs in <data> and define the exact <final_data_json> schema you need back.
  • Constrain length, style, and must/avoid rules under <constraints> so edits don’t spill into logic.
  • Always ask for both a human report and a JSON object to feed pipelines and tests.
  • Version your template per task type and commit diffs so teams can evolve standards.

Hybrid prompting turns prompt craft into an asset instead of a bottleneck. By separating structure (XML), readability (Markdown), and data (JSON), you get reliable, reviewable, and automatable outputs across content, code, analysis, tutoring, and decisions.
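
To version that standard, keep the template in a file or constant and substitute the task-specific fields at build time. A minimal sketch with Python's string.Template; the trimmed template and placeholder names are illustrative assumptions:

```python
from string import Template

# A trimmed stand-in for the full template above; in practice, load it from a versioned file.
TEMPLATE = Template(
    "<role>\n$role\n</role>\n\n"
    "<context>\n- Audience: $audience\n- Objective: $objective\n</context>\n\n"
    "<instructions>\n$instructions\n</instructions>\n\n"
    "<final_data_json>\n$json_schema\n</final_data_json>"
)

prompt = TEMPLATE.substitute(
    role="You are a pragmatic senior reviewer who writes concise, actionable feedback.",
    audience="Early-career engineers",
    objective="Tighten a draft design doc to one page",
    instructions="1. Flag unclear sections\n2. Suggest cuts\n3. Rewrite the summary",
    json_schema='{"status": "<SUCCESS|REVIEW>", "summary": "<one-sentence outcome>"}',
)
print(prompt)
```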

Copy the universal template above, paste it into your next task, and tailor five fields—role, audience, objective, instructions, and JSON schema. Ship one standard today, and you’ll never reinvent a prompt tomorrow.
