The difference between people who get garbage from AI and people who get gold isn’t intelligence. It’s structure.
A year ago, a single blog post took me 6–8 hours. Research, outline, draft, edit, rewrite, edit again. Now I can produce better content in a fraction of that time — not because AI writes it for me, but because I stopped treating ChatGPT like a magic eight ball and started giving it a blueprint.
That blueprint is what you’re getting today.
One template. Three layers. Real examples pulled from the workflows I use to run my freelance web design business. You can copy it, paste it into ChatGPT, Claude, Gemini, or any large language model, and start getting consistent, usable outputs in minutes.
Here’s what’s inside: the hybrid framework that combines XML, Markdown, and JSON into a single reusable prompt — plus guidance on adapting it to your own tasks, feeding AI multiple data sources at once, and four complete real-world examples you can steal.
No theory. No fluff. Just the system.
The Hybrid Stack: XML + Markdown + JSON
Most people write prompts the way they write text messages. One long paragraph. No structure. No sections. Then they wonder why the AI rambles, misses the point, or hallucinates details.
The fix is a three-layer system. Each layer does one job.
XML gives you the scaffold. Wrap sections of your prompt in simple tags like <role>, <context>, <instructions>, <constraints>, and <output>. These create clear boundaries the model can follow. Think of them like labeled containers — the AI knows exactly where your background info ends and your instructions begin.
Markdown keeps it human. Inside those XML tags, use headings, bold text, numbered lists, and bullets. This makes your prompts easy to read, easy to edit, and easy to share with teammates. You’re not writing code. You’re writing clear instructions that happen to have structure.
JSON locks the data. When you need to feed structured inputs (a spreadsheet, a list of products, query data) or get machine-readable outputs back, JSON is your format. It’s strict by design — perfect for automations, validation, and anything that plugs into another tool downstream.
- XML = structure and segmentation for the model
- Markdown = readability and instruction clarity for humans
- JSON = precision for data input/output at scale and when calling models via API
This isn’t about making prompts complicated. It’s about making them reliable. The same template, the same structure, predictable results every time — whether you’re writing a proposal for a client or auditing a blog for SEO gaps.
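If you assemble prompts in code rather than by hand, the three layers map cleanly onto a small helper function. Here's a minimal Python sketch of the idea — the tag names follow the template in this article, and the example values are placeholders, not a real client brief:

```python
import json

def build_prompt(role, context_md, steps, constraints_md, data):
    """Assemble an XML + Markdown + JSON hybrid prompt as one string.

    XML tags give the model boundaries, the Markdown stays readable,
    and the data block is serialized as strict JSON.
    """
    instructions = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return (
        f"<role>\n{role}\n</role>\n"
        f"<context>\n{context_md}\n</context>\n"
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<constraints>\n{constraints_md}\n</constraints>\n"
        f"<data>\n```json\n{json.dumps(data, indent=2)}\n```\n</data>"
    )

prompt = build_prompt(
    role="You are a pragmatic senior reviewer who writes concise, actionable feedback.",
    context_md="- Audience: freelance web designers\n- Objective: audit a blog post",
    steps=["Summarize the post", "Flag weak sections", "Suggest fixes"],
    constraints_md="- Style: casual\n- Length: ~300 words",
    data={"inputs": {"source_text": "[paste article here]"}},
)
print(prompt)
```

The payoff of building prompts this way: swap one argument, keep the structure, and every prompt in your library stays consistent.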
The Universal Template You Can Copy
Use this as your default starting point. One clear role, scannable context, numbered steps, explicit constraints, and a dual-output format: a human-friendly report plus a machine-readable JSON object.
<role>
Define who the AI is in one clear sentence.
Example: You are a pragmatic senior reviewer who writes concise,
actionable feedback.
</role>
<context>
Provide background using simple Markdown.
- Audience: [who this is for]
- Objective: [the outcome you need]
- Scope: [what's in or out]
- Key constraints: [3–5 bullets max]
</context>
<instructions>
Tell the AI exactly what to do, in order:
1. [Step one]
2. [Step two]
3. [Step three]
</instructions>
<constraints>
Rules and limitations:
- Must do: [requirement]
- Avoid: [anti-pattern]
- Style: [tone, length, formatting]
</constraints>
<data>
```json
{
"inputs": {
"items": [],
"source_text": "",
"variables": {}
}
}
```
</data>
<output>
Describe the desired outputs:
- Human report: headings, bullets, and a short summary paragraph
- Machine data: valid JSON that follows the schema below
</output>
<final_data_json>
{
"status": "<SUCCESS|REVIEW>",
"summary": "<one-sentence outcome>",
"details": [
{"id": "<string|int>", "label": "<string>", "score": "<number>",
"notes": "<string>"}
],
"meta": {
"timestamp": "<ISO-8601>",
"confidence": "<0–1>"
}
}
</final_data_json>
Here’s what each field does:
- <role> — One sentence. Include the persona and the goal. "You are a senior WordPress developer who writes beginner-friendly tutorials" is better than "You are helpful."
- <context> — Bullet your audience, objective, and scope. No paragraphs. Keep it scannable so the AI (and you) can see the boundaries at a glance.
- <instructions> — Number your steps. Three to five is the sweet spot. Each step should map to one piece of your expected output.
- <constraints> — Your must-do and must-avoid rules. Style, tone, length, format. Short and specific beats long and vague.
- <data> — Structured inputs as JSON. A list of blog posts. Query data from Search Console. Product specs. Anything the AI needs to reference.
- <output> — Describe exactly what you want back. A bullet list? A table? Both a human-readable report and a JSON object? Say it here.
- <final_data_json> — The exact keys and types you need in the machine-readable output. Defining the schema up front prevents the AI from inventing its own structure.
Two practices that eliminate re-prompting: Make your constraints short and specific. And always define the JSON schema you want returned. Vague constraints get vague results. No schema gets inconsistent structure. Nail these two and you’ll cut your retry rate in half.
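Defining the schema up front also lets you check the model's reply in code instead of eyeballing it. A minimal sketch, assuming the <final_data_json> schema above — the error messages are written so you can paste them straight back to the model as a retry prompt:

```python
import json

# Top-level keys from the <final_data_json> schema in this article
REQUIRED_KEYS = {"status", "summary", "details", "meta"}

def validate_output(raw: str) -> dict:
    """Parse the model's reply and check it against the expected schema.

    Raises ValueError with a specific, model-readable message on failure.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"Reply was not valid JSON: {e}")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Missing required keys: {sorted(missing)}")
    if data["status"] not in ("SUCCESS", "REVIEW"):
        raise ValueError("status must be SUCCESS or REVIEW")
    return data

reply = '{"status": "SUCCESS", "summary": "ok", "details": [], "meta": {}}'
result = validate_output(reply)
```

When validation fails, feed the ValueError message back as a follow-up prompt — "Your last reply was missing these keys: …" — and the model usually self-corrects on the first retry.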
If you want a tool that formats your source files into clean, structured prompt blocks automatically, the AI Prompt Formatter does exactly that — paste in messy files and get perfect XML-wrapped prompt inputs in seconds.
When to Use Each Layer (Without Overcomplicating)
Keep XML for boundaries, Markdown for clarity, and JSON for strictness. If you’re tempted to add more tags, you probably need better steps — not more structure.
| Layer | What it does best | When to use it |
|---|---|---|
| XML | Segments tasks and roles | Always — to define <role>, <context>, <instructions>, <constraints>, <output> |
| Markdown | Human-readable rules and steps | Within XML tags to format bullets, headings, and emphasis |
| JSON | Machine-validated inputs and outputs | For <data> and <final_data_json> when integrating with code or pipelines |
Do this:
<constraints>
- Style: Casual and conversational
- Length: Around 500 words
- Avoid: Jargon, passive voice, filler phrases
</constraints>
Not this:
<!-- TOO COMPLICATED -->
<configuration>
<parameters>
<param name="style" value="casual"/>
<param name="length" value="500"/>
</parameters>
</configuration>
The first version is clear to the AI and to the human editing the prompt next Tuesday. The second is XML cosplay. Use the simplest structure that gets the job done.
Common Pitfalls and Fast Fixes
| Pitfall | Why it hurts | Fix in your template |
|---|---|---|
| Vague roles | Unstable tone and scope | Write a one-sentence <role> with persona and goal |
| Wall-of-text context | Model misses constraints buried in paragraphs | Bullet the <context> with audience, objective, scope |
| No output schema | Inconsistent, unpredictable returns | Define <final_data_json> with exact keys and types |
| Overlong instructions | Hallucinated or skipped steps | Use 3–5 numbered steps, each tied to one output element |
| Doing everything in one prompt | Overwhelmed model, shallow results | Break complex tasks into chained prompts (output of one feeds the next) |
| Starving the AI of context | Generic or wrong answers | Paste your actual source data — the AI can’t read your mind |
That last one catches more people than anything. You know your business inside and out. The AI doesn’t. If you’re asking it to write a web design proposal but you haven’t told it who the client is, what they sell, or what their budget looks like — don’t be surprised when the output sounds like it was written by a stranger. Because it was.
How to Adapt the Template to Any Task
The template structure never changes. The content inside it does. Here are four questions that make customization fast:
1. “What expert would I hire for this?”
The answer becomes your <role>. Writing a blog post? You’d hire a content strategist. Reviewing code? A senior developer. Pricing a new web design service? A freelance business consultant. Be specific — “You are a freelance pricing strategist who specializes in web design services” outperforms “You are a helpful assistant” every single time.
2. “What would I tell that expert on day one?”
That’s your <context>. The background, the audience, the goal, the boundaries. Everything you’d cover in a 10-minute briefing before handing someone the project.
3. “What are the 3–5 concrete deliverables?”
Those are your <instructions>. Not “analyze the data.” Instead: “1. Identify the top 3 underperforming pages by impressions vs. clicks. 2. Recommend specific title tag changes for each. 3. Prioritize by estimated traffic impact.” Concrete. Numbered. Tied to outputs.
4. “What format do I need the result in?”
That’s your <output>. A table? A bullet list? A narrative report? Both a human summary and a JSON object for your CMS? Spell it out.
Here’s the thing — I learned this the hard way. Early on I had ChatGPT generate a fully working WordPress shortcode using the Google Developer API. It was impressive. It worked. But a simple Google Maps embed URL would’ve done the same job in 30 seconds. The AI over-engineered the solution because I hadn’t constrained the <context> enough. I didn’t tell it I wanted the simplest path. Lesson: the better your context and constraints, the less you have to clean up after.
Whether you’re writing a client proposal, debugging a WordPress plugin, or auditing your content strategy, you fill in the same six fields. The structure stays. Only the contents change.
Multi-Source Prompts: Feeding Real Data Into AI
This is where prompting goes from useful to powerful.
Most people stuff everything into one big <context> block. Their strategy notes, their data, their instructions — all crammed together. The AI loses track. Important details get buried. The output suffers.
The fix: use separate named XML blocks for each data source. Think of each block as a labeled folder you’re handing the AI.
<seo_plan> ← your strategy document
<blog_post> ← the content to work on
<search_queries> ← performance data from Search Console
<story_bank> ← personal anecdotes to weave in
<task> ← what you actually want done
Each block is self-contained. The AI can reference <seo_plan> when it needs strategic direction and <search_queries> when it needs performance data — without the two bleeding together.
Here’s why this matters in practice: when I refresh a blog post now, I feed ChatGPT five separate data sources in one prompt. My SEO strategy. My internal linking rules. The current published content. Search Console performance data. And my personal story bank. The AI gets everything it needs, organized and labeled. And because each source is in its own XML block, I can swap out just the <blog_post> and rerun the same prompt for a different article. No rewiring. No rewriting the whole prompt.
This multi-source approach is literally how the content workflows behind this site operate. The AI article generator I built uses chain prompting and structured data blocks to turn a topic into a publish-ready article — same principle, scaled up.
When you see examples below using shorthand like <codebase>, that represents the full <codebase>YOUR_COMPLETE_CODE_FILES</codebase> block. It’s a notation style that keeps examples scannable while reminding you that real, substantial data lives inside those tags.
Real Examples You Can Copy
The template adapts to any workflow. Here are four examples pulled directly from tasks I run for my freelance web design business. Every one of these started as the universal template above — just with different contents in the fields.
Example 1: Content Strategy Audit
Scenario: You’ve got a WordPress blog with 50+ published posts, but you’re not sure what’s working, what’s missing, or where to focus. You export your blog data and Google Search Console reports and need a strategic audit.
<published_posts>
[Full CSV of your blog titles, URLs, and meta descriptions]
</published_posts>
<search_console_pages>
[Pages performance report — impressions, clicks, CTR, position]
</search_console_pages>
<search_console_queries>
[Query-level report — what keywords drive traffic]
</search_console_queries>
<task>
You are a strategic content marketing manager specializing in
WordPress businesses.
Analyze the three data sources above and deliver:
1. Content cluster analysis — group posts into themes, identify
pillar opportunities
2. Content gap identification — topics where I have authority
but incomplete coverage
3. Top 10 SEO quick wins — internal linking fixes, refresh
candidates, title/meta optimization
4. Content repurposing plan — high-performing posts to convert
into videos, email courses, or downloadable guides
5. Keyword opportunity analysis — page 2–3 rankings that could
reach page 1, high-impression/low-CTR terms needing
better titles
Be specific. Reference actual post titles, URLs, and metrics.
Make recommendations tactical and immediately actionable.
</task>
What makes this work: Three separate data blocks feed one comprehensive task. The AI doesn’t have to guess what your “blog data” means — it’s clearly segmented. The five numbered deliverables in <task> tie directly to the outputs you need. And the closing constraint — “Be specific. Reference actual post titles” — prevents the generic, hand-wavy analysis that wastes your time.
This is the exact structure I used to audit the content strategy for this site. The output mapped every post into clusters, flagged the biggest SEO opportunities, and built the roadmap I’m executing right now.
Example 2: Blog Post Refresh Brief
Scenario: You’ve identified a blog post that needs updating — it’s ranking on page 5 for a high-value keyword and the content is stale. You need to create a detailed task brief so the refresh hits every mark: SEO, internal linking, brand voice, and conversion.
<seo_plan>
[Your content strategy document with keyword targets
and cluster priorities]
</seo_plan>
<internal_linking_strategy>
[Linking rules: how many links, what types,
anchor text guidelines, sitemap]
</internal_linking_strategy>
<blog_post>
[The current published version of the article
to be refreshed]
</blog_post>
<search_console_queries>
[Query performance data for keyword opportunities]
</search_console_queries>
<story_bank>
[Collection of personal anecdotes with emotional arcs,
lessons, and proof points]
</story_bank>
<task>
You are creating a detailed task brief for a copywriter who
will refresh the blog post in <blog_post>.
Generate a complete task description that includes:
1. Context — why this post is being refreshed, its strategic
value, relevant performance data
2. Target keywords — primary, secondary, and long-tail
from <seo_plan> and <search_console_queries>
3. Target personas — who reads this, what they need
4. What to preserve vs. what to fix in the current version
5. Proposed outline with H2/H3 structure, section guidance,
and word count targets
6. Internal linking requirements per <internal_linking_strategy>
7. Where stories from <story_bank> could naturally fit
The copywriter should be able to read this document and
execute without back-and-forth.
</task>
What makes this work: Five data sources. One task. The <seo_plan> gives strategic direction. The <internal_linking_strategy> enforces linking standards. The <blog_post> shows what exists. The <search_console_queries> reveal keyword opportunities. And the <story_bank> ensures the refreshed post has authentic voice — not just SEO keywords.
This is a prompt that creates prompts. Meta-level productivity. You run it once and get a complete creative brief that any writer can follow. When I need to get web design clients through content marketing, this is the system that makes every blog post refresh count.
Example 3: Personal Story Extraction
Scenario: You have raw content — blog drafts, email newsletters, podcast transcripts — and you want to mine it for reusable personal stories that strengthen future content. Not every anecdote is worth saving. You need a filter.
<content>
[Raw text — emails, blog drafts, transcripts,
journal entries]
</content>
<story_bank>
[Your existing collection of extracted stories,
so you don't duplicate]
</story_bank>
<task>
Scan <content> and extract only the most bankable personal
stories using these strict filtering criteria:
**BANKABLE STORY TEST — Must pass ALL five:**
1. Universal Appeal: addresses a challenge 70%+ of the
target audience faces
2. Emotional Resonance: contains a clear emotional
transformation others relate to
3. Reusable Lesson: core insight applies across different
projects, clients, or decisions
4. Storytelling Value: has narrative tension, stakes,
or surprise
5. Broad Context: lesson works across industries,
not just one narrow scenario
**REJECT stories that are:**
- Step-by-step process explanations disguised as stories
- Technical troubleshooting with no broader business lesson
- Hyper-specific tool details that won't age well
- Obvious best practices everyone already knows
For each story that passes, ask: "Could I tell this story
to illustrate a point in 3 different blog posts about
different topics?" If no, reject it.
Only return NEW stories not already in <story_bank>.
</task>
What makes this work: The <constraints> section (built into the task here) does the heavy lifting. The five-filter “Bankable Story Test” prevents the AI from surfacing every mildly interesting anecdote. The rejection criteria act as negative constraints — equally important. And checking against the existing <story_bank> ensures no duplicates.
This is a masterclass in using constraints to prevent garbage-in, garbage-out. Without those filters, you’d get 30 “stories” that are really just how-to steps with a first-person pronoun.
Example 4: Codebase-to-Tutorial Pipeline
Scenario: You have a working codebase — a web app, a WordPress plugin, a tool — and you want to turn it into a step-by-step YouTube tutorial. The codebase is complex. A single prompt can’t handle the whole thing. So you chain three prompts together.
Prompt 1: Analyze the Architecture
<codebase>
[Complete code files]
</codebase>
<readme>
[Project documentation]
</readme>
<task>
Reverse-engineer this codebase for tutorial creation:
1. Identify the architectural pattern and key dependencies
2. Map 3–5 natural "completion points" where the code runs
and shows meaningful progress
3. Rate complexity (1–10) and flag concepts needing
special explanation
4. Suggest areas to simplify for a 15–30 minute tutorial
</task>
Prompt 2: Create the Build Sequence
<codebase>
<architecture_analysis>
[Output from Prompt 1]
</architecture_analysis>
<task>
Transform the analysis into 3–5 tutorial milestones:
- Each milestone shows visible, testable progress
- Follow dependency requirements (nothing breaks mid-build)
- Include time allocation, files created, and testable outcome
- Specify simplification strategy if the original is too complex
</task>
Prompt 3: Generate Copy-Paste Code Snippets
<codebase>
<build_sequence>
[Output from Prompt 2]
</build_sequence>
<task>
For each milestone, create:
1. Setup instructions (folder structure, commands)
2. 3–5 copy-pasteable code chunks with file paths
3. Brief explanation of what each chunk does
4. Demo moment — when to test and what to expect
5. Transition cue to the next milestone
</task>
What makes this work: Chaining. Each prompt’s output becomes the next prompt’s input. Prompt 1 produces <architecture_analysis>. That feeds into Prompt 2. Prompt 2 produces <build_sequence>. That feeds into Prompt 3. The universal template structure stays the same at every stage — only the data blocks and instructions change.
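In code, the chain is just three calls where each output is interpolated into the next prompt. A minimal sketch — `llm` here is a hypothetical callable standing in for whatever API client you use, and the task text is abbreviated from the prompts above:

```python
def run_chain(llm, codebase: str, readme: str) -> str:
    """Three chained prompts: each output becomes the next prompt's input."""
    # Prompt 1: analyze the architecture
    analysis = llm(
        f"<codebase>\n{codebase}\n</codebase>\n<readme>\n{readme}\n</readme>\n"
        "<task>\nReverse-engineer this codebase for tutorial creation.\n</task>"
    )
    # Prompt 2: turn the analysis into a build sequence
    sequence = llm(
        f"<codebase>\n{codebase}\n</codebase>\n"
        f"<architecture_analysis>\n{analysis}\n</architecture_analysis>\n"
        "<task>\nTransform the analysis into 3-5 tutorial milestones.\n</task>"
    )
    # Prompt 3: generate copy-paste snippets per milestone
    return llm(
        f"<codebase>\n{codebase}\n</codebase>\n"
        f"<build_sequence>\n{sequence}\n</build_sequence>\n"
        "<task>\nFor each milestone, create copy-pasteable code chunks.\n</task>"
    )
```

Because each stage is a plain function of the previous output, you can inspect or hand-edit the intermediate results before passing them along — useful when Prompt 1 gets the architecture slightly wrong.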
This is structured prompting at scale. I used a similar chain when I built a web app with AI and zero coding experience. Screenshots and spreadsheet data went in. A working application came out. The chain pattern turned what would’ve been weeks of learning into an afternoon of building.
Ship Fast: Your Prompt-Building Checklist
- Define <role> in one sentence — include persona and goal
- Bullet your <context> — audience, objective, scope (no paragraphs)
- Number your <instructions> — 3–5 steps, each tied to one output element
- Put raw data in <data> as JSON; define your <final_data_json> schema
- Set must-do and must-avoid rules under <constraints>
- Request both a human report and a machine-readable object when you need both
- For multi-source tasks, use separate named XML blocks — one per data source
- Version your templates per task type — save what works, iterate what doesn’t
Frequently Asked Questions
**Does this template work with both ChatGPT and Claude?**
Yes. Both ChatGPT and Claude handle XML tags, Markdown formatting, and JSON inputs without issues. Claude is especially strong with XML structure — it was practically designed for it. Gemini and other LLMs work fine too. The template is model-agnostic because it uses open standards, not platform-specific tricks.
**Do I need the JSON layers in every prompt?**
No. Skip `<data>` and `<final_data_json>` for simple tasks like drafting an email or brainstorming ideas. Use them when you’re feeding structured data (a spreadsheet, a list of products, search console metrics) or when you need machine-readable output that plugs into another tool. For most creative and writing tasks, the XML + Markdown layers are enough.
**How many XML tags should I use?**
Three to five is the sweet spot. `<role>`, `<context>`, `<instructions>`, `<constraints>`, and `<output>` cover 90% of tasks. Add `<data>` and `<final_data_json>` when structured data is involved. More tags doesn’t mean better results. More *clarity* in fewer tags wins every time.
**Does this work for image generation prompts?**
The template is designed for text-based LLMs. For image generation prompts (Midjourney, DALL-E, Stable Diffusion), the core principle still applies — structure beats randomness — but the format shifts. Focus on `<context>` for scene description and `<constraints>` for style, aspect ratio, and mood. Drop the JSON layers unless you’re building an automation pipeline around image generation.
One Template. Infinite Applications.
You’ve got the framework now. XML for structure. Markdown for clarity. JSON for data. Six fields that adapt to any task — from auditing a blog to turning a codebase into a YouTube tutorial.
The shift isn’t about learning “prompt engineering.” It’s about going from trying random prompts and hoping for the best to running a system that produces reliable results every time. Same template. Different contents. Predictable output.
Here’s your next move:
Try it right now. Copy the universal template above. Pick one task you’ve been putting off — a client proposal, a content outline, a code review. Fill in the six fields. Run it. See what comes back.
Want all the examples in one place? Download the free Prompt Template Cheatsheet — the universal template plus all four real-world examples, formatted and ready to paste. Grab your copy here. [Link to email opt-in]
If you work with WordPress, check out our 13 ChatGPT prompts built specifically for WordPress freelancers — each one follows the structured format you learned today. And if you want pre-built system prompts that transform ChatGPT into a team of AI specialists for web design, development, and content — those are ready to go too.
Stop guessing. Start structuring. The template is yours.

