LLM Prompting: Getting Effective Output

November 18, 2024 • AI • Technical • Prompting

LLM prompting techniques


Great prompts don’t “trick” a model—they reduce ambiguity, expose the real task, and set guardrails so the output is usable without detective work. This guide gives you the smallest set of principles that consistently produce reliable work from modern LLMs, followed by two concrete use cases: a developer workflow that ships and an SEO brief that ranks.

Principles that actually matter

Reusable prompt template

Role: [Who are you?]
Goal: [One sentence outcome]
Inputs: [Bullet list of concrete inputs]
Constraints: [Rules, tone, limits]
Output: [Exact format to return]
Checks: [Self-verification or acceptance tests]
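
Because every field is plain text, the template is easy to fill programmatically. A minimal TypeScript sketch (the PromptSpec type and buildPrompt helper are illustrative, not from any library):

type PromptSpec = {
  role: string;
  goal: string;
  inputs: string[];
  constraints: string[];
  output: string[];
  checks: string[];
};

// Renders the six-field template above into a single prompt string.
function buildPrompt(spec: PromptSpec): string {
  const bullets = (items: string[]) => items.map((i) => `- ${i}`).join("\n");
  return [
    `Role: ${spec.role}`,
    `Goal: ${spec.goal}`,
    `Inputs:\n${bullets(spec.inputs)}`,
    `Constraints:\n${bullets(spec.constraints)}`,
    `Output:\n${bullets(spec.output)}`,
    `Checks:\n${bullets(spec.checks)}`,
  ].join("\n");
}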

Use case 1 — Developer workflow: reproduce and fix a bug

We want an answer we can act on: reproduction steps, likely cause, and a minimal patch. Here’s a compact prompt that routinely yields diffs you can review.

Role: Senior engineer familiar with React and Next.js.
Goal: Reproduce and propose a minimal fix for the described bug.
Inputs:
- Error: TypeError: Cannot read properties of undefined (reading 'map')
- Context: occurs on /reports when 'data' is null from the API
- Code excerpt: ReportsList.tsx (lines 10-60)
Constraints:
- Prefer guard clauses over deep nesting
- No unsafe casts; keep types strict
Output: 
- Short repro steps
- Root cause summary (1-2 lines)
- Minimal diff (PATCH format) limited to the touched file
Checks:
- State why the fix cannot break empty states

Why this works: the model knows the finish line (diff), the safety rule (no unsafe casts), and must justify the change (self‑check). You can paste code excerpts or a gist link in the Inputs block.
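
For reference, here is a hypothetical sketch of the kind of minimal, guard-clause fix the prompt should elicit. The component internals below are invented; only the file name, the error, and the rules come from the Inputs and Constraints blocks.

type Report = { id: string; title: string };

// ReportsList.tsx (sketch): a guard clause handles the null API response,
// so 'data.map' is never called on null or undefined, and the empty state
// still renders.
export function ReportsList({ data }: { data: Report[] | null }) {
  if (!data || data.length === 0) {
    return <p>No reports yet.</p>;
  }

  return (
    <ul>
      {data.map((report) => (
        <li key={report.id}>{report.title}</li>
      ))}
    </ul>
  );
}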

Use case 2 — SEO: on‑page brief that avoids fluff

Instead of asking for a whole article, generate an on‑page brief your team can execute. This keeps control over voice while forcing evidence and structure. For more on how AI content performs in search rankings, see AI vs SEO: Ranking in Google 2025.

Role: Senior editor. Topic expert: [TOPIC].
Goal: Produce an on-page brief for a high-intent article.
Inputs:
- Primary query: [KEYWORD]
- Audience: [WHO]
- Evidence: [Your notes, screenshots, results]
Constraints:
- No generic claims; require proof or examples
- H2/H3 outline with 6–10 sections; each section lists evidence to include
Output:
- Title (<=60 chars) and meta description (<=155 chars)
- H2/H3 outline with bullet evidence per section
- Internal link suggestions (3-5) from our site
Checks:
- Include a “Missing evidence” list we must gather before publishing

Pair this with RAG or your notes. The model’s job is structure and gap-finding; your job is supplying the real proof. This consistently beats generic drafts for rankings and trust.
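
The constraints in the brief double as acceptance checks. A small sketch, assuming the model returns the brief as JSON with these (hypothetical) field names:

type Brief = {
  title: string;
  metaDescription: string;
  sections: { heading: string; evidence: string[] }[];
  missingEvidence: string[];
};

// Flags any violation of the constraints stated in the prompt.
function checkBrief(brief: Brief): string[] {
  const issues: string[] = [];
  if (brief.title.length > 60) issues.push("Title exceeds 60 characters");
  if (brief.metaDescription.length > 155) issues.push("Meta description exceeds 155 characters");
  if (brief.sections.length < 6 || brief.sections.length > 10) issues.push("Outline needs 6-10 sections");
  for (const section of brief.sections) {
    if (section.evidence.length === 0) issues.push(`Section "${section.heading}" lists no evidence`);
  }
  return issues;
}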

Anti‑patterns to avoid

Checklist before you hit enter

FAQs

Which model? Use the strongest you can for reasoning; smaller models work with tighter constraints.

Few‑shot or zero‑shot? Start zero‑shot with structure; add 1–2 concise exemplars if the task is niche.
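
As a sketch of what “1–2 concise exemplars” looks like in the common chat-message format (the exemplar content here is invented):

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Zero-shot is just the structured prompt; few-shot prepends one worked pair.
const messages: ChatMessage[] = [
  { role: "system", content: "Senior engineer. Return repro steps, root cause, and a minimal diff." },
  // Exemplar pair (invented) showing the expected answer shape:
  { role: "user", content: "Error: avatar is undefined in UserCard on /profile." },
  { role: "assistant", content: "Repro: open /profile with no avatar set.\nRoot cause: avatar read without a guard.\nDiff: early return rendering a placeholder." },
  // The real task goes last:
  { role: "user", content: "Error: TypeError: Cannot read properties of undefined (reading 'map') on /reports ..." },
];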

How to measure quality? Define acceptance tests per task (e.g., lints/tests for code, entity/on‑page checks for SEO).
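
For the bug-fix prompt, the acceptance test can be as simple as verifying the response shape before anyone reviews it. A sketch (the format checks are assumptions about how the answer comes back):

// Flags structural problems in the model's bug-fix response.
function checkBugFixResponse(response: string): string[] {
  const issues: string[] = [];
  if (!/repro/i.test(response)) issues.push("No repro steps found");
  if (!/root cause/i.test(response)) issues.push("No root cause summary found");
  const touchedFiles = [...response.matchAll(/^\+\+\+ .*\/(\S+)$/gm)].map((m) => m[1]);
  if (touchedFiles.length === 0) issues.push("No diff found");
  if (new Set(touchedFiles).size > 1) issues.push("Diff touches more than one file");
  return issues;
}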

Real‑world use case: Ship a reliable bug‑fix prompt

Reproduce a bug and ask for a minimal diff.

  1. State intent and constraints
  2. Provide inputs + code lines
  3. Ask for a minimal patch

Expected outcome: actionable output with repro steps and a diff you can review.

Implementation guide

  1. Write the role, goal, inputs (error, context, code lines).
  2. State constraints (no unsafe casts, strict types).
  3. Ask for: repro steps, root cause, and a minimal diff.

Prompt snippet

Role: Senior engineer… Goal: Reproduce and fix… Output: repro, root cause (1–2 lines), minimal diff.
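
To run it end to end, pass the assembled prompt to whatever client you use. A minimal sketch with the OpenAI Node SDK (the model name is a placeholder; any provider with a chat endpoint works the same way):

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function runBugFixPrompt(prompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // placeholder; use the strongest reasoning model available
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0].message.content ?? "";
}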

