Prompt Engineering for Developers: Techniques That Actually Work
Master prompt engineering techniques for LLMs — few-shot prompting, chain-of-thought, system prompts, and iteration strategies.
Prompt engineering is the new skill frontier. Write a vague prompt, get vague results. Write a precise, structured prompt, get impressive results. The difference between mediocre and excellent AI features often comes down to prompt quality, not model quality.
Most developers treat prompts casually. "Generate a function that does X." Then they're surprised when the output is mediocre. Prompt engineering is a discipline with proven techniques that compound.
Foundational Principles
1. Clarity Over Brevity
❌ "Write code for auth"
✅ "Write a Next.js 15 API route that:
- Validates email format using zod
- Hashes password with bcrypt
- Creates user in Postgres via Prisma
- Returns JWT token valid for 24 hours
- Handles duplicate email error gracefully"
The second prompt takes longer to write, but it generates code that matches your stack, your validation rules, and your error-handling expectations instead of generic boilerplate.
2. Context is Everything
❌ "How do I deploy this?"
✅ "I have a Next.js 15 app that:
- Uses React Server Components
- Connected to PostgreSQL via Prisma
- Has environment variables for API keys
- Built with Tailwind CSS and TypeScript
I want to deploy to Vercel. What are the deployment steps?"
The second prompt tells the AI exactly what to optimize for.
3. Examples Trump Explanation
❌ "Explain TypeScript generics"
✅ "Explain TypeScript generics using these examples:
// Example 1: Before (any type)
function getFirstElement(arr: any[]) {
  return arr[0];
}

// Example 2: After (generic)
function getFirstElement<T>(arr: T[]): T {
  return arr[0];
}
Why is Example 2 better? Show how type safety improves."
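To make the payoff concrete, the generic version can be exercised at a couple of call sites. This is an illustrative sketch; the `firstName` and `firstScore` variables are hypothetical names, not part of the prompt above:

```typescript
// The generic version from the example: the element type
// flows from the argument through to the return value.
function getFirstElement<T>(arr: T[]): T {
  return arr[0];
}

// Illustrative call sites: the compiler infers T for us.
const firstName = getFirstElement(['Ada', 'Grace']); // inferred as string
const firstScore = getFirstElement([97, 82]);        // inferred as number

// With the any[] version both results would be `any`, so a typo
// like firstName.toUppercase() would only fail at runtime.
console.log(firstName.toUpperCase(), firstScore + 1);
```

Asking the model to explain *around* a concrete before/after pair like this anchors the answer in type inference rather than abstract definitions.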
Advanced Techniques
1. Few-Shot Prompting
Show examples of desired output format:
// app/api/code-review/route.ts
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
const systemPrompt = `You are a senior code reviewer. Review code and provide feedback in this exact format:
## Issues Found
1. [Issue Type] - [Severity: Critical/High/Medium/Low]
Description: [What's wrong]
Fix: [How to fix]
## Improvements (Optional)
1. [Suggestion]
Example review:
User Code:
\`\`\`javascript
function getData(id) {
  fetch('/api/data/' + id).then(r => r.json()).then(d => console.log(d));
}
\`\`\`
Review:
## Issues Found
1. No Error Handling - Critical
Description: Fetch errors are silently ignored
Fix: Add .catch() handler or use try/catch with async/await
2. String Concatenation - High
Description: URL concatenation is vulnerable to injection
Fix: Use URL constructor or template literals with validation
3. Console.log in Production - Medium
Description: Debugging code left in production
Fix: Use proper logging library or remove before deployment
---
Now review the user's code:`;
export async function POST(request: Request) {
  const { code } = await request.json();

  const response = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    system: systemPrompt,
    messages: [
      {
        role: 'user',
        content: `Review this code:\n\`\`\`javascript\n${code}\n\`\`\``,
      },
    ],
  });

  return Response.json({
    review: response.content[0].type === 'text' ? response.content[0].text : '',
  });
}
2. Chain-of-Thought Prompting
Ask the model to "think through" the problem step-by-step:
❌ "What's the time complexity of this algorithm?"
✅ "Let's work through this step-by-step.
Here's my algorithm:
[code]
Walk me through:
1. What does each loop do?
2. How many iterations does each loop run?
3. Are there any nested loops?
4. What's the final time complexity?
Explain your reasoning for each step."
This technique improves accuracy significantly:
const prompt = `Let's solve this React performance problem step by step.
Problem: A list of 1000 items is slow to render
Current code:
\`\`\`jsx
function ItemList({ items }) {
  return (
    <div>
      {items.map(item => (
        <ItemCard key={item.id} item={item} onClick={handleClick} />
      ))}
    </div>
  );
}
\`\`\`
Step 1: Analyze the problem
- What could cause slowness?
- Are there prop drilling issues?
- Is re-rendering happening unnecessarily?
Step 2: Identify root causes
Step 3: Propose optimizations
- Option 1: [specific optimization]
- Option 2: [specific optimization]
- Which is best and why?
Step 4: Show the optimized code`;
3. Role-Based Prompting
Define the "personality" and expertise:
const systemPrompt = `You are an expert React architect with 10+ years building high-scale applications.
Your specialties:
- Performance optimization
- Scalable component design
- Testing strategies
- Mentoring junior developers
When responding:
- Explain the "why" behind recommendations
- Mention tradeoffs and constraints
- Include production-tested patterns
- Share lessons from real applications`;
4. Constraint-Based Prompting
Set explicit boundaries:
"Generate a React hook that:
- Handles loading, error, and success states
- Uses TypeScript with proper typing
- Works with React 18+
- Includes JSDoc comments
- Can be tested easily
- Doesn't exceed 50 lines
- Uses no external dependencies except React"
The constraints force the model to be thoughtful about implementation.
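One way to make a constraint like "can be tested easily" concrete before prompting is to sketch the state machine the hook must implement as a pure reducer. This is an illustrative sketch under the constraints above, not the generated hook itself; the `AsyncState` and `asyncReducer` names are assumptions:

```typescript
// Illustrative sketch: the loading/error/success state machine the
// prompted hook would manage, written as a pure reducer so it can be
// unit-tested without rendering anything.
type AsyncState<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'error'; error: Error }
  | { status: 'success'; data: T };

type AsyncAction<T> =
  | { type: 'start' }
  | { type: 'reject'; error: Error }
  | { type: 'resolve'; data: T };

function asyncReducer<T>(state: AsyncState<T>, action: AsyncAction<T>): AsyncState<T> {
  switch (action.type) {
    case 'start':
      return { status: 'loading' };
    case 'reject':
      return { status: 'error', error: action.error };
    case 'resolve':
      return { status: 'success', data: action.data };
  }
}
```

Inside the eventual hook this reducer would sit behind React's `useReducer`; keeping it pure is what satisfies the testability constraint.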
Iteration and Refinement
Great prompts aren't written in one shot. Iterate:
// Iteration 1: Initial request
Prompt: "Generate a form component"
Result: Generic form that doesn't fit my needs
// Iteration 2: Add constraints
Prompt: "Generate a React form component for Next.js 15 that:
- Uses React Hook Form
- Validates with Zod
- Has accessibility features
- Shows error messages inline"
Result: Better, but layout is off
// Iteration 3: Show example
Prompt: "Generate a form component. Here's an example of the desired output:
[Show example form]
Use this styling and structure. Validate with Zod."
Result: Exactly what I needed
Real-World Prompt Framework
Build a structure for consistent, high-quality prompts:
interface PromptTemplate {
  role: string; // "You are a React expert..."
  task: string; // "Generate a TypeScript utility function..."
  context: string; // "This is for a Next.js 15 application..."
  requirements: string[]; // Array of "must haves"
  constraints: string[]; // Array of "must nots"
  examples: string; // Few-shot examples
  output_format: string; // "Return as TypeScript code block..."
}
function buildPrompt(template: PromptTemplate): string {
  return `You are ${template.role}.
Task: ${template.task}
Context: ${template.context}
Requirements:
${template.requirements.map((r) => `- ${r}`).join('\n')}
Constraints:
${template.constraints.map((c) => `- ${c}`).join('\n')}
Examples:
${template.examples}
Output Format: ${template.output_format}`;
}
// Usage
const httpClientPrompt = buildPrompt({
  role: 'an expert TypeScript developer specializing in API integrations',
  task: 'Create a reusable HTTP client with proper error handling',
  context: 'Next.js 15 app using Vercel deployments',
  requirements: [
    'Supports GET, POST, PUT, DELETE methods',
    'Automatic retry on failure (max 3 attempts)',
    'Request/response logging',
    'JSON serialization',
  ],
  constraints: [
    'No external HTTP libraries (use native fetch)',
    "Don't include express or server frameworks",
    'Must work in browser and Node.js',
  ],
  examples: `
// Usage
const client = new HttpClient({ baseURL: 'https://api.example.com' });
const users = await client.get('/users');
const created = await client.post('/users', { name: 'John' });
`,
  output_format: 'TypeScript class with type annotations and JSDoc',
});
Avoiding Common Mistakes
❌ Vague Prompts
"Fix this code"
"Make it better"
"Optimize for performance"
✅ Instead: Specify what "fix" means, what "better" looks like, which performance metrics matter.
❌ Ambiguous Context
"I have a component that's slow"
"Add authentication"
"Users are confused"
✅ Instead: Explain the tech stack, current approach, and constraints.
❌ Ignoring the Model's Limitations
Asking Claude for the current weather (it has no access to real-time data)
Asking GPT for your private codebase (it wasn't trained on it)
✅ Instead: Provide all necessary context in the prompt itself.
❌ Not Validating Output
Use generated code without testing
Trust all factual claims
Copy-paste without review
✅ Instead: Always verify output, test code, and fact-check claims.
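Part of that verification can be mechanical. As a hedged sketch (the `extractCodeBlock` helper and its fence-matching regex are illustrative, not a standard API), a small guard can reject model responses that contain prose but no parseable code block:

````typescript
// Illustrative sketch: pull the first fenced code block out of model
// output and fail loudly if there isn't one.
function extractCodeBlock(output: string): string {
  // The triple-backtick fence, written via escapes to keep this
  // example self-contained.
  const fence = '\u0060\u0060\u0060';
  const pattern = new RegExp(fence + '(?:\\w+)?\\n([\\s\\S]*?)' + fence);

  const match = output.match(pattern);
  if (!match) {
    throw new Error('Model output contained no fenced code block');
  }
  return match[1].trim();
}
````

A check like this catches the common failure mode where the model replies with an explanation instead of code; the extracted block still needs review and real tests before it ships.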
Prompt Anti-Patterns to Avoid
| Anti-Pattern | Why It Fails | Better Approach |
|---|---|---|
| "Do everything" | Too vague, output is generic | Break into specific tasks |
| Long conversation threads | Key context gets buried or falls out of the window | Restate key context in each request |
| Passive voice | Instructions read as ambiguous | Active, direct instructions |
| Overexplaining | Adds noise | Clear, concise requirements |
| No examples | Model guesses at intent | Show examples of desired output |
Tools for Prompt Engineering
// Prompt testing harness (assumes an Anthropic SDK `client` instance is in scope)
async function testPrompt(
  prompt: string,
  testCases: Array<{ input: string; expected: string }>
) {
  const results = [];

  for (const test of testCases) {
    const response = await client.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024, // required by the Messages API
      messages: [{ role: 'user', content: prompt + '\n\nInput: ' + test.input }],
    });

    const output = response.content[0].type === 'text' ? response.content[0].text : '';
    const passes = output.includes(test.expected);

    results.push({
      input: test.input,
      expected: test.expected,
      actual: output,
      passes,
    });
  }

  return results;
}
// Usage
const results = await testPrompt(myPrompt, [
  { input: 'fibonacci(5)', expected: '[1, 1, 2, 3, 5]' },
  { input: 'fibonacci(1)', expected: '[1]' },
]);

console.log(`Passed: ${results.filter((r) => r.passes).length}/${results.length}`);
Conclusion
Prompt engineering is the difference between AI that's interesting and AI that's useful. Invest time in learning these techniques, and you'll unlock AI's potential for your applications.
The best prompt engineers aren't the ones who memorize tricks—they're the ones who iterate, test, and constantly refine based on results.
Start implementing these techniques today, and watch the quality of AI-generated solutions improve dramatically.