
Context Engineering: Why Your AI Builds Break (And How to Fix Them)

Everyone talks about prompt engineering. Nobody talks about why your AI coding tools produce perfect code one day and total garbage the next. The answer is context — and most vibe coders are getting it completely wrong.

Diagram showing the four layers of context engineering for AI-assisted coding

You open Cursor on Monday morning. You type "build me a user settings page with email notifications toggle, dark mode switch, and account deletion." Claude spits out a beautiful React component. Tailwind classes match your design system. The state management follows your existing patterns. It even adds the right API calls. You barely touch a thing before it ships.

Tuesday, you ask for something similar. "Build me a billing settings page with plan upgrade, payment method, and invoice history." The AI gives you a component that looks like it belongs to a different project. Wrong colors. Class names you have never used. State management pattern that contradicts everything in your codebase. Inline styles mixed with Tailwind. You spend two hours fixing what should have taken five minutes.

What changed between Monday and Tuesday?

Not your prompt. Not the AI model. Not your skill level.

The context changed.

On Monday, you happened to have the right files open. Your existing settings components were in the AI's context window. It could see your patterns, your conventions, your design tokens. On Tuesday, you had different files open. The AI was flying blind, so it guessed. And guessing is what produces garbage.

This is the single biggest problem in AI-assisted development, and almost nobody is talking about it. Everyone is obsessed with prompt engineering — the craft of writing better instructions. But your instructions are not the problem. The information surrounding those instructions is the problem. That information is called context, and learning to control it is called context engineering.

It is the difference between AI that feels like a senior engineer who knows your codebase and AI that feels like a contractor who just showed up and has never seen your project before.

What Is Context Engineering?

Let me give you a clean definition.

Context engineering is the practice of structuring and delivering information so that AI tools consistently produce correct, project-aligned output.

Notice what this is NOT:

  • It is not prompt engineering. Your prompt is the instruction — "build me a settings page." Context is everything else: which files the AI can see, what documentation exists, what examples are available, what constraints apply.
  • It is not RAG (retrieval-augmented generation). RAG is a technical implementation. Context engineering is the discipline of deciding what information matters and how to organize it.
  • It is not "just writing a good README." Although a good README is one piece of the puzzle.

Here is the simplest way to think about it. When you onboard a new developer to your team, you do not just say "build a settings page." You give them access to the codebase. You point them to your component library. You show them an existing page that is similar. You explain your conventions. You share the design file. You tell them which API endpoints to use.

You are engineering their context.

AI tools need the exact same thing. The difference is that a human developer builds up context over weeks and retains it. An AI tool starts from zero every single session. Sometimes every single message. If you do not actively engineer what it knows, you are gambling on whether it will produce good output or bad output.

Stop gambling. Start engineering.

The 4 Layers of Context

After building dozens of projects with AI tools — Cursor, Claude Code, Bolt, Windsurf, Copilot — I have identified four layers of context that determine whether your AI produces good code or trash. Miss any layer and quality drops.

Layer 1: Project Context

This is the foundation. Project context answers the question: what is this project?

It includes:

  • Tech stack (Next.js 14, TypeScript, Tailwind, Prisma, Postgres)
  • Architecture decisions (app router, server components by default, API routes for mutations)
  • Project structure (where things live, how files are organized)
  • Environment information (Node 20, pnpm, deployed on Vercel)

This is what goes in your CLAUDE.md file (for Claude Code), your .cursorrules file (for Cursor), or your README.md (for everything else).

Without project context, the AI does not know if you are using Pages Router or App Router. It does not know if you use Prisma or Drizzle. It does not know if your components are in src/components or app/_components. So it guesses. And when each choice is a coin flip between two plausible options, it guesses wrong about half the time.

Project context is the cheapest, highest-leverage thing you can create. It takes 20 minutes to write and saves hours every single day.

Layer 2: Code Context

Code context answers the question: what code can the AI actually see right now?

This is where most vibe coders get wrecked without realizing it. Every AI tool has a context window — a limit on how much text it can process at once. Claude models offer a 200K-token window; GPT-4 Turbo offers 128K. But "has" and "uses effectively" are different things.

When you open a project in Cursor and start a conversation, the AI sees:

  • The files you have open in your editor
  • Files you explicitly reference with @ mentions
  • Files the tool retrieves automatically based on your query

It does NOT see:

  • Every file in your project
  • Files in closed tabs
  • Your node_modules (thankfully)
  • Files that are too large to fit in context

This means file size matters enormously. A 2,000-line component is much harder for the AI to work with than ten 200-line components. Not because the AI cannot read 2,000 lines — it can. But because one massive file leaves less room for other relevant files in the context window.

The practical rule: If a file is over 300 lines, the AI will produce worse results when editing it. Not because of a hard limit, but because large files push other important context out of the window.
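If you want to enforce that rule rather than eyeball it, a small script can flag oversized files before an AI session. A minimal sketch, assuming a Node project; the 300-line threshold is the rule of thumb above, not a hard model limit, and the function name is my own:

```typescript
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

// Recursively collect source files and report any over a line threshold.
// The 300-line default is a rule of thumb, not a hard model limit.
export function findLargeFiles(
  dir: string,
  threshold = 300,
  exts = [".ts", ".tsx"]
): { file: string; lines: number }[] {
  const results: { file: string; lines: number }[] = [];
  for (const entry of readdirSync(dir)) {
    if (entry === "node_modules" || entry === ".git") continue; // never count dependencies
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      results.push(...findLargeFiles(full, threshold, exts));
    } else if (exts.some((e) => entry.endsWith(e))) {
      const lines = readFileSync(full, "utf8").split("\n").length;
      if (lines > threshold) results.push({ file: full, lines });
    }
  }
  // Largest offenders first
  return results.sort((a, b) => b.lines - a.lines);
}
```

Run it over src/ before a big AI session and split anything it flags.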

Layer 3: Convention Context

Convention context answers the question: how do we do things here?

This is the subtlest layer and the one that separates AI output that looks "close but off" from AI output that looks like a team member wrote it.

Convention context includes:

  • Naming patterns (do you use handleClick or onClick for handlers? UserCard or CardUser?)
  • Component patterns (do you colocate styles? use compound components? prop drilling or context?)
  • State management (local state, Zustand, React Query, server state?)
  • Error handling (try/catch everywhere? Error boundaries? toast notifications?)
  • File organization (one component per file? barrel exports? index files?)

The best way to provide convention context is not to describe it — it is to show it. One real example from your codebase beats ten paragraphs of explanation. This is the "show me an example" pattern, and it is probably the single most effective context engineering technique.

When you tell the AI "follow our existing patterns," that means nothing unless it can see those patterns. When you say "@UserSettings.tsx — follow this pattern for the new BillingSettings page," that means everything.

Layer 4: Task Context

Task context answers the question: what exactly am I building right now?

This is the layer closest to traditional prompt engineering, but it goes beyond the prompt itself. Task context includes:

  • The specific feature or change you are implementing
  • Acceptance criteria (what "done" looks like)
  • Design references (mockups, screenshots, Figma links)
  • API specifications (endpoints, request/response shapes)
  • Edge cases and constraints ("must work offline," "must handle empty state")

Most vibe coders provide task context as a one-line prompt: "add dark mode." That is like telling a contractor "build a bathroom" and walking away. You technically gave an instruction, but the lack of surrounding information guarantees a result you did not want.

Practical Techniques

Theory is useful. Techniques are better. Here are the specific things you should do, starting today.

1. Write a CLAUDE.md / .cursorrules File

This is the single highest-impact thing you can do. It takes 20 minutes and immediately improves every AI interaction in your project.

Here is a real, complete template:

# Project: Knox Hub

## Tech Stack

- Next.js 14 (App Router)
- TypeScript (strict mode)
- Tailwind CSS with custom design tokens
- better-sqlite3 for database
- Deployed on Vercel

## Project Structure

src/
  app/          — Pages and API routes (App Router)
    hub/        — Content hub (articles, guides, templates)
    admin/      — Admin panel (auth-protected)
    api/        — API endpoints
  components/   — Shared React components
  lib/          — Utilities, DB access, auth
public/
  covers/       — Post cover images (SVG)
  templates/    — Downloadable files (.json)

## Architecture Decisions

- Server Components by default. Only use 'use client' when you need interactivity (event handlers, useState, useEffect).
- All database access happens through functions in src/lib/posts.ts. Never import the db object directly in components.
- API routes handle mutations. Server components handle reads.
- Markdown content is rendered with `marked` and sanitized with `sanitize-html`. Support both HTML and Markdown in post content.

## Design System

- Background: #F7F4EE (warm beige)
- Text primary: #1A1A1A
- Accent: #D4622A (orange)
- Use Tailwind classes, not inline styles
- Serif font for headings (font-serif), sans-serif for body
- Border radius: rounded-xl for cards, rounded-lg for inputs
- Spacing: follow 4px grid (p-4, p-6, p-8)

## Naming Conventions

- Components: PascalCase (PostCard.tsx, SkoolPopup.tsx)
- Utilities: camelCase (getPosts, formatDate)
- Files: PascalCase for components, camelCase for utilities
- CSS: Tailwind utility classes only. No CSS modules. No styled-components.
- Database columns: snake_case (created_at, cover_image)
- TypeScript types: PascalCase, exported from relevant lib file

## Component Patterns

- Props interfaces defined above component, named {ComponentName}Props
- Destructure props in function signature
- Use `export default function` for page components
- Use named exports for shared components
- Loading states: skeleton shimmer, not spinners
- Empty states: always handle them with a helpful message

## Common Mistakes to Avoid

- Do NOT use Pages Router patterns (getServerSideProps, etc.)
- Do NOT import from @/lib/db directly — use functions from @/lib/posts
- Do NOT use CSS modules or styled-components
- Do NOT add 'use client' to components that do not need it
- Do NOT use default exports for non-page components

Notice what this file does. It does not explain how Next.js works. The AI already knows that. It explains how THIS project uses Next.js. That distinction is everything.

2. Structure Your Codebase So AI Can Navigate It

Your codebase structure IS context. When files are small, well-named, and logically organized, the AI produces better output — even without explicit instructions.

Bad structure (AI struggles):

src/
  components/
    Dashboard.tsx          (1,847 lines — everything in one file)
    utils.ts               (943 lines — grab bag of unrelated functions)
    types.ts               (612 lines — every type in the project)

Good structure (AI thrives):

src/
  components/
    dashboard/
      DashboardHeader.tsx      (87 lines)
      DashboardStats.tsx       (124 lines)
      DashboardChart.tsx       (156 lines)
      RecentActivity.tsx       (93 lines)
      index.ts                 (4 lines — barrel export)
    settings/
      SettingsLayout.tsx       (45 lines)
      NotificationSettings.tsx (112 lines)
      AccountSettings.tsx      (98 lines)
  lib/
    auth.ts                    (67 lines)
    posts.ts                   (189 lines)
    formatting.ts              (34 lines)
  types/
    posts.ts                   (28 lines)
    users.ts                   (19 lines)

Why does this matter? Three reasons:

  1. Small files fit more easily into the context window. When the AI needs to reference your DashboardStats component, it loads 124 lines — not 1,847.
  2. Clear file names help the AI find relevant code. When you say "build something like DashboardStats," the AI immediately knows which file to look at.
  3. Logical grouping creates implicit convention context. The AI sees that dashboard components live in components/dashboard/ and infers that it should put settings components in components/settings/.

The refactoring effort pays for itself within a week of AI-assisted development.

3. Use @ Mentions Strategically

In Cursor, @ mentions are your most powerful context engineering tool. But most people use them wrong.

Wrong approach:

@src — build a new page

Mentioning an entire directory dumps thousands of lines into context. Most of it is irrelevant. The AI drowns in noise.

Right approach:

@src/components/settings/NotificationSettings.tsx
@src/lib/posts.ts (just the type definitions at the top)

Build a new BillingSettings component following the same pattern as NotificationSettings. Use the Post type as reference for how we structure TypeScript interfaces.

You are giving the AI exactly the reference material it needs. No more, no less. This is surgical context delivery, and it produces dramatically better results.

The order of @ mentions also matters. AI models tend to pay more attention to content near the beginning and end of the context than to the middle. Put the most important reference first.
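Before attaching files, it can also help to sanity-check how much of the window they will consume. A rough sketch using the common characters-divided-by-four approximation; real token counts vary by model and tokenizer, and the function names here are my own:

```typescript
// Very rough token estimate: ~4 characters per token is a common
// rule-of-thumb approximation, not an exact tokenizer.
export function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Given file contents you plan to @ mention (path -> contents),
// report how much of the context window they would occupy.
export function contextBudget(
  files: Record<string, string>,
  windowTokens = 200_000
): { used: number; remaining: number; perFile: Record<string, number> } {
  const perFile: Record<string, number> = {};
  let used = 0;
  for (const [path, contents] of Object.entries(files)) {
    const t = estimateTokens(contents);
    perFile[path] = t;
    used += t;
  }
  return { used, remaining: windowTokens - used, perFile };
}
```

If the estimate eats most of the window, trim the file list before prompting rather than letting the tool silently drop context.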

4. The "Show Me an Example" Pattern

This is the single most effective technique I have found in two years of vibe coding. Instead of describing what you want, show the AI an existing example and say "do it like this."

Without example (hit or miss):

Build a card component for displaying pricing plans.
It should have a title, price, feature list, and CTA button.
Use our design system colors. Make it responsive.
Follow our Tailwind conventions.

With example (consistently good):

@src/components/PostCard.tsx

Build a PricingCard component following the exact same pattern as PostCard. Same structure, same Tailwind approach, same prop interface style. It should display:

- Plan name (like PostCard title)
- Monthly price
- Feature list (array of strings)
- CTA button (like the "Read more" link in PostCard)

The second approach works better because the AI is not interpreting vague instructions — it is adapting a concrete pattern. "Our design system colors" is ambiguous. The actual hex values and Tailwind classes in PostCard.tsx are not.

This pattern works for everything: pages, API routes, database schemas, test files. Always point to an existing example when one exists.

5. Planning Docs as Context

Here is a technique most vibe coders completely miss: write a spec before you code, then feed the spec to the AI.

This is not about bureaucracy. It is about giving the AI task context that goes beyond a one-line prompt. A spec does not need to be formal. It can be a simple markdown document:

## Billing Settings Page

### Purpose

Let users manage their subscription, update payment method, and view invoice history.

### Components Needed

1. CurrentPlan — shows active plan, usage, next billing date
2. PaymentMethod — displays card on file, button to update
3. InvoiceHistory — table of past invoices with download links

### API Endpoints

- GET /api/billing/plan — returns current plan details
- POST /api/billing/upgrade — upgrades plan
- GET /api/billing/invoices — returns invoice list
- GET /api/billing/invoices/:id/pdf — downloads invoice PDF

### Data Shapes

Plan: { name, price, interval, usage, limit, nextBillingDate }
Invoice: { id, date, amount, status, pdfUrl }

### Edge Cases

- User on free plan (no payment method, no invoices)
- Failed payment (show warning banner)
- Plan downgrade (show what they will lose)

### Design Reference

Follow SettingsLayout pattern. Same sidebar, same content width.

Now when you give this spec to the AI along with the relevant reference files, it has complete task context. It knows not just WHAT to build, but WHY, for WHOM, and under what CONSTRAINTS. The output quality difference is enormous.
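Data shapes in a spec pay off immediately because they translate one-to-one into TypeScript interfaces the AI can reuse. A sketch of the shapes above; the spec lists only field names, so every type annotation here is an assumption:

```typescript
// Sketch of the spec's data shapes. Field names come from the spec;
// the types (cents, ISO dates, status union) are assumptions.
export interface Plan {
  name: string;
  price: number;           // assumed: cents per billing interval
  interval: "month" | "year";
  usage: number;
  limit: number;
  nextBillingDate: string; // assumed: ISO 8601 date
}

export interface Invoice {
  id: string;
  date: string;            // assumed: ISO 8601 date
  amount: number;          // assumed: cents
  status: "paid" | "open" | "failed";
  pdfUrl: string;
}

// The spec's free-plan edge case (no payment method, no invoices)
// becomes a one-line guard the UI can branch on.
export function isFreePlan(plan: Plan): boolean {
  return plan.price === 0;
}
```

Putting these interfaces in the spec (or in a lib file the AI can see) means every generated component agrees on the same shapes.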

Before and After: Same Task, Different Context

Let me show you a real comparison. Same task. Same AI model. Different context.

The Task

"Build a notification preferences component."

Attempt 1: No Context Engineering

Prompt given to AI:

Build a notification preferences component in React.

What the AI produces:

import React, { useState } from 'react';
import './NotificationPreferences.css';

interface Props {
  onSave: (prefs: any) => void;
}

const NotificationPreferences: React.FC<Props> = ({ onSave }) => {
  const [emailEnabled, setEmailEnabled] = useState(true);
  const [pushEnabled, setPushEnabled] = useState(false);
  const [smsEnabled, setSmsEnabled] = useState(false);

  const handleSave = () => {
    onSave({ emailEnabled, pushEnabled, smsEnabled });
  };

  return (
    <div className="notification-prefs">
      <h2>Notification Preferences</h2>
      <div className="pref-item">
        <label>
          <input
            type="checkbox"
            checked={emailEnabled}
            onChange={(e) => setEmailEnabled(e.target.checked)}
          />
          Email Notifications
        </label>
      </div>
      <div className="pref-item">
        <label>
          <input
            type="checkbox"
            checked={pushEnabled}
            onChange={(e) => setPushEnabled(e.target.checked)}
          />
          Push Notifications
        </label>
      </div>
      <button onClick={handleSave} className="save-btn">
        Save Preferences
      </button>
    </div>
  );
};

export default NotificationPreferences;

Problems:

  • Imports a separate CSS file (NotificationPreferences.css) when the project uses Tailwind
  • Uses the React.FC pattern, which the project does not use
  • any type in the props — no proper TypeScript
  • Default export for a non-page component
  • useState for everything instead of the project's state management pattern
  • Generic HTML checkboxes instead of the project's toggle component
  • No loading state, no error handling, no empty state

This code technically works. It also looks like it belongs to a completely different project. You will spend 30 minutes adapting it.

Attempt 2: Proper Context Engineering

Files provided via @ mentions:

  • CLAUDE.md (project context)
  • src/components/settings/AccountSettings.tsx (example component)
  • src/components/ui/Toggle.tsx (existing toggle component)

Prompt given to AI:

@CLAUDE.md
@src/components/settings/AccountSettings.tsx
@src/components/ui/Toggle.tsx

Build a NotificationSettings component following the same pattern as AccountSettings. Use our Toggle component for each preference.

Preferences to include:

- Email notifications (weekly digest)
- Product updates
- Community mentions
- Marketing emails

Each toggle should save independently via PATCH /api/user/preferences. Show a subtle success toast on save. Handle the loading state.

What the AI produces:

'use client';

import { useState } from 'react';
import Toggle from '@/components/ui/Toggle';

interface NotificationPreference {
  key: string;
  label: string;
  description: string;
  enabled: boolean;
}

const defaultPreferences: NotificationPreference[] = [
  {
    key: 'email_digest',
    label: 'Weekly digest',
    description: 'Summary of community activity delivered every Monday',
    enabled: true,
  },
  {
    key: 'product_updates',
    label: 'Product updates',
    description: 'New features, improvements, and platform changes',
    enabled: true,
  },
  {
    key: 'community_mentions',
    label: 'Community mentions',
    description: 'Get notified when someone replies to your posts',
    enabled: true,
  },
  {
    key: 'marketing',
    label: 'Marketing emails',
    description: 'Occasional offers and promotions',
    enabled: false,
  },
];

export function NotificationSettings() {
  const [preferences, setPreferences] = useState(defaultPreferences);
  const [saving, setSaving] = useState<string | null>(null);
  const [toast, setToast] = useState<string | null>(null);

  async function handleToggle(key: string, enabled: boolean) {
    setSaving(key);
    setToast(null);

    try {
      const res = await fetch('/api/user/preferences', {
        method: 'PATCH',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ [key]: enabled }),
      });

      if (!res.ok) throw new Error('Failed to save');

      setPreferences((prev) =>
        prev.map((p) => (p.key === key ? { ...p, enabled } : p))
      );
      setToast('Saved');
      setTimeout(() => setToast(null), 2000);
    } catch {
      setToast('Failed to save. Try again.');
    } finally {
      setSaving(null);
    }
  }

  return (
    <div className="space-y-1">
      <h3 className="text-sm font-semibold text-text-primary mb-4">
        Notifications
      </h3>
      <div className="divide-y divide-border-light rounded-xl border border-border-light bg-bg-secondary">
        {preferences.map((pref) => (
          <div
            key={pref.key}
            className="flex items-center justify-between px-5 py-4"
          >
            <div className="mr-4">
              <p className="text-sm font-medium text-text-primary">
                {pref.label}
              </p>
              <p className="text-xs text-text-tertiary mt-0.5">
                {pref.description}
              </p>
            </div>
            <Toggle
              checked={pref.enabled}
              onChange={(checked) => handleToggle(pref.key, checked)}
              disabled={saving === pref.key}
            />
          </div>
        ))}
      </div>
      {toast && (
        <p className="text-xs text-text-tertiary mt-2 animate-fade-in">
          {toast}
        </p>
      )}
    </div>
  );
}

The difference is obvious:

  • Uses the project's actual Tailwind classes and design tokens
  • Named export (matches project convention)
  • Uses the existing Toggle component instead of raw checkboxes
  • Proper TypeScript interfaces, no any
  • Individual save per toggle (as specified)
  • Loading and error states handled
  • Toast feedback pattern
  • Consistent spacing and typography with the rest of the project
  • 'use client' directive because it uses interactivity

Same AI. Same task. Radically different output. The only difference was context.

This is not a 10% improvement. Attempt 1 requires 30 minutes of manual fixes. Attempt 2 requires maybe 2 minutes of polish. Over the course of a project, that compounds into days of saved time.
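The component in Attempt 2 saves each toggle via PATCH /api/user/preferences, an endpoint the comparison never shows. The validation behind such an endpoint can be sketched as a plain function, independent of any framework. The preference keys come from the component above; the validation rules and function names are assumptions:

```typescript
// A PATCH body updates one or more preferences, e.g. { "marketing": false }.
// applyPreferencePatch validates the patch against known keys and merges it
// without mutating the current state. Keys match the component above;
// everything else is an assumption for illustration.
const KNOWN_KEYS = [
  "email_digest",
  "product_updates",
  "community_mentions",
  "marketing",
] as const;

type PreferenceKey = (typeof KNOWN_KEYS)[number];
export type Preferences = Record<PreferenceKey, boolean>;

export function applyPreferencePatch(
  current: Preferences,
  patch: Record<string, unknown>
): Preferences {
  const next = { ...current };
  for (const [key, value] of Object.entries(patch)) {
    if (!(KNOWN_KEYS as readonly string[]).includes(key)) {
      throw new Error(`Unknown preference: ${key}`);
    }
    if (typeof value !== "boolean") {
      throw new Error(`Preference ${key} must be a boolean`);
    }
    next[key as PreferenceKey] = value;
  }
  return next;
}
```

In an App Router project this function would be called from a route handler in app/api/user/preferences/route.ts, keeping the validation testable on its own.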

Context Engineering for Teams

If you work solo, the techniques above are enough. But if you are on a team — or if you use multiple AI tools on the same project — context engineering becomes even more critical.

Version Your Context Documents

Your CLAUDE.md and .cursorrules files should be in version control. They should be reviewed in pull requests, just like code. When someone adds a new convention or changes the architecture, the context docs get updated in the same PR.

This is not extra work. It is documentation that actually gets used — every single day, by both humans and AI. Unlike most documentation, context files have an immediate, measurable ROI.

Keep Context Files Under 500 Lines

There is a temptation to put everything in your CLAUDE.md. Resist it. If your context file is 2,000 lines, the AI will not process all of it effectively. The beginning and end get more attention than the middle. Critical information buried on line 847 might as well not exist.

Keep the main file focused on the essentials:

  • Tech stack
  • Architecture
  • Conventions
  • Common mistakes

For detailed information about specific subsystems, use separate files that you reference when needed:

docs/
  context/
    CLAUDE.md              — Main project context (300 lines)
    auth-patterns.md       — Authentication conventions
    api-conventions.md     — API route patterns
    database-schema.md     — Schema + query patterns
    testing-guide.md       — How to write tests

Then when you are working on auth, you mention @docs/context/auth-patterns.md in addition to the main CLAUDE.md. Targeted context, delivered when it is relevant.

Establish a Context Review Checklist

When reviewing PRs, add context to your checklist:

  • Did the PR introduce a new pattern? If yes, is CLAUDE.md updated?
  • Did the PR change the architecture? If yes, is the project structure section current?
  • Did the PR add a new convention? If yes, is it documented with an example?
  • Does the PR include a new component that could serve as a reference example?

Teams that maintain their context documents consistently report that AI output quality stays high even as the codebase grows. Teams that let context docs go stale watch AI output quality degrade over time — and then blame the AI instead of their own processes.

Multi-Tool Context Sharing

If your team uses Cursor, Claude Code, and GitHub Copilot on the same project, you need context that works across all tools:

  • CLAUDE.md — read natively by Claude Code
  • .cursorrules — read natively by Cursor
  • README.md — read by everything as a fallback

The content should be nearly identical. Some teams maintain a single source-of-truth file and symlink or copy it to the tool-specific locations. The key principle: every AI tool that touches your codebase should have access to the same project context.
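A minimal version of that copy step, sketched as a Node script; the source path, target names, and the idea of wiring it into a package.json hook are assumptions about your setup:

```typescript
import { copyFileSync } from "fs";

// Copy the canonical context file to each tool-specific location.
// Paths are assumptions; adjust to your repo layout. Run this from a
// package.json "prepare" or pre-commit hook so the copies never drift.
const SOURCE = "docs/context/CLAUDE.md";
const TARGETS = [".cursorrules", "CLAUDE.md"];

export function syncContextFiles(
  source = SOURCE,
  targets = TARGETS
): string[] {
  const synced: string[] = [];
  for (const target of targets) {
    copyFileSync(source, target);
    synced.push(target);
  }
  return synced;
}
```

Symlinks achieve the same thing with less machinery, but copies survive tools and CI environments that do not follow links.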

The Compound Effect

Context engineering is not a one-time setup. It is a practice. Every time you notice the AI producing wrong output, ask yourself: "What context was missing?" Then add that context to your documentation.

Over weeks, your context documents become an increasingly precise representation of how your project works. The AI gets better because your context gets better. Your context gets better because you keep refining it based on real failures.

This is the real skill gap in AI-assisted development. It is not who writes the best prompts. It is who maintains the best context. The vibe coders who understand this build faster, ship cleaner code, and waste less time fighting their tools.

Prompt engineering asks: "How do I phrase this instruction better?"

Context engineering asks: "What does the AI need to know to do this correctly, and how do I make sure it knows it?"

Start with the second question. Everything else follows.


CTA

If you want to go deeper on context engineering and the other skills that separate vibe coders who ship from vibe coders who struggle — join Knox Hub. We run live sessions on exactly this kind of thing, with real projects, real codebases, and real feedback.

Stop vibing. Start shipping.

Join Knox Hub →


Roman Knox

Published March 19, 2026

Building businesses with automation and AI. Sharing workflows, templates, and real strategies that work.
