
Curator edits and rejection notes are saved per agent and injected into the next run as few-shot examples — agents learn from corrections automatically.

The problem

Early on, the agent runner read past corrections from _data/agents/{id}/feedback.json and injected them into the system prompt as few-shot examples. The intent was that agents would learn from curator edits over time. The catch: nothing in the codebase ever wrote to that file. Curators could fix every draft and reject every bad one and the agent would happily repeat the same mistakes the next day.

The feedback loop closes that gap. Every curator action — edit a field before approving, reject with a note — is now persisted to the agent's feedback file automatically. The next time that agent runs, the most recent corrections are baked into its system prompt as concrete examples.

How corrections are recorded

When a queue item is created by an agent, the runner snapshots the original contentData onto the queue item under originalContentData. When a curator approves the item, the curation route diffs the current contentData against that snapshot, and writes one correction entry per changed string field:

```json
{
  "id": "fb-...-...",
  "type": "correction",
  "queueItemId": "qi-...",
  "field": "title",
  "original": "Why TypeScript Generics Matter",
  "corrected": "How TypeScript Generics Save You From Refactoring Hell",
  "createdAt": "2026-04-08T..."
}
```

If you don't edit anything before approving, no corrections are recorded — the original output was good enough.
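The diff step can be sketched as follows. This is a minimal illustration, not the actual route's code — `diffCorrections` and `FeedbackEntry` are hypothetical names chosen for the example:

```typescript
// Illustrative sketch of the approve-time diff: compare the snapshot taken
// at queue-item creation against the curator-edited contentData and emit
// one correction entry per changed string field.
interface FeedbackEntry {
  id: string;
  type: "correction";
  queueItemId: string;
  field: string;
  original: string;
  corrected: string;
  createdAt: string;
}

function diffCorrections(
  queueItemId: string,
  snapshot: Record<string, unknown>,
  current: Record<string, unknown>,
): FeedbackEntry[] {
  const entries: FeedbackEntry[] = [];
  for (const [field, before] of Object.entries(snapshot)) {
    const after = current[field];
    // Only changed string fields produce a correction entry.
    if (typeof before === "string" && typeof after === "string" && before !== after) {
      entries.push({
        id: `fb-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`,
        type: "correction",
        queueItemId,
        field,
        original: before,
        corrected: after,
        createdAt: new Date().toISOString(),
      });
    }
  }
  return entries;
}
```

Non-string fields and untouched fields fall through the type check, which is why an approve-without-edit produces an empty list.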

How rejections are recorded

When you reject a queue item with a note, the rejection route writes a rejection entry containing the curator's notes:

```json
{
  "id": "fb-...-...",
  "type": "rejection",
  "queueItemId": "qi-...",
  "notes": "Tone is too dry — needs more personality and specific examples",
  "createdAt": "2026-04-08T..."
}
```

Rejection notes are visible on the agent detail page but not currently injected into the next system prompt (only correction and edit entries with both original and corrected strings are used as few-shot examples). They're still useful as an audit trail, and may be folded into the prompt context in a future revision.

What gets injected into the next run

The agent runner calls loadFeedbackForPrompt(agentId, 5) and pulls the last 5 correction examples in chronological order. They're appended to the system prompt under a ## Learn from past corrections section, like this:

```
## Learn from past corrections
Example 1:
Original: Why TypeScript Generics Matter
Corrected: How TypeScript Generics Save You From Refactoring Hell

Example 2:
Original: A serene sunrise over snow-capped mountains
Corrected: Snow-covered alpine valley with frozen lake at golden hour
```

The model treats these as concrete editorial preferences and tends to mimic the patterns on its next run.
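Rendering that section is straightforward string assembly. The sketch below assumes a hypothetical `buildCorrectionsSection` helper; the real `loadFeedbackForPrompt` also handles reading and filtering the feedback file:

```typescript
// Render the most recent corrections (chronological order) into the
// "Learn from past corrections" prompt section. Names are illustrative.
interface Correction {
  original: string;
  corrected: string;
}

function buildCorrectionsSection(corrections: Correction[], max = 5): string {
  const recent = corrections.slice(-max); // last N entries, oldest first
  if (recent.length === 0) return ""; // no section if there's nothing to learn from
  const lines = ["## Learn from past corrections"];
  recent.forEach((c, i) => {
    lines.push(`Example ${i + 1}:`, `Original: ${c.original}`, `Corrected: ${c.corrected}`, "");
  });
  return lines.join("\n").trimEnd();
}
```

An agent with no recorded corrections gets no section at all, so the prompt carries zero overhead until the first curator edit lands.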

The Recent feedback panel

The agent detail page shows a Recent feedback card with the last 5 entries. Each entry has a colored type badge (green for correction, blue for edit, red for rejection), the field name where applicable, a strikethrough diff for corrections, and a timestamp.

The footer notes the maximum injected count so curators understand what's actually flowing back into the agent.

API

The endpoint is at POST /api/cms/agents/[id]/feedback and accepts:

```json
{
  "type": "correction" | "rejection" | "edit",
  "queueItemId": "qi-...",
  "field": "title",
  "original": "...",
  "corrected": "...",
  "notes": "..."
}
```

Most curators never call this directly — the curation approve/reject routes handle it automatically. It exists for programmatic submissions and for the in-page panel.
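A programmatic submission might look like this. The endpoint path comes from the docs above; the request-building helper and its name are assumptions for illustration (your deployment's base URL and auth are not shown):

```typescript
// Build the request for a programmatic feedback submission.
// Separated from fetch() so the shape is easy to inspect and test.
function buildFeedbackRequest(agentId: string, entry: Record<string, unknown>) {
  return {
    url: `/api/cms/agents/${agentId}/feedback`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(entry),
    },
  };
}

// Usage (hypothetical agent id and note):
// const { url, init } = buildFeedbackRequest("agent-1", {
//   type: "rejection",
//   queueItemId: "qi-123",
//   notes: "Too generic — name the library and version",
// });
// await fetch(url, init);
```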

Storage and limits

  • File: _data/agents/{agentId}/feedback.json
  • Max entries: 200 (oldest are dropped on append)
  • Format: JSON array of FeedbackEntry objects
  • Backwards compatible: legacy { original, corrected } shape is read as a correction type
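The append-and-cap behavior plus the legacy-shape fallback can be sketched as follows — a minimal illustration with hypothetical names, not the storage layer's actual code:

```typescript
// Append one entry to the in-memory feedback array, normalizing the legacy
// shape and enforcing the 200-entry cap (oldest entries dropped).
type FeedbackRecord = Record<string, unknown>;

const MAX_ENTRIES = 200;

// Legacy { original, corrected } entries carry no "type" field;
// they are read as corrections.
function normalize(entry: FeedbackRecord): FeedbackRecord {
  return "type" in entry ? entry : { type: "correction", ...entry };
}

function appendEntry(existing: FeedbackRecord[], entry: FeedbackRecord): FeedbackRecord[] {
  const next = [...existing, normalize(entry)];
  return next.slice(-MAX_ENTRIES); // keep only the newest 200
}
```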

Why only 5 examples?

Few-shot examples are the most expensive part of the system prompt — each one costs hundreds of tokens, paid on every run. Five is enough to communicate consistent editorial preferences without bloating the prompt into token waste. If you want the agent to learn a new preference quickly, apply the same correction consistently; it will land in the most recent five within a few approvals.
