Documentation Index

Fetch the complete documentation index at: https://avocadostudioai.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

The Editing Pipeline

When a user types a message like “Add a testimonials section with 3 customer quotes below the hero”, the system goes through a structured pipeline to turn that natural language request into validated, type-safe content changes.

Step 1: Intent Detection

The orchestrator first determines what the user wants to do. A fast model (e.g., Claude Haiku) classifies the message:
  • Structural edit — Add, remove, move, or update blocks
  • Page operation — Create, duplicate, or delete a page
  • Clarification needed — The request is ambiguous (“make it better” → which block?)
  • Off-topic — Not a content editing request
If the fast model can handle the request directly (e.g., a simple text change), it returns a plan immediately — skipping the full planner and saving time and cost.
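The routing decision can be sketched as a small pure function. This is an illustrative sketch only — the Intent type and routerCanShortCircuit are hypothetical names, not the real orchestrator API:

```typescript
// Hypothetical shape of the intent router's output (names are illustrative).
type Intent =
  | { kind: "structural_edit"; simple: boolean }
  | { kind: "page_operation" }
  | { kind: "clarification_needed"; question: string }
  | { kind: "off_topic" }

// Only simple structural edits let the fast model return a plan directly;
// everything else falls through to the full planner.
function routerCanShortCircuit(intent: Intent): boolean {
  return intent.kind === "structural_edit" && intent.simple
}
```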

Step 2: AI Planning

For complex requests, the orchestrator sends the full context to a planning model:
  • System prompt — Instructions for generating structured edit operations
  • Block schemas — Zod schemas for all available block types, so the AI knows what fields exist and what values are valid
  • Current page state — The full PageDoc with all existing blocks
  • User message — The natural language request
  • Conversation history — Previous messages for context
The AI responds with a structured edit plan — a JSON object containing one or more operations.
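For the testimonials request above, such a plan might look like the following. The field names here are illustrative assumptions, not the exact wire format:

```typescript
// Hypothetical structured edit plan for "Add a testimonials section with
// 3 customer quotes below the hero" (field names are assumptions).
const plan = {
  intent: "structural_edit",
  operations: [
    {
      op: "add_block",
      blockType: "testimonials",
      position: { after: "hero" },
      props: {
        quotes: [
          { author: "Dana K.", text: "Shipped our site in a day." },
          { author: "Luis M.", text: "The live preview is magic." },
          { author: "Priya S.", text: "Editing feels like chatting." }
        ]
      }
    }
  ]
}
```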

Step 3: Validation

Every operation in the plan is validated against the block’s Zod schema:
  • Are the props valid for this block type?
  • Does the target block exist (for updates/removes)?
  • Is the position valid (for adds/moves)?
If validation fails, the orchestrator attempts auto-repair — re-prompting the AI with the specific error. If repair also fails, the user sees a clear error message explaining what went wrong. See Chat Troubleshooting for debugging failed plans.
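The structural checks can be sketched as follows. validateOp is a hypothetical stand-in for the real Zod-schema validation; its error string is what would be fed back to the model during auto-repair:

```typescript
// Minimal sketch of the step-3 structural checks (validateOp is a
// hypothetical stand-in for the real Zod-based validation).
type EditOp = { type: "add" | "update" | "remove"; blockId?: string }

// Returns an error message for the auto-repair re-prompt, or null when valid.
function validateOp(op: EditOp, existingIds: Set<string>): string | null {
  if ((op.type === "update" || op.type === "remove") && !existingIds.has(op.blockId ?? "")) {
    return `target block "${op.blockId}" does not exist`
  }
  return null
}
```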

Step 4: Apply Operations

Validated operations are applied to the draft page state in order. Each operation:
  1. Updates the in-memory draft PageDoc
  2. Records itself in the undo/redo history
  3. Bumps the preview version number
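The three steps above can be sketched in one function. Draft and applyOp are illustrative names, not the real implementation:

```typescript
// Sketch of applying one validated operation (names are assumptions).
type Draft = { blocks: string[]; version: number }

function applyOp(draft: Draft, history: Draft[], newBlockId: string): Draft {
  history.push(draft) // 2. record the pre-op state for undo/redo
  return {
    blocks: [...draft.blocks, newBlockId], // 1. update the in-memory draft
    version: draft.version + 1             // 3. bump the preview version
  }
}
```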

Step 5: Stream to Content Studio

The orchestrator streams results back to the Content Studio via Server-Sent Events (SSE). The main events are:
event: op_candidate    → An operation parsed from the LLM's streaming response
event: plan_meta       → Plan metadata (status, intent, opCount)
event: op_applied      → Each operation successfully applied (progressive UI updates)
event: image_progress  → Image resolution updates (when image lookup is deferred)
event: final           → Final result, including summary_for_user and change log
Other events emitted on the same stream include field_draft, summary_token, changelog_entry, op_skipped, rollback_started / rollback_done, status, heartbeat, error, and canceled. There is no dedicated preview_updated event — preview refresh is driven by the draft version bump described in Step 6. The user sees changes appearing progressively — not a loading spinner followed by a wall of changes.
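To illustrate what the Content Studio does with each event, here is a minimal parser for a single SSE frame. This is a sketch — the real client likely uses EventSource or a streaming fetch rather than hand-parsing frames:

```typescript
// Minimal SSE frame parser: extracts the event name and joined data lines
// from one frame of the stream (a sketch, not the production client).
function parseSseFrame(frame: string): { event: string; data: string } {
  let event = "message" // SSE default when no event: line is present
  const data: string[] = []
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) event = line.slice("event:".length).trim()
    else if (line.startsWith("data:")) data.push(line.slice("data:".length).trim())
  }
  return { event, data: data.join("\n") }
}
```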

Step 6: Live Preview Update

The site (in the iframe) detects the draft content update and re-renders:
  1. Orchestrator notifies via draft version bump
  2. Site re-fetches draft pages from orchestrator
  3. React re-renders only the changed blocks
  4. Site sends postMessage to Content Studio confirming the update
The user sees the live site update in real time, exactly as visitors would see it.
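Step 3 above (re-rendering only changed blocks) can be illustrated with a simple props diff. changedBlockIds is a hypothetical helper, not the real reconciliation logic — React's own reconciliation handles this in practice:

```typescript
// Illustrative diff: return the ids of blocks whose serialized props
// changed between two draft versions (hypothetical helper).
function changedBlockIds(
  prev: Record<string, unknown>,
  next: Record<string, unknown>
): string[] {
  return Object.keys(next).filter(
    (id) => JSON.stringify(prev[id]) !== JSON.stringify(next[id])
  )
}
```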

Step 7: User Review

The user can now:
  • Approve — Keep the changes in the draft
  • Undo — Roll back the last operation (or the entire plan)
  • Continue editing — Send another message to refine the changes
  • Publish — Push the draft to production

Streaming & Progressive Updates

Avocado Studio is optimized to minimize perceived latency:
  • Parallel planning — Fast intent router and full planner run concurrently. If the router succeeds, the planner is aborted.
  • Streamed op apply — Operations are validated and applied as they stream from the LLM, not after the full response.
  • Deferred image resolution — Text and structural changes apply immediately. Image lookups and generation (Unsplash, OpenAI gpt-image-1, and Google Gemini gemini-2.5-flash-image) resolve in the background. See Asset Manager & AI Images.
  • Incremental preview — Each operation triggers a preview update, so the user sees changes at ~800ms intervals.

Undo / Redo

Every operation is recorded in a per-session history stack. Undo reverses the last operation by restoring the previous page state. Redo re-applies it. The history is operation-level, not plan-level — if a plan contains 3 operations, you can undo them one at a time.
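The operation-level history can be sketched as two stacks. EditHistory is an illustrative name for whatever the orchestrator actually uses:

```typescript
// Operation-level undo/redo sketch: undo restores the previous page state,
// redo re-applies it (EditHistory is an illustrative name).
class EditHistory<T> {
  private past: T[] = []
  private future: T[] = []

  record(prevState: T): void {
    this.past.push(prevState)
    this.future = [] // a new edit invalidates the redo stack
  }

  undo(current: T): T | undefined {
    const prev = this.past.pop()
    if (prev !== undefined) this.future.push(current)
    return prev
  }

  redo(current: T): T | undefined {
    const next = this.future.pop()
    if (next !== undefined) this.past.push(current)
    return next
  }
}
```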

Publishing

When the user is satisfied with their edits, they click Publish in the Content Studio. The orchestrator snapshots the current draft pages and pushes them through a publish target — a pluggable interface that connects to your deployment workflow.

The PublishTarget Interface

Publishing is defined by a single TypeScript interface:
interface PublishTarget {
  readonly name: string
  canHandle?(ctx: PublishContext): boolean
  publish(ctx: PublishContext): Promise<PublishOutcome>
}
  • name — Stable identifier used by the registry and the PUBLISH_TARGET env var.
  • canHandle() — Optional selection hint. The registry iterates targets on each POST /publish and picks the first whose canHandle(ctx) returns true.
  • publish() — Receives the full PublishContext (session, scopedSession, pages, siteConfig, siteOrigin, generatedImageDir, logger) and returns a PublishOutcome (ok, httpStatus, tracker, response body).
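The selection rule can be sketched as a first-match scan. This is simplified — per the docs, the real registry also falls back to the legacy PUBLISH_MODE setting when nothing matches:

```typescript
// Sketch of target selection on POST /publish: a PUBLISH_TARGET override
// wins outright; otherwise the first target whose canHandle(ctx) returns
// true is chosen (simplified; PUBLISH_MODE fallback omitted).
interface Target {
  name: string
  canHandle?: (ctx: unknown) => boolean
}

function selectTarget(
  targets: Target[],
  ctx: unknown,
  override?: string
): Target | undefined {
  if (override) return targets.find((t) => t.name === override)
  return targets.find((t) => t.canHandle?.(ctx) === true)
}
```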

Built-in Targets

Three built-in targets ship with the orchestrator, registered at module load (in order) from apps/orchestrator/src/publish/publish-target-registry.ts:

site-contract — Selected when the request includes a siteOrigin. POSTs pages + siteConfig + inline image assets to the remote site’s /api/editor/publish contract endpoint. The site owns content storage (JSON file, CMS, database, etc.). For the legacy avocado-stories siteId, this target also records a git snapshot for version history.

git — Selected when no siteOrigin and PUBLISH_MODE=git (default):
  1. Serializes all draft pages to apps/site/lib/published-content.json
  2. Copies any generated images to apps/site/public/generated-images/
  3. Rewrites localhost image URLs to relative paths
  4. Commits and pushes to the configured branch (PUBLISH_GIT_BRANCH, defaults to main)
  5. If a Vercel deploy hook is configured, the push triggers a rebuild automatically
deploy-hook — Selected when no siteOrigin and PUBLISH_MODE=deploy_hook:
  1. Calls VERCEL_DEPLOY_HOOK_URL to trigger a Vercel rebuild
  2. Tracks deployment status by polling the Vercel API (VERCEL_TOKEN)
  3. Reports back to the Content Studio: triggered → building → ready (or failed)
To force a specific target regardless of the selection rules, set PUBLISH_TARGET=<name> (e.g. PUBLISH_TARGET=git).

Environment Variables

  • PUBLISH_MODE — Default: git. Legacy selection: git or deploy_hook (used when there is no canHandle match and no PUBLISH_TARGET override).
  • PUBLISH_TARGET — Default: none. Force a specific target by name (overrides canHandle and PUBLISH_MODE).
  • PUBLISH_GIT_BRANCH — Default: main. Branch to push published content to.
  • PUBLISH_GIT_STRICT — Default: 0. If 1, abort publish when the working tree has unrelated changes.
  • PUBLISH_TOKEN — Default: none. Require this token in the x-publish-token header to authorize publishes.
  • VERCEL_DEPLOY_HOOK_URL — Default: none. Vercel deploy hook URL (for the deploy-hook target).
  • VERCEL_TOKEN — Default: none. Vercel API token for polling deployment status.
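A typical git-mode configuration might look like the following .env fragment. The values shown are illustrative:

```shell
# Example: publish via the git target to a non-default branch
# (all values below are illustrative).
PUBLISH_MODE=git
PUBLISH_GIT_BRANCH=content
PUBLISH_GIT_STRICT=1
PUBLISH_TOKEN=replace-with-a-long-random-secret
```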

Implementing a Custom Publish Target

To publish to a different platform (Netlify, AWS, a CMS, or your own CI/CD pipeline), implement the PublishTarget interface and register it before the server starts handling requests:
import type { PublishTarget, PublishContext, PublishOutcome } from "./publish/publish-target.js"
import { registerPublishTarget } from "./publish/publish-target-registry.js"

class S3PublishTarget implements PublishTarget {
  readonly name = "s3"

  canHandle(ctx: PublishContext): boolean {
    // Example: claim publishes aimed at a specific siteId.
    return ctx.siteId === "my-s3-site"
  }

  async publish(ctx: PublishContext): Promise<PublishOutcome> {
    const { session, pages, slugs, logger } = ctx
    // Example: upload to S3
    const res = await fetch("https://my-bucket.s3.amazonaws.com/publish", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ pages })
    })

    const now = new Date().toISOString()
    return {
      ok: res.ok,
      httpStatus: res.ok ? 200 : 400,
      tracker: {
        session,
        status: res.ok ? "triggered" : "failed",
        startedAt: now,
        updatedAt: now,
        slugs,
        vercelState: res.ok ? "READY" : "ERROR"
      },
      response: {
        status: res.ok ? "ready" : "failed",
        session,
        slugs,
        message: res.ok ? "Published to S3" : `S3 returned ${res.status}`
      }
    }
  }
}

registerPublishTarget(new S3PublishTarget())
The orchestrator passes full PageDoc objects — your publish target receives structured JSON, not HTML. Your target decides how to store, transform, or deploy that content.

CMS Publishing (via Site SDK)

For CMS-backed sites (Contentful, Sanity, Strapi), the Site SDK provides publish utilities that handle the common workflow:
  1. Resolve image URLs (rewrite localhost references, upload to CMS)
  2. SSRF validation on external image URLs
  3. Deduplicate image uploads within a single publish
  4. Push page content to the CMS API
See the working examples in examples/contentful-site/, examples/sanity-site/, and examples/strapi-site/ for complete CMS publish implementations.
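Steps 1 and 3 of that workflow (URL rewriting and per-publish deduplication) can be sketched together. The helper below is an assumption for illustration — the actual Site SDK utilities and their names may differ:

```typescript
// Illustrative sketch of localhost URL rewriting with per-publish
// deduplication (resolveImageUrl is a hypothetical helper, not the
// real Site SDK API).
function resolveImageUrl(
  cache: Map<string, string>,
  localUrl: string,
  cmsBaseUrl: string
): string {
  const cached = cache.get(localUrl)
  if (cached) return cached // already resolved in this publish
  const resolved = localUrl.replace(/^https?:\/\/localhost(:\d+)?/, cmsBaseUrl)
  cache.set(localUrl, resolved)
  return resolved
}
```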