Generation Runner

The Generation Runner is the headless executor that takes a .nbflow workflow file and produces all the images and videos it describes. It handles the API calls, retries, R2 uploads, and visual QA — and exports a completed -generated.nbflow that you open in PatchWork to review.

What it does, in one sentence

Walks every generation node in the .nbflow, calls the right API for each (NanoBanana 2 for images, Veo 3.1 for videos), produces 4 candidates per row, retries failures, visually reviews every image for AI tells, auto-reruns flagged outputs, and writes the result back to a timestamped output file.

flowchart TB
    A[.nbflow input] --> B[Wipe stale cache<br/>preserve prompts + refs]
    B --> C[Upload refs to R2]
    C --> D[Walk generation nodes]
    D --> E{Generation type}
    E -->|image| F[Call NanoBanana 2<br/>4 candidates per row]
    E -->|video| G[Call Veo 3.1<br/>4 candidates per row]
    F --> H[Visual QA<br/>check for AI tells]
    H -->|flagged| I[Auto-rerun via regen-nodes<br/>up to 3 attempts]
    H -->|clean| J[Mirror to R2]
    I --> J
    G --> J
    J --> K{All nodes done?}
    K -->|no| D
    K -->|yes| L[Export -generated.nbflow]

When to use which execution path

There are two ways to run a workflow. Both need a G-Labs tunnel running.

Headless via workflow-runner.js (default)
Calls the G-Labs API directly from Node, uploads references to R2 itself, runs every node in parallel up to a concurrency cap, and writes the output file. No browser needed. Fastest and most reliable. Use this 95% of the time.
Playwright-driven via the PatchWork web UI
Spins up a Playwright browser, loads the PatchWork web app at https://patchwork-33m.pages.dev/, points it at your G-Labs tunnel URL, loads the .nbflow, and clicks through generation in the UI. Useful when the UI flow handles an edge case the headless runner doesn't, or when you want visual verification that PatchWork is rendering and executing correctly.
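A minimal sketch of the Playwright path, assuming Playwright is installed and an ESM context (for top-level await). The selectors are invented for illustration and would need to match PatchWork's real controls:

import { chromium } from 'playwright';

// Drive the PatchWork web UI instead of calling the API from Node.
const browser = await chromium.launch({ headless: false }); // watch it work
const page = await browser.newPage();
await page.goto('https://patchwork-33m.pages.dev/');

// Hypothetical selectors -- PatchWork's actual UI may differ.
await page.fill('#server-url', process.env.GLABS_TUNNEL_URL);
await page.setInputFiles('#nbflow-upload', 'path/to/workflow.nbflow');
await page.click('text=Generate All');

// Wait for the export to appear, then close.
await page.waitForSelector('text=Download -generated.nbflow', { timeout: 0 });
await browser.close();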
flowchart LR
    A[.nbflow] --> B{Pick path}
    B -->|fast / default| C[workflow-runner.js<br/>headless]
    B -->|visual verification / edge case| D[Playwright<br/>PatchWork UI]
    C --> E[-generated.nbflow]
    D --> E

Running the headless path

The Generation Runner is registered as a subagent — invoke it from the Manager with the .nbflow path and a few options. Under the hood it runs:

node --experimental-vm-modules path/to/workflow-runner.js \
  "path/to/workflow.nbflow" \
  --server "<G-Labs tunnel URL>" \
  --api-key "<YOUR_GLABS_API_KEY>" \
  --ref-image "path/to/reference-image.jpg" \
  --concurrency 3
| Flag | What it controls |
| --- | --- |
| --server | The G-Labs tunnel URL (e.g. https://<random>.trycloudflare.com). Changes every session. |
| --api-key | Your G-Labs API key. Stays the same across sessions. |
| --ref-image | Path to a reference image to use if the workflow has unwired avatar refs. Optional. |
| --concurrency | How many nodes run in parallel. Default 3, cap 5. Higher concurrency is faster but risks rate limits. |
| --mode | images runs an image-only validation pass (fast; skips video generation). Omit for a full run. |
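For example, a quick image-only validation pass at the default concurrency. The workflow path and tunnel URL below are placeholders; yours will differ each session:

node --experimental-vm-modules path/to/workflow-runner.js \
  "Workflows/my-spot-v0-3.nbflow" \
  --server "https://lamp-citrus-example.trycloudflare.com" \
  --api-key "$GLABS_API_KEY" \
  --concurrency 3 \
  --mode images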

Output structure

Every run produces:

| Artifact | Where |
| --- | --- |
| -generated.nbflow | Generations/{workflow}-{version}-generated-{YYYYMMDD}-{HHMMSS}.nbflow. Same shape as the input, with all R2 URLs baked in. |
| Local image PNGs | Assets/{workflow}/generated-images/v0-N/ (one folder per workflow version; files named sceneNN.png). |
| Local video MP4s | Assets/{workflow}/generated-videos/v0-N/ (same convention). |
| QA loop outputs | Assets/{workflow}/qa_loop/v0-N/ (one PNG per regen attempt, named sceneNN-rN.png). |
| errors.json | Inside the output folder; written only if any nodes failed permanently. |
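Concretely, a hypothetical run of a workflow named my-spot at version v0-3 would leave a layout like this (names and timestamp are illustrative, following the patterns above):

Generations/my-spot-v0-3-generated-20240115-143022.nbflow
Assets/my-spot/generated-images/v0-3/scene01.png
Assets/my-spot/generated-images/v0-3/scene02.png
Assets/my-spot/generated-videos/v0-3/scene01.mp4
Assets/my-spot/qa_loop/v0-3/scene02-r1.png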

Open the -generated.nbflow in PatchWork to review the candidates and pick the strongest takes.

What the runner enforces

The runner is opinionated. It patches a few defaults on every run so workflows behave consistently; a sketch of the patch pass follows the list:

  • outputCount=4 on every gen node — both image and video gens produce 4 candidates per row. The user picks the strongest from the gallery in PatchWork.
  • model="nano_banana_2" on every image gen — never falls back to NanoBanana 1 by accident.
  • Reference image upload to R2 — any local reference image paths get uploaded to Cloudflare R2 automatically and the resulting public URL is substituted into the node properties. No credentials needed on the operator side.
  • Cache wipe before run — clears stale generation cache from the input file while preserving prompts and reference images.
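A sketch of what that patch pass could look like, assuming nodes are plain objects. The node type values and property names here are guesses for illustration, not the runner's actual schema:

// Patch defaults onto every generation node before running.
async function patchDefaults(workflow, uploadToR2) {
  for (const node of workflow.nodes) {
    const props = node.properties;
    if (node.type === 'image_gen' || node.type === 'video_gen') {
      props.outputCount = 4; // 4 candidates per row, picked later in PatchWork
    }
    if (node.type === 'image_gen') {
      props.model = 'nano_banana_2'; // never fall back to NanoBanana 1
    }
    // Swap local reference paths for public R2 URLs.
    if (props.refImage && !props.refImage.startsWith('http')) {
      props.refImage = await uploadToR2(props.refImage); // hypothetical helper
    }
  }
}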

Visual QA: what the runner reviews

After every image generation, the runner pulls the result and visually checks for AI tells — specifically:

  • Extra fingers
  • Two left hands
  • Melted faces
  • Garbled limbs
  • Impossible anatomy

If any image fails the check, the runner automatically reruns the node via the regen-nodes mechanism — a per-node rerun that mirrors fresh output to R2 and updates the Approve node's gallery. Default cap is 3 attempts per node, but you can request more for stubborn realism issues ("up to 10 attempts" for hands-and-fingers-heavy scenes).
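In rough code terms, the loop looks like this. checkForAITells and regenNode are stand-ins for the real vision review and regen-nodes call, which aren't documented here:

// QA loop: review a node's image, rerun it while flagged, up to a cap.
async function qaLoopNode(node, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const flagged = await checkForAITells(node.outputImage); // hypothetical vision check
    if (!flagged) return { clean: true, attempts: attempt };
    // Rerun just this node; fresh output is mirrored to R2 and the
    // Approve node's gallery is updated.
    await regenNode(node); // hypothetical wrapper around regen-nodes
  }
  return { clean: false, attempts: maxAttempts }; // still flagged after the cap
}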

What QA does NOT check

The visual QA pass does not score prompt adherence (did the model produce the right scene?) or hallucinated text (signage, price tags, fake labels). Those are deliberately out of scope: automating them is low priority, and you can always rerun the node manually in PatchWork. QA is about anatomical realism only.

Concurrency tuning

The --concurrency flag controls how many generation nodes run in parallel. Defaults:

| Setting | When to use |
| --- | --- |
| --concurrency 3 (default) | Standard. Good throughput, low risk of rate-limit hits. |
| --concurrency 5 (cap) | When you're sure the G-Labs backend can handle it and you want maximum throughput on a large workflow. |
| --concurrency 1 | Debugging. Sequential execution makes errors easier to attribute to a specific node. |

Higher concurrency doesn't help past 5 — it just queues more requests waiting for upstream capacity.
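The cap behaves like a simple promise pool. A minimal sketch of the pattern (not the runner's actual implementation):

// Run async tasks with at most `limit` in flight at once.
async function runPool(tasks, limit = 3) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (safe: JS is single-threaded)
      results[i] = await tasks[i]();
    }
  }
  // Spawn `limit` workers that drain the shared queue.
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}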

Retry behavior

The runner retries failed nodes automatically:

  • Content policy violation — auto-retries (usually succeeds on second attempt with a slightly different seed).
  • Timeout — flagged in errors.json. User can rerun individual nodes in PatchWork.
  • Network error — exponential backoff retry.
  • API error — logged with context (prompt text, node title, attempt count).

Default is 3 retries per node before giving up.
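A sketch of the retry shape described above. The isRetryable classifier is hypothetical, and the backoff delays are illustrative:

// Retry a node call with exponential backoff, default 3 retries.
async function withRetries(fn, { retries = 3, baseDelayMs = 1000 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // e.g. content-policy and network errors retry; permanent failures rethrow
      // and end up in errors.json with prompt text, node title, and attempt count.
      if (attempt > retries || !isRetryable(err)) throw err; // hypothetical classifier
      const delay = baseDelayMs * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}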

When something fails

If the run produces an errors.json, open it to see which nodes failed and why. Common causes:

| Symptom in errors.json | Likely cause | Fix |
| --- | --- | --- |
| content_policy | Prompt tripped a safety filter | Tweak the prompt language, rerun |
| timeout | Node took too long | Bump the timeout in node properties, or rerun manually |
| connection refused | G-Labs tunnel dropped | Restart the tunnel, get a new URL, rerun |
| model not available | API rejected the model identifier | Check the node's model property (should be nano_banana_2 for images) |
| dangling link | Workflow has a broken edge | Fix the link in PatchWork, save, rerun |
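If you haven't seen one, an errors.json entry looks something like this. The exact field names are illustrative, but per the retry section each entry carries the prompt text, node title, and attempt count:

[
  {
    "nodeTitle": "Scene 04 - kitchen close-up",
    "error": "timeout",
    "attempts": 3,
    "prompt": "Close-up of hands plating pasta..."
  }
]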

After the run

  1. Open the exported -generated.nbflow in PatchWork.
  2. Review the 4 candidates per image and per video generation.
  3. Pick the strongest take from each gallery (PatchWork remembers your picks).
  4. Rerun any node where no candidate is acceptable — either with the same prompt (re-roll the seed) or with an edited prompt.
  5. Export the final stills/clips from PatchWork for post-production.