# prompt-tuning
## What it does
Takes a prompt that's close but not quite, runs it through the image generator (NanoBanana 2), looks at the result, judges where it's off, adjusts the prompt, regenerates. Repeats for N iterations. Each round, you can see what's getting better and what's getting worse.
Think of it like dialing in a recipe — you taste, adjust seasoning, taste again, adjust, until it lands.
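In pseudocode terms, the loop is simple. Below is a minimal Python sketch, not the skill's actual implementation; `generate_image`, `judge_against_reference`, and `revise_prompt` are hypothetical stand-ins for the generate, judge, and adjust steps described above:

```python
# Minimal sketch of the generate-judge-adjust loop. All helper names here
# are hypothetical stand-ins, not the skill's real API.
import json
from pathlib import Path

def tune_prompt(prompt_path, reference_path, generate_image,
                judge_against_reference, revise_prompt, iterations=5):
    """Iteratively refine the image prompt stored at prompt_path."""
    prompt = json.loads(Path(prompt_path).read_text())
    for round_num in range(1, iterations + 1):
        # Generate this round's candidate image from the current prompt.
        image_path = generate_image(prompt, out_dir=f"qa_loop/v0-{round_num}")
        # Judge the candidate against the reference frame: what's working,
        # what's off, and whether the result is close enough to stop.
        verdict = judge_against_reference(image_path, reference_path)
        if verdict["acceptable"]:
            break
        # Apply a targeted adjustment to the prompt for the next round.
        prompt = revise_prompt(prompt, verdict)
    # Persist the dialed-in prompt back to the original file.
    Path(prompt_path).write_text(json.dumps(prompt, indent=2))
    return prompt
```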
## When to use it
When you have an image prompt whose output is mostly right but missing something specific:
- Avatar's pose is right but lighting is off
- Composition is right but wardrobe doesn't match the brief
- Setting is correct but the camera angle is wrong
- 70% there but the last 30% won't move with normal prompt edits
Don't use it for a brand-new prompt with no baseline — start with one Generation Runner pass first, then use prompt-tuning to refine.
## Where it fits in the pipeline
```mermaid
flowchart LR
    A[First Generation Runner pass] --> B{Close to ideal?}
    B -->|yes| C[Use as-is]
    B -->|close but not quite| D[prompt-tuning loop]
    D --> E[Dialed-in prompt]
    E --> F[Final Generation Runner pass]
```
## How to invoke
In a conversation:
```
You: iterate the prompt at /path/to/scene_03-image-start.json
     against the reference frame /path/to/reference_frames/scene_03.jpg

Claude: [starts the prompt-tuning loop, generates round 1,
        shows you what changed, asks how you want to adjust]
```
Trigger phrases the system recognizes:
- "iterate the prompt"
- "tune the prompt"
- "loop on this prompt"
- "dial in this prompt"
## What you get back
For each iteration:
- The generated image (saved to `Assets/{workflow}/qa_loop/v0-N/`)
- A visual judgment of what's working and what's not
- A proposed prompt adjustment for the next round
- After N rounds, the final dialed-in prompt
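As an illustration, the per-round artifacts land in numbered version folders under the workflow's `qa_loop` directory. The folder naming follows the `v0-N` pattern above; what each folder contains beyond the generated image may vary:

```
Assets/{workflow}/qa_loop/
├── v0-1/   # round 1 output
├── v0-2/   # round 2 output
└── v0-N/   # one folder per round, up to the iteration count
```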
## Configuration
| Setting | What it controls |
|---|---|
| Iterations | How many rounds to run before stopping (default: 5) |
| Reference image | What the output is being judged against |
| Server | The G-Labs tunnel URL (auto-starts if not passed) |
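To override the defaults, you can state them in the request. A hypothetical example (the tunnel URL is a placeholder, and the exact phrasing is flexible):

```
You: tune the prompt at scene_03-image-start.json against scene_03.jpg,
     3 iterations, server https://<your-tunnel-url>
```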
## Example session
```
You: tune the prompt at scene_03-image-start.json — the avatar's lighting
     keeps coming out too cold. Reference frame is scene_03.jpg.

Claude: Starting prompt-tuning loop. I'll auto-start the cloudflared tunnel
        since you didn't pass --server.

        Round 1 result: [image shown]
        Issue: lighting is cool, reference shows warm window light from
        the left.
        Adjustment: adding "warm natural window light from camera-left,
        golden hour quality" to the lighting section.

        Round 2 result: [image shown]
        Issue: lighting warmth is right, but now the wardrobe shows
        a contradiction — prompt says "white cotton tee" but the
        output is rendering in cream.
        Adjustment: making wardrobe more specific — "crisp pure white
        cotton t-shirt, slight texture, no off-white tones".

        Round 3 result: [image shown]
        Looking good — wardrobe is correct, lighting matches reference,
        composition is right.

        Final prompt saved to scene_03-image-start.json. Ready to
        regenerate the scene with the Generation Runner.
```
## Limitations
- Doesn't tune video prompts (Veo 3.1) — only image prompts (NanoBanana 2)
- Won't fix structural problems (wrong subject in the reference frame, etc.); it needs a sensible starting point
- Burns generation budget per iteration — set a reasonable iteration count
- Can't fix AI tells (extra fingers, garbled hands) — that's what the Generation Runner auto-rerun is for