Pre-generation Sanity Check

Run this validation before triggering the Generation Runner on any .nbflow. It catches the bug classes that don't surface until generation runs — model mismatches, outputCount drift, dynamic variableName mismatches, dangling links, and stale per-node link refs.

What this check catches

  • Wrong image model. Symptom: generations look "off" or use NanoBanana 1 features. Verified: model == "nano_banana_2" on every NanobananaAPI node.
  • outputCount defaulted to 1. Symptom: only one candidate per generation, no gallery to pick from. Verified: outputCount == 4 on every gen node.
  • Dynamic variableName mismatch. Symptom: generations are unrelated to the prompt (the literal string {scene} is sent to NanoBanana). Verified: every template placeholder has a matching Dynamic variableName.
  • Veo3 missing negativePrompt. Symptom: "r.trim is not a function" toast on PatchWork import. Verified: negativePrompt is a string (empty is OK).
  • Veo3 missing input slots. Symptom: import fails; the third frame slot is missing. Verified: each Veo3 has 3 input slots.
  • Object-shaped dynamic rows. Symptom: "r.trim is not a function" on generation. Verified: dynamicRows are flat strings.
  • Template input slot name doesn't match its placeholder. Symptom: PatchWork removes the input on import, destroying the connected link. Verified: template input slot names equal the placeholders.
  • Dangling links. Symptom: the UI shows orphan edges; reference images never reach G-Labs. Verified: every link's target slot exists on the target node.
  • Stale per-node link refs (the Cached Media bug, the fan-out renumber bug). Symptom: UI-driven generation silently drops avatar refs. Verified: output.links and input.link match the canonical links table.
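
The checks below read only a handful of fields. A .nbflow that passes them looks roughly like this sketch (field names come from the validator itself; node IDs, link IDs, and values are invented for illustration):

```python
# Minimal illustrative .nbflow shape. A links row is:
#   [link_id, src_node, src_slot, tgt_node, tgt_slot, type]
workflow = {
    "tabs": [{
        "graphData": {
            "nodes": [
                {"id": 1,
                 "type": "nanobanana/NanobananaAPI",
                 "properties": {"model": "nano_banana_2", "outputCount": 4},
                 "inputs": [{"name": "prompt", "link": 7}],
                 "outputs": [{"links": []}]},
                {"id": 2,
                 "type": "nanobanana/Prompt",
                 "properties": {"templateMode": False, "text": "a city at dusk"},
                 "inputs": [],
                 "outputs": [{"links": [7]}]},
            ],
            # Link 7: node 2's output 0 feeds node 1's input 0.
            "links": [[7, 2, 0, 1, 0, "string"]],
        }
    }]
}

gen = workflow["tabs"][0]["graphData"]["nodes"][0]
assert gen["properties"]["model"] == "nano_banana_2"
assert gen["properties"]["outputCount"] == 4
```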

The validation script

import json
import re

def sanity_check(nbflow_path):
    with open(nbflow_path) as f:
        data = json.load(f)

    for tab in data['tabs']:
        nodes = {n['id']: n for n in tab['graphData']['nodes']}

        # Model + outputCount on every gen node
        for n in nodes.values():
            if n.get('type') == 'nanobanana/NanobananaAPI':
                assert n['properties'].get('model') == 'nano_banana_2', \
                    f'wrong model on node {n["id"]}'
                assert n['properties'].get('outputCount') == 4, \
                    f'outputCount wrong on node {n["id"]}'

            if n.get('type') == 'nanobanana/Veo3':
                assert n['properties'].get('outputCount') == 4, \
                    f'outputCount wrong on node {n["id"]}'
                # Importer-emitted files often leave negativePrompt undefined → .trim() crash
                assert isinstance(n['properties'].get('negativePrompt'), str), \
                    f'Veo3 {n["id"]} missing negativePrompt string'
                # Veo3 needs 3 input slots (prompt, start frame, end frame);
                # note the slot names contain spaces
                assert len(n.get('inputs', [])) == 3, \
                    f'Veo3 {n["id"]} has {len(n.get("inputs",[]))} inputs, expected 3'

        # Dynamic variableName matches every {placeholder} in every template
        templates = {
            m for n in nodes.values()
            if n.get('type') == 'nanobanana/Prompt'
            and n.get('properties', {}).get('templateMode')
            for m in re.findall(r'\{([a-zA-Z_]\w*)\}', n['properties'].get('text', ''))
        }
        varnames = {
            n['properties'].get('variableName')
            for n in nodes.values()
            if n.get('type') == 'nanobanana/Prompt'
            and n.get('properties', {}).get('dynamicMode')
        }
        assert templates <= varnames, \
            f'unmatched placeholders: {templates - varnames}'

        # Dynamic rows must be flat strings (objects → r.trim() crash)
        for n in nodes.values():
            if n.get('type') == 'nanobanana/Prompt' and n.get('properties', {}).get('dynamicMode'):
                for i, r in enumerate(n['properties'].get('dynamicRows', [])):
                    assert isinstance(r, str), \
                        f'dynamic {n["id"]} row[{i}] is {type(r).__name__}, must be str'

        # Template input slot name MUST equal the placeholder in its text
        # (otherwise PatchWork's _syncTemplateInputs removes the slot on import,
        #  destroying the connected link)
        for n in nodes.values():
            if n.get('type') == 'nanobanana/Prompt' and n.get('properties', {}).get('templateMode'):
                placeholders = re.findall(r'\{(\w+)\}', n['properties'].get('text', ''))
                input_names = [i.get('name') for i in n.get('inputs', [])]
                for ph in placeholders:
                    assert ph in input_names, \
                        f'template {n["id"]} text references {{{ph}}} but no input slot named {ph!r}'

        # Link validator — every link's target_slot must exist on the target node
        for l in tab['graphData']['links']:
            link_id, src, src_slot, tgt, tgt_slot, _link_type = l
            assert tgt in nodes, \
                f'dangling link {link_id} -> missing target node #{tgt}'
            target_inputs = nodes[tgt].get('inputs', [])
            assert tgt_slot < len(target_inputs), \
                f'dangling link {link_id} -> #{tgt} slot {tgt_slot} (only {len(target_inputs)} inputs)'

        # Output.links AND input.link must match the canonical links table
        # (no stale per-node refs after renumber / fan-out / variant patch)
        expected_out, expected_in = {}, {}
        for l in tab['graphData']['links']:
            expected_out.setdefault((l[1], l[2]), []).append(l[0])
            expected_in[(l[3], l[4])] = l[0]
        for n in nodes.values():
            for si, out in enumerate(n.get('outputs', []) or []):
                assert sorted(out.get('links') or []) == sorted(expected_out.get((n['id'], si), [])), \
                    f'node {n["id"]} output[{si}].links stale: refs not in central links array'
            for si, inp in enumerate(n.get('inputs', []) or []):
                assert inp.get('link') == expected_in.get((n['id'], si)), \
                    f'node {n["id"]} input[{si}].link stale: ref does not match central links array'

    print(f"{nbflow_path}: all checks passed")
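
To make the last invariant concrete: given the links-row layout [link_id, src, src_slot, tgt, tgt_slot, type], here is the expected_out / expected_in derivation run standalone on a one-link graph (an illustration, separate from the script above):

```python
# One link: node 2's output 0 feeds node 1's input 0 via link 7.
links = [[7, 2, 0, 1, 0, "string"]]

expected_out, expected_in = {}, {}
for l in links:
    expected_out.setdefault((l[1], l[2]), []).append(l[0])  # (src, src_slot) -> [link ids]
    expected_in[(l[3], l[4])] = l[0]                        # (tgt, tgt_slot) -> link id

assert expected_out == {(2, 0): [7]}
assert expected_in == {(1, 0): 7}

# So node 2 must carry outputs[0]["links"] == [7] and node 1 must carry
# inputs[0]["link"] == 7, or the stale-ref assertion fires.
```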

How to use it

Save the function above as scripts/_sanity_check.py (or copy into a one-off script), then:

python scripts/_sanity_check.py path/to/workflow.nbflow
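
As written, the function has no command-line entry point, so the invocation above needs a small tail in the same file. A minimal sketch (argument handling is an assumption, not part of the original script):

```python
import sys

def main(argv):
    """CLI entry: validate every .nbflow path given on the command line."""
    if not argv:
        print("usage: _sanity_check.py <workflow.nbflow> [...]", file=sys.stderr)
        return 2
    for path in argv:
        sanity_check(path)  # sanity_check is assumed to be defined above
    return 0

# At the bottom of scripts/_sanity_check.py:
# if __name__ == "__main__":
#     sys.exit(main(sys.argv[1:]))
```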

Or import and call from a build/patch script:

from _sanity_check import sanity_check
sanity_check("path/to/workflow.nbflow")

Every assertion that fires names the specific node ID and what's wrong. Fix one at a time, rerun, repeat until clean.
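
For example, feeding the model check a deliberately broken node (the model field left unset) produces a message naming the node. This is a standalone illustration of the assertion style used throughout:

```python
# A gen node whose model field was never set (node ID is invented).
broken = {"id": 12,
          "type": "nanobanana/NanobananaAPI",
          "properties": {"outputCount": 4}}

try:
    assert broken["properties"].get("model") == "nano_banana_2", \
        f'wrong model on node {broken["id"]}'
except AssertionError as e:
    message = str(e)

assert message == "wrong model on node 12"
```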

When to run

  • After the PatchWork Importer emits a fresh .nbflow
  • After any fan-out script that duplicates tabs
  • After any variant patcher that mutates the graph
  • After any manual edit in the PatchWork web UI that saves a new file
  • Before invoking the Generation Runner (always)

After fixing

When a check fails:

  1. Read the assertion message — it tells you exactly which node and which property
  2. Fix the property (the Node Types Reference covers what each one should be)
  3. Resync per-node link refs if the fix touched any links — use manager/scripts/_lib_link_refs.py (resync_link_refs(tab) + assert_clean(workflow))
  4. Rerun the sanity check

Repeat until the script prints "all checks passed."
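
For reference, the resync in step 3 amounts to rebuilding every per-node ref from the canonical links table. The sketch below illustrates that idea using the same links-row layout the validator assumes; it is not the actual manager/scripts/_lib_link_refs.py:

```python
def resync_link_refs(tab):
    """Rebuild output.links / input.link on every node from the canonical
    links table, so the stale-ref assertions pass. Illustrative sketch only."""
    out_refs, in_refs = {}, {}
    for l in tab['graphData']['links']:
        out_refs.setdefault((l[1], l[2]), []).append(l[0])  # (src, slot) -> ids
        in_refs[(l[3], l[4])] = l[0]                        # (tgt, slot) -> id
    for n in tab['graphData']['nodes']:
        for si, out in enumerate(n.get('outputs', []) or []):
            out['links'] = out_refs.get((n['id'], si), [])
        for si, inp in enumerate(n.get('inputs', []) or []):
            inp['link'] = in_refs.get((n['id'], si))
    return tab
```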

Why we have this script

Each line in this script corresponds to a real production bug that cost time to diagnose:

  • The model-mismatch line exists because we shipped a full Generation Runner pass on NanoBanana 1 by accident — the model field wasn't set, G-Labs defaulted to NB1, nobody noticed until output was audited
  • The variableName check exists because we generated 9 images that all looked like generic supermarket scenes — the literal string {scene} was being sent to NanoBanana
  • The link-ref sync check exists because a fan-out script left 414 stale per-node link refs, and UI-driven generation silently dropped 26 of 30 avatar reference connections
  • The Veo3 input slot check exists because the PatchWork Importer was emitting 2-slot Veo3 nodes, causing r.trim is not a function toasts on import

Running this script before every generation pass is the cost of avoiding all of those bugs again.