Surface Forge is the open-source pipeline that does the work underneath every Lit Fuse production. Python, ComfyUI nodes, Blender rig. Reads any source, ships to any venue — MPCDI v2, IMERSA fulldome, ADM BW64 audio, Ambisonic, Digistar / Sky-Skan / Zeiss / Spitz scripts. Apache 2.0.
Pip-installable. Pure Python core, optional ComfyUI nodes, optional Blender add-on. The library is the contract — what's in the live demo is what you get on disk.
```
# Surface Forge core (pending public release)
$ pip install surfaceforge

# Optional: ComfyUI node pack
$ pip install surfaceforge[comfy]

# Optional: Blender add-on
$ blender --command extension install surfaceforge

# Hello-world: ship a 360° capture to a planetarium
>>> from surfaceforge import ship_to_venue, load_preset
>>> preset = load_preset("planetarium_26m_4k")
>>> ship_to_venue("capture.mp4", preset, out="./build/")
[forge] analyzing capture.mp4 ... 360° equirect detected
[forge] anchoring to dome surface ... MPCDI v2 written
[forge] rendering IMERSA fulldome master ... 4K · 30fps
[forge] composing ambisonic FOA from spatial cues ...
[forge] writing ADM BW64 ... 6 sources, dome ring layout
[forge] writing Digistar script ... done
[forge] ship complete · 7 files in ./build/
```
For the AI-media community already in ComfyUI: drop the node pack in, route generated content through Surface Forge, ship straight to MPCDI / IMERSA / ADM BW64. No round-trip through other tools.
Detect 360° vs flat, audio spatial cues, color space, anchor candidates. Returns a content fingerprint Surface Forge uses everywhere downstream.
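A rough sketch of what that downstream fingerprint could look like as a data structure. The class and field names here are illustrative assumptions, not the actual Surface Forge API:

```python
from dataclasses import dataclass, field

# Hypothetical shape of the analysis fingerprint; names are illustrative.
@dataclass
class ContentFingerprint:
    projection: str            # e.g. "equirect_360" or "flat"
    color_space: str           # e.g. "rec709", "rec2020"
    audio_channels: int        # source channel count
    spatial_audio: bool        # positional cues detected?
    anchor_candidates: list = field(default_factory=list)  # normalized (x, y)

fp = ContentFingerprint("equirect_360", "rec709", 2, True, [(0.5, 0.42)])
```

Every later stage (anchoring, rendering, audio composition) can then branch on one shared record instead of re-probing the source file.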
Find the surface the content should anchor to — horizon line, focal subject, anchor object. Used by SurfaceRender to keep things from drifting on a curved surface.
Grab one of 20+ built-in venue presets — dome_720, urban_block_large, civic_convening_space_composite, planetarium_26m_4k, nightclub_standard, and more.
The headline call. Source + preset → full venue-ready package. Calls analyze, anchor, render, audio compose, write the venue scripts. One line for the common path.
Three-tier delivery — preview, mid, full. For pipelines where the venue wants a low-latency preview while the full IMERSA master renders in the background.
Render a WebXR-ready HTML preview. What the live demo runs on. Drop into a browser; works on a Chromebook or a Quest.
Convert per-source positional audio into First-Order Ambisonic (AmbiX or FuMa). Generic playback for any speaker rig.
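The math behind that conversion is standard first-order ambisonic panning. A minimal sketch using the AmbiX convention (ACN channel order W, Y, Z, X with SN3D normalization), not Surface Forge's actual code:

```python
import math

def encode_foa_ambix(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order AmbiX (ACN order W,Y,Z,X).

    Standard FOA panning equations for a source at the given direction.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample                                # omnidirectional component
    y = sample * math.sin(az) * math.cos(el)  # left/right
    z = sample * math.sin(el)                 # up/down
    x = sample * math.cos(az) * math.cos(el)  # front/back
    return [w, y, z, x]

# Source dead ahead on the horizon: all energy lands in W and X.
print(encode_foa_ambix(1.0, 0.0, 0.0))  # → [1.0, 0.0, 0.0, 1.0]
```

FuMa differs only in channel ordering (W, X, Y, Z) and normalization, which is why a single encoder can serve both targets.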
Build a full ambisonic mix from panel-positioned sources. The way Surface Forge derives a venue-agnostic master from the content's spatial layout.
Write the production-grade object-based audio file. Atmos-grade pipeline, compatible with any venue engine that reads ADM BW64.
Tie an audio source to the on-content position of its visual subject. The audio follows the speaker on the dome ceiling, automatically.
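One way such a tie-in can work: map the subject's tracked position in the 360° frame to a dome direction, then feed that direction to the spatial audio stage. The coordinate conventions below are assumptions for illustration, not Surface Forge's:

```python
def equirect_to_direction(u, v):
    """Map a normalized equirect position (u, v in [0, 1]) to a direction.

    u wraps azimuth from -180° to +180°; v spans elevation from +90°
    (top of frame) to -90° (bottom). Illustrates how a tracked visual
    subject could drive a moving audio source.
    """
    azimuth = (u - 0.5) * 360.0
    elevation = (0.5 - v) * 180.0
    return azimuth, elevation

print(equirect_to_direction(0.5, 0.0))  # top of frame → (0.0, 90.0)
```

Re-evaluating this per frame as the subject moves is what makes the audio "follow the speaker" without hand-keyframed pans.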
Generate VESA MPCDI v2 calibration files. Per-projector geometry + color, anchor markers, edge-blend curves. Standard format any pro AV team can ingest.
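To illustrate the edge-blend piece: where two projectors overlap, each gets a smooth weight ramp, gamma-compensated so the two contributions sum to uniform brightness in linear light. A sketch of the common raised-cosine formulation; the MPCDI file carries the actual per-projector curves:

```python
import math

def blend_weight(t, gamma=2.2):
    """Edge-blend weight across an overlap region, t in [0, 1].

    Raised-cosine ramp with display-gamma pre-compensation, so the two
    overlapping projectors' linear-light contributions sum to 1.
    """
    t = min(max(t, 0.0), 1.0)
    linear = 0.5 - 0.5 * math.cos(math.pi * t)  # smooth 0 → 1 ramp
    return linear ** (1.0 / gamma)              # pre-compensate display gamma
```

The key property: for any point in the overlap, `blend_weight(t) ** gamma + blend_weight(1 - t) ** gamma == 1`, which is what keeps the seam invisible.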
Drop the venue + content into a Blender scene. Camera rig, projection geometry, audio source positions. For the 3D pipeline already living in Blender.
4K / 8K fisheye renders, dome-ring lighting cues, audience seating mask. The format planetariums actually book.
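The fisheye geometry behind a fulldome master follows the standard domemaster convention: zenith at frame center, horizon on the circle's edge. A minimal sketch of that mapping under those assumptions, not Surface Forge code:

```python
import math

def dome_to_fisheye(azimuth_deg, elevation_deg, size):
    """Map a dome direction to a pixel in a square domemaster frame.

    Zenith (el = 90°) lands at frame center; the horizon (el = 0°) on
    the circle edge. Radius scales linearly with angle from zenith.
    """
    r = (90.0 - elevation_deg) / 90.0   # 0 at zenith, 1 at horizon
    az = math.radians(azimuth_deg)
    cx = cy = size / 2.0
    x = cx + r * cx * math.sin(az)
    y = cy - r * cy * math.cos(az)      # azimuth 0° points "up" in frame
    return (x, y)

print(dome_to_fisheye(0.0, 90.0, 4096))  # zenith → (2048.0, 2048.0)
```

Running the inverse of this mapping per output pixel is how a 360° equirect source gets resampled into the 4K or 8K fisheye frame a planetarium ingests.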
VESA-standard per-projector tiles with edge-blend, geometry warp, and color calibration. Reads in any pro AV pipeline.
Object-based production audio: the Audio Definition Model (ITU-R BS.2076) in a BW64 container (ITU-R BS.2088). Atmos-grade. Drives any venue's speaker rig.
First-order ambisonic master for generic playback. The room-master tier between preview and full object-based.
Venue scene + content layout as an editable 3D file. Drops into any 3D toolchain. Includes camera, projection geometry, audio sources.
Single-file preview that runs in any browser or headset. What the live demo serves. Survives a school Chromebook.
Native script for Evans & Sutherland Digistar planetariums.
Sky-Skan production format. Spitz / Zeiss equivalents in the same family.
Every voice, every recording, every contributor — Creator, ConsentRecord, RecordingContext, Citation, ToolingRecord, governance_review fields. Auditable.
The library above is what works today. Here's the next ring out — research-stage, prototype, in our labs but not in your install. We name them so you can see where this is going.
Farina-method speaker localization using a phone mic. Sub-cm positioning of every speaker in a venue, no special hardware. Native mobile capture app in development.
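For context, the Farina method plays an exponential sine sweep (ESS) through each speaker and deconvolves the mic recording to recover the room's impulse response; the first peak's delay gives the speaker's distance (343 m/s × delay). A sketch of the sweep generation only, under assumed parameters, not the capture app:

```python
import math

def exponential_sweep(f1, f2, duration, sample_rate=48000):
    """Generate a Farina exponential sine sweep from f1 to f2 Hz.

    x(t) = sin( 2π·f1·T / ln(f2/f1) · (e^{(t/T)·ln(f2/f1)} − 1) )
    where T is the sweep duration in seconds.
    """
    n = int(duration * sample_rate)
    k = math.log(f2 / f1)
    return [
        math.sin(2 * math.pi * f1 * duration / k * (math.exp(t / n * k) - 1))
        for t in range(n)
    ]

sweep = exponential_sweep(20.0, 20000.0, 2.0)  # 2-second 20 Hz → 20 kHz sweep
```

Time-of-arrival from several mic positions then triangulates each speaker; the sub-cm claim rests on the sweep's robustness to noise, not on special hardware.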
Lucas-Kanade optical flow + ambient sensing. The room calibrates itself continuously while content plays — no test pattern interruption. Python prototype lives in Surface Prism.
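The Lucas-Kanade step named above assumes brightness constancy, I_t + I_x·u ≈ 0, and solves for the displacement u in least squares. A toy one-dimensional illustration of that technique, not the Surface Prism prototype:

```python
def lk_shift_1d(frame1, frame2):
    """Estimate sub-pixel shift between two 1-D intensity signals.

    One Lucas-Kanade least-squares step over all interior samples.
    """
    num = den = 0.0
    for x in range(1, len(frame1) - 1):
        ix = (frame1[x + 1] - frame1[x - 1]) / 2.0  # central spatial gradient
        it = frame2[x] - frame1[x]                  # temporal gradient
        num += ix * it
        den += ix * ix
    return -num / den if den else 0.0

# A linear ramp shifted right by half a sample:
f1 = [float(x) for x in range(10)]
f2 = [x - 0.5 for x in f1]
print(lk_shift_1d(f1, f2))  # → 0.5
```

Applied to the camera feed watching the projection surface, drift estimates like this are what let the room re-calibrate continuously without interrupting playback with a test pattern.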
Cotting/Gross imperceptible structured-light extended to embed audio markers and metadata into projected light. Direction we're exploring; not built yet.
Phone-as-sensor + phone-as-capture. Walk a venue, capture the geometry and acoustic profile, hand off to Surface Forge. iOS / Android.
Studio · Agile Studio · Enterprise Edition wrap Surface Forge in authoring, real-time collaboration, ship-direct-to-venue automation. All in development. Tell us which tier fits and we'll bring you in early.
Apache 2.0 library always free. Studio = the wrapper.