
2023-11-15 | GeometryOS | Techniques, representations, and underlying tech
From NeRF to Gaussian Splatting - Why Render-Time Representations Took Off After 2023
Technical analysis of why render-time representations (NeRF → Gaussian Splatting) gained production traction after 2023, with engineering criteria and pipeline guidance.
Render-time scene representations (models that are evaluated or rasterized at render time instead of being directly converted to meshes/textures) moved from research curiosity to practical option after 2023. This post analyzes the key technical shifts from NeRF-style neural radiance fields to Gaussian Splatting and explains what that shift means for pipeline engineers, technical artists, and studio technology leads. The focus is on concrete production implications, objective engineering criteria that separate hype from production-ready systems, and an actionable, validation-first checklist for deterministic, pipeline-ready adoption.
Key terms
- NeRF: Neural Radiance Field — a neural volume that maps 3D location + view direction to color and density for view synthesis.
- Gaussian Splatting: A render-time representation that models a scene as many 3D anisotropic Gaussian primitives ("splat" ellipsoids) with learned colors and opacities, rendered by splatting and compositing.
- Render-time representation: Any scene encoding designed to be evaluated or rasterized directly during rendering, rather than baked to traditional geometry + texture ahead of time.
- production layer: The part of a studio pipeline that must meet delivery constraints (performance budgets, determinism, validation, asset lifecycle).
- deterministic: Repeatable, bit-for-bit or tolerance-bounded outputs given the same inputs and environment.
- validation: Automated checks and metrics (visual and numeric) used to ensure asset correctness and regressions do not escape to downstream production.
- pipeline-ready: Fits integration points, tooling, validation, and performance constraints required by a production layer.
Time context
- Key source published: Gaussian Splatting preprint (2023-07-25). See the paper and code: https://arxiv.org/abs/2307.XXXXX (paper) and associated project pages.
- Earlier foundational source published: NeRF (2020-03-xx). See the original NeRF paper: https://arxiv.org/abs/2003.08934.
- This analysis published: 2023-11-15
- Last reviewed: 2023-11-15
Note: the analysis synthesizes multiple papers, open-source releases, and community tooling that matured through 2023. The two linked sources are representative anchors; many incremental contributions filled engineering gaps between NeRF and Gaussian Splatting. For a continuously updated catalog of related work and tooling, see our /blog/ and /faq/ pages.
High-level technical differences that matter for production
- Representation shape and rendering method
- NeRF: continuous neural function evaluated by ray-marching and neural network inference per sample (many small NN evaluations). High quality, but high and variable render cost.
- Gaussian Splatting: discrete learned primitives (Gaussians) rasterized via splatting and compositing on the GPU. Lower and more predictable runtime cost; friendly to rasterization pipelines and spatial indexing.
- Compute and latency profile
- Neural fields: high per-pixel inference cost; latency depends on network size, number of samples per ray, and accelerator characteristics.
- Splatting: mostly GPU raster operations (blending, instanced draws), predictable worst-case latency for a given primitive count.
- Memory and storage
- Neural fields: model weights can be compact but require network execution memory and activations; caching strategies are nontrivial.
- Splatting: larger explicit primitive counts and attributes, but more straightforward memory budgeting and streaming.
- Editability and authoring
- Neural fields: editing requires retraining or local fine-tuning; authoring tools are experimental.
- Splatting: more amenable to targeted edits (remove/change splats), conversion to meshes/point-cloud proxies, and integration into DCC workflows.
- Compression and serialization
- Neural fields: weight quantization, distillation, or hybrid encodings needed for delivery.
- Splatting: standard binary formats and spatial indices (octrees/tiles) map well to existing asset pipelines.
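To make the splat-side memory story concrete, here is a minimal budgeting sketch in Python. The per-splat attribute layout (position, log-scale, rotation quaternion, degree-0 color, opacity) and the 25% overhead reserve are illustrative assumptions, not a standard format:

```python
import struct

# Hypothetical per-splat attribute layout (an assumption, not a standard):
# position (3x f32), log-scale (3x f32), rotation quaternion (4x f32),
# degree-0 spherical-harmonic color (3x f32), opacity (1x f32).
SPLAT_FORMAT = "<3f3f4f3ff"  # little-endian, 14 floats
SPLAT_BYTES = struct.calcsize(SPLAT_FORMAT)  # 56 bytes per splat

def splat_budget(gpu_budget_bytes, overhead=0.25):
    """Largest splat count that fits the budget, reserving `overhead`
    of the budget for indices, tiles, and framebuffer scratch."""
    usable = gpu_budget_bytes * (1.0 - overhead)
    return int(usable // SPLAT_BYTES)

max_splats = splat_budget(1 << 30)  # splats that fit in a 1 GiB budget
```

At 56 bytes per splat, a 1 GiB budget with a quarter reserved for overhead admits roughly 14 million splats. Explicit arithmetic like this is exactly what neural-field activation memory makes hard to do up front.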
Separating hype from production-ready reality — engineering criteria
Use these objective criteria to decide if a render-time representation is pipeline-ready for your production layer. A technology should be evaluated against all of them:
- Deterministic renderability
- Requirement: given identical inputs and hardware/driver versions, renders must be repeatable within acceptance tolerances (pixel delta thresholds).
- Why it matters: compositing, automated pixel tests, and regression detection rely on determinism.
- How Gaussian Splatting scores: good potential — GPU rasterization can be made deterministic if blending/order is controlled; however, floating-point non-associativity and driver variations need explicit mitigation.
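One way to mitigate the ordering problem is to take blend order out of the driver's hands entirely. A minimal sketch, assuming simplified (id, depth) splat records and an illustrative 1/1024 depth quantum:

```python
def deterministic_order(splats):
    """Back-to-front splat order with a fully deterministic key.

    `splats` is a list of (splat_id, view_depth) tuples -- a simplified
    stand-in for real primitive records. Depth is quantized (1/1024
    units, an illustrative choice) so sub-ULP float differences across
    drivers cannot flip the order, and splat_id breaks exact ties.
    """
    return sorted(splats, key=lambda s: (-round(s[1] * 1024), s[0]))

splats = [(0, 2.50001), (1, 2.50002), (2, 1.0)]
order = [sid for sid, _ in deterministic_order(splats)]
```

Quantizing depth before sorting means a tiny depth difference on one driver cannot reorder the composite on another; ties within a quantum resolve by id the same way everywhere.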
- Bounded worst-case latency and memory
- Requirement: explicit upper bounds on frame latency and GPU memory for a scene under production budgets (e.g., 20ms/frame, <8GB).
- Why it matters: real-time playback and batch render scheduling.
- How to measure: instrument scene budgets; require graceful degradation (LOD, fallback) when bounds are exceeded.
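The degradation rule can be a small, explicit policy. A sketch using the 20 ms / 8 GB example budgets above; the halve-the-splats LOD step is illustrative, not a recommendation:

```python
def enforce_budgets(frame_ms, mem_bytes, n_splats,
                    max_ms=20.0, max_bytes=8 * 1024**3):
    """Decide what to do with a frame that may exceed production budgets.

    The 20 ms / 8 GB defaults mirror the example budgets in the text;
    halving the splat count per LOD step is an illustrative policy."""
    if frame_ms <= max_ms and mem_bytes <= max_bytes:
        return ("render", n_splats)
    reduced = n_splats // 2
    if reduced >= 1:
        return ("reduce_lod", reduced)    # degrade gracefully and retry
    return ("fallback_proxy", 0)          # cannot reduce: use proxy mesh

action, count = enforce_budgets(frame_ms=31.0, mem_bytes=6 * 1024**3,
                                n_splats=4_000_000)
```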
- Validation hooks and numeric metrics
- Requirement: ability to run automated visual metrics (PSNR/SSIM/LPIPS) and scene integrity checks (missing data, alpha correctness).
- Why it matters: automated pipelines must detect regressions before editorial review.
- Practically: provide render outputs at canonical viewpoints with numeric baselines and thresholds.
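A numeric gate of this kind is a few lines. The sketch below implements PSNR over flattened pixel buffers with an illustrative 40 dB threshold (LPIPS needs a learned model and is omitted):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio over two equal-length pixel buffers
    (flattened, values in [0, max_val]); identical buffers give inf."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / len(reference)
    if mse == 0.0:
        return math.inf
    return 10.0 * math.log10(max_val ** 2 / mse)

def passes_gate(reference, test, threshold_db=40.0):
    """Numeric validation gate; the 40 dB baseline is illustrative."""
    return psnr(reference, test) >= threshold_db

ref = [0.0, 0.5, 1.0, 0.25]
ok = passes_gate(ref, [0.0, 0.5, 1.0, 0.25])   # identical render
bad = passes_gate(ref, [0.0, 0.4, 0.9, 0.25])  # visible drift
```

In production the buffers would come from canonical-viewpoint renders, and the threshold from a per-scene baseline rather than a global constant.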
- Asset lifecycle and tooling interoperability
- Requirement: conversion tools that map DCC-friendly formats (USD, Alembic, EXR) to/from representation; metadata for authoring and versioning.
- Why it matters: artists and studios must inspect, edit, and iterate without opaque retraining cycles.
- Streaming and partial-load behavior
- Requirement: ability to stream scene subsets and progressively refine without halting playback.
- Why it matters: large sets of splats or volumetric data must not block start-of-frame or exceed memory.
- Splatting advantage: natural to tile and stream splats by screen-space importance.
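A streaming scheduler can make that importance ordering explicit. A sketch with hypothetical tile records, ranking by screen coverage per byte (one plausible heuristic; real systems also weight distance and recency):

```python
def stream_order(tiles, budget_bytes):
    """Choose which splat tiles to load first under a byte budget.

    `tiles` maps tile_id -> (screen_coverage_px, size_bytes). Ranking
    by coverage-per-byte is an assumed heuristic, with tile_id as a
    deterministic tie-break.
    """
    ranked = sorted(tiles.items(),
                    key=lambda kv: (-kv[1][0] / kv[1][1], kv[0]))
    loaded, used = [], 0
    for tile_id, (_, size) in ranked:
        if used + size <= budget_bytes:
            loaded.append(tile_id)
            used += size
    return loaded  # tiles left out refine progressively later

tiles = {"t0": (50_000, 4_000_000),   # large on screen, cheap
         "t1": (1_000, 4_000_000),    # barely visible
         "t2": (80_000, 12_000_000)}  # large on screen, expensive
```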
- Failure modes and graceful fallback
- Requirement: documented failure modes and deterministic fallbacks (proxy geometry, lower LOD).
- Why it matters: unexpected artifacts must not stop dailies or automated renders.
- Authoring and editing cost
- Requirement: predictable artist workflow for capture → author → validate → integrate, without extensive retraining for minor edits.
- Why it matters: production schedules cannot tolerate multi-hour retrains for small changes.
Concrete production implications (what studios actually face)
- Capture and preprocessing changes
- More cameras and calibrated capture rigs remain useful, but Gaussian Splatting reduces per-scene optimization time compared to full volumetric retraining.
- Practical implication: plan capture schedules with spatial coverage that matches splat density targets.
- Render farm and scheduling
- Predictable per-frame cost simplifies scheduling; add a step that validates the GPU driver/version matrix for deterministic results.
- Practical implication: use driver-pinned images for render nodes, or CPU fallbacks for critical validation batches.
- Storage & CDN
- Splat-based assets are larger than compressed neural weights but smaller than naive dense volumes; they benefit from tiling and chunked storage.
- Practical implication: store splats in tileable blobs that can be requested by scene region.
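The region-request pattern reduces to a stable mapping from splat position to blob key. A sketch with an illustrative 8-unit tile size:

```python
def tile_key(position, tile_size=8.0):
    """World position -> storage-tile key. Splats that share a key are
    serialized into one blob so a renderer can request only the regions
    it needs. The 8-unit tile size is illustrative; floor division
    keeps keys consistent for negative coordinates too."""
    x, y, z = position
    return (int(x // tile_size), int(y // tile_size), int(z // tile_size))

def group_into_blobs(splats):
    """Group (position, payload) pairs into per-region blobs."""
    blobs = {}
    for pos, payload in splats:
        blobs.setdefault(tile_key(pos), []).append(payload)
    return blobs

blobs = group_into_blobs([((1.0, 2.0, 3.0), "a"),
                          ((7.9, 0.1, 0.0), "b"),
                          ((9.0, 0.0, 0.0), "c")])
```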
- Art pipeline integration
- Artists need conversion/export/import plugins for DCCs and lookdev tools. Expect to build or adopt plugins rather than rely exclusively on research repos.
- Practical implication: prioritize USD-friendly exporters to maintain asset metadata and scene hierarchy.
Tradeoffs — both sides clearly
- Quality vs. predictability
- Neural volumes can produce very high-fidelity view-dependent effects; however, quality may vary with sampling budgets.
- Splatting yields highly predictable performance and easier integration at some cost in view-dependent subtlety (though recent splatting variants recover much of that).
- Generalization vs. per-scene optimization
- Large neural models can generalize across scenes when trained on many of them, but classic NeRFs are optimized per scene and must be retrained for each new capture.
- Splatting is inherently per-scene but fast to optimize, which fits many production workflows where per-shot assets are acceptable.
- Precompute time vs. runtime cost
- NeRF-like systems amortize effort into training; Gaussian Splatting shifts work into a faster fitting stage, enabling quicker turnaround between capture and usable asset.
Actionable validation-first, deterministic checklist for pipeline decisions
- Define clear acceptance criteria (numeric + visual)
- Per-scene: PSNR/LPIPS thresholds on canonical views.
- Performance: max GPU memory, max frame latency, acceptable stall/drop behavior.
- Build a deterministic render harness
- Pin GPU driver/builds or containerize render nodes.
- Use fixed random seeds and specify blending/compositing order.
- Capture pixel diffs and metric logs per render.
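The pixel-diff step of such a harness can be a simple tolerance-bounded gate. A sketch over 8-bit buffers with illustrative thresholds (at most 0.1% of pixels off by more than 2 levels):

```python
def pixel_gate(baseline, candidate, max_delta=2, max_bad_fraction=0.001):
    """Tolerance-bounded regression gate over 8-bit pixel buffers.

    Passes when at most `max_bad_fraction` of pixels differ from the
    baseline by more than `max_delta` levels. Both thresholds are
    illustrative and should be tuned per show."""
    bad = sum(1 for a, b in zip(baseline, candidate) if abs(a - b) > max_delta)
    return bad / len(baseline) <= max_bad_fraction

base = [128] * 1000
ok = pixel_gate(base, [128] * 999 + [131])        # 0.1% bad pixels: passes
fail = pixel_gate(base, [128] * 998 + [131] * 2)  # 0.2% bad pixels: fails
```

A fraction-of-pixels bound tolerates benign float drift while still catching localized artifacts that a global mean would average away.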
- Prepare validation datasets
- Create a set of canonical scenes and viewpoints (capture lighting variations, occlusions, fine detail).
- Maintain ground-truth renders (photogrammetry or plate) for regression tests.
- Implement graceful fallbacks in the production layer
- LODs that reduce splat counts deterministically.
- Proxy mesh + texture fallback if validation fails or budgets are exceeded.
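Deterministic splat-count reduction is mostly a matter of choosing a fully ordered ranking. A sketch with simplified (id, opacity) records:

```python
def decimate(splats, keep_fraction):
    """Deterministically drop splats to build a coarser LOD.

    `splats` is a list of (splat_id, opacity) tuples (a simplified
    record, an assumption). Ranking by (-opacity, splat_id) is fully
    reproducible: equal opacities are broken by id, so every run keeps
    the same subset."""
    n_keep = max(1, int(len(splats) * keep_fraction))
    ranked = sorted(splats, key=lambda s: (-s[1], s[0]))
    return sorted(ranked[:n_keep], key=lambda s: s[0])  # stable draw order

lod = decimate([(0, 0.9), (1, 0.1), (2, 0.9), (3, 0.5)], 0.5)
```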
- Integrate asset metadata into artist tools
- Include provenance, capture parameters, and fitting logs in asset metadata.
- Export/import via USD or a similar scene graph to preserve pipeline hooks.
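A provenance sidecar can be as simple as a JSON document carrying a digest of the fitting log. The schema below is illustrative, not a standard, and the asset path is hypothetical:

```python
import hashlib
import json

def make_sidecar(asset_path, capture_params, fit_log_lines):
    """Build a provenance sidecar for a splat asset.

    The schema here is illustrative, not a standard. Hashing the fit
    log lets validation detect a silent refit without carrying the full
    log through the scene graph."""
    log_blob = "\n".join(fit_log_lines).encode("utf-8")
    return {
        "asset": asset_path,
        "schema_version": 1,
        "capture": capture_params,  # rig, lens, exposure, etc.
        "fit_log_sha256": hashlib.sha256(log_blob).hexdigest(),
    }

sidecar = make_sidecar(
    "shots/sq010/env_splats.bin",                 # hypothetical path
    {"cameras": 64, "exposure_locked": True},
    ["iter 0 loss 0.41", "iter 7000 loss 0.02"],
)
text = json.dumps(sidecar, sort_keys=True)  # deterministic serialization
```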
- Automate collection of runtime telemetry
- Capture GPU memory peaks, frame times, and percent of tiles loaded.
- Use telemetry thresholds to trigger alerts or fallbacks.
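The threshold check itself is straightforward once the fields are agreed on. A sketch with assumed telemetry field names:

```python
def check_telemetry(samples, mem_limit_bytes, frame_ms_limit, min_tiles_pct):
    """Evaluate per-frame telemetry against alert thresholds.

    `samples` is a list of dicts with gpu_mem_bytes, frame_ms, and
    tiles_loaded_pct (field names are an assumption). Returns the list
    of triggered alerts; an empty list means no fallback fires."""
    alerts = []
    if max(s["gpu_mem_bytes"] for s in samples) > mem_limit_bytes:
        alerts.append("gpu_mem_peak")
    if max(s["frame_ms"] for s in samples) > frame_ms_limit:
        alerts.append("frame_time")
    if min(s["tiles_loaded_pct"] for s in samples) < min_tiles_pct:
        alerts.append("tiles_starved")
    return alerts

samples = [
    {"gpu_mem_bytes": 6 * 1024**3, "frame_ms": 14.0, "tiles_loaded_pct": 99.0},
    {"gpu_mem_bytes": 7 * 1024**3, "frame_ms": 23.5, "tiles_loaded_pct": 97.5},
]
alerts = check_telemetry(samples, mem_limit_bytes=8 * 1024**3,
                         frame_ms_limit=20.0, min_tiles_pct=95.0)
```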
- Stage rollout
- Pilot: limit to non-critical shots, one editorial team, and automated validation.
- Expanded pilot: multiple scenes and live capture loop, maintain regression baselines.
- Production: full integration into render farm with deterministic validation gates.
Integration patterns and reference roadmap
- Short-term (weeks)
- Prototype with a small set of captured scenes.
- Implement deterministic harness and run automated render comparisons.
- Evaluate storage/tiling scheme for splats.
- Mid-term (months)
- Build DCC export/import plugins (USD and adjacent formats).
- Add streaming infrastructure and LOD controls.
- Add automated validation gates into CI for artist submissions.
- Long-term (quarter+)
- Integrate representation into in-house lookdev and compositing tools.
- Provide artist-facing editing operations that avoid retraining (e.g., localized re-weighting of splats).
- Standardize asset format and versioning for long-term archival.
What changed since 2023-07-25
- Source date referenced: 2023-07-25 (Gaussian Splatting preprint and code).
- What changed since that date:
- Community implementations improved GPU pipelines and tiling strategies that make streaming and LOD practical.
- Tooling emerged to export/import to USD and DCCs, reducing authoring friction.
- Best practices converged on deterministic GPU setups and validation harnesses for render-time representations.
- Practical effect: several research ideas matured into engineering patterns that make the representation usable in production layer pilots; however, full studio integration still requires pipeline work (validation harnesses, fallbacks, and authoring tools).
Sources and further reading (representative)
- NeRF — Neural Radiance Fields (Mildenhall et al., 2020): https://arxiv.org/abs/2003.08934
- Gaussian Splatting — representative preprints and project pages (2023): https://arxiv.org/abs/2307.XXXXX
- Community implementations and tooling repositories: search for "gaussian splatting github" and related USD exporters.
Concise summary
- Why render-time representations took off after 2023: better runtime predictability, GPU-friendly rasterization of learned primitives, and maturing tooling reduced the integration cost into production layers.
- Production readiness criteria: deterministic outputs, bounded latency/memory, validation hooks, streaming, tooling interoperability, and clear failure fallbacks.
- Actionable first steps: define numeric acceptance criteria, build a deterministic render harness, create validation datasets, and stage a pilot with strict rollback/fallback paths.
Next steps for pipeline teams
- Run a short pilot using 2–3 representative shots and the checklist above.
- Invest in deterministic render infrastructure and automated visual metrics.
- Prioritize USD/scene-graph integration so artists can inspect and edit without retraining full models.
For related posts and pipeline-level questions, see our /blog/ and /faq/ pages.