How Gaussian Splatting and NeRFs Are Quietly Reshaping Filmmaking

  • Writer: Peter Gagnon
  • May 29
  • 3 min read

If you work in film, you’ve probably heard whispers about “NeRFs” and “Gaussian Splatting.” At first glance, they sound like niche tools for computer vision researchers or post-grad math projects. But ignore them at your peril: they’re already reshaping how we capture and create the world onscreen.

Let’s start with what they actually are…

Neural Radiance Fields (NeRFs) use AI to reconstruct 3D scenes from a series of 2D images. Feed the system a few photos or frames, and it learns how light interacts with the surfaces of a space, allowing you to render entirely new angles, camera moves, and lighting conditions that weren’t captured on set.
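Under the hood, the core trick is volume rendering: cast a ray through each pixel, sample points along it, ask a learned network for a density and color at each point, and blend the results by how much light survives to each sample. Here’s a minimal Python sketch of that compositing step. Note that `toy_radiance_field` is a hand-written stand-in for the trained network (the real thing is an MLP learned from your photos), so this is an illustration of the math, not a NeRF implementation:

```python
# Minimal sketch of NeRF-style volume rendering along one camera ray.
# A real NeRF queries a trained MLP; a toy analytic field stands in here
# so the example runs on its own.
import numpy as np

def toy_radiance_field(points):
    """Stand-in for the learned network: (density, rgb) per 3D point.
    Here: an opaque sphere of radius 1 at the origin, colored by position."""
    r = np.linalg.norm(points, axis=-1)
    density = np.where(r < 1.0, 10.0, 0.0)       # solid inside the sphere
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)  # color varies with position
    return density, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Classic NeRF compositing: weight each sample's color by its opacity
    times the light that survived to reach it."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction      # (n_samples, 3)
    density, rgb = toy_radiance_field(points)
    delta = np.diff(t, append=far)                # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                       # contribution per sample
    return (weights[:, None] * rgb).sum(axis=0)   # final pixel color

# One ray aimed straight at the toy sphere:
print(render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```

Do this for every pixel and you have an image; train the network so those images match your photos, and you have a NeRF.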

Gaussian Splatting is the faster, more real-time cousin. Instead of deep learning models and lengthy training times, it uses point clouds and mathematical “splats” (blurry 3D blobs, basically) to create detailed scenes that render quickly and still look remarkably good. Recent breakthroughs have made splats shockingly efficient, and for filmmakers, that’s huge.
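To make “blurry 3D blobs” concrete, here’s a toy Python sketch of the splatting step itself: sort a few Gaussians by depth, then alpha-blend them front to back. This is deliberately simplified; real 3D Gaussian Splatting projects anisotropic 3D covariances to the screen and optimizes millions of splats, whereas the isotropic 2D blobs below just illustrate the compositing idea:

```python
# Toy sketch of Gaussian splatting's compositing: depth-sorted blobs,
# blended front to back. Simplified to isotropic 2D Gaussians.
import numpy as np

H = W = 64
# Each splat: screen-space center, radius, color, opacity, depth.
# (Real 3DGS splats carry full 3D covariances and view-dependent color.)
splats = [
    {"center": (20, 24), "radius": 6.0,  "color": (1, 0, 0), "opacity": 0.8, "depth": 1.0},
    {"center": (32, 32), "radius": 10.0, "color": (0, 1, 0), "opacity": 0.6, "depth": 2.0},
    {"center": (44, 40), "radius": 8.0,  "color": (0, 0, 1), "opacity": 0.9, "depth": 3.0},
]

ys, xs = np.mgrid[0:H, 0:W]
image = np.zeros((H, W, 3))
transmittance = np.ones((H, W))   # per pixel: light not yet absorbed

for s in sorted(splats, key=lambda s: s["depth"]):   # front to back
    cy, cx = s["center"]
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    alpha = s["opacity"] * np.exp(-d2 / (2 * s["radius"] ** 2))  # Gaussian falloff
    image += (transmittance * alpha)[..., None] * np.array(s["color"], float)
    transmittance *= 1.0 - alpha

print(image.shape, round(image.max(), 3))  # a 64x64 render of three blended blobs
```

Because each step is just sorting and blending, with no network queries at render time, GPUs can push millions of these splats in real time.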

We’re entering an era where you can scan a location and start shooting virtually inside it within days, sometimes hours. Not months of modeling. Not weeks of texture work. Just rapid capture, light cleanup, and boom! You’ve got a set in your pocket!

That changes the game for:

  • Location scouting: Send a team with LiDAR or a photogrammetry rig and build virtual location references you can walk through in pre-production.

  • Virtual production: Build environments for LED volumes without massive 3D modeling pipelines.

  • Post-production pickups: Missed a shot? If you’ve got enough reference, NeRFs or splats can help reconstruct the space for VFX or reshoots.

  • Archiving and continuity: Digitally preserve set builds or real locations for later use, like a spatial photo album with infinite camera angles.

Here’s where things get interesting, and where the industry needs to pay attention:

  • Dynamic Scenes: Right now, NeRFs and Gaussian Splatting excel at static environments. Moving subjects? That’s still in the experimental phase. But progress is being made. Soon, we’ll be able to scan and recreate entire performances, not just places.

  • Editing Tools: These systems weren’t built for filmmakers (yet). We need artist-friendly tools that plug into existing pipelines, integrating with Unreal, Nuke, Blender, and the like, without needing a PhD in machine learning to use them.

  • Hybrid Workflows: The future isn’t NeRFs or traditional 3D; it’s both. Think scanned environments enhanced with artist-driven lighting, set extensions, and simulations: tech that blends precision capture with creative flexibility.

  • Ownership and Ethics: As we scan more people, places, and performances, who owns that data? And how do we prevent this from becoming another exploitative chapter for VFX artists?

NeRFs and Gaussian Splatting are no longer just research curiosities; they're rapidly becoming practical tools. We're already seeing them appear in previsualization, set scanning, and early-stage virtual production workflows. But the technology is still evolving. The creative pipelines, the software integration, and the standards around asset handling and artist involvement all remain in flux.

For filmmakers and studios, the key now is not hype, but familiarity. Understanding what these tools can (and can’t) do today, while keeping an eye on where they’re heading, is essential. They aren’t replacements for traditional workflows, at least not yet, but they offer compelling new options for capturing, manipulating, and archiving real-world environments.

As with any emerging tech, success will depend less on the novelty of the tools and more on how thoughtfully they’re implemented, both creatively and operationally. The coming years will be about refining use cases, building better integration, and ensuring the people using the tech are part of the conversation.

This is an area worth watching and, when appropriate, experimenting with. Not because it’s revolutionary, but because it’s quietly becoming part of the foundation we’ll all be working from.
