Photorealistic Photogrammetry with Machine Learning (GSoC 2025)

Hi everyone! I’m Patrick, and I’m looking to add 3D reconstruction to Blender, using machine learning technologies.

My proposal is here: Blender GSoC proposal v2 - Google Docs

I have a few topics of discussion about the technical aspects:

PyTorch installation: What’s the best way to package PyTorch (which is on the order of 1GB) with Blender Python?
Because of the large size, I think optional installation from user preferences is a good idea. It could be integrated similarly to how the Custom Scripts path works today.

Point cloud representation: Currently, there’s a PointCloud object type in development. For this project, we need a way to represent a cloud of 3D Gaussians, which is a PointCloud with a few extra features.
Would it be a good idea to create e.g. a GaussianPointCloud object that inherits from PointCloud, or should they be completely separate classes?
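To make the question concrete, here is a minimal sketch of the extra per-point data a Gaussian cloud would carry beyond plain positions, following the parameterization in the 3DGS paper (mean, rotation quaternion, per-axis scale, opacity, spherical-harmonic color). The class and field names are illustrative only, not a proposal for the actual data layout:

```python
# Illustrative sketch of the per-Gaussian attributes beyond plain positions,
# following the 3DGS parameterization. Names are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianCloud:
    positions: np.ndarray   # (N, 3) means of the Gaussians
    rotations: np.ndarray   # (N, 4) unit quaternions (covariance orientation)
    scales: np.ndarray      # (N, 3) per-axis standard deviations
    opacities: np.ndarray   # (N,)   alpha in [0, 1]
    sh_coeffs: np.ndarray   # (N, K, 3) spherical-harmonic RGB coefficients

    def covariances(self) -> np.ndarray:
        """Per-Gaussian 3x3 covariance, Sigma = R S S^T R^T as in the paper."""
        w, x, y, z = self.rotations.T
        R = np.stack([
            np.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],     axis=-1),
            np.stack([2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],     axis=-1),
            np.stack([2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)], axis=-1),
        ], axis=-2)                                # (N, 3, 3) rotation matrices
        S = self.scales[:, :, None] * np.eye(3)    # (N, 3, 3) diagonal scales
        RS = R @ S
        return RS @ RS.transpose(0, 2, 1)
```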

Rendering: What’s the best way to add a rendering algorithm for 3D Gaussians into existing render engines?
The rendering algorithm described in the original paper is view-dependent rasterization with alpha blending. Is this paradigm compatible with EEVEE or Cycles?
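For context, the compositing step in the paper is plain depth-sorted alpha blending of the projected splats. A minimal NumPy sketch of that accumulation for a single pixel (ignoring the projection and tile-based rasterization, which are the hard parts) could look like this:

```python
# Minimal sketch of 3DGS per-pixel compositing: splats are sorted
# front-to-back and alpha-blended until the transmittance saturates.
# Inputs are per-splat color, alpha, and depth already evaluated at one pixel.
import numpy as np

def composite_pixel(colors, alphas, depths, stop_T=1e-4):
    """colors: (N, 3), alphas: (N,), depths: (N,). Returns the blended RGB."""
    order = np.argsort(depths)            # front to back
    C = np.zeros(3)
    T = 1.0                               # remaining transmittance
    for i in order:
        a = alphas[i]
        C += T * a * colors[i]
        T *= (1.0 - a)
        if T < stop_T:                    # early termination, as in the paper
            break
    return C
```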

Thanks!

7 Likes

All good questions, and needless to say they need to be answered one way or another.
The challenge will be answering them within a short time frame. Perhaps focusing on a single topic would make it more feasible.

As Gaussian splatting doesn’t provide actual geometry, its integration with EEVEE/Cycles is limited. There are also other splatting techniques that are promising. Perhaps it is better to integrate it as a background shader. Depending on your experience with the Blender code-base, this could already be a project in itself.

How to store it in the scene depends on how much interaction and standardization is expected. Perhaps the data can just be stored as point cloud attributes. What kind of editing tools can the user expect?
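For example, the existing generic attribute API would already cover the raw data. A rough sketch, using a vertex-only Mesh as a stand-in since the PointCloud type’s Python API is still limited; the attribute names are only a suggestion, and the QUATERNION attribute type assumes Blender 4.x:

```python
# Rough sketch: store Gaussian parameters as generic point-domain attributes.
# A vertex-only Mesh stands in for the experimental PointCloud type here;
# attribute names ("rotation", "scale", "opacity", "sh0") are illustrative.
import bpy
import numpy as np

def make_gaussian_object(name, positions, rotations, scales, opacities, sh0):
    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata([tuple(p) for p in positions], [], [])

    rot = mesh.attributes.new("rotation", 'QUATERNION', 'POINT')
    rot.data.foreach_set("value", np.asarray(rotations, dtype=np.float32).ravel())

    scale = mesh.attributes.new("scale", 'FLOAT_VECTOR', 'POINT')
    scale.data.foreach_set("vector", np.asarray(scales, dtype=np.float32).ravel())

    alpha = mesh.attributes.new("opacity", 'FLOAT', 'POINT')
    alpha.data.foreach_set("value", np.asarray(opacities, dtype=np.float32))

    col = mesh.attributes.new("sh0", 'FLOAT_VECTOR', 'POINT')
    col.data.foreach_set("vector", np.asarray(sh0, dtype=np.float32).ravel())

    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)
    return obj
```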

Another topic is whether to standardize on a specific implementation/framework. This field is very dynamic, and any specific implementation might become outdated really fast. Perhaps a better first step is being able to import the data, rather than doing the computation within Blender. Yes, from a UX point of view computing it in Blender might be better, but in the longer run maintaining and updating it will become a bottleneck. Using add-ons for this might be one way to solve that.

Thinking more about the project, I believe some parts would not fit in an MVP (Minimum Viable Product), but are stretch goals. Identifying and prioritizing them could lead to a different selection of features.

9 Likes

Hi Jeroen, thanks for the reply.

What other splatting techniques were you referring to? 3DGS seems to be one of the most popular splatting algorithms (almost every “splatting” project I see is referring to 3DGS).

I agree that it’s better to leave specific technologies to add-ons. However, given the popularity and applications of Gaussian point clouds (both in Blender and in current research), I think implementing a representation of 3D Gaussians is a good main goal.
Gaussians can be saved to and loaded from files (spz is a recent format), and this could be a main goal as well.
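As a starting point for import, here is a hedged sketch of reading the PLY layout written by the reference 3DGS implementation, using the plyfile package. The property names and the log/logit storage assume the INRIA exporter; spz and other formats would need their own readers:

```python
# Sketch of loading a reference-style 3DGS .ply into plain NumPy arrays.
# Assumes the property names used by the INRIA reference implementation;
# other exporters may differ.
import numpy as np
from plyfile import PlyData

def load_gaussian_ply(path):
    v = PlyData.read(path)["vertex"]
    positions = np.stack([v["x"], v["y"], v["z"]], axis=-1)
    rotations = np.stack([v[f"rot_{i}"] for i in range(4)], axis=-1)
    # Scales and opacity are stored in log / logit space in the reference code.
    scales = np.exp(np.stack([v[f"scale_{i}"] for i in range(3)], axis=-1))
    opacities = 1.0 / (1.0 + np.exp(-np.asarray(v["opacity"])))
    sh_dc = np.stack([v[f"f_dc_{i}"] for i in range(3)], axis=-1)
    return positions, rotations, scales, opacities, sh_dc
```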

In Blender, Gaussians could be used in Geometry Nodes, converted to meshes, rendered, and possibly used as a representation for fluid simulations.
Some of these are stretch and long-term goals.
But I think integration with Geometry Nodes, conversion to meshes, and rendering Gaussians as point clouds are viable within one project.

Another stretch goal could be to implement the MASt3R-3DGS photogrammetry pipeline as a separate add-on. This would allow reconstruction and importing through Blender’s UI.

So to summarize:
Main goals: Gaussian representation, import/export, Geometry Nodes, conversion to/from mesh, a simple rendering algorithm.
Stretch goals: MASt3R-3DGS pipeline, 3DGS rendering algorithm.

Do you have any thoughts on this?

2 Likes

Oh no.
Add PyTorch and we can say goodbye to Blender.
Building that monster is a challenge many times harder than even building OpenImageIO.
Meanwhile, AI-based photogrammetry algorithms are no better than analytical ones, and can’t compare to any proprietary solution.

Gaussian Splats and NeRF are both too young at this point, and adding any method now means adding something that will be thrown away in a couple of months, because newer methods will replace it.

3 Likes

Hi, I’m not sure I understand the logic behind your points.

First, PyTorch can already be integrated with Blender quite easily; simply set a custom scripts path in preferences, and pip install to that directory.
The discussion here is how to standardize this procedure for end users.
The key is that PyTorch is consumed as a Python module distributed as prebuilt wheels, not as a C++ dependency: it doesn’t need to be linked at compile time.
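For reference, the manual route today is roughly the following; the target directory and the CPU wheel index are illustrative choices, and a preferences option would just wrap these steps:

```python
# Minimal sketch of installing PyTorch for Blender's bundled Python and
# making it importable, without rebuilding Blender. The target directory
# and the CPU-only wheel index are illustrative.
import subprocess
import sys

TARGET = "/path/to/blender_modules"  # hypothetical user-chosen directory

# Use Blender's own interpreter so the wheel matches its Python version/ABI.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--target", TARGET,
    "torch", "--index-url", "https://download.pytorch.org/whl/cpu",
])

if TARGET not in sys.path:
    sys.path.append(TARGET)

import torch  # noqa: E402
print(torch.__version__)
```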

Second, machine learning algorithms can be, and in some cases already are, better than analytical ones.
The pose estimation project I mentioned, MASt3R, recently won the Niantic Map Free challenge, outperforming all previous analytical algorithms.

3D Gaussian Splatting has been around for almost two years now, and is used in many downstream applications. See the number of results on Google Scholar, for example.

As the technology changes, we can’t ignore these developments just because of what we’re used to in Blender so far.
You mentioned proprietary software. I can guarantee you that those vendors are putting effort into developing AI features (a quick search turns up plenty of examples). If Blender does not start now, it will fall behind, because such features really do increase productivity for artists.

My main point, as I stated in my reply to Jeroen, is that 3D Gaussians have proven to be a useful general technology. Specific pipelines for creating them (e.g. MASt3R-3DGS photogrammetry) will definitely come and go with new research.
As I stated in the project proposal, this project would add a concrete feature for artists (camera pose estimation and photogrammetry) and, at the same time, a versatile machine learning technology so Blender can continue developing in this direction.

1 Like

Good. Blender doesn’t need “AI”.

What’s your source for this? If it’s just an opinion, that’s fine, but it’s presented as a fact, which may be misleading.

5 Likes

PyTorch plus model weights are too big to bundle with Blender; they would indeed be a better fit for an extension. From what I understand, something like ONNX Runtime is much smaller than PyTorch and more suitable for distributing to end users who don’t need to do training. But I’m not very familiar with the exact functionality and trade-offs of the various libraries in this domain.
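For illustration, the inference-only route would look roughly like this: export the trained network to ONNX once on the developer side, then at runtime only onnxruntime and the .onnx file are needed. File and tensor names here are illustrative:

```python
# Rough sketch of the inference-only route: the model is exported once from
# PyTorch (e.g. via torch.onnx.export), then run with onnxruntime at runtime,
# so end users never need the full torch package. Names are illustrative.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

image = np.zeros((1, 3, 512, 512), dtype=np.float32)  # dummy network input
outputs = session.run(None, {input_name: image})
```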

For rendering, maybe the easiest path would be MeshSplats, as the authors already made that work in Blender.

Directly rendering as point clouds is possible too, but probably more work. It would require implementing proper point cloud rendering in EEVEE, which is quite limited now, though it would be nice to solve that. It also means figuring out a shader node group for 3DGS and then seeing which shading system improvements and optimizations are needed to make it work well. It might need native support for ellipsoid rendering to be reasonably efficient.
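As a very rough illustration of the node-group idea (not a working 3DGS shader): per-point attributes could drive an emission/transparent mix, while the view-dependent Gaussian falloff remains the missing piece that would likely need new shading-system support. Attribute names match the earlier storage sketch and are illustrative:

```python
# Very rough sketch of driving a splat material from point attributes:
# an "opacity" attribute mixes a transparent BSDF with emission colored by
# "sh0". The actual per-splat Gaussian falloff is not covered here and is
# the part that would likely need native shading-system support.
import bpy

mat = bpy.data.materials.new("GaussianSplat")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

alpha = nodes.new('ShaderNodeAttribute')
alpha.attribute_name = "opacity"          # point-domain attribute (illustrative)

color = nodes.new('ShaderNodeAttribute')
color.attribute_name = "sh0"              # DC color term (illustrative)

emit = nodes.new('ShaderNodeEmission')
transp = nodes.new('ShaderNodeBsdfTransparent')
mix = nodes.new('ShaderNodeMixShader')
out = nodes.new('ShaderNodeOutputMaterial')

links.new(color.outputs["Color"], emit.inputs["Color"])
links.new(alpha.outputs["Fac"], mix.inputs["Fac"])
links.new(transp.outputs["BSDF"], mix.inputs[1])
links.new(emit.outputs["Emission"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```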

3 Likes

Haven’t looked too closely, but a word of warning: quite a few of the AI libraries depend on cuDNN in some form or another in their back end, which afaik we can’t distribute with a stock Blender. So whoever is going to work on / mentor this, please keep an eye on that.

3 Likes