[Draft] GSoC 25 – Improving EWA Sampling + Exposing It in the Compositor

Improving Sampling in the Compositor

Benjamin Beilharz
Blender Chat: Ben Beilharz @ben:blender.org
email redacted

Some handles:

  • GitHub: pixelsandpointers
  • BlueSky: ben.graphics

Synopsis

The goal of this project is to improve the compositor’s sampling situation. This includes exposing the interpolation option to all nodes that rely on it. The interpolation options are already partially implemented, as seen in #119592.

Furthermore, sampling requires border conditions, such as zero, extend, or wrap. Currently, these conditions are not exposed to the user; the wrap condition is used across methods instead. The conditions should also be selectable in the node, giving users more freedom.
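To illustrate the three border conditions, here is a minimal 1-D sketch. The enum and function names are hypothetical, chosen for illustration; they are not Blender’s actual API:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical names for the three boundary modes discussed above.
enum class Boundary { Zero, Extend, Wrap };

// Maps a possibly out-of-bounds coordinate `x` onto the valid range
// [0, extent). For the Zero mode, -1 signals "outside"; the caller would
// then substitute a zero (transparent/black) texel.
inline int resolve_coord(int x, int extent, Boundary mode)
{
  switch (mode) {
    case Boundary::Zero:
      return (x >= 0 && x < extent) ? x : -1;
    case Boundary::Extend:
      // Clamp to the nearest edge texel.
      return std::clamp(x, 0, extent - 1);
    case Boundary::Wrap: {
      // C++ `%` keeps the sign of the dividend, so shift negatives up.
      int m = x % extent;
      return m < 0 ? m + extent : m;
    }
  }
  return -1;
}
```

A per-axis drop-down in the node would then simply select one of these modes independently for x and y.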

Lastly, there is one sampling method that requires some special attention – BLI_ewa_filter.
Currently, EWA sampling uses different implementations on the CPU and GPU, which produce different results across devices, drivers, and operating systems. Blender uses EWA sampling primarily for anisotropic texture sampling. The goal is to write a CPU implementation of the EWA filter that is transferable to the GPU, aiming for similar or equal results on both, so that it can be used in the compositor to enable anisotropic compositing. Literature is available:

  • Heckbert’s original algorithm [1]
  • Modern GPU-based approximations exist [2]
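As a concrete starting point, the first step of Heckbert’s EWA filter [1] builds an implicit ellipse from the screen-space partial derivatives of the texture coordinates; samples inside the ellipse are then weighted by a Gaussian. Below is a minimal sketch of that coefficient setup, an illustration of the standard formulation rather than Blender’s BLI_ewa_filter code:

```cpp
#include <cassert>

// Coefficients of the implicit ellipse A*u^2 + B*u*v + C*v^2 = F bounding
// the pixel footprint in texture space (Heckbert's notation).
struct Ellipse {
  double A, B, C, F;
};

// dudx, dvdx, dudy, dvdy: partial derivatives of the texture coordinates
// (u, v) with respect to the screen coordinates (x, y).
inline Ellipse ewa_ellipse(double dudx, double dvdx, double dudy, double dvdy)
{
  Ellipse e;
  e.A = dvdx * dvdx + dvdy * dvdy;
  e.B = -2.0 * (dudx * dvdx + dudy * dvdy);
  e.C = dudx * dudx + dudy * dudy;
  // F normalizes the quadratic so that Q(u, v) <= F holds inside the
  // footprint; it equals A*C - B^2/4.
  e.F = e.A * e.C - 0.25 * e.B * e.B;
  return e;
}
```

For an axis-aligned unit footprint (dudx = dvdy = 1, dvdx = dudy = 0) this yields the unit circle, as expected. Making this loop produce bit-for-bit comparable results on CPU and GPU is the hard part of the project, not the formula itself.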

To summarize the project proposal:

  • Unify compositor nodes by exposing interpolation and border conditions to eligible nodes
  • Improve the EWA sampling situation by writing a coherent implementation for CPU and GPU with matching results

Benefits

Exposing the sampling and border condition options will give users more freedom in their compositing workflow.
Anisotropic texture filtering in the compositor would allow users to perform high-quality resampling with reduced aliasing, which is particularly useful after transformations such as rotation, scaling, or camera projection. This improvement will enhance texture and compositing workflows.

Deliverables

  • Compositor nodes with exposed interpolation and border conditions
  • A modernized CPU implementation of EWA filtering
  • A GPU implementation of EWA filtering based on the CPU implementation

Project Schedule

  • Literature Research (1-2 weeks): Familiarize myself with the EWA literature, research modern methods and approximations, and run a comparative analysis. Discuss with mentors to weigh trade-offs and pick an implementation.
  • Phase 1: CPU Implementation (2-3 weeks): Rewrite and optimize BLI_ewa_filter; add unit tests.
  • Phase 2: GPU Implementation (1-2 weeks): Adapt a GPU version based on the CPU one.
  • Phase 3: Interpolation Exposure (2 weeks): Add EWA to the interpolation methods and expose them to eligible nodes.
  • Phase 4: Border Condition Exposure (2 weeks): Add border conditions to eligible nodes.
  • Testing & Final Adjustments (2 weeks): Bug fixes and documentation.

Total: 175 hours over 13 weeks (flexible start date from May 16).

Potential Challenges & Considerations

  • Ensuring the filter is robust, producing the same results across different hardware and platforms
  • Potentially writing an approximation of EWA sampling

Bio

G’day, this is Ben! I’m a PhD student at TU Darmstadt, researching the intersection of physically based rendering (PBR) and vision science. I hold a Bachelor’s in Computational Linguistics (Heidelberg University) and a Master’s in Computer Science with a focus on Visual Computing (Technical University of Darmstadt).

In my free time, I enjoy learning new languages, traveling, photography, and working on some personal projects.

My passion for computer graphics was ignited when I watched Avatar, set aside for a while, and reignited after Avatar 2. This led me to pivot from NLP/AI to PBR and differentiable rendering. Since my university lacks dedicated global illumination courses, I have been self-teaching everything PBR.

My aspiration is to work as a rendering engineer someday, but that requires honing my skills over the next three years, which I also hope to achieve by contributing to Blender and growing through those contributions.

For development, I have mainly worked in the machine learning domain and am comfortable in Python, having worked on various research projects and at companies and startups. I also taught Python to first-year university students for a year. In 2023, I took part in the ASWF summer learning program, where I learned to write custom tooling for DCCs in Python and got in touch with MaterialX and OSL. C++ became increasingly important to me as my interest in CG grew. I have not been able to write C++ on a day-to-day basis, but I have completed some projects, such as a minimalistic 3D editor using OpenGL and some rendering projects. So there is a strong urge to write more C++ and become as comfortable with it as with Python. Apart from contributing to Blender and potentially to open standards at the ASWF, I have also started working on a production renderer as a personal project that I want to develop over the coming years.

After GSoC was announced, I looked into the proposed projects and realistically picked one that seems slightly above my current C++ knowledge, so I have room to grow while learning more about Blender’s internals – the compositor. I opened a PR for a good first issue. It touched upon surprisingly many things – the RNA/DNA system, the node system, UI builder patterns, GPU contexts, etc. – so it was a very nice introduction to Blender and the APIs that will be used to implement this proposal.

References

[1] Heckbert, P. S. (1989). Fundamentals of texture mapping and image warping.
[2] Mavridis, P., & Papaioannou, G. (2011, February). High-quality elliptical texture filtering on GPU. In Symposium on Interactive 3D Graphics and Games (pp. 23-30).


After some comments from Habib (thanks!), it makes sense to expose the EWA filter not as a separate node, but to add it to the interpolation/filter drop-down of the existing nodes.

Edited the draft accordingly.


I would like you to mention where this function is used in Blender. Is it only used by the compositor, or is it used by other parts of Blender as well?

It is important to clarify this point. EWA sampling already works across multiple platforms; it is just that it produces different results across GPUs/drivers/OSs, and of course compared to the CPU. So you should state this clearly as one of the reasons for doing the project.

I think you are already somewhat familiar with most of the API that we would need to use, so this phase should probably be shorter in duration. What I would like to see explicitly stated is dedicated time for a literature review and a comparative analysis of existing methods of EWA, be it ground truth or approximation.

I also feel like this could be shorter, since the CPU implementation will likely be adapted for the GPU; it is not going to be code written from the ground up. This would also make room for what I mention below.


I suggest that you incorporate #119592 into your proposal as well. So your proposal would be generally about improving sampling in the compositor:

  • Improve the EWA situation.
  • Expose interpolation options to all nodes that need them. Your first issue already tackled one of those, so it should not take much time.
  • Expose boundary handling to all nodes that need it.

The last point is about providing the user with a drop-down that contains:

  • Zero.
  • Extended.
  • Repeat.

One such drop-down per axis would control how out-of-bounds access is handled. This is what I meant by removing the Repeat option in the Translate node; it would be replaced by this new functionality.


Thanks Omar! :slight_smile:
Updated the proposal with the feedback.
