Compositor Anti-Aliasing Node - D2411

This thread is about getting user feedback on the new anti-aliasing compositor node. The goal is to agree on the final design from a user’s perspective before moving on to polishing the code.

Anti-Aliasing Node
The node removes the jagged distortion artefacts around edges known as aliasing.
Example:

Algorithm
The anti-aliasing node is an implementation of Enhanced Subpixel Morphological Antialiasing (SMAA) by Jimenez et al. [1]. The C++ implementation and its current integration in Blender were done by Shinsuke Irie [2].
It is a post-processing filter with three main steps. First, it finds edges and corners in the input image and classifies them according to patterns (first pass). Second, it eliminates edges that are usually not perceived by the eye, thereby avoiding unwanted artefacts; this step is called local contrast adaptation (second pass). Finally, anti-aliasing is achieved by blending the pixels along the borders detected in the first step, using precomputed textures (third pass).
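As an illustration, the first pass can be sketched as a simple luma-based neighbour comparison. This is a minimal sketch with hypothetical function names, not the actual smaa-cpp code (which stores separate left/top edge flags for the later pattern-classification and blending passes):

```python
def luma(rgb):
    # Rec. 709 luma weights, as commonly used for luma-based edge detection.
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def detect_edges(image, threshold=0.1):
    """Pass 1 (sketch): flag pixels whose luma differs from the
    left or top neighbour by at least `threshold`."""
    h, w = len(image), len(image[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            l = luma(image[y][x])
            left = x > 0 and abs(l - luma(image[y][x - 1])) >= threshold
            top = y > 0 and abs(l - luma(image[y - 1][x])) >= threshold
            edges[y][x] = left or top
    return edges
```

Lowering `threshold` flags more (weaker) discontinuities as edges, which is exactly what the Threshold parameter below controls.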

Node interface

  1. Edge detection (finds discontinuities in the image based on):

    • Luma
    • Color (typically RGB)
    • Value (also called Depth in earlier versions)
  2. Threshold
    Describes edge detection sensitivity across the whole image.
    Example threshold 0.5 vs 0.05:
    [image: threshold comparison]

  3. Local contrast adaptation factor
    The human eye does not perceive all edges equally. For instance it tends to mask low contrast edges in the presence of much higher contrasts in the surrounding area. Therefore applying anti-aliasing to unperceived edges will produce artefacts.
    This parameter quantifies the difference between low contrast and high contrast (neighbouring) edges.
    Example 1 vs 5 (notice that only edges around eyes are affected, where neighbouring edges of higher contrast exist):
    [image: contrast adaptation comparison]

  4. Corner detection
    Detects corners to help preserve the original shape.

  5. Corner rounding
    Example of no corner detection vs maximum corner preservation
    [image: corner rounding comparison]

  6. Value
    This serves as an alternative input to RGB or luma, e.g. a depth pass from a render, to help detect edges.

The Edges output gives the intermediate result of edge detection. It is only there for debugging purposes and will be removed later.
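The local contrast adaptation factor from point 3 can be illustrated with a tiny predicate. This is a hedged sketch following the idea in the paper, not the actual smaa-cpp code; the names are hypothetical:

```python
def keep_edge(edge_delta, max_neighbour_delta, adaptation_factor=2.0):
    # Local contrast adaptation (sketch): an edge is discarded when a
    # neighbouring edge is more than `adaptation_factor` times stronger,
    # since the eye masks the weaker edge in that case.
    return adaptation_factor * edge_delta >= max_neighbour_delta
```

With a factor of 2, an edge of contrast 0.1 next to one of contrast 0.5 is discarded (2 × 0.1 < 0.5); raising the factor keeps more weak edges next to strong ones.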

Proposed changes
Simplify interface as follows:
[screenshot: simplified node interface]
The authors of the original paper argue it is best to choose luma for edge detection, because this information is always available (unlike depth or object id) and because the underlying idea of morphological anti-aliasing assumes edges are color discontinuities. Also, I could reproduce the same results from luma as from RGB by choosing a different threshold. So edge detection was set to luma by default and the other options were removed.
Corner detection is now off if corner rounding is set to 0; otherwise the given value is used.
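In other words, the corner-detection toggle is now implied by the rounding value. A sketch of that rule (hypothetical function name, not the patch's actual code):

```python
def corner_detection_enabled(corner_rounding):
    # Corner detection is off when rounding is 0; any positive
    # rounding value both enables detection and sets its strength.
    return corner_rounding > 0.0
```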

Feedback is welcome!
Please let me know what you think of the interface, especially of the simplified version. It would be nice to have the node tried out by compositing artists. Windows, Linux and macOS builds are available here.

Implementation notes
This is a continuation of D2411. The patch is not ready for review; defining the node's design and usability is more important right now.

[1] Jorge Jimenez and Jose I. Echevarria and Tiago Sousa and Diego Gutierrez 2012, Enhanced Subpixel Morphological Antialiasing, in Computer Graphics Forum (Proc. EUROGRAPHICS 2012). https://www.iryoku.com/smaa/
[2] https://github.com/iRi-E/smaa-cpp


I like the suggested interface much more than what the original patch was doing. Is this something you’ve implemented in the current version of the patch?

To simplify testing by compositor artists, I think the following needs to be done:

  • Make sure your proposal is implemented in a patch
  • Move the patch to a branch in blender.git
  • Make the buildbot deliver builds for all platforms.

Make sure to get feedback from the technical artists in the module (this still needs to be documented, but they are Sean Kennedy and Sebastian Koenig). Also, make sure Sebastian Parborg is involved in reviews, as he seems to be very interested as well! Surely there are others who are interested; just making sure the team is included :slight_smile:


Thanks for the hint! I’ll try to get in touch with those people in the next few days.

Yes, the current version of the patch D2411 has the proposed changes implemented.

I’d be happy to do that, but I think I’d need more access to the repository than I currently have (I have already asked for commit access). For now, I managed to add a macOS and a Windows build and added links in the post above.


I would love for the Value input to stay, if that’s not too much of an implementation issue.
Maybe the mode dropdown could be kept, but when it is set to Luma, the Value input is hidden.

And maybe it should be renamed to something like “Custom”?
A similar kind of input exists with the Bilateral Blur node, where the second image is called “Determinator”. But I don’t think that’s a good name either.


Thanks for the feedback. Can you explain the use case for this? Note that you can already pass a 1-channel image as input, so passing the depth pass directly is valid.


I don’t think I understand.
As I see it now, the node only has one single input.

Use cases could be as follows:

  • Using depth / normals / an ID mask to decide where to blur.
  • Applying the anti-aliasing after some color correction (i.e. using the image before color correction to determine where to blur).
  • Having the same anti-aliasing applied to multiple passes of the same render.

Ok I think I understand what you mean, thanks.

I don’t think introducing a mask will improve results. The general idea of the implemented paper is to blur visible edges after classifying them into patterns. For instance, the two following edge patterns would get blurred differently:

pattern 1:

       |
.------´
|

pattern 2:

        |
        .----
      .-´
    .-´
--.-´
  |
  |

Now if you want to detect edges on an image but blur after some processing of the original image, e.g. color or distortion correction, then

  1. you might end up blurring edges that are no longer there (thereby unnecessarily reducing image quality), or
  2. you might ignore edges that were not there before the extra processing step (thereby producing unwanted artefacts).

I must admit though, I’m concluding this from my understanding of the original paper, and not from actual renders. So if an input mask does improve results in practice, e.g. because depth edges always correspond to color edges in real world renders, then of course I can implement it.
I can separate the current anti-aliasing node into an edge detector node and a blurring node to play around with, in case you are still convinced this is the way to go.

But I thought you had already implemented this?
I.e. that’s what the value input did earlier.

And while the paper recommends edge detection using Luma, their implementation explicitly provides the option to use depth instead.
This could be useful in order to avoid blurring textures.
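For illustration, the difference between luma- and depth-based detection on a textured but flat surface can be shown with a one-dimensional toy example (a sketch with made-up sample values, not the paper's actual predication code):

```python
def is_edge(a, b, threshold=0.1):
    # Generic 1D discontinuity test between two neighbouring samples.
    return abs(a - b) > threshold

# A textured but geometrically flat surface: luma varies per texel,
# while depth stays constant across the whole row.
luma_row = [0.2, 0.8, 0.2, 0.8]
depth_row = [0.5, 0.5, 0.5, 0.5]

luma_edges = [is_edge(luma_row[i], luma_row[i - 1]) for i in range(1, 4)]
depth_edges = [is_edge(depth_row[i], depth_row[i - 1]) for i in range(1, 4)]
```

Luma detection flags every texel transition (and would blur the texture), while depth detection flags none of them, which is the "avoid blurring textures" argument.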

This tells me the authors tried different types of input and decided luma is the best. So why should we keep offering the second-best option?

Yes, a previous version offered Value as input (which doesn’t come from the authors, by the way, but rather from another Blender developer), and I was basically arguing we don’t need it, based on my understanding of the paper.

I’ll upload a build with “Value” as input. If it does indeed produce better results in some cases, then I’ll be happy to include it in the final design.

I tried the recent build with the unified interface, and I like it more than the previous one. In my typical use cases I wouldn’t need that extra Value input, so I cannot comment on the fact that it is now missing.
As a user I wonder, though, why it needs three different value ranges: Threshold between 0 and 0.5, Local Contrast between 1 and 10, and Corner Rounding between 0 and 100. That’s kind of confusing. Can Threshold and Corner Rounding not be mapped to values between 0 and 1? Or maybe all of them?


Thanks for the input! It’s a good point; these really are implementation-specific values, so we can map them all to values between 0 and 1 for further simplicity.
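A simple linear remap is enough to expose a unified 0–1 range while keeping the internal ranges mentioned above (0–0.5, 1–10, 0–100) unchanged. A sketch, not the patch's actual code:

```python
def remap01(ui_value, lo, hi):
    # Map a UI value in [0, 1] linearly onto the internal range [lo, hi].
    return lo + ui_value * (hi - lo)

# Hypothetical UI values mapped onto the internal ranges from the thread:
threshold = remap01(0.2, 0.0, 0.5)
local_contrast = remap01(0.5, 1.0, 10.0)
corner_rounding = remap01(0.25, 0.0, 100.0)
```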

I will update the interface by the end of the week.

Some updates:

  • Inputs are all between 0 and 1.
  • Kept 3 parameters for input: see the current version from the buildbot
  • The original version with all inputs is now available at GraphicAll

Please let me know what you think
