Hi, I implemented a new compositor node, “Kuwahara filter”, that I want to get user feedback on. The node lets you create a stylized image from a realistic one.
Note: only the full-frame compositor is supported for now. The full-frame compositor is an experimental feature. To activate it:
- Go to Edit → Preferences → Experimental → Enable “Full Frame Compositor”
- Then go to Compositor → Options and set Execution Mode to Full Frame:
Try it out here: Blender Builds - blender.org
Feedback on the following points would be nice:
- Is the purpose of the filter clear?
- Without reading elsewhere, is the effect of the parameters “kernel size” and “sigma” clear?
- Do both variations (classic and anisotropic) feel fast enough?
Patchset for technical discussion: #107015 - WIP: Compositor: add new node: Kuwahara filter - blender - Blender Projects
Hi! I think this filter will be familiar to anyone who has used an image editor; it’s usually presented as a way to denoise. I haven’t seen “kernel size” used anywhere else in Blender. The compositor’s various blur nodes just call it “size”, but the meaning may be more obvious in the case of blurring than it is here. I think “size” might be enough.
The same goes for “sigma”; I’m not sure the term is very meaningful to the average user, even though they can consult the docs. I would go for something that describes the effect, which you already did in the tooltip description. Perhaps “Smoothing”? Edit: or “Strength”?
In any case, the build doesn’t work for me. The node is a no-op and doesn’t update on frame change.
It doesn’t work for me either.
(Also, I suppose this does not support the realtime compositor, but trying to use it there leads to a crash.)
Thanks for testing!
I forgot to mention that the node is only implemented for the full-frame compositor. I will add a note on the node and prevent it from crashing when used with the viewport compositor.
Makes sense, I will consider renaming the parameters.
I see it’s working with full-frame compositor!
I think the color of the node is inconsistent: other filter nodes are dark purple, but this one is yellow like the color nodes.
Just curious: is there a particular reason the soft minimum value for size is set to 4? I can see that 4 would be a good default, but the soft minimum could be 1 in my opinion. I imagine values 1–3 have many use cases (a subtler stylized look, for example), and putting them behind a soft limit makes them less discoverable.
Thanks for working on this.
I’m on Linux so I can’t try it yet, but it’s a filter I’ve been waiting on for years!
Thank you for working on it! Can’t wait to try it.
You can download the current version here: Blender Builds - blender.org
- Changed parameter naming and defaults
- Made it clear the viewport compositor is not supported
- Built packages for Windows, macOS and Linux
- Made the node part of the filter category, so its color is now purple
Since the kernel size is an absolute value in pixels, its effect is highly dependent on the size of the input image. For 1k–4k images the value 1 has almost no effect. That’s why I chose 4.
Another reason is that the anisotropic variation is only free of visible artefacts for values >= 4 (ideally >= 8, actually). I also adjusted the default for smoothing accordingly; its effect becomes noticeable starting from a value of 2.
I don’t think “strength” reflects the effect well. Kernel size (now called “size”) is the value actually affecting strength. Sigma (now called “smoothing”) only influences edges, not how “stylized” the image looks. So I went with the user-centric name “smoothing” : )
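To make the pixel semantics of “size” concrete, here is a rough NumPy sketch of the classic variation (illustrative only, not the actual Blender implementation; the function name and exact window layout are my own):

```python
import numpy as np

def kuwahara_classic(img, size=4):
    """Classic Kuwahara filter on a 2D grayscale float image.

    `size` is an absolute quadrant radius in pixels: each of the four
    overlapping quadrants around a pixel is (size+1) x (size+1), which
    is why small values barely do anything on high-resolution images.
    Illustrative sketch only, not Blender's implementation.
    """
    pad = np.pad(img, size, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            cy, cx = y + size, x + size  # center in padded coordinates
            # four overlapping quadrants around the center pixel
            quads = [
                pad[cy - size:cy + 1, cx - size:cx + 1],
                pad[cy - size:cy + 1, cx:cx + size + 1],
                pad[cy:cy + size + 1, cx - size:cx + 1],
                pad[cy:cy + size + 1, cx:cx + size + 1],
            ]
            # output the mean of the least-varying quadrant
            variances = [q.var() for q in quads]
            out[y, x] = quads[int(np.argmin(variances))].mean()
    return out
```

On a hard edge, the winning quadrant always lies entirely on one side of the edge, so edges stay sharp while noise in flat regions is averaged away — which is why the result reads as “stylized” rather than blurred.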
Still no dice for me with this new build. Is there anything needed besides activating the full-frame compositor in the experimental settings?
You will also need to set it to the active compositor:
Go to Compositor → Options and set Execution Mode to Full Frame:
Sorry for the confusion, I updated the description of the first post.
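For anyone who prefers to flip the setting from a script, the execution mode should also be reachable through the Python API (a sketch assuming a Blender 3.x build with the experimental preference already enabled; I haven’t listed every version this works in):

```python
import bpy

# Assumes Blender 3.x with the experimental "Full Frame Compositor"
# preference enabled; `execution_mode` is an enum on the compositing
# node tree.
scene = bpy.context.scene
scene.use_nodes = True
scene.node_tree.execution_mode = 'FULL_FRAME'  # the other value is 'TILED'
```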
Classic mode works fine but Anisotropic gives me this result.
Anisotropic size 4 smoothing 2
The image sequence is PNG 1920×1080. I’m not sure if there is a way to log the execution time, but it updates on frame change in about 0.6–0.7 s on this machine (GTX 970).
Can you try with a filter size >= 8? Also, your test image is already smooth and stylized, so the effect probably won’t be obvious…
The glitch is gone at 8:
Anisotropic size 8 smoothing 2
but it takes 18s to render
Ok, thanks for testing!
I have performance optimization on my todo list; might have to do it sooner, though.
I see artifacts all around the image borders as well, perhaps that could be improved.
Here is the current build: Blender Builds - blender.org
- Solved the issue with black pixel artefacts. Please let me know if this solves the issue for your images as well.
- Solved the issue with artefacts at the image border
- Implemented the filter for the tiled compositor. Note: this is much slower than the full-frame compositor. For testing, please use the full-frame compositor.
- Changed the meaning of the “size” parameter. You should see significant changes, without any artefacts, starting from size = 1
- Kuwahara filter is now in main
- Classic variation is now about 3.5x faster
- Anisotropic variation is now 4x faster. However, results are less accurate, and this is what this post is about.
The patch #108796 speeds up the anisotropic variation significantly, but results are less accurate. We would like to ask for artist feedback regarding the significance of the difference.
Following images show how the new patch affects results (left: original, middle: fast and inaccurate, right: slow and accurate):
Full render side by side (left: fast and inaccurate, right: slow and accurate):
You can try out the latest version with the speedup yourself here:
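As background on how this class of speedup typically works (a general illustration of the technique, not the actual patch code): windowed sums over a filter region can be read in constant time per pixel from a precomputed summed-area table, instead of looping over the window, at the cost of some accuracy. A minimal sketch:

```python
import numpy as np

def window_mean_sat(img, size):
    """Mean over a (2*size+1)^2 window at every pixel of a 2D float
    image, computed via a summed-area table: O(1) work per pixel
    regardless of window size. Illustrative sketch only, not the
    actual Blender patch code."""
    pad = np.pad(img, size, mode='edge')
    # summed-area table with a zero row/column prepended
    sat = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    sat[1:, 1:] = pad.cumsum(0).cumsum(1)
    k = 2 * size + 1
    # sum of each k x k window from four table lookups
    total = sat[k:, k:] - sat[:-k, k:] - sat[k:, :-k] + sat[:-k, :-k]
    return total / (k * k)
```

The trade-off: prefix sums accumulate floating-point error on large images, and only axis-aligned rectangular windows can be expressed this way, whereas the accurate anisotropic variation weights rotated, elliptical regions — one plausible reason approximations can show artifacts along edges.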
I tried it in a somewhat tricky situation. Maybe a bit of an extreme case where one would not normally use the Kuwahara filter.
Used default values (size: 4, smoothing: 2).
Objects have more obvious color differences. The iron bars and their shadows cross, making a grid-like pattern.
The fast one shows more artifacts along the edges. Even the most obvious edge (where the cage’s red base and the floor meet) is blurred.
Render with Classic mode for comparison:
I imagine the fast one could be trickier to use for animation rendering in some situations, due to more instability between frames.
That said, I definitely see the roughly 4× speedup you mentioned.
Hello! First of all, let me congratulate you on getting this into main. Thanks for your awesome work and for adding this long-awaited node!
Here’s my two cents on the speed vs accuracy, coming from a studio perspective that specializes in stylization in Blender:
It’s true that the “accurate” version can run prohibitively long at times, to the point that it almost hinders the artistic process of iterating over settings to adjust the node tree. (It’s also a limitation of the compositor, which has no caching, meaning any node tree containing the Kuwahara filter basically grinds to a halt at every variable change.) That being said, the results are very promising, great in most cases, and stable enough for animation.
The fast version enables faster tweaking, which is great for the compositors, but as @sunkper demonstrated, the artifacts generated in some cases (and the heightened flicker during animation) unfortunately render the shots unacceptable from a quality standpoint, making the filter unusable in some demanding production settings.
Ideally, I would say the best option is to have a “preview” toggle, as with the Defocus node, which switches between the fast and accurate algorithms:
However, I understand that this might not be the most optimal option for you as you’d have to maintain and update both implementations.
If I really had to pick one, I would choose the accurate version, even if it means much slower performance (and keep my fingers crossed for performance updates and, more importantly, the GPU implementation, which could resolve all the performance issues!)
Thanks for adding this filter. The Kuwahara Anisotropic mode seems to be insanely slow with 4k or high-res images.
The classic mode takes about 1 second; the anisotropic mode is more like 30+ seconds with a 4k×3k 16-bit image.
Does anisotropic mode use polynomial optimization?