Canny Edge Detector

A duplicate of https://developer.blender.org/D4105, posted here to poke the devs and to collect feedback.


I made a compositor node for Canny edge detection.
It is placed in the Filter menu.

This can be useful to produce pictures such as the following.

The implementation is basically the same as the algorithm described in the following tutorial.
https://docs.opencv.org/3.4/da/d22/tutorial_py_canny.html

However, there are 3 differences.

  1. No Noise Reduction
  2. High/Low Thresholds can be determined for each pixel
  3. Constant Width

1. No Noise Reduction

Compared to real photographs, CG images are mostly free of noise (especially with EEVEE).
In addition, Blender already has plenty of noise reduction filters.
This is why I decided to omit the noise reduction step.

2. High/Low Thresholds can be determined for each pixel

The high/low thresholds are 0.2/0.1 by default, following the defaults of the Python skimage implementation.
In the original algorithm, the thresholds are global across the whole image. However, since the algorithm does not strictly require shared values, I decided to allow socket inputs.
Since the thresholds determine the sensitivity, one can detect edges densely in some areas while leaving other areas sparse.
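
To make this concrete, here is a minimal NumPy sketch of hysteresis thresholding with per-pixel low/high maps (the function name and interface are my own illustration, not the patch code):

```python
import numpy as np

def hysteresis(magnitude, low, high):
    """Hysteresis thresholding with per-pixel low/high threshold maps.

    magnitude, low, high: 2D float arrays of the same shape,
    with low <= high at every pixel.  Returns a boolean edge mask.
    """
    strong = magnitude >= high   # definite edge pixels
    weak = magnitude >= low      # candidate edge pixels
    edges = strong.copy()
    # Grow the strong set through 8-connected weak pixels
    # until it stops changing.
    while True:
        grown = edges.copy()
        grown[1:, :] |= edges[:-1, :]    # from above
        grown[:-1, :] |= edges[1:, :]    # from below
        grown[:, 1:] |= edges[:, :-1]    # from the left
        grown[:, :-1] |= edges[:, 1:]    # from the right
        grown[1:, 1:] |= edges[:-1, :-1]     # diagonals
        grown[:-1, :-1] |= edges[1:, 1:]
        grown[1:, :-1] |= edges[:-1, 1:]
        grown[:-1, 1:] |= edges[1:, :-1]
        grown &= weak            # never grow outside the weak set
        if np.array_equal(grown, edges):
            return edges
        edges = grown
```

With scalar low/high values this reduces to the usual global hysteresis; feeding image-sized maps through the sockets is what gives the per-pixel sensitivity described above.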

3. Constant Width

Constant Width Off (Original algorithm)

VS

Constant Width On

When Constant Width is on, the gradient angles used for non-maximum suppression are rounded to 0 or 90 degrees (originally, to one of 0, 45, 90, or 135 degrees).

I personally prefer Constant Width on, but I am not quite sure what side effects it causes. (I couldn't find many differences in the detected edges during my tests; however, there should be a good reason why the original algorithm works this way.)
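
For reference, this is where the rounding takes effect as I understand it; a NumPy sketch of non-maximum suppression with quantized angles (names and sign conventions are my own illustration, and the diagonal offsets depend on the gradient operator used):

```python
import numpy as np

def non_max_suppression(gx, gy, constant_width=False):
    """Thin the gradient magnitude to 1px ridges.

    gx, gy: 2D gradient components.  With constant_width=True the
    gradient angle is rounded to 0 or 90 degrees only, so every
    pixel is compared against an axis-aligned neighbour pair.
    """
    mag = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    step = 90.0 if constant_width else 45.0
    sector = (np.round(angle / step) * step) % 180.0

    # Neighbour offsets (dy, dx) along each quantized gradient
    # direction; we compare across the edge, on both sides.
    offsets = {0.0: (0, 1), 45.0: (1, 1), 90.0: (1, 0), 135.0: (1, -1)}

    out = np.zeros_like(mag)
    p = np.pad(mag, 1)
    h, w = mag.shape
    for d, (dy, dx) in offsets.items():
        n1 = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        n2 = p[1 - dy:1 - dy + h, 1 - dx:1 - dx + w]
        keep = (sector == d) & (mag >= n1) & (mag >= n2)
        out[keep] = mag[keep]
    return out
```

Chained after a gradient pass and before the hysteresis sketch above, this is the standard Canny pipeline minus the Gaussian noise-reduction step, matching the description in point 1.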

There are several other changes in the actual calculation for speed-up, but the results should be the same.

A demo file is available at https://developer.blender.org/D4105.


It would help if you explained which problems this is intended to solve and what the use cases for it are, to motivate why it should be included.

This is a textbook computer vision algorithm, but it’s not obviously the right method to find edges for artistic purposes.

I believe this would be helpful for adding edges with a constant width.

Currently, if you want to add edges in the compositor, the usual approach is something based on the Sobel filter. However, AFAIK it is very difficult to make the edge width constant with the current node set.

Canny edge detection yields a very reasonable one-pixel edge, which the user may expand if needed.
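
To illustrate the width problem, here is a toy NumPy Sobel pass (my own illustration, not a Blender node): the response band is as wide as the intensity ramp, so thresholding it gives edges of varying width.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with 3x3 Sobel kernels (edge padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    gx = sum(kx[i, j] * p[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * p[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

# A soft 2px ramp from 0 to 1: the Sobel response exceeds the
# threshold over 4 columns, i.e. the "edge" is 4 pixels wide,
# while a hard step would give 2 and Canny's thinning gives 1.
img = np.zeros((5, 9))
img[:, 3] = 0.33
img[:, 4] = 0.66
img[:, 5:] = 1.0
print((sobel_magnitude(img)[2] > 0.5).sum())  # -> 4
```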

I mean at a higher level. Is this for adding toon edges, or another use case?

For toon edges, a more typical method would be based on the Z-buffer. This way you get edges between different objects even if they happen to have the same color. You’d also want options to anti-alias the edges and give them a certain width, for better usability.
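
For context, the core of a Z-buffer based edge pass can be as simple as thresholding depth discontinuities; a hedged NumPy sketch (the flat threshold is my own simplification, a real setup would scale it with depth):

```python
import numpy as np

def depth_edges(z, threshold=0.1):
    """Mark pixels whose depth jumps by more than `threshold`
    relative to any of the 4 direct neighbours.  This catches
    silhouettes between different objects even when they happen
    to have the same color."""
    p = np.pad(z, 1, mode='edge')
    diff = np.maximum.reduce([
        np.abs(p[1:-1, 1:-1] - p[:-2, 1:-1]),   # up
        np.abs(p[1:-1, 1:-1] - p[2:, 1:-1]),    # down
        np.abs(p[1:-1, 1:-1] - p[1:-1, :-2]),   # left
        np.abs(p[1:-1, 1:-1] - p[1:-1, 2:]),    # right
    ])
    return diff > threshold
```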

The compositor is also intended to be scale independent as much as possible (though not everything is at the moment), so you can render at different resolutions and get expected results. A fixed 1px result is not ideal for that, but it depends on the use case.

An algorithm that multithreads well is also helpful, both to keep working with the compositor fast and to make GPU acceleration possible in the future.

I see the point. I have some ideas to try for anti-aliasing and greater line widths, so I'll test them soonish.

As for multithreading, I am not familiar with the multithreading infrastructure in Blender. Where should I look?

The compositor can do multi-threading by distributing work per pixel or per tile. So if the algorithm can be split up in such a way, it would fit in. You can look at existing compositing nodes that do something similar.

For toon rendering in general, it's good if algorithms only have a localized effect; you don't want a small change in one part of the image to affect something far away. If that is the case, the algorithm is parallelizable by definition.
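
To make the per-tile idea concrete, here is a sketch in Python of tiled processing with padded overlap (the tile size, thread pool, and helper names are illustrative, not Blender's scheduling API): as long as the filter's footprint fits inside the padding, the tiled result matches a full-image pass, and the tiles are independent work units.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def filter_tiled(img, filter_fn, tile=64, pad=8):
    """Apply filter_fn tile by tile with `pad` pixels of overlap.

    filter_fn must be local: each output pixel may only depend on
    input pixels within `pad` of it.  Tiles are then independent,
    so they can be processed on any number of threads.
    """
    h, w = img.shape
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)

    def run(y, x):
        y1, x1 = min(y + tile, h), min(x + tile, w)
        # Padded window around the tile, filtered, then cropped
        # back so the overlap is discarded.
        win = p[y:y1 + 2 * pad, x:x1 + 2 * pad]
        out[y:y1, x:x1] = filter_fn(win)[pad:pad + y1 - y,
                                         pad:pad + x1 - x]

    with ThreadPoolExecutor() as pool:
        jobs = [pool.submit(run, y, x)
                for y in range(0, h, tile)
                for x in range(0, w, tile)]
        for job in jobs:
            job.result()  # re-raise any worker exception
    return out
```

The gradient and non-maximum suppression passes of Canny fit this pattern with a small pad; the hysteresis/blob-tracking pass is the part that does not, since a weak edge chain can connect pixels arbitrarily far apart.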


I already understand that the compositor does multithreading per pixel. I would like to know how to do multithreading in general, since the blob analysis is the only part that does not support multithreading and is not based on a pixel-by-pixel algorithm.

We have a general task scheduling API like BLI_task for threading.

However, it's possible to do good and efficient edge detection with multiple passes, per pixel or per tile (with padding). Not necessarily Canny edge detection exactly, but at the moment I'm not convinced it's the right algorithm for toon rendering.
