New Line Art proposal 2023, feedback?

Hi! It’s Yiming here again. I have written down a new proposal for improving line art, mostly just things I’m thinking about.

Basically, I’m exploring the possibility of turning line art into a set of nodes, to provide more flexibility in combining it with other modelling stages. I also have some thoughts on improving line art performance for manga/anime use cases.

I posted here mostly trying to get some eyes on the technical side of things, in case there’s anything obvious I’m missing. :sweat_smile: Thanks guys!

UPDATED 2023/03/18


Hello. I had a quick read, but I am more familiar with geometry nodes than with line art.
Do you see the benefit of this complexity mainly for implementing drawings, or do you consider it a way of doing parametric color fills?


A very important point to consider is that many artists use LineArt to generate a basic drawing that they then refine manually, so being able to bake the data and turn it into a “normal” grease pencil is essential.


The past year I’ve worked on a proposal for an NPR engine that features line rendering support:
[NPR · GitHub]
The project is not yet approved and the proposal is not final, but I think it is still worth discussing the approach proposed there.

IMO, it makes a lot of sense to integrate the line rendering system into the render engine itself, since it has several advantages:

  • Artists can have (per-pixel) control of the line stylization based on any shading feature (lighting, image and procedural textures…) and use the material nodes they already know.
  • Line rendering can be aware of renderer-side features, like masked transparency or vertex displacement.
  • It can be computed purely on the GPU, and can easily make use of the mesh loading and caching systems that are already in-place for rendering, so it’s much easier to get more optimal performance.
    For reference, it takes around 10 ms on a 3060 Ti to render geometry-based contour lines for 9 million triangles.
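As a sketch of what the standard geometry-based contour test looks like (illustrative code only, not necessarily this engine’s exact implementation; all names here are made up), an edge is a contour when one of its two adjacent faces points toward the camera and the other points away:

```python
import numpy as np

def contour_edges(face_normals, edge_faces, face_centers, camera_pos):
    """Mark edges whose two adjacent faces face opposite ways relative
    to the camera (the classic geometry-based contour test).

    face_normals: (F, 3) unit normals per face
    edge_faces:   (E, 2) indices of the two faces adjacent to each edge
    face_centers: (F, 3) a point on each face, used to build view vectors
    camera_pos:   (3,) camera position in the same space
    """
    view = face_centers - camera_pos                     # per-face view vectors
    facing = np.einsum('ij,ij->i', face_normals, view)   # signed facing term
    f0, f1 = edge_faces[:, 0], edge_faces[:, 1]
    # A contour edge separates a front-facing face from a back-facing one,
    # so the two facing terms have opposite signs.
    return facing[f0] * facing[f1] < 0.0
```

On the GPU this test runs per edge in a shader, but the math is the same.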

The main disadvantage of this approach compared to LineArt is that it doesn’t support chaining-based stylization.
However, it may be possible to combine this approach with the chaining and stroke extraction from WangZiWei-Jiang’s algorithm to get the best of both worlds.

That said, temporally stable chaining is still a very tricky problem, so I think it is worth exploring other stylization options (like screen-space filters or Photoshop-style stamp-based strokes), reserving chaining for cases where it’s absolutely necessary, like conversion to GP strokes.


I strongly agree with this point. One of the main advantages we have right now is the ability to edit strokes in post. I think this is a major advantage we have over other software or methods that produce lines from 3D geometry. I don’t think we should abandon this; I think we should leverage it even more.

GPU-based methods absolutely do have a place in the pipeline, especially if the artist just wants to throw huge amounts of geometry into their scene and doesn’t care too much about line quality.

However, GPU depth-buffer-based methods will have issues with smaller details, so if quality is very important we should also provide slower but higher-quality geometry-based methods.

The main performance issue we have with LineArt currently is the construction of the occlusion-level lookup table. Fluid sims and other physics simulations use very similar acceleration structures, so I think we can make this much faster than what we currently have (and perhaps even make it run on the GPU).
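For illustration (a toy Python sketch of the general idea, not LineArt’s actual data structure), a uniform screen-space grid like the ones used in fluid sims lets an occlusion query test only the triangles whose bounding boxes overlap the query cell, instead of the whole scene:

```python
from collections import defaultdict

def build_screen_grid(triangles_2d, cell_size):
    """Bucket projected triangles into a uniform screen-space grid.

    triangles_2d: list of three (x, y) tuples per triangle (projected)
    cell_size:    grid cell edge length in screen units
    """
    grid = defaultdict(list)
    for tri_index, tri in enumerate(triangles_2d):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        # Conservatively insert the triangle into every cell its
        # bounding box overlaps.
        for cx in range(int(min(xs) // cell_size), int(max(xs) // cell_size) + 1):
            for cy in range(int(min(ys) // cell_size), int(max(ys) // cell_size) + 1):
                grid[(cx, cy)].append(tri_index)
    return grid

def candidates_at(grid, point, cell_size):
    """Triangles that might occlude a point: a single cell lookup."""
    return grid.get((int(point[0] // cell_size), int(point[1] // cell_size)), [])
```

Each cell insert and lookup is independent, which is also what makes this kind of structure a good fit for a GPU port.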

There are some quite old papers around that seem to tackle this issue quite nicely:

I think if we finish up my “Smooth Contour” Blender implementation in addition to the above paper, we will have very high quality and temporally coherent strokes.
In addition to that, the technology used in the smooth contour code could in theory be used to create fill shapes from 3D geometry as it can accurately cut out and flatten areas of the mesh. That is, it could be possible to take a 3D model with textures and create a flat 2D cutout of it that could be pure grease pencil.


With talks of potentially another way to do NPR things, what’s the plan for Freestyle? This question often comes up, including somewhat recently.

The Freestyle code is unmaintained and all of the recent commits there (many dozens of them) are fallout from cleanups/changes in other areas. Meaning Freestyle is currently a tax.


  • Is LineArt meant to eventually replace Freestyle?
  • If so, what features are missing in order to do so[0]? And can they be done for 4.0?
  • If not, what’s the fate of Freestyle?

I don’t have much skin in this game[1] but it would be good for the Grease Pencil/Line Art module to have a stance on the matter with 4.0 on the horizon.

[0] Somewhat recent BA thread: Is Freestyle still useful? - Blender Development Discussion - Blender Artists Community
[1] I’ve mainly used Freestyle to render proper Wireframe lines on subd models. I’ve not been able to get LineArt to produce nice looking lines for similar models but maybe that’s user error.


Hi! Line art doesn’t generate enclosed shapes, so it probably won’t touch on the “filling” part, but rather provide more flexibility in tuning strokes.

Yeah, well, I think if we implement this as a “node modifier”, then naturally it’s gonna support baking, right?

Hi, I think we also talked briefly about this implementation before. The image-space detection is indeed fast (well, mostly depending on how big your image is). Overall, if the goal is not a precise vector result, I believe this kind of algorithm is a much better solution than Line Art (which was initially developed for me to do mechanical drawings), especially for manga/comics-style subdiv-heavy scenes.

I’m not sure how you do the geometry-space line detection, presumably with an additional adjacency table texture? The old LANPR GPU mode does that, and it’s indeed very fast when used together with depth occlusion with a little bit of bias, so if we don’t need occlusion-level info, this can be a very good solution.

The disadvantage of this render-engine type of approach is obviously the lack of ability to edit strokes afterwards and shade/modify them with modifiers etc. (Curiously, I literally don’t do any editing, but for other artists things might be very different.)

Ehhh? Apparently you could read the detected edges back and chain them, or WangZiWei-Jiang’s algorithm claims to do all that on the GPU, so we could take advantage of that and chain them.

The good thing about pragma37’s algorithm is that it has image-space samples, which means it’s easier to chain into a temporally coherent result, but I believe you need to add additional information to each sample point to register e.g. time and intermediate interpolation factors. I’m still not quite familiar with the Wang-Jiang algorithm as to how they serialize the chain on the GPU… the temporal coherency part might need more work to work with that algorithm.

Isn’t it done already? lol

I think due to the uncertainty of the geometry, the best way to get this sort of “cut out” shape is to render it in raster as a mask and use potrace (which Blender already comes with) to trace it back into a vector shape.
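To illustrate the first half of that idea (a toy sketch only; potrace itself then walks these boundaries and fits smooth vector curves to them), finding the boundary pixels of a binary mask is the starting point of any tracer:

```python
def mask_boundary(mask):
    """Find the boundary pixels of a binary mask: filled pixels with at
    least one empty 4-neighbour. A tracer like potrace walks these
    boundaries and fits vector curves to them.

    mask: 2D list of 0/1 values.
    """
    h, w = len(mask), len(mask[0])
    boundary = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            # Pixels outside the image count as empty.
            if any(not (0 <= ny < h and 0 <= nx < w and mask[ny][nx])
                   for ny, nx in neighbours):
                boundary.append((x, y))
    return boundary
```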

Yeah, sort of… It’s meant to be faster and more stable in a lot of cases where the view map in Freestyle fails to produce a usable result.

I guess it’s not impossible… But depends on the complexity and our time :sweat_smile:

I think another big issue is that GPencil currently doesn’t have very good anti-aliasing, so if you render at the regular size it’s gonna look really bad. If you look at some of my stuff with line art, I think it looks reasonably fine, but I need to render huge and scale back down for the moment. I believe the new GP will fix this.


Totally agree with this point!


I think equating GPU-based computation to lower render quality is wrong.

The image from my previous post uses geometry-based contour edges; the quality is just the same as any other renderer that uses the same method.

While I get that it may turn out to be best in the end to have two separate systems (GP LineArt and engine-side line rendering), I think it’s worth looking at how these systems would overlap, and trying to avoid duplication and/or ending up with design decisions for one limiting the possibilities of the other.

Even the fixed examples show some flicker and sliding issues, and those are still fairly simple examples.
I think it’s still worth leaving room for exploring other stylization methods.

It’s simply a separate mesh rasterized in line mode.
It has a different index buffer than the main mesh and an extra buffer with per-edge data.
Occlusion is handled by regular hardware depth testing against the engine depth pre-pass.
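A minimal sketch of the kind of preprocessing that produces such a per-edge buffer (illustrative code, not the engine’s actual data layout): walk the triangle index buffer and record which faces share each edge, so the line pass can store, say, the two adjacent face normals per edge:

```python
def build_edge_adjacency(tri_indices):
    """Map each unique edge to the (up to two) faces that share it.

    tri_indices: flat list of vertex indices, 3 per triangle.
    Returns {(v_min, v_max): [face_index, ...]}.
    """
    edge_faces = {}
    for face in range(len(tri_indices) // 3):
        a, b, c = tri_indices[3 * face:3 * face + 3]
        for v0, v1 in ((a, b), (b, c), (c, a)):
            key = (min(v0, v1), max(v0, v1))  # direction-independent key
            edge_faces.setdefault(key, []).append(face)
    return edge_faces
```

Edges with only one adjacent face are mesh borders, which most line systems also want to draw.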

For stroke editing I’d say this is a non-issue, because we can always bake the result out of the GPU, and the extra milliseconds it takes should be fine for a one-time operation.
For GP modifiers it could indeed be a performance issue. But so would any other GPU-to-CPU method.

On the other hand, it opens the door to shade/modify lines with material nodes and screen-space effects.

According to him, the later parts of his algorithm (contour chaining and stroke extraction) only require a list of the line pixels and their normal/tangent.

So in theory this could also open the door for chaining working with non-geometry-based lines.
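As a toy illustration of what chaining does (distance-only greedy linking; the real algorithms also use the per-pixel normal/tangent mentioned above, and run in parallel on the GPU):

```python
def chain_pixels(pixels, max_gap=1.5):
    """Greedily chain detected line pixels into polyline strokes:
    repeatedly extend the current stroke with the nearest unused pixel,
    starting a new stroke whenever the gap gets too large.

    pixels: list of (x, y) sample positions.
    """
    remaining = list(pixels)
    strokes = []
    while remaining:
        stroke = [remaining.pop(0)]
        while remaining:
            tip = stroke[-1]
            # Nearest remaining pixel to the stroke's tip.
            best = min(remaining,
                       key=lambda p: (p[0] - tip[0]) ** 2 + (p[1] - tip[1]) ** 2)
            if (best[0] - tip[0]) ** 2 + (best[1] - tip[1]) ** 2 > max_gap ** 2:
                break  # gap too large: start a new stroke here
            remaining.remove(best)
            stroke.append(best)
        strokes.append(stroke)
    return strokes
```

Nothing in this sketch cares where the pixels came from, which is the point: it works the same for geometry-based and image-space line sources.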

Do you have in mind any specific improvements?


Although I’m still kind of an outsider to this community, I want to share some of my comments and thoughts on line art and NPR rendering here.

I remember we had a detailed discussion on this before;
My GPU chaining algorithm can extract strokes from the line pixels as long as the rasterized lines have a minimum width (about 1~4 px); with chaining we can build advanced stylization with full control:

GPU-based methods dominate in performance, since rendering primitives is parallel in nature; the exception was stroke chaining, which was a serial process but has now been efficiently parallelized by my algorithm. Here is the performance comparison, and I believe it speaks for itself.

I have to say that the CPU algorithms are not evidently “high quality.” If you do extensive experiments and surveys, the actual situation is much more complicated.
When the mesh gets dense and complicated, geometry-based algorithms suffer from various problems: zig-zags, broken contour chains, inaccurate visibility, etc.

Actually, my GPU-based algorithm can extract strokes with comparable or even better structure than geometry-based ones; here is the comparison for meshes with different geometrical & topological complexity for Freestyle, LineArt, Pencil+4, and my algorithm:

FreeStyle - Strokes highly fragmented, wrong topo everywhere

LineArt (geometry-based) - Better, but still fragmented

Pencil+4 for Unity - The best in existing geo-based systems

My GPU-based algorithm - Comparable to Pencil+4

What’s more, when strokes are generated in image space (usually using the depth buffer for the visibility test), the strokes naturally fade out as distance increases, which provides a natural LOD for the lines.

How about the chaining process? When I made the comparison for my research paper, I remember it took LineArt a long time when there are a lot of edges/pixels to link. What’s more, you need to upload a big VBO to the GPU each frame, which can cause some overhead.

Sadly, the old paper you mentioned is bad; they only picked an example that luckily works. See the video below, starting from 1:04:

I’ve extensively researched the subject of stroke temporal coherence and found it a very tricky problem. It’s similar to SLAM: reconstruction and tracking simultaneously. Its difficulty also depends on the type of stylization you want to apply; the hardest part is maintaining persistent arc-length parameterization and topology for strokes.
Among publicly available algorithms (with provided code) in this area, I would say Active Strokes by Bénard et al. is the best. However, that algorithm cannot maintain correct stroke topology across frames. Inspired by this algorithm and by advances in modern GPU simulation and optimization, I came up with a solution. However, it’s not finished yet, and I want a new platform to further test & develop it.

Here is what I have so far, fully GPU-based and real-time: strokes change color when the topology changes, and the vertices are persistently tracked to keep stroke attributes (width, color, texture UV, etc.). High framerate & resolution are required to achieve proper temporal coherence, which is fine since the algorithm is fast.

I have yet to do a practical analysis of the smooth contour algorithm. Still, judging from the paper (written when Bénard was at Pixar) and the presentation, it will be a solid solution for offline, accurate contour rendering from subdiv meshes; it may even be coupled with a GPU-based realtime algorithm.
Also, I noticed that Pierre Bénard has a research project dedicated to this field, and they have a simpler and faster algorithm derived from their work on smooth contours; sadly it is still offline and only applies to subdiv meshes…


Indeed, the sample-chain method is gonna be much more robust, and this is also what I think would be better for artistic renderings. However, the chaining algorithm can sometimes be tricked by densely packed “feature” pixels, leading to weird results (e.g. from a big flat face that’s almost perpendicular to the camera). But I think with some pixel tricks we can sort this out… e.g. a derivative pass on normal/curvature/depth.

From the paper’s demonstration, the performance is impressive. I’m not sure how much it would slow down if you rendered very big, and/or, if the GPU is tiling the render, whether there would be stitches in those chains, which would show up as a visual “line” across the image. For comparison, I rendered a line art output at 15000x10000… which went just fine, because the performance is not resolution-sensitive. (That can also be a downside if you are rendering small; pixel-based methods generally handle LOD naturally.)

The benefit of the current Line Art algorithm is that the entire pipeline is vector (which naturally allows editing), and it does layered occlusion, so it’s possible to have controlled see-through. For other usages, I don’t see why we shouldn’t give raster-based algorithms a go.

The temporal coherency in @WangZiWei_Jiang’s algorithm is also very good.

The only thing I think is prohibiting the implementation of this paper in Blender is that the GPU_ module doesn’t support compute shaders, and I think that’s a compatibility choice, since things work rather differently on a lot of different hardware. Even on my two computers, the GL driver behaves radically differently just in using memoryBarrier() and framebuffer fetch. The algorithm may require shader extensions that some not-too-old hardware doesn’t even have. I hope this doesn’t stop it from being implemented, but whether it can be integrated into master will depend on how we move forward with the GL/Vulkan requirement.


Wow! Thanks everyone for your thorough responses!
This makes me really happy as it shows we have a lot of highly invested people involved.
Which is great because then we can work together to bring Blender NPR rendering to the next level.

Sorry if I sounded a bit dismissive before, but on the other hand it seems like that led to a nice discussion where we went over the different methods available. :wink:

I’m a bit swamped with work right now so I’ll try to respond properly when I get the time to do so.
Or perhaps it would be better to do a video meeting so we can iterate quicker on the different ideas and issues we have?


That’s no longer the case! :grinning:
EEVEE Next and the new Draw Manager make extensive use of compute shaders; IIRC the minimum required GL version will be 4.3.

Sounds good to me. :+1:


:exploding_head: GREAT NEWS!!!

@pragma37 @WangZiWei_Jiang perhaps we can discuss this in the Grease Pencil meeting tomorrow?
@ChengduLittleA suggested this to me, and I think that might be a good time and place to do it. Do you guys agree?

I’m ok with it, but isn’t 4 pm (CET) quite late in China?

I’m afraid I cannot join the meeting if it’s 4 pm (CET); not sure about @ChengduLittleA, but he also lives in China.
I think @pragma37 should have a certain level of knowledge about my algorithm, and I have explained most of my thoughts in this thread. So it should be fine if I’m not joining the meeting.
If this new proposal thing is going to happen and my work is considered in the proposal, you can reach me via blenderchat or in this thread.
My major concern is how I could join the development. Since the GPU-related stuff is quite heavy, I think it might be better for me to prepare that code. But I’m currently an outsider and cannot contribute to the official git repo/branches.


That is true, it is very late over in China.
Perhaps we can do it earlier, like 12:00 CET? That should be around 19:00 local time in China.
I would like to have all interested parties present so we can properly talk things over.
If we manage to come up with something, I can talk it over with the other GP people in the later meeting.


I’m ok with 12:00 CET, but I might be busy today; maybe we can delay it to tomorrow?

Since I think this will be a longer discussion, I suggest scheduling a separate meeting for this topic. I prefer to keep the module meeting short and to the point :slight_smile: