New Line Art proposal 2023, feedback?

I’m ok with it, but isn’t 4 pm (CET) quite late in China?

I’m afraid I cannot join the meeting if it’s 4 pm (CET); not sure about @ChengduLittleA, but he also lives in China.
I think @pragma37 should have a certain level of knowledge about my algorithm, and I have explained most of my thoughts in this thread. So it should be fine if I’m not joining the meeting.
If this new proposal thing is going to happen, and my work is considered in the proposal, you can call me via blenderchat or in this thread.
My major concern is how I could join the development. Since the GPU-related stuff is quite heavy, I think it might be better for me to prepare that code. But I’m currently an outsider and cannot contribute to the official git repo/branches.

1 Like

That is true, it is very late over in China.
Perhaps we can do it earlier, like 12:00 CET? That should be around 19:00 local time in China.
I would like to have all interested parties present so we can properly talk things over.
If we manage to come up with something, I can talk it over with the other GP people in the later meeting.

1 Like

I’m ok with 12:00 CET, but I might be busy today; maybe we can delay to tomorrow?

Since I think this will be a longer discussion, I suggest scheduling a separate meeting for this topic. I prefer to keep the module meeting shorter and to the point :slight_smile:

I will not be available tomorrow.
If you can’t make it today, perhaps we can reschedule to Friday?

I’m ok with that. What about other people, @ChengduLittleA @pragma37?

Your response is ambiguous.
Is it “not today, let’s try Friday” or “let’s try today, and if it doesn’t work out, I’m up for Friday as well”?

Not today; I’ll be there on Friday.

Let’s do it on Friday then: 2023-03-10T11:00:00Z.
I’ll set up a meeting and post the link in the #grease-pencil-module channel.

1 Like

What you are talking about is more of a LOD thing: how many fragments (samples) are generated within a screen pixel. This is mainly determined by the image resolution; for example, the figure below compares the strokes extracted by Pencil+4 and my algorithm when the viewport resolution is not enough to support the outline samples:


I’d argue that in this case, purely geometry-based algorithms like Pencil+4 suffer, since the extracted strokes are complete but very dense and overlapping; they still need simplification to provide a natural LOD at the image level, which again leads back to image-based algorithms.
And that’s why I prefer a hybrid system: compute 3D lines from mesh edges, then rasterize them into pixels and link those pixels back into strokes. This lets us benefit from both the geometry- and image-based algorithms.
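As a rough sketch of that hybrid pipeline (hypothetical Python helpers, not actual Blender API): project the mesh edges to screen space, rasterize each 2D segment into pixels, then chain neighboring pixels back into strokes.

```python
# Illustrative sketch of the hybrid idea: 3D edges -> pixels -> strokes.
# All names here are hypothetical, not a real Blender API.

def rasterize_edge(p0, p1, width, height):
    """Sample a 2D segment into integer pixel coordinates (naive DDA)."""
    steps = max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]), 1)
    pixels = []
    for i in range(int(steps) + 1):
        t = i / steps
        x = round(p0[0] + (p1[0] - p0[0]) * t)
        y = round(p0[1] + (p1[1] - p0[1]) * t)
        if 0 <= x < width and 0 <= y < height:
            pixels.append((x, y))
    return pixels

def link_pixels_into_strokes(pixels):
    """Greedily chain 8-connected pixels into stroke point lists."""
    remaining = set(pixels)
    strokes = []
    while remaining:
        stroke = [remaining.pop()]
        grown = True
        while grown:
            grown = False
            x, y = stroke[-1]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    q = (x + dx, y + dy)
                    if q in remaining:
                        remaining.discard(q)
                        stroke.append(q)
                        grown = True
                        break
                if grown:
                    break
        strokes.append(stroke)
    return strokes
```

A real implementation would grow chains from both ends and resolve junctions, but this shows the basic image-to-vector linking step.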
For more details about my thoughts on geometry-based, image-based, and hybrid line art systems:

[quote=“ChengduLittleA, post:11, topic:28078, full:true”]

From the paper demonstration, the performance is impressive. I’m not sure how much it would slow down if you rendered very big, and/or, if the GPU is tiling the render, whether there would be stitches on those chains that result in a visible “line” across the image. For comparison, I rendered a line art output at 15000x10000… which went just fine, because the performance is not resolution-sensitive (which could also be a downside if you are rendering small; pixel-based methods generally handle LOD naturally).

The benefit of the current Line Art algorithm is that the entire pipeline is vector (which naturally allows editing), and it does layered occlusion, so it’s possible to have controlled see-through. For other usages, I don’t see why we shouldn’t give a raster-based algorithm a go.

I’ve been using the compute shaders in Blender and ported some basic shaders from my Unity project; it’s not the best, but barely enough for me to implement the algorithm.
[/quote]

5 Likes

Here is the summary of that meeting: 2023-03-10 Line Art Meeting

4 Likes

The node-lineart proposal is updated with a newer design that reflects my improved understanding of how GN works, and I’ve tried to make the node compatible with existing generic GN features.

Could use a bit more feedback :slight_smile: Thanks guys!

3 Likes

I think it’ll be necessary to push a bit further to a design that fits with the existing data-types, sockets, and geometry nodes design. Here are some specific points:

  • We don’t have list sockets currently, so Geometries doesn’t quite work. Maybe Mesh would make more sense as an input socket.
  • “Crease” isn’t a built-in thing in geometry nodes; I would recommend making that a generic boolean selection socket.
  • Rather than generating a named attribute “in_shade”, it’s better to output that data as an anonymous attribute from the node that generates it. Geometry nodes don’t typically generate non-built-in named attributes themselves; that’s for the user to do.
  • Regarding “we can still pass line art internal reference by socket”: I want to push for a design that uses existing socket types, or at least generic types that aren’t specific to line art if a new feature is really necessary. We can’t add new socket types for every situation like this.
  • I’d suggest making the Camera and Light inputs lower-level: use location and direction inputs for both of them. The nodes shouldn’t require using separate objects.
  • Can you split the direct conversion of edges (like selecting creased edges) and the contour line generation into separate nodes? They are really very different. The shadow processing could also be split into a separate node.

Also, more of a meta-point, but it would be helpful for you to label the data type of sockets in mockups. Or you can also build the mockups out of group nodes in the node editor. Thanks.

4 Likes

Ah well then consider that input as a realized geometry.

I see. Then the output should be a geometry socket with a bunch of field sockets that represent attributes of that geometry. It might look somewhat cluttered, because those sockets are also duplicated on the input side of the filtering node.

This is understandable, although then we’d need to add camera FOV/aspect-ratio and clipping-plane inputs as well :sweat_smile:.
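For illustration of why those extra inputs matter: location/direction only give you the view transform, while the projection itself needs FOV, aspect ratio, and clip distances. A minimal sketch, assuming a point already in camera space and OpenGL-style conventions (hypothetical helper, not Blender’s actual projection code):

```python
import math

def project_point(p_cam, fov_y, aspect, near, far):
    """Perspective-project a camera-space point (camera looks down -Z).
    Returns (ndc_x, ndc_y, depth) or None if outside the clip range.
    Illustrative only; Blender's real projection math differs in detail."""
    x, y, z = p_cam
    if not (-far <= z <= -near):
        return None  # clipped by near/far planes
    f = 1.0 / math.tan(fov_y / 2.0)       # focal scale from vertical FOV
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    depth = (-z - near) / (far - near)    # simple linear depth for clarity
    return (ndc_x, ndc_y, depth)
```

Without the FOV/aspect/clip values, none of these terms can be computed, which is why a low-level camera input can’t be just location and direction.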

The problem is that we can’t really customize the calculation process; all steps refer to the same pool of data internally, e.g. implicit edges and the triangulated topology hold references to each other. Say we make “feature line detection” into a stand-alone node which spits out lines in the form of a generic mesh; we’d lose that kind of internal information, including topology, quad-tree associations, etc.

If we need that feature, then what we may be able to do is keep an internal global line art data store in the background that keeps track of all the generic socket data, but that sounds quite sketchy, and COW complicates things a lot in this case.

Also, considering the problem of shadows: it actually needs to cross-reference two line art runs at the same time to determine the shadow state, so unless we can pass a line art reference or have a background handler, it’s not really possible.

So I guess the way to put it is that maybe the “calculation architecture” is incompatible with the current GN design, but we could think of the node as just providing flexibility in post-calculation filtering, because we can easily output the necessary information as generic types. That’s going to be much more compatible with the current design, since line art’s internal data is not needed from the outside.

Edit: Actually, ideally, considering that the main goal of line art is to “provide an editable geometry result” while it needs to “take inputs from the scene”, it should live somewhere between the geometry evaluation stage and the render stage. It would become kind of a “geometry renderer”, and some other use cases could benefit from this too. (Well, this is mostly pie in the sky… for now we may be able to fake it with another view layer?) We might also be able to make it a regular renderer; we’d just somehow need to output stuff back to the scene…

(Side note: it is possible to skip line art’s algorithm and only do feature line detection (you can build a node group for that using just internal nodes) and directly compose strokes with depth, but that kind of defeats the point of having line art :sweat_smile:)

OK, today I updated the proposal yet again, after some more discussion with Hans et al.

Basically, what we are trying to do is hand the filtering functionality over completely to attributes, so line art won’t need to record any specific object-reference data internally; this also makes it easy to pass things around with generic GN data types.

Feedback is welcome! (Just go to the top and click on the doc link :sweat_smile:)

Looking good! I only take issue with your proposed method of storing attributes; I think copying them to an AoS format like BMesh shouldn’t be necessary when the index of the original edge (or the index into some array) is available. But that only matters when it comes time for the implementation!


A “Camera Info” utility node sounds great. And it fits with the task here as another node. I think the way to achieve your goal of having a higher level interface that accepts objects directly is to build a group node that wraps the builtin node.


In your mockup, what does the Line Art node do when both “Contour” and “Shadow” are turned off? It seems like it should do nothing in that case.

1 Like

Ah, technically it’s possible to just store an index to the original primitive, since that’s available throughout the calculation, if that’s what you meant. What I’m thinking is that in line art, one edge can have multiple cuts, which can have interpolated attributes, different masks, etc.; I’d need to check whether anything else prevents us from doing it that way. (I think we could also evaluate those just as we output them, by interpolating in place? Would speed be a concern?)
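The “interpolate in place at output time” idea could be as simple as this sketch (hypothetical names; attributes here are tuples of floats stored at an edge’s two endpoints, and each cut is a parameter t along the edge):

```python
def interpolate_cut_attributes(attr_a, attr_b, cuts):
    """Given attribute values at an edge's two endpoints and a list of
    cut parameters t in [0, 1], return the interpolated value per cut.
    Illustrative sketch of evaluating attributes lazily at output time."""
    return [tuple(a + (b - a) * t for a, b in zip(attr_a, attr_b))
            for t in cuts]
```

Since it is just a linear blend per cut, the cost is proportional to the number of cuts actually output, which is why evaluating on output might be cheap enough.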

From what I can see, the contour lines often don’t have a real-edge reference, so eventually some attributes need to be stored “somewhere”. Now I’m thinking of storing those attributes as an offset into a continuous, self-growing array (since the output chain and list lengths can’t be determined up front); later, when the calculation is finished, we can assign indexes to chains and access per-point/segment attributes easily this way.
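A minimal sketch of that offset scheme (hypothetical names): chains append records into one growing pool during calculation and keep only an (offset, count) pair; after calculation, per-chain attributes are sliced back out by offset.

```python
class AttributePool:
    """Append-only attribute pool. During calculation, each chain appends
    its records and remembers only (offset, count); once calculation is
    finished, per-chain data is read back by slicing at that offset.
    Hypothetical sketch of the self-growing-array scheme, not real code."""

    def __init__(self):
        self._data = []  # the continuous, self-growing backing array

    def append_chain(self, records):
        """Append one chain's records; return its (offset, count)."""
        offset = len(self._data)
        self._data.extend(records)
        return offset, len(records)

    def chain_slice(self, offset, count):
        """Read one chain's records back out of the pool."""
        return self._data[offset:offset + count]
```

Because the pool only ever grows at the end, offsets handed out earlier stay valid for the whole calculation, which is what makes deferred index assignment workable.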

having a higher level interface that accepts objects directly is to build a group node that wraps the builtin node.

I think that’s the way to go, yes. I suggest this could be a utility group node that’s built in for convenience; this may also apply to line art presets for different kinds of usage (or maybe those should be provided through the asset manager).

In your mockup, what does the Line Art node do when both “Contour” and “Shadow” are turned off? It seems like it should do nothing in that case.

Ehhh, well, in that case I guess it basically only loads whatever lines the user tells line art to load (via an attribute)… So yeah :sweat_smile:

I wonder how the GPU data could be integrated once the NPR render engine is done?
If the work graph works across modules, then GN could send task nodes to the render engine.

Initially the line art node is still gonna be CPU-only until we get it running correctly; then we can think about the GPU stuff :slight_smile:

1 Like