[Geometry Node] Distance Tessellator Node (with Dynamic LODs)

Hi everybody!


I’m literally a newbie when it comes to Blender’s code!
Today I’m putting my hands into Blender’s code to learn, to ask for some help, and to give a new feature to the community! This topic will be updated over time, and I don’t know how long the feature will take to implement, because I am a beginner in C++.
This topic was created for:

  • Following the development of this feature
  • Maybe asking for some help!
  • Giving feedback

PS: What is my level in C++? I have the basics and I can pick things up quickly, but out of 10 I’d give myself 4/10.


Presentation of the project

A new node for Geometry Nodes called “Distance Tessellator”. But why this node?

This node adds density to a mesh like the existing subdivision nodes, except that it works with the camera distance and uses an ease curve, so the topology transitions smoothly instead of giving a hard, sudden result. With this feature, detail fades in gradually as the camera comes closer to the mesh.

Example:

(animated example: 03-quad-grid-morphing)
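To make the transition idea concrete, here is a minimal GLSL sketch of how a tessellation control shader could map camera distance to a per-edge tessellation level, with smoothstep() acting as the ease curve. The uniform names, distance range and level range are my own assumptions for illustration; this is not Blender code.

```glsl
// Hypothetical sketch (not Blender code): a tessellation control shader that
// picks a subdivision level per triangle edge from the camera distance, with
// smoothstep() acting as the ease curve so detail fades in instead of popping.
#version 410 core
layout(vertices = 3) out;

uniform vec3  u_camera_pos;    // camera position in object space (assumed name)
uniform float u_near_distance; // distance at which the max level is reached
uniform float u_far_distance;  // distance at which the min level is reached
uniform float u_min_level;     // e.g. 1.0
uniform float u_max_level;     // e.g. 6.0

float level_at(vec3 p)
{
    float d = distance(u_camera_pos, p);
    // t goes smoothly from 1.0 (near) to 0.0 (far): the "ease" transition
    float t = 1.0 - smoothstep(u_near_distance, u_far_distance, d);
    return mix(u_min_level, u_max_level, t);
}

void main()
{
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0) {
        vec3 p0 = gl_in[0].gl_Position.xyz;
        vec3 p1 = gl_in[1].gl_Position.xyz;
        vec3 p2 = gl_in[2].gl_Position.xyz;
        // evaluating each edge at its midpoint keeps adjacent patches crack-free
        gl_TessLevelOuter[0] = level_at(0.5 * (p1 + p2));
        gl_TessLevelOuter[1] = level_at(0.5 * (p2 + p0));
        gl_TessLevelOuter[2] = level_at(0.5 * (p0 + p1));
        gl_TessLevelInner[0] = max(gl_TessLevelOuter[0],
                                   max(gl_TessLevelOuter[1], gl_TessLevelOuter[2]));
    }
}
```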


How much time?

I really have no idea. On my own I’d guess 2-3 months, because I’m a newbie and I need to:

  • Understand the code
  • Understand & write GLSL
  • Write the code
  • Get the code reviewed
  • Fix bugs
  • NOT create memory leaks!
  • Try to avoid raw pointers and work only with references

What’s my plan?

  1. Try it locally with OpenGL to understand it better and not make a mess
  2. Set up the repo & tools
  3. Understand the principles of the pipeline & the complex math (only what this feature needs)
  4. Make it work

Mockup


Geometry Nodes - Node properties

Inputs:

  • Triangulated Faces: Geometry (green socket) → needs triangulated faces as input
  • Min distance: Factor (grey socket) → minimum subdivision level, e.g. 1
  • Max distance: Factor (grey socket) → maximum subdivision level, e.g. 6
  • Only render: Boolean (pink socket) → defines whether the feature is enabled only at render time

Outputs:

  • Geometry

The inputs and outputs may still change depending on whether the node exposes simple or full control, so more inputs and outputs may be added and the mockup can evolve.

Example situation (usage synopsis)

A beautiful landscape, with grass and water in the mountains and a lot of detail driven by displacement maps: with this feature, the closer you come to the surface, the more detail your camera can capture.


CPU / GPU / Memory Footprint

In fact, this feature has a smaller memory footprint, because it is computed in real time within the limits defined in the node: with a minimum of 2 and a maximum of 6, for example, it starts from a 2x2 subdivision and goes up through 3x3 → 4x4 → 5x5 and finally the maximum 6x6 subdivision as the camera comes closer to the mesh…
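As a rough sketch of that min/max stepping (again with assumed names, not actual Blender code), the eased camera factor can be quantized into whole levels, so a far-away patch never costs more than the minimum density:

```glsl
// Hypothetical helper: t is the eased camera factor (0.0 = far, 1.0 = near),
// e.g. from the control shader sketch above. With min_level = 2 and
// max_level = 6 this walks through 2 -> 3 -> 4 -> 5 -> 6 as the camera approaches.
int subdivision_level(float t, int min_level, int max_level)
{
    float level = mix(float(min_level), float(max_level), clamp(t, 0.0, 1.0));
    return int(round(level));
}
```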

At first the feature will run on the CPU, and I think I’m going to make it multi-threaded; in the future, if I can, I’ll move it to the GPU module.


What problem does this feature solve?

With this feature you solve the problem of “I need more density in this area, but I don’t want it to stay there all the time when the camera isn’t close”. And thanks to the transition, details appear smoothly, without drawing your eye to new detail popping into the topology!


Documentation resources

Tessellated Terrain Rendering with Dynamic LOD - victorbush (main resource for developing this feature)
LearnOpenGL - Tessellation (additional helpful resource)


Game engine resource with example parameters and a demo


Exciting, I’m going to comment here as a user, this looks like adaptive subdivision & displacement in Cycles, only as “real geometry”. Are there particular cases where rendertime displacement wouldn’t cut it, where this method would help? The example from Fortnite looks good, but would be a typical candidate for rendertime displacement if transposed to Blender.

Hehe, this is funny.

I needed something to evenly subdivide any non-closed mesh (the mesh-to-volume and volume-to-mesh process only works for closed meshes), to give various meshes even topology for displacement with textures. I need it so much that I had already set aside time to try to code it myself after I finish my current job, but you beat me to it.

Anyway, my idea was a simple even-tessellation node, where geometry goes in, and there is just one parameter, “max size”, which ensures the mesh doesn’t have any triangle larger than this size. Yours, I suppose, would expect the input tessellation size to be a field.

What I am curious about is how you would evaluate the input field, say based on distance from the camera, if the source geometry before tessellation is too sparse. Let’s say the input is just a single 1x1 km quad plane with 4 vertices. Those 4 points don’t seem enough to establish a distance-fading gradient, but I may be wrong.


It’s almost exactly the same thing! Except I’d point out 3 problems:

  • You shouldn’t enable adaptive subdivision in the modifier while using this node, otherwise you subdivide your topology twice and get a big memory cost from the polycount, because two similar shader subdivision methods run at once
  • Because the computation happens in real time in shaders, if you place your camera at one distance and then apply the Geometry Nodes modifier, you keep only the topology generated at that moment; if the subdivision has barely started at that point, you can end up with poor topology, and the user needs to be informed about that!
  • Can a shader run in Geometry Nodes on the CPU? (I need to investigate, but I think so; the current Subdivide node uses OpenSubdiv)

Yes, it is a good candidate for meshes that need a high topology density at certain moments, like landscapes or water that need detail, or simply a character using displacement maps.

You should see this node as a rendering aid, not a tool for creating something procedurally; it is really for rendering and appreciating every detail of the mesh. It is a topology refinement that adds detail.

All help received is welcome! But I work only in C++ & GLSL because, after some time investigating, this new feature will use shader tessellation.


Next, to give an answer: the right first step is to triangulate the faces, for better computation and better topology, because most tessellation shaders use triangles as their primitive. You have 2 main shader approaches (see the sketch below):

  • Flat tessellation (like a simple subdivide; this one does not smooth the topology, unlike Catmull-Clark)
  • PN triangles (this one is like Catmull-Clark; it smooths the topology!)
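For the flat case, a minimal tessellation evaluation shader just blends the three corners with barycentric weights, roughly as sketched below (my own illustration, not taken from the linked articles). A PN-triangles version would instead evaluate a cubic Bézier patch built from the corner positions and normals, which is where the Catmull-Clark-like smoothing comes from.

```glsl
// Hypothetical flat-tessellation evaluation shader: every generated vertex is
// a plain barycentric blend of the original triangle corners, so the mesh gets
// denser (ready for displacement) but the surface itself stays flat.
#version 410 core
layout(triangles, equal_spacing, ccw) in;

void main()
{
    // gl_TessCoord holds the barycentric coordinates of the generated vertex
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}
```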

So the size of your mesh doesn’t matter, because it works like subdividing with a min value and a max value, as with the “Subdivide Mesh” node. The difference here is that you get a transition into the detail and a real-time LOD. The larger the mesh is, the more you either pre-densify the mesh to be subdivided with a “base” (a pre-subdivision) or simply increase the max value.

Example:

You have a huge mesh, say 500x500 meters, and you need to subdivide it. You can use the base topology of your mesh as it is and just increase the subdivision steps, e.g. a max of 6 becomes a max of 32… so every time one subdivision level is complete, you add one more until you reach the max value.


And I want to expose more parameters on this node to let the user choose how the topology looks!

You can take a look at this article, everything is perfectly explained! There are also a few articles on the web that explain how to do it; it is a feature of every graphics API.

It is not really clear to me what the full motivation behind this work is. There are building blocks that allow implementing such a system. Surely, it might not be as easy to use as a single node, but it allows more fine-grained control. For example, you can do all sorts of tricks with the camera frustum as well.

Did you study what is already possible in geometry nodes, and what artists are able to achieve already? Or are you missing some specific algorithm for a reason that is not so obvious to those of us reading this thread?

For those who have a Studio subscription, this is what we did for the Gold project: Research & Development - Gold - Blender Studio

Having something easier to use for new people sounds good. So it is more a question of how to get there: maybe a better and easier solution would be to make it an asset node group that we ship with Blender?


Render-time micro-displacement is still considered ‘experimental’ despite the first bits of code going into Cycles 12 years ago. Mai got us much of the rest of the way there, but the Blender Foundation does not yet seem to be fully aware of the usefulness of mesh tessellation for highly detailed scenes (which leads to continued reliance on cranking up subdivision levels, in many cases wasting a lot of memory).


Did you study what is already possible in geometry nodes, and what artists are able to achieve already? Or are you missing some specific algorithm for a reason that is not so obvious to those of us reading this thread?

The problem with geometry nodes, and nobody wants to admit it, is that they are really hard to start with! We need a lot of nodes to make a small tweak, and offering a lot of nodes does not make the task easier… I prefer to have one consistent node with a lot of parameters rather than searching every time for which node to connect to which input… It’s really frustrating! And I haven’t found a comparable node in Geometry Nodes that does the same thing: subdividing the topology correctly as the distance increases or decreases, while controlling where the edges pop in.


It is not really clear to me what the full motivation behind this work is. There are building blocks that allow implementing such a system. Surely, it might not be as easy to use as a single node, but it allows more fine-grained control. For example, you can do all sorts of tricks with the camera frustum as well.

As you said, it is more difficult than one node! That is a simple reason not to use it! Nobody wants to use something harder that takes so much time to set up. My node will evolve in future versions with more parameters to set the desired topology, exposing parameters like inner & outer edge levels and many other things, to keep it consistent but simple.


Having something easier to use for new people sounds good. So it is more a question of how to get there

Having something easier and really user-friendly is what everybody wants! Simple settings, but POWERFUL! Expose only what you need to expose, nothing else that makes the task harder or leaves the user lost about which parameter to set :man_shrugging:


maybe a better and easier solution would be to make it an asset node group that we ship with Blender?

It’s not a bad idea, but I don’t want to make 4 nodes for one thing; I want it to stay simple to use and really friendly, because geometry nodes in general aren’t user-friendly, they’re hard! And it should stay simple so that anybody can use it: even if you model non-procedurally, you can use this tool to render properly, and it’s very useful! As I said, it is aimed more at being a rendering tool (used in GN) than a tool for modelling or generating something procedurally! Now, if you have any suggestions or want to give some help, you’re welcome :smiley:


Yeah, completely! This is an underrated feature in Blender :cry: We need more development on this feature to increase performance and make it a real tool! It is really useful in any project where you need detail (water, terrain, bricks, wheels, fabric, etc.)!

That’s the point of group nodes and node group assets. Any complicated node system can be shared and used as a single node with some parameters.


Tell me your plan, if you have one, to split this one node into several… But the node needs to stay easy to use even if it is split :thinking:

I don’t have time to design this system right now. From skimming the proposal I think focusing on the fundamental data structures and data processing abilities of CPUs and GPUs and how it relates to rendering would make the design more effective. Frankly I don’t see the detailed understanding in the description I feel would be necessary to execute such a difficult project. But I appreciate your enthusiasm and experimenting can’t hurt.


Cycles has an implementation of DiagSplit. I’m not sure if that’s still state of the art. But the algorithm in Manuka was inspired by it, so it’s probably pretty good. Quality should be better than tessellation shaders, but a potential GPU implementation could be slower.

It would be neat to have it on the Blender side. However both the algorithm and Blender integration are quite challenging.


Unreal Engine and a lot of game engines have been using tessellation in real time without trouble… Maybe it is accelerated by the GPU + CPU, but even so there is no trouble, and this feature could be fully superseded by an implementation of meshlets in the future! :man_shrugging:

But now I can understand why adaptive subdivision in Cycles has stalled, if something that complex is used rather than a simple shader… I don’t know if that was the right method for it :thinking:

So I suppose this node could partially replace DiagSplit!


GPU hardware tessellation is designed for realtime with lower quality. DiagSplit and similar have found more use in offline rendering with higher quality. Both have their uses. Both are difficult to implement well, especially for a beginner.

Unreal is actually discontinuing hardware tessellation support, in favor of Nanite which is an LOD system instead. Basically, assuming the mesh has been tessellated beforehand, offline and not in realtime. To create assets for Nanite in a DCC, it’s probably worth using something higher quality like DiagSplit. As far as I know, hardware tessellation has not been a great success in games overall.

Meshlets are more of an implementation detail, not important to the choice of tessellation algorithm.


Maybe a meshlet implementation (mesh shaders), with support for something like meshlet tessellation (which does not exist yet), is a better idea; I think it would be easier to implement with Vulkan and more viable in the long run, since it is a recent technology (already almost 7 years old). @Jeroen-Bakker has already floated the idea! So maybe, if I find this too hard, I will put it on standby and wait for a meshlet implementation :man_shrugging: That is a better idea than making people wait for a feature that never comes; I want to be honest with people!


Update - Learning how Blender works


After some searching during the night, and testing a lot of things, I found that Blender uses OpenSubdiv (OSD, by Pixar). That is excellent support for the target feature, because the library already provides something called the “OSD Tessellation Shader Interface”. Going through a feature that is already implemented in a library is an excellent thing, because:

  • OpenSubdiv has documentation, which is a good way to know what we are doing and take the right path!
  • OpenSubdiv is already integrated into Blender (without this lib you cannot use the Subdivide node or the Subdivision Surface modifier & node inside Geometry Nodes)
  • We can find a lot of examples and demo files on GitHub or in research papers, and the code is free and open source to use!
  • Less code to write, because it is a library and we only call what we need; I think roughly 50% of the code is already written!

OpenSubDiv Manual (OSD Tessellation shader Interface)


Conclusion?!

This is a good beginning for this node, because I can build a base for the “Distance Tessellator” on top of the “Subdivide” node, since that node already sits on OpenSubdiv (OSD). This is only to create the initial structure for the new node! After that, we need to implement the tessellation LODs.

So all the investigation and discussion are pointing the node in the right direction → FORWARD!


In video

This is exactly what we can do and what we need to do for Blender (see the timecodes).


And now?!

Now I think I’m going to figure out how to make the task as easy as possible, and understand all the essential pipeline basics: in what order it proceeds and what it does, pass by pass.


If you want to experiment, that is of course fine. But instead of making your first project too complex to handle, I would advise making it smaller and more manageable.

Using the GPU to solve something pre-rendering has big performance penalties. I would not add any GPU-based work to this project, as it massively increases the complexity and touches more areas of Blender that you would also need to learn/understand. It also has a chance of becoming a point solution.

Note that OSD also has drawbacks (API mismatch leading to more overhead, and probably integration with the subdivision code to improve reuse). It isn’t a walk in the park. As you’re working with multiple unknowns, this could lead to suboptimal decisions, since both the OSD and Blender code bases are unfamiliar.

My second piece of advice is to collect a short list of ideas to consider and build a better understanding of their impact by prototyping.


Given several remarks, the complexity of the project for a beginner, the complexity of rendering scenes, and the poor optimization of this approach, which can make the task very complicated, and which may be largely replaced by much more modern technologies using meshlets for real-time use or for final rendering, I fear having to cancel this project or find a successor. The complexity of the thing, the various modules to manage, and above all the implementation, which can be really catastrophic if done poorly, are a reality I had not seen, but I have to face it and either cancel, or find a successor or a helper for the project who will have time to devote to it with me. I am not experienced enough, it must be recognized.

I appreciate the comments of all the people who took an interest in this project, from near or far, and I find that wonderful. I also thank the people who discouraged me a little: by pointing out the problems and making me face the reality of the task for a beginner, they made me admit that I had underestimated the project, despite the extensive documentation that exists on the Internet.

Much more modern technologies exist and can be implemented in better ways (meshlets / mesh shaders).

In the meantime, use the experimental adaptive subdivision feature of the “Subdivision Surface” modifier.

(I am still learning in my corner, by the way.)
