dynamic paint set-up using the volume of a generated FOV cone to subtract weight (that's why I suggested a fourth mode for the new attribute proximity node, the volume option! quite important).
So no, it’s not using the nodes.
Maybe we can clip particles outside the camera FOV by comparing the points' positions with the camera orientation & FOV angle?
Oh, you're using a mesh overlaid with the camera, I didn't catch that. Clever!
I'm too weak at maths to figure this out quickly, but I imagine you'd convert the points to camera space by multiplying them with the inverse of the camera's transform matrix. However, I'm not sure this would give you enough information…
Well, I'm almost sure it's possible using simple Pythagoras. I could do it with Python, but translating a formula into math nodes is more complicated, of course.
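For what it's worth, here's a rough Python sketch of that idea outside of nodes: transform each point into camera space with the inverse of the camera's world matrix, then use Pythagoras to compare its radial distance from the view axis against the cone radius at that depth. This treats the FOV as a simple cone (aspect ratio ignored), and the object name is just an assumption for illustration:

```python
import bpy
import math

def point_in_fov_cone(point_world, cam_obj):
    """True if a world-space point lies inside the camera's FOV cone."""
    # Bring the point into camera space; Blender cameras look down -Z.
    point_cam = cam_obj.matrix_world.inverted() @ point_world
    if point_cam.z >= 0.0:
        return False  # behind the camera
    # camera.data.angle is the full FOV angle in radians.
    half_angle = cam_obj.data.angle / 2.0
    # Pythagoras: radial distance of the point from the view axis.
    radial = math.sqrt(point_cam.x ** 2 + point_cam.y ** 2)
    # Inside if that distance fits within the cone's radius at this depth.
    return radial <= abs(point_cam.z) * math.tan(half_angle)

cam = bpy.context.scene.camera
# "Suzanne" is a hypothetical object, used here as an example point.
pt = bpy.data.objects["Suzanne"].matrix_world.translation
print(point_in_fov_cone(pt, cam))
```

The same comparison could in principle be rebuilt with vector math nodes, but as you say, it's more awkward there.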
I am of the opinion that implementing instance culling and LOD selection should be handled internally by Blender rather than by the user's own node logic. If your issue is the amount of load that many polys/instances put on the rasterizer/vertex shader or ray traversal, then that should be handled by the renderer figuring out how to balance the workload without impacting the image too much. The artist shouldn't have to invent a system like this; they should only focus on how to compose their shot.
It's not just a real-time optimization… LODs are even used in film VFX, though not in quite the same systemic way as in a game engine like UE4. The reality is that rendering takes a lot of time, and the more geometry and complex materials you have in your scene, the longer it takes to render, so if you reduce those things your renders get faster.
Now, we could discuss whether or not LODs are something we need to be thinking about for Blender right now (I personally think it's putting the cart before the horse when we lack so much other basic functionality), but their benefit to any 3D industry is undeniable.
I've seen countless demonstrations of Blender rendering billions of complex tree instances with extreme ease, so I'm not sure I get your argument here.
Maybe I'm thinking about particles & scattering too much here? But then, that's quite the goal of this thread.
(related: Offloading heavy geometry - #72 by BD3D)
@BD3D does it have to be discussed in this thread? I'm pretty sure the devs are aware of the current performance issues, and poor performance with lots of instances isn't a problem of just the Geometry Nodes module.
To make working on large projects easier, GN needs some kind of "switch" node that can select different inputs to display in the OpenGL viewport, in interactive rendering, and in the final render. This way it would be possible to set a lower point distribution density in the viewport, pick a simplified mesh, set a lower subdivision level, etc.
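Until something like that exists as a node, one rough Python workaround is to flip an exposed group input between a viewport value and a render value using render handlers. This is only a sketch: the object, modifier, and input names below are assumptions, and the exposed-input identifier (e.g. "Input_2") depends on how your node group is set up:

```python
import bpy

VIEWPORT_DENSITY = 10.0
RENDER_DENSITY = 100.0

def set_density(value):
    obj = bpy.data.objects["Scatter"]       # assumed object name
    mod = obj.modifiers["GeometryNodes"]    # assumed modifier name
    mod["Input_2"] = value                  # assumed exposed "Density" input
    obj.update_tag()                        # force re-evaluation

def on_render_pre(scene, *args):
    set_density(RENDER_DENSITY)

def on_render_post(scene, *args):
    set_density(VIEWPORT_DENSITY)

bpy.app.handlers.render_pre.append(on_render_pre)
bpy.app.handlers.render_post.append(on_render_post)
```

It's clunky compared to a proper switch node, which is exactly the point of the request.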
Also, it would be very convenient if instanced objects could be displayed as point clouds. I thought maybe this could be achieved by adding a nodes modifier on the source object with only a Point Distribute node inside, but this method has too many drawbacks. The main one is that modifiers cannot be disabled for interactive rendering, only for the final render, so it would be useless during scene creation. Also, the points themselves have weird shading and a fixed size in scene space, whereas points in a point cloud should have a fixed size in screen space to better display the object's shape when zooming out. The best example in Blender is to delete all faces and edges in a mesh, plus a percentage of its vertices; the remaining vertices basically look like a point cloud.
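That last trick is easy to script, for anyone who wants to try it: build a vertex-only proxy from a random subset of the source mesh's vertices. A minimal sketch, where the object name and keep ratio are assumptions:

```python
import bpy
import random

obj = bpy.data.objects["HeavyTree"]  # assumed source object
keep = 0.1                           # keep ~10% of the vertices

# Sample a random subset of vertex coordinates from the source mesh.
coords = [v.co.copy() for v in obj.data.vertices if random.random() < keep]

# Build a vertex-only mesh: no edges, no faces -- it reads as a point cloud.
mesh = bpy.data.meshes.new("PointCloudProxy")
mesh.from_pydata(coords, [], [])
proxy = bpy.data.objects.new("PointCloudProxy", mesh)
proxy.matrix_world = obj.matrix_world.copy()
bpy.context.collection.objects.link(proxy)
```

Of course this still has the fixed scene-space point size problem mentioned above; a real screen-space point display would have to come from the viewport drawing code itself.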
Agreed on the viewport / render-time result; wouldn't that be as easy as creating two sockets in the output, one for render and one for viewport? After all, such a differentiation already exists in the modifier.
I reckon this could lead to a lot of duplication in the node tree, whereas if it's exposed as a variable, as in Pablo's mockup, you can keep a single node tree and just differentiate through that variable.