Change what Everything Nodes is

Would it be better to make Everything Nodes nothing more than a collection of nodes, each of which is just a visual representation of one of Blender's C/C++ backend functions?

This way, instead of Jacques designing all the functionality, the community could use those low-level nodes to build higher-level nodes, and then share them with the rest of the community to build whatever they want. The node tree could then be wrapped in something similar to a group node, with the difference being that any parameter could be exposed for the end user to operate without having to dive into the complex node tree, as sketched below.
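To make that concrete, here is a minimal sketch of "wrapping a node tree with exposed parameters" using today's Python API. The group and socket names ("CommunityScatter", "Density") are invented for illustration, and the interface calls shown are the pre-4.0 API (`tree.inputs.new`); newer versions use `tree.interface` instead:

```python
import bpy

# Build a reusable node group -- the "wrapper" around a complex tree.
tree = bpy.data.node_groups.new("CommunityScatter", "ShaderNodeTree")

# Group input/output nodes mark the boundary of the wrapped tree.
group_in = tree.nodes.new("NodeGroupInput")
group_out = tree.nodes.new("NodeGroupOutput")

# "Exposing" a parameter = declaring a socket on the group interface,
# so end users can drive it without opening the tree. (Pre-4.0 API.)
tree.inputs.new("NodeSocketFloat", "Density")
tree.outputs.new("NodeSocketColor", "Result")

# ... the internal nodes and links would be created here ...
```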

It might also be better because, when the devs improve the backend, the nodes would automatically improve as well, as they'd be nothing more than visual representations of those backend functions.

This is basically how Houdini's HDA system works.


I’m pretty sure that’s what they’re aiming to do…

Naa, I asked on the live stream. They're redesigning the particle system as an entirely new entity rather than using the existing backend, and hadn't considered a way for users to wrap up and share node trees with a simple front end.

I think they should make the nodes work with the existing backend first, then improve the backend, and the nodes will automatically gain the new functionality (e.g. a node gaining a new parameter input when its underlying function is updated to accept additional parameters). A toy sketch of that idea follows.
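As an illustration of how node parameters could track a backend function automatically, here is a toy Python sketch (not actual Blender code; `distribute_points` is a made-up stand-in) that derives a node's sockets from a function signature, so adding a parameter to the function adds a socket to the node:

```python
import inspect

def distribute_points(mesh, density: float = 1.0, seed: int = 0):
    """Stand-in for a backend function; its parameters define the node."""
    ...

def sockets_for(func):
    # One input socket per parameter, typed from the annotation.
    sig = inspect.signature(func)
    return [(name, p.annotation, p.default)
            for name, p in sig.parameters.items()]

# If a dev later adds `radius: float = 0.1` to distribute_points,
# the node automatically grows a matching socket.
print(sockets_for(distribute_points))
```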

What is an example of using a system like this? To put it in perspective.

A good example would be HDAs: basically shareable node trees with a simple front-end node exposing only the parameters the end user needs, without diving into the node tree contained within.

Say, for example, I build a node tree which accepts an input of objects. It loops through each object and separates them into containers based on their object names, then loops through all the materials of the selected objects and adds some new nodes and links. After this it selects the objects of each container one at a time and renders them in isolation. When it finishes, it sends a finished message out of the top-level node, which, if connected to another node, will trigger that one. (See the sketch below for the render-in-isolation step.)
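A rough bpy sketch of the render-in-isolation part of that tree; the keywords, output path, and name-based grouping rule are all assumptions for illustration:

```python
import bpy

KEYWORDS = ["chair", "table"]   # assumed container separation keywords
scene = bpy.context.scene

# Group the selected objects into "containers" by name keyword.
containers = {kw: [ob for ob in bpy.context.selected_objects
                   if kw in ob.name.lower()]
              for kw in KEYWORDS}

for kw, objects in containers.items():
    # Hide everything from the render except this container.
    for ob in scene.objects:
        ob.hide_render = ob not in objects
    scene.render.filepath = f"//renders/{kw}"   # assumed save location
    bpy.ops.render.render(write_still=True)     # render in isolation
```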

Inside the node tree I would then right-click on a number of parameters and choose "expose to top-level node".

Once complete, I would share the node to an online repository which Blender's Shift+A menu would have access to. A different user could then press Shift+A in the node editor, select the community menu, search by name, find my node, and add it to their node editor. They would connect a relevant node to my node's input socket (in this case an object selection node), set a few of the parameters I've exposed (save location, resolution, container separation keywords, etc.), and either make that node the active endpoint or plug its output into a different node which would then perform additional tasks when my node completes.
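The Shift+A integration itself is already scriptable today. A minimal sketch of adding a "Community" entry to the node Add menu; the repository lookup is hypothetical, and the listed group name is made up, but the menu registration pattern is real API:

```python
import bpy

class NODE_MT_community(bpy.types.Menu):
    bl_label = "Community"
    bl_idname = "NODE_MT_community"

    def draw(self, context):
        # A real implementation would list node groups fetched from
        # the shared online repository (hypothetical); this just adds
        # a placeholder entry.
        self.layout.operator("node.add_group", text="BatchIsolationRender")

def draw_community(self, context):
    self.layout.menu(NODE_MT_community.bl_idname)

bpy.utils.register_class(NODE_MT_community)
bpy.types.NODE_MT_add.append(draw_community)  # hooks into Shift+A
```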

There should definitely not be more than one node editor. Instead there should be a single node editor, with each system (particles, textures, shaders, world, mesh edit, animation, etc.) accessed by dropping down top-level nodes, each of which would contain nodes relevant to that top-level node. For example, if you drop down a particle top-level node, once inside it, pressing Shift+A would bring up a list of nodes relevant to making a particle system rather than nodes relevant to compositing or creating shaders.

Having top-level nodes that all live in one node editor would mean you could automate any number of tasks within Blender just by connecting the top-level nodes. For example, you could have a mesh generation node connected to a mesh edit node, connected to a particle node, connected to a shader node, connected to two different render settings nodes (each specifying a different camera), and finally connected to a render node to create two different renders (see the toy sketch below). Top-level nodes should of course be able to live within another top-level node so that you can share full functionality.
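In spirit, chaining top-level nodes is just a dataflow pipeline. A toy Python sketch of that chaining; every name here is invented to illustrate the idea, none of it is real Blender API:

```python
# Toy model: each "top-level node" is a stage that takes the upstream
# state and returns an enriched state for the next node downstream.
def mesh_generation(params):  return {"mesh": f"mesh({params})"}
def particle_system(state):   return {**state, "particles": "psys"}
def shader(state):            return {**state, "material": "mat"}
def render(state, camera):    print(f"rendering {state} via {camera}")

# mesh generation -> particles -> shader ...
state = shader(particle_system(mesh_generation("cube")))

# ... -> two render-settings branches, each with its own camera.
for camera in ("CameraA", "CameraB"):
    render(state, camera)
```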

The end user could also, if they wished, dive into the node to see how it works by analysing the contained node tree, and maybe tweak it to perform a different task and then add it back to the community repository under a different name.

Other uses would be a foliage system which automatically populates a landscape with objects that are passed into it, a batch rendering system, or parametric models like chairs (google "Houdini parametric chair" or "parametric car" tutorials). A minimal foliage sketch follows.
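For the foliage case, the low-level pieces already exist in the Python API. A minimal sketch that instances a passed-in object across a landscape with a hair particle system; the object names ("Landscape", "Tree") and the count are assumptions:

```python
import bpy

landscape = bpy.data.objects["Landscape"]   # assumed object names
foliage = bpy.data.objects["Tree"]

# Add a hair particle system and instance the foliage object on it.
mod = landscape.modifiers.new("Foliage", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.type = 'HAIR'
settings.count = 500                        # assumed instance count
settings.render_type = 'OBJECT'
settings.instance_object = foliage
```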

Basically, the system would mean that rather than waiting for one developer to create all this functionality, he would instead just give us the bare essentials to create and share anything we wanted. The system would require minimal upkeep from the devs, because all bug fixes to the existing code would automatically trickle down to the nodes, as they'd be nothing more than visual representations of the existing C/C++ backend functions. They might occasionally have to add or remove a parameter input from a node.

Thanks for the explanation. There is a similar use case I am interested in: at some point I wanted to use EEVEE for generating textures (via Everything Nodes, since the compositor can't handle shaders), then throw the result back to the compositor to edit it, then import the result back as a dynamic texture and apply it to a material.

This back-and-forth pipeline, as defined by the user, is very dynamic and flexible. Perhaps with this new approach it could become feasible.
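A minimal sketch of the "render, then reuse as texture" leg of that pipeline with today's API; the file path and material name are assumptions, and the engine identifier varies across Blender versions:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'        # identifier varies by version
scene.render.filepath = "//baked/result.png" # assumed path
bpy.ops.render.render(write_still=True)      # compositor runs as part of this

# Load the result back and feed it into a material as a texture.
img = bpy.data.images.load(bpy.path.abspath("//baked/result.png"))
mat = bpy.data.materials["Target"]           # assumed material name
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
```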