Compositor improvements

This is important: an output to IMAGE node!

Also, I’m pretty convinced that nodes should replace the modifier stack.

3 Likes

More maths nodes: I have been working on improved shadow catchers, which involves doing maths on RGB values in the compositor (multiplying, dividing, adding, and subtracting RGB values). It is a pain to split into RGB all the time and then recombine. In the shader editor I can use the Vector Math node, as it also works on colour, but it is not available in the compositor. It shouldn't be a lot of work to port the node across (object-oriented code, anyone?).
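For anyone hitting the same wall, here is a rough bpy sketch of the current workaround: separate the channels, run one Math node per channel, then recombine. The node type names are from the 2.8x Python API, and the multiply factor is just a placeholder.

```python
import bpy

# Workaround for the missing vector/colour maths in the compositor:
# split the colour, do per-channel maths, then recombine.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

sep = tree.nodes.new("CompositorNodeSepRGBA")    # split the incoming colour
comb = tree.nodes.new("CompositorNodeCombRGBA")  # rebuild the colour

for channel in "RGB":
    math_node = tree.nodes.new("CompositorNodeMath")
    math_node.operation = 'MULTIPLY'           # or 'DIVIDE', 'ADD', 'SUBTRACT'
    math_node.inputs[1].default_value = 0.5    # placeholder factor
    tree.links.new(sep.outputs[channel], math_node.inputs[0])
    tree.links.new(math_node.outputs[0], comb.inputs[channel])

tree.links.new(sep.outputs["A"], comb.inputs["A"])  # pass alpha straight through
```

A single "Vector Math"-style node in the compositor would collapse all of the above into one node.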

3 Likes

You can already do that with the File Output Node.
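In case it helps, a minimal sketch of what that looks like from Python; the base path and slot name are placeholders.

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

layers = tree.nodes.new("CompositorNodeRLayers")   # render layer input
out = tree.nodes.new("CompositorNodeOutputFile")   # writes images to disk
out.base_path = "//comp_output/"                   # placeholder directory

out.file_slots[0].path = "beauty_"                 # placeholder file slot name
tree.links.new(layers.outputs["Image"], out.inputs[0])
```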

AFAIK this is already planned, right after the node-based particle system is in place.

2 Likes

Thank you everyone! Lots of great ideas.

I have aggregated all ideas into a spreadsheet and have started organizing some of these ideas.

Blender Compositor Improvements.xlsx

Next step will be to prioritize all ideas and start working on initial proposals for these ideas. I’m imagining that we would split into smaller work groups to do this.
Anyone who has the time and is interested in working on these proposals, let me know and I'll make sure to keep you in the loop.

Blender developers @JeroenBakker @sergey @dfelinto, what is the best way to prepare proposals for you? Any suggestions? What is the most important work we can do in order to create useful proposals that developers can act on?

Let's keep pushing this forward!

7 Likes

Blender compositor already has rotate and scale nodes, as well as a transform node which has rotate, scale and translation options. These don’t need to be on the list.

I think @BlenderNavi was talking about the VSE, where image strips have an “offset” property which allows them to be translated in X or Y, but they can't be rotated or scaled without using an effect strip (as far as I can tell).

2 Likes

Yeah. I thought so too, but wasn’t in front of Blender when I updated the spreadsheet. Removed.

1 Like

These are all good ideas.

Regarding your UI suggestion, I believe one of the biggest strengths of Blender’s node interface is that the parameters are available directly on the nodes instead of only being available through a property panel. Fusion, just like Nuke, can move the node ports around the node, because they are not related to parameters on the node. I would hate to see this node design go, just so that we can have moving ports.
That said, if there’s a smart way to design this UI that allows for both vertical and horizontal layouts while keeping node parameters on the nodes I’d be open to that… I just can’t think of a good way to do it.

Blender already uses colors to help identify ports. You can also hide unused ports if that helps you stay organised. Shapes could be an option, but I worry that the UI would start getting too cluttered.
As before, I think this is mostly a problem if you’re not using the parameters on the node…

The Fusion suggestion is great… but Node Wrangler already has the functionality you're looking for: Alt+Shift and right-click drag to another node and you get a list of inputs.
In fact, Node Wrangler has several “Lazy Functions” that help with that kind of workflow. Check them out here: https://gregzaal.github.io/node-wrangler/

On a side note, I have to say that having worked with Nuke, Fusion, Gaea, Godot, Unity, C4D, Maya and probably a couple other node interfaces over the years, Blender’s node interface with Node Wrangler active is unbeatable IMO.

4 Likes

Thanks so much for these insights! I do have to say that I'm starting to get the hang of Blender nodes… it's just that the noodles get all over the place so quickly, which somewhat defeats their purpose. Right now I think the only way to avoid noodle salad is using lots of groups… maybe that's better than having lots of big frames, though; I'll have to find out! And sure, if you count Node Wrangler as an integral part of the interface, I haven't checked that out in depth; will do!

One other thing that annoys me (I don't know if it belongs here): why do I have to load the clip again in the clip viewer to make a mask? In one of my projects the clip viewer fails to load my sequence while the Image node does just fine, so I can't make a mask for a clip that's already in the comp… I can't wrap my head around that concept.

Definitely check out Node Wrangler. It's pretty much essential. Many people say Node Wrangler's features should just become part of Blender.

As for organization, groups, frames and reroutes are your friends.

The viewer thing is a little annoying indeed… it's probably a result of the compositor being patched together with other Blender interfaces instead of having its own viewers. You kinda get used to it, but it's not ideal for sure.

1 Like

You don't have to. You can create masks in an Image Editor with its display set to the Viewer Node, and that way you can create a mask for any node that's in your composite without needing to load the images separately into an Image Editor or a Movie Clip Editor.
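If you prefer driving it from Python, here is a rough sketch of the same idea; the datablock name and the use of a Mix node as the masking step are just placeholders for illustration.

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# The mask datablock can be drawn in the Image Editor while it displays
# the Viewer Node result, so the clip never has to be loaded twice.
mask = bpy.data.masks.new("garbage_matte")         # placeholder name

mask_node = tree.nodes.new("CompositorNodeMask")
mask_node.mask = mask

mix = tree.nodes.new("CompositorNodeMixRGB")       # use the mask as a factor
tree.links.new(mask_node.outputs["Mask"], mix.inputs[0])
```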

1 Like

This is maybe a minor issue, but it would be nice to be able to copy and paste nodes between different Blender sessions, similar to how copy/paste works for 3D objects.

1 Like

The VSE interface has an image offset, but no scale or rotation.

2 Likes

Pinging @JeroenBakker @sergey @dfelinto and other Blender Foundation developers.

We have a good list of ideas here… What’s the best way we can prepare some of these for you?

Would any of these help?

  • UI Design Mockups
  • Usage Examples
  • Videos of similar implementations in other software
  • Pseudo code
  • Other…???

Thank you

Hi all!
Great work in listing these topics. Personally, I would love for the compositor to get more attention. I myself am not able to spend time on it this year due to other projects I have been assigned to. Before making any mock-ups, some parts (the bigger projects) should be checked with a developer.

Adding new nodes that follow the workflow we currently support shouldn't be an issue and should be doable by any developer.
A quick review of some topics that have passed this discussion:

  • OpenFX is built on Qt and is not compatible with our architecture.
  • Interfacing the compositor with other areas will create a dependency cycle, e.g. Render Layer -> Texture Output -> Render, or Sequencer Input -> Sequencer Output. It needs a clearer vision of what the compositor is, what parts should really be left to the sequencer (with sequencer nodes) and to texture nodes, how all these systems would work together, what is possible and what is not, and how it all fits together.
  • Caching is a really technical subject; I would say that https://developer.blender.org/T74491 will point out what kind of caching is feasible.
  • Real-time viewport compositing has been discussed a few times here, but it isn't a high-priority project compared to other projects.

Please don't copy-paste other software. We should focus on making Blender better using the principles that made Blender great.

It is good to see this post. Users should stick to the what and the why; developers should handle the how and the limitations. There isn't always a clear border, and that's where we should meet!

8 Likes

Hi, @JeroenBakker

Could you please clarify what you mean when you say that?

According to the Wikipedia page on OpenFX (API), OpenFX has a supported C++ library, and Qt is just used to create GUIs.
And if this is a big problem, is it solvable?

1 Like

I just reread the OpenFX specs, and it seems they are friendlier than they used to be. Still, it feels like a very complex project with multiple possible solutions, each with its own downsides.

Interoperability with OpenFX at the OpenGL level would also mean that we get two different UIs combined, which feels like a horrible solution. Looking at the details, I also have a lot of questions about whether the architectures will match, especially as Vulkan is becoming the target rendering framework.

OpenFX works if we change Blender's internals to match its memory manager, its threading model, and its workflow; that feels like a lot of overhead and transformation, so it isn't something that can be decided based on needs alone. Would we port all of our nodes over to OpenFX, and does that even work for all of our nodes? Or do we go the Nuke way, with multiple compositing engines that work together? This also feels like a very need-driven solution that can become hard to maintain and might limit future projects. The last OpenFX release was 5 years ago; there are occasionally some fixes on the GitHub branch.

BTW, I am not against adding frameworks like this, but it should fit and not become a burden we have to live with, especially as there isn't an active developer on the Blender side ATM. It should start with in-depth research and perhaps a PoC before it is clear how to do this.

But I would rather spend this time fixing current issues in the compositor and adding/restructuring nodes. That is feasible, and everyone benefits from the first commit onward.

10 Likes

I've just downloaded 2.83 LTS and couldn't find such a node. :thinking:
How do I use it?

It's in the Compositor workspace.

1 Like

That's for rendering and compositing.
What I'm proposing is an image output node for the material nodes, whose only output node currently is the Material Output.
That would allow us to generate and save bitmaps from all procedural maps, map transforms, and map operations.
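Until something like that exists, the closest workaround I know of is baking the map through Cycles into an Image Texture node and saving the result. A rough sketch, assuming the map you want to capture is wired into an Emission shader; the image name, resolution, and path are placeholders.

```python
import bpy

obj = bpy.context.active_object
mat = obj.active_material
nodes = mat.node_tree.nodes

# Bake target: an Image Texture node set as the active node.
img = bpy.data.images.new("baked_map", 1024, 1024)     # placeholder size
bake_node = nodes.new("ShaderNodeTexImage")
bake_node.image = img
nodes.active = bake_node

bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(type='EMIT')   # captures the emission, i.e. the procedural map

img.filepath_raw = "//baked_map.png"                   # placeholder path
img.file_format = 'PNG'
img.save()
```

A dedicated image output node would make this a one-node operation instead of a bake setup.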

1 Like