Sorry for the shameless plug. I’ve been playing with that idea for a while:
After having to work with the Blender compositor for the last few weeks, I’ve written down some thoughts on it. Feel free to ignore; I just had to get it off my chest.
- It’s painfully slow!
- Really basic and dumb – always recalculating everything instead of only recalculating affected nodes.
- No caching functionality for preview.
- Even if nothing is being displayed that has to be rendered, a simple frame change can keep the computer busy loading who knows what.
- To stop Blender from doing that, one has to untick “use nodes”. That is extremely hidden and shouldn’t even be necessary.
- No native way of solo-ing nodes (node-wrangler fixes that in a way).
- If you have to solo a node at one end of a big comp to check something and then go back to the other end of the comp script you have to zoom out, move, zoom in, and solo again.
In other compositing software it’s literally a single button press to achieve the same.
- So many crucial nodes are missing.
An exposure node was only added recently … I mean, come on.
Dilate/erode only works on a single channel; to make it work on RGBA one has to build a custom node group using drivers for the radii … really?
- Not able to zoom out far enough. A big comp script easily fills more than can be displayed.
- No shortcuts to display different channels. Having to manually select the desired channel in a drop-down menu is extremely slow.
- The frame node is pure garbage:
- No way to disable the annoying behavior that it parents other nodes all the time.
- The corner handle to resize the frame node is literally 1px, instantly raising my blood pressure.
- No way to set the label text bigger than 64px (which is really small in a big comp script).
- No straightforward way of re-ordering multiple frame nodes in Z when they overlap or occlude each other.
- The way to roto something is ridiculous.
- Probably a lot more that I’m forgetting right now.
In my opinion the whole compositor of Blender needs to be rethought and rebuilt from scratch:
- to store an arbitrary number of channels not just RGBA.
- to have the ability to define which channels a node is affecting.
- to have a proper caching system for previews, so that not everything has to be recalculated all the time.
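The “only recalculate affected nodes” idea from the list above can be sketched as a dirty-flag cache over the node graph. This is a minimal illustrative sketch with hypothetical names, not Blender’s actual API:

```python
# Dependency-aware recomputation over a simple DAG of nodes.
# Names (Node, invalidate, evaluate) are hypothetical, not bpy.
class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)
        self.cache = None
        self.dirty = True
        self.dependents = []
        for inp in self.inputs:
            inp.dependents.append(self)

    def invalidate(self):
        # Mark this node and everything downstream as needing recompute.
        if not self.dirty:
            self.dirty = True
            for dep in self.dependents:
                dep.invalidate()

    def evaluate(self):
        # Reuse the cached result unless an upstream change invalidated it.
        if self.dirty:
            self.cache = self.func(*(inp.evaluate() for inp in self.inputs))
            self.dirty = False
        return self.cache

src = Node("image", lambda: 10)
blur = Node("blur", lambda x: x + 1, [src])
grade = Node("grade", lambda x: x * 2, [blur])

print(grade.evaluate())   # first run computes the whole chain → 22
blur.invalidate()         # only blur and grade recompute; src stays cached
print(grade.evaluate())
```

Tweaking one node then re-evaluating only touches that node and its downstream dependents; everything upstream is served from cache, which is exactly the behavior the current compositor lacks.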
Mostly because it is a historical design that worked really well in the context of Blender Internal. Now things are exactly the opposite, which calls for designing the compositor in a totally new way.
I second every one of those points!
And as @anon10078140 says, I’d like to see some compositing in the viewport: some cheap but common and useful 2D filters applied directly to what you see in the viewport. For example: RGB curves, some tonemapping, bloom, lens distortion…
As for how to implement this, I imagined node trees for cameras, where you only find the relevant nodes.
Can you give examples?
What do you mean exactly?
Been doing comp for 20 or so years; here’s where Blender doesn’t work:
Missing Nodes:
- OCIO Nodes
- Time-related tools (framehold, timewarp, oflow)
- Color And Pixel Matrix nodes
- Pixel expression node (Fusion’s CustomTool or Nuke’s Expression & MergeExpression)
- Grain tools
- Metadata tools
- Paint and Roto as 1st class Nodes (we need to see p&r in context)
- Grid and spline warp
- CoordinateSpaceTransform (latlong to spherical to cross etc)
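A color/pixel matrix node of the kind listed above is conceptually just a per-pixel matrix multiply. A minimal NumPy sketch of the idea (illustrative only, not any compositor’s actual implementation):

```python
import numpy as np

def apply_color_matrix(rgb, m):
    """Multiply every RGB pixel by a 3x3 matrix (saturation, channel mixing, etc.)."""
    return np.einsum('ij,hwj->hwi', m, rgb)

# Rec.709 luma weights: a desaturation matrix writes the same luma to all channels.
luma = np.array([0.2126, 0.7152, 0.0722])
desaturate = np.tile(luma, (3, 1))

img = np.random.rand(4, 4, 3).astype(np.float32)
gray = apply_color_matrix(img, desaturate)
# All three output channels now hold identical luma values.
print(np.allclose(gray[..., 0], gray[..., 1]))
```

The same primitive covers channel swapping, hue rotation, and color-space conversions, which is why a generic matrix node is such a common building block in other compositors.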
Missing Core Functionality:
- Multichannel workflow
- Concatenation of image translation tools (with different filtering options)
- Temporal framework for referencing frames and values at different times.
- Viewport/Image viewer widgets for transforms, roto shapes, trackers, basically any on-screen controller
- Metadata handling (inject&extract metadata for burnins or custom data out of blender)
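Concatenation, mentioned in the list above, means collapsing a chain of translate/rotate/scale nodes into one matrix so the image is resampled (and softened by filtering) only once instead of once per node. A sketch with 3x3 homogeneous matrices (illustrative, not Blender code):

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# Instead of resampling after each node (three filtering passes, 3x softening),
# multiply the matrices and resample once with the combined transform.
combined = translate(100, 50) @ rotate(np.pi / 2) @ scale(2, 2)

point = np.array([1.0, 0.0, 1.0])   # a pixel coordinate in homogeneous form
print(combined @ point)
```

Each transform node only contributes its matrix; the single final resample is also where the user-selected filter (bilinear, cubic, etc.) would be applied.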
While Natron has its own set of issues, they have absolutely NAILED the workflow (mostly because it’s based on the Nuke, Fusion, Shake, etc. workflows). I would urge anyone who’s looking into upgrading Blender’s compositor to take a deep look at Natron before drawing up anything new.
Compositing has been figured out; there’s no need to re-invent anything.
To elaborate further: a camera node tree would act as a “shader” for the camera, changing the look of what the virtual lens sees. This would add those little lens imperfections that make renders “more photorealistic”. It is something that many do (and abuse) in post, even outside Blender, and it would be cool to have it in the realtime preview.
Every other compositing job, such as alpha over, renderlayers, matte, …and anything that belongs to actual postpro, would go in Composite editor as usual.
It’s very common in VFX to use temporal denoising to create more stable animations. We typically denoise slow-moving parts with a 3-5 frame median or average, by pushing the previous and next frames into the current frame’s domain. You can do this in Blender’s compositor now, but you’ll need three image read nodes.
We also take the high-speed areas and denoise them using motion blur, by combining the previous and next frames with the current one.
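The neighbour-frame combination described above is easy to sketch with NumPy, assuming the frames have already been motion-compensated into the current frame’s domain (the warping step is omitted here):

```python
import numpy as np

def temporal_denoise(frames, mode="median"):
    """Combine N aligned frames of shape (H, W, C) into one denoised frame.

    The frames are assumed to be motion-compensated (warped into the
    current frame's domain) before being stacked.
    """
    stack = np.stack(frames, axis=0)
    if mode == "median":
        return np.median(stack, axis=0)   # robust choice for slow-moving areas
    return np.mean(stack, axis=0)         # averaging doubles as motion blur

prev, cur, nxt = (np.random.rand(8, 8, 3) for _ in range(3))
denoised = temporal_denoise([prev, cur, nxt])
print(denoised.shape)
```

With three image read nodes offset by -1/0/+1 frames, the same median/mean combine is what you would wire up manually in Blender’s compositor today.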
Well said. As for the connection part: there is no better way than a live connection through sockets. I have fought against bpy many times for new user interactions (e.g. puppeteering live mocap) and new interface modes (e.g. 2D axis widgets). Now I just use whatever Blender brings to the table, and the rest is pure bpy data, ready to be read and written.
For example, having a blank node in the compositor that can make an HTTP request to an address and receive a PNG would be preferable to any other way. Not that it would be fast, but it is the key to multi-tier architectures.
One of the best projects in terms of interconnectivity I have tested so far is this. It gives close to real-time performance; at least as fast as an artist can move their hands, the feedback is immediate.
Just do a pip install ws4py from your system, and throw ws4py to
If you consider that the application is supposed to be time-critical, there is no other way than to design your code within an exact domain with boundaries.
Though I am a great fan of lazy evaluation, I sleep and wake up thinking about it.
For example, typical human reaction time is about 250 milliseconds, and nerve signals travel at up to roughly 120 m/s. From the time you see a banner pop up and begin moving the mouse, you have 300 to 500 ms (a generous half second) to do anything you want and it will go unnoticed (excluding additional milliseconds for decision making or micro-pausing, in plain biology terms).
Talking about the caching mechanism: I don’t know exactly what’s on your list, but one of the most notable examples to consider is this.
In Unreal Engine 4, as you probably know, when you enter a scene all textures appear blurry at first. Models might appear low-detailed, at their corresponding lower LOD. Gradually the textures get sharper as they are replaced with higher resolutions. This is surely done in a threaded manner, so in practice it only affects the “view” layer (if you think in MVC terms).
In practice this translates to reasonably fast loading without paying a startup cost. It’s not exactly a dream come true, but if you consider eliminating what annoys users and focusing on what actually makes them happy, there are lots of lessons to be learned.
I could easily apply this idea in the compositor. For example, even though I have a 1980x1680 composite view, the bulk of the work is done in preview panes, because you always split your view; it is never the case that a full monitor is dedicated to nodes.
This means that there can be these types of views:
- Full Resolution: exactly as everything runs now.
- Half Resolution: acts as a placeholder until the full resolution is generated.
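The two-tier view above can be sketched as a generator that yields a cheap half-resolution result first, then the full-resolution one. The `process` function is a hypothetical stand-in for an expensive compositor graph:

```python
import numpy as np

def process(img):
    """Stand-in for an expensive compositor graph (here: a simple box blur)."""
    out = img.copy()
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:]) / 4
    return out

def preview_then_full(img):
    # 1) Half resolution: subsample, process, and upscale for a fast preview.
    half = process(img[::2, ::2])
    preview = half.repeat(2, axis=0).repeat(2, axis=1)
    yield preview                      # shown immediately in the viewer
    # 2) Full resolution: replaces the preview once it is ready.
    yield process(img)

img = np.random.rand(64, 64)
preview, full = preview_then_full(img)
print(preview.shape, full.shape)
```

In a real implementation the full-resolution pass would run on a worker thread and swap into the viewer when done, the same way UE4 streams in sharper mip levels.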
I’m still dreaming of an implementation of task T53790 from Jeroen Bakker. That would seriously be a massive performance improvement, and afaik something new on the market.
Let’s say you want to use a roto as a mask for adding a shadow or something.
So you set the mode of the image editor (which displays the viewer node) to “mask”, then click “new” to create a new mask, then probably name the mask (if you don’t want to end up in a mess), then start drawing the mask, then you have to add a mask node in the node editor and select the desired mask. That’s a huge number of steps for a simple task like adding a quick roto. In some other compositing software it works like this: you press “r”, which creates a roto node, and you can immediately start drawing the roto in the viewer … that’s it.
And to make things worse, Blender recalculates the comp (viewer node) on every change of the mask, even if you don’t use the mask anywhere.
Try this: add a picture to your comp, blur it 100px, and connect the viewer node to it. Now try to make a mask while viewing the blurred image: every time you move a point of the mask, Blender recalculates (even though you are not doing anything with the mask yet) … that’s completely nuts. And the hack of unticking “use nodes” doesn’t help here either.
I’m looking into picking some tasks for the compositor. So far it seems performance is the biggest issue, so not sure if adding more nodes will make things significantly better at this stage. Also, some patches adding new nodes didn’t get any attention for a while (see D1984 and D2411).
It seems there are a lot of arguments against that approach so the revision for sampling based compositor has been abandoned.
Which makes T81650 seem like the most promising way forward.
Please do this to Blender. Compositing is essential for great results and currently a nightmare to use.
I don’t understand why the team focuses on the VSE instead of the Compositor. Both should evolve equally in my opinion, or the Compositor should have priority because its use cases are much more common, for both stills and animations. Right now it is seriously lacking in performance, which is the bigger problem.
I have noticed that with very high resolution images, the initializing execution before compositing only uses half of the CPU threads:
You make very good points, but please don’t be so disrespectful. There is no real compositor module owner with a concrete roadmap and the ambition to make the compositor work in heavy production environments. We need to find someone for this position, or wait until someone from the current dev team has time.
Sounds great. Does it make sense to contribute smaller things first, so you get a better feel for the development workflow? Your project sounds like a big task to integrate into Blender.
It wasn’t my intention to be disrespectful. I am a big fan of Blender, the community, and its developers. I started with version 2.36, if I remember correctly, and have used Blender on a daily basis for many years.
No disrespect towards the developers, they kick ass … but compositing in Blender (at the moment) is just painful and feels too broken to repair if not started from scratch.