Compositor improvements

I have to advise against a cache node. Simply have a memory budget (or cached-images budget) and garbage-collect based on last time used and time to compute.
It makes it more performant and simpler for the artist.
Although a disk cache node could be useful and something to look into.
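To make the eviction idea concrete, here's a minimal sketch in plain Python (all names are mine for illustration, not anything from Blender's code): keep cached node outputs within a memory budget and, when over budget, drop the entries that are cheapest to recompute and were used longest ago.

    import time

    class NodeOutputCache:
        """Toy cache: evict by cost to recompute vs. time since last use."""

        def __init__(self, budget_bytes):
            self.budget = budget_bytes
            self.entries = {}  # key -> [image, size, compute_seconds, last_used]

        def put(self, key, image, size, compute_seconds):
            self.entries[key] = [image, size, compute_seconds, time.monotonic()]
            self._evict()

        def get(self, key):
            entry = self.entries.get(key)
            if entry:
                entry[3] = time.monotonic()  # refresh last-used time
                return entry[0]
            return None

        def _evict(self):
            def score(item):
                _, (_, _, cost, last_used) = item
                age = time.monotonic() - last_used
                # Cheap-to-recompute and long-unused entries score lowest.
                return cost / (age + 1e-6)

            # Keep dropping the least valuable entry until we fit the budget.
            while self.entries and sum(e[1] for e in self.entries.values()) > self.budget:
                key, _ = min(self.entries.items(), key=score)
                del self.entries[key]

The compute time per entry could come from a static benchmark or be measured at render time, as suggested below.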

2 Likes

This is a great concept. It’s very similar to how BlackMagic Resolve integrated Fusion in the latest versions, and I believe Nuke Studio does something similar.

Thanks @Moniewski. This makes total sense. I figured I’d give it at least a couple of days before setting a deadline and starting to filter. But you’re probably right, if the thread stays open indefinitely it might grow into an unwieldy monster.

Any suggestions on the best tool to organize the ideas and start sorting through them? I was thinking of something like Trello or a similar Kanban-board thing. But I’m open to ideas…

That would explain the lack of support I’ve seen for OpenFX tools overall. Always thought it was a great concept, but if they’re clunky and slow it might not be the best idea…

Exactly. The caching system should work fully automatically. We can calculate the speed of each node very precisely, as a static test or even at runtime. Buffers should be assigned at the places in the graph that take the most time to compute, and the cache queue should be updated based on the last tweaks made by the user (don’t assign many buffers to a chain that wasn’t touched; the last one is enough). Filter nodes are heavy by design and all need caching, but general math and color correction can work in real time.
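A rough sketch of the placement side of that idea (plain Python, every name invented for illustration): given measured per-node compute times, buffer only the expensive outputs feeding the node the user last touched, so they don't get recomputed on every tweak, and let cheap math/colour nodes run live.

    def pick_buffer_points(node_times, upstream_of_tweak, max_buffers=3, threshold=0.05):
        """node_times: {node_name: seconds}, measured statically or at runtime.
        upstream_of_tweak: nodes feeding the node the user last touched.
        Returns the node outputs worth caching, most expensive first."""
        candidates = [
            (name, t) for name, t in node_times.items()
            if name in upstream_of_tweak and t >= threshold  # skip cheap math/color nodes
        ]
        candidates.sort(key=lambda item: item[1], reverse=True)
        return [name for name, _ in candidates[:max_buffers]]

    # Example: heavy filter nodes get buffered, cheap color correction runs live.
    times = {"Blur": 0.80, "Glare": 0.45, "ColorBalance": 0.01, "Math": 0.001}
    print(pick_buffer_points(times, upstream_of_tweak={"Blur", "Glare", "ColorBalance"}))
    # -> ['Blur', 'Glare']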

@Jomo Whatever tool suits you. The final proposal should imho follow the guidelines of a patch/design task on Phabricator: explanation of the main problem(s) > proposed solutions > details. A grocery list of features won’t cut it. :wink:

2 Likes

Yes, but DaVinci lacks the ability to place footage on a “timeline-thing” when inside Fusion, so I disregarded it in the post. Moving your footage on a timeline is way more intuitive than punching in numbers for in and out points. Also, prerendered footage makes way more sense with a timeline/VSE, where you directly see what you’ve got (thinking of graphics elements or titles)… However, I pitched this idea at the Blender Institute some time ago, and it might be way out of scope for the compositor and VSE.

Basically, a sequencer strip (like an adjustment strip) or a modifier on a strip could send the image buffer from the sequencer into an input node in the compositor, carrying the name of that sequence. An output node with the same strip name could then return the image buffer to the sequencer.

Or in other words, round-trip the sequencer image through the node editor. It might become very slow, but maybe the new disk cache features in the Sequencer can be used to speed things up?

This stuff is by far the most requested feature in the Right-Click Sequencer section.

3 Likes

Hello guys!
Some small ideas:

  • Merge node (or Join node) - combine the Alpha Over node and the Mix node. I think it would be simpler to have the merging of two branches in a single node.
  • Moving the reroute node - you should be able to move the reroute node like any other node instead of having to hit the G key. Perhaps adding arrows coming in and out could help, so that when your mouse is over the node you move it instead of adding a connection.
  • Frame node resize - at the moment you need to put every node inside the frame node for it to be contained, and the frame automatically resizes to fit the nodes inside. It would be cool to be able to resize the frame node and have it contain the nodes inside without necessarily resizing itself (I don’t know if that’s clear ^^).
  • Hide input - when you have a massive comp you don’t want to see every connection, so it would be cool to have an option on every node to hide its input connections; if you want to see them, you just select the node.
  • User parameters - the possibility to add custom parameters (float, int, color, Python button, etc.) to a node.
1 Like

The Colorspace Transform Node could be an OCIO node. The ability to apply a colorspace transformation is not only useful for I/O color management; it’s also useful for grading (for instance, when you need to grade your Filmic output, but after desaturation, which is currently impossible).
Implementing OCIO nodes might require some changes in the image view/output pipeline and interface, though: if color transforms are applied in your compositing tree, you may want to disable the view transform for the viewer and the display-referred output, as you’re taking the wheel.
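For what it's worth, what such a node would do per pixel is roughly this - a sketch using the PyOpenColorIO bindings; the colorspace names depend on the config, and the exact calls differ between OCIO 1.x and 2.x (2.x shown), so treat it as an assumption-heavy illustration rather than how Blender would actually wire it up:

    import PyOpenColorIO as OCIO

    # Use whatever config $OCIO points at (e.g. Blender's filmic config).
    config = OCIO.GetCurrentConfig()

    # A colorspace-to-colorspace transform, like the proposed node would expose.
    processor = config.getProcessor("Linear", "Filmic sRGB")  # names depend on the config
    cpu = processor.getDefaultCPUProcessor()  # OCIO 2.x API

    # Apply to one scene-referred pixel; a node would do this over the whole buffer.
    print(cpu.applyRGB([0.18, 0.18, 0.18]))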

7 Likes

About a month ago, I wrote a compositor node to apply lookup tables in the Blender compositor. I used an OCIO color transform for that, and I’m sure we could take that code and generalize it into an OCIO color transform node.

The issue I hit (I think) was that I didn’t have the right colorspaces to feed into the color transform, and the results came out a bit wonky. I haven’t had time to look at this again since.

On a separate note, applying lookup tables is something that I really want in the compositor! So:

Lookup Tables - allow me to apply a LUT to the image. If we do create OCIO nodes, we can use those, but they don’t really have an artist-friendly name.
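As a companion sketch for the LUT side (same caveats: PyOpenColorIO, the file path is made up, and the binding details vary between OCIO versions):

    import PyOpenColorIO as OCIO

    config = OCIO.GetCurrentConfig()

    # A FileTransform is how OCIO loads .cube/.3dl/etc. lookup tables.
    # (Interpolation can be set too; the constant names differ between OCIO versions.)
    lut = OCIO.FileTransform()
    lut.setSrc("/path/to/my_grade.cube")  # hypothetical LUT file

    processor = config.getProcessor(lut)
    cpu = processor.getDefaultCPUProcessor()  # OCIO 2.x; 1.x applies via the Processor itself
    print(cpu.applyRGB([0.5, 0.25, 0.1]))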

3 Likes

They seem to be good enough for Nuke or DaVinci Resolve. I get pretty bonkers timeline performance with OpenFX. I’ve even seen UHD ProRes footage with heavy Lens Blur play back in real time - on a MacBook! (“seen” as in “tried myself”)

2 Likes

These are some pretty cool ideas! This one caught my attention in particular.

I’ve been researching and tinkering with this particular thing lately. Have found a few workflows…

You can map a texture to geometry directly, like sculpting a grid and generating a comp element from a 3D render. This is OK for cases where the features don’t travel very much in the source: say, if you’ve got a still image, or if the source video element doesn’t contain much motion, or if it does but the motion can be normalised by using motion stabilisation as a mid-point. Useful but fiddly.

There’s also inputting vector fields into the Displace modifier and scratching your head a lot about what is going on and why. Let’s leave that one.

Then there’s the promising one based around using Map UV as an inverse mapping node. Even though it’s designed for taking a texture map and applying it at composite time, it’s also useful as a mapping function.

You can make a UV map in the compositor by running a horizontal and a vertical blend texture out of Texture nodes plugged into Combine RGBA - R for horizontal and G for vertical. You use Map UV to apply the map to an image. If you distort the map so that it’s not linearly running from 0 to 1 across the image - run the texture node through square root before Combine RGBA - you can distort the image.

Map UV takes an alpha channel on the Z component of the UV map as well, so you get an “opacity control” too.
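In case it helps anyone reproduce this, here's roughly that graph built with Python (node idnames as in 2.8x; the blend-texture setup, especially the flip for the vertical gradient, is an assumption on my part rather than a tested recipe):

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    tree = scene.node_tree

    # Two blend textures: one running horizontally, one vertically.
    tex_u = bpy.data.textures.new("UVMap_U", type='BLEND')
    tex_v = bpy.data.textures.new("UVMap_V", type='BLEND')
    tex_v.use_flip_axis = 'VERTICAL'  # assumption: flips the gradient for the V channel

    node_u = tree.nodes.new("CompositorNodeTexture")
    node_u.texture = tex_u
    node_v = tree.nodes.new("CompositorNodeTexture")
    node_v.texture = tex_v

    # Optional distortion: push the U gradient through a square root.
    sqrt = tree.nodes.new("CompositorNodeMath")
    sqrt.operation = 'SQRT'

    combine = tree.nodes.new("CompositorNodeCombRGBA")
    map_uv = tree.nodes.new("CompositorNodeMapUV")
    image = tree.nodes.new("CompositorNodeImage")  # assign a loaded image to image.image

    tree.links.new(node_u.outputs['Value'], sqrt.inputs[0])
    tree.links.new(sqrt.outputs['Value'], combine.inputs['R'])    # U -> R
    tree.links.new(node_v.outputs['Value'], combine.inputs['G'])  # V -> G
    tree.links.new(combine.outputs['Image'], map_uv.inputs['UV'])
    tree.links.new(image.outputs['Image'], map_uv.inputs['Image'])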

Now, for a robust distortion/warp you want to be able to handle a case where your target image features are in motion across UV coordinates, assuming that you can’t normalise the UV with motion stabilisation or something. This is where the geometry-based workflow falls over, because we can’t do “inverse texture mapping” (video + geometry = texture map) in Blender, and we can’t tell Blender to keep a particular feature in one position on the UV Map.

It’s more efficient to stay in 2D and tell Blender “this is the bit of screen I want to warp, this is where I want it to end up”. And to be able to do that over a number of frames.

So a workflow might be…

  • set the boundaries of the source shape of the feature - basically roto the bit of image you want to reshape
  • mark the origin points of the features/areas you want to distort in the source image
  • specify offsets for those features to say where you want them to distort to
  • make Blender use that information to generate a warping field - effectively a whole-screen UV map
  • send the image and the warping field through Map UV node to get the distorted element… either as literal nodes or under the hood

UV maps are an inverse mapping function which we have at composite time, so why not use them? :slight_smile:

Since it’s 2D distortion, the natural place to mark origin points and offsets is the movie clip editor, and we’d want the origin points and offsets to be able to parent to tracking markers so we can automate them. While it’s possible to do this stuff from the viewport using empties and hooks, it is a long way around to go from 2D to 3D and back into 2D again.

Now, while you could try to generate your image warping field by using just individual tracking points, this can lead to poor results.

  • Join the points together using triangulation to cover the whole screen? Maybe, but it creates hard boundaries which don’t warp very naturally.
  • Create a vector field from the points using the same algorithm Krita does (RBF, I think?) - smoother, definitely, but it can result in lumpiness where parts of the image get doubled. Krita also doesn’t handle the boundary very well, but since we can get that working with roto in Blender, we’re OK. (There’s a rough sketch of the smooth-field idea just below.)
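For illustration, here's the "smooth field from scattered points" idea in a few lines of numpy. This uses simple inverse-distance weighting rather than whatever Krita actually uses, so it's purely a sketch: each pixel's UV offset is a weighted blend of the tracked points' displacements.

    import numpy as np

    def point_warp_field(width, height, points, offsets, power=2.0, eps=1e-6):
        """points, offsets: (N, 2) arrays of source positions and displacements.
        Returns a (height, width, 2) array of per-pixel UV offsets."""
        ys, xs = np.mgrid[0:height, 0:width]
        pixels = np.stack([xs, ys], axis=-1).astype(np.float64)    # (H, W, 2)
        diff = pixels[:, :, None, :] - points[None, None, :, :]    # (H, W, N, 2)
        dist = np.linalg.norm(diff, axis=-1)                       # (H, W, N)
        weights = 1.0 / (dist ** power + eps)
        weights /= weights.sum(axis=-1, keepdims=True)
        return np.einsum('hwn,nc->hwc', weights, offsets)          # (H, W, 2)

    # Two tracked features, each pushed 10 px in a different direction.
    pts = np.array([[100.0, 80.0], [300.0, 200.0]])
    offs = np.array([[10.0, 0.0], [0.0, -10.0]])
    field = point_warp_field(640, 360, pts, offs)
    print(field.shape)  # (360, 640, 2)

The lumpiness the post mentions is exactly what you get when two nearby points pull in different directions and the blend folds the image over itself.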

Back in the 1990s, CG researchers found that animators got much better results when they could use multi-segment lines to define the source and offset points, as opposed to single points (not enough info) or warping the field by hand (too much to do).

So following along from this, and trying to reuse UI patterns that already exist in Blender, you could have a distortion control object made up of line segments which work not unlike the Mask tool we can already use for roto. Each line segment has

  • a starting point and ending point in the “source” image, represented by something like Mask handles
  • a starting point and ending offset point, used something like Mask feather controls
  • a “warp factor” for that part of the line segment to say how far towards the offset the warp should be

If we have a factor for warp, we can use that to do animated warping… combine two warps going in opposite directions with switched origin and offset points, and you get morphing. :slight_smile:

The line-segment control method comes from Beier and Neely’s field-morphing algorithm, which was responsible for the famous morphing sequence in the music video for “Black Or White”. To do a morph, you’re warping your starter image’s feature shapes towards their equivalents in your target image, while unwarping your target image’s feature shapes away from the starter image’s equivalent shapes. And that would be super neat. :slight_smile:
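For anyone curious, the maths of that line-segment warp is compact enough to sketch in numpy. This is a single-pixel version of Beier-Neely field warping; the a/b/p weighting terms follow the paper's convention, but treat my parameter choices and function names as illustrative, not as how a Blender node would have to do it.

    import numpy as np

    def perp(v):
        return np.array([-v[1], v[0]])

    def beier_neely_source(x, dst_lines, src_lines, a=1.0, b=2.0, p=0.5):
        """x: destination pixel (2,). dst_lines/src_lines: lists of (P, Q) segment
        endpoints in the destination and source images. Returns the source pixel
        to sample, i.e. an inverse mapping suitable for a Map UV style lookup."""
        x = np.asarray(x, dtype=float)
        total_disp = np.zeros(2)
        total_weight = 0.0
        for (P, Q), (Ps, Qs) in zip(dst_lines, src_lines):
            P, Q, Ps, Qs = (np.asarray(v, dtype=float) for v in (P, Q, Ps, Qs))
            d = Q - P
            ds = Qs - Ps
            length2 = d @ d
            u = ((x - P) @ d) / length2                  # position along the segment
            v = ((x - P) @ perp(d)) / np.sqrt(length2)   # signed distance from it
            xs = Ps + u * ds + v * perp(ds) / np.linalg.norm(ds)  # same u, v in source
            # Distance from the segment, for the weighting term.
            if u < 0.0:
                dist = np.linalg.norm(x - P)
            elif u > 1.0:
                dist = np.linalg.norm(x - Q)
            else:
                dist = abs(v)
            weight = (np.sqrt(length2) ** p / (a + dist)) ** b
            total_disp += (xs - x) * weight
            total_weight += weight
        return x + total_disp / total_weight

    # One segment that rotates slightly between source and destination.
    src = [((100.0, 100.0), (200.0, 100.0))]
    dst = [((100.0, 100.0), (200.0, 120.0))]
    print(beier_neely_source((150.0, 110.0), dst, src))  # -> roughly [150. 100.]

Running that per pixel gives exactly the kind of whole-screen inverse map you could feed into Map UV.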

So that’s what I’m tinkering with at the moment - warping with an eye to morphing, using UV warping - and that’s what I’ve learned so far with respect to how we might approach distortion in terms of control mechanisms and reusing what’s already there. (Specifically with respect to morphing there’s also tech like optical flow, but that’s way over my head right now and doesn’t cover the warping/distortion case.)

So that’d be a semi-automated workflow using tracking and Map UV… and more calculus than I currently know. :slight_smile:

There’s also the brute force “squidge stuff into place manually” workflow like in this After Effects tutorial - useful to have. Do we already have it? Maybe if we use single frame Shape Keys on a mesh which emits a UV map at render time so we can Map UV it, but it’d still go through the 2D - 3D - 2D workflow… but it might work!

Good luck with your features!

3 Likes

Unfortunately, this is not the way to do it (I’ve tried this approach for the sequencer, unsuccessfully). If you want to improve the node editor, you’ll have to do it yourself by submitting patches. I recommend you guys spend a little time installing the stuff needed for building Blender: wiki.blender.org/wiki/Building_Blender
I use SourceTree (sourcetreeapp.com) to clone Blender from Git locally (git://git.blender.org/blender.git), and then it’s pretty much a matter of opening cmd, going to the blender folder, and typing make or make lite, and it builds.
The free VS Code is excellent for navigating around in the code and tinkering with it. If you make some improvements to the code, SourceTree will detect the changed files, and all you have to do is click “Create patch” and upload it to Phabricator. And if something fails during building, this is a friendly forum and they will help you out: blender.chat/channel/blender-builds
I’m not a coder, but the Blender source code has plenty of code to learn from and tinker with. If the compositor is to be improved, it will only happen if someone like you really needs it and finds a way to fix it yourselves. If that sounds intimidating, it is still possible to help out by applying submitted but still uncommitted patches, testing them, and doing bug finding.

4 Likes

I’d gladly welcome this, or any form of caching in the Shader Node Editor too!

1 Like

Code reviews are very useful as well.

This is what I have found out so far about how to start hacking your own compositor nodes.

Note: This assumes you have already got as far as building Blender from source - you’ve downloaded the source, you’ve set up CMake, and you have a build system like Visual Studio or Xcode or whatever Linux uses. But now you want to roll up your sleeves and hack away.

These are not official best practice instructions. They are more like “this worked for me” type notes.

This also assumes you have some knowledge about how to hack code together - not necessarily that you know C++ or how to take a partial derivative, just that you can read existing code which already works and you’re happy to experiment with sticking bits of code together to try to make something work. Hacking this way is a good way to get started.

Speaking of how things work, compositor nodes work something like this:

Blender gives a node an XY coordinate and a location in memory. The node puts a value into that memory location that says what the pixel at that XY coordinate should be. It makes that decision based on whatever its inputs are (images, vectors, colours, numbers), whatever its non-input parameters are (e.g. the Flip X and Flip Y checkboxes in Flip X/Y), whatever procedure it does, whatever it reads into memory to use with that procedure, etc.

Now, in normal operations, that XY coordinate could be anywhere on the screen. Your output can’t read the output of other XY coordinates. If you’re just changing the colour of a single pixel to make it more orange, it doesn’t matter what colour the other pixels are. But if you’re doing a blur operation, you do care what colour other pixels are because to blur you have to chuck away detail by averaging the values of surrounding pixels. And if you’re doing Map UV, you have to read a teensy bit of the UV map but the whole texture map because you don’t know ahead of time where on the texture map you’ll be sampling the colour from.

Sometimes you can just precompute a whole screen and copy its values into their memory locations as Blender asks for them.
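To make that concrete - this is just the concept in Python, not Blender's actual API; the real code is C++ in the COM_*Operation files listed later in this post - a colour tweak only needs the input value at the same XY, while a blur has to sample a neighbourhood:

    def execute_pixel_tint(x, y, sample):
        """Per-pixel op: only needs the input at the same coordinate."""
        r, g, b, a = sample(x, y)
        return (r * 1.1, g, b * 0.9, a)  # nudge towards orange

    def execute_pixel_box_blur(x, y, sample, radius=2):
        """Blur: has to read a whole neighbourhood of the input for one output pixel."""
        acc = [0.0, 0.0, 0.0, 0.0]
        count = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                px = sample(x + dx, y + dy)
                acc = [a + p for a, p in zip(acc, px)]
                count += 1
        return tuple(c / count for c in acc)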

Now let’s say you want to write a compositor node. Let’s say your compositor node is for sampling colours based on X and Y coordinates as an input. Something super simple.

Make a branch for your code…

git checkout -b node_samplecolor

And then make some files.

Here are some files you need to create in source. These files can be copied straight across from any other similarly named files to begin with - the important part is to have them there containing something.

I’d recommend finding a “buddy” node or three which already exist in Blender. It should do something you want to do - read an image, accept a vector, etc. Basically you are going to be using those as a starting point and changing them as an exercise, or to make them do something new that you want.

This file contains code to register the node as a Thing What Do Stuff. Its input/output names are set here.

source/blender/nodes/composite/nodes/node_composite_samplecolor.c

These files contain node setup information.

source/blender/compositor/nodes/COM_SampleColorNode.cpp
source/blender/compositor/nodes/COM_SampleColorNode.h

These files are where the node does the procedures that make it useful.

source/blender/compositor/operations/COM_SampleColorOperation.cpp
source/blender/compositor/operations/COM_SampleColorOperation.h

To make those files, just copy files with the same prefix and extension in the same directory. They do not have to actually contain the code you want to run yet, they just have to be there so CMake can include them in the IDE’s project files.

To continue with my example, I wanted to sample colours based on an XY coordinate, and the Map UV node already does that sort of thing. So to start with, (my “buddy” code), I just copied the Map UV code and renamed any reference in that code from MapUV to SampleColor (or MAPUV to SAMPLECOLOR…).

Now you have to modify a couple of CMake files. This is where you tell Blender to include your files in the project build. Find the spots in these CMakeLists.txt files which already list some composite nodes and pop your filenames alongside them.

source/blender/compositor/CMakeLists.txt
source/blender/nodes/CMakeLists.txt

At this point you should be able to run CMake and load something into your IDE to start hacking.

To actually test your work, there are some other files you have to change.

Be consistent in how you name things but mostly just follow what the other nodes do as an example - it’s hacking to get something working so you don’t have to know why the other nodes do things this way yet.

This file is what you change to add the node to the actual UI in Blender.

release/scripts/startup/nodeitems_builtins.py
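For the example node, the addition to that file looks something like this (the category list and NodeItem pattern are really how that file is structured; "CompositorNodeSampleColor" is just the hypothetical idname used in this walkthrough):

    # In release/scripts/startup/nodeitems_builtins.py, inside the
    # compositor_node_categories list, add a NodeItem to an existing category:
    CompositorNodeCategory("CMP_OP_COLOR", "Color", items=[
        NodeItem("CompositorNodeMixRGB"),
        NodeItem("CompositorNodeSampleColor"),  # hypothetical node from this walkthrough
        # ... the rest of the existing entries stay as they are
    ]),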

This file is where you register the node you wrote (look for all the register_node_type_cmp... functions at the bottom)

source/blender/blenkernel/intern/node.c

And you add that function to this file too.

source/blender/nodes/NOD_composite.h

This is where you add an “ID” for your composite node name. You can pick the next highest number up, or whatever you like, as long as no other node is using it.

source/blender/blenkernel/BKE_node.h

This is where you add a definition to declare that your node is indeed a thing which is a node. It is a big data structure of nodes and other stuff. Just follow the example of other nodes.

source/blender/nodes/NOD_static_types.h

Here you can add an extra handle to a switch statement to make your node “convert” from the old fullscreen compositor. I’m not 100% sure whether this one is strictly necessary, but if your node doesn’t work… maybe put something in this file too.

source/blender/compositor/intern/COM_Converter.cpp

I’m not sure what the below file does exactly either, but make sure you add your node’s details to this too - just find the other composite nodes and follow what they’re doing.

source/blender/makesrna/RNA_access.h

If you do all that right, you should be able to compile some code and swear at it for not doing what it should be doing.

Hope this saves someone some websearching. If you get stuck, try looking through other compositing nodes (now you know where to find them) or check through code reviews on developer.blender.org for compositing nodes. They are a goldmine. :slight_smile:

Happy late night hacking! :slight_smile:

8 Likes

Here is one idea.

Real-Time Viewport Compositing – preview composites in the viewport in real time – this would link creation and compositing closer together and allow one to preview changes faster. Probably the best way to implement this is to add a new output node, a Real-Time Viewer node, so you could exclude slower nodes and just preview the ones you want. This does rely heavily on the compositor getting performance improvements, but with the direction technology is going it seems very possible.

2 Likes

I’m really interested in a better compositor, but it really needs a better performance system first, especially for animation. Right now it’s slow but doable (somehow) if you do stills. But for animation it is impossible to work with unless you have a great deal of time available.

1 Like

I have UI suggestions: In Fusion, the inputs and outputs of the nodes rearrange themselves according to where they lead. If the output leads to a node above, it will be located on top of the node; if it leads to the right, it will be placed on the right edge of the node, etc. This way I can build vertical node flows as well as horizontal ones. Of course it is tidy to have the inputs and outputs always in the same position, BUT I find it very restricting that the node flow has to be from left to right (or it will look messy)…

Another thing is the look of collapsed nodes. All the inputs and outputs are so close together and, on top of that, indistinguishable… One idea: different shapes for inputs and outputs could make it possible to move them around the node edges more freely and keep them usable and distinguishable even when the node is collapsed.

One other cool thing from Fusion: drop a connection line on a node and hold Alt to get a list of all the inputs and choose one. This could also be useful for collapsed nodes.

And last but not least, anyone working with OCIO will know the struggle of that insanely long colorspace list that isn’t completely visible if you are zoomed in too close in the node editor!

2 Likes

I need 3 features to help composite 2D images from Inkscape. For example:

• Direct image rotation (no need to use an effect strip) - the ability to rotate a 2D image (3D would be good too, if it’s not hard). The compositor already has a translate feature checkbox and it works well; please add rotate.
• Direct image scaling (no need to use an effect strip) - the ability to scale a 2D image. The compositor already has a translate feature checkbox and it works well; please add scale.
• Frame step for image strips - I often want to import an image strip but change the image every 2 or 3 frames instead of every frame; this would be an integer value of 1 or greater. Animation is often updated every 2 frames instead of every frame to save drawing work. This is NOT playback speed like 24 frames/s or 30 frames/s.

3 Likes

Most important suggestions above: Performance Improvements & Caching, Better Mask Editor, “Scene Node”, “Multiple viewers”, “2D Text”, “File Output Render”

I will also add:

  • Multi-channel workflow improvements - A way to pass multiple channels between nodes with a single connection
  • Cleaner workflow between 3D and Compositing - Including decoupling 3D render action & Comp render action. When rendering the comp, only recalculate the 3D Render when the 3D scene has changed, otherwise just process the comp and read the 3D render from disk.

3 Likes