This is a brief summary of the meeting “VFX Artists and Studios” that took place at the Blender Conference on October 25, 2024. The meeting mostly focused on the compositor, although the video sequencer and motion tracking were briefly discussed. The goal was to get to know the users better and to identify the most important feature requests.
As the moderator, I could not follow all the points users raised, especially when it comes to interoperability with other software. These points are noted below; further discussions with the users who brought up these pain points are planned.
Attendees:
Habib (moderator)
~60-70 artists
– Usage of the Blender compositor: 50% NPR, 50% live action
– Skill level: the majority is intermediate to expert; ~10% are beginners/hobbyists
– The majority has used other compositing software before. ~50% still use Nuke, After Effects (AE), or DaVinci Resolve alongside Blender
Meeting notes:
Speedup
Recent speedups are very nice!
Viewport compositor is cool
Glare node
Use case: product animation for luxury brands. Artists need a convenient way to apply glare to only some parts of the image. This is currently possible in Blender but very cumbersome; in DaVinci Resolve it is easier because you can set the parts of the image where the glare takes effect using gizmos.
Use case: color grading after animation. Glare doesn’t have alpha when exported, which leads to problems when using AE. ~50% of the room agree this is a problem. (It is unclear to me whether the problem lies in Blender or in AE; follow-up talks with users are planned.)
Caching
Use case: NPR compositing. At some point the compositing becomes so complex that it can’t be realtime anymore, so a caching mechanism is needed.
Tweaking a single node feels slow. Can Blender cache everything up to the node being edited? Most people familiar with Nuke expect this behavior: if you change one parameter on a node, you are likely to change it again, because you rarely hit the perfect value on the first try. (A rough sketch of the idea follows below.)
Workarounds with proxies exist, but they are complicated to set up.
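For illustration, here is a minimal, hypothetical sketch of the per-node caching users are asking for. The names and structure are invented for this example and are not Blender’s actual compositor internals: each node’s result is memoized under a key derived from its own parameters plus its upstream chain, so tweaking one node only re-evaluates that node and its downstream dependents.

```python
# Hypothetical sketch only, NOT Blender's compositor internals: memoize each
# node's output under a key built from its parameters and its upstream chain.
import hashlib

class Node:
    def __init__(self, name, params, inputs=()):
        self.name = name        # e.g. "Blur"
        self.params = params    # e.g. {"size": 9}
        self.inputs = inputs    # upstream Node objects

    def key(self):
        # The key covers this node and everything upstream of it, so an
        # unchanged upstream subtree keeps producing cache hits.
        upstream = "".join(n.key() for n in self.inputs)
        raw = f"{self.name}|{sorted(self.params.items())}|{upstream}"
        return hashlib.sha1(raw.encode()).hexdigest()

    def compute(self, upstream_results):
        ...  # the actual image operation would run here

_cache = {}

def evaluate(node):
    k = node.key()
    if k not in _cache:                       # recompute only on a miss
        upstream = [evaluate(n) for n in node.inputs]
        _cache[k] = node.compute(upstream)
    return _cache[k]
```

Under this scheme, tweaking a Glare node’s parameters would invalidate only the Glare key and the keys downstream of it; the expensive upstream results stay cached.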
UX/UI
“Blender feels fragmented”: for some workflows it is tedious to switch back and forth between the sequencer and the compositor. Tools should be available right in the compositing editor rather than in a separate editor for each workflow. (Follow-up talks with users are planned to understand the problem better.)
3D support in the compositor gives Blender a unique advantage: a minority of the room agrees.
Would top-down nodes be better? The majority doesn’t think so.
Changing start/end frames is not straightforward.
Use case: receiving footage from a different team (typically a 3D render). Having multiple people work on the same blend file and the same node tree is…
Import/Export
OpenVDB: files are slow in Blender. ~20% of attendees (those who actually use it) agree.
OpenEXR: the file format remains very important. ~20% of attendees prefer to export all frames as OpenEXR and open them in a separate blend file for compositing, for a cleaner workflow, even though compositing is possible without exporting intermediate renders.
There is duplicate functionality between the File Output node and the render output, and both are rarely needed at the same time. Almost everyone agrees.
Color management
Most users in the room are not concerned with color management. Only ~4 people think it’s crucial for their workflow.
It was concluded that color management is important, but not relevant for this meeting.
Good to hear that caching is being brought up as a priority. The lack of a timeline cache is the number one issue I have, especially for any comp that has animated values, such as for mograph.
There is also a great need to be able to preview at a lower resolution/quality than full quality. Other compositing software lets you preview at 1/2, 1/4, 1/8, etc. resolution, but as far as I know this is not possible in Blender without creating your own proxies manually (sketched below).
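For what it’s worth, the manual workaround looks roughly like this. A sketch using Pillow and an invented folder layout, not a Blender feature: pre-scale the image sequence to 1/2, 1/4, and 1/8 resolution, then point the compositor at whichever proxy set plays back smoothly.

```python
# Rough sketch of the manual proxy workaround (Pillow and the folder names
# are assumptions for this example): downscale a frame sequence to 1/2, 1/4
# and 1/8 resolution for faster preview playback.
from pathlib import Path
from PIL import Image

SRC = Path("renders/full")                 # hypothetical full-res frames
for factor in (2, 4, 8):
    out_dir = Path(f"renders/proxy_{factor}")
    out_dir.mkdir(parents=True, exist_ok=True)
    for frame in sorted(SRC.glob("*.png")):
        img = Image.open(frame)
        img = img.resize((img.width // factor, img.height // factor),
                         Image.LANCZOS)    # high-quality downscale filter
        img.save(out_dir / frame.name)
```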
I have also noticed that there is a massive speed difference between a video file and an image sequence when using the viewport compositor.
You can get realtime playback, or close to it, with a video file, but it is drastically slower with an image sequence, and even more so with a 16-bit image sequence. A highly compressed video plays back much faster than an uncompressed 8-bit PNG sequence, which is surprising. Maybe this is a known issue?
A glare with an Alpha? Does this mean an Alpha for the parts of the glare over “empty” areas of the image?
Sounds like an AE thing.
In Nuke, Fusion, or Blender, with an Alpha Over or Merge operation, pixels with an alpha of 0 are still composited over the background additively.
So a proper glow, glare, or flare usually doesn’t have an alpha.
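To make that concrete, here is the premultiplied “over” math that Merge/Alpha Over effectively performs, as a toy example with made-up pixel values: out = fg + (1 − fg.alpha) × bg, so a glare pixel that has color but an alpha of 0 is simply added on top of the background.

```python
# Toy example of a premultiplied "over" merge: out = fg + (1 - fg_a) * bg.
# A glare pixel has color but alpha 0, so the full background shows through
# and the glare color is added on top of it.
def over(fg_rgb, fg_a, bg_rgb):
    return tuple(f + (1.0 - fg_a) * b for f, b in zip(fg_rgb, bg_rgb))

glare = (0.4, 0.3, 0.1)                 # premultiplied RGB, alpha = 0.0
background = (0.2, 0.2, 0.2)
print(over(glare, 0.0, background))     # -> (0.6, 0.5, 0.3): the glare survives
```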
It seems the group of hobbyists/beginners was greater than 10%…
Not understanding color management is more like it imho. And to be fair, it -is- complicated…
Blender’s TIF and PNG RGBA render files don’t contain an alpha channel at all. Instead, Blender saves a transparent TIF or PNG (with all image data in the layer as transparency, which ultimately works). However, the glow is not saved as part of the image data.
Below is the render saved as PNG. The black layer was not added in comp this time, so it’s more obvious what the render file actually contains.
I didn’t have much time to write today, and this topic has haunted the VFX industry since the 90s (at least until the linear workflow and EXRs arrived), yet it still poisons our work today.
Sadly I can’t answer today, I just wanted to tell you that I don’t ignore your problems and will hopefully give some answers tomorrow. Some teasers:
Alpha is NOT transparency (unless you ask Adobe)
Compositing in non-linear reference space is the source of all evil
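As a quick numeric taste of that second teaser (the transfer function below is the standard sRGB curve; the pixel values are made up): adding two lights of linear intensity 0.2 should give 0.4, but doing the same addition on sRGB-encoded values overshoots badly.

```python
# Adding light must happen in linear space: 0.2 + 0.2 = 0.4 linear. Doing the
# same addition on sRGB-encoded values gives a much brighter, wrong result.
def srgb_encode(x):
    """Linear -> sRGB display encoding (standard piecewise curve)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

correct = srgb_encode(0.2 + 0.2)             # encode after linear math: ~0.67
wrong = srgb_encode(0.2) + srgb_encode(0.2)  # math on encoded values:  ~0.97
print(correct, wrong)
```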
I’ll accept that statement for what it’s intended to mean… But within the context of this particular topic, the alpha vs. transparency distinction doesn’t matter. Ultimately the glow is not present in the saved render file at all.
The screenshots are simply there to demonstrate what the group is talking about: I should be able to open the rendered file in Photoshop or After Effects and see the glow somewhere.
The glow is there… it’s just being masked out by the “missing” alpha. If you look at the RGB channels in a proper image viewer, it’s there and can be additively comped over the background.
As I said, no in-depth answer from me today, but try loading your TIF in the Blender image viewer and setting the image’s alpha mode to “None”:
Is there a glare / glow visible now?
Or try loading this PNG in the Blender image viewer and switching the alpha mode to “None”.
It’s just 3 horizontal color bars (RGB) and a vertical bar in the alpha channel. The PNG contains the complete 3 horizontal bars, but the image viewer shows them clipped by the alpha channel, just like the BA forum software multiplies the image by its alpha so that only a vertical strip is visible.
But if you load the image in “proper” comp software (in this case Blender) and use Alpha Over or Merge to comp it over a background picture, it suddenly looks like this:
Yes, it’s the very same PNG attached above, but you can see the red, green, and blue bars extend past the alpha and start to glow, or rather get added over the Big Buck Bunny image.
The same happens with your glare: it’s contained in the image. Photoshop just doesn’t know how to deal with it, or rather ignores it because there’s no alpha, but Blender, Nuke, Fusion, Resolve, Houdini’s compositor, or Natron will additively comp it over your background.
BTW: if you’re using a PNG or TIF version of your image, set it to “Channel Packed” as in the screenshot below. If you’re using EXR you don’t need to do anything, because it works properly out of the box.
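In case the alpha modes are confusing, here is a toy illustration of what each interpretation does to a glare pixel. The terminology is borrowed from Blender’s image settings, and the behavior is simplified for the example.

```python
# Simplified view of the alpha modes applied to a glare pixel with color but
# alpha 0: "Straight" associates (multiplies) RGB by alpha on load, which
# zeroes the glare; "Premultiplied" and "Channel Packed" leave RGB untouched.
def interpret(rgb, alpha, mode):
    if mode == "Straight":
        return tuple(c * alpha for c in rgb)      # associate RGB with alpha
    if mode in ("Premultiplied", "Channel Packed"):
        return rgb                                # RGB is used as-is
    raise ValueError(f"unknown mode: {mode}")

glare = (0.4, 0.3, 0.1)
print(interpret(glare, 0.0, "Straight"))        # (0.0, 0.0, 0.0): glare lost
print(interpret(glare, 0.0, "Channel Packed"))  # (0.4, 0.3, 0.1): glare kept
```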
Yes, and when used with Alpha Over and Channel Packed, the comp layering works.
In DaVinci Resolve, the TIF doesn’t seem to show a glow when comped. (Perhaps this is what Falk meant when he said “TIF doesn’t support alpha”? Or there’s a setting within Resolve for accessing the packed channel that I’m unaware of. That would not surprise me; I’m far better with AE than Resolve.)
EXR in Resolve does show a glow. In AE, neither the TIF nor the EXR shows the glow (and even with the Extractor filter in AE, I cannot seem to access any channel data that gets the glow to appear).
So, with all the above in mind, I do appreciate you taking the time to answer, but again, my responses were just to illustrate the AE situation that I believe was described by the focus group.
One might choose to say “well, the problem is an AE defect”, and depending on one’s point of view that is perhaps true. But let’s not ignore the fact that AE is one of the most-used compositors on the planet, so it would do well for Blender’s rendered files to play nicely with all the others in the yard… “Don’t use AE or PS” is not a realistic suggestion.
Glad you can reproduce it and sorry if I sounded like “AE is crap - use a real comp tool”. That wasn’t my intent at all.
AE / PS just make it a bit more complex to do simple linear math stuff like 0.5 + 0.5 = 1
Don’t worry, I’m not going to elaborate on this
I put this short explanation here because it contains screenshots from Resolve, which might not be allowed in this forum.
How do you comp in Resolve? On the “Edit” page or on the “Fusion” page (hopefully the latter)?
I don’t usually comp in Resolve (I use standalone Fusion for that), and in plain Fusion it doesn’t matter whether you load an EXR or a TIFF: it doesn’t try to be smart and doesn’t interpret the file in any way. It reads the “naked” data from the files, so both look and work the same (the TIFF on the left, the EXR on the right):
BTW, the crude test images I use are these two: just 3 horizontal color bars and a vertical bar in the alpha channel.
If I instead use the Fusion page in Resolve, the Loader nodes are replaced by MediaIn nodes, and now the TIFF does look different. The missing semi-transparent bars in my test images represent your glow or glare that’s now also missing:
The secret lies in the Media page: Resolve interprets the EXR’s alpha as “premultiplied” while it sets the TIFF’s alpha to “straight”. Just tell the clip to treat its alpha as “premultiplied” as well, like this:
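Numerically, that setting is the whole story (toy values again): a clip tagged “straight” gets premultiplied, RGB × alpha, before merging, which zeroes any color where alpha is 0, while a clip tagged “premultiplied” is passed through unchanged.

```python
# Why the "straight" tag kills the glow: straight-tagged clips are
# premultiplied (RGB * alpha) before the merge; premultiplied-tagged
# clips keep their RGB exactly as stored in the file.
rgb, alpha = (0.4, 0.3, 0.1), 0.0                 # glow pixel, empty alpha
as_straight = tuple(c * alpha for c in rgb)       # (0.0, 0.0, 0.0): glow gone
as_premultiplied = rgb                            # (0.4, 0.3, 0.1): glow kept
print(as_straight, as_premultiplied)
```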
P.S.: Maybe some admin can move the last few posts to another topic to keep this one on topic?
Also, if these screenshots infringe intellectual property or whatever, feel free to remove this post.
I have so many students that try to avoid EXRs at all costs “because they look too dark in Photoshop” or “they don’t look like in the viewport”.
Most of them have their eyes opened as soon as they start comping in Nuke or Fusion: all of a sudden they have unlimited channels, can restore highlights from clipping, can do crazy color corrections without banding, and can use Cryptomatte or a proper Z-depth, etc.
This requires a bit of dry and boring knowledge about the linear workflow, viewer LUTs, and display-referred vs. scene-referred data, and therefore a tiny bit of the intimidating color management.
That’s also the reason why I was wondering why so many VFX people at the meeting deemed this topic “non-crucial” or “not relevant”.