This is a summary of a 1-hour meeting that took place at the Blender Conference. The goal was to familiarize Blender developers with the challenges faced by VFX artists who use Blender in production.
- Robert Rioux (Blender Bob)
- Pau Homs i Farré (Disney Animation)
- Captain Disillusion (Captain Disillusion)
- Monica Galantucci (M74 S.R.L.)
- Nicholas D’Amore (M74 S.R.L.)
- Andrea Boccia (M74 S.R.L.)
- Andrea Battistoni (M74 S.R.L.)
- Jandre (Poly Haven)
- Sebastian König (blendFX)
- Brecht, Jeroen, Sergey, Habib, Dalai (Blender)
General / VFX
- Robert brought up multiple points he considers crucial for VFX work in a production environment. His conference talk “Aftermath Making-of” (especially the “main issues” slide) lists the showstoppers.
- The most important missing feature in Blender for Robert’s workflow is the ability to have low-res and high-res meshes within a project and switch back and forth between them seamlessly. It is also important to be able to reference/link models/collections without loading the whole blend file, and then deselect the collections that are not needed.
- For Cycles/rendering, the following feature requests were mentioned: mipmapping and texture caching, as well as a less noisy way of introducing a fog effect.
- The rest of the list was briefly discussed, but it was decided to keep the meeting focused on VFX/compositing rather than rendering.
Compositor
The compositor is valuable because artists don’t have to export to another program, which simplifies the pipeline and the workflow. However, it lacks features and still feels alien to compositors coming from other software when it comes to the UI and node workflows. On a positive note, the compositor does not feel far from complete for most use cases. The following feature requests were discussed.
1) ‘Make backdrop deprecated, forever!’
Agreement among the artists present that the backdrop doesn’t bring any value. Pau argued you never want your tools on top of your final result, similar to painting and shading, where tools and nodes live in a separate editor.
2) Real-time playback with compositor.
Andrea, Andrea, and Nicholas mentioned that the compositor is fast enough for a single frame, but when pressing play it lags. For example, if artists want to animate a flickering light, they are forced to render the whole sequence to see a preview with the correct timing. Caching was briefly discussed as a solution. The typical workflow involves files on a network; artists wish Blender would cache files automatically on a (fast) local disk. Note: this was also a topic in Brecht and Sergey’s talk “Tech for Gold” (see the Q&A section).
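The kind of automatic caching described above could look roughly like this minimal Python sketch (the `cached_frame` helper and the paths are hypothetical, not a Blender API): copy a frame from slow network storage to a fast local disk on first access, then reuse the local copy.

```python
import shutil
from pathlib import Path


def cached_frame(network_path: Path, cache_dir: Path) -> Path:
    """Return a fast local copy of a frame stored on the network.

    Hypothetical helper, not a Blender API: copies the file into
    cache_dir on first access (or when the network copy is newer)
    and returns the cached path on every later call.
    """
    cache_dir.mkdir(parents=True, exist_ok=True)
    local = cache_dir / network_path.name
    if not local.exists() or local.stat().st_mtime < network_path.stat().st_mtime:
        shutil.copy2(network_path, local)  # copy2 preserves the mtime
    return local
```

A real implementation would also need cache-size eviction and invalidation when upstream renders change, which is exactly where “automatic” caching gets hard.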
3) UI / workflow: “Blender feels alien to other compositors”
- Other node-based software (not just compositing) uses a top-to-bottom workflow; Blender uses a left-to-right workflow.
- Basic settings such as render size are in the Properties editor, not in the compositor.
- Fast viewing, e.g. pressing 1 activates viewer 1, pressing 2 activates viewer 2, etc.
4) Canvas / Metadata:
- It is hard to keep track of the image size artists are working with.
- Some sort of canvas to scale within is needed.
Brecht mentioned the design task #108944 (“Compositor: canvas and transforms”) on Blender Projects. From a short discussion, it was not clear whether this is the best solution moving forward. Feedback from the artists was that Blender should keep designing the Blender way, not just copy Nuke.
- Ability to work with a lot of channels, similar to geometry nodes, where you can extract depth and leave everything else in the image. Currently this is possible, but it makes the node tree unnecessarily complex.
5) Color management:
- Being able to load custom OCIO configuration files is desirable. The current workaround is to modify an existing (Filmic) file.
- Missing ACES configuration. Typical delivery requirements from Disney/Netflix call for ACES EXR files.
- Blender has issues with the color picker.
Brecht: we currently only support sRGB displays. Recently, HDR support landed.
Sergey: we want to have custom OCIO configurations. The open issues are with the UI (color picker) and camera configuration (do we have to ship all configurations with Blender?).
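For reference, Blender already reads the `OCIO` environment variable, so a custom OpenColorIO configuration (for example an ACES config) can be tried today without editing the bundled files. The path below is illustrative:

```shell
# Point Blender at a custom OpenColorIO config before launching.
# The path is illustrative; substitute your studio's config.ocio.
export OCIO=/path/to/aces/config.ocio
blender
```

This sidesteps the “modify the bundled Filmic files” workaround, though the UI and color-picker issues mentioned above still apply.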
Not sure about this. If we enumerate programs outside of compositing, there are a lot of programs with a left-to-right graph direction: Unreal Engine, Unity shaders, SunVox, VCV Rack, …
WHAT, doesn’t bring any value? It’s literally the best thing ever in Blender lol? You can quickly move the nodes in a gesture and see the image. No way anyone thinks this. I use the compositor daily and always use the backdrop… Don’t kill it.
You will probably be in the minority. Working on an image while having nodes right on top of it, covering parts of it, always felt like a stupid gimmick to me. Like some programmer was so invested in whether they could that they didn’t stop to think whether they should.
Nonetheless, there are also parts I disagree with:
This is just straight up not true. When it comes to top-to-bottom compositors, there’s Nuke, some open-source literal clone of Nuke, and a very short-lived one called SGO Mamba. Fusion, and Fusion inside of DaVinci Resolve, are left-to-right. Autodesk Toxik, later renamed Composite, was also left-to-right.
When it comes to non-compositing 3D software using node systems, the only notable top-to-bottom example is Houdini. Pretty much everything else is left-to-right; the overwhelming majority, actually.
At the risk of stating the obvious, it would be ridiculous if Blender had all its node-based systems left-to-right but the compositor top-to-bottom because of such a weak argument.
Obviously Blender will never be Nuke, so there’s no need to try to make it Nuke.
The post is really meant as a summary of the discussion; the Blender developers made no commitment to implement these feature requests.
Those concerns were brought up by the people present in the meeting, who have specific needs for their workflows. It’s expected that people doing different things with Blender will have different feature requests. So please feel free to discuss the individual points and whether they make sense for your workflow or not.
This was indeed referring to Houdini. So I might have expressed this too generally. As mentioned later in the post, attendees still think it’s better to keep designing the Blender way and not copy other software.
I am also using the backdrop a lot. It’s amazing. Totally agree with @costavojik, please don’t remove it.
The backdrop is advantageous on smaller monitors. I use it when on the go. Guess I would do fine without it, but it can be handy.
Generally I use the backdrop for a large scale preview and to see details in the composite. I then have a smaller image editor open for the overall look of the image.
Just because the backdrop isn’t in other software doesn’t mean it’s useless.
Honestly, I’d rather see the functionality of the backdrop improved to make it a proper feature (easier navigation for a start), rather than have it scrapped.
I would agree with some other views here. I like the backdrop image being inside the compositor. If one wants to make it an option to have the view elsewhere without taking away the viewport compositor backdrop, that would be okay. But just please don’t take away the backdrop. Sometimes one does not need to see the whole screen, and when having limited space on a laptop or computer screen, it is nice to be able to have that share space with nodes. Making it super easy to move would be nice, though.
Agreement among the handful of artists here, yes. Agreement on a larger scale? I doubt it. As said, the backdrop brings immense value to people working on small screens.
Does the compositor backdrop bring value?
In the larger scope, I agree with “Pau argued you never want your tools on top of your final result”.
That’s true, I usually don’t. But due to the nature of compositing nodes, you need a lot of screen space for them compared to compositors that use a layer system. If I put an image (result) viewer side by side with the compositing nodes, I can’t see much of either one. I’d rather see tools on top than nearly nothing useful of anything.
So yeah, in reality I think it adds value. If you have a second monitor, perhaps not.
It already is?
Why? What would this accomplish?
It seems that there are people who have strong attachments to the backdrop image. For those I’d like to mention: these are notes from the meeting, not a list of actions. We will not just remove the backdrop or flip nodes from horizontal to vertical notation.
Having such feedback was useful, because it made it clear that greater care needs to be taken when aligning some other design aspects (like procedural texturing: left-to-right, fields, etc.) so as not to alienate VFX artists even further. Additionally, in the context of the backdrop, it confirmed some ongoing design and work needed to make the viewer node and its result in the image editor a smoother workflow.
Keep in mind that we will need to show much more information, for example the data and view windows of an image. Combine that with the transform widgets that are already drawn there, plus all the nodes covering the image, the information, and the widgets… We do need a clearer way of seeing those things. It will likely be reflected in the default workspace and in the default for the backdrop option. Maybe we make the backdrop read-only, avoiding the clutter of extra gizmos rendered on it, but do not remove it altogether.
I never use the backdrop. I use two windows, one for the image and one for the nodes. I slide them around according to whether I need to do heavier node work or fine tune adjustments. Works well on small monitors.
I find the backdrop feature annoying, even after years of using the compositor. When the node network gets even a little complex, it becomes clutter. It’s also annoying to resize the backdrop using ‘v’ and ‘alt + v’ and to have to hold ‘alt’ to move the image with the MMB. It’s awkward, and it would be a great deal better if there were simply a separate dedicated viewer where you could just scroll to zoom and click the MMB to move the image around. It’d be a beautiful change to the compositor.
I understand that I can disable ‘Backdrop’, open a separate window, change it to ‘Image Editor’, click on the dropdown and select ‘Render/Viewer Node’, but the fact that setting up the most standard workspace expected out of a compositor takes all these steps, while the awkward one is the default, is wild to me lol. Still love Blender though
I learned to love the backdrop during my long time with Blender. And most of my students that laughed at that feature at first now find it “cool” and use it themselves. It helps with lower resolution interfaces (or when you e.g. need some screen space for curves / 3D), it helps to focus on special areas while adjusting values without having to constantly move your eyes between two positions on the screen etc.
And you can always disable it and go to a split layout with a few mouse clicks and of course you can save it as a Startup File and will instantly have your beloved layouts ready at start.
So to make it short: Please don’t remove it!
As I’ve mentioned above: there is no plan to remove anything. I do not think there is any reason to keep iterating over it as it is not very constructive. We do hear all of you.
If this topic keeps dragging on, we’d need to lock the discussion, as it is not a very productive time investment for any of us.
Sorry, didn’t mean to bug you. Good to hear the backdrop won’t go anywhere
So now for something more constructive:
I totally agree 100% with this! Almost 15 years ago, when we were still a Softimage shop and adopted Arnold, it changed the way we looked at textures / texture resolutions / memory limits with its maketx / texture cache.
Before that it was always a fight to a) use a high enough texture resolution for hero assets close to the camera so everything looked crisp, while b) not brainlessly using the same high-resolution textures for everything in the background or outside the camera frustum, to avoid wasting precious memory, and c) not having to crank up anti-aliasing samples because textures contained too-high-frequency detail where a lower-resolution texture would actually render with better quality and less noise.
A LOT of tedious manual work.
But the mipmap / texture cache of Arnold changed all of this: Just use the highest possible resolution textures and the renderer will take care of choosing the right resolution with the least amount of memory needed. Awesome!
When we finally switched to Blender / Cycles, workstations had much more RAM and even the GPUs weren’t as wimpy as a few years before, but we were still back to keeping an eye on texture resolution, memory consumption, etc., while “the industry” was adopting UDIMs and started assigning gigabytes of textures to the tiniest asset without even caring, because the renderer would take care of that.
Long story short: I’m glad this popped up on an official meeting transcript.
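To make the mipmapping idea above concrete, here is a minimal Python sketch (purely illustrative; not Arnold or Cycles code) of the core trick: pick the mip level whose texel density matches the on-screen footprint, so distant or small objects read only a fraction of the full-resolution texture.

```python
import math


def mip_level(texels_per_pixel: float, num_levels: int) -> int:
    """Pick the mipmap level matching the on-screen footprint.

    Level 0 is the full-resolution image; each level halves the
    resolution. Illustrative sketch, not renderer code.
    """
    if texels_per_pixel <= 1.0:
        return 0  # magnification: full resolution is already enough
    level = int(math.floor(math.log2(texels_per_pixel)))
    return min(level, num_levels - 1)


# A 4096px texture whose footprint covers ~8 texels per screen
# pixel only needs level 3 (512px across), so a tiled texture
# cache can stream a small fraction of the full-resolution data.
```

Combined with an on-disk tile cache (what Arnold’s maketx `.tx` files provide), this is why artists can assign arbitrarily heavy textures without blowing up memory.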
Yes, mipmapping and caching is one of those things that, at least in our case, became a real showstopper to even trying to use Blender as the main DCC. When doing the first tests to use it in a new project, we intended to port an already existing set from Maya/Arnold to Blender, and it wouldn’t even begin to render… And changing the whole texture/shading workflow to accommodate that shortcoming was a no-no. (We eventually gave it up. It wasn’t the only problem we faced, to be honest, but it was a very big nail in the coffin.)