Yeah, I just made a rough example without meaning it to be taken literally – the point is that using variables in file paths is hugely powerful; I just recently rendered 182 unique image sequences to unique file paths with unique file names from Resolve using a single output thanks to their render tokens (variables).
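In Blender today you can script something similar; a rough sketch (the {scene}/{camera} token names are invented for illustration, not a real Blender feature):

```python
import bpy

# Hypothetical "tokens" expanded via Python string formatting.
TEMPLATE = "//renders/{scene}/{camera}/{scene}_{camera}_"

for scene in bpy.data.scenes:
    for cam in (ob for ob in scene.objects if ob.type == 'CAMERA'):
        scene.camera = cam
        scene.render.filepath = TEMPLATE.format(scene=scene.name, camera=cam.name)
        # Frame numbers are appended automatically for animations.
        bpy.ops.render.render(animation=True, scene=scene.name)
```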
Personally, I’d take a greater amount of visible complexity over a lesser amount of hidden complexity. If a large/user-defined stack of contributors to the render product is directly visible inside Blender (e.g. some sort of Outliner mode, or a dedicated Linking/Override editor), it would be easier to teach and remember than a simpler ad hoc stack where you have to hit render to see what it’ll produce.
I have had to use more than one scene because Blender still does not have a per-view-layer render toggle for individual objects, only for collections (see my post Why Is There No Local Render Toggle Outside of Collection Checkboxes? for an explanation). If you want to mix and match parts of the scene for social media, YouTube, website, and Facebook OpenGraph images, making more than one scene was the best option, since the render toggle is global across all view layers. A scripted workaround against a single scene is shown below.
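A minimal sketch of that workaround, toggling per-object render visibility before each render (the object and variant names are hypothetical):

```python
import bpy

scene = bpy.context.scene
toggled = {"Logo", "LowerThird"}          # objects that differ per output
variants = {
    "youtube":   {"Logo", "LowerThird"},
    "opengraph": {"Logo"},
}

for name, visible in variants.items():
    for ob_name in toggled:
        # hide_render is global to the object, not per view layer –
        # exactly the limitation described above.
        scene.objects[ob_name].hide_render = ob_name not in visible
    scene.render.filepath = f"//out/{name}_"
    bpy.ops.render.render(animation=True)
```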
I use this feature to combine different cameras into a single image – specifically, rendering a background (i.e. clouds) from a different perspective than the main scene. This makes it easier to set up: I don’t have to get my clouds at a precise scale and distance, because I can just render them separately with a different focal length and composite. Being able to use per-camera options would make this workflow even easier, though – I’d love to see that happen.
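For reference, the two-pass setup can be scripted; a rough sketch (camera names and focal lengths are made up):

```python
import bpy

scene = bpy.context.scene
passes = {
    "clouds_bg":  ("CloudCam", 24.0),   # wide lens for the backdrop
    "foreground": ("MainCam",  50.0),
}

for out_name, (cam_name, focal) in passes.items():
    cam = bpy.data.objects[cam_name]
    cam.data.lens = focal               # per-camera focal length
    scene.camera = cam
    scene.render.filepath = f"//comp/{out_name}_"
    bpy.ops.render.render(animation=True)
# The two sequences are then combined in the compositor (e.g. Alpha Over).
```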
I didn’t mean to say I am sure people don’t use it, just to point out it’s far rarer than rendering multiple cameras in one scene. There’s probably some niche use case, but it was just a point I was making regarding having too many override settings.
I am not sure offloading render dimensions/ratio to the camera is wise because it makes these settings less centralized, thus more hidden. Yet I’d love to have these things variable per-camera as much as anybody. Camera->Marker bindings are already very hidden. I’d love for the whole render pipeline (dimensions, ranges, possibly sampling settings as well) to become more explicit than it is now, and it doesn’t look like any of the proposals go in this direction.
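For example, the only way to see those bindings at a glance today is something like this quick script:

```python
import bpy

# Print every camera<->marker binding of the current scene.
for m in bpy.context.scene.timeline_markers:
    if m.camera is not None:
        print(f"frame {m.frame}: switches to camera '{m.camera.name}'")
```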
We already have precedent for listing important, view-only info in the UI with Named Attributes on the geo nodes modifier. Maybe a similar sub-panel could be added to the Format panel to show all cameras that are overriding it and on what frames.
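A rough mock-up of what such a read-only sub-panel could look like as an add-on; since no per-camera format override exists yet, this just lists the marker-bound cameras (the panel itself is hypothetical):

```python
import bpy

class RENDER_PT_camera_format_overrides(bpy.types.Panel):
    bl_label = "Camera Overrides"
    bl_space_type = 'PROPERTIES'
    bl_region_type = 'WINDOW'
    bl_context = "output"                 # Output properties tab
    bl_parent_id = "RENDER_PT_format"     # nest under the Format panel
    bl_options = {'DEFAULT_CLOSED'}

    def draw(self, context):
        col = self.layout.column()
        for m in context.scene.timeline_markers:
            if m.camera:
                col.label(text=f"{m.camera.name} from frame {m.frame}")

bpy.utils.register_class(RENDER_PT_camera_format_overrides)
```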
What would it mean for e.g. dimensions to be more explicit than they are now? Not sure what that refers to concretely.
Regarding moving dimensions and output entirely from the scene to the file level: I think the trade-offs might be more acceptable if we made some other improvements.
We could use better support for writing EXRs of scene renders, for the purpose of a cache that feeds into the compositor and sequencer. Being able to cache an entire frame range and go back and forth in time is very useful, but currently this requires either manual setup in the compositor by loading from images, or relying on sequencer caching, which is maybe OK for story tools but not for slower renders.
Currently the render pipeline will only write out whatever is last, be it the scene, compositor or sequencer. But if we also had the option to write these intermediate frames to their own files, and the compositor and sequencer were able to find them automatically, that would be a nice workflow improvement I think. This doesn’t need many settings: the file path can be chosen automatically, the file format is EXR, the color space is scene linear, and some reasonable choice of EXR compression can be the default.
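For comparison, this is roughly what the manual setup looks like today; a sketch (the cache path and codec choice are arbitrary):

```python
import bpy

scene = bpy.context.scene
img = scene.render.image_settings
img.file_format = 'OPEN_EXR_MULTILAYER'  # all passes in one file per frame
img.color_depth = '32'                   # scene-linear float data
img.exr_codec = 'ZIP'                    # a reasonable lossless default
scene.render.filepath = "//cache/scene_"
bpy.ops.render.render(animation=True)
# Loading these frames back via Image nodes in the compositor is the
# manual step that automatic discovery would remove.
```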
The other issue is overscan. You often want to render the scene with overscan, while after compositing or the sequencer it will typically be gone, so the dimensions will be different. If we added a native option to render scenes with some number of pixels of overscan, along with the appropriate EXR metadata, there would be no need to have different scene and sequence dimensions for this purpose.
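Today this has to be faked by hand; a sketch of the usual workaround, assuming a horizontal sensor fit and no camera shift (the EXR metadata part has no native equivalent):

```python
import bpy

scene = bpy.context.scene
cam = scene.camera.data
margin = 64  # arbitrary overscan in pixels on each side of the width

factor = (scene.render.resolution_x + 2 * margin) / scene.render.resolution_x
# Scale both dimensions by the same factor so the original frame stays an
# exact centre crop of the larger render.
scene.render.resolution_x = round(scene.render.resolution_x * factor)
scene.render.resolution_y = round(scene.render.resolution_y * factor)
cam.sensor_fit = 'HORIZONTAL'
cam.sensor_width *= factor  # widen the FOV to match the extra pixels
```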
Wouldn’t it be more efficient to render once and crop multiple times using the compositor instead of rendering multiple times? Or do you also have different settings like angle, DOF, etc… per camera?
Yes, focal length is usually different, since the framing changes. But in some cases the angle is different as well, and if there is animation things can change even more.
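For cases where only the 2D framing differs, the render-once-and-crop idea from the question above could be set up like this; a rough sketch with arbitrary crop regions:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rl = tree.nodes.new('CompositorNodeRLayers')

# One crop + file output per delivery format, fed from a single render.
for i, (x_min, x_max, y_min, y_max) in enumerate([(0, 960, 0, 1080),
                                                  (480, 1440, 0, 1080)]):
    crop = tree.nodes.new('CompositorNodeCrop')
    crop.use_crop_size = True  # resize the image instead of masking it
    crop.min_x, crop.max_x = x_min, x_max
    crop.min_y, crop.max_y = y_min, y_max
    out = tree.nodes.new('CompositorNodeOutputFile')
    out.base_path = f"//crops/variant_{i}/"
    tree.links.new(rl.outputs['Image'], crop.inputs['Image'])
    tree.links.new(crop.outputs['Image'], out.inputs[0])
```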
It was a reaction to how the render pipeline in Blender is a bit opaque, and how adding ways to override parameters at different levels can quickly explode – not strictly speaking a reaction to this proposal, which does solve the issue of per-camera format rather squarely. I do wish for a more centralized, programmable render pipeline at some point, but it’s probably off-topic.
Proper overscan support would be a dream!
But it doesn’t replace fine control over the render dimensions in my opinion.
Good to see that rendering gets some attention in this area, as it is – for me – still annoying to work with, despite having a render addon to ‘ease the pain’.
And +1 for finally seeing namespace variables on the horizon.
But… I feel that Blender is trying to make things more complicated than they should be?
The whole ‘all is linked to a scene’ paradigm is a concept that doesn’t really exist in other 3D applications, and puts a lot of people ‘on the wrong foot’ when they are just starting out with Blender, especially when they come from other 3D applications where a lot of the Blender concepts simply don’t exist.
The other ‘big name’ 3D apps look for settings at the view layer/render pass level. That way overrides are very easy to set, and quick to find/check. In combination with naming variables, it’s quick to set up your filenames and where the files should go.
Also, in other 3D applications most of the Render, Output and View Layer settings (and some of the Scene settings) are bundled up into one panel.
Maybe have a look at the ‘competition’ to see how certain things are handled rendering-wise in other apps? It’s frowned upon here to add 3rd party images or links, so I won’t do that.
But I have no idea how much work the above would mean to reshuffle data inside the blendfile.
I do like the discussion here, as even the comments above show a multitude of use cases that would benefit from a changed rendering workflow/setup.
my € 0.02 in this discussion.
@RobWu Totally agree. Here’s what I wrote as a comment to the design document:
Imo, it’s a mess with sequence and scene render settings; the render settings should be moved entirely out of the scene properties. They should live and be exposed in the File Browser sidebar when rendering, independently of the scene.
Same thing with resolution, fps, etc. Instead of dealing with the chaos of mismatches between scenes, and between scenes and sequences, there should be Project settings which set these values for everything inside the .blend file.
In other words, file export settings and project settings should be liberated from living in the scene data-structure.
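Until something like that exists, the closest workaround is syncing the settings across scenes by hand; a sketch (the master scene name is hypothetical):

```python
import bpy

master = bpy.data.scenes["Scene"]  # acts as the "project settings" holder

for scene in bpy.data.scenes:
    if scene != master:
        scene.render.resolution_x = master.render.resolution_x
        scene.render.resolution_y = master.render.resolution_y
        scene.render.fps = master.render.fps
        scene.render.fps_base = master.render.fps_base
```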
Regarding terminology, “project settings” refers to settings shared between multiple blend files. The proposal here is to move various things to the blend file level, which matches what you suggest?
Putting it in the file browser sidebar would be different than the properties editor, but I don’t see how that could work in practice. Opening the file browser to change resolution seems quite strange.
@RobWu Making view/render layers more powerful with support for overrides would be very useful. Though it’s also not clear how that concretely affects the specific design changes being discussed here.
The different levels are:
- Project
- File
- Sequence
- Scene
- View/Camera
- Render layer
- Render pass
Any application will store these settings at some level(s), and there appear to be differences here between the big name 3D apps. The point of render layer overrides or procedural render settings is that you can store the settings at fewer levels, since some use cases will be covered by those.
But it’s still a question where to put them above the render layer level, and in a way that works for things like the story tools workflow, which doesn’t have an obvious equivalent in other applications, or for users who are looking for something easier to use than an override system.
I meant the export format settings should be in the sidebar when exporting.
The resolution, color space, etc. could sit above the scene data, so they are set in one place for the full .blend file and all scenes within it.
One issue with that is that to set all those parameters, I would have to initiate a render, which isn’t something I want to do just to have access to the settings and be able to change them.
I first thought of that, but then you still need all those settings somewhere else.
Maybe Preferences could act like a ‘default’ setting (a sort of Project/File level), but as per this proposal, you then still need many of those settings on a per-camera-object basis (since I think that is the whole point of this thread).
So while it may make sense to remove the stuff from the Scene tab, those settings still need to be somewhere else.
But even Preferences could be an issue, since any change there then changes it for every new file. So do we now put them back into the File tab, as defaults for that file?
And that default is what applies to any created camera or video sequence, etc.
This in turn makes for a bit of a workflow shift. A Scene no longer has anything to do with resolution, colour management or frankly anything to do with output.
All output is basically tied to a camera.
I’m not totally sure how that would then work with the VSE (since I never use it), but I guess, if need be, much like the 3D scene it too can have a camera (or at least an implied one, rather than an actual camera object that you can move around).