So, I made a little script that lets me store a custom X and Y resolution in camera.data.
I have a function (ccr_handler) that checks whether the current scene.camera has a custom resolution stored and, if so, changes scene.render.resolution_x and _y to the stored values.
I call this function with:
bpy.app.handlers.depsgraph_update_pre.append(ccr_handler)
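The setup described above can be sketched roughly like this. The property names `custom_res_x`/`custom_res_y` are my assumption (the real script may store them differently), and the `bpy` import is guarded so the pure lookup logic can be exercised outside Blender too:

```python
# Sketch only: assumes custom resolutions are stored as ID properties
# "custom_res_x"/"custom_res_y" on camera.data (names are hypothetical).
try:
    import bpy
    from bpy.app.handlers import persistent
except ImportError:
    bpy = None
    persistent = lambda f: f  # no-op stand-in outside Blender

def resolution_for_camera(cam_data, fallback):
    """Return the (x, y) stored on the camera data, or the fallback pair."""
    x = cam_data.get("custom_res_x")
    y = cam_data.get("custom_res_y")
    if x and y:
        return int(x), int(y)
    return fallback

@persistent
def ccr_handler(scene, depsgraph=None):
    """Copy the active camera's stored resolution into the render settings."""
    if scene.camera is None:
        return
    fallback = (scene.render.resolution_x, scene.render.resolution_y)
    x, y = resolution_for_camera(scene.camera.data, fallback)
    scene.render.resolution_x = x
    scene.render.resolution_y = y

if bpy is not None:
    bpy.app.handlers.depsgraph_update_pre.append(ccr_handler)
```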
When I switch cameras everything works as expected: the scene resolution changes accordingly. It also works when binding cameras to timeline markers: changing the frame or playing the viewport animation shows the resolution changing.
But now the issue: it doesn’t work when I launch an animation render, which was the main reason I wrote this script in the first place. All frames are output at whatever resolution is set when I hit Ctrl+F12.
What am I missing?
Thanks @Josephbburg. Frankly I’d like to go through the “learn” path if possible, instead of paying and copying.
I didn’t find much documentation so far. On Blender Stack Exchange somebody mentioned that during rendering the API runs in some limited context (or at least that’s how I understood it).
Can someone shed some light on this please?
One thought: it’s possible that what I’m trying to achieve is deliberately disallowed. The reason would be that if you are rendering to a video format, changing the resolution mid-stream isn’t feasible.
So the right question now is: where to find the documentation for such an undocumented topic?
I had the same problem with my add-on: handlers were not being run…
That works for me and the handlers are run. What’s interesting is that in a conversation with @jacqueslucke I think he said he rewrote AN many times before handlers actually worked for him.
Thanks Clock.
It also works for me this way. The cons are that I have to use a custom render operator and that the GUI freezes while rendering. I’ll have to find a way around ‘INVOKE_DEFAULT’.
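For context, a custom render operator of the kind mentioned here might look roughly like the sketch below: it steps through the frame range with frame_set() (which does fire frame-change handlers) and renders each frame as a still. The operator and helper names are mine, and calling it this way blocks the UI, which matches the freezing behaviour described:

```python
# Hypothetical sketch of a per-frame render operator; guarded import so
# the path helper can be tested outside Blender.
try:
    import bpy
except ImportError:
    bpy = None

def frame_output_path(base, frame, pad=4):
    """Build a zero-padded per-frame output path, e.g. '/tmp/out_' -> '/tmp/out_0001'."""
    return f"{base}{frame:0{pad}d}"

if bpy is not None:
    class RENDER_OT_ccr_animation(bpy.types.Operator):
        """Render the frame range as stills so frame-change handlers run."""
        bl_idname = "render.ccr_animation"
        bl_label = "Render Animation (per-camera resolution)"

        def execute(self, context):
            scene = context.scene
            base = scene.render.filepath
            for frame in range(scene.frame_start, scene.frame_end + 1):
                scene.frame_set(frame)  # fires frame_change handlers
                scene.render.filepath = frame_output_path(base, frame)
                bpy.ops.render.render(write_still=True)  # blocking call
            scene.render.filepath = base
            return {'FINISHED'}

    bpy.utils.register_class(RENDER_OT_ccr_animation)
```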
Maybe if @jacqueslucke reads us he can shed some light on the matter? Pleeeease?
It’s been a while since this was reported. These days I stumbled over this issue as well, in Blender 2.92 as well as the latest 3.0.1.
Is there any news about that?
I was able to pin the issue down to the presence of the “Render Layers” node in the compositor node editor.
I’m also using the compositor for editing a series of photos and I don’t need the render output, so I normally don’t have the “Render Layers” node that is instantiated by default.
- activating compositor nodes (which instantiates a “Render Layers” node feeding a “Composite” node)
- clicking through different frames in the timeline and seeing the expected messages on the console
- rendering an animation and also seeing the expected messages in the console
- now deleting the “Render Layers” node
- clicking through different frames in the timeline and still seeing the proper messages in the console
- rendering an animation: the messages disappear, because the handler is no longer called
- adding the “Render Layers” node again, even leaving it unconnected
- rendering the animation again: the messages appear again
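The console messages referred to in the steps above come from a probe handler; a minimal version (guarded import so the message format is testable outside Blender) might look like:

```python
# A minimal probe handler: it only prints the current frame, so you can
# see in the console whether Blender actually calls it.
try:
    import bpy
    from bpy.app.handlers import persistent
except ImportError:
    bpy = None
    persistent = lambda f: f  # no-op stand-in outside Blender

def probe_message(frame):
    """Format the console message emitted on every handler call."""
    return f"handler fired at frame {frame}"

@persistent
def probe_handler(scene, depsgraph=None):
    print(probe_message(scene.frame_current))

if bpy is not None:
    bpy.app.handlers.frame_change_pre.append(probe_handler)
    bpy.app.handlers.render_pre.append(probe_handler)
```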
My current work-around is to keep a “Render Layers” node in the tree anyway, but leave it unconnected, so the render it triggers goes straight to the trash.

However, one weird point: while rendering the animation, my script has to apply the changes to the various nodes as if we were at frame+1, not frame+0; during editing I have to remove this offset of 1. So during an animation render the changes become effective with a lag of one frame, and it does not matter whether I install a pre- or a post-handler. I’m not sure, but perhaps this is a second bug on top of the handlers not being called at all when the “Render Layers” node is missing. At least, this behaviour seems counter-intuitive to me.
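The work-around of keeping an unconnected “Render Layers” node around could be automated with a small helper; the function name is mine, and it is written against the node API duck-typed (intended to run inside Blender on a scene with use_nodes enabled):

```python
def ensure_render_layers_node(scene):
    """Return an existing 'Render Layers' node, or add an unconnected one.

    Sketch only: relies on scene.node_tree / tree.nodes as exposed by
    Blender's compositor API.
    """
    tree = scene.node_tree
    if tree is None:
        return None
    for node in tree.nodes:
        if node.type == 'R_LAYERS':
            return node
    # 'CompositorNodeRLayers' is the type identifier of the Render Layers node.
    return tree.nodes.new(type='CompositorNodeRLayers')
```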