Thanks for the review.
Yes, I agree. The idea was to be able to have e.g. a preset text animation in the asset browser, which would run at a constant, predictable frame rate. This isn't an issue specific to the compositor though, and solving it only there would add unnecessary complexity. I talked about it with ZedDB, and he mentioned that other animation assets will also need to be usable at a constant frame rate. It could be achieved e.g. by retiming the keyframe durations when the asset is applied to a scene/strip/rig with a different frame rate. (Afaik how it will be implemented is still open.)
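To illustrate what I mean by retiming, here's a rough sketch (the function name and data layout are invented for illustration; the actual implementation is still open):

```python
# Hypothetical sketch of frame-rate retiming: keyframe positions authored at
# one frame rate are rescaled so the animation covers the same wall-clock
# duration in a scene with a different frame rate.

def retime_keyframes(keyframes, source_fps, target_fps):
    """Scale keyframe frame numbers by the frame-rate ratio so each key
    lands at the same point in real time."""
    scale = target_fps / source_fps
    return [(frame * scale, value) for frame, value in keyframes]

# A fade keyed over 1 second at 24 fps...
keys_24 = [(0, 0.0), (24, 1.0)]
# ...still spans 1 second when applied in a 60 fps scene.
print(retime_keyframes(keys_24, 24, 60))  # [(0.0, 0.0), (60.0, 1.0)]
```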
I’m not sure about excluding 3D nodes from the sequencer. Just like you can use a scene render as a sequencer strip, one might want to use scene renders in the compositing nodes of strips. Maybe you want to add effects to the render, or composite multiple scenes or render layers together. There is a performance penalty for it, but I think there are also many use cases where people would accept that penalty.
On the sequencer side there are some obvious nodes that can’t be used for 3D scenes like the time remapping nodes you mentioned. I’m not sure whether it’d be worth splitting the editors entirely because of those though. Reusable node groups like vignettes, color grading setups, etc. would have to be remade for the sequencer and the 3D compositor individually if it was split. An error could be shown if invalid nodes are used, like how the compositor shows an error when the composite node is missing.
Maybe there could be an option to mark a node group as a sequencer node group, a 3D scene node group, or both? Invalid node types would be highlighted in red and greyed out in the add menu, and a group could only be applied if it’s marked for use in that context (sequencer or 3D scene). That could work as a compromise between the two. I’m not sure about it though.
I asked @iss about this in the chat, and the general idea he has is being able to use strips outside the strip the node group is applied on. Somewhat like this:
I intentionally left it out, since I think the compositor and the sequencer should be separated. Eg. you shouldn’t be able to bring strips from the sequencer into the compositor, other than the strip the node group is applied on.
The strips in the sequencer are layered on top of each other in a clear order. If you were able to bring in strips to the node tree, this layer system would be broken, which would make it confusing to use. Maybe a strip is used by multiple strips, maybe it’s used multiple times on different “layers” in the same strip. With nodes it’s easy to see at a glance how each input is used and where, but this wouldn’t be possible to visualize in the sequencer.
The only thing the sequencer could be used for in this case is timing the strips. Since it would break the layering and make the sequencer confusing to use, my suggestion is to move that responsibility to an animation/keyframe editor. This is something I briefly mentioned at the end of the proposal. Videos outside the host strip would instead be brought into the compositor as an image sequence node, and the start and end times of those clips would be adjusted in the node keyframe editor.
In addition to providing a solution to that issue, it would also help with displaying node tree keyframes in general. Node tree keyframes are displayed in the dope sheet using only the name of the value being keyframed. The node name isn’t visible, which makes it hard to find the value you’re trying to keyframe.
Which Fac is which?
The new editor would show a clear hierarchy of what nodes the values are under, which would make working with it easier.
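As a rough illustration of the difference (the labels and function are made up, not an actual design):

```python
# Hypothetical sketch of the disambiguation the proposed editor would give:
# each keyframed value is labeled with its node path instead of only the
# bare property name like "Fac".

def display_path(node_path, prop_name):
    """Build a hierarchical channel label, e.g. 'Vignette > Mix > Fac'."""
    return " > ".join(list(node_path) + [prop_name])

# Two "Fac" channels that look identical in the dope sheet today:
print(display_path(["Color Grade", "Mix"], "Fac"))   # Color Grade > Mix > Fac
print(display_path(["Vignette", "Mix.001"], "Fac"))  # Vignette > Mix.001 > Fac
```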
I haven’t had time yet to make a full proposal for it, but it would work very roughly like this:
The VSE and strips are useful for combining different shots and video clips, adding transitions between them, and timing audio. That’s something you can’t do with nodes. Nodes are useful for adding effects on top of those clips. The timing of the effects applied to a clip is relative to that clip, and exposing it outside the node tree would make it difficult to work with the timing of clips in the sequencer.
Selecting video clips that aren’t in the timeline as inputs could be useful though, if you for example have some stylized node group preset that tiles different clips together.
Yeah, I think there should be some further consideration of how the input and output would be handled. My earlier reasoning for using channels for everything was that you could reuse them in different situations, but looking back on it I don’t see why one would use e.g. a transition node group on a normal strip. I think a better design should be considered for this.
As for exposing everything in the node group input, I mentioned in the proposal that it would be hard to see what is passed where. Eg. if a node group with four inputs is used as a transition, which inputs are the two clips passed to?
@iss’s solution for it in the chat was selecting each of the inputs manually. Something like this:
I think that, combined with the idea of including strips outside the “host strip”, it makes sense, but it also has downsides. Mainly, you have to manually choose what goes where every time you add a transition. This would slow down the workflow a lot, since generally you only apply a transition one way: from the in strip to the out strip, with the transition factor based on the transition length. That’s why I think it should be defined in the node group itself.
My revised idea is having separate input nodes for normal strips, transitions, and the special meta strips. I think those three would be enough for all cases, since transitions are the only effects you usually apply in an NLE. More complex effects can be done inside the node group, as it is a better tool for the task.
When used inside another node tree the inputs and outputs (From, To, Factor) would be shown like in the previous proposal.
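To make the “factor based on the transition length” part concrete, here’s a rough sketch of the default wiring a transition input node could provide, so nothing has to be hooked up manually per transition (all names here are invented for illustration):

```python
# Hypothetical sketch: the transition input node's Factor output derived
# from the playhead position relative to the transition's start and length.

def transition_factor(current_frame, transition_start, transition_length):
    """0.0 at the start of the transition, 1.0 at its end, clamped outside."""
    if transition_length <= 0:
        return 1.0
    t = (current_frame - transition_start) / transition_length
    return min(max(t, 0.0), 1.0)

# A 20-frame cross-fade starting at frame 100:
print(transition_factor(100, 100, 20))  # 0.0
print(transition_factor(110, 100, 20))  # 0.5
print(transition_factor(130, 100, 20))  # 1.0 (clamped past the end)
```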