Proposal: Using compositor nodes on VSE strips

Currently, if you want to use the compositor alongside the video sequence editor, you have to create another scene, do the composition in the other scene, and add the scene back to the VSE. While the feature of including scenes in the VSE is useful for numerous other things, it’s only a workaround for the use of composite nodes. It can’t be used to make custom inputs, it can’t be applied on a strip, it can’t be used to make custom transitions, etc.

There have already been some nice proposals for this feature, such as this one. The intent of this one is to create a more fleshed-out idea of how it could be implemented. This proposal could also bring some potential improvements outside of the VSE. The idea in its essence is fairly simple. Most of this text is explaining how it can be applied to different cases (such as clip transitions).

The idea

What the idea essentially boils down to is removing the strict link between the scene and the composite nodes, and making them usable like most other data blocks.

Just like meshes and materials are their own data blocks you can apply to any object from a dropdown, the composite node trees would also work the same way. You could have any number of them, and a single composite node tree could be applied to multiple strips. The “Output Properties” tab of the properties panel would have a dropdown letting you choose the composite node tree you wish to use. This is similar to how the “World Properties” panel lets you pick the data block for the background of the 3D world.
These images are just there to illustrate what I mean; they’re not an example of what the UI should look like.

This would also make it possible to have multiple different compositor node trees for a scene, which you could quickly switch between.
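The data-block idea above can be illustrated with a small Python sketch. This is purely hypothetical pseudo-model code, not real Blender API: the `node_trees` registry stands in for something like `bpy.data.node_groups`, and `Strip.node_tree` stands in for the proposed dropdown selection.

```python
# Hypothetical model of compositor node trees as shareable data blocks
# (names are illustrative stand-ins, not real Blender API).

class CompositorNodeTree:
    def __init__(self, name):
        self.name = name
        self.nodes = []  # node contents omitted in this sketch

# Registry playing the role of bpy.data.node_groups.
node_trees = {}

def new_tree(name):
    tree = CompositorNodeTree(name)
    node_trees[name] = tree
    return tree

class Strip:
    def __init__(self, name):
        self.name = name
        self.node_tree = None  # the dropdown: a reference, possibly shared

# One tree applied to two strips: editing the tree affects both users,
# exactly like one material shared by two objects.
grade = new_tree("Film Grade")
a, b = Strip("shot_01"), Strip("shot_02")
a.node_tree = node_trees["Film Grade"]
b.node_tree = node_trees["Film Grade"]
assert a.node_tree is b.node_tree  # one data block, two users
```

The key property is the `is` check at the end: both strips reference the same data block rather than each owning a copy.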

Using them in the VSE

Each graphical sequencer strip would have an option to select a single compositing node tree, when the “Use Nodes” toggle is active. This would replace the “Modifiers” section of the strip.

The reason you wouldn’t be able to stack multiple node trees, as in this proposal, is that stacking them is less flexible than using nodes and would only add further complexity. Since these compositor node trees also work as node groups (explained in further detail in the “Changes to the compositor” section), you can apply multiple effects by adding them inside the node tree of your sequencer strip.

A “Composite” strip would also be added. It would give no media input to the nodes, and it would show the node group’s output.

Changes to the compositor

Each compositor node tree would be a node group, just like geometry nodes are: you’d be able to nest compositor node groups inside one another, or use them as-is, e.g. on a VSE strip.

Bringing clips in to the composite node tree

(Below is my reasoning for the idea, the idea itself is in the next subsection)

I wrote my idea for this, but I realized that it had a bunch of flaws. I rewrote the idea to address those issues, but there might be things I haven’t considered. I’d like to hear what other people think.

These are the main things that have to be considered:

  • The node groups should be easily reusable: on different strips, as a node group inside another node tree, as the scene compositor output, as a transition between strips (e.g. it should be easy to apply a node-based transition on a strip, but it should also be easily usable in a node tree). They should also be able to take in an arbitrary number of media inputs.

  • There needs to be a clear distinction of where the media inputs are “automatically” passed in: when the node tree is applied on a strip, or when a node tree is used as a transition. The same goes for the media output. Node groups can have an arbitrary (but predefined) number of inputs, so it needs to be clear to both the user and the program how the media is passed to and from the node tree.

The fixed proposal

Media In node
A new node would be added, a “Media In” node. This node would work as a generic input node, taking whatever is passed to it: a sequencer strip, the scene, input from another node tree, etc.
(Special case: if the node group is used on a scene and there is no “Render Layer” node, the scene the node group is applied on is passed into the Media In node using the default render layer.)

The Media In node would have a “Channel” input. It would be used to select which of the “automatically” passed-in media would be used. There would be a default channel (0) where most things would be passed in: VSE strips, scene renders, etc. For things like transitions between strips, a set of predetermined channels would be documented for each case (e.g. strip transitioned from: 0, strip transitioned to: 1; there will be more about transitions later). A dropdown with textual names for each channel would sit next to it, to make it easier to select the correct channel.

When used as a node group inside another node tree, each used channel would be listed at the top of the input sockets before the node group input sockets.
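The channel lookup described above can be sketched in a few lines of hypothetical Python. This is a model of the proposed behaviour, not real API: the host (a strip, a transition, a scene) fills a channel-to-media mapping before the node tree is evaluated, and the Media In node simply reads from it.

```python
# Hypothetical sketch of the proposed "Media In" channel lookup.
# Channel 0 is the default channel where most things are passed in.

DEFAULT_CHANNEL = 0

def media_in(channels, channel=DEFAULT_CHANNEL):
    """Return whatever media the host passed in on the given channel,
    or None if the host provided nothing there."""
    return channels.get(channel)

# A plain strip passes its own image on channel 0.
strip_channels = {0: "strip pixels"}
assert media_in(strip_channels) == "strip pixels"

# A transition passes two media: 0 = strip transitioned from,
# 1 = strip transitioned to.
transition_channels = {0: "clip A", 1: "clip B"}
assert media_in(transition_channels, 1) == "clip B"
```

When such a tree is used as a node group inside another tree, each channel referenced by a Media In node would surface as an extra input socket, which is what the `channels` dict stands in for here.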

Media Out node

This node is the same as the current “Composite” node, but I’m calling it the Media Out node for the sake of consistency. (Naming is something I haven’t considered; these are just the names I use to explain the idea.) The Media Out node, like the Media In node, is read by whatever the node group is used on. It is also shown at the top of the output socket list when used as a node group.

Group Input and Group Output nodes

When used inside another node tree, the group input/output nodes would function the same way as they did before. The only difference would be that the sockets would be listed after the media in/out sockets as mentioned before.

The group input/output nodes are never read from or written to automatically. This creates the distinction between user modifiable inputs/outputs and the ones written to and read from by other systems such as the sequencer.

The group input node properties would be shown in the sequencer and in the post processing section of the properties panel, just like how they’re shown with geometry nodes. This would make it possible to create reusable templates.

Example of a node group called “Strip Compositing Nodes”:

Used “as a node group”:
Applied on a strip in the sequencer:

Compositor keyframes and resolution

Each compositing node tree would have its own frame rate. This could be changed, for example, under the “Use Nodes” button. The resolution of the output would be determined by the amount of space taken up by the different nodes in the composition (e.g. a composition with only a 50x50 px image would be 50x50 px).

Keyframes in a composition node tree would be local to that node tree. This is similar to how the sequencer strip keyframes move around with the strip as you move the strip around the timeline.
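The node-tree-local keyframes described above can be sketched as a simple frame mapping. This is a hypothetical model of the proposed behaviour: keyframes are stored relative to the node tree, and the host strip's start frame is added at evaluation time, so moving the strip moves the keyframes with it.

```python
# Hypothetical sketch: node tree keyframes are local, so they travel
# with whatever strip the tree is applied on.

def to_timeline_frame(local_frame, strip_start):
    """Map a node-tree-local keyframe frame to an absolute timeline frame."""
    return strip_start + local_frame

# A keyframe at local frame 10 of a strip starting at frame 100 lands
# at timeline frame 110; moving the strip to frame 200 carries it to 210
# without touching the keyframe itself.
assert to_timeline_frame(10, 100) == 110
assert to_timeline_frame(10, 200) == 210
```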

Sequencer in more detail


A new “Node Group” (temporary name) transition strip would be added. It would be applied to strips the same way other transitions like Gamma Cross are applied. It would also have the same node group dropdown other strips have.

On the node side, in the Media In node, the channel input would be used to select which clip the Media In node receives. Channel 0 would be the strip that’s being transitioned from, and channel 1 the strip that’s being transitioned to.
The fade factor/progress would be exposed as a property in an appropriate node. The transition factor would be exposed as an input socket when the node group is used in another node tree, just like how the Media In channels are exposed.

Cross fade example
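A cross fade like the one above can be sketched with the proposed channel convention. This is hypothetical model code (not real API): channel 0 carries the strip transitioned from, channel 1 the strip transitioned to, and the factor is derived from the playhead's position inside the transition strip.

```python
# Hypothetical sketch of evaluating a node-based cross fade transition.

def transition_factor(frame, start, end):
    """0.0 at the transition's first frame, 1.0 at its last."""
    return (frame - start) / (end - start)

def cross_fade(value_from, value_to, factor):
    """Linear blend between the two inputs, per pixel value."""
    return value_from * (1.0 - factor) + value_to * factor

# Halfway through a transition strip running from frame 100 to 120:
f = transition_factor(110, 100, 120)
assert f == 0.5
assert cross_fade(0.0, 1.0, f) == 0.5
```

In the actual proposal the blend would of course operate on images rather than single values, and the factor would arrive through the exposed Factor socket when the group is nested in another tree.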

Meta strips

Meta strips would have a “Layers as separate channels” option; this would make it possible to access each layer individually by setting the layer number as the channel number in the Media In node. This is helpful if you have multiple layers with multiple strips and you want to composite them together in the same way. The option would be turned off by default.
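The meta strip option just described can be sketched as a choice between two channel mappings. Hypothetical model code again, with made-up placeholder media values:

```python
# Hypothetical sketch of "Layers as separate channels" on a meta strip.

def meta_strip_channels(layers, layers_as_channels):
    """Build the channel -> media mapping a meta strip hands to its
    node tree's Media In nodes."""
    if not layers_as_channels:
        # Default: the meta strip's layers are flattened first, and only
        # the combined result is available, on channel 0.
        return {0: "flattened result"}
    # Option enabled: each sequencer layer becomes its own channel,
    # addressable by its layer number.
    return dict(layers)

layers = {1: "background plate", 2: "foreground titles"}
assert meta_strip_channels(layers, False) == {0: "flattened result"}
assert meta_strip_channels(layers, True)[2] == "foreground titles"
```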


Other things

  • Additional feature: In the scene strip, a dropdown to select which compositing node tree to use for the render.
  • Forward compatibility: Older versions of Blender wouldn’t be able to handle this new system
  • Currently there is no timeline for the compositor to edit the image/movie start and end times. I’ll make a proposal for one later.

I didn’t include my reasonings for every section of this proposal. If there are any questions or suggestions for improving it, I’d like to hear them.


Some previous writeups on the same topic:
This fork added a sequencer node (which lets the user input a VSE channel into the compositor):

The main problem in the current design of Blender is that the Sequencer is supposed to be at the end of the process and the sequencer data is stored inside scenes, which means that the sequencer can’t reference things from e.g. the 3D view or the compositor located within the same scene. I’ve written about solutions for that here:

And here:

In the old days, it was still possible to assign e.g. elements from the 3D view to the same scene’s sequencer, which led to this very much industry-standard workflow (found in most game engines and 3D software):

When doing e.g. GP storyboards, it is really counterintuitive that users have to switch scenes in order to edit the contents of a drawing/strip.

And here on the overall vision for the VSE:

As you can see, I have tried to bang the drum for this stuff over many years, but unsuccessfully. It’s only during the Blender Studio productions that they realize the shortcomings of the VSE and throw in some hacks, but no BF dev cares about developing a proper vision for the VSE.


We do care.
The issue is that some people (including you) have a very hard time understanding that caring doesn’t translate into unlimited resources and time. We currently have a lot of areas in Blender that need work, so we have to prioritize what things we will put into our time budget.

There were actually talks of removing the VSE for the 2.8 release because we didn’t feel we could dedicate enough developer time to it. But because we care about having a video editor and want to improve it, it was decided to keep it in.

I’m going to be honest, it is really disheartening seeing you go around spreading toxicity just because the VSE is not improving at the pace you would like. This makes it really hard to take any of your suggestions seriously anymore. Especially since collaborating with you is then almost impossible, as you seem very emotionally unstable. One day you are trying to submit improvements, and another day you want to remove all your patches and say you will never contribute anything to Blender ever again.
(But you come back after a few weeks regardless, with seemingly more hatred than before)

If you are going to see the Blender developers as malicious entities, then that is not a good place for team work to happen.

Now of course we are not perfect and a lot of things are in a terrible state. However painting the situation as impossible to resolve with us is not helping anyone.


Blender’s compositor is in serious need of a re-architecture. It’s literally unusable for anything but the most basic slap comps.

  • No caching; each change triggers a re-render of the entire graph. This is just terrible.
  • No on-screen controls for widgets like transforms.
  • No spline/tracker/roto embedded into the compositor viewer, which is literally the place your eyes are when you are working. Working on roto shapes outside of the final comp is just wrong on so many levels.
  • No temporal tools (frame offset, frame hold, etc.)
  • No proper multichannel workflows.

Been hammering on this topic before:

And maybe then, when all that is done, is it even worth discussing bringing it on top of the VSE.


You might want to check the latest release again :slight_smile: (see rBfa2c1698b077)
EDIT: Sorry, thought this was about the VSE.

I think he is talking about the compositor and not the VSE.



But the VSE is also lacking purely from a quality POV. The new transform widgets etc. are all great. But the terrible filtering is just wreaking havoc on anything you scale up and down or rotate.

While the tooling is getting there, the quality of the video output is unusable (when you use transforms).

A lot of work has been done recently on the compositor by Manuel Castilla (he also did the fork which featured the mentioned sequencer node):

@ZedDB If you care, maybe you could comment on the proposal?

I have already reached out to the author about having a chat to do some onboarding about the general state of things and how we could proceed.

I didn’t want to do that in this thread as I feel we have already derailed this topic enough as none of the replies here actually respond to what the author has written in their proposal.


@ok-what I don’t know if you tried out the compositor-up fork I mentioned above? You can actually do the transition stuff you mention with it:

Here’s a file if you want to try it out yourself (it needs to be run in that fork):

The first scene is with the nodes and the second scene is returning the comp as a strip into a sequence.

I tried it.

I think including VSE channels in the compositor is a useful feature, especially if/when you’ll be able to have multiple sequences like you proposed. For the usual use cases though, it’s quite inconvenient and won’t be enough.
In most cases you have multiple videos which you add after each other on the timeline. Each video usually either has no compositing, or compositing applied to it which is separate from the other videos.
When including an entire channel into a single compositing node tree, it’s nearly impossible to have multiple strips on the timeline with separate effects. Transitions and other effects also can’t be reused, and you’ll have to manually duplicate the effect and adjust the keyframes for each transition. You’ll also have to redo the layering in the compositor.

Nice to see that there has been work done on this though.

The argument I have heard repeated over the years for not round-tripping strips in the Compositor is that the Sequencer is supposed to be last; however, the “sequence channel node” in this fork proves that it actually can work to feed VSE material into the Compositor (and thereby not be last in the order).

An alternative approach would be to not use the Compositor Editor but only the node editing UI, as a strip-editing node editor (which switches content based on the active strip selection), which could reuse the compositor nodes (the code of the VSE modifiers is copied from those nodes anyway). So, in a way, it would just be using nodes as a UI for effects instead of layering effect strips or a modifier stack. Doing it this way would work around the VSE-embedded-in-the-Scene problem, but the question is whether the overall gain of moving Sequencer contents into scene-independent data blocks would be much greater than just changing the UI of the existing VSE filters/effects/modifiers (like using the full power of the compositor on Sequencer material)?

Thanks for the proposal.
I would like to implement such a flexible system, but this definitely needs to be evaluated a bit more in detail. Here is my as-short-as-possible overview and my opinions.

Personally I like your thinking of node groups as interchangeable blocks, similar to the way GN handles this.

The thing I don’t quite like is the handling of input and output. Having nodes that process a channel without any relation to a particular strip is quite a weak design that would hinder interchangeability. GN handles this well by exposing node tree inputs in the modifier panel, and I don’t quite see why not to use this approach in the VSE to provide particular inputs that can then be moved around without affecting the final image. The custom resolution idea seems unnecessary; images should be processed as they are. Operating at a custom frame rate seems very odd. The node tree should do just image processing, not think about time.

I think an important point is that, while the nodes from the compositor look like they could be used for the image processing needs of the VSE, I wouldn’t mix compositor node groups and VSE-specific ones. These editors serve quite different purposes and should be kept focused on those purposes. So the VSE should have its own node editor/view, its node trees should not be usable for compositing, there could be VSE-specific nodes, and 3D-specific compositing nodes would be excluded.

What’s not mentioned here is the handling of inputs based on time. For example, effects are currently meant to use strips that overlap; in 2.79, or perhaps before, they were marked as invalid if this wasn’t the case. While you can have time remapping/offset nodes to access strip content outside of the “host strip” boundary, this is not a practical way to normally do things, because otherwise, why would you even need strips? You could then use image sequence nodes and apply time offsets until you get them lined up in time. But again, this is impractical, and the VSE would suit this situation better. I don’t quite have a satisfying solution for this problem, but I think it is the most important one to address to quite good satisfaction; otherwise there is a good chance that the implementation will fall apart and a good amount of time will be wasted.

Assuming it’s only me working on this half-time, I think this could be done in a relatively reasonable time if all things go well (never happens), but it would probably be best to plan such a change for the 4.0 release. Not only because of scope, but because I am not sure whether current files could be ported over to the new system, so this would be a breaking change.

I think we can discuss this here, but feel free to poke me on chat too; still, these discussions should be done openly.


Thanks for the review.

Yes, I agree. The idea was to be able to have e.g. a preset text animation (from the asset browser), which would run at a constant, predictable frame rate. This is not an issue specific to the compositor though, and adding it would only add unnecessary complexity. I talked about it with ZedDB and he mentioned that other animation assets will also need to be usable at a constant frame rate. It could be achieved e.g. by retiming the keyframe durations when applying on a scene/strip/rig with a different frame rate. (Afaik how it will be implemented is still open.)
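The retiming mentioned above could be sketched roughly as follows. This is a hypothetical illustration of the general idea only, since (as noted) the actual implementation is still open: keyframe frame numbers are scaled by the frame-rate ratio so an asset authored at one fps keeps the same real-time duration at another fps.

```python
# Hypothetical sketch: retime keyframes when an asset authored at one
# frame rate is applied on a scene/strip running at another.

def retime_keyframes(keyframes, asset_fps, target_fps):
    """Scale keyframe frame numbers so real-time timing is preserved."""
    scale = target_fps / asset_fps
    return [round(frame * scale) for frame in keyframes]

# A 24 fps text animation keyed at frames 0, 12, 24 (one second long)
# becomes 0, 30, 60 in a 60 fps scene: still exactly one second.
assert retime_keyframes([0, 12, 24], 24, 60) == [0, 30, 60]
```

Rounding to whole frames is one possible choice here; sub-frame keyframe positions would be another.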

I’m not sure about excluding 3D nodes from the sequencer. Just like you can use a scene render as a sequencer strip, one might want to use scene renders in the compositing nodes of strips. Maybe you want to add effects to the render, or composite multiple scenes or render layers together. There is a performance penalty for it, but I think there are also many use cases that people would take the performance penalty for.

On the sequencer side there are some obvious nodes that can’t be used for 3D scenes like the time remapping nodes you mentioned. I’m not sure whether it’d be worth splitting the editors entirely because of those though. Reusable node groups like vignettes, color grading setups, etc. would have to be remade for the sequencer and the 3D compositor individually if it was split. An error could be shown if invalid nodes are used, like how the compositor shows an error when the composite node is missing.

Maybe there could be an option to mark a node group as a sequencer node group, a 3D scene node group, or both? Invalid node types would be highlighted in red and greyed out in the add menu, and a node group could only be applied if it’s marked for use in that context (sequencer or 3D scene). That could work as a compromise between the two. I’m not sure about it though.

Accessing strips outside the “host strip”?

I asked @iss about this in the chat, and the general idea he has is being able to use strips outside the strip the node group is applied on. Somewhat like this:

I intentionally left it out, since I think the compositor and the sequencer should be separated. E.g. you shouldn’t be able to bring strips from the sequencer into the compositor, other than the strip the node group is applied on.

The strips in the sequencer are layered on top of each other in a clear order. If you were able to bring in strips to the node tree, this layer system would be broken, which would make it confusing to use. Maybe a strip is used by multiple strips, maybe it’s used multiple times on different “layers” in the same strip. With nodes it’s easy to see at a glance how each input is used and where, but this wouldn’t be possible to visualize in the sequencer.

The only thing the sequencer could be used for in this case is timing the strips. Since that would break the layering and make it confusing to use, my suggestion is moving that responsibility to an animation/keyframe editor. This is something I briefly mentioned at the end of the proposal. The videos outside the host strip would instead be brought into the compositor as an image sequence node. The start and end times of those clips would be adjusted in the node keyframe editor.

In addition to providing a solution to that issue, it would also help displaying node tree keyframes in general. Node tree keyframes are displayed in the dope sheet using the name of the value being keyframed. The node name isn’t visible, which makes it hard to find what you’re trying to keyframe.
Which Fac is which?:

The new editor would show a clear hierarchy of what nodes the values are under, which would make working with it easier.

I haven’t had time yet to make a full proposal for it, but it would work very roughly like this:

The VSE and strips are useful for combining different shots / video clips together, adding transitions between those clips, and timing audio. That’s something you can’t do with nodes. Nodes are useful for adding effects onto those clips. The timing of the different effects applied on the clips is relative to that clip, and exposing them outside the node tree would make it difficult to work with the timing of clips in the sequencer.

Selecting video clips that aren’t in the timeline as inputs could be useful though, if you for example have some stylized node group preset that tiles different clips together.

Input/output handling

Yeah, I think there should be some further consideration of how the input and output would be handled. My earlier reasoning for using channels for everything was that you could reuse them in different situations, but looking back on it I don’t see why one would use e.g. a transition node group on a normal strip. I think a better design should be considered for this.

As for exposing everything in the node group input, I mentioned in the proposal that it would be hard to see what is passed where. E.g. if a transition is applied on a node group with four inputs, where are the two clips used for the transition passed?
@iss’s solution for it in the chat was selecting each of the inputs manually. Something like this:

I think combined with the idea of including strips outside the “host strip” it makes sense, but it also has downsides. Mainly that you have to manually choose what goes where every time you add a transition. This would slow down the workflow a lot, since generally you only apply the transition in a certain way: from the in strip to the out strip with the transition factor being based on the transition length. That’s why I think it should be defined in the node group itself.

My revised idea is having separate input nodes for normal strips, transitions, and the special meta strips. I think those three would be enough for all cases, since transitions are the only effects you usually apply in an NLE. More complex effects can be done inside the node group, as it is a better tool for the task.
Transition example:
When used inside another node tree the inputs and outputs (From, To, Factor) would be shown like in the previous proposal.
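The revised idea above can be sketched as a validity check between a node tree's input node type and the host it is applied on. This is hypothetical model code; the input-set names are made up for illustration:

```python
# Hypothetical sketch of the revised design: separate input nodes per host
# type, instead of one generic Media In node with numbered channels.

STRIP_INPUTS = {"Image"}                    # Strip Input node
TRANSITION_INPUTS = {"From", "To", "Factor"}  # Transition Input node
META_INPUTS = {"Layer"}                     # Meta Strip Input node

def valid_on_host(required_inputs, host_inputs):
    """A node tree is only valid on a host that provides every input
    its input node asks for."""
    return required_inputs <= host_inputs

# A cross-fade tree asking for From/To/Factor fits on a transition strip
# but is rejected on a plain strip, making the mismatch explicit instead
# of silently wiring the wrong channels.
assert valid_on_host({"From", "To", "Factor"}, TRANSITION_INPUTS)
assert not valid_on_host({"From", "To", "Factor"}, STRIP_INPUTS)
```

The benefit over the channel scheme is that the node tree itself declares what kind of host it expects, so nothing has to be wired manually when a transition is added.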

If I understood correctly, the thing that would be different from my proposal is having a separate node editor as a workaround? I think that sounds like a somewhat hacky way to solve the issue.

I guess, that’s what ISS is writing about here:


Okay. If it makes sense from the UX perspective, I think we should do it that way. If it’s there only to work around old code, I think that’s a bad way to solve it. I’m currently not quite convinced by the idea though because of the reasons I mentioned in my reply to iss.

Moving the VSE content out of the scene data structure would allow for much better integration into the various editors, including the compositor, since it would make it possible to access the current scene in scene strips (which was possible up to 2.79) and thereby avoid the problems of recursion (both in and out of the sequencer and compositor). So, for me, this is a superior solution. Suggestion: make the Sequencer contents into a data-block, which can override the switched-to Scene-Sequencer contents. And solving the slow speeds of scene strips should be prioritized over any node work, imo, since it is already possible to get compositor output into the sequencer that way. The poorly exposed Scene Strip properties result in bad UX:

Currently, there are already UIs for effect strips and UIs for modifiers; adding a UI for basically the same functions in a node editor would just result in more UI clutter (though nodes are nicer than strips, which give an Afx-like workflow, and modifiers, which add similarity to the 3D workflow, while nodes add similarity to the Resolve workflow).

An alternative way to get strips in and out of the compositor could be as movie-clip data-blocks. The VSE could have the option to convert imported a/v material into movie clip/sound/image data-blocks (there is an add-on for this). Using the movie clip node, the movie clip data-block (source file) can already be imported into the compositor, so what’s missing is storing the in and out points of the movie/image data-blocks from the strips and reading these into the node; a data-block output node would also be needed. This way the source strip could be round-tripped through the compositor and returned into the movie strip (with an option to select the compositor data-block output instead of the source a/v material).

Converting a/v material to data-blocks will also have the advantages of the Asset Browser and the Outliner will be able to show these files and users can tag, organize and drag & drop them into the sequencer:


Afaik effect strips are currently being phased out. The transform strip has already been removed, and the general idea seems to be that once there’s a better system they should be removed altogether:

So this would replace both the effect strips and modifiers. That would remove clutter from the current system.

I think an issue with that is that the node setup is linked to a specific video clip. The same setup couldn’t be easily reused on different clips.

Moving the VSE content out of scene data structure […] is a superior solution.

I think these are separate issues.