Since the animation module started talking about a ghosting system, it became clear that it should be implemented at a more fundamental level, so that more areas of Blender such as Grease Pencil can benefit from it too. This system would be useful for all kinds of animations.
Requirements
There are a few core things that this system needs to be able to do:
Display the evaluated state of one or more objects in the scene at arbitrary points in time, and show several of these states at once.
Allow custom rendering for the ghost frames. Depending on the object type, different ghosting might be needed. It might also be useful to render the ghost frames with different render engines (overlay/workbench/eevee).
Allow custom transformations on the ghost frames. Animators often need to shift ghost frames to e.g. "de-clutter" overlapping frames. Sometimes the inverse is needed too: e.g. in 2D animation, in order to draw an in-between frame, animators will shift ghost frames so they overlap and it becomes easier to see what the in-between pose should be. This is known as "shift & trace".
Proposal
In the following, I'd like to propose an idea of how it could be implemented and gather feedback from developers.
Technical Design
The core idea is to create a separate minimal dependency graph for each ghost frame. Something like this:
These would be stored in an array on the Scene. When the scene is tagged for an update, instead of just tagging the main depsgraph, we also tag the ghost frame depsgraphs. And the same for when the scene is tagged for a frame change.
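To make this a bit more concrete, here is a rough C++ sketch of the idea. All names (DependencyGraph, Scene, tag_update, ...) are made up for illustration and do not correspond to Blender's actual DNA or depsgraph code:

```cpp
/* Purely illustrative sketch: one minimal depsgraph per ghost frame stored on
 * the scene, tagged together with the main depsgraph. */
#include <vector>

struct DependencyGraph {
  float frame = 0.0f;        /* Point in time this graph evaluates. */
  bool needs_update = false;

  void tag_update()
  {
    needs_update = true;
  }
};

struct Scene {
  DependencyGraph main_graph;
  /* One minimal dependency graph per ghost frame, stored on the scene. */
  std::vector<DependencyGraph> ghost_graphs;

  /* When the scene is tagged for an update, tag the ghost graphs too. */
  void tag_update()
  {
    main_graph.tag_update();
    for (DependencyGraph &graph : ghost_graphs) {
      graph.tag_update();
    }
  }

  /* Same idea for a frame change: each graph moves to its own offset frame. */
  void tag_frame_change(float new_frame, float ghost_step)
  {
    main_graph.frame = new_frame;
    float offset = ghost_step;
    for (DependencyGraph &graph : ghost_graphs) {
      graph.frame = new_frame + offset;
      offset += ghost_step;
    }
    tag_update();
  }
};

int main()
{
  Scene scene;
  scene.ghost_graphs.resize(3); /* E.g. three ghost frames after the current one. */
  scene.tag_frame_change(10.0f, 1.0f);
  return 0;
}
```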
For the moment, I think it makes sense to figure out how the simple approach performs and only then think about possible performance improvements. Here are some ideas.
Instead of having multiple dependency graphs, a single dependency graph could evaluate the same IDs at different times (and store their evaluated state for each frame requested). It seems like that would require some major rewrites for the dependency graph code though.
Share as much data as possible between ghost frames. E.g. when the vertex positions of a mesh don't change between frames, they would currently be duplicated in the evaluated states of the ghost frames. They could share this data instead, e.g. with the current frame (see the rough sketch below).
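As a rough illustration of that last idea, with std::shared_ptr standing in for Blender's implicit sharing and EvaluatedMesh as a made-up type: an immutable positions array could simply be referenced by both the current frame and a ghost frame whenever it did not change.

```cpp
/* Rough illustration only; std::shared_ptr stands in for implicit sharing. */
#include <cassert>
#include <memory>
#include <vector>

struct EvaluatedMesh {
  /* Positions live in a shared, immutable array. If a ghost frame evaluates to
   * the same positions as the current frame, both can point at the same array
   * instead of duplicating it. */
  std::shared_ptr<const std::vector<float>> positions;
};

int main()
{
  auto positions = std::make_shared<const std::vector<float>>(
      std::vector<float>{0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f});

  EvaluatedMesh current_frame{positions};
  EvaluatedMesh ghost_frame{positions}; /* Unchanged between frames: share, no copy. */

  /* Both evaluated states reference the exact same memory. */
  assert(current_frame.positions.get() == ghost_frame.positions.get());
  return 0;
}
```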
I use onion skinning for 3D and will try to give feedback based on my experience.
I've tried multiple add-ons that draw ghost frames in Blender and most of them are unusable for me, and the reason is that they do something that is also in your video: ghost frames look exactly like the original mesh. Unless you have the object selected, it becomes impossible to see the original frame. There's also no difference between the previous and after frames. It becomes visual clutter. I use an add-on called Animation Extras, which is free and has onion skinning that draws like this:
It is extremely helpful. Ghost frames are see-through, so you can see what is behind them. They're clearly different from the original, with a difference between the previous and after frames too.
I think there needs to be a general Previous and After ghost frame color in the theme that works on every type of object, including Grease Pencil, and if you have multiple objects and want to differentiate them, you can override at the object level and set custom colors. Similar to how viewport material colors work, I guess.
Also, UI is very important. This is the UI from the add-on, which I cleaned up:
Ability to choose the number of Before and After ghosts separately, so you can have e.g. 1 previous and 3 after.
Ability to display ghosts either around the current frame or over the entire range, like Motion Paths.
Frame Step is also something I work with all the time; it helps a lot when the spacing is too close.
Color and opacity controls are also nice. The Start and End values below the colors create a gradient, so that frames closer to the current frame are more visible than the ones at the end.
In Front/In Back also helps a lot if you have multiple objects and they're intersecting with the ghost frames.
Looking at the video, performance seems amazing and the real-time updates are top notch. I also like the ability to disable them from the overlays.
Thank you for the input, but this is only a technical proposal. It's not meant to address how users interact with ghost frames or how they should be rendered. It's too early for that.
Is there a particular reason it's being called Frame Ghosting - which is a specific, unrelated term in video - instead of Onion Skinning, which is what animators call this?
Right, maybe it should just be Ghosting System. We didn't want to use Onion Skinning because it's usually only used in 2D animation. So it should be more general, imo.
It's descriptive, though. If 3D doesn't have a real, well-known equivalent in terminology, it doesn't hurt to call it the same, I think. On one hand it makes it more distinct to google when looking for tutorials later on; on the other hand it's really weird learning different terminology for the same thing in each software. After all, Blender is not the only software to name things differently.
Then again - this is probably a minor problem in comparison to actually developing the feature itself.
If it's only for developers, why is it open to replies and in the design feedback subforum?
That aside, @nickberckley is correct: onion skinning only works with different colors. Just duplicating the mesh is not feasible for animation. @thorn-neverwake is correct as well: the name for this across all animation software is onion skinning. This proposal feels like a very technically focused idea that is missing the major points of what animators would actually expect from this.
This proposal as it currently exists really needs more input from animators and not just developers.
I'm not sure about that; "onion skinning" is ubiquitous in animation in general. It would work well, I think. I won't comment on the bells & whistles such as colors because you mentioned it's not the focus of this thread, but other commenters have been correct in their observations.
Is it likely to have a big performance cost? Evaluating a complex rig at a single frame can already easily go into sub-realtime speeds, so evaluating even just six or seven times that…
Thank you for putting this mockup/test case together so quickly, and having it actually work on a beveled, non-static mesh is awesome.
It was nice that you and Christoph were able to be in the room together and talk with the module about making it more general.
@everyone commenting so far… I am going to make some suggestions to help you put this energy to use instead of spending it worrying about the onion/ghost wording.
Re-read the post and read the intended use of this area so you have context and understanding of what is posted here.
How best to help if you aren't a developer and are an artist/animator/designer?
You might use these prompts as starters and think on them and share when you have a clear idea.
How would this help you work better/faster if you had it as a feature in Blender?
1a- What might you not have to do anymore, or work around, if you had this feature instead?
What would you want to control if you had this tool? Colors, frames, what do you expect it to allow you to change and see? Read the text of the post under Requirements as a start.
Would you want to use this in other places? E.g. a ghost preview of the pose you are applying from the Pose Library, etc.
Would you use it all the time when animating, or would you limit it to only sometimes? Maybe you prefer editable motion trails, or some combination of both ghosts and trails?
Lots to do, feedback to give, and ways to help make this feature something we don't want to turn off.
Currently, this wouldn't help me work better or faster. If it used a more standard color-silhouette pattern like here: Ghosting System for Animation - #2 by nickberckley, it would make animation both better and faster. As it currently exists, I would turn it off immediately.
Like @nickberckley, I currently use a free add-on called Animation Extras for onion skinning. I don't know that I'd stop using it, to be honest; it's extremely powerful and very easy to use. If Blender could have a built-in system that worked as well, I would stop using that add-on.
No need to re-invent the wheel; I want to control what I can already control.
Specifically, I need to be able to control:
The object being onion skinned
The type of onion skinning (every frame vs direct keys only)
The number of frames before and after the current frame
The color and opacity of before and after frames
The amount of fade of before and after frames
Whether before/after frames are enabled
Transparency / x-ray of before/after frames
Flat color silhouette vs shading
No, I see no need for onion skinning beyond the 3D viewport as it relates to previewing animation.
This definitely should not be on all the time. You only use onion skinning during blocking for animation; it is no longer useful once you get to the spline polishing stage.
I think this should work not just with a rigged character, but with geometries generated/simulated with geonodes, too. It's less obvious how to display the onion skin of, say, a volume, or how useful that would be.
I would also consider having motion paths and onion skins be the same thing, since one is a generalization of the other.
Onion Skinning would be most useful for blocking in pose-to-pose. Having the ability to offset a ghost a little and have the previous pose next to the one you're crafting right now is not something I think anybody will say no to. But I also primarily work on stop-motion, both real (3D printing) and digital, and when doing frame-by-frame work it is just part of the process.
And of course, for motion graphics it can be even more useful than for character animation. But I don't think that it should be merged with motion paths. I think motion paths should stay as they are, but we should be able to handle them like Bezier curves. They serve a VERY different purpose for me in my workflow.
I guess this thread went a little bit off topic, but I think that is because no animator is debating whether this feature should exist or not. There is no disagreement on that. It's something we have wanted for a long time. So naturally the discussion goes to HOW it should be implemented rather than IF.
As for what I should be able to control: the color of before and after frames, opacity, the number of frames before and after (separately), with a frame step (important for when I'm animating on twos). Two things I didn't mention in my previous comment but will add are:
Offset. When working with a character that stays in the same place, being able to offset the ghost in the 3D world so you can see the previous frame fully would be beneficial.
Keyframes-only option. Especially in blocking, it is useful to see not the previous frame, but the previous keyframe.
This goes for 2D too; I do a little bit of Grease Pencil as well. I don't see any reason why they shouldn't be exactly the same.
This would also probably, like motion trails, need to somehow indicate whether there is already a keyframe on that frame. Otherwise you might (maybe accidentally) drag a frame that looks off to you without realizing it's an unkeyed interpolation, and add uncontrolled keys. Just theoretically speaking, at least.
Regarding evaluation and memory usage, ideally you would have all of the following:
Non-animated datablocks evaluated and stored only once (including dependencies of animated datablocks).
Non-animated attributes (e.g. UV maps) evaluated and stored only once, including GPU buffers.
Shared topology cache for subdivision surfaces.
Shared GPU resources for textures and volumes.
Simulation and physics caches shared between frames.
Evaluate multiple frames in parallel.
Background evaluation for when ghosting is slow, so editing and changing a frame is still interactive. First evaluate the current frame synchronously, then all other frames in a background job (see the sketch after this list).
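A minimal sketch of that last point, using std::async as a stand-in for Blender's job system (the real implementation would go through the existing job/task infrastructure and notify the viewport on completion); evaluate_frame() is a placeholder for evaluating a whole ghost depsgraph:

```cpp
/* Minimal sketch of "current frame first, ghosts in the background". */
#include <cstdio>
#include <future>
#include <vector>

static void evaluate_frame(float frame)
{
  std::printf("evaluated frame %.1f\n", frame);
}

int main()
{
  const float current_frame = 10.0f;
  const std::vector<float> ghost_frames = {8.0f, 9.0f, 11.0f, 12.0f};

  /* 1. Evaluate the current frame synchronously so editing stays interactive. */
  evaluate_frame(current_frame);

  /* 2. Evaluate the ghost frames in a background job; the viewport can keep
   * drawing stale ghosts until the job finishes and triggers a redraw. */
  std::future<void> job = std::async(std::launch::async, [&ghost_frames]() {
    for (float frame : ghost_frames) {
      evaluate_frame(frame);
    }
  });

  job.wait(); /* In practice: poll or get notified instead of blocking here. */
  return 0;
}
```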
Sharing data may not be important for an initial version of this. We can imagine cases where e.g. a physics system pulls in a lot of static objects to be evaluated multiple times, or cases where UV maps and vertex colors take up a lot of memory. But for the simpler cases of an animator working in workbench or grease pencil, with objects relatively isolated from the rest of the scene, this is not as much of a problem.
A few potential approaches for sharing data:
Evaluate a secondary depsgraph N times, and make copies of the animated datablocks and other datablocks pointing to them, relying on implicit sharing to reuse memory. Making copies has some cost, and not sharing anything with the current frame depsgraph is not ideal.
Create N copies of the current frame depsgraph with some type of native implicit sharing support inside the depsgraph so it can share datablocks, as long as they are not re-evaluated.
Add support for evaluating datablocks at multiple frames inside one depsgraph, potentially scheduling the entire evaluation for all frames as one big task pool (sketched below).
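To illustrate that last option, here is a very simplified sketch of scheduling the ghost-frame evaluations as one pool of parallel tasks; EvaluatedFrame, evaluate_frame() and the use of std::async are placeholders, not actual depsgraph scheduling code:

```cpp
/* Very simplified sketch: one parallel task per ghost frame. */
#include <cstdio>
#include <future>
#include <vector>

struct EvaluatedFrame {
  float frame;
  /* ...evaluated copies of the animated datablocks would live here... */
};

static EvaluatedFrame evaluate_frame(float frame)
{
  /* Evaluate the animated datablocks at `frame`. */
  return EvaluatedFrame{frame};
}

int main()
{
  const std::vector<float> ghost_frames = {8.0f, 9.0f, 11.0f, 12.0f};

  /* Launch one task per ghost frame. */
  std::vector<std::future<EvaluatedFrame>> tasks;
  for (float frame : ghost_frames) {
    tasks.push_back(std::async(std::launch::async, evaluate_frame, frame));
  }

  /* Collect the results; unchanged data could still be deduplicated between
   * the results through implicit sharing. */
  for (std::future<EvaluatedFrame> &task : tasks) {
    EvaluatedFrame result = task.get();
    std::printf("ghost frame %.1f ready\n", result.frame);
  }
  return 0;
}
```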
In all these approaches, some attributes or datablocks will be re-evaluated even when they are not changing. Ideally both problems are addressed at the same time: if the attributes are not re-evaluated, it's both faster and implicit sharing can deduplicate memory usage. In practice this is not so simple though; geometry nodes and modifiers do not support partial re-evaluation. Deduplication after comparing memory is possible but not ideal for performance.
Ideally GPU buffers for attributes would be shared if the attributes use implicit sharing. But this is not so simple to implement, especially as those attribute values may not be sent directly to the GPU but rather depend on various settings or state.
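Ignoring that complication for a moment, one hypothetical way to share the buffers would be a cache keyed on the implicitly shared attribute data, so that evaluated meshes pointing at the same array only upload it once. This is a sketch under that assumption, not the GPU module's actual API:

```cpp
/* Hypothetical sketch: GPUBuffer and AttributeBufferCache are made-up types. */
#include <cstddef>
#include <cstdio>
#include <memory>
#include <unordered_map>
#include <vector>

struct GPUBuffer {
  std::size_t size_in_bytes = 0;
};

class AttributeBufferCache {
 public:
  /* Key the cache on the address of the shared attribute data: two evaluated
   * meshes that implicitly share their positions map to the same GPU buffer. */
  std::shared_ptr<GPUBuffer> get_or_upload(
      const std::shared_ptr<const std::vector<float>> &attribute)
  {
    const void *key = attribute->data();
    auto it = cache_.find(key);
    if (it != cache_.end()) {
      return it->second; /* Already uploaded, reuse. */
    }
    auto buffer = std::make_shared<GPUBuffer>();
    buffer->size_in_bytes = attribute->size() * sizeof(float);
    /* ...the actual upload to the GPU would happen here... */
    cache_.emplace(key, buffer);
    return buffer;
  }

 private:
  std::unordered_map<const void *, std::shared_ptr<GPUBuffer>> cache_;
};

int main()
{
  auto positions = std::make_shared<const std::vector<float>>(
      std::vector<float>{0.0f, 1.0f, 2.0f});

  AttributeBufferCache cache;
  auto a = cache.get_or_upload(positions); /* Uploads. */
  auto b = cache.get_or_upload(positions); /* Reuses the same buffer. */
  std::printf("shared buffer: %s\n", a.get() == b.get() ? "yes" : "no");
  return 0;
}
```

Such a cache would of course need invalidation when the shared array is freed or replaced, which is where much of the real complexity would lie.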