Since the animation module started talking about a ghosting system, it became clear that it should be implemented at a more fundamental level, so that more areas of Blender, such as Grease Pencil, can benefit from it too. This system would be useful for all kinds of animations.
There are a few core things that this system needs to be able to do:
Display the evaluated state of one or more objects in the scene at multiple points in time simultaneously.
Allow custom rendering for the ghost frames. Depending on the object type, different ghosting might be needed. It might also be useful to render the ghost frames with different render engines (overlay/workbench/eevee).
Allow custom transformations on the ghost frames. Animators often need to shift ghost frames to e.g. “de-clutter” overlapping frames. Sometimes the inverse is needed too, e.g. in 2D animation, in order to draw an in-between frame, animators will shift ghost frames so they overlap and it becomes easier to see what the in-between pose should be. This is known as “shift & trace”.
In the following, I’d like to propose an idea of how it could be implemented and gather feedback from developers.
The core idea is to create a separate minimal dependency graph for each ghost frame. Something like this:
These would be stored in an array on the Scene. When the scene is tagged for an update, instead of just tagging the main depsgraph, we also tag the ghost frame depsgraphs. And the same for when the scene is tagged for a frame change.
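To make the idea a bit more concrete, here is a minimal Python sketch of the proposed structure. All class and method names here are hypothetical, invented for illustration; this is not actual Blender code or API:

```python
# Hypothetical sketch: the scene owns one minimal depsgraph per ghost
# frame, and update/frame-change tags are forwarded to all of them.

class Depsgraph:
    def __init__(self, frame):
        self.frame = frame
        self.needs_update = False

    def tag_update(self):
        self.needs_update = True


class Scene:
    def __init__(self, current_frame, ghost_frames):
        self.main_depsgraph = Depsgraph(current_frame)
        # One minimal depsgraph per ghost frame, stored on the scene.
        self.ghost_depsgraphs = [Depsgraph(f) for f in ghost_frames]

    def tag_update(self):
        # Tagging the scene tags the main depsgraph *and* all ghosts.
        self.main_depsgraph.tag_update()
        for dg in self.ghost_depsgraphs:
            dg.tag_update()

    def set_frame(self, frame):
        # A frame change shifts every ghost depsgraph by the same offset.
        offset = frame - self.main_depsgraph.frame
        self.main_depsgraph.frame = frame
        for dg in self.ghost_depsgraphs:
            dg.frame += offset
        self.tag_update()
```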
For the moment, I think it makes sense to figure out how the simple approach performs and only then think about possible performance improvements. Here are some ideas.
Instead of having multiple dependency graphs, a single dependency graph could evaluate the same IDs at different times (and store their evaluated state for each frame requested). It seems like that would require some major rewrites for the dependency graph code though.
Share as much data as possible between ghost frames. E.g. when the vertex positions of a mesh don’t change between frames, they would otherwise be duplicated in the evaluated states of the ghost frames; instead, they could share this data, e.g. with the current frame.
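As a rough sketch of the sharing idea, assuming some `evaluate_positions` function that stands in for real depsgraph evaluation (hypothetical, for illustration only):

```python
# Sketch: reuse the current frame's vertex positions for any ghost frame
# whose evaluated positions turn out identical, instead of keeping a
# duplicate copy per frame.

def evaluate_ghost_frames(evaluate_positions, current_frame, ghost_frames):
    current = evaluate_positions(current_frame)
    evaluated = {current_frame: current}
    for frame in ghost_frames:
        positions = evaluate_positions(frame)
        # If the positions match the current frame's, share that object
        # rather than storing a second copy in memory.
        if positions == current:
            positions = current
        evaluated[frame] = positions
    return evaluated
```

In real code the comparison would be replaced by implicit sharing or change tracking, since comparing full buffers per frame is itself costly.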
I use onion skinning for 3D and will try to give feedback based on my experience.
I’ve tried multiple add-ons that draw ghost frames in Blender, and most of them are unusable for me. The reason is that they do something that is also in your video: ghost frames look exactly like the original mesh. Unless you have the object selected, it becomes impossible to see the original frame. There’s also no difference between previous and following frames. It becomes visual clutter. I use an add-on called Animation Extras, which is free and has onion skinning that draws like this:
It is extremely helpful. Ghost frames are transparent, so you can see through them if there’s something behind. They’re clearly different from the original, with a visible difference between previous and following frames too.
I think there need to be general Previous and After ghost frame colors in the theme that work on every type of object, including Grease Pencil. If you have multiple objects and want to differentiate them, you can override at the object level and set custom colors. Similar to how viewport material colors work, I guess.
Also, the UI is very important. This is the UI from the add-on, which I cleaned up:
Ability to choose Before and After ghost counts separately, so you can have 1 previous and 3 after, for example.
Ability to display either around the current frame or the entire range, like Motion Paths.
Frame Step is also something I work with all the time; when spacing is too close, it helps a lot.
Color and opacity controls are also nice. The Start and End values below the colors create a gradient, so that frames closer to the current frame are more visible than the ones at the end.
In Front/In Back also helps a lot if you have multiple objects that intersect with the ghost frames.
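To illustrate how these controls could fit together, here is a small Python sketch mapping separate before/after counts, a frame step, and a start/end opacity gradient to a list of ghost frames. All parameter names are made up for illustration, not from any actual add-on or Blender code:

```python
# Sketch: compute (frame, opacity) pairs for ghost frames from the
# controls described above. Opacity fades linearly from the closest
# ghost (opacity_start) to the farthest (opacity_end).

def ghost_frames(current, before=1, after=3, step=1,
                 opacity_start=0.5, opacity_end=0.1):
    frames = []
    for direction, count in ((-1, before), (1, after)):
        for i in range(1, count + 1):
            frame = current + direction * i * step
            # Interpolation factor: 0 at the closest ghost, 1 at the farthest.
            t = (i - 1) / (count - 1) if count > 1 else 0.0
            opacity = opacity_start + t * (opacity_end - opacity_start)
            frames.append((frame, round(opacity, 3)))
    return sorted(frames)

# 1 ghost before and 3 after, stepping every 2 frames ("on twos").
print(ghost_frames(current=10, before=1, after=3, step=2))
```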
Looking at the video, performance seems amazing and the real-time updates are top notch. I also like the ability to disable them from the overlays.
It’s descriptive, though. If 3D doesn’t have a well-known equivalent in terminology, it doesn’t hurt to call it the same thing, I think. On one hand, it makes it more distinct to google when looking for tutorials later on; on the other hand, it’s really weird learning different terminology for the same thing in each piece of software. After all, Blender is not the only software to name things differently.
Then again - this is probably a minor problem in comparison to actually developing the feature itself.
If it’s only for developers, why is it open to replies and in the design feedback subforum?
That aside, @nickberckley is correct: onion skinning only works with different colors. Just duplicating the mesh is not feasible for animation. @thorn-neverwake is correct as well: the name for this across all animation software is onion skinning. This proposal feels like a very technically focused idea that is missing the major points of what animators would actually expect from this.
This proposal as it currently exists really needs more input from animators and not just developers.
I’m not sure about that, “onion skinning” is ubiquitous in animation in general. It would work well, I think. I won’t comment on the bells & whistles such as colors because you mentioned it’s not the focus of this thread, but other commenters have been correct in their observations.
Is it likely to have a big performance cost? Evaluating a complex rig at a single frame can already easily drop below real-time speeds, so evaluating that rig even just six or seven times…
Currently, this wouldn’t help me work better or faster. If it used a more standard color-silhouette pattern like here: Ghosting System for Animation - #2 by nickberckley it would make animation both better and faster. As it currently exists, I would turn it off immediately.
Like @nickberckley, I currently use a free add-on called Animation Extras for onion skinning. I don’t know that I’d stop using it, to be honest; it’s extremely powerful and very easy to use. If Blender could have a built-in system that worked as well, I would stop using that add-on.
No need to re-invent the wheel; I want to control what I can already control.
Specifically, I need to be able to control:
The object being onion skinned
The type of onion skinning (every frame vs direct keys only)
The number of frames before and after the current frame
The color and opacity of before and after frames
The amount of fade of before and after frames
Whether before/after frames are enabled
Transparency / x-ray of before/after frames
Flat color silhouette vs shading
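The list above could be gathered into a single settings struct. This is just an illustrative sketch; all field names and defaults are invented, not from Blender or the add-on:

```python
# Sketch: one possible settings struct covering the requested controls.
from dataclasses import dataclass

@dataclass
class OnionSkinSettings:
    target_object: str = ""        # object being onion skinned
    keyframes_only: bool = False   # every frame vs direct keys only
    frames_before: int = 2         # frames shown before the current frame
    frames_after: int = 2          # frames shown after the current frame
    color_before: tuple = (0.0, 0.5, 1.0)
    color_after: tuple = (1.0, 0.3, 0.0)
    opacity: float = 0.5
    fade: float = 0.5              # fade-out amount toward farthest ghosts
    show_before: bool = True
    show_after: bool = True
    xray: bool = False             # transparency / x-ray of ghost frames
    flat_silhouette: bool = True   # flat color silhouette vs shading
```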
No, I see no need for onion skinning beyond the 3D viewport as it relates to previewing animation.
This definitely should not be on all the time. You only use onion skinning during blocking for animation; it is no longer useful once you get to the splining and polishing stage.
I think this should work not just with a rigged character, but with geometry generated/simulated with geometry nodes, too. It’s less obvious how to display the onion skin of, say, a volume, or how useful that would be.
I would also consider making motion paths and onion skins the same thing, since one is a generalization of the other.
Onion skinning would be most useful for blocking in pose-to-pose animation. Having the ability to offset a ghost a little and have the previous pose next to the one you’re crafting right now is not something I think anybody will say no to. But I also primarily work on stop-motion, both real (3D printing) and digital, and when doing frame-by-frame work it is just part of the process.
And of course, for motion graphics it can be even more useful than for character animation. But I don’t think it should be merged with motion paths. I think motion paths should stay as they are, but we should be able to handle them like Bézier curves. They serve a VERY different purpose in my workflow.
I guess this thread went a little off topic, but I think that is because no animator is debating whether this feature should exist. There is no disagreement on that; it’s something we have wanted for a long time. So naturally the discussion goes to HOW it should be implemented rather than IF.
As for what I should be able to control: color of before and after frames, opacity, number of frames before and after (separately), with a step (important for when I’m animating on twos), plus two things I didn’t mention in my previous comment:
Offset. When working with a character that stays in the same place, being able to offset the ghosts in the 3D world so you can see the previous frame fully would be beneficial.
Keyframes-only option. Especially in blocking, it is useful to see not the previous frame but the previous keyframe.
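The keyframes-only idea could look something like this sketch: instead of stepping through frames, pick the nearest keyframes around the current frame. The function and its parameters are hypothetical, for illustration only:

```python
# Sketch: choose ghost frames from the object's keyframes rather than
# stepping through every frame.

def ghost_frames_from_keys(keyframes, current, before=2, after=2):
    keys = sorted(keyframes)
    # Nearest `before` keyframes strictly before the current frame.
    previous = [f for f in keys if f < current][-before:] if before else []
    # Nearest `after` keyframes strictly after the current frame.
    following = [f for f in keys if f > current][:after]
    return previous + following

keys = [1, 5, 9, 14, 22, 30]
print(ghost_frames_from_keys(keys, current=14, before=2, after=2))
```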
This goes for 2D too; I do a little bit of Grease Pencil as well. I don’t see any reason why the two shouldn’t be exactly the same.
Ah, yes. I call those motion trails because that’s how they’re named in Maya. Didn’t quite think so far. And yet! What about an onion skin that you can control, just like a Bézier handle? Imagine your animated character onion skinned… how about you just click & drag on a frame, and Blender lets you move the controller on that frame, regardless of the current frame? This is the time for fantasies.
Our workflows are what they are because the tools are what they are. Let’s come up with something even better
Like motion trails, this would probably also need to somehow indicate whether there is already a keyframe on that frame. Otherwise you might (perhaps accidentally) drag a frame that looks off to you without realizing it’s an unkeyed interpolation, and add uncontrolled keys. Just theoretically speaking, at least.
Regarding evaluation and memory usage, ideally you would have all of the following:
Non-animated datablocks evaluated and stored only once (including dependencies of animated datablocks).
Non-animated attributes (e.g. UV maps) evaluated and stored only once, including GPU buffers.
Shared topology cache for subdivision surfaces.
Shared GPU resources for textures and volumes.
Simulation and physics caches shared between frames.
Multiple frames evaluated in parallel.
Background evaluation for when ghosting is slow, so editing and changing a frame is still interactive. First evaluate current frame synchronously, then all other frames in a background job.
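The background-evaluation point could be sketched roughly like this, assuming some `evaluate_frame` callable that stands in for a full depsgraph evaluation (hypothetical names throughout):

```python
# Sketch: evaluate the current frame synchronously so editing stays
# interactive, then evaluate the ghost frames in background workers.

from concurrent.futures import ThreadPoolExecutor

def evaluate_all(evaluate_frame, current_frame, ghost_frames):
    # Current frame first, on the calling thread.
    results = {current_frame: evaluate_frame(current_frame)}
    # Ghost frames afterwards, in a background thread pool.
    with ThreadPoolExecutor() as pool:
        futures = {f: pool.submit(evaluate_frame, f) for f in ghost_frames}
    for frame, future in futures.items():
        results[frame] = future.result()
    return results
```

A real implementation would return as soon as the current frame is done and fill in ghost results as they arrive, rather than blocking on the pool as this sketch does.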
Sharing data may not be important for an initial version of this. We can imagine cases where e.g. a physics system pulls in a lot of static objects to be evaluated multiple times, or where UV maps and vertex colors take up a lot of memory. But for the simpler case of an animator working in Workbench or Grease Pencil, with objects relatively isolated from the rest of the scene, this is not as much of a problem.
A few potential approaches for sharing data:
Evaluate a secondary depsgraph N times, and make copies of the animated datablocks and other datablocks pointing to them, relying on implicit sharing to reuse memory. Making copies has some cost, and not sharing anything with the current frame depsgraph is not ideal.
Create N copies of the current frame depsgraph with some type of native implicit sharing support inside the depsgraph so it can share datablocks, as long as they are not re-evaluated.
Add support for evaluating datablocks at multiple frames inside one depsgraph, potentially scheduling the entire evaluation for all frames as one big task pool.
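The implicit-sharing mechanism underlying these approaches can be sketched as reference-counted copy-on-write buffers. This is a simplified illustration with invented names, not Blender's actual implicit sharing code:

```python
# Sketch: datablock copies share one underlying buffer until a copy is
# re-evaluated (written to), at which point it detaches (copy-on-write).

class SharedBuffer:
    def __init__(self, data):
        self.data = data
        self.users = 1

class Datablock:
    def __init__(self, data):
        self.buffer = SharedBuffer(data)

    def copy(self):
        # Copies share the buffer; only the user count increases.
        other = Datablock.__new__(Datablock)
        other.buffer = self.buffer
        self.buffer.users += 1
        return other

    def write(self, data):
        # Re-evaluation writes: detach first if the buffer is shared,
        # so other users keep seeing the old data.
        if self.buffer.users > 1:
            self.buffer.users -= 1
            self.buffer = SharedBuffer(list(self.buffer.data))
        self.buffer.data = data
```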
In all these approaches, some attributes or datablocks will be re-evaluated even when they are not changing. Ideally both problems are addressed at the same time: if the attributes are not re-evaluated, it’s both faster and implicit sharing can deduplicate memory usage. In practice this is not so simple, though; geometry nodes and modifiers do not support partial re-evaluation. Deduplication by comparing memory after evaluation is possible, but not ideal for performance.
Ideally GPU buffers for attributes would be shared if the attributes use implicit sharing. But this is not so simple to implement, especially as those attribute values may not be sent directly to the GPU but rather depend on various settings or state.