How would one go about solving the banding caused by stepping through volumes?

I’m still trying to wrap my head around this one; it’s been years now. The banding in dense volumes is caused by stepping, but what is the actual issue behind it? I’ve gone through Blender’s algorithm a few times, and I’m not enough of an expert to know exactly how it works, even after playing with it a bit and getting results. I do understand that the system was built around stepping from the ground up, and that changing that would require wide-reaching changes and be a project of its own.

Looking at it as I am, though, is it possible that one of these three things is causing the issue?

  1. At line 221 in kernel_volume.h, the variable “tp_eps” is noted as “likely not the right value”.
  2. At line 252 there’s a note “ToDo: Other interval?” that I don’t understand.
  3. At line 255 there’s a little note saying “stop if nearly all light is blocked”, which in a dense enough volume would happen within one step, and it also uses the tp_eps variable (see the rough sketch below).
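
For reference, the pattern I’m talking about looks roughly like this toy loop. This is my own simplified sketch based on how I read the code, not the actual Cycles kernel, and every name here is made up:

```cpp
#include <cmath>
#include <functional>

// Toy ray-march transmittance loop with the early-out the comments describe:
// stop marching once almost all light from behind the volume is blocked.
// In a very dense volume this can trigger after only a step or two.
float toy_march_transmittance(const std::function<float(float)> &sigma_t, // extinction at distance t
                              float t_start, float t_end, float step_size,
                              float tp_eps /* e.g. a tiny threshold like 1e-5f */)
{
    float tp = 1.0f; // throughput (transmittance accumulated so far)
    for (float t = t_start; t < t_end; t += step_size) {
        float dt = std::fmin(step_size, t_end - t);
        // Beer-Lambert attenuation over this step, one density sample per step.
        tp *= std::exp(-sigma_t(t) * dt);
        // The early-out in question.
        if (tp < tp_eps)
            return 0.0f;
    }
    return tp;
}
```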

Is it possible this is causing issues with accurate extinction that only become more visible in dense volumes? That would mean an optimization is biasing the results too much and producing artifacts, but like I said, I barely have any idea what these variables are. I’m kinda sure that eps stands for extinction per step, and if that’s wrong, it could result in the end of every step being counted as having 100% extinction, which I think is what we’re seeing here: #56925 - Cycles produces banded render artifacts in dense or sharp volume renders. - blender - Blender Projects, or perhaps it really was my first gut instinct, and it’s caused by uneven sampling distribution.

But I’m just grasping at straws. I don’t actually know what’s going on here, so I’m gonna play with the values (I do find it odd that they’re hard-coded) and see if I can manage a change in my test renders. There are a number of renders I’m interested in, and now that geometry nodes, volume to mesh, and so on are a thing, I might finally be able to manage the renders of my dreams, or start using volumes in more creative ways.

Apologies, since I don’t know the specifics, but I thought banding was a natural byproduct of raymarching?

Ray marching is not a good algorithm for rendering very dense volumes like this; precision and performance will never be good. Rendering implicit surfaces requires a different algorithm designed for that specific purpose, or converting to a mesh. I would not bother trying to make ray marching give good results.

It’s unlikely any of the mentioned comments are related to the artifacts. They are about stopping ray-marching when only tp_eps = 0.001% of light gets through from behind the volume. If anything that value is too conservative.


I don’t know which, if any, of these can work for Cycles, but in principle, literature on unbiased volume sampling is easy enough to come by.

A Pixar paper from 2018 proposed this method:

Another Pixar paper from 2017:


This one’s a survey of sorts, going through lots of different approaches, including what looks like pretty extensive sample shader code:

Or here is one from Disney from 2017:


I think this one might also be interesting in light of the spectral branch?

Slightly older approach from 2014:
https://cgg.mff.cuni.cz/~jaroslav/papers/2014-upbp/index.htm

Maybe some of these techniques would help?

All these are good algorithms that could improve volume rendering in Cycles. However, in T56925 it seems there is an attempt to render implicit surfaces, which is not typically handled well by such algorithms.

While banding is in a way a product of raymarching, I don’t think it should show up the way it does in Blender. From what I can understand of the code, the algorithm is supposed to sample a random point inside each step, but that spot does not seem to be random enough; it appears to favor the beginning of the step. Yes, it’s more visible in denser volumes, but even more mundane volumes like the smoke from an explosion or dissipating smoke are often hit by this, as seen in the upper area of https://youtu.be/29yfS-icS3M?t=1268 and in https://www.youtube.com/watch?v=qmBxaFgawLI (watch the head and chest right at the start, and the hands during the finger guns).

@kram1032
I’ve read those, and as an algorithm nerd, they’re pretty fun to think about! Decomposition tracking is my favorite, because it’s conceptually simple and perhaps one of the simplest to implement in a rudimentary way: simply treat any value above, say, 75% relative density (or a user-specified amount) as homogeneous and only worry about branched delta tracking/path tracing for what’s left… perhaps it would even be possible to decompose VDB volumes with a Python script before rendering a frame.
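
To make that concrete, here’s roughly how I picture the transmittance side of it, as a toy sketch loosely after that paper. All the names here are placeholders I made up, and it’s definitely not something that would drop into the Cycles kernel as-is:

```cpp
#include <cmath>
#include <functional>
#include <random>

// Toy decomposition-style transmittance estimator: split
// sigma(x) = sigma_min + sigma_r(x), handle the homogeneous "control" part
// sigma_min analytically, and only track the residual part.
// Assumes sigma_min <= density(t) <= sigma_max everywhere along the ray.
float transmittance_decomposed(const std::function<float(float)> &density, // extinction at distance t
                               float distance,
                               float sigma_min, // homogeneous control component
                               float sigma_max, // upper bound on the extinction
                               std::mt19937 &rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    // 1) Control part: plain Beer-Lambert, no sampling noise at all.
    float T = std::exp(-sigma_min * distance);

    // 2) Residual part: ratio tracking against the residual majorant.
    float sigma_r_bar = sigma_max - sigma_min;
    if (sigma_r_bar <= 0.0f)
        return T; // the volume was effectively homogeneous

    float t = 0.0f;
    for (;;) {
        t -= std::log(1.0f - uni(rng)) / sigma_r_bar; // free-flight distance
        if (t >= distance)
            break;
        float sigma_r = density(t) - sigma_min; // residual extinction here
        T *= 1.0f - sigma_r / sigma_r_bar;      // ratio-tracking weight
    }
    return T;
}
```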

At the moment, I’m curious what would happen if you were to begin stepping at a random point before the volume. For example, volumes have two bounding boxes: a ray goes forward until it hits the initial bounding box of a volume, its next step is anywhere between no step and a full step (not affecting lighting, as the bounding box is guaranteed to be empty there), and then it continues stepping as normal. This would eliminate the “coordinated” bands that the steps make, but it would require that the camera not intersect either bounding box, or at least have a “minimum range” at which to sample volumes.
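
A rough sketch of what I mean, again with made-up names and purely as an illustration of the idea, not how the kernel is actually structured:

```cpp
#include <cmath>
#include <functional>
#include <random>

// Offset the whole stepping grid backwards by a random fraction of one step
// per ray, so the step boundaries no longer line up between neighbouring
// rays. The skipped range lies before t_enter and is assumed empty, so it
// only decorrelates the grid instead of changing the lighting.
float march_with_jittered_start(const std::function<float(float)> &sigma_t,
                                float t_enter, float t_exit, float step_size,
                                std::mt19937 &rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float t = t_enter - uni(rng) * step_size; // start up to one full step early

    float tp = 1.0f;
    for (; t < t_exit; t += step_size) {
        // Clip each step to the actual volume interval.
        float seg_start = std::fmax(t, t_enter);
        float seg_end = std::fmin(t + step_size, t_exit);
        if (seg_end <= seg_start)
            continue;
        float t_sample = 0.5f * (seg_start + seg_end); // one density sample per step
        tp *= std::exp(-sigma_t(t_sample) * (seg_end - seg_start));
    }
    return tp;
}
```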

I also played around with the idea of doing only homogeneous volume calculations and taking smaller steps the denser the volume is, in order to give Blender an “absolute volume density” measurement, but that’s one of those things that’s easier said than done. You can’t exactly know a ray is about to pass straight through a tiny dense section… but then again, if you shoot enough rays, one would get stuck in there and eventually reach extinction, so who knows. It’d probably not be realistic, but I’m no rendering expert. Yet.

Is it an attempt to render implicit surfaces, maybe? I’m not exactly sure what constitutes one, but my searches all reference metaballs and turning volumes into meshes, neither of which I’m trying, and neither of which would give the right results.

It all came from this attempt: https://www.youtube.com/watch?v=G4als2g9Avg but as you can see, it’s not all that dense or all that sharp either; because the lights are placed in little thin pockets of the volume, the artifact gets worse, and that was with branched path tracing and denoising in another program. The same issue is seen in https://www.youtube.com/watch?v=qmBxaFgawLI and in https://youtu.be/W_csFgBwYQ4?t=98 , and though I timestamped it at a notable point, most of the smoke sims have the issue, even at common density values, just because of how plumes rise in a bulbous way. I don’t think these would be implicit surfaces.

I really don’t wanna come off as whiny, as much as I probably do, but many very common uses of volumes (clouds, large explosions, dust plumes, smoke trails, wisps of smoke) naturally end up with some hard-edged regions just from trying to be realistic. Now that I know the issue is there, I can’t really “un-see” it in others’ animations.

I’ve actually read the studies in Kram’s post before, and the work on decomposition was incredibly interesting to me. Something like that would totally decrease volume rendering times, but I don’t think Cycles is in a place to use it yet…

The only really “simple” solution I can see would be to replace raymarching with delta tracking. I say “simple” in the same way that I can say “let’s just build a Mars base”: it’s almost laughable how much would need to change, but the benefits would be enormous. It would bring the volume system into a more unbiased state, where the rays would be more analogous to most of the other rendering Cycles does, with rays bouncing all over, and it would set things up to later use decomposition tracking as well as multiple importance scattering. (I am aware that it would probably be slower than raymarching until well optimized, which would take further time and resources.)
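
For anyone unfamiliar, the core loop of delta tracking itself is tiny, something like this toy version. This is my own sketch with invented names, assuming a known majorant for the extinction; the hard part is everything around it in the integrator, not this loop:

```cpp
#include <cmath>
#include <functional>
#include <random>

// Toy delta (Woodcock) tracking: sample the distance to the next real
// interaction inside the volume. Needs a majorant sigma_max >= extinction
// everywhere along the ray. Returns t_max if the ray leaves the volume
// without a real collision.
float delta_track_distance(const std::function<float(float)> &sigma_t,
                           float t_max, float sigma_max, std::mt19937 &rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float t = 0.0f;
    for (;;) {
        // Tentative free flight against the homogeneous majorant.
        t -= std::log(1.0f - uni(rng)) / sigma_max;
        if (t >= t_max)
            return t_max; // no interaction inside the volume
        // Accept the collision with probability sigma_t(t) / sigma_max;
        // otherwise it was a "null" collision and we keep going.
        if (uni(rng) < sigma_t(t) / sigma_max)
            return t;
    }
}
```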

I’ve always sort of had the goal of helping with the transition to delta tracking if and when it happens, and my practice is going well, so I may be able to start helping with the Blender project earlier than I anticipated, now that I’m working on a dev team elsewhere and getting good experience.

Though I’m still curious: after seeing https://computergraphics.stackexchange.com/questions/9871/volume-raymarching-aligning-sample-coords-with-the-3d-texture in my additional reading, I understand it’s not an isolated issue, but I still can’t wrap my head around what causes it. If the volume is sampled at a random spot within each step, why does the render become more sparse the deeper you get into a step, even though the sampled density increases further into the step?


Sounds like a relatively easy thing to try but I doubt it’d fix the issues we’re having. Even with that, raymarching would introduce biases, I’m sure. I do hope a solution for this (maybe via one of the techniques I shared above) will eventually land in Cycles though. Would certainly be a huge improvement in that department!

I’m guessing slower in the sense that it’d take longer to reach a set number of samples, right? But if it clears up faster and/or without artifacts (which, in the case of ray marching, would take insanely small step sizes to approximate, at which point I doubt ray marching would even be faster per sample or per unit of time), and you thus need fewer samples to get it done, I bet it’d actually be a speedup overall.

(Though I don’t really know a lot about the needed optimizations and all that)

I think the issue is that it sorta isn’t.
I mean, while the entry point might be random, it only takes fixed steps beyond that, right? So for any given entry point, it effectively only has a grid of possible sample positions beyond it, and on average you get these stepping artifacts. And while it does take random directions inside the volume on each bounce, each of those once again steps at a constant size, and I suspect the first bounce is the most important one in determining the end result for that location.
What I find really interesting is how the “layering” you end up with tends to change direction depending on where you look. Like, sometimes it is horizontal, sometimes vertical, but clearly correlated over large regions.

Like here:

The patterns really vary quite a bit.

(For these abstract renders, the stepping actually looks aesthetically pleasing, even though it absolutely shouldn’t look that way. For super sharp transitions like these, where the volume is either empty or highly dense, I suspect an implicit surface method really would be best. But an unbiased volume sampling method ought to also get rid of the stepping in this case.)


The UPBGE Blender fork (and Bforartists) have both added TAA (temporal anti-aliasing) for volumetrics in the viewport: over time the volume samples blend into each other, which smooths out this banding. TAA should ideally be applied to final renders too, but for now it’s only a viewport solution, which only updates as the viewport updates (over time and interaction). TAA is the most common method of smoothing volumetrics and essentially removes banding; this is especially true in game engines and realtime volumetrics.
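
The accumulation itself is conceptually very simple, something like this (just an illustration with made-up names, not the actual UPBGE/Bforartists code):

```cpp
#include <cstddef>
#include <vector>

// Rough idea of the accumulation: every viewport redraw renders the
// volumetrics with a different jitter/offset, and the new result is
// blended into a history buffer so the banding averages out over time.
void taa_accumulate(std::vector<float> &history,       // accumulated pixels so far
                    const std::vector<float> &current, // this redraw, rendered with a new jitter
                    int frames_accumulated)            // frames already in the history
{
    // Running average: the history converges to the mean over all jitters.
    // Assumes both buffers have the same size and layout.
    float w = 1.0f / float(frames_accumulated + 1);
    for (std::size_t i = 0; i < history.size(); ++i)
        history[i] = (1.0f - w) * history[i] + w * current[i];
}
```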

The whole point of TAA is that there gotta be motion, right? Or would this somehow work on still frames too?
Sounds like a thing that might be nice to have for Eevee but I don’t think Cycles really works that way? Cycles’ version of temporal AA would basically amount to activating motion blur, I think. That already works by sampling across multiple points in time for any given frame, afaik.


You could use sample jittering and TAA to smooth out the volumetrics, yes, both in the viewport and in renders, and also apply this to Eevee without a problem, similar to how jittering is used in the DOF refactor right now.

But yes, improving this in Cycles will be something else, though you could apply the same technique, since both systems essentially use raymarching-based sampling if I’m not wrong; only one casts shadows onto other objects and the other does not.


I guess that would be a relatively easy “fix”. I’d much prefer getting a proper unbiased version though.

I’d say that while a more unbiased option should be the goal (it’s in line with newer industry standards and better follows Blender’s mantra of being unbiased), a stop-gap would be nice in the meantime, because a full implementation will take quite a while, unless you throw an entire team at that and only that for a good while.

@Draise
I took the liberty of simply rendering and compositing together the same image a number of times with different seeds, and these were my results:
Control (1 image, 228 samples total, seed 7, step size 1.0, 1m15s):

Even spread of step sizes from 0.94 to 1.07 (1m28s, 16 samples, 14 images, 228 samples total, seeds 1-14):

Even spread of step sizes from 0.82 to 1.21 (1m26s, 16 samples, 14 images, 228 samples total, seeds 1-14):

Even spread of step sizes from 0.58 to 1.49 (1m35s, 16 samples, 14 images, 228 samples total, seeds 1-14):

I understand it’s a bit tough to see the differences, but if you put the images in a folder and flip through them, there definitely are differences. Notably, the horizontal artifact on the left is a good visualization of how obscured the banding becomes, and somewhere around 10%-25% above and below the chosen step size appears to be the best range to randomize within.

Basically, when the step size is randomized by almost any amount, the artifact almost completely disappears, with the downside that larger variance lets more light deeper into the volume and makes any self-shading less accurate, as if some rays were going faster (which they essentially are).

I think, for ease of implementation, this might be the stop-gap measure we’re looking for. It would simply be a value that, when above 0, randomizes the step size by up to that fraction above and below the main step size. For example, if your step size was 1 and the volume variance was 0.12, samples would use random step sizes between 0.88 and 1.12.
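
In code, I imagine the core of it as something like this toy sketch (“variance” is a hypothetical parameter name, and this is nothing like the actual kernel structure):

```cpp
#include <cmath>
#include <functional>
#include <random>

// Sketch of the "volume variance" idea: per render sample, pick one step
// size uniformly within +/- variance around the user's step size, e.g.
// step_size = 1.0 and variance = 0.12 gives steps in [0.88, 1.12].
float march_with_step_variance(const std::function<float(float)> &sigma_t,
                               float t_enter, float t_exit,
                               float step_size, float variance,
                               std::mt19937 &rng)
{
    std::uniform_real_distribution<float> uni(-variance, variance);
    float step = step_size * (1.0f + uni(rng)); // one randomized size per sample
    step = std::fmax(step, 0.01f * step_size);  // guard against tiny or negative steps

    float tp = 1.0f;
    for (float t = t_enter; t < t_exit; t += step) {
        float dt = std::fmin(step, t_exit - t);
        tp *= std::exp(-sigma_t(t) * dt);
    }
    return tp;
}
```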

The other option would be to simply randomize the size of a sample’s first step inside the volume bounding box, which would confine the artifact to within one step of the camera or of the bounding box edge, the latter of which is to be avoided anyway.

Also note that because this test used step sizes at predetermined, even intervals, I may have accidentally set up an interference pattern in my study.


Try D10576


I think @StefanW was working on the implementation of some new algorithm to avoid this kind of artifact; he can correct me if I’m wrong.

Ah! Thanks for telling me! And as of today it was committed! I’m so happy; it’s been 3 years since I first tangled with this, and now I’m going to show just how insanely useful it is! Step size is no longer something you need to tweak when working with dense volumes, and that alone will speed up volume renders tremendously, because when a render converges, we can be sure it’ll be accurate, even if the volume is very dense in some areas!


Could it be that nothing ever got implemented to fix the issue? I still get heavy banding effects in my attempt to generate nebulas using a volume shader. The samples and tile settings don’t do much except slightly change the interval.

What render engine/Blender version? I’m definitely not getting these issues in Cycles, but it’s nearly intended behaviour in Eevee, and it still exists in versions before 2.9 or so.

I’m using 3.1.2, but I also tried the newest version with the same result. I’m using Eevee; I had no luck getting anything useful out of just switching the renderer to Cycles, so I didn’t look further into that.

This one shows it very obviously, as it has cleaner shapes: https://twitter.com/TocoGamescom/status/1633826778035912706
I’ve added more layers of small noise since then to make it harder to see, but a fine dot noise isn’t really the intended look either.

The volume shader is basically various 3D noise textures plugged into one another, with the result fed into color ramps. I guess it gets much more obvious the tighter the gradient in the ramp is and the denser the result, so basically the sharper the nebula clouds are. I used the Nebula Generator from the Blender Market as the base.

This was the result of just switching it to Cycles: https://twitter.com/TocoGamescom/status/1633460499190009856 It also took quite a long time.