Let's (finally) fix the ShadowCatcher

Hi,

For the past half a year, the ShadowCatcher in Blender has remained in a state that makes it unusable in any serious production. It is lacking in many areas that are absolutely essential for any meaningful use in terms of integrating CG elements with real-life footage. Since the Blender home page itself describes the ShadowCatcher as a tool that enables users to integrate CG elements with real-life footage…


…I am going to present a detailed breakdown of why it is not capable of doing so. I will be using V-Ray as the example for comparison, but the ShadowCatcher is handled similarly in other production renderers too (Corona, Arnold, etc.):

1, Incorrect shadow color:


Cycles’ ShadowCatcher does not capture the actual shadows and illumination of the scene (an HDRI in this case) with their correct color. It just renders completely grayscale occlusion from the environment, and that’s it. This will never hold up in a production scenario where a correct representation of the environment captured on the movie set is required.
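
A quick way to see what “correct color” means here: a colored shadow is a per-channel ratio of the light arriving with and without the CG occluders, not a single grayscale factor. A minimal sketch of that idea (the values are made up for illustration):

```python
import numpy as np

# Light the catcher surface receives from the environment (HDRI),
# per channel, without and with the CG objects blocking it.
L_unoccluded = np.array([1.0, 0.8, 0.5])  # warm HDRI lighting
L_occluded   = np.array([0.3, 0.3, 0.3])  # what actually arrives

# Per-channel shadow factor: how much of each channel survives.
shadow_rgb = L_occluded / L_unoccluded    # -> [0.30, 0.375, 0.60]

# Collapsing this to grayscale loses the fact that a shadow under
# warm light should stay relatively blue.
shadow_mono = L_occluded.mean() / L_unoccluded.mean()  # -> ~0.39
```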

2A, ShadowCatcher is not used for secondary rays:


A correct ShadowCatcher implementation is supposed to take the backplate or environment map and camera-map it onto the surface marked as ShadowCatcher. This way, reflections, especially at contact points, appear correct. Cycles’ ShadowCatcher solution completely ignores this and instead simply renders the original material of the ShadowCatcher object.

Some people tend to argue that it is possible to perform the camera mapping yourself and assign such a camera-mapped material to the ShadowCatcher geometry. This is wrong. The ShadowCatcher needs to reflect a ShadowCatcher material, which is not available in Cycles. It is a special type of material which does not have shading, so it behaves somewhat like emission, but at the same time it can capture shadows as well as bounced light from the CG objects in the scene. Reflecting a simple, camera-mapped diffuse material would modify the surface with the scene lighting that is already “baked” into the plate once, resulting in incorrect ShadowCatcher surface shading in both specular and diffuse reflections. CG objects need to reflect exactly the kind of special ShadowCatcher shader that eye rays see.
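
For reference, the projection part alone is easy to set up in Cycles today via the Texture Coordinate node’s Window output; a minimal sketch follows (the file path is a placeholder). As explained above, this is not a fix: CG objects would still reflect an ordinary camera-mapped material rather than a true ShadowCatcher shader:

```python
import bpy

# Minimal sketch of camera-projecting a backplate in Cycles nodes.
mat = bpy.data.materials.new("CameraMappedPlate")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coord = nodes.new("ShaderNodeTexCoord")  # 'Window' = screen space
plate = nodes.new("ShaderNodeTexImage")
plate.image = bpy.data.images.load("//backplate.png")  # example path

# Emission avoids re-lighting a plate that is already lit.
emit = nodes.new("ShaderNodeEmission")
out = nodes["Material Output"]

links.new(coord.outputs["Window"], plate.inputs["Vector"])
links.new(plate.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```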

2B, Incorrect diffuse reflections:


Due to the same issue as 2A, not only specular reflection but also diffuse reflection produces significantly incorrect results, which harm the resulting realism. You can see what a big difference incorrect diffuse reflection makes in this example.

2C, ShadowCatcher material affects the result:


Yet another problem with issue 2A is that it hugely increases the room for error, as the material on meshes designated as ShadowCatchers has a large impact on the final result. A simple misjudgement of the correct diffuse color can completely ruin the result. The example above shows a ShadowCatcher that has a red diffuse material in both V-Ray and Cycles. In the V-Ray example, the red diffuse color has no effect on the final result, as the diffuse reflection is correctly taken from the projected background.

3, ShadowCatcher does not catch diffuse bounces:


Yet another deficiency which severely compromises output quality is the fact that Cycles’ ShadowCatcher does not receive any indirect illumination from the CG objects in the scene. Therefore, the final result lacks illumination from emissive objects and, more importantly, any degree of color bleeding. This is already obvious in example #1.

4, ShadowCatcher does not capture reflections:


Last but not least, Cycles’ ShadowCatcher does not even appear to be capable of capturing reflections, which is another element that’s crucial for any degree of production use. This may appear to collide with point 2C, which states that the diffuse material property should be ignored. That is indeed the case, but it applies to the diffuse component only, which should be acquired from the projected backplate. The reflection component should still be respected by the ShadowCatcher, to give users the ability to specify reflective ShadowCatcher surfaces (for example, a wet road).

Now, you may be asking “How in the world would it be possible to composite all this mess together?”. Well, it’s actually not that hard. The only problem here is that there is currently no image format which allows the alpha channel to be stored as RGB instead of mono. Storing RGB data in the alpha is crucial for this to work. Other renderers work around it in a very simple way: they allow you to save the RGB alpha channel as another AOV/render pass. Once that is out of the way, this is how it works (a code sketch follows the list):


1, You store any multiplicative data in the alpha channel (shadows and reflection occlusion).
2, You store any additive data in the RGB channels (reflections and GI bounces).
3, You take the separately saved alpha channel with RGB data, invert it, and multiply it over the backplate. This gives you proper colored shadows and introduces holdouts for the RGB data and reflections.
4, You then add the RGB channel with the additive information on top. And just like that, you have an output that’s 1:1 identical to the beauty pass, with all the colored shadows, GI, and reflections.
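
A minimal NumPy sketch of those four steps (array names and values are mine, purely for illustration):

```python
import numpy as np

# backplate: the original photographed plate, float RGB
# alpha_rgb: multiplicative pass (colored shadows + reflection
#            occlusion), stored as an RGB "alpha"
# additive:  additive pass (reflections and GI bounces), RGB
backplate = np.full((4, 4, 3), 0.8)
alpha_rgb = np.zeros((4, 4, 3)); alpha_rgb[1:3, 1:3] = [0.5, 0.4, 0.2]
additive  = np.zeros((4, 4, 3)); additive[2, 2] = [0.05, 0.02, 0.01]

# Step 3: invert the RGB alpha and multiply it over the backplate.
comp = backplate * (1.0 - alpha_rgb)

# Step 4: add the additive pass on top.
comp = comp + additive
```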

One more note:
In most renderers, there are two ShadowCatcher use cases:

  1. Integrating CG elements with real-life footage
  2. Integrating parts of CG scenes with other parts of CG scenes, or “rendering in layers” (for example, adding another character to an already rendered CG environment).

For use case #2, it’s important to preserve some aspects of the current ShadowCatcher functionality, namely the ability to use the original scene materials for secondary rays (diffuse and reflection) instead of the ShadowCatcher material.

For this reason, other renderers give you a choice. In V-Ray, this choice is named “Matte for refl/refr”. Right now, Cycles has just one switch, “ShadowCatcher”, which defines whether the selected mesh is treated as a ShadowCatcher. So what we need is one more switch, which could be called “ShadowCatcher for secondary rays”, and which would make the mesh return the ShadowCatcher shader for secondary rays, not just for eye rays. If it’s enabled, it accommodates use case #1; if it’s disabled, it serves use case #2.
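
In Python terms, the existing switch lives on the object’s Cycles settings; the proposed second switch does not exist, so the property name below is purely hypothetical:

```python
import bpy

obj = bpy.context.object

# Existing switch in the Cycles object settings.
obj.cycles.is_shadow_catcher = True

# Proposed second switch -- this property does NOT exist in Cycles;
# the name is invented here for illustration only.
# obj.cycles.shadow_catcher_for_secondary_rays = True
```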

The current behavior for issues #1, #3, and #4 doesn’t need to be preserved, as fixing them will improve both use cases.

Once these fixes are implemented, Cycles can become a very capable tool for any VFX work where integration with real-life footage is required. Right now, however, it unfortunately fails to do what it’s supposed to do.

Thanks


I agree pretty much entirely that this is how the shadow catcher needs to be improved. Not sure when this will be tackled, though. Part of the reason it doesn’t work more like this already is that an efficient GPU implementation is not straightforward, but I think it can be solved.


Yes, I am aware that a lot of ShadowCatcher stuff is shamanism when it comes to light transport accuracy. However, I am very thankful that you have acknowledged it and that it’s on the TODO list. Thank you! :slight_smile:


I just wanted to add my voice/concern to this topic. I’ve recently been doing a lot of work advocating for Blender as a serious tool in the VFX pipeline, and I consider this a critically needed fix. If I can help with testing in any way, let me know.


What you are describing is fundamentally associated alpha, I believe. It is the basic compositing encoding convention for both TIFFs and EXRs.

Associated alpha also lets you encode luminance-without-occlusion situations, with alpha = 0.0 and RGB != 0.0.

No, it’s about having a separate alpha value for the R, G, B channels. You get this quite naturally from rendering, but it’s averaged to a single alpha value at the end. Some renderers can write it out as its own RGB pass in OpenEXR.

It’s clear that he meant separate alpha values for each channel, but how does that impact this issue? An associated alpha image with a single alpha channel can be composited properly using the method above (i.e., shadows will be occlusion without emission, reflections will be pure emission). Why are separate alphas for each channel needed in this particular case?

To support colored shadows / occlusion, instead of just grayscale.
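
In other words, the standard over operation with a single scalar alpha dims all channels of the background equally, while per-channel alpha lets the occlusion be tinted. A tiny illustration (values made up):

```python
import numpy as np

bg = np.array([0.8, 0.8, 0.8])      # gray backplate pixel
fg = np.array([0.0, 0.0, 0.0])      # a shadow adds no light

a_rgb  = np.array([0.7, 0.5, 0.2])  # per-channel occlusion (warm light)
a_mono = a_rgb.mean()               # what a single alpha collapses to

over_rgb  = fg + bg * (1.0 - a_rgb)   # -> [0.24, 0.40, 0.64], bluish shadow
over_mono = fg + bg * (1.0 - a_mono)  # -> [0.43, 0.43, 0.43], flat gray
```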

Which makes perfectly good sense, given that the original design of alpha (to behave as glass, etc.) had a channel per component. I believe it was later reduced to a single channel.

At any rate, per-channel alpha should solve the issues. It’s just a matter of getting it into trunk.

I am not too concerned about how it is done as long as the end result is what I’ve showcased above. If you can manage to fix the ShadowCatcher, or at least a part of it, I will be eternally grateful to you (or anyone who does) :slight_smile:


Honestly, it would be easiest to just put them under render passes. For example, simply make ShadowCatcher objects behave differently, outputting only the light received from other objects, with multiplicative values in the Beauty pass and additive values in the Glossy render passes.
However, in that case, the glossy pass would need to be denoised as well…

No, not at all. The render-pass compositing workflow is increasingly less common and reserved only for special use cases. These days, rendering CG over the plate directly in the final render has become the majority of cases. Separating it into passes is reserved only for certain tricky VFX scenarios.

The most important aspect here is that it needs to be interactive. When you are rendering interactively, you need to see the end result in order to see what you are doing, instead of seeing small pieces and hoping they will fit together in the comp phase. That’s an ancient workflow.

I really hope that within 10 years, with the advent of refined interactive tools and workflows (like Eevee), as well as the convergence of realtime and offline rendering into one point, the render pass/layer compositing workflows will die altogether and will be remembered as a very dark and embarrassing period of computer graphics history.


I was mainly thinking that stashing the complex “alpha” data in a render pass would be easier than changing the way alpha channels work in Blender at a core level.

Ah, yes, I misunderstood. While I agree with that, if there is a way to contain the colored alpha in a single file, as Troy has suggested, that would actually be the preferred solution. But since no other renderer out there has been able to do that, I remain skeptical :slight_smile:

And even if it worked, I wonder how many other CG packages would be compatible with it…

Adding it as a layer in EXR would suit this perfectly, and the user would just need to know how to use it. Any decent compositing software (and user) should be able to deal with the output. There’s nothing special about multiplying and adding images, and that’s all this is on the compositing side.


Associated alpha with three channels, as in a simple RGBAAA layout in an EXR, would suffice. It would be relatively easy to extend a standard alpha node to composite three-channel alpha.
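
As a sketch of what that could look like on disk: the Python OpenEXR bindings can write extra float channels alongside RGB. The layer name below is a convention chosen for this example, not a standard, and the pixel data is just zeros as a stand-in:

```python
import OpenEXR, Imath
import numpy as np

w, h = 640, 480
FLOAT = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))

# RGB beauty plus a three-channel alpha stored as its own EXR layer.
header = OpenEXR.Header(w, h)
header['channels'] = {
    'R': FLOAT, 'G': FLOAT, 'B': FLOAT,
    'alphaRGB.R': FLOAT, 'alphaRGB.G': FLOAT, 'alphaRGB.B': FLOAT,
}

zeros = np.zeros((h, w), np.float32).tobytes()
exr = OpenEXR.OutputFile('shadowcatcher.exr', header)
exr.writePixels({name: zeros for name in header['channels']})
exr.close()
```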

Never ever going to happen. See Blinn’s Law.

It’s actually happening already. One proof is Chaos Group’s project “Lavina”, but an even better proof is Eevee. Eevee is not really a realtime renderer, but it’s not exactly an offline renderer either. Eevee is interactive enough, but proper final-quality frames out of it usually take a couple of seconds. It’s a type of renderer that bridges the gap between realtime 24+ FPS renderers and offline ~1 hour/frame renderers. It uses mostly rasterization, but it does seem to raytrace in some places, and raytracing can become an even bigger part of it in the future with RTX. That both will converge at some point is not a matter of if, just a matter of when. Of course it will not match the quality of path tracers, but if 95% of the quality is there at just 5% of the render time, that’s a great compromise to make. Even these days, in some specific cases, Eevee results are almost indistinguishable from Cycles, and as Eevee, realtime raytracing, and advanced rasterization techniques evolve, the proportion of those cases will grow.

EDIT: Link to Lavina demo. Especially the interior showcase at 8:30. https://youtu.be/K7LWzTvfgU0?t=8m29s

Yes, there is some trickery, some quality traded for faster results, but trust me, very few people care today, and even fewer will care in the future :wink:

This will really happen, maybe not in the next 5 years, but perhaps in 10. Regardless, it’s preposterous to suggest otherwise.

Blinn’s law is not relevant anymore, and it will become even less relevant as time progresses. We’re approaching the point where a lack of proper realtime interactivity (not necessarily realtime final-quality image output) is far more of a hindrance to final image quality than a lack of renderer quality/bias.

With both realtime and offline engines having finally adopted PBR workflows, the remaining differences in realism between the two are closing very fast. What actually limits artists these days on the offline rendering side is iteration time: having to make an adjustment and then wait a relatively long time to see it. Since this is absent in realtime, or at least interactively realtime, solutions, artists working with them can perform many more iterations in the same time frame and add a lot more subtle detail to the scene, increasing its realism or artistic qualities.


See Manuka and deep compositing etc. etc.

It’s a sidetrack to this issue.

If the whole problem is having an RGB alpha channel, couldn’t we have a separate render pass where shadows from the catcher are stored, and leave the main pass as it is right now?

Apart from that, reflection capture is crucial; there is no way to composite reflections right now :stuck_out_tongue:

Cheers!

I still don’t understand how to get reflections on the sphere from the shadow catcher using this pipeline. For example, this road marking.