Cycles and surface normals (normal map shading)

Firstly, apologies: I'm long-winded, which is one reason I post so sparingly.

Yeah, the bump node is really not easy to use, I can confirm that much. For one, "Distance" isn't immediately clear: even in practice it has an effect similar to Strength, and new users struggle to tell them apart. It mostly comes down to two adjustment sliders: Strength flattens the result toward neutral grey, and Distance increases the output intensity of the normal map values.
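As a rough sketch of my mental model (not the actual node code; GLM is used here just for the vector math, and surf_gradient stands in for the height gradient the node derives from its Height input), the two sliders seem to act like this:

```cpp
// Hypothetical sketch of the bump node's slider behavior, as I understand it.
#include <glm/glm.hpp>

glm::vec3 bump_node_normal(glm::vec3 N, glm::vec3 surf_gradient,
                           float distance, float strength)
{
  // Distance scales the height differences, steepening or flattening the bumps.
  glm::vec3 perturbed = glm::normalize(N - distance * surf_gradient);
  // Strength blends back toward the unperturbed normal (0 = flat, 1 = full effect).
  return glm::normalize(glm::mix(N, perturbed, glm::clamp(strength, 0.0f, 1.0f)));
}
```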

I have an interest in the various parts of Blender that "are suboptimal but known to have few true solutions", so I've been following this for a good while, finding little workarounds here and there (none of them easily accomplished or a reasonable amount of work).

The discussion seems to swing between "this is an issue with multiple biased solutions" and "this is known, intended behavior; it's a limitation of normal maps, and other path tracers do the same thing", and I don't think either viewpoint should be taken as gospel. These things should be left to the user to decide; that's why we have checkboxes to enable biased features, and why some "biased but almost unbiased" defaults already exist in the software.

Notably, the suggestion to "just use the displacement input and ignore normals in things like the Principled shader" is, to me, just as restrictive as Alaska's geometry-normal replacement, which breaks when normal maps are used with multiple shaders.

We should start discussing functional alternatives, solutions, and algorithms to address these things, because they are issues in path tracing at large: they have easy biased solutions, and probably hard, unoptimized unbiased solutions waiting to be found.

At the moment, it seems Blender's solution is to limit the angle at which rays may reflect, to ensure normal maps don't send rays through meshes. Anything past a certain angle seems to simply be handed a different, worse normal, which works for things that don't have deep normals to begin with, non-ideal geometry, and very detailed normals interacting with a gentle terminator.
…but while this solution works for things like dents, wood grain, and uneven paint, it's clear to me that normals are used for everything from stucco-style walls and tree bark to engravings and panel lining, all of which tend to exceed the roughly 30-degree angle that seems to trigger "correction".
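As a rough illustration of that kind of correction (a simplified sketch of the general idea, not Cycles' actual closed-form code; GLM only for the vector math): whenever mirroring the view direction about the shading normal would send the ray below the geometric surface, the shading normal gets bent back toward the geometric normal until it doesn't.

```cpp
// Simplified sketch of an "ensure valid reflection" style correction (not the real
// kernel code): if reflecting I about the shading normal N would send the ray below
// the geometric surface (normal Ng), bend N toward Ng until the reflection is valid.
#include <glm/glm.hpp>

glm::vec3 ensure_valid_reflection(glm::vec3 Ng, glm::vec3 I, glm::vec3 N)
{
  const float eps = 1e-3f;
  auto reflect_about = [&](glm::vec3 n) { return 2.0f * glm::dot(n, I) * n - I; };

  if (glm::dot(reflect_about(N), Ng) >= eps) {
    return N;  // the mirrored ray already points above the surface
  }
  // Bisect between the invalid shading normal and the always-valid geometric normal
  // to find a normal close to N whose reflection still clears the surface.
  glm::vec3 invalid = N, valid = Ng;
  for (int i = 0; i < 16; i++) {
    glm::vec3 mid = glm::normalize(invalid + valid);
    if (glm::dot(reflect_about(mid), Ng) >= eps) valid = mid; else invalid = mid;
  }
  return valid;
}
```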

For example: a trick that games with a path-traced mode use is ray offsetting. It's not unlike what Cycles does for volumes or baking. Basically, when a ray intersects, you pull the ray origin up a little above the surface and then allow it to continue. If it was going to self-intersect, it will now hit the surface again. The actual texel it hits will likely be incorrect, but it will be in the correct direction, and because a shallower bounce angle travels farther, it should keep some semblance of coherence. There are a number of variations, so I'll just draw some of them.
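In lieu of a drawing for the simplest variant, here is a toy sketch of per-bounce offsetting. Everything in it is illustrative: the names and the lift constant are made up, and GLM is only there to supply the vector math.

```cpp
// Toy sketch of game-style per-bounce ray offsetting (hypothetical names, not code
// from any engine): after a hit, the next ray starts slightly above the surface
// along the geometric normal, so a normal-mapped bounce direction that points
// "into" the mesh can still escape and legitimately re-hit the surface.
#include <glm/glm.hpp>

struct Ray {
  glm::vec3 origin;
  glm::vec3 dir;
};

Ray offset_bounce(glm::vec3 hit_pos, glm::vec3 Ng, glm::vec3 bounce_dir,
                  float lift = 0.01f)
{
  Ray next;
  next.origin = hit_pos + lift * Ng;      // pull the ray origin up above the surface
  next.dir = glm::normalize(bounce_dir);  // keep the (possibly downward) direction
  return next;
}
```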

Offsetting between each bounce is pretty simple, but it can cause light leakage or black spots where things are too close together. First-bounce projection doesn't have that issue, but, as the name implies, it can't manage the more extreme cases with two or more bounces that result from very vertical normal maps. Both would probably be noisy, too.
Another idea I've mentally toyed with is adding displacement data to the alpha channel. That would theoretically give Blender the data it needs to treat a normal map as a displacement, but we need a new bake workflow before we play with that.

I don't think this is a legitimate worry, and I mean that in the best way. "Normal correction" and normal maps in general already inject bias, so I think the most responsible thing to do at this point is to make normal maps work as close to intended as possible, even if they inject some bias and aren't totally accurate.
I really want to see your ideas described or in action, and as that Stack Overflow answer shows, there do seem to be acceptable approximations. I can use displacement, but not for everything: that would be far too heavy, and most users don't have giant render farms and need the CPU to handle displacement. Using a normal map at all should already signal that an approximation is acceptable.

But most importantly:
there are solutions out there, ones that don't, by default, involve "correcting" a user's intentional choice of material, normal map, or geometry normals.

And for that reason, I think normal correction should be disabled by default.


Apologies for the double post, but in case anyone has some information, I'd like to ask:
Is there anything stopping us from implementing “virtual surface bounces”?
I’ll explain what I mean:

  1. Supply a height map alongside the usual normal map. Another texture, channel-packed, is my favorite, but whatever works; we need both.
  2. When a ray hits a point on the texture, record the incoming angle, the normal-map vector, and the height-map value it hits.
  3. Compare the incoming vector to the normal map to get the vector it WOULD travel along.
  4. We now know the virtual height of the ray, as well as its virtual vector.
  5. We also know that the steeper the vector, the sooner it will contact a nearby texel.
  6. Travel along texels until the virtual height is below the height map of the texel it just passed (a rough code sketch of this marching step follows below).
  7. Because we know the vector it WOULD be traveling along, we can use the normal map and height value of that texel to return to step 2, and rinse and repeat until the ray is culled or "leaves the virtual surface".
  8. When the ray leaves the virtual surface (exceeds the height of the displacement), add the distance below the midpoint to the ray.

Essentially, it would be fudging things so that rays treat a normal map + height map as a displacement, sans actual geometry.
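Here is the rough code sketch promised in step 6 (hypothetical names throughout; the height lookup is a std::function so the snippet stands on its own, and GLM supplies the vector math):

```cpp
// Sketch of the marching step: advance a "virtual ray" across texels in tangent
// space (z = up) until its height drops below the height map, then report the texel
// it hit so steps 2-7 can repeat from there. Illustrative only, not kernel code.
#include <glm/glm.hpp>
#include <functional>

using HeightFn = std::function<float(float, float)>;  // height(u, v) in [0, 1]

struct VirtualHit { glm::vec2 uv; float height; bool hit; };

VirtualHit march_heightfield(const HeightFn &height, glm::vec2 uv, float h,
                             glm::vec3 dir, float texel_size, int max_steps = 64)
{
  glm::vec2 lateral(dir.x, dir.y);
  float lateral_len = glm::length(lateral);
  if (lateral_len < 1e-6f) {                 // straight up or down: trivial case
    return {uv, h, dir.z < 0.0f};
  }
  glm::vec2 step_uv = (lateral / lateral_len) * texel_size;
  float step_h = (dir.z / lateral_len) * texel_size;  // steeper rays lose height faster (step 5)
  for (int i = 0; i < max_steps; i++) {
    uv += step_uv;
    h += step_h;
    if (h <= height(uv.x, uv.y)) {
      return {uv, h, true};                  // the virtual ray went below the surface
    }
    if (h > 1.0f) break;                     // left the virtual volume above the peaks
  }
  return {uv, h, false};
}
```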

This could also theoretically produce self-shading normal maps, and if you used the face normal with the height map to move and occlude pixels as a last step, it could even result in parallax occlusion mapping.

Essentially, this would amount to a probably less performant but more memory-efficient middle ground between displacement and normal maps, one that can self-shade and contribute its own indirect bounces.

If we truly want a totally agnostic displacement system, then we need a node that turns these inputs into displacement, and the more data you give it, the more methods should become available, just as bump maps and displacement use height-map data but normal mapping only becomes available with normal maps. Supplying both would open this method up.

Lifting the second bounce up by a little bit sounds kind of interesting; it will have other consequences, but I think it's worth some experimentation.

Interesting approach, but I think it exceeds the scope of what I had in mind for this thread : )

I think something like that has been worked on before:
https://devtalk.blender.org/t/parallax-occlusion-mapping/

I’m not sure if it was decided not to pursue it or if the original author just quit working on it.

By the way, please don't post in that very old thread unless someone plans on working on it.

Yeah, it'll produce somewhat off results, but because you can angle the incoming direction to be steeper, and make the bounce act as though it came from shallower in response to the height, it could stay coherent. I worry that it might be noisy or flicker in animation, though.

I wasn't really talking about POM; rather, I was suggesting that normal maps have a clear issue with low-poly or smooth-shaded meshes in path tracing, and that if they're going to be heavily biased and produce unrealistic results, we should at least start looking at alternative algorithms that might produce more realistic ones. After all, Cycles supports NPR anyway.
My suggestion would, I think, be computationally heavy but produce better results that could work with, or be modified into, POM. Mostly, though, I'm suggesting a way for light rays to travel "inside" a texture, which could solve the whole "normal maps still don't act anything like light" problem.

A user (I don't know their alias on this platform) suggested going with something like this: Chapter 8. Per-Pixel Displacement Mapping with Distance Functions | NVIDIA Developer

And it might be similar to what you’re looking for/describing.


I see!
It's a little similar, but I think that proposal is a bit heavy-handed. It's definitely in the same vein of thought, but ray marching, on top of computing march distances, probably leaves it poorly performant.
My suggestion is closer to the one they mention at the beginning, Policarpo's… and given it's an older idea, tested on older machines, it's probably more reliable, too.
And he too notes that such data can be stored in a single RGBA image! It makes me happy to find my idea isn't so outlandish. Chapter 18. Relaxed Cone Stepping for Relief Mapping | NVIDIA Developer: you can see there that Policarpo also considers tangent space a "simulated 3D space".

Though my interpretation doesn't use cone stepping; it uses deterministic values instead, and in many cases it won't need to check more than a few dozen texels to find where a ray would hit. In other words, it avoids cone mapping, which can be more biased. It's pretty clear that cone mapping gains performance at the cost of accuracy, which would make it viable for EEVEE but not necessarily for Cycles. The intriguing part is that most of the solution works for both.

But these NVIDIA chapters are primarily POM solutions. I think it would be best to add that as an option, because the greatest downside of naive, hyper-performant POM is that it can't self-light or self-shade. Once you've solved that, even the most basic POM will provide at least usable results. But to get back on topic with the actual normal map discussion:

The bulk of my proposal is just a suggestion for how to "pixel march" within a texture, so that you can get what we might call "indirectly lit normals": the primary use is simply to simulate one or two extra bounces "within" a normal map, so that it can self-shadow and self-light indirectly. This would result in much more believable normal maps…
It would also address the issue of the current Oren-Nayar diffuse model blocking 100% of incoming light at glancing angles when you have a normal map; it simulates back-scattering, but not the self-lighting that back-scattering would cause.
The issue is that I'm not sure how to obey smooth normals with this solution. Theoretically, if UVs wrap around a corner, light can "flow through the texture and around the corner" unless it obeys object normals too. Still, this would allow us to simplify Cycles a little by removing the systems that "correct" or "adjust" normals to account for edge cases.

Weizhen Huang mentioned this:

But it caused a 10% loss of performance.


Interesting! A 10% loss is pretty extreme, and according to the paper that's a minimum figure, but an energy-conserving method does look very interesting! I think the current "normal correction" method Blender has is the "switching" method, where invalid rays simply use a different, generic normal or shader. Most importantly, though, the method used in that paper trades potentially infinite bounces through displacement for… potentially infinite bounces with a much more complicated algorithm. "This ain't it, chief", as they say. However, I am a bit impressed by the result of the "flip" method for handling intersecting rays: it has very low overhead, and it seems to retain better color and overall consistency than the "switch" method.

The important thing about normal maps is that they are simple and fast, with low overhead. We need a method that can be evaluated in a few steps, not one that requires several steps and multiple bounces (a minimum of three for plausibility) for each ray. That also disqualifies my suggestion, but its major saving grace is that it treats the material as if it were geometry, which means it often resolves in a single bounce with only a couple of texels checked, and the lighting fidelity should approximate pixel-level displacement with self-lighting and self-shadowing. It would be heavier than a standard normal map and require a displacement map, though, so it's still no replacement for a better normal map algorithm.

At best, microfacet normals should be an option that trades performance for accuracy, unless we can get it below a 10% performance regression. Even if it were 30% less efficient, I've had renders where that would have been helpful.

The more I think about this, the more parallax occlusion mapping seems to be the answer. As much as I dislike admitting that normal maps aren't all that great, POM is lightweight and solves "seeing the backside of a surface through its normals" by simply occluding it, as would happen in real life.
The only other option I realistically see that can be done in just one bounce, and with only a normal map, is some sort of tangent-space transform like we already have, or "faking it", for example by letting rays that would self-intersect exclude the object so they can reflect whatever is behind it, thereby continuing the appearance of reflections.


Where would I find the code that actually handles normal map processing and "corrected normals"? I'm a complete amateur at code, but I understand vector math and want to see if I can implement a naive "flip across normal" algorithm. The more I think about it, the more impressed I am by how simple it is and how well it seems to work in that paper, probably because instead of flattening normals that face away, it returns light from things facing away… something that should realistically happen if a ray can hit a backfacing normal.

Edit: found it. I was being dumb; it was posted higher in the thread. blender/bsdf_util.h at main - blender - Blender Projects is the spot in question, specifically. The "maybe ensure valid specular reflection" section seems to skip the correction in edge cases like curves and when the geometry and shader normals are the same, with room for other exceptions (useful if you want to experiment with disabling the correction for various geometry types).

With any luck, flipping should also be more performant than correcting/replacing/multiscatter, because it's a simple vector-math transform.

Sorry for a third post; I promise I'm not bumping the thread.

I finally understood the kernel enough to get

(I - 2 * dot(N, I) * N)

working without errors.
It's a really, super naive approach, where all I'm doing is reflecting the vector. I realized after testing that I should have removed the Z component from the vector, but overall it still got me… passable results, at least something I'm proud of as my first real change to Blender's code.
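For reference, this is what that expression amounts to as a standalone function (GLM only for the vector types here; the real change lives in the Cycles kernel, which uses its own float3):

```cpp
// The naive "flip" I tested: mirror the incoming direction I about the shading
// normal N. This is just the standard vector reflection formula.
#include <glm/glm.hpp>

glm::vec3 flip_across_normal(glm::vec3 I, glm::vec3 N)
{
  return I - 2.0f * glm::dot(N, I) * N;
}
```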
Default:


Corrected (replacement method):

Corrected (my really bad reflection method):

Ground Truth Displacement:

From top to bottom: the default, the current corrected normals, my corrected normals, and the displacement version of the same normal map.
It's obvious from this that mine doesn't pass muster, either in a furnace test or visually. It's almost unusable because of the black band: a region somewhere between a normal bounce and one that triggers "correction via reflection", where normals are still allowed to self-intersect.

However… where it does work, reflection does seem to be more accurate as a whole than "normal damping".
Look closely: the "ground truth" displacement shows almost none of the orange part of the sky here, but the corrected normals show a whole lot of it! That's because at extreme incoming-ray angles, the normal map acts as if it were flat.
This totally destroys the usefulness of normal maps, whose whole purpose is to provide detailing and shading that appears non-flat.

Also note how accurately my code handles the reflections of the green sphere, compared to the "ground truth" displacement map.

I want to propose a refined implementation of normal reflection as a correction method. It’s super inaccurate when it comes to lighting, but is visually and perceptually more accurate in most cases.

These are the diffuse tests. They aren't significantly different from one another, but if you flip back and forth you can see that reflected normals do have a slight effect on lighting at glancing angles.


Some more information: I haven't finished working on this, or stopped, over these past couple of weeks.
I noticed it was very strange that something tagged "only used for glossy normals" was affecting diffuse-only normals with an IOR of 1, which is theoretically 0% glossy.

I've been scouring the code, and finally found it almost two weeks later.
The bulk of it is that Blender is using this as a reference: Taming the Shadow Terminator.
It's a SIGGRAPH talk from Disney, but even after a brief look at it, it… doesn't really pass muster. Sure, the method does exactly what it's supposed to, but the samples are clearly cherry-picked to show the best case for their method. One of them isn't, and, well, I've spliced all the slides together so you can actually compare them fairly… (yes, the bump tests are a couple of pixels wider for some reason).

I see nothing wrong with the original bump here, and it's achingly clear that their solution (the one Blender now uses) is the worst one presented on the slides, both visually and functionally from a lighting perspective. The original bump method lets peaks catch light past the terminator, and the Estevez method doesn't knock peaks down even if it dims them, whereas the Chiang-Li-Burley method seems to prematurely remove detail around the terminator. It essentially takes normals beyond a certain angle and replaces them with a shaded gradient. That is hardly unbiased, and it results in very unusual shading at glancing angles, where the tips of bumps would normally receive striking highlights; with the corrected normals, everything pointed toward the light acts as if it were nearly flat.
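For anyone who wants to see the math, this is my reading of the Chiang-Li-Burley softening term from the talk (hedged; check the slides or the kernel for the authoritative form): a geometry factor g is computed from the geometric normal, the bumped shading normal, and the light direction, and the contribution is faded with a cubic once g drops below 1.

```cpp
// Shadow-terminator softening as I understand it from the talk, sketched with GLM.
#include <glm/glm.hpp>

float bump_shadowing_term(glm::vec3 Ng, glm::vec3 N, glm::vec3 L)
{
  float denom = glm::dot(N, L) * glm::dot(Ng, N);
  if (denom <= 0.0f) return 0.0f;   // light behind the bumped normal
  float g = glm::dot(Ng, L) / denom;
  if (g >= 1.0f) return 1.0f;       // well inside the lit side: leave it alone
  if (g <= 0.0f) return 0.0f;       // fully past the terminator
  float g2 = g * g;
  return -g2 * g + g2 + g;          // smooth cubic roll-off toward the terminator
}
```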

The issue isn't limited to this bump smoothing, however. Part of it… is that the bump smoothing uses the same flag as the glossy-reflection normal correction. That means if an object has both glossy and diffuse parts in its material, you're forced to choose between corrected reflection normals and accurate edge shading on diffuse normals.

After a hefty amount of commenting things out and tinkering, I was able to produce this battery of tests:




And it turns out that my reflection normals are almost totally useless from above! Interesting! Here's another angle to prove they're "working":

Corrected glossy normals and corrected diffuse normals should be separate flags, as their solutions and their effects on a rendered scene are different, and both current correction solutions need to be replaced with more accurate ones.
And now to figure out algorithms that actually do those things. Easier said than done.

@Thomas_Kole @Atair that seems like it would solve the issues in your initial posts… at least the normal mapping ones.
As for the geometric normals, I'm totally unsure how to fix that; it seems you'd need a way for all the shaders to "talk" and decide how a normal map should be processed inline. I think the best solution would be @Alaska's, but it would take some genuinely rough work to reduce the bias to invisible levels. Still worth a shot, but definitely a level above what I know how to do.


Apologies if any of my terminology is off. I’m not a technical user, but I can see the impact of this issue on my work. I create assets using a pseudo-game modelling workflow (LP/HP meshes with baked normal maps), so correctly using shader-based normals significantly affects my renders.

Based on the logic detailed by @Alaska, I applied the same two lines of code to the closure.h file, but instead of adding them to the Diffuse BSDF, I used them to affect the Principled BSDF node. I have no real coding experience. I experimented based on reading the existing implementation and doing some additional research.

// closure.h - Starting at line 85
85  float3 N = stack_valid(data_node.x) ? stack_load_float3(stack, data_node.x) : sd->N;
86  N = safe_normalize_fallback(N, sd->N);
+87  // Force the shader data normals to use the modified normal:
+88  sd->N = N;
+89  sd->Ng = N;

I’m not suggesting this should replace the existing method outright, as I assume there are valid reasons Blender doesn’t currently work this way by default. However, allowing users to toggle between the current mesh normal method and forcing Blender to use shader normals would provide greater artistic control.

Given how much of a difference this makes in certain workflows, would it be viable to implement this as a toggleable option? I imagine some technical limitations or trade-offs have prevented this from being an option, but I’d love to understand what those might be.

Edit:
In the custom-compiled version, I have tested toggling the bump map correction on and off, but I cannot see it making any difference. The render in 4.3 has it toggled off; with it on, the edge bleeding is far more egregious.


For most people, whose models have normals that match their geometry, that change isn't going to do a whole lot except cause issues when multiple shaders are used in one material. Think of it like this: to make a proper shader, it needs to take normals and reflections into account. What happens when you then add together two shaders that have each applied their own bumpiness, shine, or anisotropy based on those normals? It's like a slapstick scene where every chef in the kitchen adds salt to the soup, one after another.

Most of what we're talking about here is what happens when the normal correction algorithm meets user-defined normals, bad geometry, or both. If you do photogrammetry, you'll be used to models like that: a toxic mess of faces that only a proper modeler can retopologize.

Also!
@Atair @Thomas_Kole

You'll be interested in this new pull request! It's meant to ensure that normals don't get washed out by the bump correction. I'm still working on the reflections part (that's a decades-old rendering puzzle), but hopefully it'll be well received and make it into the next version (or not; it's a breaking change)! If you want me to test any of your files, shoot them my way (and remember to pack them first)!


Impressive! Would you mind showing what this example looks like?
https://projects.blender.org/attachments/08c65d27-c619-483e-bd39-faf0faba3efd

I assumed you just wanted me to open the file in my branch and press F12.
You'll need to do the same so we can flip between them. I don't anticipate my changes helping here; they mostly improve terminator lighting of normal maps on smooth surfaces, not the normals themselves.

For people interested in the PR Nubnubbud has made, I have created a package build. It should be available from here in the next few hours.

CC @Roggii Just so you know about this after contacting me privately about it.


@Alaska I appreciate the effort! I'll check it out once it's available.

Thank you to Nubnubbud for creating the PR. I just triggered a build on the build bot.

The difference is very noticeable, very nice.
Take a look at this example.
The areas halfway looked weird before, and look perfect now.


Question: does this only apply to diffuse?

Diffuse
4.3


4.5

SSS
4.3


4.5

It seems to me that SSS does not take the normal map into account for the SSS entry angle.