Cycles and surface normals (normalmap shading)

Firstly, apologies for being long-winded; that’s one reason I post so sparingly.

Yeah, I can confirm the bump node is really not easy to use. For one, “distance” isn’t immediately clear: in practice it has a similar effect to strength, and new users struggle to tell the two apart. They mostly read as two adjustment sliders: strength flattens the result toward neutral grey, and distance increases the intensity of the output normals.
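To make the distinction concrete, here is a rough sketch of how a bump-style perturbation is typically assembled (my own guess at the general math, not the actual Cycles implementation, assuming an orthonormal tangent frame): distance scales the height field before its slope is turned into a normal, while strength blends the perturbed normal back toward the original one.

```python
import numpy as np

def bump_normal(n, t_u, t_v, dh_du, dh_dv, distance=0.1, strength=1.0):
    """Illustrative bump perturbation over an orthonormal tangent frame
    (n, t_u, t_v). Not the Cycles code; just a sketch of the two controls.

    dh_du, dh_dv : height derivatives along the two tangents
    distance     : scales the height field, so it amplifies the slope
    strength     : fades the perturbed normal back toward the original
    """
    # Normal of the height field z = distance * h(u, v) over the tangent plane.
    perturbed = n - distance * dh_du * t_u - distance * dh_dv * t_v
    perturbed /= np.linalg.norm(perturbed)
    # 'strength' blends back toward the unperturbed normal, which is why it
    # reads like "flattening toward neutral grey".
    blended = (1.0 - strength) * n + strength * perturbed
    return blended / np.linalg.norm(blended)

# Example: a flat +Z normal with a slope only along the u tangent.
n_out = bump_normal(np.array([0.0, 0.0, 1.0]),
                    np.array([1.0, 0.0, 0.0]),
                    np.array([0.0, 1.0, 0.0]),
                    dh_du=0.5, dh_dv=0.0)
```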

I have an interest in the various parts of Blender that “are suboptimal but known to have few true solutions”, so I’ve been following this for a good while, finding little workarounds here and there (none of them easily accomplished, or a reasonable amount of work).

The discussion seems to swing between “this is an issue that multiple biased solutions exist for” and “this is known and intended behavior; it’s a limitation of normal maps, and other path tracers do the same thing”. I don’t think either viewpoint should be taken as gospel; these things should be given to the user to decide. That’s why we have checkboxes to enable biased features, and why the software already ships with some “biased but almost unbiased” defaults.

Notably, the solution of “just use the displacement input and ignore the normal inputs on things like the Principled shader” is, to me, just as restrictive as Alaska’s geometry-normal replacements, which break when normal maps are used with multiple shaders.

We should start discussing functional alternatives, solutions, and algorithms for these things, because they are issues in path tracing at large; the biased solutions are easy, and the unbiased ones waiting to be found are probably hard and unoptimized.

At the moment, Blender’s solution seems to be limiting the angle at which rays may reflect, to ensure normal maps don’t send rays through meshes. Anything past a certain angle is simply given a corrected (and effectively wrong) normal, which works for maps that aren’t deep to begin with, for non-ideal geometry, and for very detailed normals interacting with a gentle terminator.
…but while this works for things like dents, wood grain, and uneven paint, normal maps are clearly used for everything from stucco-style walls and tree bark to engravings and panel lining, all of which tend to exceed the roughly 30-degree angle that seems to trigger “correction”.
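As a rough illustration of that kind of “correction” (my own reconstruction of the idea, not the actual Cycles implementation), the shading normal can be nudged toward the geometric normal until reflecting a ray about it no longer points below the real surface:

```python
import numpy as np

def corrected_shading_normal(wo, n_shading, n_geom, steps=16):
    """Illustrative "normal correction": if mirroring the outgoing direction
    about the shading normal would send the ray below the geometric surface,
    blend the shading normal back toward the geometric normal until the
    reflection is valid again. Sketch only, not the Cycles code."""
    n = n_shading / np.linalg.norm(n_shading)
    ng = n_geom / np.linalg.norm(n_geom)
    for i in range(steps):
        r = 2.0 * np.dot(wo, n) * n - wo     # mirror direction of wo about n
        if np.dot(r, ng) > 0.0:              # reflection stays above the surface
            return n
        t = (i + 1) / steps                  # push further toward the geometric normal
        n = (1.0 - t) * n_shading + t * ng
        n /= np.linalg.norm(n)
    return ng
```

The upshot is exactly what the thread describes: shallow detail survives, but steep normal-map features get flattened back toward the geometry.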

For example: a trick that games with a path-traced mode use is ray offsetting. It’s not unlike what Cycles does for volumes or baking. Basically, when a ray intersects, you pull its origin up above the surface a little and then allow it to continue. If it would have self-intersected, it now hits the surface again. The exact texel it hits will likely be incorrect, but it will be in the correct direction, and because a shallower bounce angle travels farther, the result should have some semblance of coherence. There are a number of variations so I’ll just draw some of them.

Offsetting between each bounce is pretty simple, but it can cause light leakage or black spots where surfaces are too close together. First-bounce projection doesn’t have that issue, but as the name implies, it can’t handle the more extreme cases with two or more bounces that very vertical normal maps produce. Both would probably be noisy, too.
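A minimal sketch of the per-bounce variant, assuming a generic `scene.intersect(origin, direction)` and hit record rather than any existing Cycles API:

```python
EPSILON = 1e-3  # how far to lift the next ray origin off the surface

def trace_path_with_offset(scene, origin, direction, max_bounces=4):
    """Per-bounce ray offsetting: after each hit, push the next ray origin
    a small distance along the geometric normal, so a bounce direction
    produced by a steep shading normal re-hits the surface instead of
    starting inside it. Sketch only: `scene.intersect`, `hit.position`,
    `hit.geometric_normal` and `hit.sample_bounce` are assumed interfaces."""
    path = [origin]
    for _ in range(max_bounces):
        hit = scene.intersect(origin, direction)
        if hit is None:
            break
        path.append(hit.position)
        direction = hit.sample_bounce()      # driven by the shading normal
        # Too small an offset re-hits the same point; too large leaks light
        # or blacks out tight contacts, as noted above.
        origin = hit.position + EPSILON * hit.geometric_normal
    return path
```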
Another idea I’ve mentally toyed with is adding displacement data to the alpha channel. That would theoretically give Blender the data it needs to treat a normal map as a displacement, but we need a new bake workflow before we can really play with that.
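The packing itself is already trivial to do today; here is a hedged example using NumPy and `bpy.data.images` (the image names are placeholders for whatever maps you actually baked):

```python
import bpy
import numpy as np

# Placeholder names; substitute the maps you baked yourself.
normal_img = bpy.data.images["normal_map"]
height_img = bpy.data.images["height_map"]

w, h = normal_img.size
normal_px = np.array(normal_img.pixels[:]).reshape(h, w, 4)
height_px = np.array(height_img.pixels[:]).reshape(h, w, 4)

# Store the height (its red channel) in the normal map's alpha channel.
normal_px[..., 3] = height_px[..., 0]
normal_img.pixels = normal_px.ravel().tolist()
normal_img.update()
```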

I don’t think this is a legitimate worry, and I mean that in the best way. “Normal correction”, and normal maps in general, already inject bias, so I think the most responsible thing to do at this point is to make normal maps work as closely to intended as possible, even if they inject some bias and aren’t totally accurate.
I really want to see your ideas described or in action, and as this Stack Overflow answer suggests, there do seem to be acceptable approximations. I can use displacement, but not for everything; it would be far too heavy, and most users don’t have giant render farms and have to fall back to the CPU to handle displacement. Using a normal map in the first place should already signal that an approximation is acceptable.

But most importantly:
There are solutions out there, ones that don’t by default “correct” a user’s intentional choice of material, normal map, or geometry normals.

And for that reason, I think normal correction should be disabled by default.


Apologies for the double post, but in case anyone has some information, I would like to ask:
Is there anything stopping us from implementing “virtual surface bounces”?
I’ll explain what I mean (a rough sketch follows the list):

  1. Supply a height map alongside the usual normal map. Another texture, channel-packed, is my favorite, but whatever works; we need both.
  2. When a ray hits a point on the texture, record the incoming angle, the normal-map vector, and the height-map value it hits.
  3. Compare the incoming vector to the normal map to get the vector it WOULD travel at.
  4. We now know the virtual height of the ray, as well as its virtual vector.
  5. We also know that the steeper the vector, the sooner it will contact a nearby texel.
  6. Travel along texels until the virtual height is below the height map of the texel it just passed.
  7. Because we know the vector it WOULD be traveling at, we can use the normal map and height value of that texel to start again at step 2, and rinse and repeat until the ray is culled or “leaves the virtual surface”.
  8. When the ray leaves the virtual surface (exceeds the height of the displacement), add the distance below the midpoint to the ray.
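Here is the promised rough sketch of steps 2–8, entirely in tangent space (the sampling helpers, the unit conventions, and the termination rules are my assumptions for illustration, not an existing implementation):

```python
import numpy as np

def sample_height(height_map, x, y):
    """Nearest-texel height lookup (placeholder for proper filtering)."""
    h, w = height_map.shape
    return height_map[int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))]

def sample_normal(normal_map, x, y):
    """Nearest-texel tangent-space normal lookup, components in [-1, 1]."""
    h, w = normal_map.shape[:2]
    n = normal_map[int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))]
    return n / np.linalg.norm(n)

def march_virtual_surface(uv, direction, height_map, normal_map,
                          texel_size=1.0, max_bounces=2, max_steps=256):
    """Sketch of the "virtual surface bounce" idea in tangent space.

    uv        : entry texel coordinates (x, y)
    direction : unit ray direction; direction[2] < 0 means heading into the texture
    Heights are assumed to share units with texel_size. Returns the exit
    position and direction, or None if the ray was culled."""
    h, w = height_map.shape
    top = height_map.max()
    pos = np.array([uv[0], uv[1], sample_height(height_map, uv[0], uv[1])], float)
    d = np.asarray(direction, float)

    for _ in range(max_bounces + 1):
        # Step 3: bounce off the local normal-map vector to get the
        # direction the ray WOULD travel at.
        n = sample_normal(normal_map, pos[0], pos[1])
        d = d - 2.0 * np.dot(d, n) * n

        # Steps 4-6: walk texel by texel; the steeper the direction,
        # the sooner the virtual height dips below the height field.
        for _ in range(max_steps):
            pos = pos + d * texel_size
            if pos[2] >= top:
                return pos, d            # step 8: ray left the virtual surface
            if not (0.0 <= pos[0] < w and 0.0 <= pos[1] < h):
                return pos, d            # walked off the tile
            if pos[2] <= sample_height(height_map, pos[0], pos[1]):
                break                    # step 6: dipped below the surface, re-hit
        else:
            return None                  # culled: never re-intersected anything

    return None                          # culled after too many virtual bounces
```

The returned direction is what the real ray would then continue with, which is the “treat the normal map + height map as a displacement” fudge described below.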

Essentially, it would be fudging things so that rays treat a normal map plus height map as a displacement, sans actual geometry.

This could also theoretically produce self-shading normal maps, and if you used the face normal with the height map to move and occlude pixels as a last step, it could even result in parallax occlusion mapping.

Essentially, this would amount to a probably less performant but more memory-efficient version of something between displacement and normal maps, one that can self-shade and provide indirect bounces onto itself.

If we truly want a totally agnostic displacement system, then we need a node which turns these things into displacement, and the more data you give it, the more methods should become available to you. Just as bump maps and displacement use height-map data while normal mapping only becomes available with a normal map, supplying both would open this method up.

Lifting the second bounce up by a little bit sounds kind of interesting; it will have other consequences, but it’s worth some experimentation, I think.

Interesting approach, but I think it exceeds the scope of what I had in mind for this thread : )

I think something like that has been worked on before:
https://devtalk.blender.org/t/parallax-occlusion-mapping/

I’m not sure if it was decided not to pursue it or if the original author just quit working on it.

btw. please don’t post in that very old thread unless someone plans on working on it.

Yeah, it’ll produce slightly off results, but because you can treat the incoming direction as steeper and make the bounce act as if it came from shallower within the surface in response to the height, it could stay coherent. I worry that it might be noisy or flicker in animation, though.

I wasn’t primarily talking about POM, but rather suggesting that normal maps have a clear issue when it comes to low-poly or smooth shading in path tracing, and that if they’re going to be heavily biased and produce unrealistic results, we should at least start looking at alternative algorithms which might produce more realistic ones. After all, Cycles supports NPR anyway.
My suggestion would, I think, be computationally heavy, but it could produce better results and could work with or be modified into POM. Mostly, though, I’m just suggesting a way for light rays to travel “inside” a texture, which could solve the whole “normal maps still don’t act anything like light” problem.

A user (I don’t know their alias on this platform) suggested going with something like this: Chapter 8. Per-Pixel Displacement Mapping with Distance Functions | NVIDIA Developer

And it might be similar to what you’re looking for/describing.


I see!
It’s a little similar, but I think that proposal is a bit heavy-handed. It’s definitely in the same vein of thought, but the ray marching, on top of calculating march distances, probably leaves it poorly performant.
My suggestion is closer to the one they mention at the beginning, Policarpo’s… and given it’s an older idea, tested on older machines, it is probably more reliable, too.
And he too notes that such data can be stored in a single RGBA image! It makes me happy to find my idea isn’t so outlandish. Chapter 18. Relaxed Cone Stepping for Relief Mapping | NVIDIA Developer. You can see there that Policarpo also considers tangent space a “simulated 3D space”.

Though my interpretation doesn’t use cone stepping, instead using deterministic values, and in many cases it won’t need to check more than a few dozen texels to find where a ray would hit. In other words, it avoids cone mapping, which can be more biased. It’s pretty clear that cone mapping buys performance at the cost of accuracy, which would make it viable for EEVEE but not necessarily Cycles. The intriguing part is that most of the solution works for both.

But these NVIDIA chapters are primarily POM solutions. I think it would be best to add that as an option, because the greatest downside of naive, hyper-performant POM is that it can’t self-light or self-shade. Once you’ve solved that, even the most basic POM will provide at least usable results. But to get back on topic with the actual normal-map discussion:

The bulk of my proposal is just a suggestion for how to “pixel march” within a texture, so that you can get what we might call “indirect-lit normals”: the primary use is simply to simulate one or two more bounces “within” a normal map, so that it can self-shadow and light itself indirectly. This would result in much more believable normal maps…
It would also solve the issue of the current Oren-Nayar diffuse model blocking 100% of incoming light at glancing angles when you have a normal map; it simulates back-scattering, but not the self-lighting that back-scattering would cause.
The issue is that I’m not sure how to obey smooth normals with this solution. Theoretically, if UVs wrap around a corner, light can “flow through the texture and around the corner” unless it obeys object normals too. Still, this would allow us to simplify Cycles a little by removing the systems that “correct” or “adjust” normals to account for edge cases.
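For the self-shadowing half of that, the same height field can simply be marched toward the light after the first hit. A minimal sketch, again in tangent space and with nearest-texel lookups purely for brevity (the unit conventions and step count are assumptions):

```python
import numpy as np

def is_self_shadowed(height_map, uv, light_dir, texel_size=1.0, max_steps=64):
    """March from the shading texel toward the light over the height field
    and report whether anything blocks it. light_dir is a unit vector with
    light_dir[2] > 0; heights share units with texel_size."""
    h, w = height_map.shape

    def height_at(x, y):
        return height_map[int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))]

    pos = np.array([uv[0], uv[1], height_at(uv[0], uv[1])], float)
    top = height_map.max()
    for _ in range(max_steps):
        pos = pos + light_dir * texel_size
        if pos[2] >= top:
            return False                 # cleared the height field: lit
        if pos[2] < height_at(pos[0], pos[1]):
            return True                  # something in the way: shadowed
    return False

# In practice a small bias above the start height may be needed to avoid
# the shading texel shadowing itself at grazing light angles.
```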

Weizhen Huang mentioned this:

But it caused a 10% loss of performance.