Oh, damn, I was confused by the duplicated reference plane and scale parameters and thought it required scalar values and not vectors.
Why does it require a “Displacement node” and have the same settings inside the material?
Okay, fair enough. I hope the design of the Displacement input will be reworked soon.
OK. Hmm, what if there was a “Depth Offset” node with shader and offset inputs?
Would it be possible to implement it like that? (And 5. AO as well? I asked Campbell why AO is not exposed in Principled, and the goal was to have a separate output or something like that.)
Or expose enable/disable toggles for shader features for more advanced users:
The problem is that the displacement node is not required; you can create other valid displacement outputs without using it. And the displacement node does not clamp its input: if you feed values outside the 0…1 range to the Height input, you will get displacements larger than the range given by Midlevel/Scale. For the POM code it is very difficult to reliably predict (by static analysis) the range (in world-space units) that an arbitrary valid displacement node tree could output. That’s why I originally intended to add a “max displacement” and a “min displacement” parameter to the material POM settings. In practical use I realized that always manually converting the Midlevel/Scale parameters (which is still the typical setup) to Min/Max parameters is not very convenient. For world-space units the conversion is quite simple:
Min = -Midlevel * Scale
Max = Scale - Midlevel * Scale
Midlevel = -Min / (Max - Min)
Scale = Max - Min
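As a sanity check, the conversion above can be written out in a few lines of Python (the function names are just for illustration, this is not code from the branch):

```python
def midlevel_scale_to_min_max(midlevel, scale):
    # Min/Max displacement bounds in world-space units
    min_d = -midlevel * scale
    max_d = scale - midlevel * scale
    return min_d, max_d

def min_max_to_midlevel_scale(min_d, max_d):
    # Inverse conversion back to Midlevel/Scale
    scale = max_d - min_d
    midlevel = -min_d / scale
    return midlevel, scale

# Round trip with the Displacement node defaults (Midlevel 0.5, Scale 1.0):
# Min = -0.5, Max = 0.5, and converting back recovers 0.5 / 1.0.
```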
The global Min/Max displacement (or alternatively Midlevel/Scale) information is required for the POM ray search, so it knows which range has to be covered. The Min/Max parameters act as a bounding “box” for the displaced geometry.
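To illustrate how those bounds drive the ray search, here is a minimal linear-search sketch in Python (not the actual GLSL from the branch; `height_at` is a hypothetical stand-in for evaluating the displacement node tree, and the UV-offset model is simplified):

```python
def pom_linear_search(height_at, u, v, view_dir_uv, min_d, max_d, steps):
    """Walk the view ray from the Max bound down to the Min bound and
    return the first sample where the ray drops below the displaced
    surface. view_dir_uv is the UV shift per unit of traversed depth."""
    su, sv, ray_h = u, v, max_d
    for i in range(steps + 1):
        t = i / steps                        # 0 at Max bound, 1 at Min bound
        ray_h = max_d + t * (min_d - max_d)  # current ray height
        # UVs are offset proportionally to how deep the ray has travelled
        su = u + view_dir_uv[0] * t
        sv = v + view_dir_uv[1] * t
        if height_at(su, sv) >= ray_h:       # ray went below the surface
            break
    return su, sv, ray_h

# For a flat surface at height 0.0 with Min/Max = -0.5/0.5, the search
# stops halfway through the interval.
```

The point is simply that without the Min/Max bounds there is no defined interval for `t` to sweep, which is why the POM settings need them even though the shading itself does not.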
Technically this is no problem at all. In the first version I implemented it this way; see the first post in this thread. It could be added back (it should be easy) if this is something the Blender team wants.
There is a new optimized GLSL shader for rendering shadow maps. The material now has two parameters for the number of samples: one for the number of ray marching steps when shading the surface fragment, and an additional one for the number when rendering shadow maps. Rendering shadow maps often requires fewer samples for acceptable visual quality, so this additional parameter helps improve overall performance.
I have ported the automatic image texture LOD selection used in the original POM node based approach (see the first post in this thread and the diff linked there) to this displacement node tree based approach. For image textures with linear interpolation used in the displacement node tree, it will use a LOD (mipmap level) based on gradients calculated at the first step of the POM iteration. It will then keep this LOD fixed for any further POM iterations in order to avoid issues with divergent code branches (in the 2x2 blocks used for differential approximation). Good LOD selection is important in order not to be bottlenecked by memory access/bandwidth when sampling the image textures. This improvement should remove the main performance drawback of the displacement node tree based approach compared to the original node based approach. For image textures with cubic interpolation I have kept Blender’s default of using LOD 0. This is not so great for performance, but still generally gives the best quality (except for potential aliasing issues).
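For reference, gradient-based mip selection typically follows the standard footprint formula (LOD = ½·log2 of the larger squared texel-space gradient). A hedged Python sketch of that idea, my own simplification rather than the branch’s actual code:

```python
import math

def select_lod(duv_dx, duv_dy, tex_w, tex_h):
    """Pick a mipmap level from screen-space UV gradients.
    duv_dx/duv_dy: UV change per pixel in x and y; tex_w/tex_h: texture size."""
    # Convert UV gradients to texel-space gradients
    px = (duv_dx[0] * tex_w, duv_dx[1] * tex_h)
    py = (duv_dy[0] * tex_w, duv_dy[1] * tex_h)
    # Footprint: the larger of the two squared gradient lengths
    rho2 = max(px[0] ** 2 + px[1] ** 2, py[0] ** 2 + py[1] ** 2)
    if rho2 <= 0.0:
        return 0.0
    # LOD 0 when one texel maps to one pixel; clamp to the base level
    return max(0.0, 0.5 * math.log2(rho2))

# One texel per pixel on a 128x128 texture -> LOD 0
# Two texels per pixel -> LOD 1
```

Computing this once at the first POM step and reusing it, as described above, keeps every sample in a 2x2 block on the same mip level even when their iteration counts diverge.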
The new version also fixes a small memory leak issue present in previous versions.
I just wanted to chime in and thank you for your effort in implementing POM into Blender.
I did test the feature with some custom textures and it looked nice.
Thank you again, I hope to see this at some point in trunk; it really helps for the game art side of things.
I used a mist pass for the depth, and I found that using a normal map gives really good results.
Even at 4 samples, with the texture resolution limited to 128x128, the image still reads well.
I get the best results when the POM settings and the Displacement node values match each other; drivers work well for keeping them in sync.
Hope we can have a branch with the new DOF included in 2.93. It would be awesome to have both effects in Eevee. I’m not a programmer, so I’m not having luck with git.
Hadriscus, what I mean is that I wish this POM feature were included in Blender 2.93, the version that has the new Unreal Engine-like depth of field implemented. Regards!