Parallax Occlusion Mapping

What a BIG difference, I hope PDO can be implemented in Cycles too :slight_smile:

3 Likes

This feature would be a godsend for me. It would make so many scenes infinitely easier to create and render. In both Eevee and Cycles!

I know we can achieve this result more or less using micro displacement, but the sheer amount of geometry that generates is insane: often tens of millions of polygons (depending on image resolution), which consumes a lot of GPU memory and brings very long generation times that have to be recomputed every frame. It’s really not ideal for something like sand on a beach.

This, in comparison, could generate the exact same effect in many situations with nothing more than a low-poly shape, with no time spent generating geometry and much less memory consumed in the process. From an artist’s point of view it would also be significantly easier to work with, since it would be material based and you wouldn’t need to add modifiers and whatnot to achieve the desired outcome.

I really hope this makes it into Blender for both Eevee and Cycles. It would be much easier to work with than microdisplacement.

12 Likes

This is amazing, and the development team should implement it in Blender right away.
Here’s a little experiment I did with POM

without POM

with POM

11 Likes

I agree, but we need it for Cycles as well; it would relieve a LOT of the displacement limitations :slight_smile:

12 Likes

I think I found a (simple enough) way to compile the displacement node tree into a GLSL function for Eevee that can be sampled iteratively for POM. I’m now trying to turn this into a complete implementation without user-exposed POM or Depth Offset nodes.

From what I see so far, I will have to add the following settings to the material (Eevee only):

  • Displacement Type: {Bump, POM}

For POM:

  • Samples: integer: Number of POM linear search steps
  • Min Displacement: float
  • Max Displacement: float
  • Depth Offset: bool: To enable (or disable) depth offset effects
  • Modify Normal: bool: To enable (or disable) replacing the default normals in the shader node tree.

The user-provided min/max displacement parameters are required to define the space in which the search for the displaced surface is done; it is difficult to determine the possible min/max displacement by static analysis of the user-provided displacement node tree. If this space is chosen too large (by the user), some of the Samples steps are wasted; if it is too small, the displacement will be “clamped”. An alternative could be to specify Midlevel/Scale instead of Min/Max (Min = -Midlevel * Scale, Max = Scale - Midlevel * Scale), but this might not be very intuitive when the displacement node tree does not use a (height-input-clamped) Displacement node as output.
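For illustration, the Midlevel/Scale alternative mentioned above would map to the Min/Max search space like this (a minimal Python sketch of the stated arithmetic; the function name is made up for the example):

```python
def search_space(midlevel: float, scale: float) -> tuple[float, float]:
    """Convert Midlevel/Scale (as on the Displacement node) into the
    Min/Max displacement range that would bound the POM search."""
    min_disp = -midlevel * scale
    max_disp = scale - midlevel * scale
    return (min_disp, max_disp)

# With the Displacement node defaults (Midlevel = 0.5, Scale = 1.0)
# the search space is symmetric around the base surface:
print(search_space(0.5, 1.0))  # (-0.5, 0.5)
```

With Midlevel = 0 the whole search space lies above the base surface, which matches how the Displacement node treats an unshifted height input.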

Other potential problems of a purely displacement based implementation:

  • The displacement node tree can quickly become “expensive” to evaluate (iteratively), for two reasons: because the user adds too many operations, or because the GLSL code generation does not manage to completely eliminate “unused” optional operations (like per-node texture mappings or similar).

  • In the basic implementation there will be no way for the code to specify mipmap levels for the image textures used in the displacement node tree. This generally results in things like a heightmap texture being sampled at too high a resolution (because ddx/ddy fails on iterated UVs), which degrades cache/memory performance on the GPU.
    It might be possible to detect the case where the user adds only an image texture node followed by a Displacement node in the displacement node tree. In this case, instead of (internally) adding a POM node that samples a generic compiled displacement GLSL function, a POM node with a texture parameter could be added (basically like the implementation in the first post, but automatically generated). This way, at least for the functionality provided by the original implementation, the same performance could be maintained.

  • The “Relative” displacement scale mode will be removed from the implementation. It does not really work well with POM in generic cases anyway: it works well only if there is no strong gradient in the texture scaling along the geometry surface, which means you need constant texture scaling on each individual part of your geometry. There are use cases where the relative mode could be useful, but this would have to be added to the Displacement node anyway, so for now I will drop it as an objective.

In this new implementation (without new user-exposed node types) I will completely drop support for Cycles, as there will be no need to support it “just for completeness”. The Depth Offset based material blending will also be dropped (it could be added separately if there is any interest).

The main advantage of this implementation will be (hopefully) support for (some) procedurally generated height maps. No new node types are introduced, so this will just be an (optional) setting for how to make use of a user-provided displacement function during rendering (in Eevee), like the current Bump effect.

Regarding the discussion of whether node types for ddx/ddy, depth offset or POM should be exposed to the user: I think it depends on the long-term objective/design of the shader node tree. Should it be a renderer-independent description/model of the shading and displacement properties of a surface (which is then rendered as well as possible on any given platform)? Or should it be a way to expose all techniques/features of every rendering platform in a user-friendly (non-programmer) way? The first approach (if followed strictly) will mostly limit support to techniques for which there is a good way to implement them on all rendering platforms, while keeping the node tree relatively well separated from the implementation. The second approach will tightly couple the shader node tree design to the supported rendering platforms, but at the same time provide support for more techniques.

11 Likes

Hello, first time in this channel.

I understand that POM is something tricky to integrate with the existing tools. But I do not understand why Cycles is not being considered (I do not want to ask again for Cycles support). This is a feature coming from the gaming industry, and it is a natural progression for Eevee to have it, but why not Cycles? IMHO, Cycles is meant to be the realistic render engine and could also include such functionality. I saw on forums (blenderartist_POM_old_discuss) that it is hard to do on the GPU. But is that the only limitation?

Anyway, thanks for working on it; I’ve been waiting for this feature for a long time now.

1 Like

Hi Kdaf,

Brecht answered this on 14 October, in case you missed it :slight_smile:

Mmmh, I’ll just say that unless there is a better alternative, other engines are using this AFAIK: the Corona 2.5D displacement is based on it, I suspect, and the same goes for the equivalent FStorm feature (whose name I don’t remember).

BTW, I’m not saying adaptive subdivision and displacement don’t need improvement, because they need it, a lot. We cannot use them in practically any scene: they don’t work for instanced geometry, for example, and subdivision is not adaptive inside the same mesh, just at the object level. So a wall of stone is not truly adaptive unless it’s separated into several segments, each of which will be adaptively subdivided. On top of that, it loads RAM so heavily that in a medium scene it maxes out GPU memory, so in the end we cannot use it.

So far I don’t think all of that is going to be improved for at least 3 or 4 years, because there is a super big list of things to improve in Cycles and it’s being worked through step by step. But a developer who actually wants to make an improvement like this one may just need some guidance; it could even be good to “isolate” the improvement to make it more manageable and aim for a middle ground. It’s a very welcome improvement from a user perspective, and we users need improvements :wink:

6 Likes

I’m totally with you; I think POM is important. Even if someone improves adaptive subdivision, I don’t think it can beat POM in terms of performance. But I think Brecht has other reasons for saying that, and I really would like to understand them.

3 Likes

There is a difference between POM and a POM-like algorithm adapted to ray tracing. What I don’t think is suitable for Cycles is standard POM as used in game engines.

9 Likes

Ah! Totally agree with that, that’s why I mentioned a middle ground solution :slight_smile:

2 Likes

A POM-like algorithm?

You have my attention sir! Do you have something in mind?

1 Like

It looks really cool!
But why does the Parallax node look different from the Bump or Normal Map node?
I think the Parallax node should work the same way as the Bump or Normal Map node.

Because the underlying feature is being displaced, not simply shaded; that’s the whole point of POM. If the feature is displaced, all of the other relevant maps need to be displaced by the same amount, or they will be contributing to the wrong feature.

1 Like

Hello, any news about the POM development?

I have completed a first version of an alternative Eevee only implementation that uses the displacement output of the material node tree for parallax occlusion mapping. The code diff and a sample blend file can be found here: https://developer.blender.org/D9792. An experimental build can be found here: https://blender.community/c/graphicall/frbbbc/

User Interface Integration

The user interface gets a new combo box in the Eevee material settings that lets the user select either “Bump” (default) or “Parallax Occlusion” as the displacement rendering technique. A simple material node tree with displacement output looks like this:

For parallax occlusion mapping, only the displacement in the direction of the local normal vector is considered. The “Displacement” node does exactly this, so it is recommended to use it as the last node connected to the displacement material output.

If “Parallax Occlusion” is selected as the displacement method, additional options and parameters are shown:

  • Samples: Defines the number of linear search steps used for ray marching in the parallax occlusion mapping shader. Higher values give better results, but take longer to complete. The linear search for the height map intersection point uses a randomization to achieve a dithered blend for parts of the geometry that can’t be reliably resolved with the given number of samples.

  • Midlevel / Scale: These parameters are basically the same as on the Displacement node and define the displacement search space for the parallax occlusion mapping. If the Height input on the Displacement node is in the [0, 1] range then the exact same values can be used (or copied with a driver as in the sample file). For other ranges the values have to be chosen such that the complete displacement (min/max) is contained within the space described with Midlevel and Scale.

  • Displace Normal: If checked then the normal vector is displaced for the shading operation based on the provided displacement function. Otherwise the original normal vector (for the geometry without displacement) is used for shading.

  • Depth Offset: If checked, then the depth value of each fragment is offset for shading and shadow mapping operations based on the provided displacement function.

  • Displace Backface: If checked then the parallax occlusion mapping is applied to backfacing parts of the geometry as well. Otherwise backfaces remain flat (no displacement). This option can be useful in preventing rendering artifacts from (wrongly) displaced backfaces.
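To make the interaction between Samples, Midlevel and Scale concrete, here is a rough CPU-side sketch (Python rather than the actual GLSL) of the randomized linear search described above. The height function, the tangent-space view slope and the sign conventions are simplified assumptions for illustration, not the real shader code:

```python
import random

def pom_linear_search(height, view_slope, uv, samples, midlevel, scale,
                      rng=random.random):
    """Sketch of a POM linear height-field search.

    height(u, v): displacement along the normal; this plays the role of
        the compiled displacement node tree in the real implementation.
    view_slope: (du, dv) UV offset per unit of depth travelled along the
        negative normal (view.xy / view.z for a tangent-space view ray).
    Returns the (u, v) where the ray first dips below the height field.
    """
    max_d = scale - midlevel * scale        # top of the search space
    min_d = -midlevel * scale               # bottom of the search space
    step = (max_d - min_d) / samples
    # Randomizing the first step dithers the banding that a fixed-step
    # search produces when Samples is too low for the height detail.
    d = max_d - rng() * step
    u, v = uv
    while d >= min_d:
        if height(u, v) >= d:               # ray passed below the surface
            return (u, v)
        d -= step                           # march one step deeper ...
        u -= view_slope[0] * step           # ... shifting the UV by the
        v -= view_slope[1] * step           # view-ray slope accordingly
    return (u, v)                           # no hit: clamp at the bottom
```

A real shader would typically refine the hit with a few binary search steps between the last two samples; the sketch omits that and only shows why a too-small Midlevel/Scale search space clamps the displacement while a too-large one wastes steps.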

The displacement node tree can make use of (any combination of) the following inputs for parallax occlusion mapping:

  • UV texture coordinate (only the default map is supported)
  • Texture Coordinate: Generated
  • Texture Coordinate: Object (arbitrary objects as source supported)
  • Geometry: Position
  • Geometry: Normal

This approach supports both image texture based and procedural displacement functions.

Known Issues and Limitations

  • The known limitations of parallax occlusion mapping still apply. The curvature of the geometry is not correctly represented. The ray-marched volume attached to each triangle is flat and unlimited (beyond the boundaries of each triangle) but only visible “through” each triangle.

  • The displacement node tree is compiled into a GLSL function and then evaluated iteratively in the parallax occlusion algorithm. Any overhead or inefficiency in the displacement node tree is multiplied by the number of iterations.

  • Currently the code always requests (and interpolates) orco coordinates (to support “Texture Coordinate: Generated” as input), even when the displacement node tree does not make use of them. This could be optimized by analyzing the displacement node tree first and requesting them only when required.

  • Image textures used in the displacement node tree are always sampled at the default LOD. For linear interpolation this is a dFdx/dFdy-based automatic LOD, which can lead to 2x2-pixel blockiness near the visibility contours of displaced geometry: because of the iterative nature (with divergence) of the texture evaluation in parallax occlusion mapping, this automatic LOD can be wrong or undefined. For cubic interpolation, texture LOD 0 (full resolution) is always used. This produces good results but can impact performance (memory bandwidth / cache). This issue is difficult to fix in a general setting. For a displacement node tree ending with an image texture and a Displacement node (with constant parameters), one could code an optimized alternative implementation that directly integrates the texture lookup (and LOD selection) into the parallax occlusion iteration code.

  • The internally generated tangents for the “Texture Coordinate: Generated” input are not compensated for the smooth normal vector, which can lead to flat-shading-like artifacts. This issue is present in the current bump mapping implementation as well. See https://developer.blender.org/T74367 for details. Similar artifacts present with bump mapping when using “Geometry: Position” or “Texture Coordinate: Object” as input are prevented when using the parallax occlusion mapping option. Potentially this partial fix could be “backported” to bump mapping as well.

  • The shading part of the node tree is evaluated with “Geometry: Position” and “Texture Coordinate: Object” without the displacement in the normal direction applied (the parallax occlusion based offset is still applied). In contrast, Cycles evaluates the shading part using a position with the displacement in the normal direction applied. The underlying problem is illustrated (using the normal vector as input) in this post: Parallax Occlusion Mapping. It would be quite easy to make the current implementation produce the same results as Cycles, but in my opinion either the behavior in Cycles should be changed, or both values (displaced and undisplaced) should be exposed to the user as inputs to the shader node tree.

  • The approximation of the displaced normal can no longer (in a general node tree setting) directly exploit knowledge of the resolution of the heightmap as in the first implementation (see above). This prevents “intelligent” smoothing of texture based displacement functions and forces the use of cubic texture interpolation (everywhere) to get smooth results, which also has some impact on performance.
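Regarding the LOD limitation above: the automatic mip selection that breaks down under POM iteration can be sketched as follows. This is a simplified, isotropic version of the derivative-based LOD rule GPUs use (per the OpenGL specification), written in Python for illustration rather than GLSL:

```python
import math

def auto_lod(ddx_uv, ddy_uv, tex_size):
    """Simplified sketch of the derivative-based automatic mip (LOD)
    selection for a texture fetch: rho is the larger of the two
    screen-space UV derivative lengths, measured in texels."""
    rho = max(math.hypot(ddx_uv[0] * tex_size, ddx_uv[1] * tex_size),
              math.hypot(ddy_uv[0] * tex_size, ddy_uv[1] * tex_size))
    return math.log2(rho) if rho > 1.0 else 0.0

# One texel per pixel on a 1024-texel texture selects LOD 0:
print(auto_lod((1 / 1024, 0.0), (0.0, 1 / 1024), 1024))  # 0.0
# Minifying by 4x selects LOD 2:
print(auto_lod((4 / 1024, 0.0), (0.0, 4 / 1024), 1024))  # 2.0
```

Inside the POM loop, neighbouring fragments iterate along divergent UV paths, so dFdx/dFdy of the iterated UV no longer measures the on-screen footprint; the derivatives (and hence the selected LOD) become wrong or undefined, which is why the cubic interpolation path pins LOD 0 instead.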

Again, it would be great if something like this could be integrated into an official release of Blender. Regarding that objective, I’m open to any change requests.

56 Likes

Thanks for your great work, now it’s very intuitive to use.

I noticed that sometimes an artifact is visible along the edges of the triangles making up a simple plane:


Here I’m using this displacement map: https://cc0textures.com/view?id=Rock034

When I enable screen space reflections, colorful artifacts appear. As soon as one moves the camera, they are gone:

Also, if the displacement node tree is too complicated, Blender becomes unresponsive.

I can’t reproduce the errors here on a Nvidia RTX 2070 Super.

The artifacts along the triangle edges are most likely related to how the dFdx/dFdy functions and/or the LOD selection for texture lookups are implemented (in the GPU hardware/driver). Do you see the same or similar artifacts when using simple bump mapping (from displacement)? Have you tried changing the interpolation on the Image Texture node (displacement texture) from Linear to Cubic to see if that helps? It should disable the texture LOD selection and always use level 0.

What GPU are you using?

Not sure what the issue is with the screen space reflections. You see these issues only when using parallax occlusion mapping?

I’m using an AMD Radeon RX 480.

I noticed the line disappears when I enable ‘Displace Backface’ or use any of the other texture interpolation methods. I don’t get any artifacts with bump mapping.

The colorful glitches only occur when I enable POM. When I zoom in on a glitch, it spreads and most of the mesh (and neighboring meshes) gets colored cyan, magenta, yellow or black:

Hi @mmoeller,

I have made a video showcase for your feature :slight_smile:

I tested your GraphicAll build for 5–6 hours.

With textures it works fine, and even with a lot of textures in one scene it stays fast and responsive.

A little advice for future users:

With procedural textures it’s a bit slower. You need to be careful with your nodes: choose 3D instead of 4D noise and reduce the Detail value on Noise Texture nodes. Even with bump mapping a heavy node setup can be quite slow; this is how Eevee works, so it’s normal that POM gets even slower with a heavy node setup. People just need to be aware of the limitations.

Great work btw! Hope it will land in master at some point.

File with textures can be downloaded here

13 Likes