Parallax Occlusion Mapping

Normal maps and bump maps are also “hacks”, but they are incredibly useful and very commonly used. Parallax occlusion mapping has huge benefits for final rendering, not just the viewport.

Parallax occlusion mapping can give better results than subdivision, because GPUs do not handle very tiny triangles well.

And parallax occlusion mapping is far, far faster than subdivision, while giving visually equivalent results in most situations. This is a really big deal, especially if you have a big scene.

This means that parallax occlusion mapping can be useful even in Cycles, to drastically improve performance.

That already exists in Cycles. But even with that feature, there are still advantages to parallax occlusion mapping in Cycles. And of course parallax occlusion mapping is necessary for Eevee.

6 Likes

All 3D rendering is a hack, unless you’re simulating the actual atoms or bosons.

POM fills a niche that isn’t yet satisfied by any existing technology. True displacement delivers very realistic, accurate results, but at a steep rendering cost, and it has a very limited practical use case, especially for smaller Blender users who don’t have access to render farms.

Bump and normal maps are also very useful, but they break down in lots of situations, such as heavy displacement or shallow viewing angles.

POM sits neatly between those two, for both Cycles and Eevee, and would be very valuable to many people.

If there is research into better alternative methods that do something similar to POM, that would work as well. But as far as I know, POM is the best we’ve got for now.

6 Likes

You must be talking about on-the-fly displacement at render time, not micro-displacement via adaptive subdivision. Cycles uses the latter, and it makes very little difference in the amount of time it takes to put samples on the screen; it’s the memory use that goes up.

1 Like

No, I am talking about subdivision-based micro-displacement. Memory is a “cost to rendering”. Where do you see me talking about render time? For small users, memory can be just as much of a bottleneck as computation time, if not more.
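
To illustrate why memory is the bottleneck, here is a rough back-of-the-envelope sketch (the numbers are hypothetical, for intuition only, not measured from Cycles): each uniform subdivision level roughly quadruples the face count, so displaced micro-geometry grows geometrically, while a POM height map is a fixed-size texture no matter how much apparent detail it produces.

```python
# Rough illustration: face-count (and thus memory) growth under
# uniform subdivision, vs. the fixed cost of a height map for POM.
# All numbers are hypothetical, for intuition only.

def subdivided_faces(base_faces: int, levels: int) -> int:
    """Each uniform subdivision level multiplies the face count by ~4."""
    return base_faces * 4 ** levels

base = 10_000  # faces in a hypothetical base mesh
for level in range(7):
    print(f"level {level}: {subdivided_faces(base, level):>13,} faces")

# A POM height map, by contrast, costs a fixed amount regardless of
# apparent geometric detail, e.g. a 2048x2048 single-channel float texture:
height_map_bytes = 2048 * 2048 * 4
print(f"height map: {height_map_bytes / 2**20:.0f} MiB, constant")
```

At level 6 the hypothetical 10k-face mesh is already over 40 million faces before the BVH is even built, which is exactly the kind of growth a fixed-cost height map avoids.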

BVH build time should also be taken into account, by the way. Not to mention, things tend to get unstable if memory use grows too large on a system that isn’t built for it.

Lastly, it’s not just the final render that matters: previewing subdivision-based displacement can be a real struggle, since the build times can drive you crazy.

EDIT: adaptive subdivision and true displacement are, as of 3.2, still an ‘experimental’ feature, by the way, so even that isn’t technically an official part of Blender, and it isn’t advised for use in production.

4 Likes

For comparison:


18 Likes

My mindset is that if you implement a hack on the VIEW side of things (as in the MVC pattern), you are free to take lots of creative shortcuts and have more freedom to represent the graphics however you want. You are free from the specs and restrictions of the data, so you can interpret the data in new and creative ways.

If we talk about Blender’s standard subdivision modifier, its main purpose is simply to multiply the vertex count, so it is indeed not an option to consider. However, we should look at more advanced adaptive subdivision techniques. The most standard example is terrain mesh subdivision, since it combines many interesting techniques: tiles, view-based distance, adaptive subdivision, and so on. On top of that, other techniques such as BVH culling should keep tiny triangles from entering the GPU pipeline.

Perhaps POM is good for this specific use case. As far as I know, though, it still isn’t the best of the best. For example, I have seen various tech demos that render entire planets at 60 FPS. Obviously, clever optimizations make this feasible. (It doesn’t matter if the code is written in C#, since the whole point is to know what to occlude; the GPU renderer takes care of the rest.)

For example, when you write shader code, the boundaries between ray tracing, volumetrics, and polygons all blend together (the techniques and principles may or may not end up being the same technique). Unless you deliberately lock the code design into one specific technique, in which case you follow those principles by intent. You are not restricted to thinking only about how to calculate normals for triangles; you can go directly to the interesting ray tracing techniques (normals, light bounces, etc.).

POM is great, but so far the problem is that the implementation has been very difficult and is still postponed. In game engines, which have only GLSL shader code, you are free to throw a shader in and get the result you want. In Blender, however, many other factors play a role, such as Cycles/Eevee, CPU/GPU, and GLSL/Metal/Vulkan, which makes the process much more complex.

Something wrong with Depth/Parallax Mapping in Spatial Material · Issue #15934 · godotengine/godot · GitHub

So the point here is: what if we skip POM entirely and go for something better?
e.g.:

GitHub - sp4cerat/Planet-LOD: Planet Rendering: Adaptive Spherical Level of Detail based on Triangle Subdivision
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf

I have no idea what you’re talking about. Blender is not MVC. 3D rendering is not MVC.

The only thing that matters in the art world is the final result. If you can get a good final result with “hacks” like normal maps or parallax occlusion mapping, then that is what artists will do.

It doesn’t matter if it’s adaptive or not. It is a hardware limitation: GPUs do not like tiny polygons, regardless of how those tiny polygons are created. You should learn about overshading:

“A close relative of overdraw is overshading, which is caused by tiny or thin triangles and can really hurt performance by wasting a significant portion of the GPU’s time. Overshading is a consequence of how GPUs process pixels during pixel shading: not one at a time, but instead in ‘quads’ which are blocks of four pixels arranged in a 2x2 pattern. It’s done like this so the hardware can do things like comparing UVs between pixels to calculate appropriate mipmap levels.

This means that if a triangle only touches a single pixel of a quad (because the triangle is tiny or very thin), the GPU still processes the whole quad and just throws away the other three pixels, wasting 75% of the work. That wasted time can really add up, and is particularly painful for forward (ie. not deferred) renderers that do all lighting and shading in a single pass in the pixel shader. This penalty can be reduced by using properly-tuned LODs; besides saving on vertex shader processing, they can also greatly reduce overshading by having triangles cover more of each quad on average."
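
The quoted 75% figure is just quad arithmetic. As a tiny sketch (a deliberately simplified model, not a real rasterizer):

```python
# Simplified model of the overshading penalty described above:
# GPUs shade pixels in 2x2 quads, so a triangle touching only some
# pixels of a quad still pays for all four shader invocations.

def wasted_fraction(covered_pixels: int) -> float:
    """Fraction of a 2x2 quad's shading work that is discarded when
    a triangle covers only `covered_pixels` of the quad's 4 pixels."""
    assert 1 <= covered_pixels <= 4
    return (4 - covered_pixels) / 4

# A tiny/thin triangle touching 1 pixel per quad it crosses:
print(wasted_fraction(1))   # -> 0.75, the 75% waste from the quote
# A large triangle filling whole quads:
print(wasted_fraction(4))   # -> 0.0
```

This is why well-tuned LODs help: bigger on-screen triangles cover more of each quad on average, pushing the wasted fraction toward zero.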

I told you, that already exists in Cycles. If you want adaptive subdivision, you can use it right now.

Eevee cannot use adaptive subdivision. POM is needed for Eevee. POM is also useful for Cycles because it is faster than adaptive subdivision.

The implementation is not difficult; POM is well understood. For example, here is an OSL implementation of POM:

And here are some implementations in GLSL:

It’s just a couple dozen lines of code. Compared to other rendering techniques in Blender, POM is not complicated; it’s a really simple technique. It is so simple that it is used in realtime game engines like Unreal. POM is a great alternative to hardware tessellation.
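
To give a flavor of how short the core really is, here is a minimal CPU sketch in Python (an illustration of the shared idea, not the OSL/GLSL code referenced above): march along the view ray in tangent space, sampling the height field in fixed steps until the ray dips below the surface, then use the shifted UV.

```python
# Minimal sketch of the POM core loop, in Python for readability.
# Real implementations live in a fragment shader (GLSL/OSL); this
# just illustrates the linear-search ray march they all share.

def parallax_occlusion(height, uv, view_dir, depth_scale=0.1, steps=32):
    """Return the parallax-shifted UV for a tangent-space view ray.

    height:    callable (u, v) -> height in [0, 1] (1 = surface top)
    uv:        (u, v) texture coordinate where the ray enters
    view_dir:  normalized tangent-space view vector (x, y, z), z > 0
    """
    layer_step = 1.0 / steps
    # Per-step UV shift, projected from the view ray onto the surface.
    du = -view_dir[0] / view_dir[2] * depth_scale / steps
    dv = -view_dir[1] / view_dir[2] * depth_scale / steps

    u, v = uv
    ray_depth = 0.0                      # march down from depth 0
    surface_depth = 1.0 - height(u, v)
    while ray_depth < surface_depth and ray_depth < 1.0:
        u += du
        v += dv
        ray_depth += layer_step
        surface_depth = 1.0 - height(u, v)
    return (u, v)

# A flat surface at full height shifts nothing:
flat = lambda u, v: 1.0
print(parallax_occlusion(flat, (0.5, 0.5), (0.3, 0.0, 0.95)))  # -> (0.5, 0.5)
```

Production versions refine the hit with a binary search or interpolation between the last two steps, but the loop above is the whole trick.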

The reason it was postponed is simply that other things had higher priority. And now that those higher-priority things are done, POM is being implemented in Eevee Next.

1 Like

It is already usable. In fact, I have used it to create detail that would otherwise be memory-prohibitive with the displacement modifier. That it is still not fully complete reflects a lingering issue in the way Blender is being developed (it has been experimental since the very first Cycles commit 10 years ago, but that is another thread).

1 Like

In terms of software architecture, Blender is implemented with an MVC design. The geometry/vertices exist in the “Model” domain; anything that has to do with rendering goes into the “View” domain.

If you treat the renderer as a standalone module of its own, you would not care how it renders things based on the “Model”; you could do many creative hacks such as POM just to make things look pretty, and skip the technical specifications of the “Model” if needed.

For example, since POM is a ‘hack’, it could never find its place in the context of 3D renderers, because by mission statement and design, 3D renderers favor clean, hardcoded geometry.

If you talk about real-time 3D rendering, you obviously have dynamic LOD to take care of this problem. Other times you would use more advanced BVH-based subdivision techniques. The point is that you shall not let tiny polygons pass (as Gandalf said).

Why do you keep referring to POM as a “visual hack”, as if all the other techniques are not “visual hacks” by literally the same logic? Either you call all 3D rendering techniques “visual hacks” (since they all are) or you call none of them that.

POM is just another technique in a long list of techniques that trick your brain into believing it’s observing a world made of physical matter.

BSDF is a visual hack, PBR is a visual hack, Ray Tracing is a visual hack, normal mapping is a visual hack. RGB pixels themselves are a visual hack. Heck, painting is a visual hack.

Your arguments rest on inconsistent logic.

The only reason POM hasn’t found its place is that it’s a little too resource-intensive for real-time games, so they stick with less realistic normal maps. And when there is no time constraint on rendering, there are more advanced solutions, like micro-displacement. It has nothing to do with it being a “hack”.

But I know a few perfect places for it: WYSIWYG prototyping and amateur (low-resource) animation production, both areas where Blender is used a lot, and where this would help many Blender users.

There is no sensible reason to oppose the addition of this technology, besides the fact that the devs have more important things to focus on, which is a valid point. But that’s not the point you are making.

Yours is just based on an arbitrary, subjective, ideological view of what you consider a “hack” and what counts as “clean and hardcoded geometry”, and it has no place or use in this discussion.

8 Likes

Even at the hardware GPU level, vertices and pixel shaders are a hack. Bones and weight painting are hacks. Even the very concept of a “normal” is a hack. The entire 3D stack is hacks built on top of hacks, from the lowest level up to the highest level. But it works.

That is starting to change now: https://www.artstation.com/artwork/XBnL3l

Modern AAA games are already using techniques which are much more expensive than POM. The issue isn’t performance per se, the issue is that normal maps are “good enough” 99% of the time, and many game devs haven’t even heard of POM.

And some people are having great success using POM in the Archviz industry: https://80.lv/articles/creating-the-parallax-effect-with-the-power-of-osl-and-jiwindowbox/

Eevee and Cycles don’t have the hard-realtime requirement that games have, so POM is even more useful in Blender.

2 Likes

Kind of makes me think of NeRFs, which are view-dependent reconstruction hacks of a volume. I have wondered if you could bake a NeRF out into a POM surface.

1 Like

Do you guys know how to use that branch with Blender 3.0+? The 2.92 version is too old for my scene.

If it can generate a height map / depth map, then yes absolutely.

Alternatively, if it gives you a mesh, then you can generate a height map from the mesh. Baking a mesh into a height map is very easy.
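
To sketch what “baking a mesh into a height map” amounts to (a toy illustration with a hypothetical `surface` function, not Blender’s actual baker): sample the top surface at each texel center and remap the heights into [0, 1].

```python
# Toy illustration of baking a surface into a normalized height map
# (not Blender's baker). `surface` is a hypothetical callable that
# stands in for raycasting down onto the mesh at each texel.

def bake_height_map(surface, res):
    """surface: callable (x, y) -> z over [0,1]^2; res: texels per side.
    Returns a res x res grid of heights normalized to [0, 1]."""
    samples = [[surface((i + 0.5) / res, (j + 0.5) / res)
                for i in range(res)] for j in range(res)]
    lo = min(min(row) for row in samples)
    hi = max(max(row) for row in samples)
    span = hi - lo or 1.0   # avoid divide-by-zero for flat surfaces
    return [[(z - lo) / span for z in row] for row in samples]

# Example: a single bump in the middle of the tile.
bump = lambda x, y: max(0.0, 1.0 - 16 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))
hmap = bake_height_map(bump, 8)
print(hmap[4][4])  # texel near the center of the bump
```

A real baker would cast rays against the mesh per texel instead of evaluating a function, but the output, a normalized single-channel height grid, is exactly what POM consumes.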

Because NeRF generates a 3D model from a very specific viewing angle, it seems like it would work quite nicely with POM.

You’ll have to port the changes to the newer branch in the source and build a 3.x version yourself. Not sure how much work that’s going to be, but certainly not a few clicks.

Any non-standard, non-validated way of getting a result is a hack. As of now, POM is not offered as an out-of-the-box feature, so any workaround for it is a hack.

As of now, the only way people can get POM is by creating dozens of nodes (to simulate a for loop). It can be done if there is a great need and willingness to do so:

Blender 3.0 - Parallax Occlusion Mapping - Node Setup (Updated) - YouTube

In many games, people can just change the shaders a bit and get POM.

https://www.reddit.com/r/Minecraft/comments/1p02js/mods_that_use_parallax_mapping_create_that_next/

Take a look at this message from 2018:

https://mobile.twitter.com/simonthommes/status/1061036431416664066

By this logic, we could say that you can throw a GLSL shader inside Blender and everything would be great; but the problem is that it hasn’t happened so far.

If POM is now a WIP feature and is released with the next Eevee, then let’s go with that. If it doesn’t happen, we will still be talking about POM for a while.

Decided by whom? You?

Any innovation is non-standard, and any new feature starts out unvalidated. That’s the whole point of adding new features to an existing product.

Of course the only way currently to get POM in Blender is to “hack” it in. That doesn’t make POM itself a hack, just the current custom build the OP made. The whole point is to probe for interest, and for devs with time to check and work this implementation into something robust, solid, and tested, so that it can become standard and validated in Blender.

Maybe you have no clue, but this is how open-source development works. Someone makes a branch with new code for a new feature; then others check, alter, and confirm or reject that code for merging into the main branch. Are you going to post “tHiS iS a HaCk!1!1!” on all code from experimental branches, just because?

That’s literally what the OP did. That’s why we are talking about this. :man_facepalming:t5: :man_facepalming:t5:

Please read the first post again (especially the last paragraph) before posting any more replies, because this is pointless and wasting everyone’s time.

3 Likes

Please keep this thread on topic. Nine posts have been flagged as off-topic here. If someone likes to continue work on this feature or has development questions, that’s fine, but asking for updates or discussing this in a feature request way is off-topic and devtalk isn’t the proper place for this.

11 Likes

Yes, I fully agree. People always question the necessity of a feature, almost to the point of quarreling, or stall an idea’s development by offering workarounds and already-known alternatives instead of helping new additions or ideas along. So thank you so much.

3 Likes