Pixel Depth Offset and Layered Material Painting: Crazy useful Unreal features Blender needs!

To demonstrate both of the features I’d like to suggest for Blender, here’s Quixel’s Jack McKelvie creating a beautiful rendition of Halo’s Blood Gulch in Unreal Engine 4 using them:

Create Halo’s Blood Gulch in UE4

1) Pixel Depth Offset

When two meshes intersect, Pixel Depth Offset adds dynamic material blending and normal smoothing for a natural transition. McKelvie uses it numerous times in the video, for grass and dirt intersecting the ground, the bunker intersecting the ground, the mountains intersecting the Halo ring, the cliffs intersecting each other, and more.

If this were implemented in Blender, it would make creating outdoor environments with natural weathering and accumulation simple, without obvious hard edges. We’d obviously also need knobs to adjust the blending amount for both materials and normals.
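To make the idea concrete, here is a minimal plain-Python sketch of the kind of depth-based blend math involved. The names and the exact formula are illustrative assumptions, not Unreal's actual implementation; the "knob" mentioned above corresponds to `fade_distance`:

```python
def depth_blend(scene_depth, pixel_depth, fade_distance):
    """Blend factor for softening a mesh intersection.

    scene_depth:   depth of the opaque geometry already in the buffer
    pixel_depth:   depth of the fragment being shaded
    fade_distance: artist-controlled softening distance (the "knob")
    """
    t = (scene_depth - pixel_depth) / fade_distance
    return max(0.0, min(1.0, t))  # saturate to [0, 1]
```

Near the intersection, `scene_depth` approaches `pixel_depth`, so the factor approaches 0 and the two materials (and their normals) can be lerped smoothly instead of meeting at a hard edge.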

I think this is critical for Eevee’s lasting success. Real-time homemade CG films could well be the future of independent moviemaking. Environments need to be believable for this vision to be realized.

2) Layered Material Painting

Simple enough concept. For a single mesh, the shader engine can layer materials (or layer the node groups within a single material), so that in texture painting mode you can arbitrarily paint those materials/groups across the surface of the mesh, such as for a landscape.

Right now, this is ridiculously difficult in Blender, requiring a cascade of two-input Mix Shaders, each with its own map feeding the Factor input. In the following video, we see that Unreal allows shader groups to be compiled into a dynamically expanding layer node, which then exposes all of those layers for direct painting onto the surface, with blending techniques such as Height Lerp:
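For reference, a height-based lerp can be sketched in a few lines of plain Python. This is one common formulation of the technique, not Unreal's exact node, and the parameter names are my own:

```python
def height_lerp(a, b, height, alpha, contrast=0.2):
    """Blend layer values a and b using a painted mask `alpha`,
    sharpened by the incoming layer's heightmap value `height`.
    Smaller `contrast` gives a harder transition."""
    # Push the mask through the heightmap so the high spots of the
    # new layer show through first (e.g. rocks poking through sand).
    t = (alpha - (1.0 - height)) / max(contrast, 1e-5)
    t = max(0.0, min(1.0, t))
    return a + (b - a) * t
```

With a half-strength paint stroke (`alpha = 0.5`), the tops of the new layer's heightmap come through fully while its low areas stay hidden, which is what makes painted terrain transitions look natural rather than airbrushed.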

Texturing a Landscape in UE4 (Unreal Engine 4)

This would allow much, MUCH simpler handmade terrain creation.

These two features are, in my opinion, critical to Blender and Eevee competing with Unreal Engine in the real-time CG film space. Seeing that Blood Gulch rendition, I’m sure many Blender users would be happy to see more scenes like that, both as wallpapers and as potential environments in which characters can interact for film and TV.

P.S. – Weight painting vertices to place particle objects is nowhere near as simple or performant as McKelvie painting the grass in that scene. Why can’t we have that?


Using a mix shader with two shaders is not a complex or hard task


Maybe we should remove proportional editing and sculpting, too. Manipulating one vertex at a time is not a complex or hard task.

What kind of answer is this? Why would anyone insist on regression and stagnation in their tools?


I’m sure the best comparison is to compare creating a unique node group with a few nodes that you can save… with editing millions of vertices one by one.

Of course.

Let’s entertain the notion that my analogy is invalid (it’s not).

Why did Epic implement material layering? I thought it wasn’t so complex or hard to mix two materials at a time with a separate mix map each time? That’s money they spent paying a developer to implement the feature!

Epic can spend millions on developing tools that are more optimized for real time, but that’s not an argument for Blender. Devs can’t spend time building every node setup that someone doesn’t want to simply copy and paste.

This is an incredibly poor argument, given that Blender’s budget is now in the millions, including $1.2 million from Epic themselves in July of this year. A significant portion of that budget was spent on the largest 2.80 feature, Eevee, a real-time engine primarily meant for real-time rendering.

If your argument is that Unreal doesn’t pre-render, but holds 60 FPS during real-time interaction, I’m unsure what layered material painting (which makes environment creation much easier and more natural) has to do with maintaining 16 ms frame timing.

This topic doesn’t make sense.

If you don’t even want to waste five minutes making a node preset and saving it, don’t ask others to do it for you.

I don’t think Blender should have a default node for something that some lazy guy doesn’t want to do.

Ah, so this was all a ridiculous extended ad hominem where you just wanted to call me lazy. Thanks for wasting everyone’s time.


Don’t be a dick about it, Roberto. I was thinking about this just now and saw the solution in another thread


LMAO, @Alberto strikes again. :man_facepalming:
Dude, when are you going to stop with this craziness? You’re one of the reasons many good users have stopped visiting this forum.
Take a break, dude. Damn.


@Nerve Welcome to the community.
Are you searching for something like this?

I have also tried creating a node setup that would do this.


I can’t tell from your video, but do you run into the same problem? It seems to work OK for flat terrain, but anything that’s not flat doesn’t work. The only way I have solved this is using dynamic paint, but that’s a pain with a lot of objects.


@AFWS Currently on weekend. Will track it down.
But I am pretty sure it can be done with world + object normals very efficiently. Let me know if any progress can be made on that methodology.
The paper for devs:

All those other examples aren’t really what the opening post is describing though. Sure, it’s easy to just mask the lower part of a mesh based on Z position and change the material for that part.

The special thing about that Unreal/Battlefront concept is that, first of all, the mask isn’t created from the world Z position, but from the Z position of the terrain underneath, no matter its shape.

Furthermore, the blending material takes its UVs from the terrain AND blends its normals with the terrain’s, eliminating the visible seam. None of those Blender node setups seem to do any of that.
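The difference can be sketched in plain Python. The mask below is driven by the height above the terrain surface directly beneath the point (so it follows any terrain shape), and the normals are then lerped by the same factor. This is my own illustrative sketch of the concept, not the Unreal/Battlefront implementation:

```python
def terrain_blend(point_z, terrain_z, blend_height):
    """Mask based on height above the terrain beneath this point,
    not the world Z plane, so it works on any terrain shape.
    Returns 0 at the terrain (use terrain material/UVs/normal)
    rising to 1 at blend_height above it (use the prop's own)."""
    t = (point_z - terrain_z) / blend_height
    return max(0.0, min(1.0, t))

def lerp3(a, b, t):
    """Component-wise lerp, e.g. for blending two normals
    (renormalize the result before shading with it)."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
```

Where the mask approaches 0, the prop inherits the terrain’s material, UVs and normal, which is exactly what removes the seam that a world-Z mask leaves behind on slopes.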


Thanks for the clarification. I get it now. Possibly have some prototype. Let’s try.


I think I’m getting pretty close. It does what @Laserschwert mentioned: it uses the terrain’s UVs and normals.


Here’s my ground masking approach. It only works for a ground plane; currently I’m using the ground’s positional value to derive the mask with a dot product.
For curvature, I guess we need to find a way to take a dot product or distance field of the hill object’s normals and the ground object’s normals.
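The dot-product idea from the post above can be sketched in a few lines of plain Python. This is a hypothetical helper, just to show the math being proposed, not a working node setup:

```python
def normal_agreement_mask(hill_normal, ground_normal):
    """Dot product of two unit normals: 1.0 where the surfaces are
    parallel, falling toward 0 (clamped) as their curvature diverges.
    Could drive the material blend factor near an intersection."""
    d = sum(h * g for h, g in zip(hill_normal, ground_normal))
    return max(0.0, d)
```

On a flat ground plane this reduces to comparing against (0, 0, 1), which is why the planar case is easy; the hard part in nodes is sampling the *other* object’s normal at the right location.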

Maybe Animation Nodes offers some opportunity there. Unfortunately I’m going back to production for now, so I can’t really get into it right now.

Here’s the file and demo if it helps.


This all looks like really good work. Is there a way to smooth the normals across the intersection seam? The intersections are still fairly visible.

Actually, wait, AFWS’s solution looks like it’s doing…something like that?

These solutions need to be mixed together. I am currently in production and will be free in February. If someone else wants to handle it, they can download the zip, although mine only works for planar ground. Curvature needs some mixture of the dependent objects’ normals.
@AFWS is using Dynamic Paint for masking, if I’m not wrong, which is actually a very good approach.