Displacement from Material Modifier

I’d like to add the option to the Displace modifier to displace the mesh based on the Displacement output from its Materials, instead of a texture.

The feature is (kind of) already in Blender, since that’s basically what True Displacement does at render time, but I’m having a hard time figuring out how to call MeshManager::displace from MOD_displace.c. I’m not even sure that would be considered the right way to do it, since it depends on the Cycles code base.

I think the cleanest way would be to get the current instance of BlenderSession and go from there, but I don’t even know how to do that. It seems like the only instance of BlenderSession is created inside blender_python.cpp, so is it not possible to retrieve it from C?

Also, I’m not sure whether that would be safe regarding concurrency.

I would love to receive some feedback from someone more familiar with the Blender code base.

Thank you for reading.


This would be a great feature to have, but it will require very deep changes to the code. It should probably be tackled by someone who is already very familiar with the Blender and Cycles code, since it touches a lot of things, but mainly it needs some difficult design decisions first.

This is also a topic for Blender 2.8, where Blender Internal is being removed, but we still want textures for the displacement modifier, texture painting, particles, etc. Ideally the same textures would be available for Cycles, Eevee and the rest of Blender.

One solution would be to let Cycles do the displacement as you suggest, and this would require a new Cycles API that Blender can call. The existing BlenderSession should not be used for that; it would need to be something quite light and fast, since evaluating modifiers is quite performance sensitive. Some mesh data would have to be available to Cycles when running that shader, like mesh coordinates, UV maps, tangents, etc., but not an entire scene, since that would be too slow.
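
Purely to illustrate the kind of interface I mean (all of the names below are made up, nothing like this exists today):

/* Hypothetical, minimal Cycles entry point for shader displacement.
 * Blender would fill this with just the data the shader needs,
 * with no full scene synchronization. */
typedef struct CyclesDisplaceInput {
	int vert_count;
	const float (*positions)[3];  /* vertex coordinates */
	const float (*normals)[3];    /* vertex normals */
	const float (*uvs)[2];        /* active UV layer, or NULL */
	const float (*tangents)[4];   /* tangents, or NULL */
} CyclesDisplaceInput;

/* Evaluate the displacement output of a material for every vertex,
 * writing one offset vector per vertex into r_offsets. */
bool cycles_displace_eval(const struct Material *material,
                          const CyclesDisplaceInput *input,
                          float (*r_offsets)[3]);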

The other solution is to implement most of the Cycles texture nodes in Blender itself, and just evaluate everything there. The downside would be duplicated code, and we are already implementing it all in GLSL for Eevee as well. I don’t like the code duplication but this may be the preferred solution in the end. It will need to be discussed with other core developers, and hopefully someone can come up with a better solution.


Thanks for your answer, Brecht. I understand that maybe this isn’t the right project for me at the moment.

The thing is, I actually need this feature for a project I’m working on. So while I would prefer to implement it in a way that could be merged into master, I’m ok with implementing it in the ugliest and hackiest way to make it work in my private branch. So I would still appreciate some tips.

If you need a quick hack, maybe bake the displacement to an image and use that?

For something more automatic, the existing baking API is probably the way to go, but it’s also not easy to fit that into modifier evaluation, since it’s based on exporting the scene rather than a single isolated mesh in the middle of a modifier stack. See source/blender/render/intern/source/bake_api.c; instead of baking pixels you’d bake to vertices.


That’s plan B.
I’m using displacement shaders quite extensively for procedural environment modeling, so having realtime feedback in edit mode would be quite awesome.

I’ve been trying the baking API, but the results seem to be sampled at the wrong UV positions, for some reason I can’t figure out.

I’ve been debugging the code, and even if I do a UV bake pass, the UVs that come out are different from the ones I pass in.
If I manually move every UV vertex to the same coordinate the results are accurate, so it’s probably some interpolation problem?
Honestly, at this point I’m quite lost.

Here’s the modifier code in case someone wants to take a look at it:

static DerivedMesh* applyModifier(ModifierData *md, Object *ob,
	DerivedMesh *derivedMesh,
	ModifierApplyFlag flag)
{
	TestModifierData* tmd = (TestModifierData*)md;
	ModifierData* nmd = &tmd->modifier;

	DerivedMesh *dm = CDDM_copy(derivedMesh);

	MVert* verts = dm->getVertArray(dm);
	MLoop* loops = dm->getLoopArray(dm);
	MLoopTri* loop_tris = dm->getLoopTriArray(dm);
	MLoopUV* loop_uvs = CustomData_get_layer(&dm->loopData, CD_MLOOPUV);
	const int loop_count = dm->getNumLoops(dm);
	const int loop_tri_count = dm->getNumLoopTri(dm);
	const int vert_count = dm->getNumVerts(dm);

	/* Per-vertex bake inputs: a UV coordinate and a containing triangle. */
	float(*UVs)[2] = MEM_mallocN(sizeof(float) * vert_count * 2, __func__);
	int* primitive_ids = MEM_mallocN(sizeof(int) * vert_count, __func__);

	/* Brute-force search: for each vertex, find one triangle that contains
	 * it and the UV of one of its loops. Note that MLoopTri.tri[] stores
	 * loop indices, which have to be mapped back to vertex indices. */
	for (int v = 0; v < vert_count; v++)
	{
		for (int t = 0; t < loop_tri_count; t++)
		{
			MLoopTri* loop_tri = loop_tris + t;
			if (loops[loop_tri->tri[0]].v == v ||
				loops[loop_tri->tri[1]].v == v ||
				loops[loop_tri->tri[2]].v == v)
			{
				primitive_ids[v] = t;
				break;
			}
		}

		for (int l = 0; l < loop_count; l++)
		{
			if (loops[l].v == v)
			{
				const float *uv = loop_uvs[l].uv;
				copy_v2_v2(UVs[v], uv);
				break;
			}
		}
	}

	/* One BakePixel per vertex, positioned at the vertex UV. */
	BakePixel* bake_pixels = MEM_mallocN(sizeof(BakePixel) * vert_count, __func__);
	
	for (int v = 0; v < vert_count; v++)
	{
		BakePixel* pixel = bake_pixels + v;
		pixel->object_id = 0;
		pixel->primitive_id = primitive_ids[v];
		copy_v2_v2(pixel->uv, UVs[v]);

		pixel->du_dx = pixel->du_dy = pixel->dv_dx = pixel->dv_dy = 0;

		bake_differentials(pixel,
			UVs[loops[loop_tris[primitive_ids[v]].tri[0]].v],
			UVs[loops[loop_tris[primitive_ids[v]].tri[1]].v],
			UVs[loops[loop_tris[primitive_ids[v]].tri[2]].v]);
	}

	/* Build a new scene for baking, containing just a copy of the current mesh. */
	Scene* scene = BKE_scene_add(G.main, "BAKE");
	BLI_strncpy(scene->r.engine, RE_engine_id_CYCLES, sizeof(scene->r.engine));
	Object* object = BKE_object_add(G.main, scene, OB_MESH, "Bake_dummy");
	Mesh* bake_mesh = BKE_mesh_from_object(object);

	DM_to_mesh(dm, bake_mesh, object, CD_MASK_MESH, false);

	/* Copy the material slots from the original object; assign_material
	 * takes 1-based slot indices. */
	for (int i = 0; i < ob->totcol; i++)
	{
		BKE_object_material_slot_add(object);
		Mesh* ob_mesh = (Mesh*)ob->data;
		assign_material(object, ob_mesh->mat[i], i + 1, BKE_MAT_ASSIGN_OBDATA);
	}

	Render *re = RE_NewRender(scene->id.name);
	RE_bake_engine_set_engine_parameters(re, G.main, scene);
	
	float* result = MEM_mallocN(sizeof(float) * vert_count * 4, __func__);

	/* Bake the emission pass: one RGBA sample per vertex. */
	RE_bake_engine(re, object, 0, bake_pixels, vert_count, 4, SCE_PASS_EMIT, R_BAKE_PASS_FILTER_EMIT, result);

	/* Displace every vertex along its normal by the baked value (red channel). */
	for (int i = 0; i < vert_count; i++)
	{
		MVert* vert = verts + i;
		float displace[3];
		normal_short_to_float_v3(displace, vert->no);
		mul_v3_fl(displace, result[i * 4]);
		add_v3_v3(vert->co, displace);
	}

	/* Delete the temporary bake scene and object again. */
	BKE_libblock_delete(G.main, &object->id);
	BKE_libblock_delete(G.main, &scene->id);

	MEM_freeN(result);
	MEM_freeN(bake_pixels);
	MEM_freeN(UVs);
	MEM_freeN(primitive_ids);

	return dm;
}

It assumes the mesh is pre-triangulated and already has a UV layer.

The UVs you store in BakePixel should be barycentric coordinates within the triangle, not coordinates from a UV map.

Since you are baking to vertices those would be simply (1, 0), (0, 1) and (0, 0) for the three vertices in a triangle.
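
Something like this, reusing the names from your snippet (untested):

/* Fill one BakePixel per vertex by walking the triangles, with no UV
 * map involved. A vertex shared by several triangles just gets written
 * more than once, which is harmless when baking to vertices. */
for (int t = 0; t < loop_tri_count; t++)
{
	for (int corner = 0; corner < 3; corner++)
	{
		const int v = loops[loop_tris[t].tri[corner]].v;
		BakePixel* pixel = bake_pixels + v;
		pixel->object_id = 0;
		pixel->primitive_id = t;
		/* Barycentric coordinates of the corners: (1, 0), (0, 1), (0, 0). */
		pixel->uv[0] = (corner == 0) ? 1.0f : 0.0f;
		pixel->uv[1] = (corner == 1) ? 1.0f : 0.0f;
		pixel->du_dx = pixel->du_dy = 0.0f;
		pixel->dv_dx = pixel->dv_dy = 0.0f;
	}
}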


I finally got this working!
I’ve been struggling a lot with the primitive ids until I figured out that Cycles uses MFace indices instead of MLoopTri indices. The bake API uses loop triangles internally but performs some stunts to keep Cycles happy. In the end I’m just triangulating the mesh and using MFace indices.

I still have to polish a few things, but it’s mostly finished. The next thing I’d like to try to implement is adaptive tessellation based on surface detail, but that will have to wait for now.

Thank you so much for your help @brecht. Hope you don’t mind, but I have a few questions left:

  • How can I change the Cycles render settings (especially the render samples) for the new scene?
    I think I have to retrieve a pointer through RNA, but I have no idea how to do it.
  • I still have no idea what du_dx, du_dy, dv_dx and dv_dy in BakePixel are. I think it’s some calculus thing, but that’s outside my current math knowledge. I’m setting everything to zero and it seems to work OK, but I feel awful about it.

Yes, these are Python-registered properties, so it’s not entirely obvious. It’s something like this:

PointerRNA sceneptr;
RNA_id_pointer_create(&scene->id, &sceneptr);
PointerRNA cscene = RNA_pointer_get(&sceneptr, "cycles");
RNA_int_set(&cscene, "samples", 1024);

These are not really important. They are the differentials that describe the area covered by the sample, for texture filtering, but Cycles mostly uses unbiased texture sampling so it doesn’t use them much. You could set them to a value like (0.5, 0.0, 0.5, 0.0) to indicate that the sample covers about half the area of the neighboring triangles.
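
In terms of the loop above, that would be something like:

/* Rough differentials: each vertex sample covers about half a triangle. */
pixel->du_dx = 0.5f;
pixel->du_dy = 0.0f;
pixel->dv_dx = 0.5f;
pixel->dv_dy = 0.0f;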


I’ve uploaded a branch with this modifier to GitHub in case someone wants to play with it. It’s a bit hacky: right now it needs a triangulated mesh, and it doesn’t auto-update when the material changes (not sure if that’s actually possible), but it works.

I’ve been thinking about how to get this functionality into mainline Blender in a clean way, and the best solution I can think of is a VertexBaking modifier. It would take a shader node tree and store the output as vertex colors/groups, so it could be used not only with the Displace modifier, but with anything that takes a vertex group as a parameter.
It would also make it easy to pre-bake complex shader-based masks (edge detection, AO…) on dense enough meshes.
And it would open lots of possibilities for procedural modelling.
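
As a rough sketch of what I have in mind (all of these names are hypothetical, nothing like this exists yet):

/* Hypothetical DNA struct for the proposed VertexBaking modifier. */
typedef struct VertexBakeModifierData {
	ModifierData modifier;
	struct bNodeTree *node_tree;  /* shader node tree to evaluate */
	char layer_name[64];          /* vertex color layer / vertex group to write to */
	short output_type;            /* bake to vertex colors or to a vertex group */
	short samples;                /* render samples to use for the bake */
	char pad[4];
} VertexBakeModifierData;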

There are some issues that would need to be solved, though:

  • For this to be useful, vertex colors/groups should be stored at the modified mesh resolution. I skimmed the code, and I think the only problem is that mesh customdata is not read back after applying a modifier, but it should be trivial to do so and store the result.

  • There’s the issue of evaluating the whole scene for baking. As a workaround I’m just creating a new scene whose only object is a copy of the mesh at the current modifier stack level. It’s ugly, but it works fine.
    A cleaner solution could be to implement an option in the baking API to evaluate just a single mesh in isolation.
    This could also serve as a replacement for the current texture nodes implementation when the internal renderer gets dropped, since material nodes are more powerful anyway. Texture node trees would then be just like material node trees without shader nodes (or just drop texture node trees and use material custom node groups instead?).

Opinions?
