# Using Python to bake displacement for Cycles: how does Cycles displace a point?

EDIT: it appears my problem is color-space conversion in Cycles.

Hi everybody. I’ve been trying to create a script that bakes displacement from one ‘source’ mesh to a ‘target’ mesh by comparing the positions of corresponding vertices. How I’m doing it is explained below, and the code is included.

Problem:
I’m trying to bake displacement for Cycles to use, but I don’t really understand what data Cycles expects or what it does with that data. I can’t reproduce in Python the displacement Cycles produces.

Cycles displaces the geometry almost correctly, but there’s a noticeable margin of error on vertices that move along anything other than the world x, y, z axes (diagonal transformations). I suspect the face or vertex normal is somehow being used by Cycles, since the error is worse in some test meshes than in others. The problem is that I simply don’t know what Cycles is doing! It obviously isn’t just moving the vertices in global space (v.co += disp), because when I do that in Python the result is exactly correct. Any help in this matter would be appreciated.

I think it’s possible that Cycles is performing some sort of color-space transformation to the vertex-color data. Any ideas on how to test this, or work around?
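One way to probe this hypothesis without touching Cycles at all is to compare, in plain Python, a pure gamma-2 decode (which is exactly what the square-root trick inverts) against the standard sRGB transfer function. If Cycles applies the latter, the square-root trick leaves a residual error, which could explain the inconsistent results. This is just a sketch; whether Cycles actually applies sRGB here is the open question:

```python
def gamma2_decode(c):
    """Pure gamma-2 decode: what storing sqrt(x) exactly compensates for."""
    return c * c

def srgb_decode(c):
    """Standard piecewise sRGB electro-optical transfer function."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# The difference between the two decodes is the error the sqrt trick
# would leave behind if Cycles is really doing an sRGB conversion:
for c in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(c, gamma2_decode(c), srgb_decode(c), gamma2_decode(c) - srgb_decode(c))
```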

(I tried looking through the Cycles source code, but I didn’t understand it; maybe I just wasn’t looking in the right place. In the part I was looking through, nowhere do I see code that actually changes the location of anything, so it looks like that happens somewhere else. I’ve been poking around elsewhere and can’t find anything relevant.)

Here’s how I’m doing it:

1. I generate the data by subtracting the position of each target vertex from the corresponding source vertex.
2. Then I take the square root of each (x, y, z) component of the displacement vector. My reasoning: the components describe a translation along x, y, and z, and what I want is a vector that goes straight through the rectangular space of that transformation. Or to put it another way, x and y, x and z, y and z each make the legs of a right triangle, and I take the square root to get the hypotenuse. Scaling by the magic number 1.05 seems to help some of the problems in Cycles, but it’s not consistent.
3. Finally, I scale every vector so that the maximum component in either direction, positive or negative, is 1. This way the data fits into vertex colors without being clamped. I use one layer for positive components and one for negative components; the negative components have their signs flipped so they can be stored as colors.
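The three steps can be sketched in plain Python like this (the function name and layout are mine; the actual script operates on mathutils vectors, as shown further down):

```python
import math

def encode_displacement(disp, scale):
    """Encode one displacement vector as two non-negative color triples:
    normalize by `scale` (step 3), square-root each component while
    preserving its sign (step 2), then split into a positive layer and
    a sign-flipped negative layer (step 3's two-layer storage)."""
    pos, neg = [], []
    for c in disp:
        c *= scale
        c = math.sqrt(abs(c)) * (1 if c >= 0 else -1)
        pos.append(max(c, 0.0))   # positive components only
        neg.append(max(-c, 0.0))  # negative components, sign flipped
    return tuple(pos), tuple(neg)
```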

If I skip the square root step, the result is bizarrely exaggerated, especially along the x, y and z axes. The direction is correct, but the magnitude is wrong. Taking the square root fixes this, for the most part.

My shader is simple: the two vertex-color layers are subtracted (pos - neg) and fed into a Vector Displacement node set to Object space. Midlevel = 0, and Scale is the reciprocal of the value I scaled the vectors by in step 3 (right now I have to set it manually; the value is printed in the console).
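For reference, here is what the render side would have to compute for the round trip to close, modelled in plain Python. The squaring stands in for the color-space conversion I suspect Cycles applies to the vertex colors; that part is an assumption, not confirmed behavior:

```python
def decode_displacement(pos, neg, scale):
    """Model of the shader: subtract the two vertex-color layers
    (pos - neg), apply a suspected gamma-2 decode (the squaring,
    which would undo the square root from step 2), then apply the
    Vector Displacement node's scale of 1/scale (midlevel = 0)."""
    out = []
    for p, n in zip(pos, neg):
        c = p - n                            # (pos - neg) in the shader
        c = (c * c) * (1 if c >= 0 else -1)  # assumed color-space decode
        out.append(c * (1.0 / scale))        # Scale = reciprocal of step 3
    return tuple(out)
```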

Here’s the code:

```python
import bpy
import mathutils
import math

# returns 1, -1, or 0 depending on the sign of x
sign = lambda x: x and (1, -1)[x < 0]


def InactiveObject():
    """Return the first selected object that is not the active one."""
    for ob in bpy.context.selected_objects:
        if ob != bpy.context.active_object:
            return ob
    return None


def calcDisplacement(dispDict, matSource, vSource, matTarget, vTarget):
    """Fill dispDict with world-space (source - target) offsets and track
    the largest positive and negative components over all vertices."""
    posOffset = 0
    negOffset = 0

    for i in range(len(vSource)):
        v1 = matTarget @ vTarget[i].co
        v2 = matSource @ vSource[i].co
        value = v2 - v1
        dispDict[i] = value

        # compare every component, not just the first one that matches
        posOffset = max(posOffset, value.x, value.y, value.z)
        negOffset = min(negOffset, value.x, value.y, value.z)

    return (dispDict, posOffset, negOffset)


def scaleDisplacement(dispData, obSource, obTarget):
    disp, posOffset, negOffset = dispData

    scale = 0
    # either offset can be 0 if the mesh only moves in one direction
    if posOffset != 0 or negOffset != 0:
        if abs(posOffset) > abs(negOffset):
            scale = 1 / posOffset
        else:
            scale = -1 / negOffset
    print("Scale: %s" % scale)

    for key, value in disp.items():
        value = value * scale
        # Vector components can be read in a loop but not assigned that
        # way, so each one is handled explicitly
        value.x = math.sqrt(abs(value.x)) * sign(value.x)
        value.y = math.sqrt(abs(value.y)) * sign(value.y)
        value.z = math.sqrt(abs(value.z)) * sign(value.z)
        disp[key] = value

    return (disp, scale)


def CopyDisplacementToVPaint(dispDict, ob):
    posVCol = "displacementPos"
    negVCol = "displacementNeg"
    ob.data.vertex_colors.new(name=posVCol)
    ob.data.vertex_colors.new(name=negVCol)
    for l in ob.data.loops:
        val = dispDict[l.vertex_index]

        # positive components go to one layer (negative ones would be
        # clamped to 0 anyway, but this keeps the intent explicit)
        color = (max(val.x, 0), max(val.y, 0), max(val.z, 0), 1)
        ob.data.vertex_colors[posVCol].data[l.index].color = color

        # negative components, with flipped signs, go to the other layer
        color = (max(-val.x, 0), max(-val.y, 0), max(-val.z, 0), 1)
        ob.data.vertex_colors[negVCol].data[l.index].color = color


def TestDisplacement(disp, scale, ob):
    """Invert the bake in Python: squaring undoes the square root and
    dividing by scale undoes the normalization."""
    for v in ob.data.vertices:
        d = disp[v.index]
        d.x = (d.x ** 2) * sign(d.x) / scale
        d.y = (d.y ** 2) * sign(d.y) / scale
        d.z = (d.z ** 2) * sign(d.z) / scale
        v.co = mathutils.Matrix.Translation(d) @ v.co


obTarget = bpy.context.active_object
obSource = InactiveObject()
assert obSource, "Select two objects with identical topology."

vTarget = obTarget.data.vertices
vSource = obSource.data.vertices
assert len(vSource) == len(vTarget), "Select two objects with identical topology."

print(obSource.name, obTarget.name)
matSource = obSource.matrix_world
matTarget = obTarget.matrix_world

dispData = calcDisplacement({}, matSource, vSource, matTarget, vTarget)
disp, scale = scaleDisplacement(dispData, obSource, obTarget)

CopyDisplacementToVPaint(disp, obTarget)
#TestDisplacement(disp, scale, obTarget)
```

Thanks. I wasn’t sure whether to post here or on Stack Exchange, so I’ll start with this.


It seems to be scaled by the object’s bounding box, not linearly as I had assumed, but maybe logarithmically?

I did some poking around in the code for the Displacement modifier: it is actually doing what I thought, so I was wrong in the last comment. The problem is with color space. Using a Gamma node in the shader seems to fix the issue, though I have no idea whether this is a hack or not! Right now I’m using a magic number, gamma = 0.882, but I’m just eyeballing it.

Does anyone know how to correct for color space?
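For what it’s worth, if the conversion really is the standard sRGB transfer function rather than a simple gamma curve, the exact inverse is known, so the magic-number Gamma node could in principle be replaced by encoding the data with the sRGB inverse. A sketch of the pair (whether Cycles applies this particular decode is still an assumption):

```python
def linear_to_srgb(c):
    """Exact sRGB encode: what to store so an sRGB decode recovers c."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(c):
    """Exact sRGB decode (what Cycles may be applying to vertex colors)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# encoding then decoding is the identity, so no eyeballed gamma is needed
for c in (0.001, 0.3, 0.7, 1.0):
    assert abs(srgb_to_linear(linear_to_srgb(c)) - c) < 1e-9
```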

The following may be of some use:

https://docs.blender.org/manual/en/latest/render/post_process/color_management.html

Thanks! I’ve actually decided to try to write a patch for Blender to fix this behavior, but I’m a long way from being able to do that. I need to brush up on C a bit and learn a lot more about Git, CMake, etc. I’d also like to make Cycles able to bake to vertex colors, and then my little script won’t be quite as useful!



From a user perspective, that is a very nice idea for a modifier, or in the future a node-based modifier, assuming no UV unwrapping is needed.