Help: Information on Cycles Bake code

Hey everybody, I just built Blender for the first time and I’m looking for a first project. I’d like to work on adding support for baking to vertex colors in Cycles, since this was possible with Blender Render in 2.79 but is no longer possible in 2.8.

I’m having trouble finding information on the Cycles bake code. I’ve looked through the information on the project page and can’t find anything about baking. I’ve been trying to figure out bake.h and bake.cpp, and I’m not sure what I’m looking at.

If I understand correctly, the code sets up the bake operation by figuring out which pixel (in the output image) belongs to each triangle of the object. It looks like Blender and Cycles talk back and forth at this point: Blender sets the bake data and the properties of the Bake Manager in blender_session.cpp. Some information is saved about the surface of the object at that point on its surface (I think it’s the primitive and the UVs, and BakeData.differentials is the tangent and bitangent of that point? I’m not formally educated in higher math, but eager to learn more). Then it runs a rendering task for each pixel, for the number of samples that Blender requests. This is saved to d_output, which is then copied into result, which looks like an array of floats that are each one channel of a pixel (so the data looks like r,g,b,a,r,g,b,a,r,g,b,a in the result).

What confuses me is that I don’t see anything using result. It isn’t a pointer or a reference, so it’s being passed by value, isn’t it? And I don’t see it being returned or referenced anywhere… so how does the result get out of the function? How does Blender get hold of it?

What I want to do is bake to vertex colors, so I think it should be possible to bake only the pixels at the corners of each triangle and use the result as a vertex color for the associated vertex-face-corner (a loop, as it’s called in the mesh Python API).

Am I far off the mark in my understanding of this piece of code? Any advice on finding more info? And most importantly, is this too hard for a first project?

For vertex color baking, the main work would be on the Blender side, in object_bake.c and object_bake_api.c.

You’re right about the approach to doing this. Rather than setting the face index + barycentric uv coordinates corresponding to a pixel, you’d set them for a vertex. So that means a face index + barycentric uv corresponding to the corner of a face where the vertex is.

Regarding the Cycles side, BakeManager::bake has a result parameter, that’s where the output gets written to. But I think vertex color baking may not require any code changes in Cycles at all.


Thanks for the reply! I’ve been looking through the files you mentioned, and I think it will take me a few days to read them and still a few more to understand.

I was confused by result because I thought it was being passed by value, but it looks like Blender is giving it a pointer, so I suppose Cycles is writing the pixels directly into that memory? I think I can figure it out, but it looks like I probably don’t have to in order to do what I want. It will take a while to learn, though.

Thanks again.


So I’ve done some poking around, and I think I can add vertex baking without modifying Cycles at all, as @brecht said. In fact, I think I only need to touch two functions in two source files.

This is my reading of the code; please correct me if I have it wrong somehow:
Cycles receives an empty list of pixels and a list of per-pixel data for rendering. It renders the pixel data and puts the resulting pixels in the pixel array. At this point, the pixel array is not an image; it’s just a list of pixels that can be made into an image. Blender then receives the pixel array, puts the pixels into an image, and does the cleanup work needed after the bake is finished.

The crucial part of the code for vertex baking is in /source/blender/editors/object/object_bake_api.c, specifically the function aptly named bake. First, the size of the pixel array is set by num_pixels. So I would have to add a bit of code to set the number of pixels to the number of face-corners (or loops, as I think they’re called) if the bake is going to vertex colors. Once the number of pixels in the array equals the number of loops, each BakePixel must have its data set up. So I would have to modify RE_bake_pixels_populate() (in source/blender/render/intern/source/bake_api.c), or write a new, similar function that creates bake data for the loops directly. This should be easy to do, since there wouldn’t have to be any interpolation across the face to find a pixel-data value at a specific UV coordinate. Perhaps I can even re-use the existing code to do this.

Finally, I will need to modify the image-saving section around line 1130 in object_bake_api.c to save the pixel for each loop into the vertex color for that loop. This doesn’t seem like it should be very tricky.

Note that none of this takes the high-poly/low-poly workflow into account just yet; I want to solve the easier problem first. A couple of other difficulties I foresee: pixel data may not be normalized, and I think vertex colors expect normalized data. Also, vertex colors currently have color space hard-coded into the material node that accesses them. This is not ideal for baking data passes like normals. That’s a separate issue that I’d really like to fix; I think I can talk to OmarSquircleArt about that, since he’s the one that made the new vertex color node (an excellent addition). Finally, it’s possible that loops sharing a vertex can have different RGB values from the bake, much like when neighboring pixels in an ordinary bake have different values due to noise or fireflies. This will need to be addressed, since it will look really ugly in vertex colors. I think I can average the pixel value for each loop that shares a vertex. Either that, or I can bake one pixel per vertex and find a way to associate it with the loops later on.

Does this sound like a reasonable way to tackle the problem?


Can someone point me to the code that Blender Internal used for this? I’ve checked out the code for one of the older versions (Blender 2.79), but I can’t get it to compile. And there doesn’t seem to be anything about vertex color baking in object_bake_api.c even in the older source. Nothing in \source\blender\render\intern\source\bake_api.c or multires_bake.c either, that I can find.

I’m going to keep looking, but I’d appreciate any input.


I’ve been able to find the baking code in 2.79. It wasn’t hard to find; I just needed help building the source. Thanks, LazyDodo!

Anyway, it looks like Blender Internal baked the maps face by face (which is what I originally thought Cycles did; I suppose I must have been remembering BI), by passing around a BakeShade that gets its mpoly updated in get_next_bake_face. The mpoly, of course, holds a reference to the loops, which Blender uses to find the vertex color data associated with them. Then the render engine bakes directly into the vertex colors in bake_shade.

I don’t think I will try to re-implement this sequence of events in Cycles. It seems much better to me to follow the plan of action I outlined earlier.

pixel data may not be normalized and I think vertex colors expect normalized data. Also, vertex colors currently have color space hard-coded into the material node that accesses them

Blender 2.79 handled this with linearrgb_to_srgb_v3_v3(rgb, rgb);, where rgb is the result of the bake for that loop.

Finally, it’s possible that loops sharing a vertex can have different RGB values from the bake- much like when neighboring pixels in an ordinary bake have different values due to noise or fireflies. This will need to be addressed- since it will look really ugly in vertex colors.

I don’t think the old code made any attempt to avoid this. But it does do this right here:

	/* shrink barycentric coordinates inwards slightly to avoid some issues
	 * where baking selected to active might just miss the other face at the
	 * near the edge of a face */
	if (bs->actob) {
		const float eps = 1.0f - 1e-4f;
		float invsum;

		u = (u - 0.5f) * eps + 0.5f;
		v = (v - 0.5f) * eps + 0.5f;
		l = (l - 0.5f) * eps + 0.5f;

		invsum = 1.0f / (u + v + l);

		u *= invsum;
		v *= invsum;
		l *= invsum;
	}

I think what’s going on here is that the shading point is moved slightly inward, toward the interior of the face, so that it isn’t exactly on the border anymore. I might be able to reuse some of this code directly.

Finally, I found that Blender Internal tracked its baking behaviour with scene->r.bake_flag, which uses 2^6 (i.e. 64) for RE_BAKE_VCOL. This bit is still reserved in 2.8x, but is currently commented out and marked as deprecated.

I’d like to bring it back, but I haven’t got the faintest clue how to put the checkbox back in the UI and have it affect scene->r.bake_flag. For now, I’m going to hard-code the value and figure out how to do it right once I’ve got baking working.

I’m writing all of this here, by the way, for two reasons: to keep myself from forgetting, and to help anyone else that’s trying to tackle this (or a similar) problem.

But I’m also looking for feedback and help!

Hi @Josephbburg,

I took Cycles from the latest Blender and am trying to use it to bake textures in my application. Do you suggest I use Cycles from 2.7 instead?

No, absolutely not! The stuff in this thread about 2.79 is all Blender Internal. The baking code is a lot cleaner now, from what I understand. I only pointed you to this thread because I explained how some of the code works (initially so that I would remember it later, but maybe it will be helpful to you).

Thank you @Josephbburg,
Did you manage to crack it in the end? I have been working on this for a long time with no luck, and under tremendous pressure. :frowning:

I’m not trying to solve the same problem as you, but it’s not the difficulty of the problem that has slowed me down; it’s Blender’s huge code-base, which I didn’t know how to begin learning. The baking code itself is relatively straightforward:

First, Blender generates a list of empty pixels and a list of data corresponding to each pixel in the list (UVs, normals/partial derivatives, etc.). Then it passes the pixels with their corresponding data to Cycles, which renders them into the image. Finally, the image (in the form of an array of pixels, now holding color information instead of being empty) is saved to the destination.

So baking is handled entirely by Blender. All Cycles does is take the information required for rendering and render the data. I think there is a special render function for baking in Cycles, but it’s essentially the same as an ordinary render.

Ok. I took the relevant code from Blender, as below. I’m still getting an exception; I’ll troubleshoot and see.

inline int max_ii(int a, int b)
{
  return (b < a) ? a : b;
}
inline float min_ff(float a, float b)
{
  return (a < b) ? a : b;
}
inline int min_ii(int a, int b)
{
  return (a < b) ? a : b;
}
inline float max_ff(float a, float b)
{
  return (a > b) ? a : b;
}
static void bake_differentials(BakeDataZSpan *bd,
                               const float *uv1,
                               const float *uv2,
                               const float *uv3)
{
  float A;

  /* assumes dPdu = P1 - P3 and dPdv = P2 - P3 */
  A = (uv2[0] - uv1[0]) * (uv3[1] - uv1[1]) - (uv3[0] - uv1[0]) * (uv2[1] - uv1[1]);

  if (fabsf(A) > FLT_EPSILON) {
    A = 0.5f / A;

    bd->du_dx = (uv2[1] - uv3[1]) * A;
    bd->dv_dx = (uv3[1] - uv1[1]) * A;

    bd->du_dy = (uv3[0] - uv2[0]) * A;
    bd->dv_dy = (uv1[0] - uv3[0]) * A;
  }
  else {
    bd->du_dx = bd->du_dy = 0.0f;
    bd->dv_dx = bd->dv_dy = 0.0f;
  }
}

static void zbuf_add_to_span(ZSpan *zspan, const float v1[2], const float v2[2])
{
  const float *minv, *maxv;
  float *span;
  float xx1, dx0, xs0;
  int y, my0, my2;

  if (v1[1] < v2[1]) {
    minv = v1;
    maxv = v2;
  }
  else {
    minv = v2;
    maxv = v1;
  }

  my0 = ceil(minv[1]);
  my2 = floor(maxv[1]);

  if (my2 < 0 || my0 >= zspan->recty) {
    return;
  }

  /* clip top */
  if (my2 >= zspan->recty) {
    my2 = zspan->recty - 1;
  }
  /* clip bottom */
  if (my0 < 0) {
    my0 = 0;
  }

  if (my0 > my2) {
    return;
  }
  /* if (my0 > my2) should still fill in, that way we get spans that skip nicely */

  xx1 = maxv[1] - minv[1];
  if (xx1 > FLT_EPSILON) {
    dx0 = (minv[0] - maxv[0]) / xx1;
    xs0 = dx0 * (minv[1] - my2) + minv[0];
  }
  else {
    dx0 = 0.0f;
    xs0 = min_ff(minv[0], maxv[0]);
  }

  /* empty span */
  if (zspan->maxp1 == NULL) {
    span = zspan->span1;
  }
  else { /* does it complete left span? */
    if (maxv == zspan->minp1 || minv == zspan->maxp1) {
      span = zspan->span1;
    }
    else {
      span = zspan->span2;
    }
  }

  if (span == zspan->span1) {
    //      printf("left span my0 %d my2 %d\n", my0, my2);
    if (zspan->minp1 == NULL || zspan->minp1[1] > minv[1]) {
      zspan->minp1 = minv;
    }
    if (zspan->maxp1 == NULL || zspan->maxp1[1] < maxv[1]) {
      zspan->maxp1 = maxv;
    }
    if (my0 < zspan->miny1) {
      zspan->miny1 = my0;
    }
    if (my2 > zspan->maxy1) {
      zspan->maxy1 = my2;
    }
  }
  else {
    //      printf("right span my0 %d my2 %d\n", my0, my2);
    if (zspan->minp2 == NULL || zspan->minp2[1] > minv[1]) {
      zspan->minp2 = minv;
    }
    if (zspan->maxp2 == NULL || zspan->maxp2[1] < maxv[1]) {
      zspan->maxp2 = maxv;
    }
    if (my0 < zspan->miny2) {
      zspan->miny2 = my0;
    }
    if (my2 > zspan->maxy2) {
      zspan->maxy2 = my2;
    }
  }

  for (y = my2; y >= my0; y--, xs0 += dx0) {
    /* xs0 is the xcoord! */
    span[y] = xs0;
  }
}

/* reset range for clipping */
static void zbuf_init_span(ZSpan *zspan)
{
  zspan->miny1 = zspan->miny2 = zspan->recty + 1;
  zspan->maxy1 = zspan->maxy2 = -1;
  zspan->minp1 = zspan->maxp1 = zspan->minp2 = zspan->maxp2 = NULL;
}

/* Scanconvert for strand triangles, calls func for each x, y coordinate
 * and gives UV barycentrics and z. */

void zspan_scanconvert(ZSpan *zspan,
                       BakeDataZSpan *handle,
                       float *v1,
                       float *v2,
                       float *v3,
                       void (*func)(BakeDataZSpan *, int, int, float, float))
{
  float x0, y0, x1, y1, x2, y2, z0, z1, z2;
  float u, v, uxd, uyd, vxd, vyd, uy0, vy0, xx1;
  const float *span1, *span2;
  int i, j, x, y, sn1, sn2, rectx = zspan->rectx, my0, my2;

  /* init */
  zbuf_init_span(zspan);

  /* set spans */
  zbuf_add_to_span(zspan, v1, v2);
  zbuf_add_to_span(zspan, v2, v3);
  zbuf_add_to_span(zspan, v3, v1);

  /* clipped */
  if (zspan->minp2 == NULL || zspan->maxp2 == NULL) {
    return;
  }

  my0 = max_ii(zspan->miny1, zspan->miny2);
  my2 = min_ii(zspan->maxy1, zspan->maxy2);

  //  printf("my %d %d\n", my0, my2);
  if (my2 < my0) {
    return;
  }

  /* ZBUF DX DY, in floats still */
  x1 = v1[0] - v2[0];
  x2 = v2[0] - v3[0];
  y1 = v1[1] - v2[1];
  y2 = v2[1] - v3[1];

  z1 = 1.0f; /* (u1 - u2) */
  z2 = 0.0f; /* (u2 - u3) */

  x0 = y1 * z2 - z1 * y2;
  y0 = z1 * x2 - x1 * z2;
  z0 = x1 * y2 - y1 * x2;

  if (z0 == 0.0f) {
    return;
  }

  xx1 = (x0 * v1[0] + y0 * v1[1]) / z0 + 1.0f;
  uxd = -(double)x0 / (double)z0;
  uyd = -(double)y0 / (double)z0;
  uy0 = ((double)my2) * uyd + (double)xx1;

  z1 = -1.0f; /* (v1 - v2) */
  z2 = 1.0f;  /* (v2 - v3) */

  x0 = y1 * z2 - z1 * y2;
  y0 = z1 * x2 - x1 * z2;

  xx1 = (x0 * v1[0] + y0 * v1[1]) / z0;
  vxd = -(double)x0 / (double)z0;
  vyd = -(double)y0 / (double)z0;
  vy0 = ((double)my2) * vyd + (double)xx1;

  /* correct span */
  span1 = zspan->span1 + my2;
  span2 = zspan->span2 + my2;

  for (i = 0, y = my2; y >= my0; i++, y--, span1--, span2--) {

    sn1 = floor(min_ff(*span1, *span2));
    sn2 = floor(max_ff(*span1, *span2));

    if (sn2 >= rectx) {
      sn2 = rectx - 1;
    }
    if (sn1 < 0) {
      sn1 = 0;
    }

    u = (((double)sn1 * uxd) + uy0) - (i * uyd);
    v = (((double)sn1 * vxd) + vy0) - (i * vyd);

    for (j = 0, x = sn1; x <= sn2; j++, x++) {
      func(handle, x, y, u + (j * uxd), v + (j * vxd));
    }
  }
}

inline void copy_v2_fl2(float v[2], float x, float y)
{
  v[0] = x;
  v[1] = y;
}

static void store_bake_pixel(BakeDataZSpan *handle, int x, int y, float u, float v)
{
  BakeDataZSpan *bd = (BakeDataZSpan *)handle;
  BakePixel *pixel;

  const int width = bd->bk_image->width;
  const size_t offset = bd->bk_image->offset;
  const int i = offset + y * width + x;

  pixel = &bd->pixel_array[i];
  pixel->primitive_id = bd->primitive_id;

  /* At this point object_id is always 0, since this function runs for the
   * low-poly mesh only. The object_id lookup indices are set afterwards. */

  copy_v2_fl2(pixel->uv, u, v);

  pixel->du_dx = bd->du_dx;
  pixel->du_dy = bd->du_dy;
  pixel->dv_dx = bd->dv_dx;
  pixel->dv_dy = bd->dv_dy;
  pixel->object_id = 0;
}
/* each zbuffer has coordinates transformed to local rect coordinates, so we can simply clip */
void zbuf_alloc_span(ZSpan *zspan, int rectx, int recty)
{
  memset(zspan, 0, sizeof(ZSpan));

  zspan->rectx = rectx;
  zspan->recty = recty;

  zspan->span1 = (float *)malloc(recty * sizeof(float));
  zspan->span2 = (float *)malloc(recty * sizeof(float));
}

void populate_bake_data(
    const Mesh &mesh, size_t uv_map_index, size_t image_width, size_t image_height, BakeData *data)
{
  size_t num_pixels = image_width * image_height;
  BakePixel *pixel_array = (BakePixel *)malloc(sizeof(BakePixel) * num_pixels);

  BakeDataZSpan *bd = new BakeDataZSpan();
  bd->bk_image = new BakeImage();
  bd->bk_image->width = image_width;
  bd->bk_image->height = image_height;
  bd->bk_image->offset = 0;
  bd->pixel_array = pixel_array;
  bd->zspan = new ZSpan();

  /* initialize all pixels so we know which ones are 'blank' */
  for (size_t i = 0; i < num_pixels; i++) {
    pixel_array[i].primitive_id = -1;
    pixel_array[i].object_id = 0;
  }
  zbuf_alloc_span(bd->zspan, image_width, image_height);

  Attribute *attributes = mesh.attributes.find(AttributeStandard::ATTR_STD_UV);
  float2 *fdata = attributes[uv_map_index].data_float2();
  size_t triangles_count = mesh.num_triangles();
  for (size_t i = 0; i < triangles_count; i++) {
    bd->primitive_id = i;
    float vec[3][2];

    Mesh::Triangle triangle = mesh.get_triangle(i);
    for (size_t j = 0; j < 3; j++) {
      float2 uv = fdata[triangle.v[j]];
      vec[j][0] = uv[0] * (float)image_width - (0.5f + 0.001f);
      vec[j][1] = uv[1] * (float)image_height - (0.5f + 0.002f);
    }

    bake_differentials(bd, vec[0], vec[1], vec[2]);
    zspan_scanconvert(bd->zspan, bd, vec[0], vec[1], vec[2], store_bake_pixel);
  }

  BakePixel *bp = pixel_array;
  for (size_t i = 0; i < num_pixels; i++) {
    data->set(i, bp->primitive_id, bp->uv, bp->du_dx, bp->du_dy, bp->dv_dx, bp->dv_dy);
    bp++;
  }
}

Hi @Josephbburg,
I remember you mentioned in the other post that you could not understand how to save the “result” as an image. Please excuse me if I was mistaken. This is how we do it:

std::string file_name = output_folder + "\\" + std::to_string(object_index) + ".png";
ImageOutput *out = ImageOutput::create(file_name);
ImageSpec spec(buffer_params.width, buffer_params.height, 4, TypeDesc::FLOAT);
out->open(file_name, spec);
out->write_image(TypeDesc::FLOAT, result);
out->close();

I did eventually figure it out, I wasn’t thinking in pointers at the time, being used to Python. I even got it to write to vertex colors instead, but I never found the time to write the code to send the right information through to get the actually correct colors.

Quick question, just out of curiosity: could another approach to implementing baking be a special camera that defines the rendering space via the UVs of a mesh and shoots out the samples from there instead? Kind of like how the camera can change to panorama/fisheye. I might look into that soon; it would be much like the Kettle bake shader for an old version of Arnold.