Blender 2.8 crashes: kernel_compat_cpu.h - struct texture - int width - 32-bit overflow

Hi,

I have a problem with the limited size of the BVH (root, leaf, …) and of the texture (__bvh_nodes) caused by the use of int32 values. (I tried to render a Spring scene; the width of __bvh_nodes has to be int64/size_t.)

Thanks.
Milan

That would mean you have more than 64GB of BVH nodes alone; are you sure that’s right? Spring scenes should be much smaller.

We could increase the size of width, but it wouldn’t solve the issue. The BVH nodes themselves contain 32-bit indices of child nodes, and traversal uses 32-bit ints; it’s not trivial to change this without significantly affecting performance and memory usage.

Could you look at kernel.cpp and kernel_tex_copy? There is “kg->tname.width = size”, where size is int64 and width is int32. (__bvh_nodes, 2,243,277,824)

The limit of int texture::width is 2,147,483,647.
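For context, the struct in question looks roughly like this (a simplified sketch, not the verbatim kernel_compat_cpu.h source):

#include <cassert>

/* Both the fetch index and the width are plain 32-bit ints, so any
 * array with more than 2,147,483,647 elements overflows. */
template<typename T> struct texture {
	const T& fetch(int index)
	{
		assert(index >= 0 && index < width);
		return data[index];
	}
	T *data;
	int width;
};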

I understand that, but increasing that size is pointless if we don’t make lots of changes elsewhere to ensure the indices are 64-bit too.

If a Spring scene uses that much memory for BVH nodes alone, then there is no way it is going to fit in memory as a whole. So that means the scene should either be made smaller, or there is a bug somewhere that is leading to invalid sizes.

I used BVH8; I will try BVH4.

Hi Brecht,

I tried BVH4, and the size of bvh_nodes is 1.7G.

I see there is a problem with int64. What about changing the type of width from int to uint? I think that is more realistic.

Thanks.
Milan

It could be changed to unsigned int, but again it needs to be checked whether the indices handle it correctly as well.

If the size of bvh_nodes is 1.7G, that’s nowhere near overflow, since each element in the array is 16 bytes. So it would need to be 16x bigger to get near overflow.
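As a back-of-the-envelope check (a standalone sketch, assuming the 16-byte element size mentioned above):

#include <cstdint>
#include <cstdio>

int main()
{
	const uint64_t max_width = INT32_MAX;  /* 2,147,483,647 elements */
	const uint64_t element_size = 16;      /* bytes per BVH node element */
	/* int width only overflows past ~32 GiB of node data. */
	std::printf("overflow past %llu bytes\n",
	            (unsigned long long)(max_width * element_size));
	return 0;
}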

Hi Brecht,

I think there is a problem with BVHParams:

/* fixed parameters */
enum {
	MAX_DEPTH = 64,
	MAX_SPATIAL_DEPTH = 48,
	NUM_SPATIAL_BINS = 32
};

It has to be different for BVH2, BVH4 and BVH8. When I changed MAX_DEPTH to 32 for BVH8, there was no overflow in texture<__bvh_nodes>::width.

The second option is to change the leaf-size parameters (larger leaves mean fewer BVH nodes):
min_leaf_size = 1;
max_triangle_leaf_size = 8;
max_motion_triangle_leaf_size = 8;
max_curve_leaf_size = 1;
max_motion_curve_leaf_size = 4;

The rendering time is unchanged. :)

There is a difference between the GCC and ICC compilers: ICC crashes on the overflow, but GCC continues with bad values.

Thanks.
Milan

This would be working around a bug. There is no fundamental reason why MAX_DEPTH must be lower for BVH8, though typical BVH depths would be lower.

I suspect the BVH_OSTACK_SIZE is wrong and it overflows there, and reducing the max depth avoids that stack overflow. There are asserts in the kernel to test for stack overflow, but I guess you are compiling without debug and so the overflow does not cause an error?

Doubling BVH_OSTACK_SIZE would be a better solution in that case, though we should really be able to figure out the exact stack size needed.
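One way to reason about it (a back-of-the-envelope bound, not the exact sizing Cycles uses): at each level of an n-wide BVH, traversal descends into one child and pushes at most n - 1 siblings.

/* Upper bound on pushed stack entries for an n-wide BVH of the given
 * depth; real traversal rarely gets close to this. */
constexpr int bvh_worst_case_stack(int branching_factor, int max_depth)
{
	return (branching_factor - 1) * max_depth;
}

static_assert(bvh_worst_case_stack(8, 64) == 448, "BVH8, MAX_DEPTH = 64");
static_assert(bvh_worst_case_stack(8, 32) == 224, "BVH8, MAX_DEPTH = 32");

That would also explain why lowering MAX_DEPTH made the problem disappear: the worst-case bound drops below the allocated stack size.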

Hi Brecht,

There is no assert in the OBVH traversal code.

Could you add “assert(size<INT32_MAX);” to kernel.cpp (master, b2.8)?

else if(strcmp(name, #tname) == 0) { \
	kg->tname.data = (type*)mem; \
	kg->tname.width = size; \
	assert(size < INT32_MAX); /* <- the proposed assert */ \
}

What do you think about adding the min/max leaf sizes from BVHParams to the debug panel?

Thanks.
Milan

I can add that assert; if we ever run into a case where this is an issue, it can be helpful.

I don’t want to add BVH parameter tweaking as a workaround for a bug, though; we should really fix the actual problem. Otherwise we would likely be fixing one scene while breaking another.

If you have a scene and steps to reproduce the problem, we can investigate, or you can dive more deeply into the code and try to understand the cause, … but this random parameter tweaking is not the right way to solve problems.

I found a bug of mine that was one year old: I set the memory size as the texture size (it has to be the data size). But I still have some scientific data that needs more than 64GB.
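In other words, the width must be an element count, not a byte count; with 16-byte elements, 2,243,277,824 bytes is only 140,204,864 elements. A minimal sketch (set_tex_width is a hypothetical stand-in, not the real Cycles API) of how the proposed assert would have caught it:

#include <cassert>
#include <cstddef>
#include <cstdint>

/* Hypothetical stand-in for the real copy function. */
static void set_tex_width(int &width, size_t size)
{
	assert(size < INT32_MAX);  /* the proposed assert catches the mix-up */
	width = (int)size;
}

int main()
{
	int width = 0;
	const size_t num_nodes = 140204864;       /* element count */
	const size_t num_bytes = num_nodes * 16;  /* 2,243,277,824 > INT32_MAX */

	set_tex_width(width, num_nodes);  /* right: width is an element count */
	set_tex_width(width, num_bytes);  /* wrong: byte count, assert fires  */
	return 0;
}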

Now I am using min_leaf_size > 1.