Sorcar: BLI_Mempool Chunk memory increases without limit

Hi guys.
I'm trying to debug the Sorcar addon, and I've noticed that when using the Custom Object node, if you use any node behind it and change values, the memory grows without limit.

How can I find out what in particular is making the memory go up?

Is it possible to clear the BLI_Mempool Chunk memory from Python?

  • What does the custom object node do?
  • How do you know this is related to BLI_mempool?

You could start Blender with the --debug option. Maybe you can see some unfreed memory blocks when you close Blender.
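For reference, assuming a standard Blender install on the PATH, the invocations look like this (check `blender --help` for the flags available in your build):

```shell
# General debug switch: enables extra logging
blender --debug

# Memory debugging: on exit, any unfreed memory blocks
# are printed to the terminal
blender --debug-memory
```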

The CustomObject node constantly duplicates the original object and then applies the rest of the node tree to the new copy.

I think it is bpy.ops.object.duplicate() itself that increases the memory without limit, as it is called multiple times. Even if you later delete the copies and the orphan data, running it repeatedly still seems to increase the memory.
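This "freed but not returned" behaviour is typical of pooled allocators in general. Here is a toy model (my own sketch, not Blender's actual BLI_mempool code) of why a pool can hold on to memory even after every element is deleted: the pool grows in fixed-size chunks and keeps them around for reuse instead of handing them back to the OS.

```python
class ToyMempool:
    """Toy chunk-based pool; CHUNK_SIZE is a made-up value."""
    CHUNK_SIZE = 8  # elements per chunk

    def __init__(self):
        self.chunks = 0      # chunks ever taken from the OS (never shrinks)
        self.free_slots = 0  # freed slots available for reuse
        self.used = 0        # elements currently live

    def alloc(self):
        if self.free_slots == 0:
            self.chunks += 1                # grab a whole new chunk
            self.free_slots = self.CHUNK_SIZE
        self.free_slots -= 1
        self.used += 1

    def free(self):
        self.used -= 1
        self.free_slots += 1                # slot recycled, chunk is kept

pool = ToyMempool()
for _ in range(100):   # allocate 100 elements (like 100 duplicates)
    pool.alloc()
for _ in range(100):   # free them all again
    pool.free()

print(pool.used)    # 0  -> every element was freed
print(pool.chunks)  # 13 -> but all the chunks are still held by the pool
```

So a rising "BLI_Mempool Chunk" figure does not necessarily mean a leak in the strict sense; it can also be peak usage that the pool never gives back while the mesh data it serves still exists.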

I know it was the BLI_Mempool chunks that grew because I ran Blender with --debug-memory; after the memory increased, I typed "Memory Statistics" into the F3 search and saw that the entry that grew the most was BLI_Mempool Chunk.

I created this little script to test it (reconstructed here so it runs; the original fragment was incomplete), and it does indeed increase memory every time you run it:

import bpy

def duplicate_test_memory(repeat):
    for b in range(repeat):
        bpy.ops.object.duplicate(linked=False, mode='TRANSLATION')
    # Remove any orphaned mesh datablocks left behind
    for mesh in bpy.data.meshes:
        if mesh.users < 1:
            bpy.data.meshes.remove(mesh)

repeat = 100

ob = bpy.context.active_object
if not ob:
    bpy.ops.mesh.primitive_cube_add(size=2, enter_editmode=False, location=(0, 0, 0))
    ob = bpy.context.active_object

duplicate_test_memory(repeat)

Although with this test it now seems to be the ReportMessage and Report memory that increases… :S

I made this little video to show how to reproduce the problem in Sorcar:

When I close Blender I get this:

… more …
BLI_Mempool Chunk len: 32712 0x7fb2e4ed5838
CustomData->layers len: 520 0x7fb2df5f46b8
CustomData->layers len: 520 0x7fb2e630cce8
memory pool len: 48 0x7fb2e6303d68
BLI_Mempool Chunk len: 8184 0x7fb2e4edda38
memory pool len: 48 0x7fb2e630bda8
BLI_Mempool Chunk len: 32760 0x7fb2e4edfc38
editmesh_tessface_calc_intern len: 288 0x7fb2df5f5df8
BM_mesh_create len: 1048 0x7fb2d84cca38
memory pool len: 48 0x7fb2dffbfae8
BLI_Mempool Chunk len: 32712 0x7fb2e5f0e038
memory pool len: 48 0x7fb2dffb8658
BLI_Mempool Chunk len: 131048 0x153e13038
memory pool len: 48 0x7fb2dffd82f8
BLI_Mempool Chunk len: 262088 0x153e34038
memory pool len: 48 0x7fb2dff9e728
BLI_Mempool Chunk len: 32712 0x7fb2e5f16238
CustomData->layers len: 520 0x7fb2dffd7898
CustomData->layers len: 520 0x7fb2dffbd008
memory pool len: 48 0x7fb2dff9aa48
BLI_Mempool Chunk len: 8184 0x7fb2e5dd8238
memory pool len: 48 0x7fb2dff9ed28
BLI_Mempool Chunk len: 32760 0x7fb2e5f1e438
editmesh_tessface_calc_intern len: 288 0x7fb2dffbdc38

The memory growth has been reduced; it still increases, but much less than before, with this:
and this:

The idea is to make a backup of the original only if one doesn't already exist. Then we work on the original without fear, since that's what the backup is for. This way we avoid constantly duplicating: the object is only duplicated if there is no previous backup.
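The "back up only once" pattern could be sketched like this in plain Python (a toy model; the names and the dict registry are mine, not Sorcar's or bpy's API):

```python
backups = {}  # object name -> backed-up data (hypothetical registry)

def ensure_backup(name, data):
    # Duplicate only on the first run; later runs reuse the stored copy.
    if name not in backups:
        backups[name] = list(data)  # the single duplication
    return backups[name]

original = [1, 2, 3]
first = ensure_backup("Cube", original)
original.append(4)                        # work destructively on the original
second = ensure_backup("Cube", original)  # no new copy is made

print(first is second)  # True: both runs share the one backup
print(second)           # [1, 2, 3]: the pristine data is preserved
```

The point is that the expensive duplication happens at most once per object, instead of on every node-tree evaluation.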

This was just an illusion. It can't be done this way, because if you never delete the original object you can never start over from the beginning of the node tree :frowning: But at least I think the cause of the memory growth is the massive duplication and removal of objects throughout the working process.