I’m using Cycles4D, and I’ve found a very slow part: when the material preview needs refreshing, it sends the material (which may contain ten 4K texture images), and although the material preview scene is quite small (perhaps a 256x256 output rendered on a single sphere), the render is slow here because of image texture loading.
This part loads the 4K image, then scales it down to the required size (for example 256x256):
I suggest another approach, using OIIO ImageBuf and ImageBufAlgo::resample, or, if there is a way, loading the smaller image without reading all the pixels.
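For reference, ImageBufAlgo::resample does a cheap point-sampled resize rather than a filtered one; here is the core idea sketched on a raw float buffer, with no OIIO dependency (function and parameter names are mine, not from either codebase):

```cpp
#include <cstddef>
#include <vector>

// Point-sampled (nearest-neighbor) downscale of an interleaved float
// image: no filtering, just pick the nearest source pixel for each
// destination pixel. This is the cheap strategy resample uses.
std::vector<float> resample_nearest(const std::vector<float> &src,
                                    int src_w, int src_h,
                                    int dst_w, int dst_h, int channels)
{
  std::vector<float> dst(static_cast<size_t>(dst_w) * dst_h * channels);
  for (int y = 0; y < dst_h; ++y) {
    int sy = y * src_h / dst_h;  // nearest source row
    for (int x = 0; x < dst_w; ++x) {
      int sx = x * src_w / dst_w;  // nearest source column
      for (int c = 0; c < channels; ++c) {
        dst[(static_cast<size_t>(y) * dst_w + x) * channels + c] =
            src[(static_cast<size_t>(sy) * src_w + sx) * channels + c];
      }
    }
  }
  return dst;
}
```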
Did you profile what the slowest part is, reading the image or scaling it down? We could add multithreading for scaling, and maybe ImageBufAlgo rescaling does that already. But I’m not sure that’s where the bottleneck is; loading is probably just as slow. Each image should also already be loaded and scaled in its own thread.
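If profiling did show the scaling itself was the cost, parallelizing it is straightforward since each destination row is independent; a minimal sketch splitting rows across std::thread workers (helper name is mine, not Cycles code):

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

// Run fn(row_begin, row_end) on contiguous row chunks, one chunk per
// thread. An image downscale would pass a lambda that fills those
// destination rows; rows don't overlap, so no locking is needed.
void parallel_rows(int rows, const std::function<void(int, int)> &fn)
{
  int nthreads = static_cast<int>(
      std::max(1u, std::thread::hardware_concurrency()));
  nthreads = std::min(nthreads, rows);
  int chunk = (rows + nthreads - 1) / nthreads;
  std::vector<std::thread> workers;
  for (int begin = 0; begin < rows; begin += chunk) {
    int end = std::min(begin + chunk, rows);
    workers.emplace_back(fn, begin, end);
  }
  for (std::thread &t : workers)
    t.join();
}
```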
If the image is a .tx file we could try loading the appropriate mipmap level, that’s probably a pretty simple change.
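Picking the right level from a .tx file is just arithmetic on the size ratio; a sketch of the selection (helper name is my own):

```cpp
// Choose the mipmap level whose resolution is the smallest one still
// >= the target size. Level 0 is the full image; each level halves it.
int choose_mip_level(int full_size, int target_size)
{
  int level = 0;
  while ((full_size >> (level + 1)) >= target_size)
    ++level;
  return level;
}
```

For a 4096-wide texture previewed at 256, this picks level 4 (4096 / 2^4 = 256), so only a 256x256 mip would need to be read from disk.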
However, these wouldn’t normally exist, so we’d need to add a workflow to generate .tx files alongside the original images so they can be loaded faster on subsequent renders. Ideally we’d have such a workflow along with a texture cache that can load just the tiles and resolutions needed. Stefan was doing work on the texture cache here (CPU only, though):
I can’t think of any easier solutions unless it turns out there is an unexpected bottleneck in the code…
I didn’t profile it, but I did a quick test, replacing the manual scaling with ImageBufAlgo::resample. It was slower than the manual scaling (probably because I allocated one more buffer), so I expect the time lost in memory allocations is far more significant than image reading/scaling. A memory pool would probably solve a large part of this.
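A minimal shape for such a pool, reusing scratch buffers across preview renders instead of reallocating each time (a sketch under my own naming, not Cycles code):

```cpp
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

// Trivial buffer pool: released buffers keep their allocation and are
// handed back out on the next acquire, so repeated preview renders of
// similar-sized textures stop hitting the allocator.
class BufferPool {
 public:
  std::vector<float> acquire(size_t size)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    if (!free_.empty()) {
      std::vector<float> buf = std::move(free_.back());
      free_.pop_back();
      buf.resize(size);  // reuses capacity when the old buffer is big enough
      return buf;
    }
    return std::vector<float>(size);
  }

  void release(std::vector<float> buf)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    free_.push_back(std::move(buf));
  }

 private:
  std::mutex mutex_;
  std::vector<std::vector<float>> free_;
};
```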
Texture caching is a must, though.
Personally, in Cycles4D I will do a workaround for now: forcing it to use a Cinema4D cached image via the image data pointer callbacks. Here I cache the image from Cinema4D at the texture_limit size, without reloading the image every time, as it is already in Cinema4D’s memory.
So, my suggestions:
1- A folder to store cached loaded images (so it can be accessed from the material preview render, the real-time preview, anywhere), so images won’t be reloaded on every render call. (Somewhat similar to how persistent data works, but it doesn’t depend on context!)
2- A texture memory pool? We also need the OIIO texture cache to avoid a huge memory footprint. (No idea how this will work on the GPU!)
3- Mipmaps are a must; I guess they are not used at all yet.
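For suggestion 1, the key point is that the cache entry must not depend on any render context, only on the source file and the requested size; a sketch of such a cache key (the hashing scheme and names are my own invention):

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Build a context-independent cache filename from the source path, its
// modification time, and the requested resolution. Any render (material
// preview, real-time preview, final) asking for the same image at the
// same size hits the same entry; a changed mtime invalidates it.
std::string cache_key(const std::string &source_path,
                      std::int64_t mtime,
                      int width, int height)
{
  std::string id = source_path + "|" + std::to_string(mtime) + "|" +
                   std::to_string(width) + "x" + std::to_string(height);
  return std::to_string(std::hash<std::string>{}(id)) + ".cache";
}
```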
Edit: my method (caching from Cinema4D) didn’t work well; it only gives a slight boost.
So the only way is to create a cache on disk which doesn’t rely on context.