Cryptomatte doesn’t work in 16-bit, and Z-Depth is automatically saved as 32-bit, so Blender already seems technically capable of handling mixed bit depths when saving EXR.
My 2000x2000px render comparison for file sizes with about 20 AOV layers:
half-float (16-bit): 225 MB
full-float (32-bit): 415 MB
Both multilayer EXR with ZIPS compression.
I don’t know at the moment how much of a difference it would make if the cryptomatte layers were kept 32-bit inside an otherwise half-float image, but there should still be a clear difference. The same shot at 5000x5000 px in half-float is already 1.4 GB, though I don’t have a full-float comparison made at that size.
Would be nice to get some help when setting the tile/sample size, so that the system doesn’t end up under-utilised. For CUDA it could be something like checking whether the device would be saturated according to cuOccupancyMaxPotentialBlockSize().
It is already using cuOccupancyMaxPotentialBlockSize() and scheduling several samples at a time if a single sample per tile isn’t saturating the GPU. See device_cuda_impl.cpp, CUDADevice::render().
I didn’t notice there was this Cycles Requests thread.
I’m going to throw my request in here.
Can we add a boolean at render time?
Similar to what other render engines call Clip Geo or a Clipper.
Basically, the boolean is handled via shader.
Examples in this thread
That would be neat!
Setting up such effects with boolean modifiers is tedious (clipping away many objects requires a modifier per object) and error-prone (booleans tend to fail on non-closed or non-manifold meshes).
I guess this discussion about licensing must have come up before on https://opensource.stackexchange.com.
Please keep the thread on-topic. Create a new one to discuss this, and link that here.
That was my personal opinion; this is in no way/shape/form the stance of the Blender project as a whole.
Always lovely when people post a private conversation without asking. While you are naturally free to do so, it is very likely I will be much less engaged in future conversations with you.
Well, on forums more and more users are wondering how the OptiX denoiser’s work in transparent areas could be sped up.
Adaptive Sampling solved the problem of rendering transparent areas. Could the OptiX denoiser somehow do something similar to what Adaptive Sampling does in transparent tiles, to speed up its work?
Is the performance overhead of the OptiX denoiser a significant issue? It hasn’t seemed to be for me. Is it slower on cards that don’t have tensor cores?
Also, I believe the OptiX denoiser is essentially a black box in the graphics driver: Cycles hands it image sections and gets back denoised versions, so I’m not sure there’s much the Blender/Cycles devs could do (except maybe identify empty tiles and skip denoising them entirely?).
Raytracing-based Bools would be cool indeed.
In the meantime I was partially successful at accomplishing something like this with a shader. There are plenty of cases where it breaks, so it’s far from perfect, but for the simplest cases it might work for you: