Below you will find posts where an ugly render problem appears when rendering with combined CPU/GPU in Cycles. I have the same problem; maybe you have already noticed it, too.
The main message of the posts:
At the end of a render session the very last tiles of a frame are usually still being rendered by CPU threads, while the faster GPU has already finished its job and sits idle. This makes render times feel much longer. So is it possible to always render the last tiles with the GPU?
Here are two user posts on rightclickselect where the same problem is described in more detail:
V-Ray has a feature called “Dynamic bucket splitting” which subdivides the last tiles into smaller ones. It doesn’t always work well, because some tiles take so long that the render is still stuck on them even after the last small ones have finished, but generally it speeds up renders. I don’t know whether this would be possible in Cycles, because it would mean differently sized tiles.
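To make the idea concrete, here is a minimal sketch of what such dynamic splitting could look like. This is not V-Ray or Cycles code; the names (`Tile`, `split_tile`, `rebalance`) and the rebalancing policy are purely illustrative assumptions: whenever idle devices outnumber remaining tiles, the largest pending tile is split into quadrants, down to a minimum size.

```python
# Illustrative sketch of "dynamic bucket splitting" (not actual
# V-Ray/Cycles code): split the largest pending tile into quadrants
# whenever idle render devices outnumber the remaining tiles.
from dataclasses import dataclass
from typing import List


@dataclass
class Tile:
    x: int  # left edge in pixels
    y: int  # top edge in pixels
    w: int  # width in pixels
    h: int  # height in pixels


def split_tile(t: Tile) -> List[Tile]:
    """Split a tile into four quadrants (halving both axes)."""
    w1, h1 = t.w // 2, t.h // 2
    w2, h2 = t.w - w1, t.h - h1
    return [Tile(t.x,      t.y,      w1, h1),
            Tile(t.x + w1, t.y,      w2, h1),
            Tile(t.x,      t.y + h1, w1, h2),
            Tile(t.x + w1, t.y + h1, w2, h2)]


def rebalance(pending: List[Tile], idle_devices: int,
              min_size: int = 16) -> List[Tile]:
    """Split the largest pending tiles until every idle device has
    work, but never below min_size pixels on either axis."""
    pending = list(pending)
    while 0 < len(pending) < idle_devices:
        pending.sort(key=lambda t: t.w * t.h, reverse=True)
        biggest = pending[0]
        if biggest.w < 2 * min_size or biggest.h < 2 * min_size:
            break  # too small to split any further
        pending = split_tile(biggest) + pending[1:]
    return pending
```

With three devices idle and one 64×64 tile left, `rebalance` would split it once into four 32×32 tiles so the idle devices can join in; note this is exactly the situation Cycles would need differently sized tiles for.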
I looked into this a while ago — splitting tiles for CPUs is easy in Cycles itself, but the Blender-side tile handling code doesn’t currently work well with it.
That should be fixable, I’ll have a look again.
I expect we can do this entirely on the Cycles side and still deliver fixed-size tiles to Blender? Otherwise it gets a bit complicated to handle things like the Save Buffers feature. Denoising also seems simpler if we keep tiles as they are.
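A rough sketch of how the last suggestion could work, under the assumption (not confirmed by the post) that sub-tiles are rendered into separate buffers and then copied back into the parent tile's buffer, so Blender still receives one fixed-size tile. The function name and buffer layout (flat row-major pixel list) are hypothetical:

```python
# Illustrative sketch (not actual Cycles code): render sub-tiles
# internally, then merge their buffers back into the parent tile's
# buffer so the host application still sees one fixed-size tile.

def merge_subtile(parent_buf, parent_w, sub_buf,
                  sub_x, sub_y, sub_w, sub_h):
    """Copy a rendered sub-tile (row-major pixel list) into the
    parent tile buffer at offset (sub_x, sub_y)."""
    for row in range(sub_h):
        dst = (sub_y + row) * parent_w + sub_x
        src = row * sub_w
        parent_buf[dst:dst + sub_w] = sub_buf[src:src + sub_w]
    return parent_buf
```

The appeal of this design, as the post notes, is that features keyed to fixed tile geometry (Save Buffers, denoising neighbor tiles) would not need to change at all.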