I’m investigating a low-effort path to making use of path guiding on the GPU.
A full GPU implementation of path guiding is far outside the time I have available. However, I was looking at the way the CPU path guiding is integrated into Cycles, and I think I might have a possible path forward for research.
As a test case, I was thinking of doing a low-sample-count CPU render, saving the resulting guiding fields, and then doing a GPU render using the pre-trained data.
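A minimal sketch of that two-phase idea, using a hypothetical stand-in for a guiding field (the `GuidingField` class, `train_field`, and the JSON serialization here are all placeholders; OpenPGL's actual field types and serialization API differ):

```python
import json
import os
import random
import tempfile

# Hypothetical stand-in for a trained guiding field: a coarse spatial grid
# of per-cell statistics, serialized to disk after the CPU pre-pass.
class GuidingField:
    def __init__(self, grid):
        self.grid = grid  # cell id -> accumulated statistics

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.grid, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls(json.load(f))

def train_field(num_samples):
    # Phase 1 (CPU): a short low-sample render accumulates per-cell stats.
    grid = {}
    for _ in range(num_samples):
        cell = str(random.randrange(8))  # placeholder spatial binning
        grid[cell] = grid.get(cell, 0) + 1
    return GuidingField(grid)

# Phase 1: train on the CPU, then write the field out.
field = train_field(256)
path = os.path.join(tempfile.mkdtemp(), "field.json")
field.save(path)

# Phase 2: the GPU render would load the pre-trained field read-only.
pretrained = GuidingField.load(path)
```

The point of the sketch is only the split: training and serialization happen once on the CPU, and the GPU render consumes the field as frozen, read-only data.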
The bottleneck I see is the high-frequency querying of the fields during the render, which would effectively stall the GPU while those lookups take place.
I’ve thought of ways around it, ranging from bulk querying to implementing only that small query subset of OpenPGL on the GPU.
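The bulk-querying idea can be sketched roughly as follows. Instead of one CPU-side field lookup per bounce (a GPU stall each time), all pending shading points in a wavefront are gathered and resolved in a single batch. Everything here is a placeholder (`query_field`, the dict-based field, the integer "shading points"), not the Cycles or OpenPGL API:

```python
def query_field(point, field):
    # Placeholder for a single per-point guiding-field lookup.
    return field.get(point % 8, 0.5)

def bulk_query(points, field):
    # One CPU round trip for the whole wavefront instead of one stall
    # per shading point; the results would then be uploaded to the GPU.
    return [query_field(p, field) for p in points]

# Toy field and a wavefront of shading points awaiting guidance data.
field = {i: i / 8 for i in range(8)}
wavefront = [3, 11, 5, 20]
pdfs = bulk_query(wavefront, field)
```

The trade-off being probed: batching amortizes the CPU/GPU round-trip cost, at the price of buffering shading points and adding latency per bounce.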
But before I embark on any of those: any thoughts on whether it is worth tackling? Is a GPU implementation of OpenPGL close enough that it’s not worth it? Is the hybrid approach too much effort, making it better to go straight for the full GPU implementation?
I don’t think this hybrid approach is worth implementing. The goal is to get this working fully on the GPU, and I don’t think it’s a good use of our time to implement, maintain, and support an intermediate solution that we’d have to discard later on.
Thanks so much for the reply, Brecht! Sorry, I didn’t mean to imply it was something I expected you or anyone else on the Blender team to implement. I’m considering implementing it as an experiment to see how feasible it is to improve render quality on some of my own projects.
I believe Blender is dependent on the OpenPGL team working on a GPU implementation, which I think might be a year or two off, unless there are other plans?
Right, what I meant is that we probably wouldn’t want to go through the process of code review and maintenance even if someone else did it. But certainly anyone can experiment with their own implementation, and I imagine it can be made to work reasonably well for some scenes.
There is no concrete timeline for OpenPGL on the GPU, so indeed it might be a while.