I’m trying to implement a “Light Probe” geometry node that can capture the light value at a given point in space. The end goal is to be able to create a point cloud with an attribute containing the light color and intensity at each point, and use it to drive further generation (for example: moss only on the shaded side of a rock, flowers growing only where light shines, etc.).
I think RenderMan is the only render engine that can achieve similar functionality, by baking illumination to a point cloud, but I’m not sure if it works only for surfaces or for arbitrary points.
I was planning to use the same code that is used to calculate irradiance volumes, but it seems it was reworked recently for EEVEE Next, so instead of calculating values for each point it processes the whole volume.
So, I’m looking for advice on what the best approach for this would be.
Thanks for your interest in developing for geometry nodes. What you want to do sounds interesting. I’m not sure if it’s possible. Maybe Clément can tell us a bit about whether that’s feasible, i.e. whether light probe data could be made accessible from geometry nodes. Right now I also don’t know what EEVEE Next holds for light probes.
The irradiance volumes in EEVEE Next do hold the baked irradiance data. So it should be easy to query and expose in geometry nodes.
You mean something like an “Irradiance Volume Info” node, which takes the object as an input and outputs the baked data? Or maybe a “Sample Irradiance Volume” node, which takes an irradiance volume object and a position as inputs and outputs the sampled data.
I was thinking about actually calculating irradiance data during node evaluation, but using the input positions instead of pre-baked data (something like a one-pixel 360-degree camera).
I guess just sampling pre-baked data would produce a good enough result and would be significantly faster, maybe even sampling all of the available light probes like EEVEE does.
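To make the idea concrete, here is a minimal sketch of what sampling pre-baked data at an arbitrary position could look like. It assumes the baked volume is a dense, axis-aligned grid of RGB values, which is an illustrative assumption and not EEVEE’s actual storage format; all names are hypothetical.

```python
# Trilinearly sample a hypothetical baked irradiance grid.
# grid: dict mapping (x, y, z) voxel indices to (r, g, b) tuples.
# This dense-grid layout is an assumption, not EEVEE's real format.

def lerp(a, b, t):
    """Linearly interpolate two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def sample_irradiance(grid, size, pos):
    """size: (nx, ny, nz) voxel counts; pos: normalized position in [0, 1]^3."""
    nx, ny, nz = size
    # Map the normalized position into voxel-index space.
    fx, fy, fz = pos[0] * (nx - 1), pos[1] * (ny - 1), pos[2] * (nz - 1)
    x0, y0, z0 = int(fx), int(fy), int(fz)
    x1, y1, z1 = min(x0 + 1, nx - 1), min(y0 + 1, ny - 1), min(z0 + 1, nz - 1)
    tx, ty, tz = fx - x0, fy - y0, fz - z0
    # Interpolate along x, then y, then z.
    c00 = lerp(grid[x0, y0, z0], grid[x1, y0, z0], tx)
    c10 = lerp(grid[x0, y1, z0], grid[x1, y1, z0], tx)
    c01 = lerp(grid[x0, y0, z1], grid[x1, y0, z1], tx)
    c11 = lerp(grid[x0, y1, z1], grid[x1, y1, z1], tx)
    c0 = lerp(c00, c10, ty)
    c1 = lerp(c01, c11, ty)
    return lerp(c0, c1, tz)
```

A real node would additionally need to transform the input position into the probe volume’s local space and blend between multiple probes, but the per-volume lookup itself is just this kind of interpolation.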
This does sound like an interesting idea. I do wonder if a more general feature is the ability to convert a baked irradiance volume to a volume object (an OpenVDB grid), which could then be sampled with the existing “Sample Volume” node. That would also decouple the baking functionality from geometry nodes, which might make things simpler right now and ease maintenance in the future. Another option is that the result of baking could be a volume object.
In that case I can start by exposing the irradiance volume data to the Python API. If I’m understanding Blender’s architecture correctly, this will only require writing the appropriate RNA in
Not sure if that solves the general case. Afaik, the light probes are direction-dependent, so just a color per voxel/point is not enough to fully represent them.
Could still be useful of course.
Not sure if the RNA API is the correct direction. It is used for interacting with Python, but geometry nodes use the C++ API, so you could directly access the volume data from geometry node code. Just make sure you have the correct dependency graph state and read the CoW (copy-on-write) volume object.
Looking at some classes, it might even use spherical harmonics (still trying to wrap my head around this; if anyone has useful links, please send them). That allows storing more information than a simple direction. This is not a problem, since we can just store this info in separate attributes.
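For anyone else trying to build intuition: an L1 (band 0 + band 1) spherical harmonics encoding stores just four coefficients per color channel, yet captures how light varies with direction. The sketch below shows the basic projection and evaluation; it illustrates the idea only and is not EEVEE’s actual coefficient layout.

```python
# Illustrative L1 spherical harmonics: 4 coefficients (1 constant + 3 linear)
# approximate a direction-dependent signal. Not EEVEE's actual storage layout.

Y0 = 0.282095  # band-0 (constant) basis coefficient
Y1 = 0.488603  # band-1 (linear) basis coefficient

def sh_basis(d):
    """L1 SH basis evaluated for a unit direction d = (x, y, z)."""
    x, y, z = d
    return (Y0, Y1 * y, Y1 * z, Y1 * x)

def project_sample(direction, value, weight):
    """Project one scalar radiance sample into 4 SH coefficients."""
    return tuple(b * value * weight for b in sh_basis(direction))

def evaluate(coeffs, d):
    """Reconstruct the encoded signal in direction d."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```

Projecting a single bright sample from +Z and evaluating in +Z versus −Z shows the directionality: the reconstruction is larger toward the light than away from it, which is exactly the information a single color per point would lose.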
Right now I have found a workaround for my goal: I instance cubes on the points and generate custom UVs so that each side of each cube fills one pixel of a texture, then use the generated mesh to bake lighting into that texture. I use this light texture to calculate a weighted sum of the normals of all sides of each cube, which gives me a light direction, color, and intensity.
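The reconstruction step described above can be sketched as follows, assuming six baked RGB values per cube, one per face; the luminance weights and function names are my own illustrative choices, not the actual node setup.

```python
import math

# Sketch of the cube-face reconstruction: each of the six face normals is
# weighted by the luminance baked into that face's pixel. The summed vector
# gives a dominant light direction; the summed color and total luminance
# give color and intensity. Details are illustrative.

FACE_NORMALS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights

def reconstruct_light(face_colors):
    """face_colors: six baked RGB tuples, ordered like FACE_NORMALS."""
    direction = [0.0, 0.0, 0.0]
    color = [0.0, 0.0, 0.0]
    intensity = 0.0
    for n, rgb in zip(FACE_NORMALS, face_colors):
        w = luminance(rgb)
        intensity += w
        for i in range(3):
            direction[i] += n[i] * w
            color[i] += rgb[i]
    # Normalize the direction if any light was received at all.
    length = math.sqrt(sum(d * d for d in direction))
    if length > 0.0:
        direction = [d / length for d in direction]
    return direction, color, intensity
```

For a cube lit only on its +Z face, this yields a direction of (0, 0, 1) and an intensity equal to that face’s luminance, which matches the intuition behind the workaround.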
Sorry, I can’t provide you with my setup right now, because this is done for internal projects in our company, but hopefully this description provides enough information on its own.
The point is to make the data accessible to the end user, even if only through the Python API. That could already be used to create new add-ons (to generate these point clouds from Python, or to export them to other software, for example). A request for such an API is already on Right-Click Select.
Then, after gaining enough knowledge of how this data works, I can use it to create a new geometry node.