Tips or tricks for dealing with Eevee limitations

Hello everyone! I've been using Blender for almost two years now. It's an awesome 3D package, and 2.80 brings so many changes and improvements compared with the latest stable release.
Eevee in particular is a game-changer for the upcoming release.
But as I've read in the Eevee manual, the engine has some limitations, because it is OpenGL-based and not a ray-tracing render engine.
I don't know whether these limitations will be lifted later, but in the meantime there may be some tips or tricks for dealing with them.
For example, the maximum number of active lights in a scene is 128; if I add more than that, the rest don't illuminate. Is there a culling option for this, so that lights behind the camera are not counted as active? Second, Eevee has limits on light probes: it supports up to 64 irradiance volumes and up to 128 reflection cubemaps. Perhaps there is also a way to tweak indirect lighting by manipulating material nodes.
Thanks!

The light limit is news to me. I wonder if you really need that many lights, or whether you can break the scene up into separate scenes? I imagine the roadmap here is to keep the light limit in the viewport for the sake of speed and remove it for the final render, but that's just a guess.

Edit: As for tweaking indirect lighting, what you want is the “light path” node, and of course you can do a lot by using different light-visibility-collections.

I really like interactive viewport rendering in Eevee. Light changes update very fast, like real-time ray tracing in games. I'd like to know whether there could be a way to automatically turn off lights that are behind the camera view or viewport, and also lights that are too far away (like LOD), so viewport performance wouldn't suffer. That's my feedback.

That's doable with a little Python, for sure. You can use a driver on the visibility of the object, although I'm not sure that would get around the light limit; that may be counted per view layer or per scene, in which case it would take a little more work.
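The distance part of that culling is simple enough to sketch without touching bpy at all. Here's a minimal example of the kind of test a driver on a light's `hide_viewport` property could evaluate; the light names and the 50-unit cutoff are made up for illustration:

```python
import math

def light_visible_by_distance(light_pos, camera_pos, max_distance):
    """Return True if the light is within max_distance of the camera.

    Plain Python on purpose: in Blender you would feed in the
    world-space locations and hook the result up to a driver, but the
    math itself doesn't need bpy.
    """
    dx = light_pos[0] - camera_pos[0]
    dy = light_pos[1] - camera_pos[1]
    dz = light_pos[2] - camera_pos[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= max_distance

# Cull a few hypothetical lights down to the ones near the camera.
lights = {
    "Key":  (0.0, 0.0, 2.0),
    "Fill": (5.0, 0.0, 2.0),
    "Far":  (200.0, 0.0, 2.0),
}
camera = (0.0, -5.0, 2.0)
visible = [name for name, pos in lights.items()
           if light_visible_by_distance(pos, camera, 50.0)]
# visible is ["Key", "Fill"]; "Far" is culled.
```

You'd still want some hysteresis or a generous radius in practice, so lights don't pop on and off as the camera moves near the cutoff.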

You could compare the world-space location of the object to the camera's negative Z axis with some very simple math. If you need anything more complex, write your code in the Text Editor and register it as a driver function; then you can call it from your drivers.
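That comparison boils down to one dot product: a Blender camera looks down its local -Z axis, so a point is in front of the camera when the vector from camera to point has a positive dot product with that axis. A sketch of the math with plain tuples instead of `mathutils.Vector`, so it runs outside Blender too:

```python
def in_front_of_camera(light_pos, cam_pos, cam_forward):
    """True if light_pos lies in the half-space in front of the camera.

    cam_forward is the camera's viewing direction in world space, i.e.
    its local -Z axis (in Blender, roughly the normalized
    -camera.matrix_world.col[2].xyz). It is passed in as a plain tuple
    here so the function stays independent of bpy.
    """
    to_light = tuple(l - c for l, c in zip(light_pos, cam_pos))
    dot = sum(a * b for a, b in zip(to_light, cam_forward))
    return dot > 0.0

# Camera at the origin looking down world -Y:
forward = (0.0, -1.0, 0.0)
in_front_of_camera((0.0, -10.0, 0.0), (0.0, 0.0, 0.0), forward)  # True
in_front_of_camera((0.0, 10.0, 0.0), (0.0, 0.0, 0.0), forward)   # False
```

Inside Blender you could register this with `bpy.app.driver_namespace["in_front_of_camera"] = in_front_of_camera` and then call it from a driver expression on each light's visibility.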