What are the reasons OSL shader nodes (script nodes) are not available on the GPU?

I suppose the main issue is the requirement of the CUDA/OpenCL/OptiX SDKs, which is very user-unfriendly.

But is it feasible to move the problem to the addon developer’s side?
For example, a developer could compile some nodes and ship the binary code as part of the addon.

NVIDIA contributed experimental GPU backend support. I suspect it is simply a matter of someone doing the work. From here:

Experimental OptiX rendering:

  • Build option USE_OPTIX=1 enables experimental OptiX support.
  • testshade and testrender now take --optix flags to run tests on OptiX. You can also force either to run in OptiX mode with the environment variable TESTSHADE_OPTIX=1. #996 #1001 #1003
  • Various fixes to how strings are represented on the Cuda side. #973 (1.11.0)
  • More complete support of all the noise varieties. #980 (1.11.0)
  • Got spline() function working. #974 #1011 (1.11.0)
  • Work towards getting texture calls working. #974 (1.11.0)
  • printf works for multiple values. #1007 (1.11.0)
  • Work on color-related functions. #1012 (1.11.0)
  • Support for native OSL closures. #1028 #1029 (1.11.0)
  • Work on matrix ops. #1054 (1.11.1)
  • Allow init ops. #1059 (1.11.1)
  • Fixes to string closure params. #1061 (1.11.1)

That’s from the change log of an upcoming release; it’s not supported yet, and it is currently still missing basic things like texture support. But if support does land in its current form, it will only be available through OptiX, which we currently support only for the RTX range of cards.
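For reference, a minimal standalone OSL shader of the kind testshade compiles looks like the sketch below; the shader and its name are my own illustration, not from the OSL test suite. Per the change log above, you would run it through testshade with the --optix flag (or with TESTSHADE_OPTIX=1 set) to exercise the OptiX backend.

    shader simple_noise(
        float scale = 5.0,
        output color Cout = 0)
    {
        // grayscale Perlin-style noise driven by the shading point P
        float n = noise("perlin", P * scale);
        Cout = color(n, n, n);
    }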

I read elsewhere that it’s possible to run older cards on the OptiX backend, and that this is the plan for NVIDIA cards in the future: to replace CUDA with OptiX and support all NVIDIA cards that way. Is this accurate? It seems like it would simplify some things.

I don’t quite understand - how would it help?

What I’m talking about is ‘script nodes’ in Cycles.
I’ll clarify the title.
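To be concrete, a script node just wraps an OSL shader. A minimal, purely illustrative example (the shader and its names are mine, not taken from Blender’s templates) looks like this, and today it only runs when Cycles renders on the CPU:

    shader simple_diffuse(
        color Tint = color(0.8, 0.8, 0.8),
        output closure color BSDF = 0)
    {
        // diffuse(N) is a standard OSL closure; N is the shading normal
        BSDF = Tint * diffuse(N);
    }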

No, we understood, but I’ll rephrase: the OSL library just didn’t support the GPU. NVIDIA has contributed code that will make it work for OptiX, and that will land at some point in the future (this has been dragging on for a few years now). Once support lands, we can have OSL on the GPU, but only for cards that support the OptiX codepath.

If you can work with a simplified subset of OSL, you can use OSLPy, an addon of mine that translates it to a regular Eevee/Cycles node graph that will run anywhere (but I last tested it in 2.80, so I’m not sure if it still works).
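By “simplified subset” I mean shaders built from basic math and pattern functions, roughly like the illustrative sketch below (this is my own example, not taken from the addon’s documentation, and I’m not listing the exact supported feature set here):

    shader stripes(
        float frequency = 10.0,
        color A = color(1, 1, 1),
        color B = color(0, 0, 0),
        output color Cout = 0)
    {
        // alternate between A and B along the u texture coordinate
        float t = step(0.5, fmod(u * frequency, 1.0));
        Cout = mix(A, B, t);
    }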

Oh. So it’s me misunderstanding the whole situation. I thought OSL had been ported to the GPU long ago.

The official OSL site says: “Additionally, the source code can be easily customized to allow for renderer-specific extensions or alterations, or custom back-ends to translate to GPUs or other special hardware.”

I didn’t expect that “easily customized” could take years.

Just because something can be “easily” done doesn’t mean it will happen anytime soon; it is mostly driven by the needs of the stakeholders.

OSL is made by Sony, who use it for movie productions, which are generally rendered on the CPU due to their rather massive RAM footprint.

To give an impression of how much data goes into such a thing, Disney released a production data set from Moana. At 200 GB, you can see why that segment of the market is not super keen on GPUs.

So with no one driving that feature, it took a long time to get there.

Brecht said this a while ago:

The main issue is to get OSL working on the GPU. It’s almost as much work as building Cycles itself. It’s not easy to do so we need more people to help us.

I’m not sure I understand the reasoning. Would it not behoove Sony to have hybrid GPU + CPU rendering? Is it really an either-or situation?

Higher quality means more memory usage for textures. The two ideas are fundamentally at odds: CPU and GPU rendering have different needs, and the GPU has a limited memory budget.

It doesn’t matter as there is a clear divide between offline and online rendering.

Also, I believe the optimizations in OSL have resulted in CPU performance that can in some instances exceed the GPU.