I have an intuition that it might be possible to run computations on the GPU by writing a fragment shader and creating some buffers (maybe some textures).
I want to compute a smooth vector field on a mesh, but averaging loads of vectors through bmesh is slow in Python. I thought I could build a few textures, pack all the data into them, and run all the computations in parallel using GLSL.
I tried using numpy, but I didn't get a great speed improvement since my code still relied too much on Python-level calls. I am not very good at writing algorithms with numpy.
Yes, this is possible, but probably rather difficult. Numpy is your best bet, but it does take some effort to learn how to make the most of it (mostly this involves using numpy functions instead of Python loops).
Edit: watch this PyCon talk about getting the most out of numpy before resorting to the GPU. The problem with bmesh is that you're iterating with Python loops, which is far, far slower than numpy's vectorized operations.
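To make the "vectorized instead of looped" point concrete, here is a rough sketch of one smoothing pass for a per-vertex vector field. The data layout (a flat vertex array plus an edge index list, which you'd extract from bmesh first) and all names are illustrative, not anything from bmesh itself:

```python
import numpy as np

# Hypothetical example data: one 3D vector per vertex, plus an edge list
# (in Blender you would first dump these out of bmesh into flat arrays).
vecs = np.random.rand(5, 3)
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])

def smooth_step(vecs, edges):
    """One smoothing pass: replace each vector by the mean of its neighbors."""
    acc = np.zeros_like(vecs)
    counts = np.zeros(len(vecs))
    # Accumulate both directions of every edge without a per-vertex Python loop.
    # np.add.at handles repeated indices correctly, unlike plain fancy indexing.
    np.add.at(acc, edges[:, 0], vecs[edges[:, 1]])
    np.add.at(acc, edges[:, 1], vecs[edges[:, 0]])
    np.add.at(counts, edges.ravel(), 1)
    return acc / counts[:, None]

smoothed = smooth_step(vecs, edges)
```

The whole per-vertex loop collapses into a handful of numpy calls, which is usually where the big speedup over looping through bmesh comes from.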
I ended up doing it with the numpy module. I got some speedup, but for such a discrete and repetitive task as generating a cross field, I suspect the GPU would do better.
Something you might try is the Anaconda Python distribution, which comes bundled with natively accelerated (and possibly GPU-accelerated) versions of numpy etc. That might get you a performance boost for little or no effort, and it can also provide just about any Python package you want through the conda package manager.
You could then play with things like PyTorch or TensorFlow.
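As a sketch of what that buys you: the same neighbor-averaging pass written with PyTorch tensors runs unchanged on a GPU when one is present. The data and device selection here are illustrative only:

```python
import torch

# Use the GPU if PyTorch can see one, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical example data, mirroring the numpy version above.
vecs = torch.rand(5, 3, device=device)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]], device=device)

acc = torch.zeros_like(vecs)
counts = torch.zeros(len(vecs), device=device)
# index_add_ scatters the edge contributions in parallel on the device.
acc.index_add_(0, edges[:, 0], vecs[edges[:, 1]])
acc.index_add_(0, edges[:, 1], vecs[edges[:, 0]])
counts.index_add_(0, edges.reshape(-1), torch.ones(edges.numel(), device=device))
smoothed = acc / counts.unsqueeze(1)
```

The nice part is that no GLSL or texture plumbing is needed; the framework decides how to map the scatter/gather onto the GPU.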