Something wrong with making OpenGL calls from an independent module?


recently forked blender because life is short. been experimenting with small changes, fun times.

right now i'm trying to use transform feedback to run some computations on the gpu. my code works, so to speak: the console prints the right values and doesn't spit out errors. i am, however, getting many, many crashes.

this method right here is the culprit:

std::vector<mt::vec3_packed> deformTest(int totvert) {
	glUseProgram(program); // crashes here

	// bind the output buffer to transform feedback binding point 0
	glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER,
	                 0, vertBuffers[POSITION_OUTPUT]);

	glEnable(GL_RASTERIZER_DISCARD); // compute-only pass, skip rasterization
	glBeginTransformFeedback(GL_POINTS);
	glDrawArrays(GL_POINTS, 0, totvert);
	glEndTransformFeedback();
	glDisable(GL_RASTERIZER_DISCARD);

	// read the captured results back to the cpu (stalls until the gpu is done)
	std::vector<mt::vec3_packed> feedback(totvert);
	glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER,
	                   0, vertSize, &feedback[0]);
	return feedback;
}


the crash to desktop isn't immediate; it only happens after this has been running every frame for ten seconds or so. after doing some reading, i suspect this is a matter of either:

A) opengl calls outside of the gpu module are illegal, or

B) the cpu-gpu sync is brutal and i'm actually running out of memory, hence the segfault.

can anyone tell me which one?

If the program crashes, the backtrace often helps to reveal the reason for the crash.

Further, OpenGL commands must be run within a specific OpenGL context. If you don’t explicitly activate a context, commands may run in whatever context happened to be active before. GPU_context_* functions are used to create/discard/activate context.
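
The activation pattern would look roughly like this. This is a sketch only: it assumes names from the GPU_context.h API (`GPU_context_active_set` and friends), and exact names and signatures may differ between Blender versions and branches:

```cpp
// Sketch, not verified against any particular Blender branch.
#include "GPU_context.h"

void run_gpu_compute(GPUContext *my_ctx)
{
	// Make the intended context current before issuing any gl* calls;
	// otherwise they land in whichever context was active last.
	GPU_context_active_set(my_ctx);

	// ... glUseProgram(program); glDrawArrays(...); readback; ...
}
```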

ah, that would make sense. i don't think container objects are shareable across contexts.

the code only runs inside the bge main loop, and i assumed a single context since all the activity is confined to the viewport. maybe i was wrong. i'll have to check.

thank you c: