Suggestions / feedback on the extensions for the gpu module

Hi, any news on exposing these in the Python gpu module? bgl is being fully deprecated in 3.5, and two years have passed since this thread started:

  • GPU_scissor
  • GPU_line_smooth

Hi,

It seems we only added gpu.state.viewport_set, but not scissor testing. I don’t see an issue with adding these as well, since both Metal and Vulkan support them. I added #104911 - GPU: Add gpu.state.scissor_set/gpu.state.scissor_reset - blender - Blender Projects to track this.
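For anyone wanting to prepare, here is a minimal sketch of how such an API could be used, assuming it lands with names along the lines of the tracking issue (gpu.state.scissor_set plus a scissor-test toggle, mirroring viewport_set). The gpu module only exists inside Blender, so the helper below degrades to a no-op elsewhere:

```python
# Sketch only: assumes gpu.state.scissor_set(x, y, width, height) and
# gpu.state.scissor_test_set(enable), as proposed in #104911.
try:
    import gpu  # only available inside Blender
except ImportError:
    gpu = None

def draw_clipped(draw_fn, x, y, width, height):
    """Run draw_fn with drawing clipped to a window-space rectangle.
    Returns True if clipping was applied, False outside Blender."""
    if gpu is None:
        return False
    gpu.state.scissor_test_set(True)
    gpu.state.scissor_set(x, y, width, height)
    try:
        draw_fn()
    finally:
        # Always restore state, even if the draw callback raises.
        gpu.state.scissor_test_set(False)
    return True

print(draw_clipped(lambda: None, 0, 0, 200, 100))
```

Wrapping the state change in try/finally matters in draw handlers: a leaked scissor state would clip every later draw call in the region.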

For line smoothing we cannot use a global state anymore; it has been replaced with polyline shaders. If you need smooth lines, we suggest using the polyline shaders: global-state line smoothing wasn’t supported on all OpenGL platforms and will not be supported by Metal or Vulkan. For polylines, please check the built-in shaders.


Thank you so much!!!

Sad that the global state for line smoothing is not supported, but glad that there is a non-global alternative at the shader level :smiley:

[For other addon devs]
Here is the source code of the POLYLINE built-in shaders, which is helpful for building your own line shaders with smoothing. Look for the gpu_shader_3D_polyline_XXX.glsl files.

Also, gpu_shader_3D_polyline_info.hh has the information about constants, uniforms, etc. needed to create your own shader from Python using gpu.types.GPUShaderCreateInfo and gpu.types.GPUStageInterfaceInfo from the API.
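To illustrate the moving parts, here is a minimal sketch of building a shader through gpu.types.GPUShaderCreateInfo. It is a flat-color shader rather than a polyline one, and since there are no varyings passed between stages, no GPUStageInterfaceInfo is needed. It only produces a real shader inside Blender:

```python
# Sketch only: a minimal flat-color shader built via GPUShaderCreateInfo.
try:
    import gpu  # only available inside Blender
except ImportError:
    gpu = None

def make_flat_color_shader():
    """Build a minimal flat-color shader; returns None outside Blender."""
    if gpu is None:
        return None
    info = gpu.types.GPUShaderCreateInfo()
    info.vertex_in(0, 'VEC3', "pos")
    info.push_constant('MAT4', "ModelViewProjectionMatrix")
    info.push_constant('VEC4', "color")
    info.fragment_out(0, 'VEC4', "fragColor")
    info.vertex_source(
        "void main()"
        "{ gl_Position = ModelViewProjectionMatrix * vec4(pos, 1.0); }"
    )
    info.fragment_source("void main() { fragColor = color; }")
    return gpu.shader.create_from_info(info)

shader = make_flat_color_shader()
print(shader)
```

Inside Blender, the returned shader can be used with batch_for_shader like any built-in one; shaders with varyings additionally declare a GPUStageInterfaceInfo and pass it to info.vertex_out().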


Hey there,

Will it be possible to alter the sampler state for gpu textures? It would be awesome to control the filter/wrap mode.


Hi there,
In my sound waveform display addon, I load a texture and use bgl to filter it (sharp pixels after resizing) with the following:

bgl.glTexParameterf(bgl.GL_TEXTURE_2D, bgl.GL_TEXTURE_MIN_FILTER, bgl.GL_NEAREST)
bgl.glTexParameterf(bgl.GL_TEXTURE_2D, bgl.GL_TEXTURE_MAG_FILTER, bgl.GL_NEAREST)

Full code is here: sound_waveform_display/display_wave_image.py at b8e0f79a7cd5bf181d2c1aefbe57c1e5c7da2e88 · Pullusb/sound_waveform_display · GitHub

This addon is used by quite a lot of people now, and I would like to update it to support Metal and future Blender versions.

I didn’t find how to do that with the new gpu module. I suppose this should be done in a shader now?
But I have no clue how to do that ^^.

If this is supposed to be done in a shader, do you have a good reference on manipulating textures with a fragment shader within Blender?
Is there a good place for gpu module examples in general, outside the official docs?
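One trick that may help here (my own sketch, not an official answer): emulate nearest filtering in the fragment shader by snapping the interpolated UV to the centre of the nearest texel before sampling, so the sampler's filter state no longer matters. The names below (image, texCoord_interp, fragColor) are placeholders that must match your shader interface:

```python
# GLSL fragment source (kept as a Python string) that forces
# nearest-neighbour sampling by snapping the UV to the nearest texel
# centre before the texture() lookup.
NEAREST_FRAG_SRC = """
void main()
{
    vec2 tex_size = vec2(textureSize(image, 0));
    vec2 snapped = (floor(texCoord_interp * tex_size) + 0.5) / tex_size;
    fragColor = texture(image, snapped);
}
"""
```

The floor()+0.5 snap is what makes the lookup land exactly on one texel, reproducing the GL_NEAREST look even when the sampler interpolates.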


Good point @Pullusb!

The idea of exposing parameters filter, repeat, use_mipmap, clamp_to_border_color and compare_enabled when linking a texture to a shader has been discussed before (in chat).

It would be like this:

gpu.types.GPUShader.uniform_sampler(name, texture, filter='NEAREST', repeat=[False, False, False], use_mipmap=False, clamp_to_border_color=False, compare_enabled=False)

But there were some open questions with this solution. For example:

  • Would these values be better set when creating the texture?
  • Is it worth it to set default values when creating the texture?
  • Could these values be getters/setters, modified on the texture (e.g. texture.filter = 'NEAREST')?

That’s why it went unresolved :\

Wanted to share a simple example to get things going with smoothed lines after migrating from bgl to gpu. It does have the quirk that lines appear thicker on zoom-out, though, since the line width is fixed in screen space.

import bpy
import gpu
from gpu_extras.batch import batch_for_shader

coords = [(1, 1, 1), (-2, 0, 0), (-2, -1, 3), (0, 1, 1)]
shader = gpu.shader.from_builtin('3D_POLYLINE_UNIFORM_COLOR')
batch = batch_for_shader(shader, 'LINES', {"pos": coords})


def draw():
    shader.bind() # required to change uniforms of the shader
    shader.uniform_float("color", (1, 1, 0, 1))
    # POLYLINE_UNIFORM_COLOR specific uniforms
    shader.uniform_float("viewportSize", (bpy.context.region.width, bpy.context.region.height))
    shader.uniform_float("lineWidth", 2.0)

    # make sure to set state before the draw
    # otherwise line won't be smoothed
    gpu.state.blend_set("ALPHA")
    batch.draw(shader)

bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')

hi @mano-wii
would it be possible to get access to instanced arrays? i’ve seen something close to it has been done recently Python: Add range and instance drawing to GPUBatch · cf572f1a64 - blender - Blender Projects but this is still not what will help if you want thousands instances of a single object drawn. for example think of drawing a fast interactive preview of many thousands instances of mesh that will be added with operator. looping over matrices and drawing meshes one by one is very, very slow…
what i mean is described here LearnOpenGL - Instancing in Instanced arrays part
cheers!


@mano-wii is there a replacement for bgl.glDepthRange(zNear, zFar) in the gpu module?

Is it possible to implement some other method for smoothed lines with the gpu module, besides polylines? Implementing polylines for each shader does seem cumbersome. I’ve found a simple method at the link below.

What it does is basically calculate the screen-space line centre in the vertex shader:

vLineCenter = 0.5*(pp.xy + vec2(1, 1))*vp;

Then the fragment shader measures the distance from gl_FragCoord.xy to vLineCenter and uses it to compute the alpha for the blending effect.

float d = length(vLineCenter - gl_FragCoord.xy);
float w = uLineWidth;
if (d > w)
  col.w = 0.0;
else
  col.w *= pow((w - d) / w, uBlendFactor);

Would it work, given the way gpu.state.line_width_set currently works?
I’ve spent a while trying to implement it but had no success. I’m curious whether it’s possible after all and worth continuing, or whether I should just stick to polylines. I also posted it on blender.stackexchange.

Simple and fast high quality antialiased lines with OpenGL – vitaliburkov (wordpress.com)
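For reference, the fragment-side falloff from that article reduces to a tiny function. Here is a plain-Python mirror of the GLSL above (function and argument names are mine), which makes it easy to experiment with uBlendFactor values outside a shader:

```python
def line_alpha(dist, line_width, blend_factor):
    """Alpha for a fragment at `dist` pixels from the line centre:
    0 beyond the line width, rising smoothly to 1 at the centre."""
    if dist > line_width:
        return 0.0
    return ((line_width - dist) / line_width) ** blend_factor

print(line_alpha(0.0, 2.0, 1.5))  # centre of the line: fully opaque -> 1.0
print(line_alpha(3.0, 2.0, 1.5))  # outside the line: transparent -> 0.0
```

Higher blend_factor values narrow the opaque core and widen the soft edge, which is the knob the article uses to tune the antialiasing.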

@mano-wii, could you please confirm that it is not currently possible to create a shader with a geometry stage using gpu.shader.create_from_info? This renders (pun intended) my code impossible on the Metal backend…

I went through all the docs available (not much at the moment), and this is all I’ve found:

Geometry shaders
To be completed. Due to specific requirements of certain gpu backend input and output parameters of this stage should always use a named structure. 

which is not Python-API related (Source/EEVEE & Viewport/GPU Module/GLSL Cross Compilation - Blender Developer Wiki).
There is nothing geometry-related in GPUShaderCreateInfo.

I have recently been working on an addon, and it would be nice to have “blit” methods for GPUFrameBuffer and GPUOffScreen objects.

I’m currently porting an addon to the gpu module.
Everything works fine in Blender 3.5, but when I tested whether it also works in 2.93 LTS, it threw an error.

The code (simplified example):

import gpu
from array import array

pixels = array("f", [0.1, 0.2, 0.3])
buffer = gpu.types.Buffer("FLOAT", 3, pixels)
print("done")

The error in 2.93.17 LTS:

Traceback (most recent call last):
  File "\Text", line 5, in <module>
TypeError: array size does not match

It smells like a bug in 2.93 to me, but maybe there’s something I’m overlooking?

Bug report: #107247 - gpu.types.Buffer creation fails in Blender 2.93 with error "array size does not match" - blender - Blender Projects

As there is no geometry shader support on Metal, I want to share a simple example of how to make a custom shader with smoothed polylines on Metal using only the vertex shader (based on the polyline shader that Blender uses for Metal). The idea can be extended to more complex geometry to replace a geometry shader.


Here are some concerns that I have about the BGL deprecation.

  1. Since Blender is now moving towards a multi-GAPI rendering approach for its internal features, we can no longer rely on the presence of OpenGL in the pipeline at all.
    Blender still exposes a custom render engine API that is used by a few alternative renderers. The usual workflow for a custom render engine implementing the “material” and “render” viewport modes is to render a full-screen quad and display the render result in it.

This has always made it possible to keep your render engine GAPI-agnostic and use an API other than OpenGL. To achieve this, you need access to a native texture handle. With the current GPU module capabilities, there is no way to create a GPUTexture instance from a native GAPI-specific texture handle. So, at best, the user has to convert their render result into a float array, as the bpy.types.Image approach is not feasible at all for this purpose.

  2. There is currently no way to efficiently update a buffer or texture with data.

Can anyone help out with some opinions on migrating geometry shaders to Metal?
I created a question on blender.stackexchange.com:

Has a decision been made about the texture filter?
The example you gave, with keyword arguments at texture-sampling time, seems nice to me.
But I’m no expert in GPU API usage, and I get that these are not trivial choices.
Do you think this is something we can expect for 3.6?


(Sorry for duplicated post, I accidentally replied to last poster instead of the thread in general)

I’ve been somewhat away from active addon development/support for some time, and started looking into fully porting my addons to the gpu module only now. Unfortunately, it seems that some things are still impossible to do with it (or I couldn’t find how in the documentation):

  • Control over texture mipmapping (creating a texture without mipmaps, creating a texture with mipmaps, uploading data for each mipmap level or at least generating them automatically).
  • Control over texture filtering and wrapping modes.
  • Control over rasterization antialiasing (in my case, I need to make sure it’s turned off, though otherwise it could be quite useful where supported).

A way to efficiently read/write buffer data could also be quite vital (currently, there seems to be only to_list(); it would be much preferable to have get/set methods that work with numpy arrays).

I hope bgl won’t be removed until these issues are addressed :slight_smile:

(By the way, out of curiosity: are there any plans to support stencil operations? They seem potentially useful for gizmos and such :thinking:)


Hmm… Also, after I tried to convert my code to use the gpu module (without mipmapping/filtering/wrapping for now), I’m getting this error when attempting to create a texture from RGBA8 buffers (either 4-channel UBYTE or 1-channel INT):

ValueError: GPUTexture.new: Only Buffer of format FLOAT is currently supported

This seems like a pretty big omission :thinking: Converting all buffers to FLOAT is, of course, possible, but very inefficient. I hope this gets amended before long :slightly_smiling_face:

After converting buffers to FLOAT, I no longer get the error… but the textures are displayed correctly for only 1 frame, and afterwards are rendered as completely black. Am I doing something wrong, or is this actually a bug? (I’m testing in Blender 3.6 for now)
EDIT: turns out, the error was on my side (there were still some GL commands left; after I completely removed any use of bgl, textures started displaying correctly).
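Until byte buffers are supported, one workaround is converting the pixel data to normalized floats before creating the buffer. A small sketch (the conversion itself is plain Python; the commented gpu.types.Buffer call only works inside Blender):

```python
from array import array

def ubyte_to_float_pixels(ubyte_pixels):
    """Convert 0-255 byte pixel values to normalized 0.0-1.0 floats."""
    return array("f", (b / 255.0 for b in ubyte_pixels))

rgba8 = array("B", [255, 128, 0, 255])  # one RGBA8 pixel
rgba_f = ubyte_to_float_pixels(rgba8)
print(rgba_f[0])  # -> 1.0

# Inside Blender one could then create the (4x larger) FLOAT buffer:
# buf = gpu.types.Buffer('FLOAT', len(rgba_f), rgba_f)
```

As noted above, this costs four times the memory of the original byte data, so it is strictly a stopgap until UBYTE buffers are accepted directly.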


Thanks for the feedback. We initially only added support for float textures. Adding support for byte buffers should be possible, including integer uniforms.

I will make a design proposal to settle the API.
