Suggestions / feedback on the extensions for the gpu module

Hi, is there any equivalent in the gpu module of these things?

  • glScissor
  • Antialiased Lines… glEnable(GL_LINE_SMOOTH) and so on…

In C we have an API for that.
But in Python it isn't exposed.
Part of the reason is that it's still unclear what will be deprecated with the Vulkan implementation.

For the record, so far these were the functions requested here:

GPU_framebuffer_read_depth
GPU_framebuffer_blit
GPU_vertbuf_read
GPU_vertbuf_unmap
GPU_line_smooth
GPU_scissor
GPU_scissor_get

GPU_framebuffer_read_depth

Reading data back to the CPU should be supported, as it is needed to construct render passes. Although it will be more complex due to the memory requirements of the buffers, the GPU functions should be able to reuse the current API.

GPU_framebuffer_blit

Should be supported; this maps to vkCmdCopyBuffer. It might need a command per row when blitting to a sub-buffer. Or perhaps we could use vkCmdBlitImage, but that depends on the type of plane being copied.

GPU_vertbuf_read
GPU_vertbuf_unmap

Although I don't see why it is needed (except for debugging/testing without fancy tools like RenderDoc), it should be possible to support it.

GPU_line_smooth

Line smoothing is available via VK_LINE_RASTERIZATION_MODE_RECTANGULAR_SMOOTH_EXT, but that depends on the platform.

GPU_scissor

Should map to vkCmdSetScissor. That can handle more than we need.

GPU_scissor_get

Is a no-brainer, as it returns internal state of Blender. I don't see this changing for Vulkan.
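From the Python side, a minimal sketch of how the scissor state could be driven once exposed (assuming the gpu.state naming Blender 3.x ended up with):

import gpu

# Clip all subsequent draw calls to a 200x100 pixel rectangle.
gpu.state.scissor_test_set(True)
gpu.state.scissor_set(10, 10, 200, 100)

# ... issue draw calls here; fragments outside the rectangle are discarded ...

# Query the current scissor rectangle as (x, y, width, height).
print(gpu.state.scissor_get())

# Restore the default state when done.
gpu.state.scissor_test_set(False)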


I’m happy to see we have read_depth for framebuffer now. I wonder if this suggestion will still be a possible option? As the document (gpu.types.GPUOffScreen) suggests, does offscreen.texture_depth sound like a proper name for that?

And if I create a GPUTexture object from the Buffer object returned by read_depth, should this GPUTexture object work the same as offscreen.texture_depth, just at the cost of a redundant memory copy?

I analyzed the code in order to implement this option.
Changes to the gpu module in C will be needed first so we can expose it in Python.

As the Python module only exposes what already exists in C, it's good to first make sure this is the only way to port your script.
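For the read-back that is already exposed, here is a minimal sketch with an offscreen buffer (assuming the Blender 3.x gpu API; the 512x512 size is arbitrary):

import gpu

offscreen = gpu.types.GPUOffScreen(512, 512)

with offscreen.bind():
    fb = gpu.state.active_framebuffer_get()
    fb.clear(color=(0.0, 0.0, 0.0, 1.0), depth=1.0)
    # ... draw something into the offscreen here ...
    # Read the depth attachment back into a CPU-side Buffer of floats.
    depth = fb.read_depth(0, 0, 512, 512)

offscreen.free()
print(depth.dimensions)  # one float per pixel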


hi,
i read the latest documentation and so far it looks like there is everything i need to update my addon: Point Cloud Visualizer - Blender Market
the only things missing are these… are they going to be exposed in python, please?

bgl.glEnable(bgl.GL_CLIP_DISTANCE0)
bgl.glEnable(bgl.GL_CLIP_DISTANCE1)
bgl.glEnable(bgl.GL_CLIP_DISTANCE2)
bgl.glEnable(bgl.GL_CLIP_DISTANCE3)
bgl.glEnable(bgl.GL_CLIP_DISTANCE4)
bgl.glEnable(bgl.GL_CLIP_DISTANCE5)

We could expose the function void GPU_clip_distances(int distances_enabled);

But maybe a better design needs to be studied, since just exposing the number of clip distances, although it solves the immediate problem, doesn't seem to be enough.
It is still necessary to configure the clip_planes in the shader.

Fortunately, for builtin shaders, we just need to set the "WorldClipPlanes" uniform:

GPU_batch_uniform_4fv_array(batch, "WorldClipPlanes", 6, rv3d.clip_planes);

I’m studying the possibility of adding a gpu.state.clip_distances_set(num)


yes, i know, i do that in vertex shaders:

vec4 pos = vec4(position, 1.0f);
gl_ClipDistance[0] = dot(clip_plane0, pos);

hmm, i use all custom shaders. builtins are not enough…

i learn as i go, so i am sure i am missing some essential pieces of knowledge about how the whole pipeline works. some simple table in the docs or release notes for updating scripts to use the gpu module would be very, very nice. something like what used to be:

bgl.glEnable(bgl.GL_DEPTH_TEST)
bgl.glDepthFunc(bgl.GL_LEQUAL)

is now (i hope)

gpu.state.depth_test_set(mode='LESS_EQUAL')

and so on…
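for what it's worth, a few more pairs that seem to map directly (my own notes, assuming blender 3.x gpu.state, not an official table):

import gpu

# bgl.glEnable(bgl.GL_BLEND) + bgl.glBlendFunc(bgl.GL_SRC_ALPHA, bgl.GL_ONE_MINUS_SRC_ALPHA)
gpu.state.blend_set('ALPHA')

# bgl.glLineWidth(2.0)
gpu.state.line_width_set(2.0)

# bgl.glPointSize(4.0)
gpu.state.point_size_set(4.0)

# bgl.glDepthMask(bgl.GL_FALSE)
gpu.state.depth_mask_set(False)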

thanks for looking into it 🙂


I looked into the idea, and maybe it's too soon to expose this method, since for custom shaders the developer would depend on gl_ClipDistance, which is only available in OpenGL.

Here is an example of how you can get around this limitation in custom shaders:
https://developer.blender.org/rBA8c8df3e36974c000ffb6b11c40c0e017902e98d7

thanks a lot, i’ll have a closer look soon.

Correction:
gl_ClipDistance is actually supported by Vulkan, but the number enabled is the number declared in the shader.

so, shall i use the same mechanism as here: ID_color_frag.glsl · rBA, i.e. discard in the fragment shader using my own planes and distances calculated in the vertex shader, or is something else possible? i always read that discarding is expensive and i am going to draw many millions of points…
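(to make sure i understand the mechanism, roughly this, with a single hypothetical clip_plane uniform, names are mine:)

import gpu

vert = '''
    uniform mat4 ModelViewProjectionMatrix;
    uniform vec4 clip_plane;  // hypothetical plane (a, b, c, d)
    in vec3 pos;
    out float clip_distance;
    void main()
    {
        vec4 p = vec4(pos, 1.0);
        clip_distance = dot(clip_plane, p);  // signed distance to the plane
        gl_Position = ModelViewProjectionMatrix * p;
    }
'''
frag = '''
    in float clip_distance;
    out vec4 fragColor;
    void main()
    {
        if (clip_distance < 0.0) {
            discard;  // drop fragments on the negative side of the plane
        }
        fragColor = vec4(1.0);
    }
'''
shader = gpu.types.GPUShader(vert, frag)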

That’s a good point.

It's still unclear whether GPU_clip_distances will be deprecated.
But surely bgl.GL_CLIP_DISTANCEi will be.
So from the developer's point of view, it's better to risk GPU_clip_distances.

I will implement it.


Implemented:
https://developer.blender.org/rB06a60fe9f70061cd28af3ffcbf075273225d54b2
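A minimal usage sketch for the new function with a custom shader (the clip_plane uniform and its value are just an illustration):

import gpu
from gpu_extras.batch import batch_for_shader

vert = '''
    uniform mat4 ModelViewProjectionMatrix;
    uniform vec4 clip_plane;
    in vec3 pos;
    void main()
    {
        vec4 p = vec4(pos, 1.0);
        gl_ClipDistance[0] = dot(clip_plane, p);
        gl_Position = ModelViewProjectionMatrix * p;
    }
'''
frag = '''
    out vec4 fragColor;
    void main() { fragColor = vec4(1.0, 0.5, 0.0, 1.0); }
'''

shader = gpu.types.GPUShader(vert, frag)
batch = batch_for_shader(shader, 'TRIS', {"pos": [(0, 0, 0), (1, 0, 0), (0, 1, 0)]})

def draw():
    gpu.state.clip_distances_set(1)  # enable gl_ClipDistance[0]
    shader.bind()
    shader.uniform_float("clip_plane", (1.0, 0.0, 0.0, 0.5))
    batch.draw(shader)
    gpu.state.clip_distances_set(0)  # restore the default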


hi, can somebody please explain to me how to interpret the number obtained from gpu.capabilities.max_batch_vertices_get?
for example on my machine i get:

>>> gpu.capabilities.max_batch_vertices_get()
1048575

roughly one million, right? but i can create a batch from 50m points (each point with location, color, and sometimes a normal as well) without a problem? over that, blender tends to crash while creating the batch (from my crash testing, around 70-80m), so i split the points into several batches, 50m each. but that is an empirical value, i'd like to know more…
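(for reference, my splitting looks roughly like this; the chunk size is purely empirical and batches_for_points is just my own helper:)

from gpu_extras.batch import batch_for_shader

# purely empirical chunk size, found by crash testing
CHUNK = 50_000_000

def batches_for_points(shader, coords, colors):
    # split point data into several batches of at most CHUNK vertices
    result = []
    for start in range(0, len(coords), CHUNK):
        content = {
            "pos": coords[start:start + CHUNK],
            "color": colors[start:start + CHUNK],
        }
        result.append(batch_for_shader(shader, 'POINTS', content))
    return result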


gpu.capabilities.max_batch_vertices_get() is just GL_MAX_ELEMENTS_VERTICES in OpenGL.
I'm not sure what this value is used for, but it looks like it only affects performance:
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glDrawRangeElements.xhtml

I will study and edit the documentation.


aaand one more thing, can we draw antialiased from python? right now, as far as i know, nothing drawn from the python gpu module is antialiased. in bgl we had GL_LINE_SMOOTH and such; it was not ideal, but it worked. and with fxaa on top of my drawings i run into performance problems or crashes: ⚓ T94202 crash or error when using GPUFrameBuffer.read_color(... data=data)

Hi All, I have been experimenting with the Python gpu module and going through the examples here. I can draw lines and wireframes as in the examples, but I can't make a solid triangle that is properly shaded; it is just a solid color with no lighting dependence.

When I send a list of normals using batch = batch_for_shader(shader, 'TRIS', {"pos": coords, "normal": normals}), it gives me a ValueError: Unknown attribute name. I asked this in chat and one reply was that 'normal' is not a valid attribute, but from the vbo example here, it seems that 'normal' should be valid. My code is below:

import bpy
import gpu
from gpu_extras.batch import batch_for_shader

coords = [(0,0,0), (0,1,0), (0,0,1)]
normals = [(1,0,0), (1,0,0), (1,0,0)]
colors = [(0.5,0.5,0.5, 1), (0.5, 0.5, 0.5, 1), (1.0, 1.0, 0.5, 1)]

shader = gpu.shader.from_builtin('3D_FLAT_COLOR')
batch = batch_for_shader(shader, 'TRIS', {"pos": coords, "color": colors, "normal": normals})  # raises ValueError: Unknown attribute name

def draw():
    shader.bind()
    batch.draw(shader)

bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')

If anyone has an idea what I am doing wrong, I would appreciate any help. Thanks very much, and thanks for making this awesome software.


I see now that none of the gpu.shader built-ins accept the normal attribute, so it seems that I will need to write a custom one.
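For anyone else hitting this, here is a minimal custom shader sketch that accepts a normal attribute, with simple one-directional-light Lambert shading (the light direction and color are just example values):

import bpy
import gpu
from gpu_extras.batch import batch_for_shader

vert = '''
    uniform mat4 ModelViewProjectionMatrix;
    in vec3 pos;
    in vec3 normal;
    out vec3 v_normal;
    void main()
    {
        v_normal = normal;
        gl_Position = ModelViewProjectionMatrix * vec4(pos, 1.0);
    }
'''
frag = '''
    uniform vec4 color;
    in vec3 v_normal;
    out vec4 fragColor;
    void main()
    {
        vec3 light_dir = normalize(vec3(0.5, 0.5, 1.0));  // example light
        float diffuse = max(dot(normalize(v_normal), light_dir), 0.0);
        fragColor = vec4(color.rgb * diffuse, color.a);
    }
'''

coords = [(0, 0, 0), (0, 1, 0), (0, 0, 1)]
normals = [(1, 0, 0), (1, 0, 0), (1, 0, 0)]

shader = gpu.types.GPUShader(vert, frag)
batch = batch_for_shader(shader, 'TRIS', {"pos": coords, "normal": normals})

def draw():
    shader.bind()
    shader.uniform_float("color", (0.5, 0.5, 0.5, 1.0))
    batch.draw(shader)

bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')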


Any chance we could see an example to read the pixel value under the mouse in the current View 3D?

fb = gpu.state.active_framebuffer_get() is only returning black for me, I guess it’s not getting the View 3D framebuffer.

EDIT: So the problem is not active_framebuffer_get, but event.mouse_x and event.mouse_y, which return incorrect values in 3.3 and above.
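A minimal sketch of the read itself, assuming it runs inside the region's draw handler and that region-relative coordinates (event.mouse_region_x / event.mouse_region_y, stored by the operator that owns the handler) are the right space:

import gpu

def draw(mouse_xy):
    # mouse_xy = (event.mouse_region_x, event.mouse_region_y), i.e.
    # coordinates relative to the region this handler draws in
    x, y = mouse_xy
    fb = gpu.state.active_framebuffer_get()
    pixel = fb.read_color(x, y, 1, 1, 4, 0, 'FLOAT')
    print([float(c) for c in pixel[0][0]])  # [r, g, b, a] under the mouse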