XR controller support

Hi everybody!

I was talking with the people in the xr channel on blender.chat about controller support in the mixed reality branch. I am interested in developing for Blender but have little experience with the Blender source, so there will probably be a lot of newbie questions; I figured they are better placed here.

I got a custom controller representation working with gizmos in Blender, but they do not show up in VR. As @muxed-reality explained to me, this is due to the missing context in the offscreen draw loop. I tried registering the draw function of my gizmo with draw_handler_add:

args = (self, context)
self._handle = bpy.types.SpaceView3D.draw_handler_add(self.gizmo.draw, (self.gizmo, context), 'XR', 'POST_VIEW')

in the setup function of my gizmo group, but this gives me an error: "TypeError: draw() takes 2 positional arguments but 3 were given". This was probably a little naive.
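For context: draw_handler_add calls the handler with the args tuple unpacked, and self.gizmo.draw is already a bound method, so passing (self.gizmo, context) hands draw() a third argument on top of the implicit self. A zero-argument handler matching an empty args tuple avoids this (the function name here is made up, just to illustrate the calling convention):

import bpy

def draw_xr():
    # Called on each XR redraw with exactly the args passed below,
    # so an empty tuple means a zero-argument handler.
    pass  # real drawing code (e.g. a GPUBatch draw) would go here

handle = bpy.types.SpaceView3D.draw_handler_add(draw_xr, (), 'XR', 'POST_VIEW')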

Another solution would be to just add controller objects and update their position in python. Would one write a modal operator for this? If so, what would be the event that starts the operator?
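For reference, here is a generic sketch of how such a modal operator could look, using a window timer as the driving event; the XR-specific pose lookup is left as a placeholder, since that is exactly the open question, and the operator name is hypothetical:

import bpy

class VIEW3D_OT_xr_controller_update(bpy.types.Operator):
    """Hypothetical modal operator updating controller objects on a timer."""
    bl_idname = "view3d.xr_controller_update"
    bl_label = "Update XR Controller Objects"

    _timer = None

    def modal(self, context, event):
        if event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'CANCELLED'}
        if event.type == 'TIMER':
            # Placeholder: copy the controller poses onto the objects here.
            pass
        return {'PASS_THROUGH'}

    def execute(self, context):
        wm = context.window_manager
        self._timer = wm.event_timer_add(0.01, window=context.window)
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}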

The last solution would be to create custom models in wm_xr_draw_controllers(), but this seems a little hacky.

As I understand it, the preferred option would be to use the XR_MSFT_controller_model extension, but this only works with Windows Mixed Reality at the moment. Did I miss anything?

I apologize if these are really basic questions, but I’m also willing to learn.

Best regards,

Tobias


Hi Tobias,

Thanks for offering to help out with VR/XR controller support.

Although it would be nice to have custom gizmos working so that users and add-on developers can use their own custom models, the preferred option is, as you said, to use the XR_MSFT_controller_model extension and draw its glTF model using one of Blender’s built-in draw engines (most likely the overlay engine, but Workbench/EEVEE might also be desirable).

Again, as you said, currently only Windows Mixed Reality seems to support this extension, so a suitable fallback option is needed. This could take the form of including a custom controller model in the Blender draw cache, or just reverting to primitive shapes. Maybe @julianeisel has a better idea?

I created a diff with some WIP code for what I think is the way forward for controller visualization. It should apply cleanly against the xr-controller-support branch if you want to test it out.

Actually, it would be a big help if you could continue this work, since I think I’ll be busy with code review for the other VR patches for the foreseeable future. The main issue I ran into was that I don’t think Blender has any glTF utilities in C/C++ (I may be wrong; @julianeisel can confirm), so I started using tinygltf like the Microsoft OpenXR samples do (which, by the way, are a great resource for using the controller model extension).

Just as a note, we don’t have to do the controller-specific part itself in the draw engine. We can just pass it a GPUBatch containing the controller geometry somehow. @JeroenBakker suggested this - I think he said we could store it in GPUViewport, or maybe it was more on the WM level, I don’t recall.

I think we should do it this way:

  1. If available, use OpenXR (extensions) to get the geometry. This probably gives the best result and doesn’t require us to create & store the models ourselves. I hope this will become available for all OpenXR runtimes and devices before too long.
  2. Until then, we can bundle our own models for common controllers. This can be used as a fallback for runtimes/devices not covered by 1.
  3. Last resort: Draw some generic dummy controllers. These could even be simple coordinate axes (see the sketch after this list).
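A rough sketch of what option 3 could look like, here in Python using the same gpu/batch_for_shader API as the script further down (the colors and axis length are arbitrary placeholders):

import gpu
from gpu_extras.batch import batch_for_shader

shader = gpu.shader.from_builtin('3D_SMOOTH_COLOR')

# Three short line segments along X/Y/Z as a stand-in controller model.
coords = [(0, 0, 0), (0.1, 0, 0),
          (0, 0, 0), (0, 0.1, 0),
          (0, 0, 0), (0, 0, 0.1)]
colors = [(1, 0, 0, 1), (1, 0, 0, 1),  # X axis: red
          (0, 1, 0, 1), (0, 1, 0, 1),  # Y axis: green
          (0, 0, 1, 1), (0, 0, 1, 1)]  # Z axis: blue
batch = batch_for_shader(shader, 'LINES', {"pos": coords, "color": colors})

def draw_dummy_controllers():
    # The controller pose would have to be applied on top of this;
    # as written it just draws the axes at the world origin.
    batch.draw(shader)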

We don’t have any glTF functionality in C/C++. I think the glTF usage of the XR_MSFT_controller_model extension is simple enough that we could just write the reading code ourselves (we already have libs to read the JSON part, AFAIK). But TinyGLTF looks like a much better way to go.
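For illustration only (the actual Blender code would be C++ with TinyGLTF, not Python): the binary glTF blob is just a 12-byte header followed by a JSON chunk and a binary chunk, so the container-level reading really is simple. A minimal sketch of the GLB layout, assuming glTF 2.0:

import json
import struct

def read_glb(data):
    """Split a binary glTF (.glb) blob into its JSON dict and binary buffer."""
    # 12-byte header: magic 'glTF', version, total length.
    magic, version, _length = struct.unpack_from('<III', data, 0)
    assert magic == 0x46546C67 and version == 2

    offset, gltf_json, buffer = 12, None, None
    while offset < len(data):
        # Each chunk: uint32 length, uint32 type, then the payload.
        chunk_len, chunk_type = struct.unpack_from('<II', data, offset)
        payload = data[offset + 8:offset + 8 + chunk_len]
        if chunk_type == 0x4E4F534A:    # 'JSON'
            gltf_json = json.loads(payload)
        elif chunk_type == 0x004E4942:  # 'BIN\0'
            buffer = payload
        offset += 8 + chunk_len
    return gltf_json, buffer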


@Morbias1986
Btw, I tested the XR draw handlers using this script (credit: iceythe from blenderartists) and it seems to work fine:

import bpy
import gpu
from gpu_extras.batch import batch_for_shader
import bmesh

shader = gpu.shader.from_builtin('3D_SMOOTH_COLOR')
ob = bpy.context.object  # assumes the active object is a mesh in Edit Mode

def draw():
    bm = bmesh.from_edit_mesh(ob.data)
    verts = [v.co[:] for v in bm.verts]

    # One color per vertex: faces red, edges green.
    face_colors = ((0.9, 0.25, 0.25, 1),) * len(verts)
    edge_colors = ((0.0, 1.0, 0.25, 1),) * len(verts)

    # Vertex index triples for the triangles of the selected faces.
    face_tri_indices = [[loop.vert.index for loop in looptris]
                        for looptris in bm.calc_loop_triangles()
                        if looptris[0].face.select]

    batch1 = batch_for_shader(
        shader, 'TRIS',
        {"pos": verts, "color": face_colors},
        indices=face_tri_indices)

    # Vertex index pairs for the selected edges (tuples, not generators,
    # so the index buffer gets proper sequences).
    edge_indices = [tuple(v.index for v in e.verts) for e in bm.edges if e.select]

    batch2 = batch_for_shader(
        shader, 'LINES',
        {"pos": verts, "color": edge_colors},
        indices=edge_indices)

    batch1.draw(shader)
    batch2.draw(shader)

bpy.types.SpaceView3D.draw_handler_add(draw, (), 'XR', 'POST_VIEW')

In addition, there are “Motion Capture” objects that you can assign via the VR sidebar, which will automatically update with the headset/controller positions. You can also set these objects in Python via context.window_manager.xr_session_settings.headset_object/controller0_object/controller1_object.
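For example, something like this should work (the object names are made up; the properties are the ones named above):

import bpy

settings = bpy.context.window_manager.xr_session_settings

# Assign existing scene objects that should follow the headset/controllers.
settings.headset_object = bpy.data.objects.get("Headset")
settings.controller0_object = bpy.data.objects.get("Controller.L")
settings.controller1_object = bpy.data.objects.get("Controller.R")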

There are a lot of great ideas here, thanks! I will hopefully have some time on the weekend to try some things.

The motion capture objects sound like a quick fix for just getting controllers in there (imported as FBX, for example).

Regarding the draw handlers: I will definitely try this. I think my problem was that I tried to hijack the gizmo draw function for this, or I might have messed up the draw_handler_add call itself. But this might be a solution for drawing controller models with Python only, until the controller model extension is supported.

I will look at the diff; it sounds really promising. I will look into it and let you know if I feel able to continue the work on this, okay?

@julianeisel if one were to bundle one’s own controllers, how would one store them?

Where to cache the batch depends on the user-level design. WM would be the easiest, but it limits us to one set of visual controllers per session. In GPUViewport it would be easier to vary the visuals per viewport. In the end I would mark it as runtime data; from a technical perspective it does not really matter where we place it, as long as it matches the desired user design.

Hi. Just found this thread.

I tried to integrate the controllers and headset via the Asset Browser, using free glTF models.
I move them with the motion capture feature.

The next step is to get this in sync with the multi-user add-on, and also to drive all the controller buttons visually via existing actions for connected users.

Here you can find the current state… with a .blend file.