GSoC 2019: Core Support of Virtual Reality Headsets through OpenXR

This thread is to discuss and get feedback about my GSoC project.
Currently available documentation, including the project proposal(s) is available here: https://wiki.blender.org/wiki/User:Severin/GSoC-2019/

The branch is called soc2019-openxr.

Current state: The branch should compile with necessary files and libraries from the OpenXR SDK. With that, Blender can dynamically select and link to the operating system’s active OpenXR runtime.
So for example if you have the Windows Mixed Reality OpenXR Developer Preview installed and set up, Blender can connect to it to interface with Windows Mixed Reality compatible HMDs.
From the Blender code side, we can do OpenXR function calls now (e.g. xrCreateInstance). The code also already attempts to initialize OpenXR.

So there’s nothing really interesting for users to see yet; basically, we’re now set up to use OpenXR.
Note that nothing has been tested on macOS yet.


I could use some feedback from platform maintainers/project admins:

For OpenXR we need some headers (most importantly openxr.h) and need to link to the SDK’s OpenXR loader library. For convenience I added the needed sources to extern/, so that people compiling the branch don’t need to compile the SDK themselves.
I guess for a later merge, having the lib as a 3rd party dependency makes more sense, so I’ve also added a find_package() variant. To use it, OPENXR_USE_BUNDLED_SRC can be disabled. Later we can remove the bundled-sources version.

Do maintainers/admins agree that this approach is fine for now? Obviously we may want to re-evaluate later on.

CCing @ideasman42, @sergey and @brecht for feedback on this.

For such a small library I could go either way; it doesn’t really add that much to the build time. Did you make any changes to the code? The OpenXR SDK mentions needing Python 3.6 at build time, which may be problematic on Windows.

How big is the library? What are the needed dependencies to get it compiled? Is it available in the common Linux distributions?

For a small library which is not likely to be available via package managers, having it in extern/ is the easiest way to go.

The convention is to use WITH_SYSTEM_FOO, so OPENXR_USE_BUNDLED_SRC should be renamed and its meaning negated.
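Following that convention, the renamed option could look roughly like this (a sketch only: the name WITH_SYSTEM_OPENXR and the paths are illustrative, not the branch’s actual CMake):

```cmake
# Sketch: the WITH_SYSTEM_FOO convention applied to OpenXR.
option(WITH_SYSTEM_OPENXR "Use the system's OpenXR SDK instead of the bundled sources" OFF)

if(WITH_SYSTEM_OPENXR)
  find_package(OpenXR REQUIRED)
else()
  # Fall back to the sources bundled in extern/.
  add_subdirectory(extern/openxr)
endif()
```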

Oh OK, I see what you did: at build time it generates a few files, and these were included in extern/ so we could build it without any deps.

Updating the lib in extern/ is going to be less than straightforward, so be sure to write up some instructions there.

@sergey the lib is about 800k in code. It’s not commonly available (yet?) in most distros, since the first preview version only came out near the end of March. I assume long term it’ll be pretty common, but we’re early adopters at this point, so it may take a bit to get there.

What is the relationship like with Marui-Plugin?

(Wanted to keep previous post compact so I didn’t go into the rationales.)

  • Python is only needed to generate some files. We could put these generated files into extern/ (at least if they are system independent) so we wouldn’t require Python for that.
  • The loader depends on JsonCpp (<8k LOC)
  • The build instructions for the SDK do list some further dependencies. But it seems like most are only required for the demo applications, not the OpenXR loader (the part we care about).
  • My guess is that the SDK will be provided by common distros in the not too distant future. It’s too early to tell though, so I’m really guessing.
  • We only need the loader lib (<9k LOC) and the OpenXR headers (~2k LOC), not the entire SDK.

As @LazyDodo noticed, I placed the generated files into extern/, not the original code (and resources) that generates them. Keeping the originals would involve XML parsing and Python script execution at compile time, and indeed, it wasn’t trivial to get all that up and running. So I share the concern that updating this isn’t going to be straightforward at all, especially if the Khronos/SDK guys decide to change the build process.
So from my experience of trying to get this to compile and link, I wouldn’t suggest the bundled-sources approach.

Using find_package() OTOH seems pretty fine, especially given the tools we have in place now (install_deps.sh, prebuilt libs, etc.). And there’s a good chance the SDK will be added to distro packages.

I’ve been in touch with @makx, the MARUI-Plugin CEO, prior to submitting my proposal. I wanted to be sure there’s no conflict. He explicitly stated that he doesn’t see one and that he’s interested in collaborating on a proper VR UI once my project is ready for that.
BlenderXR was/is a temporary project for the time until OpenXR support is there.

I have no objections to having it in svn for windows, it was dead easy to build.

Hello!
This is correct. We hope we could (and can in the future) help to provide code for giving Blender a great VR/AR/XR user interface.
But BlenderXR was only a temporary branch and is not intended to compete with the normal Blender development.
Please let me know if there is any way we can help this GSoC project to be successful.

Hi, I am looking forward to your GSOC project, as I actively use Blender and VR!

I noticed something using the Marui BlenderXR, and I’m not sure whether you are aware of it or have thought of a solution. When you look at something with a planar reflection probe on it, the reflection works for only one eye, while with the other eye you only see flickering. I assume this is because Blender renders a reflection probe only once, from one perspective. With a headset there would need to be two renders per probe.

When you get to working builds, I can help with testing.

Daily Win64 builds now available here.

Awesome, thanks a lot!

This is an interesting point. Clement just made me aware of DRWViews, which should be perfect for optimized drawing from multiple perspectives (single pass, two draw calls to start with). They should also help solve this relatively easily (I’m not sure if probes use them already though). That will obviously come with a performance penalty, as all probe reflections will have to be rendered twice. Maybe using the same reflection for both eyes is an acceptable compromise though.

Phase 1 Evaluation

I’ve just completed my part of the phase 1 GSoC evaluation (mentor evaluation and project feedback). This seems like a good point to do a public evaluation of the first 3.5 weeks from my POV.

General project management:

  • So far, I’ve been mostly working on my own. I know my way around the code and I should be experienced enough to make reasonable decisions. Also, the first phase was mostly about preparing the internals, so there’s not much to give feedback on from a user’s point of view yet.
    Whenever I felt a need for feedback from Dalai or other devs, I got in touch with them. And I write weekly reports as requested.
    I’d be more than happy to hear from mentors if they think this was fine given the state of things.
  • For the couple of times I wanted feedback from other devs, they were there to help. That includes Dalai, who has also been very responsive whenever needed.
  • University exams and assignment deadlines kept and keep me busy, so I haven’t been able to deliver at full steam yet. A week from Friday the busiest phase will be over; things should become more relaxed then.

Ongoing technical project challenges:

  • Biggest issue for my project is that I have to work in a very limited development environment. Only two OpenXR runtimes (platforms implementing the new & provisional OpenXR specification) are available: Windows Mixed Reality OpenXR Developer Preview and Collabora’s Monado.

    • Monado - although I strongly support it as a FOSS enthusiast - has limited features and, worse, it doesn’t even want to run for me, even after going through required hoops like updating my Linux distro to a testing version. This is for sure solvable, but it takes time away from the project.
      I really hope this will change before too long.
    • The Windows platform of course requires me to use Windows and the Windows development environments. At least the OpenXR runtime is easy to set up and works.
    • So I have little choice but to use a Windows dev environment. One that I’m not used to and that turns out to have quite some quirks (see next point).
  • For most of the time, I couldn’t use a C/C++ debugger at all. The Windows debugger keeps breaking whenever OpenXR or the Windows MR runtime comes into play. I can continue execution to some point, but not to where I need it now. I’m not sure where the issue lies: in the debugger, our code, the OpenXR SDK code or the runtime’s code. The same happens with the OpenXR SDK’s example applications though, suggesting it’s not a fault on our side.
    Either way this is a big problem.

  • Graphics, VR and OS dependent code is in general difficult to debug. Execution passes through levels not in our control. You often have to pass around opaque handles (pointers to memory with foreign data structures we can’t read). You rely on the provided error-detection mechanisms, which tend to be vague and at times even broken.

    For example, right now I’m trying to use the DirectX compatibility layer I added to pass viewport rendering to the Windows MR runtime (through OpenXR). For some reason the extension I use for DirectX compatibility (NV_DX_interop) fails, even though I use it almost the same way I did previously. I can’t use the Windows debugger (see above), the error happens in foreign code, and the error message is not helpful and possibly broken (same error as described here).

It was expected that the bleeding-edge nature of the project would present quite some challenges. It definitely hasn’t been smooth sailing, but I’ve overcome all challenges so far and I’m on schedule. Still frustrating at times…

Few words on the OpenXR specification:

  • I have no doubt the specification is going to be the standard for VR/AR/MR/…
  • The specification is documented quite well given its early state. The OpenXR SDK contains a good reference implementation, which I permanently use as a guide.
  • More OpenXR runtimes are expected to appear soonish. I wouldn’t bet on that happening during the GSoC period though.
  • I think it was a good decision to encapsulate all OpenXR calls behind an abstraction (at the GHOST level by now). Not just because there’s low-level OS and graphics-library related code: there’s quite some complexity in OpenXR too, which can be simplified a lot through a layer of abstraction. I guess it’s comparable to Vulkan in that regard.

And finally on user testing:

  • Many people already offered help on testing. Gotta love this community!
  • The unfortunate thing is that users will also rely on the availability of OpenXR runtimes. I guess only testing with the Windows MR runtime, and therefore WMR headsets, is feasible for now.
  • I’m planning to document the requirements and (few) setup steps once the project is ready for user testing. I might also do a call for testing then. I expect that to happen in 2-3 weeks.
@severin whatever you did last week, great job!!!

After some minor code changes (force the backend to DX) I was able to get output on my oculus headset!

Framerate isn’t great, (a rather steady 19.7 fps) but IT IS WORKING!!!

@Severin did some changes today to make testing on Oculus work. If you have an Oculus Rift, you can try starting Blender with the blender_oculus batch file and cross your fingers!
