GSoC 2019: Core Support of Virtual Reality Headsets through OpenXR

Hello!
This is correct. We hope we have helped (and can continue to help) provide code for giving Blender a great VR/AR/XR user interface.
But BlenderXR was only a temporary branch and is not intended to compete with the normal Blender development.
Please let me know if there is any way we can help this GSoC project to be successful.

6 Likes

Hi, I am looking forward to your GSOC project, as I actively use Blender and VR!

I noticed something using the Marui BlenderXR, I’m not sure whether you are aware or have thought of a solution. When you look at something with a planar reflection probe on it, then the reflection works for only one eye, but with the other eye you only see flickering. I assume this is because Blender only renders a reflection probe once from one perspective. With a headset there would need to be two renders per probe.

When you get to working builds, I can help with testing.

3 Likes

Daily Win64 builds now available here.

4 Likes

Awesome, thanks a lot!

This is an interesting point. Clement just made me aware of DRWViews, which should be perfect for optimized drawing from multiple perspectives (single pass, two draw calls to start with). They should also help solve this relatively easily (not sure if probes use them already though). That will obviously come with a performance penalty, as all probe reflections will have to be rendered twice. Maybe it's acceptable to use the same reflection for both eyes as a compromise though.

1 Like

Phase 1 Evaluation

I’ve just completed my part of the phase 1 GSoC evaluation (mentor evaluation and project feedback). This seems like a good point to do a public evaluation of the first 3.5 weeks from my POV.

General project management:

  • So far, I’ve been mostly working on my own. I know my way around in the code and I should be experienced enough to make reasonable decisions. Also, the first phase was mostly about preparing the internals, so there’s not much to give feedback on from a user’s view yet.
    Whenever I felt a need for feedback from Dalai or other devs, I got in touch with them. And I write weekly reports as requested.
    I’d be more than happy to hear from mentors whether they think this was fine given the state of things.
  • For the couple of times I wanted feedback from other devs, they were there to help. That includes Dalai, who has also been very responsive whenever needed.
  • University exams and assignment deadlines kept and keep me busy, so I haven’t been able to deliver at full steam yet. A week from Friday the busiest phase is over; things should become more relaxed then.

Ongoing technical project challenges:

  • Biggest issue for my project is that I have to work in a very limited development environment. Only two OpenXR runtimes (platforms implementing the new & provisional OpenXR specification) are available: Windows Mixed Reality OpenXR Developer Preview and Collabora’s Monado.

    • Monado - although I strongly support it as a FOSS enthusiast - has limited features and, worse, it doesn’t even want to run for me. Even after having gone through required hoops like updating my Linux distro to a testing version. This is for sure solvable, but takes time away from the project.
      I really hope this will change before too long.
    • The Windows platform of course requires me to use Windows and the Windows development environments. At least the OpenXR runtime is easy to set up and works.
    • So I have little choice but to use a Windows dev environment. One that I’m not used to and that turns out to have quite some quirks (see next point).
  • For most of the time, I couldn’t use a C/C++ debugger at all. The Windows debugger keeps breaking whenever OpenXR or the Windows MR runtime comes into play. I can continue execution to some point, but not to where I need it now. I’m not sure where the issue lies: in the debugger, our code, the OpenXR SDK code, or the runtime’s code. The same happens with the OpenXR SDK’s example applications though, suggesting it’s not a fault on our side.
    Either way this is a big problem.

  • Graphics, VR and OS dependent code is in general difficult to debug. Execution passes through levels not in our control. You often have to pass around opaque handles (pointers to memory with foreign data structures we can’t read). You rely on provided error-detection mechanisms, which tend to be vague and at times even broken.

    For example, right now I’m trying to use the DirectX compatibility layer I added to pass viewport rendering to the Windows MR runtime (through OpenXR). For some reason the extension I use for DirectX compatibility (NV_DX_interop) fails, even though I use it almost the same way I used it previously. I can’t use the Windows debugger (see above), the error happens in foreign code, and the error message is not helpful and possibly broken (same error as described here).

It was expected that the bleeding-edge nature of the project would present quite some challenges. It’s definitely not been smooth sailing, but I’ve overcome all challenges so far and I’m on schedule. Still frustrating…

Few words on the OpenXR specification:

  • I have no doubt the specification is going to be the standard for VR/AR/MR/…
  • The specification is documented quite well given its early state. The OpenXR SDK contains a good reference implementation that I constantly use as a guide.
  • More OpenXR runtimes are expected to appear soonish. I wouldn’t bet on that happening during the GSoC period though.
  • I think it was a good decision to encapsulate all OpenXR calls behind an abstraction (at the GHOST level for now). Not just because there’s low-level OS and graphics library related code. There’s quite some complexity in OpenXR too that can be simplified a lot through a layer of abstraction. I guess it’s comparable to Vulkan in that regard.

And finally on user testing:

  • Many people already offered help on testing. Gotta love this community!
  • The unfortunate thing is that users will also rely on the availability of OpenXR runtimes. I guess only testing with the Windows MR runtime, and therefore WMR headsets, is feasible for now.
  • I’m planning to document the requirements and (few) setup steps once the project is ready for user testing. I might also do a call for testing then. I expect that to happen in 2-3 weeks.
11 Likes

@julianeisel whatever you did last week, great job!!!

After some minor code changes (forcing the backend to DX) I was able to get output on my Oculus headset!

Framerate isn’t great (a rather steady 19.7 FPS), but IT IS WORKING!!!

9 Likes

@julianeisel made some changes today to make testing on Oculus work. If you have an Oculus Rift, you can try starting Blender with the blender_oculus batch file and cross your fingers!

7 Likes

@julianeisel ironed out the last issues preventing the use of a pure OpenGL backend. There’s still room for improvement, but it’s starting to look pretty great!!

Wow that is looking promising! Can’t wait to try it out and help with testing, but I can only do that in 3 weeks from now (on holidays).

@julianeisel did some optimizations; a basic scene now almost hits 90 FPS

4 Likes

The barber shop never looked better!

7 Likes

Sadly, Oculus software update 1.39 broke the unofficial OpenXR support. It seems to report OpenXR 1.0, and our 0.90 loader (the latest publicly known version) is not super thrilled about that :frowning:

so OpenXR won’t be supported anymore? I hate facebook!

It will be supported; the support hasn’t even been officially released yet. The thing is, it was just changed to use the also-unreleased OpenXR 1.0, which is not supported by the OpenXR functionality available to the public (and us). That’s another hint that OpenXR 1.0 will be released soon, likely during SIGGRAPH.
So we just temporarily can’t use the Oculus runtime.

1 Like

I’ll clarify what happened.

  1. We found some DLLs in the Oculus software folder that implemented OpenXR support and worked with the OpenXR SDK 0.90.

  2. Oculus never said anything about OpenXR support; we were just lucky to stumble upon those DLLs and were testing Blender with them.

  3. The latest Oculus software update bumped the OpenXR version used in those DLLs to 1.0, which no longer works with our 0.90 SDK.

So now we have to wait for Khronos to officially announce 1.0. Once that’s done, we update our OpenXR SDK and all will (should) be good again.

We were lucky to get early access; too bad it broke, but there’s no need to be upset with Oculus here.

6 Likes

Well that was fast. Oculus implementation (and a few others) apparently coming this week.

4 Likes

Well yeah, you could kinda see that one coming a mile away, happy to see it’s out though :slight_smile:

The Last Third

We’re approaching the last third of the coding period. At this point I’d like to deviate from my schedule and work on stuff that I find more important: namely performance improvements and polish to get the branch into a mergeable state.
In the end we should have stable and well-performing VR viewport rendering support. This would be the base from which we can work on a richer VR experience, possibly with help from @makx and his MARUI-Plugin team.

Note that I’ve already done some work on performance. Last week we went from ~20 FPS to ~43 FPS in my benchmarks with the Classroom scene. Others have reported even bigger speedups.

The following explains a number of tasks I’d like to work on.

Perform VR Rendering on a Separate Thread

I see four reasons for this:

  • OpenXR blocks execution to synchronize rendering with the HMD refresh rate. This would conflict with the rest of Blender, potentially causing lags.
  • VR viewports should redraw continuously with updated view position and rotation, unlike the usual case in Blender, where we try to perform redraws only when needed. The usual Blender main loop with all of its overhead can be avoided by giving the VR session its own, parallel draw loop.
  • On a dedicated thread, we can keep a single OpenGL context alive, avoiding expensive context switches which we could not avoid on a single thread.
  • With a bit more work (see below), viewports can draw entirely in parallel (at least the CPU side of it), pushing performance even further.

I already started work on this (b961b3f0c9) and am confident it can be finished soon.

Draw-Manager Concurrency

Get the CPU side of the viewport rendering ready for parallel execution.
From what I can tell, all that’s missing is making the global DST and batches per-thread data.

In general, this should improve viewport performance in cases where offscreen rendering already takes place on separate threads (i.e. Eevee light baking). Most importantly for us, it should minimize wait times when regular 3D views and VR sessions both want to draw.

Single Pass Drawing

We currently call viewport rendering for each eye separately, i.e. we do two-pass drawing. The ideal thing to do would be drawing in a single pass, where each OpenGL draw call would use shaders to push pixels to both eyes. This would be quite some work though, and is not in the scope of this project. We can, however, take a significant step towards it by letting every OpenGL call execute twice (once per eye) with adjusted view matrices and render targets. This would only require one pass over all geometry to be drawn.
The 2.8 draw-manager already contains an abstraction, DRWView, which according to @Hypersomniac is perfectly suited for this.

So I could work on single pass (but multiple draw calls) drawing by using DRWView.

Address Remaining TODOs

T67083 lists a number of remaining TODOs for the project. They should probably all be tackled during GSoC.

6 Likes

I could use some feedback from other devs on the points made above:

  • @Hypersomniac is what I wrote above on draw-manager concurrency correct, or is there more work needed that I didn’t notice? I.e., is this doable in a few days of work?
  • @sergey it seems that for drawing on a separate thread I need to give it its own depsgraph. Only to ensure valid batches for the separate GPU context I think. Is that correct? Would that mean duplicated updates if so, or would the depsgraph only update batches and share the rest with other depsgraphs? I guess thanks to CoW, we just need to do correct tagging to avoid unnecessary workloads?

Also, I didn’t pay much attention to the increased memory load this all would bring, so if you see a serious issue, please let me know.

1 Like