GSoC 2020: Virtual reality controller input/interaction support proposal

Hey,

I have been a developer focused on graphics and visualisation for several years. In the past I mostly worked on projects within my university, and this is my first attempt at approaching Blender development. I enjoy building customised tools with a bigger picture in mind. I am a master's student in computational science and engineering at EPFL, living in Lausanne, Switzerland.

I’m most interested in the XR input support project, based on the OpenXR action system. While brainstorming some extended ideas, I had trouble being sure I am on the right track. I can see that input support will open up a lot of possibilities for the kinds of interactions we can expect inside virtual reality. Here are some use cases I imagined implementing:

  • I wish to navigate (fly) through the scene by translating along whatever direction the controller points in.
  • I wish to rotate the scene around me about the z-axis (other rotations tend to make people feel uncomfortable) using the controller.
  • I wish to drag an object selected with the controller and rotate around it to inspect it.

I got these ideas from playing around with Google Earth VR. To implement them, I need to bind XrActions to XrPaths, find what a controller points at via either OpenGL selection or ray-casting, and define input mappings and user conventions.

Does anyone have advice here? Do you think these are reachable goals within the GSoC period? Are they high enough priority to appear on the wish list, or should I focus more on fundamental low-level work?

1 Like

Are you familiar with the 2019 GSoC OpenXR project?

2 Likes

Yes, I have looked through his proposal and reports; I should study his work more. I guess I should write less about these application-level aspects and focus more on providing stable and well-performing XR input support. Is that what you mean?

1 Like

You can have a look at the progress on the Developer portal:

https://developer.blender.org/T68998

https://developer.blender.org/T68994

https://developer.blender.org/T71347

@julianeisel is the master of disaster regarding XR development.

2 Likes

Thank you for your kind recommendation. This is the right direction to follow. Great to know someone else is also invested in XR development.

2 Likes

Name

Huajian Qiu

Contact

Email: [email protected]

Forum: wakin

Github: github.com/huajian1069

Synopsis

VR/AR/MR (collectively XR) is an entirely new way to interact with computer-generated 3D worlds, bringing a rich and immersive experience to artists. The groundwork for OpenXR-based XR support in Blender was merged into master not long ago, marking the end of Virtual Reality - Milestone 1. To approach Milestone 2, Continuous Immersive Drawing, controller input/haptics support is the natural next step and is waiting for design and engineering work.

In this project, I will work on bringing OpenXR-action-based interaction support to Blender. By the end of the project, users will be able to navigate the 3D world and select and inspect objects pointed at by controllers, on top of stable and well-performing low-level support. This project will lay the foundation for user input and thus take the next step towards Milestone 2.

Benefits

VR navigation without external assistance

Currently, scene inspection relies on assisted VR, i.e. having another person control the Blender scene with mouse or keyboard while you wear the headset. With controller input support, users will be able to navigate and inspect scenes actively and independently. Google Earth VR offers a great user experience in this respect, and it would be attractive to achieve the same in Blender.

Rich usability and visual assistance

When users are wearing a headset, they cannot see the real world, so drawing a visualisation of the controllers inside VR provides a better user experience. Related improvements include rendering a laser emitted from the controller, drawing a small dot on the surface the controller points at, and providing haptic feedback when interacting with a virtual scrollbar or button.

Rich controllability

Input integration and mapping will let users perform abstract VR actions (like “teleport”), physical motions (like flying) or graphical actions (like lensing) inside VR. These may also introduce some motion sickness. Integrating a black fade or other optic-flow-reducing techniques would make users more comfortable; I would prefer to define motion sickness reduction as a stretch goal.

Base to continuous immersive workflow inside VR

With controller interaction support, some existing workflows will be able to become continuous immersive workflows; grease pencil drawing and sculpting come to mind. The project will pave the way for deep integration of VR and Blender operators.

Future-proof adaptation to the VR/AR/MR industry standard

The OpenXR 1.0 specification was released on July 29th, 2019, and almost all major players in the XR industry have publicly committed to supporting OpenXR. Further steps towards VR support in master will give Blender an edge in this fast-paced industry.

Deliverables

1. Minimum Viable Product: successful communication between Blender and controllers based on OpenXR action system

Success means Blender will be able to reliably request input device states, such as the 6-DoF positional and rotational tracking pose and button states, and control haptic events, such as making a controller vibrate briefly in response. This will be accompanied by an error handling mechanism; an exception throw-and-catch mechanism is a good choice. When an error occurs, informative messages will be forwarded to the user.
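As a rough illustration of the error handling idea (the checking function and result codes below are placeholders, not an existing Blender or OpenXR API; real failures would come from OpenXR's XrResult values, where negative codes are errors), a failing low-level call could be surfaced as an informative exception like this:

```python
# Illustrative only: how low-level failures could be forwarded to users as
# informative exceptions. The result codes and messages are placeholders.
class XrActionError(Exception):
    """Raised when an action query or haptic request fails."""


def check_xr_result(result_code: int, what: str):
    """Raise an informative exception for a failing (negative) result code."""
    if result_code >= 0:  # success or other non-error status
        return
    raise XrActionError(f"{what} failed with OpenXR result {result_code}")


# Example: wrapping a (hypothetical) low-level pose query.
# check_xr_result(wm_xr_query_pose(hand='right', out=pose), "Pose query")
```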

2. Python query and API design

To abstract away low-level communication details, a Python encapsulation will be designed for querying controller states and events. The API will be designed to be convenient for high-level applications. Building on the previous work, one kind of navigation will be implemented to verify its success: users will be able to navigate (fly) forward and backward along whatever direction the controller points in. I think this will be a straightforward application; a sketch of what such an API could enable follows below.
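Here is a rough sketch of the kind of fly-navigation step this API could support. Only `mathutils` is a real module; the controller query functions in the usage comment are hypothetical placeholders for the API this deliverable would design:

```python
# Sketch of a possible fly-navigation step driven by controller input.
# The xr_* query functions are hypothetical placeholders; mathutils ships
# with Blender.
from mathutils import Quaternion, Vector


def fly_step(aim_rotation: Quaternion, trigger_value: float,
             viewer_location: Vector, speed: float = 0.05) -> Vector:
    """Translate the viewer along the controller's aim direction.

    trigger_value in [0, 1] scales the step; the aim direction is the
    controller's local -Z axis rotated into world space.
    """
    forward = aim_rotation @ Vector((0.0, 0.0, -1.0))
    return viewer_location + forward * (trigger_value * speed)


# Hypothetical usage inside a per-frame handler:
#   rot = xr_controller_aim_rotation('right')      # placeholder query
#   trig = xr_controller_trigger_value('right')    # placeholder query
#   viewer_loc = fly_step(rot, trig, viewer_loc)
```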

3. Picking support, finding what a controller points at

This covers two cases: pointing at an object with an actual surface and pointing at one without. Both cases will be handled, with more attention on the former. Based on the results of this deliverable, users will be able to drag an object selected with the controller and rotate around it to inspect it. One possible Python-level approach is sketched below.
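For the surface case, a scene ray cast from the controller pose could be done with Blender's existing `Scene.ray_cast()` (in older Blender versions its first argument is a view layer rather than a depsgraph); the controller pose itself is assumed to come from the new XR query API:

```python
# Possible Python-level picking for objects with real geometry, using
# Blender's existing Scene.ray_cast(). The controller pose inputs are
# assumed to come from the new XR query API.
import bpy
from mathutils import Quaternion, Vector


def pick_from_controller(aim_location: Vector, aim_rotation: Quaternion):
    """Cast a ray from the controller; return (object, hit_location) or None."""
    direction = aim_rotation @ Vector((0.0, 0.0, -1.0))
    depsgraph = bpy.context.evaluated_depsgraph_get()
    hit, location, normal, face_index, obj, matrix = bpy.context.scene.ray_cast(
        depsgraph, aim_location, direction)
    return (obj, location) if hit else None
```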

4. Visualisation of controller, End-user documentation on controller input/haptic

The controller mesh should be obtainable from the OpenXR runtime. It would be nice to render it in the HMD session as a visual aid. The end-user documentation will be straightforward, following the example of last year's precursor project.

5. Extending VR debugging utilities, Abstraction of code with good maintainability

Since graphics and XR applications are hard to debug, I will take advantage of the current debugging utilities and extend them as my development requires. It may be helpful to insert custom API layers between Blender and the OpenXR runtime. The abstraction should be carefully designed at the GHOST level, or the code could be ported into a new XR module. At the end of the project, the code will be tested and cleaned up to be consistent with the code base. The last two deliverables are aimed at being friendly to future end-users and developers.

Project Details

About controller states

Most controllers have one trigger button, one or two grip buttons, one menu button, one touchpad/thumbstick, and several other keys. A key may have multiple working modes and several detectable states, such as click, long press, touch, or double click. These states are exposed as input sources.

OpenXR Action system support

OpenXR introduces several concepts that need to be learned for the implementation. Applications (Blender) communicate with controller inputs/haptics using actions. Each action has a state (of boolean, float, vector2, etc. type) and is bound to input sources according to the interaction profile. Blender needs to create actions at OpenXR initialisation time; they are later used to request controller state, create action spaces, or control haptic events.

An interaction profile path has the form:

“/interaction_profiles/<vendor_name>/<type_name>”

An input source is identified and referenced by path name strings with the following pattern:

…/input/<identifier>[<location>][/<component>]

An example of listening for a click of the trigger button:

“/user/hand/right/input/trigger/click”

The final step is to repeatedly synchronise and query the input action states.
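To make the path syntax concrete, the suggested bindings for this project could look like the illustrative data below. The interaction profile and input source paths are standard OpenXR paths, while the action names are hypothetical examples from the use cases above. On the C side, these strings would be converted with xrStringToPath, registered via xrSuggestInteractionProfileBindings, and synchronised each frame with xrSyncActions before querying:

```python
# Illustrative action-to-input-source bindings; the action names on the left
# are hypothetical, the profile and input source paths are standard OpenXR.
SUGGESTED_BINDINGS = {
    "/interaction_profiles/htc/vive_controller": {
        "fly": "/user/hand/right/input/trigger/value",        # float action
        "teleport": "/user/hand/right/input/trackpad/click",  # boolean action
        "controller_pose": "/user/hand/right/input/aim/pose",  # pose action
        "vibrate": "/user/hand/right/output/haptic",           # haptic output
    },
    "/interaction_profiles/oculus/touch_controller": {
        "fly": "/user/hand/right/input/trigger/value",
        "teleport": "/user/hand/right/input/a/click",
        "controller_pose": "/user/hand/right/input/aim/pose",
        "vibrate": "/user/hand/right/output/haptic",
    },
}
```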

Finding what a controller points at

For objects both with real geometry and without it (e.g. lights, cameras, empties), I will use OpenGL-based picking to find the point on the object. For visual assistance, I will draw a small dot on the surface the controller points at, with the help of the depth buffer of the current frame.

Use-case driven development

I also propose use-case-driven development, enabling some interactive use cases. These are based on querying the states of common keys and mapping them to operations in Blender. All of them are straightforward and can be used as tests to verify success and find bugs.

Proposed use cases:

  • I wish to navigate (fly) through the scene by translating along whatever direction the controller points in.
  • I wish to rotate the scene around me about the z-axis (other rotations tend to make people feel uncomfortable) using the controller.
  • I wish to drag an object selected with the controller and rotate around it to inspect it.

Blender side integration

As I understand it, when a user navigates a scene from a first-person perspective, this is equivalent to moving the virtual camera in the scene. If that is correct, it should be enough to modify the position and orientation of the virtual camera (the VR viewer pose) according to controller inputs; see the sketch below.
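As a sketch of the second use case, rotating the scene around the user is equivalent to yawing the viewer's base pose in place about the z-axis. How that base pose is stored in Blender is an assumption here; only the `mathutils` math is shown:

```python
# Sketch: rotating the scene around the viewer about the z-axis.
# Equivalently, the viewer's base pose is yawed in place; how that pose is
# stored and applied in Blender is an assumption of this sketch.
from math import radians
from mathutils import Matrix, Vector


def orbit_viewer_about_z(base_location: Vector, base_rotation: Matrix,
                         angle_deg: float):
    """Return a new (location, rotation) with the viewer yawed in place.

    Yawing the viewer about the world z-axis while keeping its location
    fixed gives the impression that the scene rotates around the user.
    """
    yaw = Matrix.Rotation(radians(angle_deg), 3, 'Z')
    return base_location.copy(), yaw @ base_rotation
```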

Related projects

Core Support of Virtual Reality Headsets through OpenXR - GSoC

It aimed to bring stable and well-performing OpenXR-based VR rendering support into the core of Blender. The assisted VR experience was partially realised with the help of this project. I often refer to it to kick off my work.

Virtual Reality - Milestone 1 - Scene Inspection

This is the continuation of the GSoC project above, with more patches and new features like the mirrored VR view and location bookmarks. My project will build on these previous achievements and share many utilities.

Virtual Reality - Milestone 2 - Continuous Immersive Drawing

This is the parent project of this GSoC project, with the big-picture goal of enabling grease pencil drawing in VR. I will contribute to this milestone by completing this GSoC project well.

Testing Platforms

I have an HTC Vive Pro Eye and an Oculus Rift at my host laboratory. If necessary, I will also try to apply for a Windows Mixed Reality device. Various controllers are at hand; the choice depends on the currently released OpenXR runtimes, so the headsets and controllers associated with WMR and Oculus would be the best choice.

Project Schedule

May 4 - June 1: Community bonding period:

  1. Get familiar with the prior work done by Julian
  2. Play around with the OpenXR action system
  3. Set up the development environment

Actual start of the Work Period

June 1 - June 29:

  • Define action sets and meaningful actions.
  • Bind actions to input source paths.
  • Set up debugging utilities.

June 29 - July 3: First evaluations:

  • MVP should be done at this point.
  • In-VR navigation should be realised by translating along the direction the controller points in.

July 3 - July 27:

  • Query input states and events
  • Write the Python API
  • Find what the controller points at via OpenGL picking

July 27 - July 31: Second evaluations:

  • Picking support should be realised. Users should be able to see a small dot on the surface of an object, select the object, and inspect it from all around.

July 31 - August 24:

  • Design the UI for the new features
  • Add controller visualisation
  • Add a technique to reduce motion sickness (stretch goal)
  • Write documentation

August 24 - August 31: submit code and evaluations

  • End-user documentation
  • Mergeable branch with XR input support
  • Weekly reports and final report

I will be based in Lausanne, Switzerland during the summer, so I will be working in the GMT+2 time zone and will be available 35-45 hours per week. Since our summer vacation runs from June to September, I believe I have enough time to complete the project.

Bio

I have been a developer focused on graphics and visualisation for several years. In the past I mostly worked on projects within my university, and this is my first attempt at approaching Blender development. I enjoy building customised tools with a bigger picture in mind. I am a master's student in computational science and engineering at EPFL, living in Lausanne, Switzerland.

C is my mother tongue among programming languages, but I also have substantial experience with others such as C++, Python, and Java. I started writing C programs in high school, for a competition about controlling a group of wheeled robots to play simplified basketball.

I have been using Blender for three years, since I had the chance to build a simulated fly model from CT-scanned image stacks, optimising the mesh quality and animating the fly to move as recorded on video.

I am also a big fan of VR games. I am currently involved in a research project at my university's host lab on reducing users' cybersickness during VR games. It gives me a rare chance to be close to many virtual reality devices and experts. I hope to start a professional VR career and, with this project, make continuous contributions to Blender.

6 Likes

@wakin already contacted me before opening this thread. Getting the first VR milestone merged took all my attention though, so I wasn’t able to check and reply here in a thoughtful way.


Sounds a bit too negative IMHO :slight_smile: After all, it’s going as planned. I would just mention how your work takes the next step towards milestone 2.

I don’t find this information important enough to be in the abstract. Especially since this project wouldn’t introduce this standard to Blender, but extend what’s already there. Instead I’d mention which overall benefits it gives in regards to this project.

It’s a bit confusing that you mention it as a project benefit, but then you say “It is a bit out of the scope of this project”. So do you intend to address this or not?


Some general suggestions on the proposal:

  • You’re not promising too much, which is always a good thing! Nevertheless, I’d suggest trimming it a bit further. Take some parts from the schedule and make them stretch goals.
    E.g. the first month seems quite packed; something like the Python API will need more time. Remember that getting familiar with the code base will take you a bit, probably more time than you’d expect now.

  • There are some aims towards making our GSoC projects less waterfall like. Should I become your mentor, I’d ask you to submit your work in multiple patches over the coding period. I’d also ask to write code with readability and maintainability in mind right away. Your proposal makes this sound like an afterthought, which won’t work well in my experience. Lastly, you will need debugging tools during development, so you should set them up early on. Rather than planning this stuff for the last month, take your time for it during regular development.

  • Make it clearer that your work aligns with the general planning in Blender, in that you’re taking important steps towards milestone 2 (continuous immersive drawing). The design tasks need some more work, but that is our responsibility.

  • Part of our big-picture design for VR is that UIs should be defined in Python (e.g. by add-ons), the native implementation (the C/C++ code) provides the necessary features to build these. The Python part is not just a wrapper, it’s the API that future VR UIs will be built with.

    My current implementation strictly follows that idea, your work should as well. Are there parts in your implementation that could/should be done in an add-on, not in the C code? How could controller state be accessed in .py? There’s no right or wrong here, but would be nice to see some thoughts on it. Check the new VR add-on for some reference.

  • There’s no real need to implement both ray-casting and OpenGL picking support. Just the latter should work fine. It would probably be nice to be able to test how they compare, but that doesn’t sound like something you should spend time on before the rest is done.

  • That OpenXR uses glTF for controller visualization is actually incorrect (although I notice that it’s taken from my initial proposal). I don’t recall if that used to be different back then; I might just have mixed something up with a different library. Either way, AFAIK the OpenXR specification doesn’t define how controllers can be drawn. I think the runtime just always composites them on top when in use. It might also be that we have to register a specific compositing layer.
    My controllers are in the office and I’m working from home currently, so I can’t check.

2 Likes

Thanks for your detailed advice and hard work!
I have made some relatively easy changes and will address the others after some research.
For the “low motion sickness” part, I prefer to define it as a stretch goal, since I think there are enough deliverables already and I am not sure about its effectiveness and the difficulty of integration.

I’m not sure this is a good idea in general. Nearly all our 3D viewport navigation and editing operators are defined in C. If the VR variants of the same operators are defined in Python that’s going to lead to code duplication and make it harder to maintain code that is spread across two different places.

It’s less about the operators themselves; most of them would stay in C. But the method of calling them can be defined in Python. That way it’s not on us to set one VR UI in stone. There are questions like: how are tools activated, using a floating toolbar or radial menus attached to and operated through the controllers? Should the regular UI be accessible as floating regions, or do we define a limited but more optimized VR UI? Does the workflow require keyboard input or is it purely controller-based? …

Add-on authors can then experiment with different approaches and create workflow-specific UIs. E.g. the Ubisoft team was really keen on this; they have their own applications and workflows and would love to bring some of that to Blender through .py. Gizmos are really important for this, by the way.
Indeed, adding a VR execution mode to operators is an issue. It’s one of the main concerns I have (it might also turn out to be simple for most operators).

2 Likes