GSoC 2021 Draft Proposal: VR Drawing Optimizations: Single-Pass (Multiple Draw-Call) Drawing

Hello all,
This is the first draft of my proposal for the project.
I would highly appreciate any input you could provide. Thank you.

Potential Mentors are @julianeisel and @dfelinto


Karthik Rangasai Sivaraman



The initial implementation of the OpenXR-based XR support leaves room for performance optimizations in viewport drawing. One such optimization is single-pass drawing: rendering both eyes of the VR view in a single render loop. The single pass will issue multiple draw calls, one per view (eye). The idea is to use the DRWView abstraction for this.


This project will aid both developers and users of Blender. For developers, the entire viewport drawing of the VR scene will be abstracted away and will appear as a single call, similar to the current viewport drawing of a window, which will streamline the codebase. From the users' point of view, single-pass rendering should provide a performance boost, because it prevents the application from drawing the entire viewport twice.


Deliverables

  • A robust single-pass VR viewport drawing mechanism.

Project Details

The OpenXR specification states that during an active OpenXR session, rendering of the VR output proceeds in a predefined sequence:

  • Acquire the Swapchain Image

  • Wait for the Swapchain Image

  • Locate Views

  • Locate Space

  • Perform the required graphics processing (application-dependent)

  • Release Swapchain Image

In the current implementation of VR viewport drawing, this sequence is executed twice, once for each view (eye).
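The difference between the two approaches can be sketched schematically. This is a minimal Python model, not Blender code; all names here are hypothetical stand-ins for the real OpenXR calls (xrAcquireSwapchainImage, xrWaitSwapchainImage, xrLocateViews, xrReleaseSwapchainImage):

```python
class FrameStats:
    """Counts how often each phase of the frame loop runs."""
    def __init__(self):
        self.sequences = 0  # acquire/wait/locate/release sequences
        self.draws = 0      # per-eye draw calls

def run_sequence(stats, eyes):
    """One acquire/wait/locate/draw/release sequence covering `eyes`."""
    stats.sequences += 1        # acquire + wait + locate views/space
    for _ in eyes:
        stats.draws += 1        # graphics processing for this eye
    # release swapchain image

def current_two_pass():
    """Current behavior: the whole sequence is repeated per eye."""
    stats = FrameStats()
    for eye in ("left", "right"):
        run_sequence(stats, [eye])
    return stats

def proposed_single_pass():
    """Proposed behavior: one sequence with two draw calls inside."""
    stats = FrameStats()
    run_sequence(stats, ["left", "right"])
    return stats
```

Both approaches still draw two eyes, but the single-pass version halves the number of acquire/wait/locate/release sequences per frame.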

The aim of the project is to reduce this overhead by performing the sequence only once and running the per-view (per-eye) logic inside the application-dependent graphics-processing step. This is possible because the xrLocateViews function of the API returns the view and projection matrices for both views (eyes) at once. We can leverage this and apply the required transformations to produce the VR viewport for each eye.
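As an illustration of the per-eye transformation, the sketch below derives two view matrices from one shared head view using only a horizontal interpupillary offset. This is a simplification I am assuming for illustration; real OpenXR view poses returned by xrLocateViews can also differ in orientation and field of view:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Build a 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def per_eye_view(head_view, eye, ipd=0.063):
    """Offset the shared head view matrix by half the IPD per eye."""
    sign = -1.0 if eye == "left" else 1.0
    return mat_mul(translation(sign * ipd / 2.0, 0.0, 0.0), head_view)

head = translation(0.0, 0.0, -2.0)   # head placed 2 m back along -Z
left_view = per_eye_view(head, "left")
right_view = per_eye_view(head, "right")
```

The key point is that both per-eye matrices are cheap derivations from data obtained in a single xrLocateViews-style query, so no part of the frame-loop sequence needs to be repeated.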

To achieve this, the DRWView abstraction can be used. The DRWView structure has a member named storage that holds the perspective, view, and window matrices used to render the final output. The main structure, DRWManager, has members of type DRWView that are used to update the current screen as required. The GLUniformBuf objects are updated from the DRWView values stored in the global DST variable of type DRWManager. Thus, updating DRWManager to hold two DRWView objects, one for each view (eye), during a VR session will keep most of the functionality unchanged. The final output of both views (eyes) can then be drawn into the viewport field of the wmXrSurfaceData object, which is in turn drawn onto the screens of the user's HMD.
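The proposed structural change can be sketched as follows. This is a hypothetical Python model of the idea only; the class and field names mirror, but do not reproduce, Blender's actual DRWManager/DRWView C layout:

```python
class DRWViewModel:
    """Stand-in for DRWView: holds the matrices used for drawing."""
    def __init__(self, view_matrix, projection_matrix):
        self.view_matrix = view_matrix
        self.projection_matrix = projection_matrix

class DRWManagerModel:
    """Stand-in for the global DRWManager (DST), extended to hold
    one DRWView per eye during a VR session."""
    def __init__(self):
        self.view_active = None   # existing single-view path
        self.xr_views = []        # proposed: one entry per eye

    def draw_vr_frame(self, draw_fn):
        """Run the same draw logic once per eye's view, inside a
        single frame sequence."""
        return [draw_fn(view) for view in self.xr_views]

mgr = DRWManagerModel()
mgr.xr_views = [DRWViewModel("V_left", "P_left"),
                DRWViewModel("V_right", "P_right")]
outputs = mgr.draw_vr_frame(lambda v: (v.view_matrix, v.projection_matrix))
```

The design choice here is that the draw logic itself stays view-agnostic; only the container of views changes, which is why most existing functionality can remain untouched.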

Project Schedule

Community Bonding Period: Go through the code base for the VR module.

Week 1:

  • Discuss with mentors and other members of the VR module to design and finalize the necessary updates for the implementation of the project.

Week 2,3:

  • Setting up the required transformations for generating outputs through both views (eyes) from the main scene's view.

  • Updating the necessary structures and classes to hold the new transformation data and perform the corresponding operations.

Week 4,5:

  • Adding new callbacks to perform eye-specific transformations instead of eye-specific drawing.

Week 6,7:

  • Linking the surface draw call to newly added changes.

Week 8,9: (Testing Phase)

  • Testing the new method against the current method.

  • Making all necessary code changes for proper working of the draw call, and finally replacing the old method with the new one, ready for final submission.

Week 10: Buffer week to complete the work, refactor the code, and add documentation and comments as required.

My exams end on 12 May 2021, so there will be no clashes with the GSoC project during the first half of the duration.

The next semester of my college is supposed to start around the second week of August. Hence, I will be available for the full duration without any distractions.


I am currently in the senior year of my undergraduate education, double majoring in Mathematics and Computer Science at BITS Pilani, Hyderabad Campus, in India. I have used various open-source software and want to contribute back to the community. Through this program, I feel I will be able to do my part in helping the open-source community grow.

Relevant skills:

  • Have used Blender in a limited capacity and am familiar with the basic controls.

  • Taking a course on Computer Graphics this semester (which I am loving); together with my math degree, I feel this will be quite helpful for the project.

  • Proficient in C, C++, and Python. Quite familiar with JavaScript, Java, and Dart.

I have worked on a number of projects for my Computer Science courses and on my own. I have listed some here.

  • A lexer and a parser for a mini programming language that I designed.

  • Reliable Data Transfer using UDP.

  • Basic 2D Vector Field Visualization using OpenGL.

  • 3D Scene Navigation using Cameras and 3D Model loading using OpenGL.

  • A minimal P2P web-based video application using the WebRTC framework, built in JavaScript.

  • Vector Space Model based search engine built using Python.

I have also contributed to the Blender project for one known issue:

  • D9938 - Fix reset pose transforms when X-Mirror is enabled in Pose Mode.

I have gone through the Blender codebase and have a decent understanding of it.

I hope that through the course of this program, I will be able to make more meaningful contributions to the project. I wish to continue contributing to the project after the program as well.