Hi everyone! I'm a mechatronics engineering student at UPIITA IPN, Mexico, and I'm currently planning and testing a project that I would like to develop during GSoC.
In order to graduate I have to design and develop a project, which should be finished by the end of 2023.
The project I'm working on aims to implement head tracking to change the perspective of the viewport in Blender, and hand tracking to detect gestures for switching between tools in Blender.
I'm developing this project with someone else. So far, we've made several prototypes: one in Unity using OpenCV for head tracking, one in Python for hand tracking using MediaPipe, and a basic head-tracking script inside Blender.
We both have experience with several programming languages, such as C, C++, C#, Python, Java, and assembly.
I'm currently taking a course on Computer Graphics (which I expect to finish by the end of March), so I believe I have enough background to make this project work. That said, I've never developed for Blender before.
Here’s a visual example of what I’m trying to achieve with headtracking and the viewport: AR Concept - 3D Modeling/Animation - YouTube
The main principle behind this is to change the viewing frustum based on the position of the user's head. One way to do this is to use a camera to track the position of the eyes in screen space, then use that as input to estimate the eye position in world space relative to the screen.
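To make the frustum part concrete, here's a minimal sketch of the off-axis ("asymmetric") projection math I have in mind. It's a standalone illustration, not Blender code: the function name, units, and coordinate convention (eye position measured in meters relative to the screen center, z = distance from the screen plane) are my own assumptions. It returns the left/right/bottom/top bounds at the near plane, the same four values a `glFrustum`-style projection would take.

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Compute asymmetric frustum bounds at the near plane.

    eye:      (x, y, z) position of the eye relative to the screen
              center, in the same units as screen_w/screen_h;
              z > 0 is the distance from the screen plane.
    screen_w: physical screen width.
    screen_h: physical screen height.
    near:     near clip distance.

    Returns (left, right, bottom, top) at the near plane.
    """
    ex, ey, ez = eye
    # Project the screen edges (relative to the eye) back onto the
    # near plane by similar triangles: near / eye-to-screen distance.
    scale = near / ez
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top
```

With the eye centered, the bounds come out symmetric (an ordinary perspective frustum); moving the head to the right shifts both horizontal bounds to the left, which is exactly the "looking through a window" effect in the video above.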
I'd like to know your opinion on the viability of this project for GSoC. Of course, I don't plan to propose the full implementation for the summer, but rather a scaled-down version.
Thank you, everyone!
Roberto Samuel Tuda Alonzo