XR Actions branch question

Thanks for the fast answer. Testing your MultiUser is pure fun.

I will provide a clean example .blend on GitHub like the screenshot above.
The Vive controllers and the Suzanne head already move via the motion capture feature once you have started VR.
My plan:

There is a collection VR_User containing the meshes:

VR_USER

  • VR User head
    (viewer_pose_location, viewer_pose_rotation)
  • VR User hand left
    (controller_left_location, controller_left_rotation)
  • VR User hand right
    (controller_right_location, controller_right_rotation)

Only the position/rotation of these three meshes should be synced per user.
Every joined user has an instance of this VR_User collection, which is kept in sync with the other collab members.
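
A minimal sketch of how the receiving side could look, assuming each joining user gets linked copies of the three avatar objects in their own collection (all names below are placeholders, not the multi-user addon's actual API):

import bpy

# Placeholder names for the three avatar objects in the VR_USER collection
AVATAR_OBJECTS = ("VR_User_head", "VR_User_hand_left", "VR_User_hand_right")

def spawn_avatar(user_name):
    """Create linked copies of the avatar meshes for a joining user."""
    coll = bpy.data.collections.new("VR_User_" + user_name)
    bpy.context.scene.collection.children.link(coll)
    for src_name in AVATAR_OBJECTS:
        src = bpy.data.objects[src_name]
        copy = src.copy()                     # new object, mesh data stays shared
        copy.name = src_name + "_" + user_name
        coll.objects.link(copy)
    return coll

def apply_user_state(user_name, state):
    """Apply received transforms: {object name: (location, rotation_euler)}."""
    for src_name, (loc, rot) in state.items():
        ob = bpy.data.objects.get(src_name + "_" + user_name)
        if ob:
            ob.location = loc
            ob.rotation_euler = rot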

So everyone can see where the other joined users are and what they are pointing at with their hands.
That's only my first idea; probably you have a preferred, simpler method?

Speech is delivered over a parallel MS Teams or Skype session, which we use daily.

So with this you could already have a full VR review of the scene.
You see where the joined users are, what they are looking at and what they show with their hands.
The speech allows full communication and discussion.

My first focus is outstanding stability in this VR collab session.
Hence the reduction to this minimally required communication concept.

This could then be extended to live VR collab modeling through other XR Action functions (Edit Mode / pick vertices with controller raycast / move). The devs here do great stuff to finally make this possible.

But first I would like to focus on absolute stability and ease of use.

Also to think about (optional):
The simplest setup could be that every user who joins the session must have the file (screenshot) loaded from a shared folder.
Only the head and hand positions/rotations are then synced via MultiUser.
This would be an alternative method that fulfills the strictest data security policies for collab sessions across firewalls, where sharing geometry data over the connection is not allowed.
Only the interaction data is transferred.
But this would just be a further simplification of what you have already done.
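
Just to illustrate how small that interaction data is, here is a tiny sketch (assuming location plus Euler rotation for the head and both hands):

import struct

def pack_update(head, hand_l, hand_r):
    """Each argument is a (location, rotation_euler) pair of 3-tuples.

    One update is 3 positions + 3 rotations = 18 floats (~72 bytes);
    no geometry ever leaves the machine.
    """
    floats = []
    for loc, rot in (head, hand_l, hand_r):
        floats.extend(loc)
        floats.extend(rot)
    return struct.pack("<18f", *floats)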


Hi,

there is some progress on the example scene.

Yellow:

  1. The Vive headset and controller assets are in the Multi_User_Avatar collection (also visible in the Outliner).

  2. They are driven through motion capture:

     path -> driven object/empty

     /user/head -> user_head
     /user/hand/left -> user_hand_left
     /user/hand/right -> user_hand_right

  3. After starting VR, the Suzanne headset and the controllers are moved.

Could you give some help on how to get them in sync for each user who joins the Multi-User session?

I'm using the XR-Dev branch build from

You have to enable the VR Scene Inspection addon.

The branch also provides other functionality.
Thanks to @muxed-reality
Red:

The Landmark Avatar collection is nicely set up to show when you pull the trigger.
Then select an avatar and release the controller index finger button to teleport to that position. Releasing also hides the avatar positions again.

The current file


All credits for the Racer scene go to
katarn66

I only added some new irradiance volumes and a higher-res IBL, changed the scene structure a little, and added some Eevee/Cycles material switches to allow a more seamless switch from Eevee to Cycles.

@sasa42 Nice setup! :slightly_smiling_face:

@slumber The multi-user data fields to synchronize would be the object transforms from the XrSessionSettings.mocap_objects collection property (which is currently only present in the XR Dev branch).
I’m not familiar with how the multi-user sync works, but accessing the fields would look something like this:

import bpy

session_settings = bpy.context.window_manager.xr_session_settings

for mocap_ob in session_settings.mocap_objects:
    if mocap_ob.object:
        # Sync this object's location/rotation:
        # mocap_ob.object.location
        # mocap_ob.object.rotation_euler
        pass
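
Going one step further (purely a hedged sketch, not the multi-user addon's actual API): the whole per-user state to replicate could be as small as a dict of those transforms, with no geometry involved:

import bpy

def collect_vr_user_state():
    """Gather only the transforms of the mocap-driven objects.

    A replication layer could send this small dict per tick instead of any
    geometry; the receiving side applies it to that user's avatar copies.
    """
    state = {}
    session_settings = bpy.context.window_manager.xr_session_settings
    for mocap_ob in session_settings.mocap_objects:
        if mocap_ob.object:
            ob = mocap_ob.object
            state[ob.name] = (tuple(ob.location), tuple(ob.rotation_euler))
    return state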

Thanks for your help!


Thanks. I'm trying hard to find the most sustainable way in a motivating environment/example setup.
Without your help this would hardly be possible.

Next week I will correct the pivots of the glTF controllers and headset. I will also try to make the index finger button and thumb pad visually movable, to have some visual feedback in the scene.

@slumber has already given me some help with the multi-user representation, but I need some time to get into it.
Probably he has some quick hacks/tips to make it happen faster/the right way by reviewing/trying the example file I uploaded to Google Drive (2 posts before).


@muxed-reality

Proposal to speed up XR Action/VR development in the scene and make it accessible for devs even if no headset or controllers are attached.

Probably you already have/use something like this.
But by default you cannot start a VR session without a headset attached, so I found no way to do it.

VR simulation/XR Action debug mode:
(integrated as a checkbox under Start VR Session?)

  • allows starting a VR session without a headset and controllers
  • with a shortcut pressed (or by default) you can fly around with Blender's fly mode → simulated headset position
  • with a shortcut pressed, mouse movement and click simulate the left controller + pulling the controller index finger
  • with a shortcut pressed, mouse movement and click simulate the right controller + pulling the controller index finger

This would greatly accelerate VR/XR Action development speed in the scene, because you could fully test and develop the VR scene without a headset and controllers attached.
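
Until something like this exists, a rough workaround sketch: since the mocap targets are ordinary objects (user_head, user_hand_left, user_hand_right in the example file), a simple timer script can move them to fake a VR user without any headset attached, which is already enough to exercise the sync logic:

import math
import bpy

# Object names taken from the mocap mapping in the example file
HEAD = "user_head"
HANDS = ("user_hand_left", "user_hand_right")

_t = 0.0

def fake_vr_motion():
    """Move the mocap-driven objects in a small pattern to fake a VR user."""
    global _t
    _t += 0.1
    head = bpy.data.objects.get(HEAD)
    if head:
        head.location.x = 0.2 * math.sin(_t)
        head.location.y = 0.2 * math.cos(_t)
    for i, name in enumerate(HANDS):
        hand = bpy.data.objects.get(name)
        if hand:
            hand.location.z = 1.2 + 0.1 * math.sin(_t + i)
    return 0.05  # re-run in 50 ms

bpy.app.timers.register(fake_vr_motion)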

Sorry for the delay, I had trouble running my Oculus Quest on Linux to test the scene inspection ^^
But it works now and I’ll be able to explore different ways to replicate the information needed to represent the user in VR (thanks for giving the Python properties @muxed-reality, and thanks for the test file and details @sasa42!)

I made a dedicated issue on the multi-user GitLab to easily track progress on this topic :slight_smile:

I’ll keep you informed of my progress, and as soon as I have a partially working version of the multi-user addon to test, I’ll post it here :slight_smile:


That's great.
I also jumped into your Discord.

The scene is currently optimized and set up for the HTC Vive / HTC Vive Pro / Varjo XR-3 with room-scale tracking, hence the concept of one-click moves to avatar/landmark positions via the controller.
But I will try an Oculus Quest too next week.

Minor test scene optimization will follow next week.

  • mostly correct pivots for headset and controllers
  • full avatar landmarks set (car exterior review and driver experience positions)

But the main structure will not change.

To trade render quality against speed, you can select all car objects via the car collection (or under the car empty in the Outliner) and
hit Ctrl + 1 (Subdivision Level 1) in Object Mode.
Or set Max Subdivision to 1 under Simplify/Viewport.
At the moment the levels are set individually to display round what's round (1 to 3), with a maximum of 3, optimized for an RTX 2080 card.
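
For convenience, the same can be done from the Python console; the "Car" collection name is an assumption, adjust it to the actual collection in the file:

import bpy

# Global clamp: limit viewport subdivision to level 1 via Simplify
scene = bpy.context.scene
scene.render.use_simplify = True
scene.render.simplify_subdivision = 1

# Or set the viewport level of every Subdivision Surface modifier on the car
car_coll = bpy.data.collections.get("Car")   # assumed collection name
if car_coll:
    for ob in car_coll.all_objects:
        for mod in ob.modifiers:
            if mod.type == 'SUBSURF':
                mod.levels = 1               # viewport subdivision level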


I will also add a minimal car rig, so a collaborative VR test drive with 3 users in a one-seater car for a few meters is pretty close. :)
Plus a pit stop where 2 other users change the wheels, or go to Edit Mode and also change the tire width.
Not because it would be needed, more because we can. :)
