The Photon Forum is Closed Permanently.

After many dedicated years of service, we have decided to retire our forum and make it read-only. Your search result can be found below. We also offer support through our extensive Fusion documentation, the Photon community on Discord, Stack Overflow (for Circle members only), and e-mail directly to our developers.

Sync object transform from client side

Unavi
2022-07-19 20:54:11

I am currently trying to sync the position of a VR playspace and headset. If the VR player is the host it works, but if the VR player is a client it doesn't. What would be good practice for achieving this? I tried adding a NetworkTransform to the playspace and requesting state authority on the client side with Object.RequestStateAuthority(), but that doesn't seem to work.

Comments

spoivre
2022-07-22 13:24:33

Hi!

Since you are using the host topology, the host always has state authority over all network objects, and a client cannot request state authority on an object.

If you want to move a client-controlled object in this topology, you have to use network inputs.

In the Fusion 101 sample, which is a relevant read if you want to use the host topology, this chapter deals with input: https://doc.photonengine.com/en-us/fusion/current/fusion-100/fusion-102#collecting_input

You can also read this page: https://doc.photonengine.com/en-us/fusion/current/manual/network-input
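As a minimal sketch of such an input (the struct and field names are illustrative, not taken from the Fusion docs), the data you network could look like:

```csharp
using Fusion;
using UnityEngine;

// Hypothetical VR rig input: poses for the play space, headset and hands.
public struct RigInput : INetworkInput
{
    public Vector3 playspacePosition;
    public Quaternion playspaceRotation;
    public Vector3 headsetPosition;
    public Quaternion headsetRotation;
    public Vector3 leftHandPosition;
    public Quaternion leftHandRotation;
    public Vector3 rightHandPosition;
    public Quaternion rightHandRotation;
}
```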

To summarize, the general idea is:

  • for each player, you collect the inputs associated with that player and store them in an INetworkInput struct, which you provide to Fusion during the OnInput callback. In VR, I usually store at least the position and rotation of the rig parts (headset, hands, play space) in this input. To collect them, I have a hardware rig object that uses classic VR components (TrackedPoseDriver, ...) to place the rig parts in space, and I then read those transforms into the input.

  • during the FixedUpdateNetwork of the network behavior representing the player, you try to read the inputs with GetInput. Both the host and the player that filled the inputs will receive them. In VR, this is where you apply the input positions and rotations to the various rig parts.

  • if you have a NetworkTransform on each rig part, the position and rotation will be synchronized to the other players (the host being the state authority, when it applies the positions it found in the inputs, they are sent to everybody). In VR, I usually create a player object with a single NetworkObject at the root, a NetworkTransform at the root to synchronize the play space, and an additional NetworkTransform for each of the other rig parts I store in the inputs (hands and headset).

  • if you want smooth movement for the local user (mainly useful if you handle the hands), you may want to update the rig parts' positions more often, during Render. If you have the input authority, you are the local user and can use the raw hardware info directly there.
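Sketched in code, the steps above could look like this (class and member names are hypothetical; a RigInput struct implementing INetworkInput and holding the rig poses is assumed):

```csharp
using Fusion;
using UnityEngine;

// Collects the local hardware poses (driven by TrackedPoseDriver, etc.)
// and hands them to Fusion when the OnInput callback fires.
public class HardwareRig : MonoBehaviour
{
    public Transform playspace;
    public Transform headset;

    // Called from your INetworkRunnerCallbacks.OnInput implementation.
    public void CollectInput(NetworkInput input)
    {
        input.Set(new RigInput
        {
            playspacePosition = playspace.position,
            playspaceRotation = playspace.rotation,
            headsetPosition   = headset.position,
            headsetRotation   = headset.rotation,
        });
    }
}

// Networked rig: NetworkObject + NetworkTransform at the root (play space),
// plus a NetworkTransform on each child rig part (headset, hands).
public class NetworkRig : NetworkBehaviour
{
    public Transform headset;

    public override void FixedUpdateNetwork()
    {
        // Both the host (state authority) and the input-authority client
        // receive the input; the host applying it replicates it to everyone.
        if (GetInput(out RigInput input))
        {
            transform.SetPositionAndRotation(input.playspacePosition, input.playspaceRotation);
            headset.SetPositionAndRotation(input.headsetPosition, input.headsetRotation);
        }
    }

    public override void Render()
    {
        // For the local user, follow the raw hardware poses every frame
        // so the visuals stay smooth between network ticks.
        if (Object.HasInputAuthority)
        {
            // e.g. copy the transforms from the local HardwareRig here.
        }
    }
}
```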

Please note that you have to make sure that, when you spawn a VR player, you assign it the input authority. If you use the NetworkDebugStart component, or the Spawn() method as shown in this chapter of Fusion 101, this is handled for you: https://doc.photonengine.com/en-us/fusion/current/fusion-100/fusion-102#spawning_the_avatar
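For example, when the host spawns the player object, passing the PlayerRef as the inputAuthority argument of Spawn() is what assigns that player the input authority (the class and prefab names here are hypothetical):

```csharp
using Fusion;
using UnityEngine;

public class PlayerSpawner : MonoBehaviour
{
    public NetworkPrefabRef playerPrefab; // hypothetical VR player prefab

    // Typically called from INetworkRunnerCallbacks.OnPlayerJoined on the host.
    public void SpawnPlayer(NetworkRunner runner, PlayerRef player)
    {
        if (runner.IsServer)
        {
            runner.Spawn(playerPrefab, Vector3.zero, Quaternion.identity,
                         inputAuthority: player);
        }
    }
}
```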

Best regards,

--

Sébastien
