What is the exact interplay of (Photon) Voice components?

Hi guys,

I'm trying to understand how the different (Photon) Voice components work together, i.e. what happens, step by step, from me saying something until the other players hear it.

In my case, I have a player prefab with an AudioListener (attached to the camera), and an AudioSource, PhotonVoiceView and Speaker attached to the player itself. In the scene I have an object called VoiceManager with a PhotonVoiceNetwork and a Recorder component, marked as Primary Recorder.

When joining a room, the Camera component (and with it the AudioListener) of the other players is deleted locally, while the Speaker and AudioSource stay. It works as far as I could test it, but should I delete those too?

Thanks for your help :)


  • vadim (mod)
    edited August 2022

    You are using the Voice 2 PUN integration. You need a PUN client, a single PhotonVoiceNetwork object, and a player prefab with a PhotonVoiceView, a Speaker, and an optional Recorder (you can use the PhotonVoiceNetwork's Primary Recorder instead).

    The PhotonVoiceNetwork follows the PUN client's state and joins the voice room after the PUN client joins the game room.

    The player prefab is instantiated like any other PUN prefab, with remote copies automatically created on the other clients. The Recorder created during local prefab instantiation starts recording and streaming to the voice room. The Speaker created on the remote instances of this object receives the audio stream and plays it via Unity's AudioSource. So keep the Speaker and AudioSource on the remote copies; only the extra Camera/AudioListener needs to go.
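
    The remote-copy cleanup described above is usually done in a script on the player prefab itself. A minimal sketch, assuming PUN2's `MonoBehaviourPun` base class; the class name `PlayerSetup` and the `playerCamera` field are illustrative, not part of Photon's API:

    ```csharp
    using Photon.Pun;
    using UnityEngine;

    // Attach to the player prefab root (hypothetical script name).
    public class PlayerSetup : MonoBehaviourPun
    {
        // The child camera that carries the AudioListener (assumed setup).
        [SerializeField] private Camera playerCamera;

        private void Start()
        {
            if (!photonView.IsMine)
            {
                // Remote copy of another player: remove the camera and its
                // AudioListener so only the local player's listener exists.
                // Keep the Speaker and AudioSource -- the Speaker needs them
                // to play this player's incoming voice stream.
                Destroy(playerCamera.gameObject);
            }
        }
    }
    ```

    `photonView.IsMine` is true only on the client that instantiated the prefab, so the local player keeps its camera while every remote copy loses it.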