Photon Voice on C# .NET Framework

Has anyone had experience consuming Photon Voice in a C# .NET Framework application, i.e. outside of Unity?

Is that feasible? Any suggestions from anyone already doing it?

Regards.

Best Answer

  • vadim
    vadim mod
    edited May 31 Answer ✓

    Yes, it's possible. You can use the Photon Voice Core API without Unity. Note that audio I/O is outside the scope of this API and should be interfaced via IAudioReader<T> or IAudioPusher<T> for input and IAudioOut<T> or AudioOutDelayControl<T> for output. Normally we rely on I/O provided by a platform, like Unity's Microphone and AudioOut. For convenience, we also provide native microphone capture modules that can be used in C# projects. For output, however, a 3rd-party library or a custom module is always required. (We do have an FMOD I/O wrapper whose only dependency is the C# FMOD bindings, but I'm not sure whether it can be used outside of Unity.)

    Import the Assets/Photon/PhotonVoice/PhotonVoiceApi folder (without Platforms) into your project. If you want to use Voice's mic capture modules, also import Platforms/UWP or Platforms/Windows.

    Depending on the platform and whether you use Voice mic capture, you may need some or all files from the Assets/Photon/PhotonVoice/PhotonVoiceLibs/x86_64 and WSA folders. opus_egpv.dll is the audio compression library essential for Voice. The other libraries are native mic capture implementations (wrapped by the .cs files in the Platforms folder).

    You need to create a LoadBalancingTransport2 instance and use it to join a room as a normal Realtime client. To stream out, create a "local voice" with lbc.VoiceClient.CreateLocalVoiceAudioFromSource(), which requires an audio input module. To process incoming audio streams (remote voices), implement the client.VoiceClient.OnRemoteVoiceInfoAction callback and provide an audio output module in it.

    For a sample Voice workflow implementation, see the Unity Voice files Assets\Photon\PhotonVoice\Code\VoiceConnection.cs, Recorder.cs, RemoteVoiceLink.cs, and Speaker.cs. You do not need most of the code in these modules; e.g. "linking" is relevant only to the Unity integration. Unfortunately we don't have a pure Photon Voice Core API sample at the moment.
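
    The steps above can be sketched roughly as follows. This is not a tested sample: the exact namespaces, the OnRemoteVoiceInfoAction delegate signature, and the OpJoinOrCreateRoom overload vary between Voice/Realtime versions, and MyAudioPusher is a hypothetical stand-in for your own IAudioPusher<short> implementation.

    ```csharp
    // 1. Connect as a normal Realtime client.
    var transport = new LoadBalancingTransport2();
    transport.AppId = "<voice-app-id>";      // from your Photon dashboard
    transport.ConnectToRegionMaster("eu");   // example region
    // ...call transport.Service() regularly to pump the connection,
    // and join a room once connected (overload varies by version):
    // transport.OpJoinOrCreateRoom(...)

    // 2. Stream out: a "local voice" fed by an audio input module.
    var info = VoiceInfo.CreateAudio(Codec.Raw, 48000, 1, 10000); // rate, channels, frame length in µs
    var pusher = new MyAudioPusher();        // hypothetical IAudioPusher<short>
    var localVoice = transport.VoiceClient.CreateLocalVoiceAudioFromSource(
        info, pusher, AudioSampleType.Short);

    // 3. Stream in: in the OnRemoteVoiceInfoAction callback (delegate signature
    //    depends on your Voice version), attach an IAudioOut<short> implementation
    //    for each announced remote voice.
    ```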

Answers


  • Hi, do you suggest/recommend any NuGet packages for audio recording and playback to implement this outside Unity?

  • The only 3rd-party solution we have tried with the C# Voice API is FMOD. We provide an integration for it.

    For capture, you can use Photon capture modules per platform like Photon.Voice.UWP.AudioInPusher and Photon.Voice.Windows.WindowsAudioInPusher. See Photon.Voice.Platform.CreateDefaultAudioSource().

    We have successfully integrated the portaudio stream API in our C++ Voice project, so portaudio with a C# wrapper might be an option too.

  • Hi,

    I'm using a 3rd-party package (NAudio) for capture and playback. I'm getting the byte buffer (byte[] buffer, int bytesRecorded). I got stuck on how to implement IAudioPusher to push the data.

        var usecondsRecorded = 10 * 1000; // 10 ms
        var voiceInfo = VoiceInfo.CreateAudio(Codec.Raw, waveFormat.SampleRate, 1, usecondsRecorded);
        var audioPusher = new MyAudioPusher<short>(waveFormat.Channels, waveFormat.SampleRate);
        audioPusher.SetCallback(DataPushCallBack, null);
        var audioDesc = new AudioDesc(writer.WaveFormat.SampleRate, 1, "");
        localVoice = loadBalancingClient.VoiceClient.CreateLocalVoiceAudioFromSource(voiceInfo, audioPusher, AudioSampleType.Short);


    Is this the right way to do it when I have a byte buffer available?

    My DataPushCallBack is empty at the moment. How should I push the byte data, and receive it at another player?

  • vadim
    vadim mod
    edited July 21

    You need to implement SetCallback, not call it. Photon Voice sets its own callback to consume the audio your IAudioPusher produces.

    In your SetCallback implementation, just store the provided callback, then invoke it each time a new audio buffer arrives from the microphone capture module.

    E.g. you need to pass this callback as OnDataAvailable in the following snippet (via an adapter, of course):

    var waveIn = new WaveInEvent(deviceNumber);
    waveIn.DataAvailable += OnDataAvailable;
    waveIn.StartRecording();
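
    Putting that together with NAudio, an IAudioPusher<short> adapter might look roughly like the sketch below. This is an assumption-laden outline, not Photon's own code: the second SetCallback parameter (a buffer factory here) and the exact IAudioPusher<T> member list should be verified against your Voice version, and the byte-to-short conversion assumes 16-bit PCM capture matching the format announced in VoiceInfo.

    ```csharp
    using System;
    using NAudio.Wave;

    // Hedged sketch of an IAudioPusher<short> over NAudio's WaveInEvent.
    class NAudioPusher : Photon.Voice.IAudioPusher<short>
    {
        private readonly WaveInEvent waveIn;
        private Action<short[]> pushCallback;   // stored in SetCallback, invoked per buffer

        public NAudioPusher(int deviceNumber, int samplingRate, int channels)
        {
            SamplingRate = samplingRate;
            Channels = channels;
            waveIn = new WaveInEvent
            {
                DeviceNumber = deviceNumber,
                WaveFormat = new WaveFormat(samplingRate, 16, channels), // 16-bit PCM
            };
            waveIn.DataAvailable += OnDataAvailable;
        }

        public int SamplingRate { get; }
        public int Channels { get; }
        public string Error => null;

        // Photon Voice calls this once: store its callback, then start capturing.
        public void SetCallback(Action<short[]> callback,
                                Photon.Voice.ObjectFactory<short[], int> bufferFactory)
        {
            pushCallback = callback;
            waveIn.StartRecording();
        }

        // Adapter: convert NAudio's 16-bit PCM byte buffer to short[] and push it.
        private void OnDataAvailable(object sender, WaveInEventArgs e)
        {
            var samples = new short[e.BytesRecorded / 2];
            Buffer.BlockCopy(e.Buffer, 0, samples, 0, e.BytesRecorded);
            pushCallback?.Invoke(samples);
        }

        public void Dispose()
        {
            waveIn.DataAvailable -= OnDataAvailable;
            waveIn.Dispose();
        }
    }
    ```

    An instance of such a class would be passed as the source argument to CreateLocalVoiceAudioFromSource(); the DataPushCallBack method and the explicit SetCallback call from the earlier snippet are then no longer needed.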

  • Suresh
    Suresh
    edited August 3

    Thanks for the feedback!

    I'm able to push audio, and I can hear voice. I also tried running two instances of my sample app on the same laptop, and I can hear what I captured.

    When I launch the same app from two different machines, I can see that each connects to the master server, joins the room, and creates a local voice. The interest group is set to 0. But I'm not receiving the OnRemoteVoiceInfo() event.

    DebugEchoMode works on each laptop individually. Also, running two instances of my application on the same machine without echo mode works.

    Am I missing anything?

  • Make sure that you join the same room in the same region. Use ConnectToRegionMaster() and OpJoinOrCreateRoom().

    Increase the logging level and check the logs for join-procedure reports.
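
    For reference, the two calls above might be wired roughly like this ("eu" and "voice-demo" are example values, and the OpJoinOrCreateRoom overload differs between Realtime versions):

    ```csharp
    // Both clients must target the same region...
    transport.ConnectToRegionMaster("eu");

    // ...and, once connected to the master server, the same room name.
    transport.OpJoinOrCreateRoom(
        new Photon.Realtime.EnterRoomParams { RoomName = "voice-demo" });
    ```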

  • Suresh
    Suresh
    edited August 4

    Thank you!

    If both players start recording after both have joined the room, it works as expected. If a player starts recording (creates a local voice) before others join the room, the new players who join the same room do not receive the OnRemoteVoiceInfo() event.

    Player 1 joined the room and started recording.

    Player 2 joined the same room later. What is the expected behavior of OnRemoteVoiceInfo() in this case? I don't see it triggering.

    Also, I checked lbc.VoiceClient.RemoteVoiceInfos() in OnJoinedRoom() on the newly joined player; that is also empty.

    How do I subscribe to existing voices in a room after joining it?

  • vadim
    vadim mod
    edited August 4

    A client with an active local voice handles the (other player's) Join event and sends remote voice info to the joined client.

    There should be "Sending info:" lines in the 1st client's log and "Info received:" lines in the 2nd's.

    You can also check if the 1st client receives the Join event in onEventActionVoiceClient() in LoadBalancingTransport.cs.