PhotonCloud: who manages buffered RPCs?

Hello everyone,

I was curious about buffered RPCs: who stores them? From what I've observed, it's the MasterClient that stores them. However, when the MasterClient changes during a game, they are not transferred to the new MasterClient. I hope I'm wrong...

Otherwise, is there a way to transfer all the buffered messages from one MasterClient to another? Or, ideally, to have the Cloud server store them without me having to write my own server code?

What are the best practices with buffered calls?

Thanks!

Comments

  • Tobias
    The server stores buffered RPCs and Instantiates.

    PUN sends all those messages via Photon "events". This is what the server caches. When a user leaves (or disconnects), the buffered events are removed.
    To allow scene instantiation (independent of player) we have a way of telling the server "this event belongs to the room". Then the instantiation or buffered RPC will stay buffered, no matter which player sent it. We limited this ability to master clients.
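    In code, this boils down to a single OpRaiseEvent call with a caching option. A rough sketch of what NetworkingPeer does for a buffered RPC (simplified, not the verbatim source):

    // A buffered RPC is just a Photon event raised with a caching option,
    // so the server replays it to players who join later.
    this.OpRaiseEvent(PhotonNetworkMessages.RPC, rpcEvent, true, 0,
        EventCaching.AddToRoomCache,   // cached, owned by the sending actor
        ReceiverGroup.Others);
    // Scene instantiation uses EventCaching.AddToRoomCacheGlobal instead,
    // which detaches the cached event from the sending actor.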
  • Tobias wrote:
    When a user leaves (or disconnects), the buffered events are removed.
    Which means buffered events are scoped to the user's life span: MasterClient or not, if you disconnect, all your buffered RPCs are cleared. That makes sense; otherwise a new client would connect and receive events for a nonexistent PhotonView. So for a simple game in which users can join or leave a room at any point, it's better not to use buffered messages, but to manually resend the would-be-buffered messages to each new user, as in the sketch below.
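    A minimal sketch of that manual approach (the method name and the state argument are placeholders; this sits inside a Photon.MonoBehaviour):

    // PUN callback: a player entered the room.
    void OnPhotonPlayerConnected(PhotonPlayer newPlayer)
    {
        // Let a single client (e.g. the MasterClient) do the resending,
        // otherwise the newcomer receives the same state once per client.
        if (PhotonNetwork.isMasterClient)
        {
            // Targeted, non-buffered RPC: goes only to the new player.
            photonView.RPC("SetPlayer", newPlayer, currentPlayerState);
        }
    }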


    However, as you say in the following statement, PhotonNetwork.InstantiateSceneObject doesn't belong to the MasterClient but to the Room! So who is the sender in such a case? The current MasterClient?
    Tobias wrote:
    To allow scene instantiation (independent of player) we have a way of telling the server "this event belongs to the room". Then the instantiation or buffered RPC will stay buffered, no matter which player sent it. We limited this ability to master clients.
    Since everything is an event, theoretically this is a limitation of PUN and not of Photon, right? Looking deeper into the PUN code, I found that when we scene-instantiate, the event is sent cached as RoomCacheGlobal, and that is the only place where PUN uses this kind of caching. So I was considering adding a new PhotonTargets value for the scene, so that when calling OpRaiseEvent we could send the event cached as RoomCacheGlobal. Do you think that would be feasible?
  • I can confirm: after applying the following patch, I've been able to send buffered RPC messages that stay in the Room even after a client disconnects. As for who the sender is when it's a cached global room message: the sender is set to null!
    --- a/Assets/Photon Unity Networking/Plugins/PhotonNetwork/PhotonClasses.cs     
    +++ b/Assets/Photon Unity Networking/Plugins/PhotonNetwork/PhotonClasses.cs     
    @@ -1,3 +1,5 @@
    +#define GlobalBufferedRPCs
    +
     // ----------------------------------------------------------------------------
     // <copyright file="PhotonClasses.cs" company="Exit Games GmbH">
     //   PhotonNetwork Framework for Unity - Copyright (C) 2011 Exit Games GmbH
    @@ -25,7 +27,13 @@ internal class PhotonNetworkMessages
     
     /// <summary>Enum of "target" options for RPCs. These define which remote clients get your RPC call. </summary>
     /// \ingroup publicApi
    -public enum PhotonTargets { All, Others, MasterClient, AllBuffered, OthersBuffered } //.MasterClientBuffered? .Server?
    +public enum PhotonTargets { All, Others, MasterClient, AllBuffered, OthersBuffered
    +#if GlobalBufferedRPCs
    +    , AllGlobalBuffered
    +    , OthersGlobalBuffered
    +#endif
    +
    +} //.MasterClientBuffered? .Server?
     
     /// <summary>Used to define the level of logging output created by the PUN classes. Either log errors, info (some more) or full.</summary>
     /// \ingroup publicApi
    
    --- a/Assets/Photon Unity Networking/Plugins/PhotonNetwork/NetworkingPeer.cs    
    +++ b/Assets/Photon Unity Networking/Plugins/PhotonNetwork/NetworkingPeer.cs    
    @@ -1,3 +1,5 @@
    +#define GlobalBufferedRPCs
    +
     // --------------------------------------------------------------------------------------------------------------------
     // <copyright file="NetworkingPeer.cs" company="Exit Games GmbH">
    @@ -2328,10 +2329,25 @@ internal class NetworkingPeer : LoadbalancingPeer, IPhotonPeerListener
                 // Execute local
                 this.ExecuteRPC(rpcEvent, this.mLocalActor);
             }
    +#if GlobalBufferedRPCs
    +        else if (target == PhotonTargets.AllGlobalBuffered)
    +        {
    +            this.OpRaiseEvent(PhotonNetworkMessages.RPC, rpcEvent, true, 0, EventCaching.AddToRoomCacheGlobal, ReceiverGroup.Others);
    +
    +            // Execute local
    +            this.ExecuteRPC(rpcEvent, this.mLocalActor);
    +        }
    +#endif
             else if (target == PhotonTargets.OthersBuffered)
             {
                 this.OpRaiseEvent(PhotonNetworkMessages.RPC, rpcEvent, true, 0, EventCaching.AddToRoomCache, ReceiverGroup.Others);
             }
    +#if GlobalBufferedRPCs
    +        else if (target == PhotonTargets.OthersGlobalBuffered)
    +        {
    +            this.OpRaiseEvent(PhotonNetworkMessages.RPC, rpcEvent, true, 0, EventCaching.AddToRoomCacheGlobal, ReceiverGroup.Others);
    +        }
    +#endif
             else if (target == PhotonTargets.MasterClient)
             {
                 if (this.mMasterClient == this.mLocalActor)
    

    E.g.:
    photonView.RPC("SetPlayer", PhotonTargets.AllGlobalBuffered, player);
    

    However, I'm unaware of the side effects such a change may have. For example, calling PhotonNetwork.RemoveAllBufferedMessages won't remove these messages; removing global messages needs more investigation, but from what I saw it should be possible. Also, I assumed - and observed - that Photon destroys the room along with the global messages when no more users are connected. Is this correct?
  • Tobias
    scadieux:
    As far as I can see, you got it all right. PUN is the limiting factor in this case: it doesn't expose all the options of Photon LoadBalancing (Cloud).

    Usually, some actor (a player in the room, with ID >= 1) is the sender of events. We fake "room events" by using actorId = 0, so the actual sender gets lost. We didn't expect this to be a problem.
    Cached events can be deleted per actor, so deleting actor zero's events would remove the room's events. Again, PUN doesn't allow this at the moment, but it is a simple change.
    Additionally, a filter can be passed in when deleting events. This is used when we delete just the RPCs instead of all events. The filter can specify an event code or content (key + value). If more than one event fits the filter and actorID, all fitting ones are removed from the cache.
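    An untested sketch of such a filtered removal, modeled on how PUN cleans up a player's cached RPCs (the (byte)0 key holding the PhotonView ID is an assumption about the rpcEvent layout, and removing actor zero's events would additionally require the op to carry that actor number, which this signature doesn't expose):

    // Content filter: only cached RPC events whose data matches these
    // key/value pairs are removed from the room cache.
    Hashtable rpcFilter = new Hashtable();
    rpcFilter[(byte)0] = viewId; // assumed key for the PhotonView ID

    this.OpRaiseEvent(PhotonNetworkMessages.RPC, rpcFilter, true, 0,
        EventCaching.RemoveFromRoomCache, ReceiverGroup.Others);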

    When you cache events excessively, the sum of data sent to clients on join might break the clients, so you should limit yourself. At the moment we can't provide a fixed limit: it depends a lot on the client platform, so we didn't want to implement limits here. We will work on client-side analysis tools, though, that help you detect bad conditions.

    When the last player leaves the room, all events, all properties and the room itself get cleaned up.


    Maybe a bit late, but now I wonder if you actually need RPCs and event caching at all. You might work with properties instead: Room.SetCustomProperties() lets you set them easily, and they stay in the room, independent of caching. They don't grow if you just change the values of individual properties (the values get replaced), and they can even be sent to the lobby individually (for room listing).
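    A minimal sketch of that property-based approach (the keys are just examples; Hashtable here is ExitGames.Client.Photon.Hashtable):

    Hashtable props = new Hashtable();
    props["round"] = 3;
    props["bossAlive"] = false;

    // Stored in the room and merged by key, so repeated calls replace values
    // instead of growing a cache; new joiners receive the current values.
    PhotonNetwork.room.SetCustomProperties(props);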
  • It's not too late to step back; I'll look into it. Thanks for the support, I feel more confident about modifying PUN now :) Also, to simplify future merging, do you have a public git repository?
  • Tobias
    We don't have a public repository yet but were thinking about it already.
    Like so often, it's a matter of time :(