Question about OpRaiseEvent() and SendOutgoingCommands()

Hello,

We just started using Photon Realtime for our Unity game (we're not using PUN because we already have the object state / delta compression logic built into our game; we only use Photon Cloud as a kind of relay server, which has served us just fine), and we have a few questions:

1) We use a byte array as the "customEventContent" parameter for OpRaiseEvent(). I know that this call does not send immediately, but only places the data in a send buffer until SendOutgoingCommands() is called. My question is: can I reuse the same byte array for multiple OpRaiseEvent() calls, but with different content? The reason is to save some GC allocations. My understanding is that OpRaiseEvent() serializes the data. Does that mean the custom payload data is cloned in the process? If so, would it be safe for me to reuse the buffer, even if the previous event has not been sent out yet?

2) Based on https://doc.photonengine.com/en-us/realtime/current/troubleshooting/analyzing-disconnects#unity, it looks like SendOutgoingCommands() is thread safe. If so, can I just run a non-Unity thread and have it call SendOutgoingCommands() periodically, rather than doing it in a MonoBehaviour, so that it works regardless of whether Unity's main thread is stuck loading scenes?

And another unrelated question,
3) We found it takes a while to connect to the name server -> receive the region list -> call PingMinimumOfRegions() -> find the best region, which is reasonable, since it needs to send a ping request to each region. My question is: does the client count towards CCU while discovering the best region? If it does not, I would like each client to perform this ping on application start, to reduce the handshaking time when a room is needed. However, if it does affect CCU, then we need to evaluate more carefully (as some players might only play single player and never need this info at all).

Thank you very much!

Comments

  • Hello Jerry.

    About 1:
    For those cases, we have good news: in library and SDK v4.1.4.4, we added support for almost zero-allocation sending and receiving of byte[] events.
    We added a wrapper called ByteArraySlice, which is a pooled resource. If you opt to use it, you can send and receive this type instead of a byte[] (you can still send a byte[], but it is read into pooled memory slices).
    There is no "manual" for this yet, so please take a look at the reference for PhotonPeer.UseByteArraySlicePoolForEvents and PhotonPeer.ByteArraySlicePool.
    TL;DR: You get a slice before sending and the peer will release it. When receiving a slice, your code has to call ByteArraySlice.Release() when the slice is no longer needed.
    Also, set PhotonPeer.ReuseEventInstance = true to reuse a single EventData instance for OnEvent calls.

    Let us know if you have questions. Let's do that in a separate thread.
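
    The acquire/release pattern described above can be sketched in a few lines. This is a language-agnostic Python sketch of the pooling idea only, not the actual Photon C# API (ByteArraySlice, ByteArraySlicePool, and the Release() contract are merely mirrored; all names here are made up):

```python
class ByteSlicePool:
    """Minimal sketch of a pooled byte-buffer slice (NOT Photon's API):
    acquire a buffer, fill it, let the sender serialize it, then release it."""

    def __init__(self):
        self._free = []  # stack of reusable bytearrays

    def acquire(self, size):
        # Reuse a pooled buffer if one is large enough; otherwise allocate.
        for i, buf in enumerate(self._free):
            if len(buf) >= size:
                return self._free.pop(i)
        return bytearray(size)

    def release(self, buf):
        # Hand the buffer back so the next acquire() can reuse it.
        self._free.append(buf)


pool = ByteSlicePool()

first = pool.acquire(64)
first[0:3] = b"abc"      # fill the event payload
pool.release(first)      # done sending: return it to the pool

second = pool.acquire(64)
assert second is first   # same buffer reused, no new allocation
```

    The point of the pattern is that the buffer only goes back to the pool after the sender has serialized (copied) it, so reusing a slice can never corrupt an event that is still queued.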


    About 2:
    It is considered thread safe, yes. It uses a lock, so it could still block the main thread. Also, it makes sense to fine-tune creating updates together with sending them (less local lag / variance).
    Have a look at the ConnectionHandler class. We call SendOutgoingCommands on the main thread, but there is a fallback thread that sends ACKs and keeps the connection alive when the main thread does not do its work.
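
    The two-thread layout described above can be sketched as follows. This is a generic Python sketch of the pattern (regular sends from the main loop, plus a watchdog thread that only sends when the main loop has gone silent), not the actual ConnectionHandler code; the class name and the 100 ms threshold are made up:

```python
import threading
import time

class FallbackSender:
    """Sketch of a keep-alive fallback thread (NOT Photon's ConnectionHandler):
    the main thread calls send() every frame; the background thread only calls
    the send function when the main thread has not done so recently."""

    def __init__(self, send_fn, max_silence=0.1):
        self._send_fn = send_fn
        self._lock = threading.Lock()        # sends are guarded, like the real peer
        self._last_send = time.monotonic()
        self._max_silence = max_silence
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._watchdog, daemon=True)
        self._thread.start()

    def send(self):
        # Called from the main (game) thread every frame.
        with self._lock:
            self._send_fn()
            self._last_send = time.monotonic()

    def _watchdog(self):
        # Fallback: keep sending if the main thread is stuck
        # (e.g. while Unity is loading a scene).
        while not self._stop.is_set():
            time.sleep(self._max_silence / 2)
            with self._lock:
                if time.monotonic() - self._last_send > self._max_silence:
                    self._send_fn()
                    self._last_send = time.monotonic()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

    Because both paths take the same lock, a fallback send and a late main-thread send cannot interleave, which is the same reason a locked SendOutgoingCommands() is safe to call from either thread.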


    About 3:
    The pinging can be sped up by storing a BestRegionSummary.
    Currently, this is only implemented in ConnectUsingSettings(AppSettings appSettings) with the appSettings.BestRegionSummaryFromStorage being set.
    The summary is available once the client has pinged all regions and picked the best one: LoadBalancingClient.SummaryToCache.
    While pinging, the client counts as one CCU as it's connected to the NameServer.
    It's possible to make your logic store the current best region while the client is running (or with an expiry date in the settings). When that's available, simply connect to a fixed region (skipping best region selection entirely).
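
    Caching the summary with an expiry can be sketched like this. A generic Python sketch, not the Photon API: the storage key, the one-week TTL, and the summary string are all assumptions; in Unity the dict-like `store` would be something like PlayerPrefs, and the real fields are appSettings.BestRegionSummaryFromStorage and LoadBalancingClient.SummaryToCache:

```python
import json
import time

CACHE_TTL = 7 * 24 * 3600  # assumption: trust a cached best region for a week

def load_best_region(store):
    """Return the cached best-region summary if still fresh, else None."""
    raw = store.get("best_region")
    if raw is None:
        return None
    entry = json.loads(raw)
    if time.time() - entry["stored_at"] > CACHE_TTL:
        return None  # expired: re-ping all regions on the next connect
    return entry["summary"]

def save_best_region(store, summary):
    # Called once pinging finished and a summary is available.
    store["best_region"] = json.dumps(
        {"summary": summary, "stored_at": time.time()})

store = {}
assert load_best_region(store) is None        # nothing cached yet: ping regions
save_best_region(store, "eu;35;eu,us,asia")   # hypothetical summary string
assert load_best_region(store) == "eu;35;eu,us,asia"  # connect to fixed region
```

    With a cache like this, players who only play single player never trigger the pinging at all: it only runs when a connect is actually needed and the cached summary is missing or expired.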


    Hope this helps,
    Tobias
  • Hi Tobias,

    Thanks so much for the answers. It was very helpful!