Is it possible to run Photon Cloud instances at a higher update rate?

edited February 2014 in Native
From reading up on Photon Cloud and Photon in general, is it still the case that the Photon Cloud instances run at 10 updates/sec? If so, this would add about 100 milliseconds to the round-trip packet times. Normally this wouldn't be an issue (I've run servers at that rate as well), but from my testing of mobile networks, where I'm seeing 500+ ms latency, any improvement would help. Is it possible to increase the update rate or reduce the inter-server latency somehow? Perhaps a lower-level API with less latency?



  • Hi ddn278.

    "is it still the case that the Photon Cloud instances run at 10 updates / sec"
    This has never been the case. Where did you get that info from?
  • From this post

    which clearly states there is a 100 ms internal latency for packet processing on the server side, which I assume is the same server technology running Photon Cloud. That comes directly from a developer, so unless I'm misreading it..

  • Yes, you are indeed misreading that. My colleague stated that PUN clients send out messages in intervals of 100 ms. This is not a server-side thing. In fact, every client only sends messages to the server inside sendOutgoingCommands(), which gets called by service(). The native demos by default also call service() every 100 ms, but you are free to choose a different interval in your games, for example calling it once every frame, or even once after every function call that adds a message to the outgoing queue. Just keep in mind that sending too often adds unnecessary overhead for small messages: multiple messages may fit into the same UDP packet, but triggering a separate UDP packet for every message means paying the full protocol overhead for each one.
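    The overhead tradeoff described above can be sketched with a small self-contained calculation. The constants here are illustrative assumptions (real UDP/IP headers are 28 bytes for IPv4; the reliable-UDP protocol header size and message size are made up for the example), not Photon's actual numbers:

    ```cpp
    #include <cstdio>

    // Illustrative constants -- NOT Photon's real numbers.
    const int kUdpIpOverhead  = 28;  // UDP + IPv4 header bytes per packet
    const int kProtocolHeader = 12;  // assumed reliable-UDP header per packet
    const int kMessageSize    = 20;  // small game message payload

    // Bytes on the wire when each message triggers its own packet.
    int bytesPerMessagePackets(int messages) {
        return messages * (kUdpIpOverhead + kProtocolHeader + kMessageSize);
    }

    // Bytes on the wire when all messages are batched into one packet
    // (ignoring MTU limits for simplicity).
    int bytesBatched(int messages) {
        return kUdpIpOverhead + kProtocolHeader + messages * kMessageSize;
    }

    int main() {
        int n = 10;
        printf("separate packets: %d bytes\n", bytesPerMessagePackets(n)); // 600
        printf("one batched packet: %d bytes\n", bytesBatched(n));         // 240
        return 0;
    }
    ```

    Even with these rough numbers, sending ten small messages individually costs more than twice the bytes of one batched packet, which is why calling the send function after every single message is wasteful.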
  • It makes sense to lock the send calls to whenever you're done with your update and have created your messages. This reduces the local lag (the time between creating an event and sending it). As Ludi said: keep in mind that sending often doesn't guarantee that everything arrives, or that all packets have the same lag. You need to compensate for lag in any case. If you hide it right from the beginning, you'll just run into later the same issues you avoid now.
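    A minimal sketch of this pattern, using a stub client (the real Photon API differs; the names here are illustrative): queue events during the frame's update, then flush them exactly once at the end of the frame, so each frame's messages share one packet and local lag stays low.

    ```cpp
    #include <cassert>
    #include <string>
    #include <vector>

    // Stub standing in for a networking client such as Photon's native one.
    // The real API differs; this only illustrates the call pattern.
    struct Client {
        std::vector<std::string> outgoing;  // queued but not yet sent
        int packetsSent = 0;
        int messagesSent = 0;

        void queueEvent(const std::string& msg) { outgoing.push_back(msg); }

        // Flush everything queued this frame as one batch.
        void sendOutgoingCommands() {
            if (outgoing.empty()) return;
            ++packetsSent;
            messagesSent += static_cast<int>(outgoing.size());
            outgoing.clear();
        }
    };

    int main() {
        Client client;
        for (int frame = 0; frame < 3; ++frame) {
            // Game update: several events are created per frame...
            client.queueEvent("position");
            client.queueEvent("rotation");
            // ...then send exactly once, after the update is done.
            client.sendOutgoingCommands();
        }
        // Three frames, one packet each, two messages per packet.
        assert(client.packetsSent == 3);
        assert(client.messagesSent == 6);
        return 0;
    }
    ```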
  • OK, thanks for that info. I'll look into reducing the latency from the client side then, over which I have much more control. Do you have any metrics on the internal server-side latency of a packet? I'm just curious how much overhead there is and how much of the latency is client-related.

  • Afaik the server-side lag should be around 5 ms, but my colleagues Nicole and Philip, who are more active in the server area of the forum, may be of better help on this topic than me.