Cloud Server Latency

terdos
edited March 2012 in Photon Server
I have recently been testing a project I have been working on against the cloud. Previously, all work was done against a local machine on the same LAN/WLAN, or against a server set up offsite on residential DSL (10 Mbit down, 640 kbit up). Both installs of the LoadBalancing server were modified only to reflect the different IP addresses.

I have been keeping track of message trip times for internal use, measured from when the message was sent (i.e., when Peer.OpCustom() was called) to when the response was received via the callback. The following are approximate results:

- internally: average 75 ms, ranging from 60 to 175
- offsite: average 210 ms, ranging from 190 to 300
- Exit Games cloud: average 190 ms, ranging from 150 to 300
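The measurement scheme described above can be sketched roughly like this (the names here are illustrative, not the actual Photon client API; in the real client this would wrap the Peer.OpCustom() call and the response callback):

```python
import time

# Stamp each message when it is handed to the peer, and compute the
# trip time when the matching callback fires.
pending = {}

def record_send(msg_id):
    # Called right before the operation is queued for sending.
    pending[msg_id] = time.monotonic()

def record_receive(msg_id):
    # Called from the response callback; returns trip time in ms.
    return (time.monotonic() - pending.pop(msg_id)) * 1000.0
```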

I am sending only 10 messages per second per client, and in these tests I had only two clients running. Each message consists of two integers and two floats, plus the overhead of the Hashtable, keys, etc. The total packet count in and out of the server averages about 35 per second each way, and the data rate is around 65 kbit/s in and 65 kbit/s out.
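As a rough sanity check on those figures (my own back-of-envelope arithmetic, not something from the SDK):

```python
# Reported averages, each direction.
packets_per_sec = 35
kbit_per_sec = 65

# Average on-the-wire size per packet implied by those two numbers.
bytes_per_packet = kbit_per_sec * 1000 / 8 / packets_per_sec
print(round(bytes_per_packet))  # -> 232

# The raw payload (2 ints + 2 floats at 4 bytes each) is only 16 bytes,
# so most of each packet is Hashtable/key, operation, and protocol overhead.
payload_bytes = 2 * 4 + 2 * 4
```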

The ping from my client(s) to app.exitgamescloud.com is ~50 ms, which leaves an average of about 100 ms unaccounted for when using the Exit Games cloud.
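One way to budget the averages (my own estimate, assuming ~50 ms of server-side send batching and a client that services once per ~16 ms frame; a message arriving at a uniformly random moment waits half the interval on average):

```python
ping_ms = 50  # measured round trip to app.exitgamescloud.com

# Average extra waits, each delay contributing half its interval.
server_batch_ms = 50 / 2      # assumed DataSendingDelayMilliseconds = 50
client_send_ms = 16 / 2       # message waits for the next Service() call
client_dispatch_ms = 16 / 2   # reply waits for the next dispatch

estimated_ms = ping_ms + server_batch_ms + client_send_ms + client_dispatch_ms
print(estimated_ms)        # -> 91.0
print(190 - estimated_ms)  # -> 99.0, roughly the ~100 ms gap
```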

So, after all that, here are my questions:

1. Is there a configuration setting in LoadBalancing for how long messages are batched before the server sends them? I seem to remember there being one for the Lite and LiteLobby applications.
2. If so, where can it be found in the configuration, and what is app.exitgamescloud.com configured for?
3. Is there any difference between the trial and paid versions of the cloud that would decrease latency? Between Indie and non-Indie? Between any of the packages?
4. And lastly, are there any suggestions to minimize the latency, or perhaps reduce the fluctuations in it?

Comments

  • Tobias
    The variance in latency is something we want to improve. We're about to add another hosting location and are now monitoring latency more closely. At the moment you could be out of luck and end up on a server in the US, which would add a bit of extra lag.
    Keep in mind: the ping to our cloud master server is not always identical to the ping to a game server. Game servers are assigned on demand and by load.
    Some of the lag could be client-side, depending on how often you send and dispatch. If latency is a concern, you could send in the same frame in which you created the updates, and dispatch more often.

    1. If this question is about the clients:
    There is no configuration for this, neither in Photon Unity Loadbalancing nor in the Photon LoadBalancing API (from the DotNet/Unity client SDKs).
    We supply the LoadBalancing code with the SDKs and tried to keep it simple enough to be customized. This is usually cleaner than a mass of settings for everything.
    If this is about the server:
    The PhotonServer.config has the attributes DataSendingDelayMilliseconds and AckSendingDelayMilliseconds. These are the main settings for either accumulating commands into fewer packages or, conversely, minimizing latency. The server SDK comes with a PDF describing all settings.
    These settings affect all apps of this server instance.
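For reference, a sketch of how those attributes might appear in PhotonServer.config (attribute names from the thread; the enclosing instance element name and the other attributes depend on your setup and SDK version):

```xml
<!-- Illustrative fragment only: values shown are the 50 ms defaults
     discussed in this thread. -->
<LoadBalancing
    DataSendingDelayMilliseconds="50"
    AckSendingDelayMilliseconds="50">
    <!-- listeners, runtime, applications, etc. -->
</LoadBalancing>
```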

    2. The Photon Cloud uses the default settings from the SDK.

    3. There is no difference in latency. All clients use the same servers and there is no differentiation.

    4. See above. Aside from local lag, you can't do much about it until we set up new hostnames for US and EU servers and offer (alternative) location-based DNS resolution.
  • terdos
    Thanks for the reply.

    After I wrote the post I thought about it for a while, and the numbers I am getting are not too surprising, assuming ~50 ms of batching on the server.

    DataSendingDelayMilliseconds and AckSendingDelayMilliseconds are the configuration options I was looking for; in the documentation, however, they are listed under Instance, which I suppose was confusing me. My default configuration has both set to 5 ms, while the documentation claims the default should be 50. Judging from the numbers I am getting, 50 makes more sense, so I still don't know where the latency is coming from.

    I also realized that in the client I am calling Service() only once, at the start of the frame. Anything that goes out during that frame is batched until the next Service() call, ~16 ms later. I may modify that.
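The cost of that once-per-frame service call is easy to estimate (my own arithmetic; assumes messages are created at uniformly random points within the frame):

```python
frame_ms = 16  # one service call per frame

# A message created at a uniformly random moment waits, on average,
# half a frame before the next service call puts it on the wire.
avg_send_wait_ms = frame_ms / 2
print(avg_send_wait_ms)  # -> 8.0

# Sending outgoing commands right after queueing the operation
# (or servicing more than once per frame) removes most of this wait.
```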

    I will have to do some playing and see what is going on.