command queue problem

I've created a load test for PUN and found some oddities.
First the test scenario:
I've taken the ChatWorker sample and changed it a little: the client that creates the room acts as the authoritative server. This server client hosts 100 player objects, while every other client creates only 1 player.
The server side is the LoadBalancing sample with a small change so that it starts a new Unity server instance when a client connects. That should have nothing to do with the problem, though.
I'm using the 3.0 RC7 server SDK.

Now for the problems:
a) If I change the PhotonView's "Observe" option to Unreliable, I see that on the server the QueuedOutgoingCommands count grows and grows and grows ... BUT the clients still receive the commands and the 100 server-controlled players move around. Is this a bug? Is something wrong with unreliable communication via the server?

b) With the ReliableDeltaCompressed setting, the queue gets very large once the 100 server-controlled players move, and it slowly shrinks when they stop. The result of the large queue is that movement is replicated to the clients with a large delay (it easily exceeds 30 seconds once the queue contains 1000 commands).
I've played with lots of options, but the only ones that seem to have any effect at all are:
PhotonNetwork.sendRateOnSerialize
PhotonNetwork.sendRate
And maybe PhotonNetwork.unreliableCommandsLimit, but that could only be a side-effect of large queues.

I've got a reasonable result with sendRateOnSerialize=4 and sendRate=120 (which should work well enough with inter- and extrapolation), but I wonder what other settings exist that could have an impact on this behaviour?
Also, can I change sendRateOnSerialize per view (or use a function to prevent serialization)? I may have a good idea of how fast an object moves, and could scale replication down for slow-moving objects.
Another question: is it a good idea to change those two sendRate settings at runtime, as a kind of congestion control reacting to the queue warnings?
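
To make the per-view throttling idea concrete, here is a minimal sketch of what I have in mind, assuming PUN's standard OnPhotonSerializeView callback; the script name and the minMove threshold are my own invention:

```csharp
using UnityEngine;

// Observe this script instead of the Transform directly.
// With ReliableDeltaCompressed, re-sending an unchanged value produces no
// difference, so a slow-moving object effectively stops generating traffic.
public class ThrottledPositionSync : Photon.MonoBehaviour
{
    // Hypothetical threshold: below this movement, keep sending the old value.
    public float minMove = 0.05f;
    private Vector3 lastSent;

    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            if ((transform.position - lastSent).sqrMagnitude > minMove * minMove)
                lastSent = transform.position;
            // Unchanged value -> delta compression sends nothing.
            stream.SendNext(lastSent);
        }
        else
        {
            transform.position = (Vector3)stream.ReceiveNext();
        }
    }
}
```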

Thanks!

Comments

  • The behavior is expected.

Unreliable simply sends at the rate you define. With a server and a room of 100 players, each send fans out to 100*100 messages, so at the very least sendRate * 100 * 100 messages per frame, since there is no scope handling (SetScope exists, but the filtering happens on the receiving client, not on the sending side).

With ReliableDeltaCompressed, the number of messages drops when objects don't move, because you stop sending updates if there are none (only the difference is sent, so no difference means no sending).
  • Thanks, but I think you misunderstood an important detail:
There are not 100 different players (as in clients) but 100 server-controlled GameObjects in the scene (think AI enemies). So I don't think 100*100 messages are generated (why would the server send the state to itself?).
Also, the number of real clients I connect has no real impact on the behaviour (granted, I only tested with up to 4 clients): the problem scales with the number of server-controlled GameObjects, not with the number of receivers of state updates.
'Hosts 100 players' doesn't say anything about how you did it, so I assume it's done through a peer or something similar, not by integrating chat bots over Photon using SendMessage. Unless you use SendMessage directly within Photon, or within the PUN master client (where it could also be direct method invokes), it will be that many messages.

So you might need to describe your architecture better; otherwise one has to assume that each of those 100 objects has a PhotonView observing a script, and thus there are 100 sending actors, independent of where they are.
The queues will get bigger if you create more data than you're able to send.
PUN is limited by its default sendRate values. At the low level, the client will not try to create as many UDP packages as possible, but only as many as you make it send (which is controlled by the send rate).

Of course the clients will receive the data, but you will notice the lag growing over time, and the connection may eventually break. This is the same for reliable and unreliable, but unreliable has less overhead and also skips operations/events when newer ones are available, which lets it catch up faster once you stop sending.

sendRateOnSerialize defines how often you create data (position updates). Each observed item sends them at that rate.

    Let us know more about your architecture or goals and we can provide feedback how you could implement this.
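
To illustrate how the two rates interact, and the runtime-tuning idea from the question: a minimal sketch, assuming the documented PhotonNetwork.sendRate / PhotonNetwork.sendRateOnSerialize properties; the queue-length source, thresholds, and method name are hypothetical:

```csharp
using UnityEngine;

public class SendRateTuner : MonoBehaviour
{
    void Start()
    {
        // The values the original poster found workable:
        PhotonNetwork.sendRate = 120;           // how often queued data is dispatched
        PhotonNetwork.sendRateOnSerialize = 4;  // how often OnPhotonSerializeView creates data
    }

    // Hypothetical congestion reaction: if your own measurement says the
    // outgoing queue is backing up, create data less often; recover slowly.
    public void OnQueueWarning(int queuedCommands)
    {
        if (queuedCommands > 1000 && PhotonNetwork.sendRateOnSerialize > 1)
            PhotonNetwork.sendRateOnSerialize -= 1;
        else if (queuedCommands < 100 && PhotonNetwork.sendRateOnSerialize < 10)
            PhotonNetwork.sendRateOnSerialize += 1;
    }
}
```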