problem about send buffer full
seasonwind
2012-09-10 12:38:17
Hi, I encountered a 'send buffer full' problem. Our application has a server-to-server connection, and sometimes it gets disconnected and the Photon log says 'send buffer full'. I searched the forum and some old topics said it can be solved by modifying the server config file. The photon-configuration.pdf says there is a 'TCPBufferSize' option, and I also looked at the PhotonServer.config in your Photon samples, but there is no 'TCPBufferSize' option in the PhotonServer.config. I have tried a lot, like adding 'TCPBufferSize' in the <!-- Instance settings --> <Default> section, increasing the value of MaxQueuedDataPerPeer and so on, but they didn't solve the problem. Would you please give me a sample PhotonServer.config which has the 'TCPBufferSize' option and really changes the buffer size? My Photon version is v3-0-37-3631.
Thank you.
Comments
[Deleted User]
2012-09-11 09:05:50
Hi seasonwind!
First, a bit of background: we call the "send buffer full" problem "flow control". I understand that it is a bit annoying for you, but it was introduced to prevent Photon from being "flooded" by immense amounts of data that it cannot handle. It is in fact a protection mechanism. You will always see this "issue" if you try to send more data than the socket can send out (or Photon can handle).
There ARE some settings to tune this protection mechanism (I will explain them in a minute), but in the end, there will ALWAYS be a limit on how much you can send - no matter how high it is. So you should think about what your app should do when it exceeds this limit: in most cases, you should simply stop sending data for a while. You might have noticed that the Peer class has an OnSendBufferFull() method. It is implemented in the base classes and disconnects the peer. You might want to override that method so that the peer is not disconnected but simply stops sending for a while, until OnSendBufferEmpty is called - and then retries and continues sending.
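For illustration, a rough sketch of that pattern could look like this (assuming a PeerBase subclass; the _sendPaused flag and the FlushPending() helper are placeholders for your own queueing logic, not part of the SDK, and the constructor plus the other required members are omitted - exact signatures may differ in your SDK version):

// Inside your PeerBase subclass: pause instead of disconnecting when the send buffer fills up.
private volatile bool _sendPaused;

protected override void OnSendBufferFull()
{
    // The base implementation disconnects the peer; here we only pause our own sending.
    _sendPaused = true;
}

protected override void OnSendBufferEmpty()
{
    // The buffer has drained: resume and retry whatever we held back.
    _sendPaused = false;
    FlushPending();
}

private void FlushPending()
{
    // Re-send the data your application queued while _sendPaused was true.
}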
That being said - if you only need to handle a few "spikes" with large amounts of data, and you feel that your server & network infrastructure can handle a bit more load, you can modify a few settings.
TcpBufferSize does not help much - it only describes how much data is buffered for one write attempt on the socket. But there can be many pending writes on the socket - and also data that is enqueued before it is actually sent as a "pending write" to the socket. So you will receive an OnSendBufferFull when your peer sends more data than: TcpBufferSize * (MaxPendingWrites + MaxQueuedBuffers). These are the settings you are looking for. Note: the setting is per peer - so keep in mind how many peers you will have. You might use higher limits if you only have a few S2S connections.
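For illustration only (the TcpBufferSize value here is an assumption, not necessarily your default): with TcpBufferSize = 4096 bytes, MaxPendingWrites = 75 and MaxQueuedBuffers = 500, one peer could buffer up to 4096 * (75 + 500) = 2,355,200 bytes (roughly 2.3 MB) before OnSendBufferFull is triggered.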
To actually modify the settings, you need to find out if you see the SendBufferFull on the outbound S2S connection (that is, the side that established the S2S connection - your ServerPeer). In this case, you can modify the <S2S> element:
<S2S
    MaxPendingWrites="75"
    MaxQueuedBuffers="500">
</S2S>
Or it might happen on the inbound connection (the side TO which the S2S connections were established - your PeerBase subclass). In this case, you need to modify the TCP Listener:
<TCPListener
    IPAddress="0.0.0.0"
    Port="4531"
    MaxPendingWrites="75"
    MaxQueuedBuffers="500">
</TCPListener>
The numbers are only examples. Modifying the flow control settings requires careful testing; it is easy to overload the server if you set them too high. I cannot really give a hint what a good number would be - it all depends on your setup and your actual load.
There's a bit more info on both settings in the photon-configuration.pdf.
Let me know if this helps, or if you have further questions.
seasonwind
2012-09-12 06:14:45
Thank you very much, Nicole. This is a great help to me. And I also found that the "SendBufferSize" setting affects the "Send buffer full" problem. I will test these settings one by one carefully. I have another question: can S2S data be sent by UDP? Do you think S2S data should be sent by UDP? And if so, how? Now we use ApplicationBase.ConnectToServer(); can it be set to use UDP? Thanks.
dreamora
It makes no sense for S2S to be UDP, as you are communicating between your trusted backend servers and want a reliable outcome. UDP only makes sense for client-to-server traffic, where you want to be able to mark stuff as unreliable because it is not relevant that every single packet is received (position updates and similar) - a situation that normally can't arise in S2S, where you want the state and data to be synchronized across the line, guaranteed. If performance were not the point, you could have used a database connection instead and replicated the data there, forcing the other servers to fetch it from there (some MMO cluster technologies do that, for example, to overcome the bandwidth problems that high-bandwidth or high-frequency S2S traffic can cause on normal 100 Mbit / Gbit lines in cheap clusters).
[Deleted User]
2012-09-13 15:12:30
Right - thanks for the explanation, dreamora. :) However, if you should have a rare case where S2S UDP actually makes sense, it is technically possible. ApplicationBase.ConnectToServer has several overloads:
- you can pass "Udp" as the protocol parameter to this one: public bool ConnectToServer(IPEndPoint remoteEndPoint, string applicationName, object state, IRpcProtocol protocol)
- or if you use this one, UDP is used instead of TCP (see the sketch below): public bool ConnectToServer(IPEndPoint remoteEndPoint, string applicationName, object state, byte numChannels, short? mtu)
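To make that concrete, a call could look roughly like this (the endpoint, application name, channel count and MTU are placeholder values, not recommendations):

// Inside your ApplicationBase subclass (requires System.Net for IPEndPoint / IPAddress).
var remoteEndPoint = new IPEndPoint(IPAddress.Parse("10.0.0.2"), 5055);

// This overload (numChannels / mtu) connects via UDP instead of TCP:
this.ConnectToServer(remoteEndPoint, "MyRemoteApplication", null, (byte)2, null);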
We'll make that a bit clearer in a future release.
However, in most cases, TCP is the better choice for S2S communication.
seasonwind
2012-09-17 03:42:36
Thanks,dreamora and Nicole.