UDP Server Side Ordering and Thread Safety

randomstring
edited June 2013 in Photon Server
I tried to look this up in the documentation but it seemed a little fuzzy, so I just want to make sure I understand what is going on behind the scenes so I don't break anything :) Also, apologies in advance for the crazy long post, but hopefully it will make good documentation once Google picks it up.

As a sample server for everything from here on, let's assume we have channels 1 and 2, where we only send odd numbers on channel 1 and even numbers on channel 2. Whenever the server receives an operation it just sends the number received back to the client, and the client prints out this number. For simplicity we will assume that everything is sent reliably.

Question 1: Basic Operations
Client Sends: 1,2,3,4,5,6,7,8,9,10

This just sends 1-10 in order to the server. Because we are using UDP, some numbers could come back out of order from the server, but because we are using channels, odd responses will never arrive in the wrong order relative to each other, and the same goes for even responses. However, an even response could arrive before an odd response. Is this all correct?

If what I said above is correct, then the following should all be possible; let me know if I am wrong:
Client Receives: 2,1,3,5,4,6,8,10,7,9
Client Receives: 1,3,5,7,9,2,4,6,8,10
Client Receives: 1,2,3,4,5,7,6,8,9,10

However, these should never be possible, because the odds or the evens would arrive out of order:
Client Receives: 3,1,2,6,4,5,7,8,9,10
Client Receives: 10,1,2,3,4,5,6,7,8,9
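
For concreteness, here is a minimal client-side sketch of the sending pattern above. It assumes the .NET client's PhotonPeer.OpCustom overload that takes a channel id; operation code 1 and parameter key 100 are arbitrary choices for this example.
[code2=csharp]// Send 1..10 reliably, odds on channel 1 and evens on channel 2.
for (byte n = 1; n <= 10; n++)
{
    var parameters = new Dictionary<byte, object> { { 100, (int)n } };
    byte channel = (n % 2 == 1) ? (byte)1 : (byte)2;
    peer.OpCustom(1, parameters, true, channel); // sendReliable = true
}
peer.Service(); // pump the connection so the queued commands actually go out[/code2]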

Question 2: Events
Assuming the same sequence as in question 1, let's say that halfway through processing our stream an event with the numbers 100 and 101 is generated and sent back on the corresponding channels for evens and odds. How would this appear in our number stream? For simplicity, let's assume the server receives and responds to all numbers in order. Would the return stream look like:
1,2,3,4,5,100,101,6,7,8,9,10
or
1,2,3,4,5,6,7,8,9,10,100,101
or could the event come back to the client in a random position like..
1,2,3,100,4,5,6,101,7,8,9,10

Question 3: Operation Handler Concurrency
As I understand it, all operations are processed on a Fiber, which makes me believe that no more than one operation for a peer will ever be executing an Operation Handler at any given time. Thus, if I had an operation handler class on my peer that contained a class variable [code2=csharp]private int currentNum[/code2] and an OnOperationRequest that performed something like:
[code2=csharp]currentNum = (int)operationRequest.Parameters[100]; // parameter values arrive as object, so a cast is needed
return GenerateResponse();[/code2]
Where GenerateResponse() creates a response that contains the number received. I never have to worry about concurrency issues where another thread might be setting currentNum to the next number being processed, correct?
Without your framework, doing the above would normally require something like:
[code2=csharp]OperationResponse response; // assuming GenerateResponse() returns an OperationResponse
lock (syncRoot) // an int field can't be locked on, so lock a dedicated object (e.g. a private readonly object syncRoot)
{
    currentNum = (int)operationRequest.Parameters[100];
    response = GenerateResponse();
}
return response;[/code2]
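
For reference, a minimal sketch of what such a handler could look like end to end, assuming the Photon.SocketServer PeerBase API; the parameter key 100 and the echo-style response are just the ones used in this example:
[code2=csharp]// Inside a PeerBase (or Lite peer) subclass:
private int currentNum;

protected override void OnOperationRequest(OperationRequest operationRequest, SendParameters sendParameters)
{
    // Runs on this peer's RequestFiber, so no other operation for this peer
    // executes here concurrently - no lock needed for currentNum.
    currentNum = (int)operationRequest.Parameters[100];

    var response = new OperationResponse(operationRequest.OperationCode)
    {
        Parameters = new Dictionary<byte, object> { { 100, currentNum } }
    };
    SendOperationResponse(response, sendParameters);
}[/code2]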

Question 4: General Concurrency and Shared Data
I am assuming that Fibers are atomic on a per-peer basis. Thus everything one client does within its handler is guaranteed to be atomic, but because there are multiple Fiber processors, data that can be accessed from across all Fibers is not thread safe. So if for some silly reason I wanted to run all operations through a class that is shared by all Peers, would I need to use locking? Think of this as the example (I know it's a silly base case):
[code2=csharp]public class NumberParser
{
    static int currentNumber;
    static Peer lastPeer;

    public static void setCurrentNumber(Peer p, int num)
    {
        currentNumber = num;
        lastPeer = p;
    }

    public static int GetNum() { return currentNumber; }
    public static Peer LastPeer() { return lastPeer; }
}[/code2]

If I ran every number received by every peer through this class, then this would be incorrect (not thread safe) when executed in my operation handler, because currentNumber and lastPeer might get changed by another peer at the same time:
[code2=csharp]NumberParser.setCurrentNumber(peer, number);
return GenerateResponse(NumberParser.LastPeer(), NumberParser.GetNum());[/code2]
But this would be correct:
[code2=csharp]lock (typeof(NumberParser))
{
    NumberParser.setCurrentNumber(peer, number);
    return GenerateResponse(NumberParser.LastPeer(), NumberParser.GetNum());
}[/code2]

Question 5: Proper way to send events to other peers
Let's assume that every time I receive the operation "5" I want to send an event to the peer "John". What is the proper way to do this?
Should I simply do:
[code2=csharp]var peer = /* code to find John's peer */;
peer.SendEvent(data, sendParameters);[/code2]

Or should I be putting a delegate for John into his RequestFiber? I am just worried that SendEvent might not be thread safe and that I could end up clobbering a message in John's outbound data stream.
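
For clarity, the fiber-based alternative I have in mind would look roughly like this (RequestFiber being the per-peer fiber on PeerBase, and eventData/sendParameters whatever I would otherwise pass to SendEvent directly):
[code2=csharp]// Marshal the send onto John's own fiber instead of calling SendEvent directly.
johnsPeer.RequestFiber.Enqueue(() => johnsPeer.SendEvent(eventData, sendParameters));[/code2]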


Sorry again about the length; I should just quit worrying and make stuff! :lol:

Comments

  • Hi,

    sorry for the delay - long posts sometimes have long response times as well ;)

    A few short clarifications before I answer your questions - I guess you are aware of these details, but it might be helpful to avoid confusion nevertheless:

    - Reliable UDP makes sure that no UDP packets are lost and that all "previous" packets have arrived before a packet is forwarded to the "game logic" or "application layer" in Photon.
    - We generally recommend one channel for reliable UDP and one channel for unreliable UDP, so your setup with 2 channels for reliable data might not be that useful, depending on your use case.
    - Inside the server-side application (e.g., in Lite), thread safety is achieved by the use of fibers (see http://code.google.com/p/retlang/ for more info). Everything that is executed on a single fiber is processed in order and is thread safe.
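
    As a tiny illustration of that last point, using the PoolFiber that ships with the server SDK (ExitGames.Concurrency.Fibers):

    [code2=csharp]var fiber = new PoolFiber();
    fiber.Start();

    // Actions enqueued on the same fiber run one at a time, in the order they
    // were enqueued, even though they execute on thread-pool threads.
    fiber.Enqueue(() => Console.WriteLine("first"));
    fiber.Enqueue(() => Console.WriteLine("second"));[/code2]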


    As we are only talking about reliable UDP, the order in which operations are executed on the fiber is the same as the order in which the data finally arrives at the client. Otherwise, it's not related to the underlying network protocol (i.e., if you use the standard, unreliable UDP, the order of packets might be different from the order in which events / operation responses are executed on the fiber).

    Question 1: Yes, you are correct.

    Question 2: It will arrive in the same order as it is sent out by the server. If the event is sent halfway between the operation responses, it will arrive between the responses on the client side, too. (I understand that you want to send 10 single operations + 10 single operation responses, so you can always raise the event "in between".)

    Question 3: Exactly, that's how the fibers work. Of course, you need to make sure that "currentNum" is only modified by code that is executed on the fiber.
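
    If some other thread or component ever needs to touch that per-peer state, one pattern is to marshal the change onto the peer's own fiber instead of locking. Here, RequestFiber is PeerBase's per-peer fiber and SetCurrentNum is a hypothetical method on your peer class:

    [code2=csharp]// Called from somewhere outside the peer's own fiber:
    thePeer.RequestFiber.Enqueue(() => thePeer.SetCurrentNum(42));[/code2]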

    Question 4: You would need to synchronize access to that shared data class, yes. You can either add locking or (recommended) add a fiber to that shared class as well, for example like this:
    (It's not exactly the same as your code, but you get the idea.)

    [code2=csharp]public class NumberParser
    {
        static int currentNumber;
        static Peer lastPeer;
        static readonly PoolFiber fiber = new PoolFiber();

        static NumberParser() // static constructors take no access modifier
        {
            fiber.Start();
        }

        public static void setCurrentNumberAndGenerateResponse(Peer p, int num)
        {
            fiber.Enqueue(() =>
            {
                currentNumber = num;
                lastPeer = p;

                lastPeer.GenerateResponse(currentNumber);
            });
        }
    }[/code2]

    Question 5: SendEvent is thread safe (as all PeerBase methods), so your code is fine.
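
    For completeness, a minimal sketch of the direct approach (assuming Photon.SocketServer's EventData / SendEvent; how you find John's peer, the event code and the parameter key are placeholders):

    [code2=csharp]var john = FindPeerByName("John"); // hypothetical lookup in your own peer registry
    var eventData = new EventData(1)   // event code 1 is an arbitrary choice
    {
        Parameters = new Dictionary<byte, object> { { 100, 5 } }
    };
    john.SendEvent(eventData, new SendParameters { ChannelId = 1 }); // Unreliable defaults to false, i.e. reliable[/code2]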

    Hope this helps!

    (Btw., I like your questions and examples, it would also make a very good blog post, for example ;))
  • Thanks Nichole, I suspected that is how things operated, but I wanted to be sure before I made assumptions. The fiber example is also very helpful; I didn't think the PoolFiber was open for end use, but now I see that creating my own is possible. Being able to re-use your code instead of reinventing the wheel might save me a lot of time, especially knowing about Schedule and ScheduleOnInterval.