one peer one thread

oman
edited November 2012 in Photon Server
How can I configure ApplicationBase to handle all peers on a single thread, instead of the default PoolFiber?

Comments

  • Philip
    Could you explain why / what you are trying to do?
  • oman
    I want to develop my game server in single-threaded mode. By default, Photon handles all peers with a PoolFiber, which means a peer's OnOperationRequest method may be called on different threads, so I have to synchronize my game objects. To avoid this, I put all the peers' operation requests onto my own single-threaded fiber, like this:

    GameLogicServer.SingleFiber.Enqueue(() => this.ExecItemOperation(() => this.DoOnOperationRequest(peer, operationRequest, sendParameters), sendParameters));
    But done this way, the server sometimes fails to call the OnOperationRequest method.
  • dreamora
    Anything related to the same peer is executed on the same thread; there will never be multithreaded communication related to the same peer.

    The only thing that can end up on a different thread is the room handling, which has its own fiber, but there is the possibility to send messages between fibers, which solves the data-sharing problem without applying locks. The concept builds on Retlang, one of the strongest foundations available for creating truly concurrent applications.
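    To make the message-passing idea concrete, here is a minimal sketch using the open-source Retlang library that Photon builds on (this illustrates the pattern, not Photon's actual internals; the `roomHitPoints` state and the damage channel are made-up examples):

    ```csharp
    using Retlang.Channels;
    using Retlang.Fibers;

    // One fiber owns the room state; other fibers publish messages to it
    // instead of touching the state directly, so no locks are needed.
    var roomFiber = new PoolFiber();
    roomFiber.Start();

    var damageChannel = new Channel<int>();

    int roomHitPoints = 100; // example state owned by roomFiber
    // The handler always runs on roomFiber, serializing all access.
    damageChannel.Subscribe(roomFiber, dmg => roomHitPoints -= dmg);

    // Any peer fiber can publish concurrently without a lock.
    damageChannel.Publish(25);
    ```

    Because every subscriber handler is executed on the fiber it was subscribed with, all mutations of the room state are serialized by construction rather than by locking.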

    Your thread approach would unavoidably require locks on basically everything you touch to keep the whole server from blowing up.


    As for your example: the server can do that, but aside from the MMO project, Photon has no project where the user exists outside a room, so you may have been misled. Lite and LoadBalancing use rooms to handle the whole communication, player management, and so on, so there is simply no need for them to pass anything directly. Doing so would make the code not only ugly but also less maintainable, with a significantly higher risk of errors and timing issues due to the concurrency: rooms and players need to be processable in parallel, since the peer must be processed fast for short turnarounds, while the room has to be fast because in your game, as in all games, it hosts the whole game logic, and its processing must not impact players.
  • oman
    Thanks for your reply, dreamora.
    My project is an MMO project with no rooms. In Photon's MMO demo project, the server has no thread running a server update loop, but in my project I have a ThreadFiber running a tick; on that fiber it calls all the server objects' update methods, such as the scene's update and the NPCs' update. When I access these objects in the peer class, I have to use a lock, right? I want my own ThreadFiber to process all the peers' OnOperationRequest calls sequentially so that I don't have to lock my objects. Is my approach not proper? If I use a timer to call my world's update instead, is there then no need to lock when accessing these objects in the peer class? For example, my world class has a dictionary storing all the player instances. Photon uses a PoolFiber, so two threads could call their peers' OnOperationRequest at the same time and then both access the world's player dictionary, which means I must lock the player dictionary, right? How can I avoid that?
  • dreamora
    If the fiber only accesses the associated object, then no, you don't need to lock anything.

    If you access something that modifies other objects, then you would need to lock; hence you enqueue messages to circumvent that. It does not matter whether something happens now or at now+x (as with Enqueue), since x is always smaller than the latency.
    So for your purpose there is, at least from my point of view, no reason not to use Enqueue.
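    Applied to the player-dictionary case from the question, the pattern could look roughly like this (a sketch only; `World`, `Player`, and the 50 ms tick interval are hypothetical names and values, not Photon APIs, though `ThreadFiber`, `Enqueue`, and `ScheduleOnInterval` are real Retlang calls):

    ```csharp
    using System.Collections.Generic;
    using Retlang.Fibers;

    public class Player
    {
        public void Update() { /* per-tick game logic */ }
    }

    public class World
    {
        // One ThreadFiber owns all world state. Only code enqueued on it
        // may touch the dictionary, so the dictionary needs no lock.
        private readonly ThreadFiber fiber = new ThreadFiber();
        private readonly Dictionary<int, Player> players = new Dictionary<int, Player>();

        public World()
        {
            fiber.Start();
            // The periodic tick runs on the same fiber as all mutations.
            fiber.ScheduleOnInterval(Update, 50, 50);
        }

        private void Update()
        {
            foreach (var p in players.Values) p.Update();
        }

        // Callable from any peer's OnOperationRequest: defer instead of lock.
        public void AddPlayer(int id, Player p)
        {
            fiber.Enqueue(() => players[id] = p);
        }
    }
    ```

    The peer never reads or writes the dictionary itself; it only enqueues the work, so the "now+x" delay dreamora mentions replaces the lock.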

    That's why the design has the core send messages out to the fibers. If the core had to use locks to communicate with the peer objects, you could forget about scaling to thousands of users on a single machine, likely even if you threw in tens of thousands to get a 10-core Intel-based server.
    The whole concept of scalability is based on this message pattern: it achieves high horizontal concurrency without the lock contention problem and without spawning thousands of threads (which would bog down the application as well, since operating systems don't like maintaining that many threads). The optimal number of threads depends on their idle times, roughly between the CPU core count and four times the core count in total, including the network-handling threads, which do a lot of the work for you (serialization and so on).