Peer dispose and threading

escjosh
edited January 2012 in Photon Server
Is it a bad idea to keep Peer references in containers or as members? More specifically, do we need to guard against a peer being disposed or disconnected concurrently while another thread is in the middle of using it?

Comments

  • dreamora
    If you work within the fibers in which they normally exist, then it's no problem and you don't have to worry (the provided implementations work on this basis).

    If you work with real threads, then you must handle all of it yourself: not just disposal, but also concurrent access, and you will likely get lower performance as a consequence of the locking.
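
To make the raw-thread case concrete, here is a rough sketch (illustrative Python with invented names, not Photon's actual Peer API) of the disposal guarding you would have to write yourself:

```python
import threading

class Peer:
    """A peer shared across raw threads (hypothetical sketch, not
    Photon's Peer class). Every access must take the lock and check
    for disposal, since another thread may dispose it at any time."""

    def __init__(self):
        self._lock = threading.Lock()
        self._disposed = False
        self.sent = 0

    def send(self, msg):
        with self._lock:
            if self._disposed:
                return False  # the peer was disposed under us
            self.sent += 1
            return True

    def dispose(self):
        with self._lock:
            self._disposed = True
```

Every caller has to remember both the lock and the disposed check; forgetting either reintroduces the race. That bookkeeping is exactly what the fiber model makes unnecessary.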
  • Isn't it still possible for multithreading to cause problems if you're using fibers? For example, if you use a PoolFiber to enqueue two anonymous functions that both operate on the same Peer instance, can't both functions execute in parallel on two different pool threads?
  • dreamora
    If you enqueue them, they will remain in the fiber; they won't be executed on a different fiber. So no, they won't access it concurrently.

    If you want to access a different peer from one peer, you should send a message instead, so that there is no risk of concurrent access.
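
A minimal sketch of that message-passing idea, with plain Python threads and queues standing in for Photon's fibers (all names here are invented for illustration):

```python
import threading
import queue

class Peer:
    """Each peer's state is touched only by its own worker thread;
    other code talks to it by posting messages (illustrative sketch,
    not Photon's actual API)."""

    def __init__(self, name):
        self.name = name
        self.inbox = []             # state owned by the worker thread
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def post(self, action):
        """Thread-safe entry point: enqueue work for this peer."""
        self._queue.put(action)

    def _run(self):
        while True:
            action = self._queue.get()
            if action is None:
                return
            action(self)            # runs only on this peer's thread

    def stop(self):
        self._queue.put(None)
        self._worker.join()

# To make peer a affect peer b, post a message into b's queue instead
# of touching b.inbox directly from a's thread:
a, b = Peer("a"), Peer("b")
a.post(lambda me: b.post(lambda other: other.inbox.append("hi from a")))
a.stop()
b.stop()
```

Because each peer's state is only ever mutated on its own queue-draining thread, no locks are needed anywhere.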
  • Oh, right. Thanks. I forgot that a fiber only actually has one thread behind it. The "Pool" part throws me off sometimes because PoolFiber.Enqueue feels a lot like ThreadPool.QueueUserWorkItem. But it's not.
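
The distinction can be sketched like this: a PoolFiber-style fiber runs its work on shared pool threads, yet still guarantees that enqueued actions execute one at a time, in order. This is an illustrative Python sketch; `Fiber` and its members are invented names, not Photon's API:

```python
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

# A shared pool of worker threads, like the pool behind a PoolFiber.
_pool = ThreadPoolExecutor(max_workers=4)

class Fiber:
    """Runs enqueued actions one at a time, in FIFO order, on the
    shared pool. Actions for the same fiber never overlap; actions
    for different Fiber instances may run in parallel."""

    def __init__(self):
        self._queue = deque()
        self._lock = threading.Lock()
        self._draining = False

    def enqueue(self, action):
        with self._lock:
            self._queue.append(action)
            if not self._draining:
                self._draining = True
                _pool.submit(self._drain)

    def _drain(self):
        # At most one _drain per fiber is scheduled at a time, so the
        # actions below run serially even though the pool thread that
        # executes them may differ from call to call.
        while True:
            with self._lock:
                if not self._queue:
                    self._draining = False
                    return
                action = self._queue.popleft()
            action()
```

This is why PoolFiber.Enqueue feels like ThreadPool.QueueUserWorkItem but behaves differently: the pool supplies the threads, but the fiber serializes its own work.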
  • Well, I'm not so sure now. If fibers don't ever run enqueued actions in parallel, then why does the ActionQueue class exist?
  • dreamora
    Actions for different fibers can run in parallel.
    But work for the same fiber runs serially, which avoids the need for locking; locking would cost a great deal of CPU time and scalability. The less you need to lock, the more you can do in the same time. And you don't need to run in parallel to do asynchronous work: you can fire tasks off and have them post a message (enqueue their result) back into the fiber for further handling later.
    Having code run in parallel does not make it faster; it normally makes it slower if it operates on a shared dataset, thanks to the synchronization overhead. This holds even more when you consider that most of the things you do are 'microtasks' (otherwise you won't scale far), where a single lock might waste more CPU time than the whole task would have taken to finish.
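
The "fire it off and post the result back" pattern can be sketched with a single-threaded executor standing in for the fiber (illustrative Python; the names and structure are not Photon's API):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

fiber = ThreadPoolExecutor(max_workers=1)    # one thread => serial, lock-free
workers = ThreadPoolExecutor(max_workers=4)  # background threads for slow work

state = {"result": None}  # only the fiber's single thread touches this
done = threading.Event()

def slow_task():
    return 6 * 7  # stands in for blocking I/O or a long computation

def on_result(value):
    # Runs on the fiber: no lock needed, nothing else touches `state`.
    state["result"] = value
    done.set()

def start_work():
    # Fire the slow task off to a worker thread; when it finishes, it
    # enqueues its result back into the fiber instead of locking the
    # shared state from a foreign thread.
    workers.submit(lambda: fiber.submit(on_result, slow_task()))

fiber.submit(start_work)
```

The slow work happens off the fiber, but every touch of the fiber-owned state still happens serially on the fiber itself, so no synchronization of the state is ever required.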

    Can't comment on the ActionQueue class, especially in this context.
  • BenStahl
    The ActionQueue class is not used anymore.
    It's still left there for backward compatibility.
  • Thanks, Ben.