.Net 4.0

edited April 2010 in Photon Server
I have a question regarding Photon Server and the .NET 4.0 framework. Since you load an app domain to host the .NET process, is it simply a matter of configuration settings to force the .NET 4.0 framework, or do you bind a particular CLR within your C++ implementation?

I'm just curious about the feasibility of utilizing the .NET 4.0 CLR within a Photon Server environment. It is not clear from your documentation how you host the .NET app domain within the native environment.



  • Tobias
    We are not actively moving to DotNet 4.0 just yet, so this is just an experiment:

    As long as all applications require the same version of .NET, you can probably use a .config file for the server exe and specify the version of .NET that it should load. This SHOULD mean that when it starts, the runtime host loads the requested version of the CLR.
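
    As a sketch of what that config file might look like (the file name is hypothetical for a Photon setup, but the `<supportedRuntime>` element is the standard .NET mechanism for choosing which CLR an exe loads):

    ```xml
    <!-- Hypothetical PhotonSocketServer.exe.config, placed next to the server exe.
         <supportedRuntime> entries are tried in order; the first installed version wins. -->
    <configuration>
      <startup>
        <!-- Prefer the .NET 4.0 CLR -->
        <supportedRuntime version="v4.0" />
        <!-- Fall back to the 2.0 CLR if 4.0 is not installed -->
        <supportedRuntime version="v2.0.50727" />
      </startup>
    </configuration>
    ```

    Note that a native host using the CLR hosting APIs can ignore this file and pin a version directly, which is exactly the question at hand.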

    At present, you could not use the .NET 4.0 side-by-side CLR hosting system (to load some app domains as .NET 4.0 and some as .NET 3.x).

    How important is DotNet 4.0 for you (and everyone else here)?
  • shawnmccarthy
    edited April 2010
    Hi Tobias,

    It is definitely important to me, not only due to the productivity gains (optional parameters, etc.), but also from a parallel computing perspective. The other improvements in the library around diagnostics and performance are also very important to me.

    My client APIs will definitely continue to run under the .NET 2.0 CLR, but I was hoping, as you suspect, that it is just a matter of a config file change to force .NET 4.0.
  • dreamora
    What are you hoping to get from the parallel computing part of .NET 4.0? Especially given that you work with Photon, not pure .NET, it could potentially cause you more headache than you gain, since you would be working against Photon's own parallelization.

    Also, one problem I foresee is that you lose code portability, because .NET 4.0 will not happen for Unity for a long time (you will get 3.0/3.5 with Unity 3; that's the state of Mono 2.6.x). That negates the gain at least partially.
  • First, I think it's important to understand that the .NET 4.0 CLR can load assemblies built against the 2.0 CLR. Obviously the client APIs, and the framework classes shared by both the server and client APIs, would remain based on the 2.0 CLR runtime (be that 2.0, 3.0, or partial 3.5 support, depending on how much of the library is available in Mono 2.6.x); that is part of the beauty of multi-targeting.
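
    To sketch that multi-targeting point in config terms: the 4.0 CLR runs 2.0-era managed assemblies in-process, and for mixed-mode (C++/CLI) assemblies built against 2.0 — the kind a native host like Photon might ship — .NET 4.0 additionally provides the `useLegacyV2RuntimeActivationPolicy` startup attribute:

    ```xml
    <!-- Illustrative fragment: run the process on the 4.0 CLR while still
         activating mixed-mode assemblies that were built against CLR 2.0. -->
    <configuration>
      <startup useLegacyV2RuntimeActivationPolicy="true">
        <supportedRuntime version="v4.0" />
      </startup>
    </configuration>
    ```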

    The parallel library is just a part of it, and I do see some operations benefiting from its concise model, though to your point it may belong in its own process. However, if you read my post, the other major part of it is the improved coding workflow. How much time do we spend plumbing methods just to deal with different parameters? I can tell you that much of the time in any API is spent on what I would call plumbing activities, which is much of what the new tooling in VS 2010 and .NET 4.0 is addressing.

    There is a big difference between the transactional workflows you find within typical network-based libraries and those better suited to other subsystems within a game architecture. I can tell you that in the testing I did when the library was first introduced in 2008 as a proof of concept, it performed comparably to the thread pool or a completely managed thread model, but significantly reduced code bloat. I did extensive testing with both I/O-bound operations (think web crawlers) and CPU-bound operations (breaking a large image into tiles and applying various filters: Gaussian blurs, as well as other filters I developed).
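
    For context, the .NET 4.0 parallel library being discussed is the Task Parallel Library. A CPU-bound tile-filter loop of the kind described might look like this (the tile data and filter are stand-ins, not code from any real test):

    ```csharp
    using System;
    using System.Threading.Tasks;

    class TileDemo
    {
        // Hypothetical filter standing in for a Gaussian blur over one tile.
        static int ApplyFilter(int tile)
        {
            return tile * 2;
        }

        static void Main()
        {
            var tiles = new int[] { 1, 2, 3, 4 };

            // Parallel.For partitions the index range across .NET 4.0
            // thread-pool workers; each index is processed exactly once.
            Parallel.For(0, tiles.Length, i =>
            {
                tiles[i] = ApplyFilter(tiles[i]);
            });

            Console.WriteLine(string.Join(",", tiles));
        }
    }
    ```

    The point of comparison in the post is that this replaces hand-written thread-pool queuing and completion tracking with one declarative call, at comparable throughput.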

    I would be happy to share some of my experiences and documentation with the library sometime, but I wouldn't focus on that as the only reason to move to the .NET 4.0 CLR.

    I do not want to lose sight of the other, more important reasons for considering the .NET 4.0 runtime:

    Productivity enhancements - removing the need to define five constructors and dozens of method overloads to accomplish one goal, namely allowing the developer to provide different options to a class or method. Any productivity gain, even if only in server-side development, is a big win in my mind.
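
    To illustrate the overload-ladder point (the `Peer` class and its parameters are hypothetical, not a real Photon API): where earlier C# forces one constructor per combination of options, C# 4.0 collapses them into a single signature with optional and named parameters:

    ```csharp
    // Hypothetical class for illustration only.
    //
    // Pre-4.0 style requires a ladder of overloads:
    //   public Peer(string host) : this(host, 5055) { }
    //   public Peer(string host, int port) : this(host, port, false) { }
    //   public Peer(string host, int port, bool useEncryption) { /* ... */ }

    public class Peer
    {
        // C# 4.0 style: one constructor, defaults declared inline.
        public Peer(string host, int port = 5055, bool useEncryption = false)
        {
            // ...
        }
    }

    // Call sites name only the options they care about:
    //   var peer = new Peer("example.host", useEncryption: true);
    ```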

    Diagnostic and performance enhancements - specifically the additional tracing events and diagnostic information that aid in debugging as well as gathering performance metrics. I work in a high-volume, transaction-based business, and the ability to delve deeper into performance and diagnostic information lets us better tune our specific implementation. That is not to say that the updated VS 2010 profiler can't be used against an older version of the CLR; it's just that additional data is available. These additional diagnostic capabilities in the server-side implementation are also a very big win.

    At the end of the day, though, I believe my question was a valid inquiry: without performance testing to quantify the concern, are there any limitations to hosting the .NET runtime within a multi-threaded, native network library?

    Thanks for the feedback, dreamora; I really do appreciate your insight and experience, and I look forward to talking with you more. I think it's great to have a dialog about this, but I would not want to place the focus or emphasis on the parallel library.