Network Culling increases Msg/s

I implemented a first attempt at a custom Photon culling-groups setup, but no matter what I do, the room's Msg/s is consistently and significantly higher (roughly double) when culling is enabled. Total traffic is about the same, but again slightly higher with culling turned on. As far as I can tell, the game runs the same either way.

I had expected at least SOME msg/data savings from my first draft of culling, with room to widen the gap through fine-tuning. After all, if the default is to always send everything to everyone, any amount of culling should reduce data, minus the culling overhead, which should be just a handful of bytes. (I throttle updates of the sending PhotonViews' .group and of each client's subscribed groups to once per second, so I doubt the difference is culling overhead.)
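For reference, my throttling boils down to the pattern below, shown as a language-agnostic Python sketch (the actual implementation is C# against the Photon API; the class and method names here are hypothetical, not Photon's):

```python
import time


class ThrottledGroupUpdater:
    """Rate-limits interest-group updates to at most once per interval.

    Hypothetical illustration of the throttling logic only; the real code
    would forward the returned group set to Photon's group-subscription call.
    """

    def __init__(self, interval: float = 1.0):
        self.interval = interval
        self._last_sent = float("-inf")
        self._pending = None  # latest desired group set, not yet sent

    def request(self, groups, now=None):
        """Record the desired groups. Return them only if the interval has
        elapsed since the last send; otherwise buffer them and return None."""
        now = time.monotonic() if now is None else now
        self._pending = set(groups)
        if now - self._last_sent >= self.interval:
            self._last_sent = now
            out, self._pending = self._pending, None
            return out
        return None
```

So even if the game recomputes the desired groups every frame, at most one subscription change per second actually goes out per client.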

I captured packets from two instances running on my workstation, both with and without culling. Comparing traffic for the same length of gameplay, the difference in packet count and average UDP payload size is minimal. I assume a single UDP packet contains multiple queued-up Photon messages.
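The comparison itself is simple; I summarized each capture roughly like this (a hypothetical helper, with payload sizes taken from a Wireshark/tshark export of UDP lengths and the example values made up):

```python
from statistics import mean


def summarize_capture(payload_sizes):
    """Summarize one capture: packet count, total bytes, mean UDP payload."""
    return {
        "packets": len(payload_sizes),
        "total_bytes": sum(payload_sizes),
        "avg_payload": mean(payload_sizes) if payload_sizes else 0.0,
    }


# Compare the two runs side by side (sizes here are illustrative only).
no_cull = summarize_capture([120, 130, 118, 125])
with_cull = summarize_capture([119, 131, 122, 126])
```

Both summaries come out nearly identical for my captures, which is what makes the doubled Msg/s on the dashboard so puzzling.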

I guess my questions are:
1. Am I correct in assuming that the default is to always send every message to everybody? Is there any other default behavior going on in a non-culling setup that I am perhaps turning off by enabling culling?
2. Does using culling create 'invisible' extra msg traffic that shows up in the Photon dashboard?
3. Are there any well-known pitfalls of implementing a custom culling setup with Photon that can create extra traffic like this, and/or are there best practices to avoid it?
4. Any tips on how to analyse this kind of traffic problem?

Thanks!
