How does Coherence handle scaling an MMO / game whilst using floating origin?
For example, a player spawns in at (0,0,0) and moves very quickly to some absurd position like (55555, 11111, 99999). For this client I assume the simulator's origin moves with them, as does their local client's.
However, what would happen if a new player were then to join the game and spawn at (0,0,0)? That's a significant distance away from the simulator's origin now, so I'm wondering how Coherence handles these situations, especially for resolving physics events like collision detection, rigidbodies, etc.?
Hey, well there are a number of ways to solve this, but one way is to break up the large world into several simulators that handle different ranges of absolute positions, each with its own local floating origin. While you're in one area you're on one simulator, but your position isn't a crazy number, so physics precision isn't lost. So, different from your example, the simulator would not move with the player; the player would move to different simulators.
So, for example, your player at (55555, 11111, 99999) is on simulator 7, while the new player is on simulator 0.
Breaking the world up into multiple simulators can be done by creating multiple worlds in the dashboard, each with a simulator that knows which part of space it cares about, and then having the client decide which world to connect to depending on its position in absolute space.
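To illustrate the idea (this is just a minimal sketch, nothing Coherence-specific; `WorldPartition`, the 10,000-unit world size and the axis layout are assumptions made up for this example), the client-side decision is basically: pick a world index from the absolute position, then keep working in coordinates relative to that world's own origin so they stay small:

```csharp
using System;
using System.Numerics;

// Hypothetical helper, not a Coherence API: maps an absolute position to the world
// (and simulator) that should own it, plus a small local position for physics.
static class WorldPartition
{
    const float WorldSize = 10000f;   // assumed size of the cube each world covers

    // Flattened index of the world that owns this absolute position.
    public static int WorldIndexFor(Vector3 absolute, int worldsPerAxis)
    {
        int x = (int)MathF.Floor(absolute.X / WorldSize);
        int y = (int)MathF.Floor(absolute.Y / WorldSize);
        int z = (int)MathF.Floor(absolute.Z / WorldSize);
        return (z * worldsPerAxis + y) * worldsPerAxis + x;
    }

    // Position relative to that world's own origin, so coordinates stay precise.
    public static Vector3 ToLocal(Vector3 absolute) => new Vector3(
        absolute.X % WorldSize,
        absolute.Y % WorldSize,
        absolute.Z % WorldSize);

    static void Main()
    {
        var far = new Vector3(55555f, 11111f, 99999f);
        Console.WriteLine($"world {WorldIndexFor(far, 10)}, local {ToLocal(far)}");
        Console.WriteLine($"world {WorldIndexFor(Vector3.Zero, 10)}, local {ToLocal(Vector3.Zero)}");
    }
}
```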
If you've got a massive world (i.e. a space game set at solar-system scale, perhaps not to mm precision but certainly 10-1000 m precision), wouldn't that still require an insane number of simulators (and have an equally large running cost)?
I've not tried out the cloud yet, all my work has been done locally, so I'm not sure how quickly credits go etc.
Also, how would this work for seamless transitions? (Not to mention being on the boundary between two worlds: would you be able to see entities in another world, or would you have to cross that threshold first?)
yeah, so there's a big difference between a WoW-style MMO with a large area like 99999,99999,99999 and a space game with solar-system sizes. For space scale you're always going to have to consider the trade-off between the number of simulators and the space they simulate. Using rooms you could divide up your space with some kind of octree and have areas dynamically connecting to rooms just to simulate that piece of space. For example, if you imagine a 100x100 2D grid broken into 10x10 zones, players in (0,0)-(9,9) go into room 0 and players in (90,90)-(99,99) go into room 99. Using rooms has the drawback of not always running if there are no client connections, so that's another trade-off to consider: worlds (and lots of simulators) vs rooms and dynamic simulators. You could also have one simulator that simulates the entire universe but doesn't have any players and just updates the global state.
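In code, that grid lookup is just integer division (a hypothetical helper for illustration, not a Coherence API):

```csharp
using System;

// Sketch of the grid example above: a 100x100 playfield split into 10x10 zones,
// where a player's position picks the room to connect to.
static class RoomGrid
{
    const int ZoneSize = 10;      // each room covers a 10x10 area
    const int ZonesPerAxis = 10;  // 100x100 world => 100 rooms in total

    public static int RoomFor(float x, float y)
    {
        int zx = Math.Clamp((int)(x / ZoneSize), 0, ZonesPerAxis - 1);
        int zy = Math.Clamp((int)(y / ZoneSize), 0, ZonesPerAxis - 1);
        return zy * ZonesPerAxis + zx;   // (0,0)-(9,9) -> room 0, (90,90)-(99,99) -> room 99
    }

    static void Main()
    {
        Console.WriteLine(RoomFor(5f, 5f));    // 0
        Console.WriteLine(RoomFor(95f, 95f));  // 99
    }
}
```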
Re: seamless transitions, coherence has a nice feature that allows you to connect to more than one room / world at a time, so you can effectively be simulating in both while you transition across the boundary.
Is it possible to have simulators connected to other worlds? (I was having trouble having one simulator manage two scenes, one a basic login scene and the other the main ‘game’ scene, and I was also told they can't communicate with each other.)
It looks like exactly what I need tbh: simulators can be spun up / down as needed, even dynamic boundaries if my game is busy enough to need it.
I'm guessing the downside you mentioned for rooms is the simulators not running if there are no clients (meaning there's some spin-up time associated with rooms in that case), so I'd assume the above example used ‘world’ simulators, since the simulators persisted even though the player wasn't connected to 4 of them? Unless in the example the player connected to each simulator/room/world?
Yep, that Big World demo is another great example of how to approach simulator load balancing. In that example it's one world with multiple simulators; however, the cloud services currently available don't allow this, so you'd have to provide your own servers and scaling system. This service should be available sometime in the future though.
Re scenes: yeah, entities in one scene have no way to reference an entity in a different scene, so they can't communicate.
The thing with the game I'd like to make is that it essentially has 4 levels: Ship, Planetary, Solar-system, and Interstellar.
Ship is the smallest in scale, and is where collisions/physics need to be processed because it's where players interact (they walk around aboard the ships, shoot each other, collide with walls, doors, etc.). And it's entirely possible that there is only 1 player aboard a ship.
Planetary can probably be 1 planet : 1 simulator as I don't need 1:1-scale planets like Star Citizen (and only when players are present on a planet). There could be one or many players on a planet, and it will also need to process collisions, physics, etc. similar to the ship scale, so they are more or less the same.
Solar System is the big one, but because I want there to be many star systems, there's a good chance most will be empty anyway, devoid even of players. And chances are that if there's a player in a star system, they may be the only one. At this scale collisions aren't as important; I haven't decided if I want my players colliding with asteroids, each other, etc. (and even so I could probably do it via maths rather than relying on 32-bit physics?)
Interstellar could be the simplest, where all 'interstellar' space is shared by 1 simulator (±1000 in any direction will be fine for what I want). No collisions or physics, just position and states.
So really, I'd only need a simulator where one or more players are. The rest I imagine could be global and simulated via maths/state updates (position updates etc.)
So all the features in the load-balancing video are currently not available? Or is that demo available?
It sounds like scene transitions are not the way to go with Coherence then, because data will be lost? Is there no way to sync this data across scenes (such as the global simulator you mentioned)?
So yeah, everything in that demo is available out of the box for everyone except the hosting and scaling of the different simulators on one world. The coherence Cloud services only support one simulator per world unless you’re an enterprise customer.
The approach I would consider for your levels:
ship: use rooms. Each ship in its own room, unless you're doing ship-to-ship combat as well as interior combat, in which case I would merge the ship level with the solar-system level.
planet level: use rooms here too! If your planets are supposed to be living environments that change over time, you don't need a world and a constantly running simulator (when no one is connected); you can process things procedurally when people connect to the room that hosts the planet.
solar system level: yep, rooms again!
interstellar: maybe use a single world with a simulator that coordinates the groupings of players into rooms based on where they are.
Really the key is to avoid using server processing power when there's nothing there, and rooms are the most flexible for this if you're doing things like matchmaking or just temporarily grouping players because they're at the same planet.
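As a purely illustrative sketch of that routing (the names and room-key scheme are made up, not a Coherence API), the grouping can be as simple as deriving a room key from wherever the player currently is:

```csharp
using System;

// Hypothetical router: players at the same ship, planet or star system land in the
// same room, while interstellar space falls back to the single persistent world.
enum Scale { Ship, Planet, SolarSystem, Interstellar }

static class RoomRouter
{
    // Returns a room key to join, or null to stay in the shared interstellar world.
    public static string RoomKeyFor(Scale scale, string locationId) => scale switch
    {
        Scale.Ship        => $"ship-{locationId}",     // one room per ship interior
        Scale.Planet      => $"planet-{locationId}",   // one room per visited planet
        Scale.SolarSystem => $"system-{locationId}",   // one room per occupied system
        _                 => null                      // interstellar: no room needed
    };

    static void Main()
    {
        Console.WriteLine(RoomKeyFor(Scale.Planet, "kepler-22b") ?? "interstellar-world");
        Console.WriteLine(RoomKeyFor(Scale.Interstellar, "") ?? "interstellar-world");
    }
}
```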
There's no one answer regarding scene transitions. The coherence system can be set up so that the things you want to transfer between scenes do transfer, and the connection is maintained as you change scenes. But things that exist in different scenes can't see each other, so you have to keep that in mind.
Ah right, ok, thanks. Just to clarify, does the demo contain the logic for the scaling (and splitting of simulators), it just won't work unless you have enterprise? Or is that whole feature-set missing completely from the demo? If it's present in the demo (just disabled, essentially), I'd be interested to have a peek at how it works out the boundaries and such, is all xD
Awesome, thanks for the suggestions. I'd like to simulate planets (wildlife, colonies, cities, etc.), but that can easily be faked when a player goes near rather than constantly simulated, I guess.
I don't think there's source for the demo available. If you're interested in hosting your own simulators, having them connect to the world, and how to load balance them, I'd suggest starting here: Hosting - Unity Multiplayer SDK Documentation | coherence, and then dropping an email to devrel@coherence.io to see about getting access to the source. Depending on priorities we will hopefully have more samples and examples of how to handle simulators like this in the future.
The demo (which, btw, we usually refer to internally as “Big World”) hasn't been shared mainly because it contains graphics assets that we can't reshare. And because the code is just dirty.
Regarding the spinning up/down of Simulators, it contains that. The scaling is custom-written, and it’s part of the game logic. Every time a Simulator is managing more than X entities (a fixed threshold in the demo), it launches a new one. Once the process is up, it starts rebalancing the existing Simulators by reassigning the authority of entities to redistribute them based on the space partitioning.
The demo also contains two different kinds of logic for space partitioning, both shown in the video: one is grid-based and the other is a Voronoi cell algorithm. This logic is nothing crazy or revolutionary; it literally just sections the world into various parts based on coordinates, and then each frame it checks one of the entities and hands its authority to a different Simulator.
This is probably the part many are fascinated by, but the reality is that it's also the least useful thing for us to pre-make for you: only you would know how your game should partition the playable world! You could potentially get an octree algorithm from the Asset Store and plug it in there.
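To make the idea concrete, here's a hedged sketch of that loop. It is not the demo's actual code and not a Coherence API; ISimulatorHandle, Register and the launch callback are stand-ins for whatever your own hosting setup provides:

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

// Hypothetical stand-in for a running simulator process we can talk to.
interface ISimulatorHandle
{
    int EntityCount { get; }
    void TakeAuthorityOf(int entityId);
}

class LoadBalancer
{
    const int MaxEntitiesPerSimulator = 500;   // fixed threshold, as described above
    const float CellSize = 1000f;              // grid-based partitioning by coordinates

    readonly List<ISimulatorHandle> simulators = new();
    readonly Func<ISimulatorHandle> launchSimulator;   // stand-in for spinning up a new process

    public LoadBalancer(Func<ISimulatorHandle> launchSimulator) => this.launchSimulator = launchSimulator;

    public void Register(ISimulatorHandle sim) => simulators.Add(sim);

    // Called periodically: add capacity when any simulator exceeds the threshold.
    public void CheckLoad()
    {
        bool overloaded = false;
        foreach (var sim in simulators)
            if (sim.EntityCount > MaxEntitiesPerSimulator) overloaded = true;
        if (overloaded) Register(launchSimulator());
    }

    // Called for one entity per frame: find its grid cell and hand authority to that cell's owner.
    public void Rebalance(int entityId, Vector3 position)
    {
        if (simulators.Count == 0) return;
        int cellX = (int)MathF.Floor(position.X / CellSize);
        int cellZ = (int)MathF.Floor(position.Z / CellSize);
        int hash = cellX * 73856093 ^ cellZ * 19349663;   // cheap cell -> owner mapping;
        int owner = ((hash % simulators.Count) + simulators.Count) % simulators.Count;
        simulators[owner].TakeAuthorityOf(entityId);      // a Voronoi variant would pick the nearest site instead
    }
}

class FakeSimulator : ISimulatorHandle
{
    public int EntityCount { get; set; }
    public void TakeAuthorityOf(int entityId) => Console.WriteLine($"taking authority of entity {entityId}");
}

class Demo
{
    static void Main()
    {
        var balancer = new LoadBalancer(() => new FakeSimulator());
        balancer.Register(new FakeSimulator { EntityCount = 600 });
        balancer.CheckLoad();                                   // over threshold -> launches a second simulator
        balancer.Rebalance(42, new Vector3(1500f, 0f, -300f));  // hands entity 42 to whichever simulator owns that cell
    }
}
```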
Spinning up a system like that also requires something of a custom cloud setup that we'd need to create for you (for now), so it makes sense to get in touch via email (Cary posted it above) and tell us about your plans!