In another game? Could it be?!
Looks interesting either way.
View: https://www.youtube.com/watch?v=pdav0as54mU
CIG must have seen that video from the other studio and been like….”Hold up, we’re the OG Server Meshers!”
If it's done right, it'll be fewer servers, not more. If you recall, for the longest time we were stuck at 50 players per server. That's because everything the game needed to play was on that server. When a player loaded in, all the stuff that player owned became an entity on that server, and whales would cause lag spikes when they logged in lol.

Ultimately I'm hella nervous about the costs of Dynamic Server Meshing. All those servers are gonna cost a lot and even then... no guarantees it'll work.
Fingers crossed man! All that technogoobillygook better work!
The direction they are going is that a DGS will only run the code needed to perform in-game functions. Moving background functions like persistence storage management, economic transactions, mission tracking, ATC, inventory, and a bunch of other processes OUT of the DGS code has been a contributor toward the successful 400-per-server/800-per-shard test that happened earlier this year. So by leaning out the DGS code and making fundamental workload changes like Object Container Streaming, one server has become efficient enough to replace eight. The code that was removed is being run more efficiently on other servers, which feed the results to whichever DGS requests them. Some things, like the Quantum simulation (StarSim?) and the master entity graphs, will only ever be on their own cluster, passing data back and forth to the many replication layer servers.
This is kind of a clumsy explanation, and I do not have an accurate sense of how it will eventually scale; they might not either at this point. But it could potentially be FAR fewer servers for the same number of players just by implementation of architectural efficiencies.
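To picture what that leaning-out looks like, here's a minimal Go sketch. Every name in it (BackendServices, LoadInventory, and so on) is invented for illustration; it is emphatically not CIG's actual code, just the shape of the idea:

```go
// Illustrative only: a "lean" DGS that delegates background work to
// out-of-process services instead of running it all in one binary.
package dgs

import "fmt"

// BackendServices stands in for the systems (persistence, economy,
// missions, inventory) that used to live inside the DGS itself.
type BackendServices interface {
	LoadInventory(playerID string) (items []string, err error)
}

// Server now only simulates moment-to-moment gameplay; everything else
// is a remote call whose result gets fed back in when it arrives.
type Server struct {
	backend BackendServices
	players map[string][]string // playerID -> cached inventory listing
}

func (s *Server) OnPlayerJoin(playerID string) error {
	// Old world: every item a whale owned became an entity on this server
	// at login, hence the lag spikes. New world: fetch a lightweight
	// listing from the inventory service; items only become entities
	// when they are actually pulled out and used.
	inv, err := s.backend.LoadInventory(playerID)
	if err != nil {
		return fmt.Errorf("inventory service unavailable: %w", err)
	}
	s.players[playerID] = inv
	return nil
}
```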
Will it work? Yes, it'll work. It's already been shown to work. Getting it to scale across an MMO is the next challenge, but even that is just details at this point.
Look, out of the roughly 30 or 40 times that I had a server crash since server recovery has been in the live build, there wasn't a single instance where the game spooled back up to something even remotely playable. About 1/3 of the time it just didn't do anything in the 10-15 mins I let it sit on the "please wait" screen, and every single time it did come back online, things, major things like being able to interact with consoles, inventory access, being able to shoot, missions... were very fucking broken. This is just factual.

@Lorddarthvik You having a bad day mon? As someone who plays most every day in the roughest patches that are publicly released, I have to take your post as rhetorical rather than factual.
Server meshing as a concept isn't new at all. The way that CIG intends and NEEDS for it to work IS new, and means inventing new code to get to the goal.
Unless they've dramatically re-written their backend in recent years, EvE does not use server meshing as CIG is implementing it. Typically one system or a handful of systems will occupy 1 server, and going through a jump gate (or cyno jump) moves your data onto the new server. While they spent a gigantic amount of effort to optimize the netcode and server performance, the working solution for big fights was 'time dilation' where player involvement would slow down to allow the server to hopefully hold up. If your corp or alliance KNEW there was a big fight coming up, it was possible (and a good idea!) to submit a formal request so that resources could be provisioned for it. Server meshing, in the manner that CIG is doing it, would have completely negated this problem.
The demo at CitizenCon was fantastic from several perspectives; the most impressive for me was how well synchronized the separate pieces of hardware were. The Replication layer provides the entity (say a bullet) to both servers; one doing the calculations and the other receiving data. As the bullet crosses the 'physical' border, authority to decide the entity's fate swaps, and the original server becomes the receiver of updates. The hard part is having this happen in a time frame short enough to be undetectable to the human player. And then doing it at scale for all the things. A KEY takeaway here: each DGS displays much more information than just what it 'owns'.
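A toy version of that handover, with the replication layer as the single source of truth and 'authority' as just a field that flips when the bullet crosses the border. All names are invented; this is not how CIG actually wrote it:

```go
// Illustrative only: authority handover through a replication layer.
package mesh

// EntityState is the mutable state held by the replication layer.
type EntityState struct {
	ID        uint64
	X, Y, Z   float64
	Authority string // the DGS currently simulating this entity
}

// ReplicationLayer is the single source of truth; it fans updates out to
// every server that has the entity in view, owner or not.
type ReplicationLayer struct {
	entities    map[uint64]*EntityState
	subscribers map[string]chan EntityState // serverID -> stream (assumed buffered)
}

// Update may only come from the authoritative server; everyone else receives.
func (r *ReplicationLayer) Update(serverID string, s EntityState) {
	cur, ok := r.entities[s.ID]
	if !ok || cur.Authority != serverID {
		return // non-authoritative write, reject
	}
	*cur = s
	for id, ch := range r.subscribers {
		if id != serverID { // the writer already knows this state
			ch <- *cur
		}
	}
}

// HandOver flips authority as the entity crosses the border. The original
// server keeps receiving updates afterwards (it still displays the bullet);
// it just stops simulating it, which is the key takeaway above.
func (r *ReplicationLayer) HandOver(entityID uint64, newServer string) {
	if e, ok := r.entities[entityID]; ok {
		e.Authority = newServer
	}
}
```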
Dynamic Server meshing is the end goal, which adds the ability for the backend to seamlessly spin up and move players onto a new server (presumably fresh hardware, but that is another conversation) without the players realizing it has happened. So an EvE style Fleet battle with 2000 player clients on each side might involve, say, 10 dedicated game servers all communicating via the Replication layer (at least one server itself), which also manages the flow to and from the backend services such as long term persistence and the economy simulation servers.
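The 'dynamic' part then boils down to something watching load and provisioning or retiring servers. A very rough guess at the shape of it; the 400 figure is just borrowed from the test mentioned earlier, and `provision` stands in for whatever cloud API call actually does the work:

```go
// Illustrative only: splitting one zone across more DGS instances as
// player counts climb, and shrinking back when they fall.
package mesh

const maxPlayersPerDGS = 400 // borrowed from the 400/800 test figure

type Zone struct {
	Name    string
	Players int
	Servers []string // DGS instances currently simulating this zone
}

// Rebalance keeps enough servers allocated that no DGS exceeds its budget.
// In reality the migration of players between servers would flow through
// the replication layer so clients never notice; here it is just a count.
func Rebalance(z *Zone, provision func() string) {
	need := (z.Players + maxPlayersPerDGS - 1) / maxPlayersPerDGS
	if need < 1 {
		need = 1
	}
	for len(z.Servers) < need {
		z.Servers = append(z.Servers, provision())
	}
	for len(z.Servers) > need {
		// A real system would drain players off a server before retiring it.
		z.Servers = z.Servers[:len(z.Servers)-1]
	}
}
```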
You're not wrong. Progress on SC has been slow and until we see proper evidence of server meshing that goes beyond the tech demo and some EPTU experiences... well, the verdict is still out.
Yes I have seen the demo, I loved it like everyone else. It was really impressive. Yes I understand what the end goal is, and it was awesome to see it work live so flawlessly in that small demo. This doesn't negate the fact that they keep talking about these things, like the replication layer and server recovery, as if they were working 100% fine as intended in the LIVE build. They aren't, and that's a fact. I'm just a bit tired of the "watch vid, get hyped, log on, nothing actually fucking works" cycle.
CIG seems to be moving in huge strides now, which is awesome to see, but it also worries me given their track record: basically nothing in the game works as intended without major bugs just yet. Adding more buggy things won't make for a better game...
I hope it won't bite them, and us, in the ass.
edit: ah yes I was wrong on EVE. I knew about the time dilation but I thought they were doing some sorta phasing aka meshing unrelated to that. Apparently they don't!
Some quibbles with what you are saying, as there seems to be some confusion in regards to what server mesh really is in the backend.
‘Scratches head’
The game world is still hosted on a single server. The slated goal is to dynamically adjust the size of that game world, be it the size of Stanton, or Hurston, or just the spaceport.
Games like EVE and WoW, by contrast, have static game worlds fixed to zones/solar systems. But in all cases the player cap is going to be dictated by both server optimization and network resource bandwidth.
The newish part, as far as what's been tried within an MMO, is the use of a replication layer to hold the current state of game objects (those being mutable), which allows for a far more seamless and quick transfer between controlling game servers. Remember that zone transfers in games like WoW are slow more because the local game engine running on your PC has to unload the old zone and load the new zone than because the server has to transfer the player actor object.
DGSes do not display anything to the client; they are the owners of the mutable game objects in their control and the sole updaters of the replication layer game state. The client gets its world view from the replication layer.
As for replication layers, they are used all the time in the cloud, enabling a lot of e-commerce transactions as well as web searches.
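To make that read/write split concrete, a small sketch with purely illustrative types (not anybody's real schema): the DGS is the sole writer for objects it owns, the client only ever reads, and handing an actor to another server is just an ownership flip, which is why those transfers can be so fast:

```go
// Illustrative only: DGSes write, clients read, transfers are cheap.
package replication

type ObjectID uint64

type GameObject struct {
	ID    ObjectID
	Owner string // the DGS with write authority
	Blob  []byte // serialized mutable state
}

type Store struct {
	objects map[ObjectID]GameObject
}

// Write is the DGS-only path: only the owning server may update an object.
func (s *Store) Write(dgsID string, o GameObject) bool {
	if cur, ok := s.objects[o.ID]; ok && cur.Owner != dgsID {
		return false
	}
	s.objects[o.ID] = o
	return true
}

// View is the client path: the world view comes straight from the layer,
// never from a DGS.
func (s *Store) View(ids []ObjectID) []GameObject {
	out := make([]GameObject, 0, len(ids))
	for _, id := range ids {
		if o, ok := s.objects[id]; ok {
			out = append(out, o)
		}
	}
	return out
}

// Transfer hands an object to another DGS without moving any data,
// which is why it can be quicker than a WoW-style zone transfer.
func (s *Store) Transfer(id ObjectID, newOwner string) {
	if o, ok := s.objects[id]; ok {
		o.Owner = newOwner
		s.objects[id] = o
	}
}
```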
Before the whole "phasing" thing in WoW you could see and interact with players on the other side of the "zone fence". Were they using a replication layer 15 years ago, or was everything running on one server, the way I thought it was?
This is the part I'm not sure how they are planning on handling: shards (shards being the same zone but split due to server/player limitations), i.e. the Hurston spaceport is holding a beerfest and all of TEST is doing their best. There are still limitations as to how many concurrent players can be tracked by the server and updated to all of the clients. Typically this would be handled by either making the zone full, preventing others from joining the party, or creating a clone as a second instance. The third way is what EVE does with time dilation, which really is just unlocking the game world update tickrate from a set real-time rate (30 ticks a second is typical for an FPS).
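For what it's worth, time dilation in code terms is just scaling how much game time each real tick advances. A made-up sketch (EVE's actual floor is 10% speed; the rest of the numbers are illustrative):

```go
// Illustrative only: EVE-style time dilation as tick-rate scaling.
package tidi

import "time"

const targetTick = time.Second / 30 // 30 ticks/sec, the typical FPS rate

// DilationFactor returns how much to slow game time: 1.0 when the server
// keeps up, clamped at 0.1 (EVE's floor) when a tick costs 10x its budget.
func DilationFactor(actualTickCost time.Duration) float64 {
	f := float64(targetTick) / float64(actualTickCost)
	switch {
	case f >= 1:
		return 1.0
	case f < 0.1:
		return 0.1
	default:
		return f
	}
}

// Step advances the world clock. Under full 10% dilation a tick advances
// the simulation only a tenth as far, so the big fight runs in slow motion
// instead of the server falling over or booting players.
func Step(gameClock *time.Duration, actualTickCost time.Duration) {
	*gameClock += time.Duration(float64(targetTick) * DilationFactor(actualTickCost))
}
```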
What they're doing nowadays, dynamically phasing players in and out of different server populations per zone chunk, is I'm guessing some sort of main database, aka rep layer, aka just another managing server in the middle. IRL it's annoying as fuck, to be honest. Immersion- and community-breaking.
This shouldn't be an issue with SC though, as we will not be choosing a server to live on; the game will run as a single cohesive game world from our point of view, and if they manage the phasing right, other players won't just vanish while still in view.
Cos yes, we will have something very similar to WoW's phasing tech. It's absolutely necessary. It's been talked about many times that busy locations will have multiple servers serving the same block of the world, putting you, the player, into the server they deem most relevant, like placing players from your friends list in the same place as you so you can interact with each other without jumping through hoops. Kinda hard to imagine CIG getting this part right; even Blizz, with way more time and money, didn't manage to do it.
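My guess is the friends part could be as simple as scoring the candidate instances for a location and picking the one with room that holds the most of your friends. Pure speculation, not based on anything CIG has shown:

```go
// Pure speculation: friends-aware phase assignment for a busy location.
package phasing

type Instance struct {
	ID      string
	Players map[string]bool // playerIDs currently in this instance
	Cap     int             // hard player limit per server
}

// PickPhase returns the instance with spare room that contains the most of
// the joining player's friends, or nil if every instance is full.
func PickPhase(instances []*Instance, friends []string) *Instance {
	var best *Instance
	bestScore := -1
	for _, inst := range instances {
		if len(inst.Players) >= inst.Cap {
			continue // no room in this phase
		}
		score := 0
		for _, f := range friends {
			if inst.Players[f] {
				score++
			}
		}
		if score > bestScore {
			best, bestScore = inst, score
		}
	}
	return best
}
```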
I remember having to group up with guild mates in busy hub cities just so I could see them (thus changing onto the same server/phase as they were in) and interact and trade and such. Very annoying.