Instancing with 50-100 players is out, now 1000 players?

Xist

The tech has only been available for a few years, and has only really been in widespread use for less than that. It is one of the things that makes Google Compute Engine so cool; most other cloud platforms don't give you this ability in such an easy way, but GCE does.

I think the chances that any released game is using this are very, very low. It's a completely different server paradigm.

In the future, more games will utilize it. But SC is definitely one of the first, if they do actually release it. This approach has a bunch of different problems to solve, but once those are solved, it's far more scalable than traditional servers.
 

thedeadlybutter

Xist said:
It is one of the things that makes Google Compute Engine so cool; most other cloud platforms don't give you this ability in such an easy way, but GCE does.
I'd disagree strongly on that... The founding premise of AWS, Google Compute services, Azure, and many of the platforms built for this ecosystem, such as Heroku, is that you should be building apps that scale horizontally. This type of server architecture is how most tech startups build their systems today, and has been for a while now.
 

Xist

You can scale on all of them, but it's way easier to do on GCE IMO. It's very easy to build the scaling into your application, which is important for something with actual state, like a persistent game server.
 

thedeadlybutter

Handling state in a setup like this has puzzled me. Efficient apps which scale horizontally tend to follow the 12-factor pattern to some extent: http://12factor.net/

However, when you think about a game server, this model breaks down, because traditionally a game server maintains quite a bit in memory, i.e. "state". I know Improbable https://improbable.io/ is tackling this problem and I'm really interested to see how this all plays out.
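
To make the contrast concrete, here's a rough sketch in Python of why the two models feel so different. Every name in it is hypothetical - not any real engine or API, just the shape of the problem:

Code:
# Rough sketch only -- hypothetical names, not any real engine or API.

# 12-factor style: the process keeps no state between requests, so any
# copy of it behind a load balancer can serve any request.
def handle_request(request, database):
    player = database.load(request.player_id)   # state lives elsewhere
    player["credits"] += request.amount
    database.save(player)
    return {"ok": True}

# Traditional game server: the authoritative world lives in RAM and is
# mutated every tick, so players can't be routed to just "any" process.
class WorldShard:
    def __init__(self):
        self.entities = {}          # ships, players, cargo... all in memory

    def tick(self, dt):
        for entity in self.entities.values():
            entity.update(dt)       # physics, AI, combat resolved locally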
 

Xist

I'm not sure what solution they actually decided to go with, but we did discuss several possibilities that seemed viable in his situation based on my past experience scaling huge persistent servers.

As you say, 12factor is for stateless apps, which doesn't apply to SC.

Basically their VMs need to be aware of the concept of scaling, and coordinate this amongst themselves, seamlessly handing users and data off to another server for localized processing and sharing information amongst themselves for bigger picture processing.

This means networking becomes the main bottleneck: each machine can do roughly 7 Gbps of internal communication before they can no longer scale the app. How efficient the communication is, and how well they compartmentalize the players, becomes more important than how efficiently they use CPUs, because CPU is now very easy to add, but the network is finite.
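
To give a rough idea of what I mean by the servers coordinating amongst themselves, here's a toy sketch of one node shedding players to a neighbour when its region gets too busy. All the names are hypothetical; this is just the shape of the idea, not anything CIG has shown:

Code:
# Toy sketch of server-side handoff, assuming a simple region-based split.
# All names are hypothetical -- not how CIG actually does it.

PLAYER_SOFT_CAP = 100        # per-node budget before we start shedding load

class RegionNode:
    def __init__(self, node_id, neighbours):
        self.node_id = node_id
        self.neighbours = neighbours   # other nodes we can hand off to
        self.players = {}

    def tick(self):
        if len(self.players) > PLAYER_SOFT_CAP:
            self.shed_load()

    def shed_load(self):
        # Pick the least-loaded neighbour and migrate border players to it,
        # shipping their state across the internal network.
        target = min(self.neighbours, key=lambda n: n.load())
        for player_id in self.pick_border_players():
            state = self.players.pop(player_id)
            target.accept_handoff(player_id, state)   # internal-network traffic

    def accept_handoff(self, player_id, state):
        self.players[player_id] = state

    def load(self):
        return len(self.players)

    def pick_border_players(self):
        # e.g. the players nearest the boundary shared with the target node
        return list(self.players)[:10]

Every one of those handoffs is state moving over the internal network, which is why the network, not CPU, ends up being the thing you budget.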
 

Xist

PS- the other thing that we talked about in this context was how to take stateful servers out of production in order to update them without forcing downtime on the users.

This problem is very easy when the updates are backwards compatible, but often that is not the case.

What most games do is force several hours of downtime every week. WoW is a perfect example of a horrible way to handle it, with sometimes 9-12 hours of downtime in a week.

This bugs the shit out of me, and also CR. I shared with him our strategy for making updates. We have not had intentional downtime in more than 15 years - and our strategy is far easier now with GCE as a hardware provider.

The problem is that you have to build the entire app around this concept from the ground up, which he should have been able to do if he chose to prioritize downtime (or rather, the lack of it) as part of the end-user experience he wanted to provide.

He hasn't announced how they will handle that aspect of production yet, but I keep my fingers crossed and hope for zero downtime. :)
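
For the curious, the general shape of a zero-downtime update for stateful servers looks something like this. Purely illustrative, with hypothetical names; it is not our code and certainly not CIG's:

Code:
# Hypothetical rolling-update sketch for stateful servers -- illustrative only.

def rolling_update(cluster, new_version):
    for old_node in list(cluster.nodes):
        new_node = cluster.launch(new_version)     # start the replacement first
        cluster.router.stop_routing_to(old_node)   # no new sessions land on the old node

        # Drain: hand the sessions it still holds to the replacement instead
        # of kicking players -- this is where "zero downtime" comes from.
        for session_id in old_node.active_sessions():
            state = old_node.export_session(session_id)
            new_node.import_session(session_id, state)

        cluster.retire(old_node)                   # only now is it safe to shut it down

The hard part is that the session export/import has to keep working even when the update isn't backwards compatible, which is exactly why it has to be designed in from the ground up.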
 

Kirk B

So this just got real technical real fast, but I think I understand the gist of it! Kudos to you guys for inadvertently explaining what this actually means without having to walk us uninitiated through it like three-year-olds. The spontaneous expert-level talk reminds me why having 10,000 members is insanely awesome.
 

Xist

Physical hardware limitation.

GCE uses dual Ethernet connections, one on the internal network and one on the external network.

You can't reliably push a NIC beyond about 85% utilization without starting to see persistent errors, so on a 10 Gbps internal link the practical max bandwidth is 8.5 Gbps. They could go higher, but shouldn't; QoS would drop.

They can't use ALL of that, either; they still need some of it to maintain the machines.

I estimated 1 Gbps to be sufficient overhead for non-application network requirements, which is probably liberal.

That leaves roughly 7.5 Gbps for the application, which for reference is a little better than the throughput of reading/writing a local hard drive.

The absolute maximum is 8.5 Gbps, if they use the internal network for nothing else at all during operations, but that seems highly unlikely.
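
Same thing as quick arithmetic (the 10 Gbps line rate is my assumption; the 85% and 1 Gbps figures are the ones above):

Code:
# Back-of-the-envelope budget for one server's internal NIC.
# The 10 Gbps line rate is an assumption; 85% and 1 Gbps are the figures above.

nic_gbps = 10.0               # assumed internal NIC line rate
safe_utilization = 0.85       # past this you start seeing persistent errors
maintenance_gbps = 1.0        # liberal allowance for non-application traffic

usable_gbps = nic_gbps * safe_utilization          # 8.5
app_budget_gbps = usable_gbps - maintenance_gbps   # 7.5

print(f"usable: {usable_gbps} Gbps, app budget: {app_budget_gbps} Gbps")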
 

RedLir

I will say this coming from an EVE background originally: having EVERYONE on the same server is awesome for immersion. The closer SC can get to that, the better.
 

chrizz

thx Xist, for the technical details :D
 

Lexicon

Yeah!

Time dilation is an awesome feature! ;)
Oh jeez. I wonder if SC is properly preparing for EVE-style capital battles. B-R5RB had 2.7k in system at its peak, Asakai had roughly that by my estimate, and 6VDT-H had four thousand players simultaneously in system - and let's be honest with ourselves, EVE is a lot less classically fun than a traditional spaceflight sim. I can honestly picture huge turf-war battles in Star Citizen drawing more players, not fewer.

I dunno if a "thousand player cap" is going to be enough...
 

RedLir

Yeah!

Time dilation is an awesome feature! ;)
After my time. That was their "answer" to a problem because they didn't have the foresight or resources to design it properly from the conception of the game. Obviously we're not going to have 40k people in one 'system' dogfighting, but in EVE it was nice to know everyone was 'there'. You could bump into them just beyond the next gate.

Limiting it to 1,000 players would be pretty bad for PvP immersion. We could fill 10 instances alone with TEST and filthy affiliates...
 

maynard

Time dilation is an awesome feature! ;)
sovereignty is the awesome feature

EVE's sov mechanics are a little stilted, but they tap into our instinctive territoriality

the big fights happen because everyone is like, "we must win at all costs!"

I have yet to see what SC will provide with game design that exploits our territorial instincts

what will be the conflict drivers?

for TEST to hang together long-term we need goals besides 'more beer' to unite around

it's my biggest [CONCERN] for SC
 