this post was submitted on 27 Aug 2023
121 points (94.2% liked)

Asklemmy


I can understand patch updates, but what else are the devs doing?

all 36 comments
[–] ojmcelderry@lemmy.one 99 points 1 year ago

They could be upgrading hosting infrastructure - sometimes this requires servers to be shut down or restarted. They might also be applying database changes such as migrating data from one server to another, or updating the structure of the database to improve performance or support new features.

Honestly, there are quite a number of reasons for planned downtime.

Unplanned downtime is a different story. Usually that's because something unexpected went wrong and there will be engineers trying to get things back up and running ASAP.

[–] lowleveldata@programming.dev 56 points 1 year ago (2 children)

They interrogate the player characters one by one and ask whether their humans have been up to any suspicious activities.


[–] fubo@lemmy.world 44 points 1 year ago* (last edited 1 year ago) (1 children)

Over a decade ago, I worked in a big tech company that had a scheduled downtime on one Saturday a month. That was for database schema changes.

When you're changing the structure of how you keep track of customer data, you need to make sure that no customers are making changes at that same time. So you take the whole customer-facing service down for a little while, make the schema changes, test them, and then bring the customer-facing service back up. Ideally this takes a few minutes ... but you're prepared for it to take hours.
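That stop, migrate, verify, restart cycle can be sketched in miniature with Python's sqlite3 module (the customers table and last_seen column are invented for illustration, not taken from any real system):

```python
import sqlite3

def run_schema_migration(db_path: str) -> None:
    """Apply a schema change while the customer-facing service is down.

    Assumes the service has already been stopped, so no customers can
    write to the database mid-migration.
    """
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: the change applies fully or not at all
            # Hypothetical change: track when each customer was last seen.
            conn.execute("ALTER TABLE customers ADD COLUMN last_seen TEXT")
        # Sanity-check the new schema before bringing the service back up.
        cols = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]
        assert "last_seen" in cols, "migration did not apply"
    finally:
        conn.close()
```

In a real deployment this is one step in a runbook: stop traffic, snapshot, migrate, verify, restore traffic.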

As the technology improved, and as the developers learned better how to make changes to the system without requiring deep interventions, long downtime for schema changes became less necessary ... for that particular business.

Every tech company pretty much has to learn how to do these sorts of changes for themselves, though.

[–] Synthead@lemmy.world 18 points 1 year ago* (last edited 1 year ago) (1 children)

This is the most informed answer in this thread. It really does come down to schema changes. There are ways to avoid downtime even during schema changes, but it's often complicated. For example, you don't see YouTube go offline for schema changes, because they're willing to make that effort and investment, even for very large databases.

Lots of other database tasks can happen while remaining online. For backups, use a read-only connection. For upgrades, you should have a distributed and scaled database, so take them down in sections during upgrades. For "cleaning up," you can do vacuum operations on part of your database while it's live. Etc etc.
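As a small illustration of the backups-without-downtime point, sqlite3 exposes an online backup API that copies a live database page by page while other connections keep reading and writing (a sketch only; a large production database would use its own replication or snapshot tooling):

```python
import sqlite3

def online_backup(src_path: str, dest_path: str) -> None:
    """Copy a live database to dest_path without stopping the service.

    The backup API snapshots the source in pages, so concurrent reads
    and writes on the source database keep working while it runs.
    """
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        src.backup(dest)  # page-by-page online copy
    finally:
        dest.close()
        src.close()
```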

Ultimately, there is almost never a technical reason why a database has to go offline. It's a matter of devotion to the stability and uptime of your infra. Toss enough engineering hours at a database problem and you can pretty much have 100% uptime in the scope of maintenance (not incidents, of course). But even with incidents, there are fail-over plans, replicas, and a ton of other things you can do to stay online. Instead of downtime, you have degraded performance that the users may not even notice.

[–] Cras@feddit.uk 4 points 1 year ago (1 children)

The other big one that usually requires downtime is network. You may not be touching your game servers all that often but if you need to do a major OS upgrade on a load balancer or switch, that's going to mean everything behind it loses connectivity - and unless you're talking one of the big hitters like WoW, they're probably not funding redundant dual network paths to allow you to take it down without downtime

[–] Synthead@lemmy.world 2 points 1 year ago (1 children)

If you are running metal, and the health of your entire network relies on a single load balancer or a single network switch, you're far from being production-ready from a redundancy and scaling perspective.
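For what that redundancy buys you: with two or more network paths, one can be drained and upgraded while traffic keeps flowing through the rest. A toy sketch of the selection side (the host names and health check are invented):

```python
from typing import Callable, Optional

def pick_backend(backends: list[str],
                 is_healthy: Callable[[str], bool]) -> Optional[str]:
    """Return the first healthy backend, or None if every path is down.

    With a redundant pair, taking one box down for an OS upgrade just
    means traffic shifts to the other instead of the game going offline.
    """
    for host in backends:
        if is_healthy(host):
            return host
    return None
```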

[–] Cras@feddit.uk 1 points 1 year ago (1 children)

I don't disagree, but at the same time running a whole setup that is fully ready for hot swap live failover whenever you have maintenance tasks to do is potentially just not desirable when you have the option of just taking the environment down instead - after all, gamers are pretty much conditioned to expect it at this point

[–] Synthead@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

running a whole setup that is fully ready for hot swap live failover whenever you have maintenance tasks to do

This is basically "ready for production 101." It's even easier to run an entire service on a computer under a desk, but this isn't how you run stuff in production.

Even if it's "easier" in the short term, you'll be paying more for not being production-ready in the long term (and get a reputation for not having good uptime).

[–] Cras@feddit.uk 1 points 1 year ago (1 children)

Yeah, I feel you're wildly overestimating the setup that's in place for smaller online games companies. We're not talking about Activision or some high-frequency fixed income trading firm here. "Give me something that people can play on that costs as close to nothing as possible" is usually the main driver.

[–] Carighan@lemmy.world 32 points 1 year ago

One thing older MMORPGs in particular did was essentially just need a weekly restart. They could not figure out how to rid the server of some bug or another that slowly increased memory usage, so eventually it would just break.

To alleviate this, they did weekly restarts. Also a good time to do longer full backups, integrity checks, etc. But the main impetus was needing to restart everything.
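That restart-before-it-breaks workaround is essentially a watchdog policy: restart when resident memory crosses a limit or when the weekly window arrives, whichever comes first. A sketch with invented thresholds:

```python
import time

WEEK_SECONDS = 7 * 24 * 3600

def should_restart(rss_bytes: int,
                   started_at: float,
                   rss_limit: int = 2 * 1024**3,
                   max_uptime: float = WEEK_SECONDS,
                   now: float = None) -> bool:
    """Decide whether a leaky game server is due for a restart.

    Restart when the process has grown past rss_limit, or when it has
    been up longer than max_uptime (the weekly maintenance window).
    """
    if now is None:
        now = time.time()
    return rss_bytes >= rss_limit or (now - started_at) >= max_uptime
```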

[–] Dlayknee@lemmy.world 27 points 1 year ago

On top of what's already been said, to your question specifically of what the devs are doing - a lot of the time it's nothing out of the ordinary as the Ops teams are the ones conducting the maintenance. There will likely be a dev or devs on call, but that's routine anyway so it's ultimately just another day for them. Sure, when big patches are pushed they're typically more attentive to the process - but even then, they're essentially informed observers.

[–] Agent641@lemmy.world 5 points 1 year ago (1 children)

Yes, but what are they doing?

[–] bingbong@lemmy.dbzer0.com 3 points 1 year ago (2 children)

Yes, but what are the devs doing?

[–] AnUnusualRelic@lemmy.world 1 points 1 year ago

Same as everyone else, waiting.

[–] Falmarri@lemmy.world 19 points 1 year ago

Not just database migrations, as others have mentioned, but database state. Databases accumulate a lot of dead data because of how transactions and locks work. Cleaning that up can block use of the database for a short time, so it's easiest to do periodically during downtime.
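A miniature of that cleanup, using sqlite3: deleted rows leave dead pages behind, and VACUUM rewrites the file to reclaim them, holding an exclusive lock while it runs (which is exactly why it's easiest during a maintenance window):

```python
import sqlite3

def reclaim_dead_space(conn: sqlite3.Connection) -> tuple:
    """VACUUM the database; return (pages_before, pages_after).

    After mass deletes, the file keeps its dead pages on a freelist.
    VACUUM rewrites the whole file to drop them, but blocks other
    writers for the duration.
    """
    conn.commit()  # VACUUM cannot run inside an open transaction
    before = conn.execute("PRAGMA page_count").fetchone()[0]
    conn.execute("VACUUM")
    after = conn.execute("PRAGMA page_count").fetchone()[0]
    return before, after
```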

[–] theodewere@kbin.social 14 points 1 year ago (2 children)

databases are weirdly mechanical in that you have to shut them off now and then to sort of straighten out the rows and columns, and chuck out abandoned or corrupted files.. maybe add some grease in the form of optimizations and then fire it back up so users can get it all messy again inside.. mostly because they're all written just well enough to function..

[–] Synthead@lemmy.world 3 points 1 year ago (2 children)

How do you straighten out rows and columns?

[–] theodewere@kbin.social 3 points 1 year ago

just give the tape drives a little jiggle as they come online

[–] JackbyDev@programming.dev 1 points 1 year ago

That doesn't sound right. Why would turning a database off let it do anything? It's off. Most databases periodically do stuff like this in the background.

[–] sulunia 13 points 1 year ago

Database schemas can be updated; new services and special features can be activated and then tested with specific accounts; and a myriad of other things, depending on the game and the update.
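That activate-first, test-with-specific-accounts pattern is essentially a feature flag with an allow-list. A minimal sketch (the flag name and test accounts are invented):

```python
# A feature ships dark, is first enabled only for designated test
# accounts, and is later flipped on for everyone.
FLAGS = {
    "new_matchmaker": {
        "enabled_for_all": False,
        "test_accounts": {"qa_alice", "qa_bob"},
    },
}

def feature_enabled(flag: str, account: str) -> bool:
    """True if `account` should see `flag` right now."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False  # unknown flags default to off
    return cfg["enabled_for_all"] or account in cfg["test_accounts"]
```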

[–] atri@lemmy.world 8 points 1 year ago

Windows updates

[–] squiblet@kbin.social 7 points 1 year ago

The servers run on regular operating systems. They might wish to back up the storage (and databases), update the OS, or update their game server software, all of which is a lot easier if the service is stopped.

[–] what_is_a_name@lemmy.world 7 points 1 year ago

To add to others’ posts: it can be a huge variety of things that risk making the service unstable or unresponsive, and that in the worst case could corrupt data in flight.

Customers view scheduled maintenance as a minor inconvenience, an unplanned outage as an annoyance, and loss of data as a dealbreaker.

So any time there was a chance that what we needed to do would limit functionality - or otherwise make the system unstable - it was best to take the system offline for scheduled maintenance.

[–] TheColonel@reddthat.com 6 points 1 year ago

Maintenance.