The Other Kind of Hacker: Video Game Developers and Cybersecurity

A few days ago, it was announced that Genshin Impact, the multiplatform Chinese gacha action RPG, had had its anti-cheat driver weaponized by ransomware developers, thanks to a vulnerability that had been discovered back in October of 2020.

Which, I should point out, isn’t an entirely new phenomenon. There’s literally a Metasploit module designed to take advantage of a vulnerability in an anti-cheat driver Capcom used for Street Fighter V on PC. All this has happened before, all this will happen again. (Hopefully with a better ending, though.)

It’s also not a new thing for video game developers to not really understand the importance of fixing vulnerabilities like that. Remember back in 2018, when Epic’s Tim Sweeney got into a fight with Google after Google publicly disclosed a vulnerability in the Android version of Fortnite a week after it had been patched?

The CEO of Epic Games, everyone.

Rather than spend this entire blog post dunking on Epic Games (fun as doing so can be), let’s take a look at why this contempt for security research might come up given how much time and money game developers spend on stopping people from cheating. I mean, there’s more overlap than you might think. Security research and cheat tools both tend to involve poking at software to make it do things that the developers didn’t want. Hell, one of the most powerful speedrun techniques in The Legend of Zelda: Ocarina of Time, Stale Reference Manipulation, is literally a use-after-free vulnerability.

And it’s not like attacks that don’t even touch the game itself are off their radar either. Look at how much time and effort customer support teams for MMO developers have to spend whenever an account gets stolen. People have plenty of reasons to steal MMO accounts, too. Remember Trump administration dipshit Steve Bannon? Dude used to be the CEO of a World of Warcraft gold seller. Credential theft (or automated botting, or just outsourcing the labor to China) can yield a lot of in-game money, which can be exchanged for real-world money if you don’t care about things like Terms of Service agreements.

And this is where we start to see the actual cause of the disconnect: the threat model for video game developers isn’t the same as the threat model for other software developers. Their concern tends to be about players gaining an advantage within the game itself: memory sniffing, graphical hacks, aimbots, and so forth. The goal is to protect the integrity of the game as a game, not the integrity of the game as software. Which is why you get things like “we trust the gaming community to know how to behave and not upload malicious mods that will intentionally cause damage to users” in the official Cities: Skylines wiki article about the modding API. An API that people have literally used to implement a web server in the game.

The real problem is that this relatively lax attitude can have severe consequences outside of games. Dark Souls III had a remote code execution vulnerability that forced the PC multiplayer servers offline for seven months. Valve’s Source engine once had a missing bounds check that allowed for code execution by fragging the target, which is either the most cyberpunk thing imaginable or the most NCIS thing imaginable. And I’m sure a lot of people reading this blog are familiar with the big Log4j vulnerability that was disclosed in December of 2021, but do you know what bit of software it was first spotted being exploited in? Minecraft. Imagine if that zero-day got dropped in something more high-profile.

The funny thing is, there are areas where game developers are (usually) better at cybersecurity than other developers. Hell, one of the oldest maxims of multiplayer game development is something we have all wanted to beat into the head of at least one programmer:

Never trust the client. The client is in the hands of the enemy.

Raph Koster

Obviously, that maxim is followed more faithfully by some games than others, often for valid reasons. Server-side control of character movement can be unacceptably slow for a lot of games, and trusting the client for things like character location isn’t that big a deal if you’ve got sanity checks in place for things that obviously shouldn’t happen, like teleporting halfway across the map or hovering beneath the world. It’s a compromise between performance and security, which is a concept plenty of us are familiar with.

Similarly, if there’s a solution to the problem, it’s one that many developers are already familiar with. MMO developers will often have a different process for reporting exploits than they will for other bugs (even if it’s just as simple as “use the in-game bug report tool, not the public forum”), because widespread public knowledge of ways people can take advantage of a bug to gain an advantage over other players is exactly the sort of thing they want to avoid. The real challenge is making sure the support staff know the difference between “hey this is something you should fix” and “lol i hakd ur game”, which isn’t as easy as it sounds. And it’s not one I have a solution for either.

Because one way or another, fixing this sort of thing will require developers to interact with gamers. And gamers…well, they’re gamers.