Microsoft Game Technology Group
December 2005
An increasing number of people play online games and games with user-made content. This, combined with the increasing security of the Windows operating system, makes games a growing and more tempting target for attackers to exploit. Game developers should place a strong emphasis on making sure the games they release don't create new security holes for attackers. Developers have a responsibility, and a vested interest, in helping prevent their customers' machines from being compromised by malicious network data, user mods, or tampering. An exploited vulnerability can cost a developer customers, money, or both. This article outlines and explains some common methods and tools for increasing code security without over-inflating development time.
The three most common mistakes made by a development team when releasing a product are:
Each of the listed mistakes is not only common but is easily correctable with no significant change in development workload, coding standards, or functionality.
The following is a simple example of all it takes to allow an attacker to perform a buffer overrun attack:
void GetPlayerName(char *pDatafromNet)
{
    char playername[256];

    strncpy(playername, pDatafromNet, strlen(pDatafromNet));

    // ...
}
On the surface this code looks fine; it is calling a 'safe' function, after all, and data from the network is copied into a 256-byte buffer. The strncpy function stops at the NULL terminator in the source string or at the provided count, whichever comes first. The problem here is that the count is wrong: it is the length of the attacker-controlled source string, not the size of the destination buffer. A network packet longer than 256 bytes therefore overruns playername and overwrites the stack data that follows it, including the function's return address. By controlling the return address, the attacker can execute arbitrary code. The most basic lesson is to never trust input until it has been verified.
Even if this data doesn't come from the network initially, there is still potential risk. Modern game development requires many people designing, developing, and testing the same code base, and there is no way to know how the function will be called in the future. Always ask yourself where the data came from and what an attacker could control. While network-based attacks are the most common, they are not the only way security holes are created. Could an attacker create a mod or edit a save file in a way that opens a security hole? What about user-supplied image and sound files? Malicious versions of these files could be posted on the Internet and create dangerous security risks for your customers.
As a side note, use strsafe.h or the Safe CRT instead of strncpy to correct the example. The Safe CRT is a complete security overhaul of the C Runtime and ships as part of Visual Studio 2005. More information about the Safe CRT can be found in Saying Goodbye to an Old Friend by Michael Howard.
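As one possible correction of the example above, the sketch below uses StringCchCopyA from strsafe.h so that the copy is bounded by the size of the destination buffer rather than by the attacker-controlled source; how a failure is handled (here the packet is simply rejected) is a design choice, not part of the original example.

#include <windows.h>
#include <strsafe.h>

void GetPlayerName(const char *pDatafromNet)
{
    char playername[256];

    // The copy is limited by the destination size, not by attacker-controlled
    // input, and the result is always null-terminated.
    if (FAILED(StringCchCopyA(playername, sizeof(playername), pDatafromNet)))
    {
        // The name was too long or otherwise invalid; reject the packet
        // rather than working with a truncated or corrupt value.
        return;
    }

    // ...
}

The Safe CRT alternative, strncpy_s, likewise takes the destination size as an explicit parameter.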
There are several ways to improve security during the development cycle. Here are some of the best:
The book, "Writing Secure Code" 2nd edition by Michael Howard and David LeBlanc, provides an in-depth and clear explanation of strategies and methods of preventing attacks and mitigating exploits. Starting with methods of designing security into a release to techniques for securing network applications, the book covers all aspects that a game developer needs to help protect themselves, their products, and their customers from attackers. The book can be used to instill a culture of security in a development studio. Don't just think of code security as a developer's problem or a tester's problem. Think of security as something the whole team, from program manager to designer to developer to tester, should be thinking about when they work on a project. The more eyes that are part of the review process, the greater the chance of catching a security hole prior to release.
"Writing Secure Code" 2nd edition can be found here and more general security information can be found in Fending Off Future Attacks by Reducing Attack Surface by Michael Howard
Michael Howard, David LeBlanc, and John Viega have written another book on the subject that covers all common operating systems and programming languages entitled, "19 Deadly Sins of Software Security."
More good information can be found in the Meltdown 2005 presentations "Finding Security Bugs" and "Reviewing Code for Security Bugs" by Michael Howard.
A Threat Modeling Analysis (TMA) is a good way of assessing system security not in a specific language or with a specific tool, but in a broad, end-to-end way that can be accomplished in a few meetings. When implemented properly, a TMA can identify the strengths and weaknesses of a system without adding significant workload or meeting time to the project. Threat modeling also emphasizes assessing system security prior to and during the development process, which helps ensure a comprehensive assessment is made while focusing effort on the riskiest features. It can be thought of as a profiler for security. Because it is not based on a particular language and does not rely on a specific tool, threat modeling can be used in any development studio working on any project in any genre. Threat modeling is also an excellent way of reinforcing the idea that security is everyone's responsibility and not someone else's problem.
When threat modeling, pay special attention to:
These are the areas that have good potential for security weaknesses.
More on Threat Modeling can be found in the Threat Modeling section of the MSDN Security Development Center and in the book "Threat Modeling" by Frank Swiderski and Window Snyder.
A recent tool for mitigating multiple classes of exploits is Data Execution Prevention (DEP). When an executable is built as DEP-compatible (the /NXCOMPAT switch on the Visual Studio 2005 linker command line) and run on supporting hardware, memory pages are flagged according to whether the code in them has the right to execute. Any program attempting to execute code from a page not flagged with EXECUTE permission is forcibly terminated. The protection is enforced at the processor level and will affect developers using self-modifying code or native JIT language compilers. Currently, only AMD's Athlon 64 and Opteron processors and Intel's Itanium and latest Pentium 4 processors support execution protection, but it is expected that all 32-bit and 64-bit processors will support it in the future. A copy-protection scheme used by a developer may be affected by execution protection, but Microsoft has been working with copy-protection vendors to minimize the impact. It is good practice to use this build flag.
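For developers who do generate code at run time (a script JIT, for example, or self-modifying copy protection), the usual accommodation is to place that code in memory explicitly allocated with execute permission. The following is a minimal sketch under that assumption; the stub bytes and function names are purely illustrative, and error handling is pared down.

#include <windows.h>
#include <string.h>

typedef int (__cdecl *GeneratedFn)(void);

int RunGeneratedStub(void)
{
    // Illustrative x86 machine code for "mov eax, 1; ret".
    static const unsigned char stub[] = { 0xB8, 0x01, 0x00, 0x00, 0x00, 0xC3 };

    // Under DEP, executing from an ordinary heap allocation terminates the
    // process, so request a page that is explicitly marked executable.
    void *mem = VirtualAlloc(NULL, sizeof(stub),
                             MEM_COMMIT | MEM_RESERVE,
                             PAGE_EXECUTE_READWRITE);
    if (mem == NULL)
        return -1;

    memcpy(mem, stub, sizeof(stub));

    int result = ((GeneratedFn)mem)();   // runs even with DEP enforced

    VirtualFree(mem, 0, MEM_RELEASE);
    return result;
}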
For more details on DEP, read this execution protection article
/GS is a compiler flag and /SAFESEH is a linker flag for Visual Studio .NET 2003 and later that can make the developer's job of securing code a little easier.
Using the /GS flag causes the compiler to insert checks for some forms of stack-based buffer overruns that could be exploited to overwrite the return address of a function. /GS will not detect every potential buffer overrun and shouldn't be considered a catch-all, but it is a good defense-in-depth technology.
Using the /SAFESEH flag instructs the linker to generate an executable or DLL only if it can also generate a table of the image's safe exception handlers. Safe Structured Exception Handling (SafeSEH) eliminates exception handling as a target of buffer overrun attacks by ensuring that, before an exception is dispatched, the exception handler is registered in the function table located within the image file. These protection benefits are enabled with Windows XP SP2, Windows Server 2003, and Windows Vista. Note also that /SAFESEH must be used in an all-or-nothing fashion: all libraries containing code bound into the executable or DLL must be built with /SAFESEH, or the table will not be generated.
More information about /GS and /SAFESEH can be found in MSDN
PREfast is a free tool offered by Microsoft that analyzes execution paths in compiled C or C++ to help find run-time bugs. PREfast operates by working through all execution paths in all functions and assessing each path for problems. Normally used to develop drivers and other kernel code, this tool can help game developers save time by eliminating some bugs that are hard to find or are ignored by the compiler. This tool is an excellent way of reducing workload and focusing the efforts of both the development team and test team. A new version of PREfast comes with Visual Studio 2005 Team System.
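As a purely hypothetical illustration of the kind of defect that path analysis reports, the function below compiles without complaint, yet one execution path returns a variable that was never written, exactly the sort of issue a reviewer can miss and a path-walking tool will flag.

int GetPacketLength(const unsigned char *pPacket, int packetSize)
{
    int length;

    if (packetSize >= 2)
        length = pPacket[0] | (pPacket[1] << 8);

    // On the path where packetSize < 2, 'length' is read uninitialized.
    return length;
}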
For more information, read PREfast for Drivers
The Windows Application Verifier, or AppVerifier, helps testers by providing multiple functions in one tool. AppVerifier was developed to make common programming errors more testable: it can check parameters passed to API calls, inject erroneous input to exercise error handling, and log changes to the registry and file system. AppVerifier can also detect buffer overruns in the heap, check that an Access Control List (ACL) has been properly defined, and enforce the safe use of socket APIs. While not exhaustive, AppVerifier is one more component of the tester's toolbox to help a development studio release a quality product.
More information on the Windows Application Verifier can be found in Analyzing Your Applications with Windows Application Verifier by Michael Howard
Fuzz testing is a semi-automated method of testing that can enhance current testing methodologies. The central idea behind fuzz testing is to assess every input, including all network data, mods, saved games, and so on, by feeding it malformed data and seeing what breaks. Fuzz testing is fairly easy to do: simply alter well-formed files or network data by inserting random bytes, flipping adjacent bytes, or negating numerical values. Values such as 0xff, 0xffff, 0xffffffff, 0x00, 0x0000, 0x00000000, and 0x80000000 are good at exposing security holes while fuzz testing. Running the game on the malformed data under AppVerifier makes the resulting failures easier to observe. While fuzzing is not exhaustive, it is easy to implement and automate, and it can catch the more elusive and unpredictable bugs.
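A simple file fuzzer in the spirit described above takes only a few dozen lines. The sketch below assumes a well-formed save file named goodsave.sav and writes a corrupted variant for the game to load, ideally while the game runs under AppVerifier; the file names, byte values, and iteration count are arbitrary placeholders.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Byte values that tend to expose parsing bugs: boundary and sign-bit cases.
    static const unsigned char evil[] = { 0x00, 0x7F, 0x80, 0xFF };

    FILE *in = fopen("goodsave.sav", "rb");
    if (in == NULL)
        return 1;

    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    fseek(in, 0, SEEK_SET);
    if (size <= 0)
        return 1;

    unsigned char *data = (unsigned char *)malloc(size);
    if (data == NULL || fread(data, 1, size, in) != (size_t)size)
        return 1;
    fclose(in);

    srand(12345);                       // fixed seed so failures can be reproduced

    // Corrupt a handful of randomly chosen bytes with the interesting values.
    for (int i = 0; i < 16; ++i)
        data[rand() % size] = evil[rand() % sizeof(evil)];

    FILE *out = fopen("fuzzedsave.sav", "wb");
    if (out == NULL)
        return 1;
    fwrite(data, 1, size, out);
    fclose(out);
    free(data);
    return 0;
}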
More information on Fuzz Testing can be found in the Meltdown Slideshow
Authenticode is a method of ensuring that the executables, DLLs, and MSIs a user receives are unaltered from what the developer released. Using a combination of cryptographic principles, trusted entities, and industry standards, Authenticode verifies the integrity of executable content. The Crypto APIs provided by Microsoft can be used to automatically detect tampering with signed code. If a signing certificate is compromised after a release, the certificate can be revoked, and all code signed with that certificate will stop authenticating. Windows has been designed to work with Authenticode signing and will, in specific situations, alert the user to unsigned code that could expose their PC to attack.
Authenticode should not be considered a method of eliminating security flaws, but rather a method of detecting tampering after an executable has been released. An executable or DLL containing an exploitable security flaw can be signed and verified using Authenticode and will still carry that flaw onto the user's system. Only after a product or update has been verified to be secure should the code be signed to assure users they have a release that hasn't been tampered with.
Even if a developer feels there is no threat of their releases being modified, other technologies and services rely on Authenticode. Code signing is easy to integrate and automate, so there is no reason for a developer not to sign their releases.
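As noted above, the Crypto APIs can be used to detect tampering programmatically. The sketch below, assuming a title that wants to check a DLL or patch before loading it, asks Windows whether the file carries a valid Authenticode signature; a shipping implementation would also decide how to handle revocation-check failures and offline machines.

#include <windows.h>
#include <softpub.h>
#include <wintrust.h>
#pragma comment(lib, "wintrust")

// Returns TRUE only if the file is signed and the signature verifies.
BOOL IsFileSignatureValid(LPCWSTR filePath)
{
    WINTRUST_FILE_INFO fileInfo = { sizeof(fileInfo) };
    fileInfo.pcwszFilePath = filePath;

    WINTRUST_DATA trustData = { sizeof(trustData) };
    trustData.dwUIChoice          = WTD_UI_NONE;             // never show UI
    trustData.fdwRevocationChecks = WTD_REVOKE_WHOLECHAIN;   // honor revoked certificates
    trustData.dwUnionChoice       = WTD_CHOICE_FILE;
    trustData.pFile               = &fileInfo;

    GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;         // standard Authenticode policy
    LONG status = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &trustData);

    return status == ERROR_SUCCESS;
}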
More information on Authenticode signing can be found in the Authenticode Signing for Game Developers article.
Developing a game for the current and future marketplace is costly and time-consuming, and releasing a game with security flaws will ultimately cost more money and time to fix properly. It is therefore in the interest of all game developers to integrate tools and techniques that mitigate security exploits prior to release. The information in this article is just an introduction to what a development studio can do to help themselves and their customers.
More information on general security practices can be found at the Microsoft Security Developer Center.