This article was originally published in The Inner Product column of Game Developer magazine, September 2006

The job of a game programmer has been constantly evolving since game programming began its commercial existence sometime back in the 1970s. The primary factor driving that evolution has been the exponential increase in the power of game platforms, particularly consoles.
Market forces have also influenced the evolution of game programming. The increase in the size of the game market, the subsequent diversification in the gaming audience, and the emergence of mobile and casual games have significantly impinged upon the traditional image of the work that a game programmer performs.

I’ve noticed a few interesting trends in game programming emerging over the past few years, which are worth reflecting on because any game programmer who wants to advance their career, and any studio head who wants to make money, needs to anticipate and plan for what will be expected of them in the years to come.


Historically, programmers have been the primary bottleneck in the game production process. Frequently, large sections of game code had to be written from scratch, and significant portions of the game logic were either implemented directly in code or needed code written to support them. Development schedules therefore depended heavily on the programmers, as they were essentially implementing the game.
But lately, the development of a game seems to be driven more heavily by the creation of content. The role of technology—and of game programmers—has shifted from implementing content to providing the tools for others to implement it. Scheduling is shifting as a result: the programming of new features now happens toward the front of the schedule, and programmers are increasingly relegated to a supporting role toward the latter parts of a project.

The shift to content-driven development has essentially created a new breed of engineers: technical content creators, or more specifically, technical artists, technical level designers, and script programmers. The technical artists and level designers have to operate within a complex set of technical constraints, while also understanding the technology enough to leverage all it has to offer. They may be tasked with work that’s very much like programming.

Script programmers have a differently focused skill set from regular programmers: far less emphasis on algorithms and data structures, and much more on event handling and implementing state-driven behavior.
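To make that contrast concrete, here is a minimal sketch of state-driven, event-handling script logic. All the names here (the guard, its states, the event strings) are invented for illustration:

```cpp
#include <cassert>
#include <string>

// Minimal sketch of state-driven scripted behavior (all names illustrative).
// A guard NPC changes state in response to game events; the interesting
// logic is which events matter in which state, not any algorithm over data.
enum class GuardState { Patrol, Chase, Attack };

struct Guard {
    GuardState state = GuardState::Patrol;

    // Event handler: the kind of code a script programmer writes.
    void OnEvent(const std::string& event) {
        switch (state) {
            case GuardState::Patrol:
                if (event == "player_spotted") state = GuardState::Chase;
                break;
            case GuardState::Chase:
                if (event == "player_in_range") state = GuardState::Attack;
                else if (event == "player_lost") state = GuardState::Patrol;
                break;
            case GuardState::Attack:
                if (event == "player_lost") state = GuardState::Patrol;
                break;
        }
    }
};
```

There is no clever data structure in sight; the entire job is wiring events to state transitions, which is exactly the skill set described above.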


Ubiquitous high-speed internet connectivity has made episodic content a market reality. While the change is hampered by market inertia and piracy concerns, it is inevitable that the game industry will move to a system of content delivery that’s free of physical media, as has already happened in the casual games market, where nearly every game is sold online. The trend is also sweeping over the full price PC game market.

This prevalence of downloadable media naturally encourages the development of episodic content—content that extends a game without being an entirely new one. The prime use of episodic content is to add extra levels, chapters, goals, missions, or stories to a game.

Since this additional content will consist mainly of data (models, levels, scripts, audio), the role of the programmer will be limited to providing the initial framework that allows for the additional content to be seamlessly incorporated into the game.

Episodic content will further advance the trend toward content-based development. With a sufficiently robust base engine, a game series might extend itself by several years without requiring any extra traditional coding, the only programming being done at a high level by the technical content creators, particularly script programmers.


Probably the most dramatic change in technology from a programmer’s point of view is the forced shift from single-threaded engines to multi-threaded ones. The next generation of consoles all have multi-core processors, and the majority of PCs aimed at gamers released from 2006 onward will have some kind of multi-core processor.

While a multi-core architecture is going to be the norm, the majority of game programmers are still unfamiliar with the techniques of multi-threaded programming. In addition, tools for debugging and profiling multi-core code are still in their infancy. In a complex engine with many interacting systems and many patterns of memory access, the task of optimizing for multiple cores is going to remain something of an art form for several years.

Generally, the trend here is toward more and more cores on a single chip. Long-term trends point to 8, 16, 32, and more cores on one chip. Understanding data-level parallelism, Amdahl’s Law, and pipelining will become core skills for game programmers.
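Amdahl’s Law itself is worth internalizing early, and is simple enough to state in a few lines. As a sketch (the function name is mine): if a fraction p of a frame’s work can be parallelized across n cores while the rest stays serial, the best possible speedup is 1 / ((1 − p) + p / n):

```cpp
#include <cassert>

// Amdahl's Law: if a fraction p of the work can be parallelized across
// n cores (and the remaining 1-p stays serial), the best overall
// speedup is 1 / ((1 - p) + p / n).
double AmdahlSpeedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}
```

For instance, a frame that is 90 percent parallelizable gains only about a 4.7x speedup on 8 cores, because the serial 10 percent quickly dominates. That is why simply adding cores will not rescue an engine with a large serial core loop.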


A decade ago, artists created their 3D models one polygon at a time. Eventually, modeling tools grew more sophisticated—yet most artists still deliver assets that are essentially just a bunch of triangles with materials.

An increasing trend is the creation of 3D objects in a procedural manner via a mathematical model of that object, and a set of parameters. The classic example of this is a tree. Trees of a particular species are very similar, but no two trees are the same. If a programmer can create a mathematical description of a tree, then she or he can generate an infinite number of varied trees.

Procedural content can either be pre-generated (essentially, used as an exotic modeling tool), or generated at run time, so the designer can simply say, “Forest here,” without having to specify the look and position of each individual tree.
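As a sketch of the idea, with all names and numbers hypothetical: a technical artist defines a species archetype, and a deterministic seed grows each instance, so a “forest” is just one archetype plus many seeds:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of parameter-driven procedural generation (hypothetical names).
// One archetype ("species") plus a per-instance seed yields endless
// similar-but-unique trees from a deterministic pseudo-random stream.
struct TreeParams {          // the archetype a technical artist defines
    float trunkHeight;
    float branchAngle;
    int   branchLevels;
};

// Tiny deterministic PRNG so the same seed always grows the same tree.
struct Rng {
    uint32_t state;
    float Next01() {                       // value in [0, 1)
        state = state * 1664525u + 1013904223u;
        return (state >> 8) / 16777216.0f;
    }
};

struct Tree { float height; int branches; };

Tree GrowTree(const TreeParams& species, uint32_t seed) {
    Rng rng{seed};
    Tree t;
    // Vary each instance within +/-20% of the archetype's trunk height.
    t.height   = species.trunkHeight * (0.8f + 0.4f * rng.Next01());
    t.branches = species.branchLevels + static_cast<int>(3 * rng.Next01());
    return t;
}
```

Determinism is the design point: because the seed fully determines the tree, a run-time forest never needs to store its models on disk, only the archetype and the seeds.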

As environments become more realistic, a much larger portion of the models used in the game will be generated using some form of procedural content. Technical artists will be responsible for generating the archetypes of particular objects (and textures, animations, and even sounds), and then designers or other artists will tweak the parameters to create a specific instance or allow multiple instances (like a forest) to be created.

The challenge for the programmer within this trend is to provide the tools that allow artists to work effectively and intuitively with the technology. Programmers are not artists, and the sooner an artist can start using the procedural technology in a non-technical environment, the better the results.


Originally, game programmers would program exactly what went into a game, and they would understand exactly why a certain thing happened at a certain time under certain conditions. The amount of code and data involved was reasonably small, and usually the behaviors of game entities were hard coded by, of course, coders.

Now, it’s more typical for the behavior to be determined by data set up by a designer, and to involve the interaction of many complex systems. The programmer creates the technology to simulate an environment, and the game designer places objects in it and creates gameplay by influencing the behavior of those objects in a variety of ways.

Thus, instead of the behavior of the game being specifically coded in, it now emerges from a large number of variables—and it’s no longer always clear why certain things happen in the game. Debugging becomes more difficult, and programmers often find it painstaking to get the game to behave exactly as they want.

Overall, this trend shows game development leaning toward a softer form of content creation, where (for example) non-player characters are inserted into the game with a set of very high-level directions and a sufficient level of underlying logic to handle all eventualities. The actual gameplay that emerges is not always clear at the outset, and will not be directly coded by the programmer.

But the challenges here lie in debugging the inevitably fuzzy mess. Avoiding performance issues may also be a problem, as layer upon layer of behavior modifiers may be added to push the behavior in the desired direction. Programmers and designers must work together to know when it is appropriate to write new code rather than modify the behavior via tweaking the data.
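A toy illustration of how behavior moves from code into data (every name and number here is hypothetical): the programmer ships one generic decision rule, and designers author only per-archetype parameters, so new behavior comes from tweaking values rather than writing code:

```cpp
#include <cassert>
#include <string>

// Toy sketch of data-driven behavior (all names hypothetical). The
// programmer writes one generic rule; the designer authors only the
// numbers, so two NPCs with the same code behave entirely differently.
struct NpcTuning {
    float aggression;   // near 0 = flees, near 1 = attacks
    float sightRange;   // in world units
};

// One piece of engine code shared by every NPC archetype.
std::string DecideAction(const NpcTuning& tuning, float distToPlayer) {
    if (distToPlayer > tuning.sightRange) return "idle";
    return tuning.aggression > 0.5f ? "attack" : "flee";
}
```

The flip side is exactly the debugging problem described above: when a designer asks why a particular NPC fled, the answer lies in the interaction of data values, not in any line of code where a programmer can set a breakpoint.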


The rate of increase in power of video cards aimed at PC game players has outstripped Moore’s Law. By some measures, the processing power of the GPU can greatly exceed the power of the CPU. With this shift in power, an increasingly large amount of work can be done on the GPU, and not just rendering graphics.

The highly parallel nature of modern GPUs makes them very suitable for tasks that exhibit a high degree of data-level parallelism, where many individual chunks of data (such as rigid bodies) have the same code executed on them independently (such as physics-based motion and collision resolution). Using the GPU for non-graphics tasks is referred to as general-purpose GPU programming, or GPGPU.
From an engine programmer’s point of view, the major challenges associated with this trend are managing the flow of data between the CPU and the GPU, and implementing the required logic in the restricted instruction set of the GPU.
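The kind of workload that maps well to a GPU is easy to sketch on the CPU: a small “kernel” applied independently to every element, here a naive gravity integration over hypothetical rigid bodies. On a GPU, each loop iteration would become one shader invocation:

```cpp
#include <cassert>
#include <vector>

// The data-parallel pattern that suits a GPU: the same small kernel runs
// independently on every element. Sketched here on the CPU with
// hypothetical types; on a GPU the loop body would be a per-body shader.
struct RigidBody { float y; float vy; };

// Kernel: one Euler step of gravity integration for a single body.
void IntegrateKernel(RigidBody& b, float dt) {
    b.vy += -9.8f * dt;
    b.y  += b.vy * dt;
}

void IntegrateAll(std::vector<RigidBody>& bodies, float dt) {
    // No body reads another body's state, so every iteration could run
    // in parallel with no synchronization.
    for (RigidBody& b : bodies) IntegrateKernel(b, dt);
}
```

The moment one body needs to read another (collision resolution, for instance), the independence breaks down, and that is where the data-flow challenges mentioned above begin.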


A specific example of procedural content is muscle-driven animation, in which the motions of the game’s characters are driven by an accurate physics-based model of bones and muscles under the characters’ skin. Animations such as running and jumping are not pre-created by an animator, but instead are generated in real time, based on the physical state of the character and the interaction with the environment.
Doing this accurately requires a significant chunk of processing power, and so it has not been used much in games. Even in the pre-rendered world of Hollywood CGI, much research is still being done to make this technology look good, even for relatively straightforward tasks such as running over variable terrain.

Muscle-driven animation is also the ultimate goal for facial animation, promising lifelike and infinitely varied expressions that can also link directly into a speech synthesis system.

Again, the challenge programmers face with this new technology is how to provide the tools that allow technical animators to define the archetypical motion models and parameter sets, and then allow the less technical artists and designers the creative freedom to fully utilize the muscle-driven animation system.


On the PC, you have a mouse and a keyboard, sometimes a joystick. On a console you have a controller, included with the console purchase. For the vast majority of game players, the interface between their brains and the game has been fixed and consistent—and relatively simple, being just a two-axis analog control, and some buttons.

Three trends in technology are driving change here. First, newer consoles are shipping with motion-sensing controllers. Most notably, Nintendo’s Wii, with its revolutionary controller, opens up a whole new set of challenges for the programmer.
The technical challenge of working with a motion-sensitive device is to provide a mapping between the user’s actions in manipulating the controller and the game’s actions. Since the 3D motion of the Wii controller is a dimension more complex than the simple analog sticks and buttons of previous-generation controllers, it will be quite some time before programmers really come to grips with all the ways this new technology can be used.
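One of the first problems is turning raw sensor readings into discrete player intentions. A single-sample toy sketch, with all names and thresholds hypothetical (real gesture recognition integrates readings over a time window):

```cpp
#include <cassert>
#include <cmath>
#include <string>

// Hypothetical sketch of mapping raw motion-controller data to a game
// action: classify a 3-axis acceleration sample by its dominant axis
// once its magnitude exceeds a "deliberate motion" threshold.
std::string ClassifySwing(float ax, float ay, float az, float threshold) {
    float magnitude = std::sqrt(ax * ax + ay * ay + az * az);
    if (magnitude < threshold) return "none";   // ignore idle hand jitter
    if (std::fabs(ax) >= std::fabs(ay) && std::fabs(ax) >= std::fabs(az))
        return ax > 0 ? "swing_right" : "swing_left";
    if (std::fabs(ay) >= std::fabs(az))
        return ay > 0 ? "swing_up" : "swing_down";
    return "thrust";
}
```

A production system would filter noise and track the controller over many samples; the point is only that the mapping from raw data to intent is itself a piece of design the programmer must own.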

Second, there has been an increase in the number of “pointer” games, where the game action is controlled by mouse or stylus movements in a 2D plane, and the user is either pointing and clicking or drawing actions on the screen. This trend in control technology is driven by the Nintendo DS, but also by the casual games market. Since the Wii controller can function as a pointer, this type of control technology may also crop up in several games for that platform.

Third, Guitar Hero, Dance Dance Revolution, and Donkey Konga have shown that games can be packaged with a very inexpensively produced, game-specific controller, and be wildly successful. Each type of new controller presents new problems for programmers as they attempt to provide intuitive ways of translating the raw data from the controller into something representative of the player’s intentions.

The Sony EyeToy also represents something of a trend here with its own set of problems, namely, the idea of incorporating live video of the player into the game as a control element. This technology is still in its infancy, and the fiddly nature of setting up a video camera as a controller suggests it’s unlikely to achieve extensive usage.

The most likely use of the camera is in-game chatting. I predict that people will attempt to incorporate some kind of facial expression recognition into their games (imagine a poker game that could tell when you are smiling, so you really have to maintain your poker face). The AI required for effective video processing is still unsuitable for games, but it’s an exciting avenue for the games of the future.


A game feature that’s closer to becoming common is voice control. The Nintendo DS is broadening the popular appeal of this with Nintendogs, which incorporates simple speech recognition into the game. It’s relatively simple for a game to use single word commands—even most mobile phones now have some form of voice recognition.

But beyond recognition of single words, the great leap forward in this trend will require major advances in natural language processing. Eventually, players will be able to hold simple conversations with characters in a game, or characters in a game will be able to synthesize conversations between themselves. This technology will inevitably appear in titles like The Sims, but it is unclear when it will mature.


Computers and game consoles can now be used as communication devices. Sometimes, this takes the form of online chatting or instant messaging. Sometimes it’s full voice and video communication over the internet, which may be incorporated into gameplay. Online games on next-generation consoles offer buddy lists and chatting by default.

As well as the more obvious challenges posed by this technology, the use of games as communication devices has the potential to greatly increase the emphasis on reliability and usability of the game.

Users have a very strong expectation that their phones will not crash or pause, and this translates into an equally strong expectation that the game will not crash or interfere with communication. In a single-player game, a crash is very annoying; in a multi-player experience it is far worse, because you are ripped out of real-time communication with real people. This raises the programmer’s focus on reliability and a fluid user interface.


The complex interplay of technology, market forces and innovation in game design makes it impossible to project trends more than a few years in the future. Certain technological developments (more CPU cores, more memory) are inevitable, but that’s only part of what is driving trends in game development.

Ten years ago, the PlayStation had only been out a short while and the industry was in the midst of a shift from 2D to 3D games. Much of what occurred during this shift was a gradual evolution from one game to the next. With the benefit of hindsight this seems like an inevitable progression, but at the time the future of game development was as much in flux as it is now.

The evolution of game development is just that, an evolution, driven by the invisible hand of the market and shaped by periodic seismic shifts in technology and game design. While it is impossible to predict exactly where this will lead, or how quickly, the wise game programmer would do well to occasionally pay attention to where it seems to be heading.