Cowboy Programming: Game Development and General Hacking by the Old West

April 23, 2008

Running in Circles

Filed under: Game Development — Mick West @ 7:03 am

(This article was originally published in slightly different form in Game Developer magazine, May 2007)

The player guides his in-game character across a footbridge. A monster appears at the other end, so the player decides to turn around and go back. Instead he walks off the side of the bridge and falls to his death. Who is at fault here? Was it the player for not mastering the controls? Was it the level designer for making the bridge too narrow? Was it the animator for making the walk stride too long? Was it the programmer who implemented the controls? Or was it the game designer for specifying the controls, bridge, and animations this way?

Desired result is not obtained....

Figure 1 – The player is facing forwards along the bridge, he then indicates backwards. But because movement is constrained by the facing direction, the player runs in a circle and falls to his doom.

Well, first of all it’s not the player. They did not buy this game to enjoy mastering the tricky art of turning around on a footbridge. Secondly, assigning individual blame is not helpful. Everyone listed had a role leading up to this slight disappointment for the player, and the end result is due to the interaction of all their efforts. This article examines, mostly from a programmer perspective, how to deal with these situations, and discusses the responsibilities of the involved parties.

THIRD PERSON MOTION

The type of game we are discussing is a third person 3D action game with sections involving running or walking. This includes such games as Zelda: Twilight Princess, Grand Theft Auto, Tony Hawk, Scarface, Harry Potter, Genji, Tomb Raider, and many other big name games. In these games, you see the character on screen, generally facing away from you, with the camera looking forward at a slight downward angle. Movement is controlled by pushing the controller stick in the direction you want to go. On PC games, similar control is achieved by pushing the WASD keys in the direction you want to go, sometimes with additional control of the camera by the mouse.

Although we are talking about a 3D game, the problem is essentially two dimensional, as we are concerned with the player’s movement across the ground. Since the camera moves up and down with the player, this essentially means we are dealing with motion in the XZ plane, which we can re-term the XY plane here to match the XY coordinates of our control stick. Here directions are represented by unit vectors, which is probably the most common method, but quaternions or Euler angles might also be used.

Now the basic code here is very simple. There are three directions: the direction of the camera’s forward view vector, the direction the player is facing, and the direction the player’s controller stick is indicating (or the simulated equivalent from direction keys such as WASD). The programmer has to take these three directions and convert them into information that is used to move the player across the ground relative to the camera.

The simplest way to do this is to take the stick direction and rotate it by the camera direction, and then use this new direction as a velocity vector for the player, ignoring for now the facing direction of the player. In two dimensions this trivially involves scaling the view direction (view.x, view.y) by the y component of the stick, scaling the perpendicular vector (view.y, -view.x) by the x component, and summing the two. See listing 1.

Listing 1 – calculate the desired direction from the view direction and the stick direction.

desired.x = stick.y * view.x + stick.x * view.y
desired.y = stick.y * view.y - stick.x * view.x
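For concreteness, here is that same calculation as a small self-contained function. This is just a sketch in the spirit of listing 1 (the struct and function names are mine, not from any particular engine), and it assumes view is a unit vector in the ground plane.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Rotate the stick input into camera space: stick.y pushes along the
// camera's forward vector, stick.x along the vector perpendicular to it.
// Assumes view is a unit vector in the XY ground plane.
Vec2 desiredDir(Vec2 stick, Vec2 view) {
    Vec2 d;
    d.x = stick.y * view.x + stick.x * view.y;
    d.y = stick.y * view.y - stick.x * view.x;
    return d;
}
```

Pushing the stick straight up (stick = (0, 1)) yields exactly the camera's forward vector, and pushing it right yields the perpendicular, so the result is always camera-relative.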

COMPLICATIONS

So this is very simple so far. Where is the problem? Well, the problems occur when the programmer takes this “desired” direction, and applies it to the motion of the player’s character.

What we could do is simply take the desired direction, and set the player’s velocity to this direction. This actually gives the player very accurate control over their character, with the ability to instantly change direction. However, it is not very realistic looking, as the character will instantly snap to any new direction the player indicates.

To fix this lack of realism, the programmer or designer may reason that people always walk in the direction they are facing, so logically if they are moving, they should only move along their facing vector. If the facing vector is not the same as the desired vector, then the facing vector should be rotated at a natural looking rate towards the desired vector.
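As a sketch, the "rotate the facing vector at a fixed rate, then move along it" scheme might be implemented like this. The names and the turn-rate clamping are my own; this is the general shape of the scheme, not code from any specific game.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Rotate the unit-length facing vector towards the desired direction,
// turning by at most maxRadians this frame.
Vec2 rotateTowards(Vec2 facing, Vec2 desired, float maxRadians) {
    // Signed angle from facing to desired (2D cross and dot products).
    float angle = atan2f(facing.x * desired.y - facing.y * desired.x,
                         facing.x * desired.x + facing.y * desired.y);
    if (angle >  maxRadians) angle =  maxRadians;
    if (angle < -maxRadians) angle = -maxRadians;
    float c = cosf(angle), s = sinf(angle);
    return { facing.x * c - facing.y * s,
             facing.x * s + facing.y * c };
}

// Each frame:
//   facing = rotateTowards(facing, desired, turnRate * dt);
//   position += facing * speed * dt;  // movement constrained to facing
```

Because the position only ever moves along facing, a 180 degree stick reversal traces out exactly the circular arc of figure 1, and when desired is exactly opposite facing, the sign that atan2f happens to return is what arbitrarily picks the left or right circle.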

This all sounds very reasonable, and in fact a large number of games implement exactly this scheme. This leads us to the small problem I mentioned at the start of the article. The problem is that the player’s character walks in a circle. There are three reasons why this is a problem.

Firstly there is a disconnect between the player’s intentions and what actually happens. The player has indicated a desired direction. But the player is not always indicating a direction in which they want their character to face; frequently they want to walk towards a specific point in the world. Instead, their character will turn, walking forwards along the facing direction until the facing direction is parallel with the desired direction. The character walks in a circle, and will end up perhaps six virtual feet to one side of the desired path. The player now has to correct their direction again to point towards where they were indicating in the first place. Even worse, this inadvertent movement to the side of the path may put them in danger, perhaps dropping them off the side of a bridge (see figure 1, above).

Secondly, it introduces ambiguity. If the player’s character is facing in one direction and the player moves the stick to indicate 180 degrees in the other direction, then due to various imprecisions, the character might do their six foot circle to the left or the right, with no feedback as to exactly why this direction was chosen. You can demonstrate this in many games by simply attempting to walk back and forth between two specific points, or along a line. Notice the lack of control, and the random nature of the turns.

Thirdly, despite the underlying motivation for implementing it this way, it is actually NOT realistic. Try an experiment: get up and stand ten feet away from your chair, with your back to the chair. Then walk back to the chair. Try it now. Did you walk in a six foot circle and then correct your heading? No, you simply turned around, either by moving one foot backwards, turning it outwards 90 degrees and stepping the other foot over it, or by moving one foot over the other about 45 degrees to the side and then moving the other one to face backwards.

Try some more experiments to contrast what happens in real life with what happens in various video games. Walk back and forth along a line. Walk between two points. Run to a point and back. Humans do not walk in circles at constant velocity unless they are following a path. When a human makes a decision to change direction, they do this abruptly, leaning and pushing with a leg to very rapidly pivot their direction of travel. When the player quickly pushes the stick in a new direction, they want their character to move in that direction, just like in real life. This is one of the times in a video game where more physical realism is a good thing.

CIRCULAR THINKING

Why does this problem arise? The answer could be that the game developers put the game together rather quickly and did not have time to improve the controls. That excuse might be valid for a little casual game developed over a couple of months, but what of much more expensive mainstream games that cost millions of dollars to make and are in development well over a year? How did they end up with this inaccurate, ambiguous and unrealistic player control?

The answer is complex, and will vary from game to game. But the bottom line is that player control is often a shared responsibility and the problem arises through a lack of clear communication regarding what is actually wanted from each person. The game design document probably did not specifically address this issue. There was probably a diagram showing that the player would control their direction of motion in a camera relative manner with the left analog stick, or the WASD keys. But there is no detail beyond that.

Then the programmer and the animator come into the picture. The animator will supply idle, walk and run animations. The programmer implements code that matches the animation to the movement of the character. The animator is insistent that there is to be no sliding, that the character’s feet stay firmly planted on the ground during the walk cycle. Since turning on the spot in the walk animation results in sliding, the animator and programmer decide the way to solve this is to have the character always move along the forward facing vector, thus keeping the footsteps synced with the movement.

Perhaps instead the programmers come up with a very powerful scheme whereby the animators can fine tune the movement and rotation for each animation, and the designers can implement movement by playing the animations. The designers get various turn animations, including turning on the spot. But despite the power of this system it requires additional programming to actually implement swift turns for every situation, and the player is often left with ambiguous controls.

LETTING IT SLIDE

Why don’t the designers and producers, and the testers, and even the players, notice these problems? Why are they not addressed?

Different people involved have different goals. The programmer wants to implement the specification given to him and make it bug free and efficient. The animator wants her animations to look good. The game designer is concerned with a large number of issues. The testers have their hands full looking for bugs. The end users, who have paid for the game, have enough invested in the game to keep playing long enough to get used to the clunky controls. They learn to correct their heading after turning around. They learn to turn around very slowly if within six feet of danger. They get used to their characters running around in little circles and figure eights.

But inaccurate and ambiguous controls suck the life out of a game. The disconnect between intentions and actions prevents the player from becoming fully engaged in the flow of the game. The constant annoyances of unintended actions add up over time and contribute greatly to the tipping point where the player, consciously or unconsciously, decides a game is not worth playing any more, and hence neither is the sequel, and nor will they recommend it to their friends.

PROGRAMMER RESPONSIBILITIES

What is the responsibility of the programmer here? I mentioned before that the causes of this problem are often shared. But the programmer is often in a unique position to do something about it. The control programmer has the deepest understanding of what is actually going on at the frame to frame level in the code. The programmer understands the interaction between the view direction, the stick direction, the facing direction and the desired direction. The programmer should know exactly how the animation system ties into the movement of the character, in relation to these four directions.

The programmer’s responsibilities here are to communicate this understanding to the other members of the team in a way that allows them to deal with the issues in a timely manner. The producers and the designers have the responsibility of making sure that the programmer does this.

Programmers sometimes work as if their only task is to implement the feature requests of the designers and technical directors. Programmers get a list of features, and they implement those features one at a time, tick them off and go home happy. But games are complex systems. When you implement a feature you are adding to the complexity of the system, and other features will inadvertently arise. When you implement the “walk along the forward direction, turning it towards the desired direction” feature, you are also implementing the “walk in circles” feature and the “make it difficult to walk to a point behind you” feature.

If a programmer is a step removed from the implementation of player control, then the situation is even worse. The programmer has created some system of defining player control with data or scripts, and handed it off to a designer. The designer may implement the “walk in circles” method simply because that is the only option. Here the programmer’s responsibility is to continue to be available to explain and update the system after it has been implemented to the initial specification. The producer needs to allot time for improvements to the system for many months after it is initially implemented.

CONCLUSIONS

I’ve kept this discussion as simple and as non-technical as possible. In reality the problem is quite simple, and so are the solutions. But time after time games are released with this problem.

The walking in circle problem is just one of many similar problems that crop up again and again. Players get stuck against lampposts, cameras snap oddly, players jump at the right time but still fall off the cliff, pressing a button a millisecond too early means the attack does not happen or you can’t turn for a fraction of a second after an attack. I could list player control problems like this forever.

Players are frustrated by these problems. But they keep playing the game, and work around them. To a similar extent so do the game designers. The reason is that they don’t really understand what is going on within the code, and so are unclear as to the causes of the problem, and perhaps don’t really appreciate that there is a problem, since they cannot see how it might be addressed. Or perhaps they see the problem, but their “solution” is to make the bridge wider.

This is where the role of the programmer is of utmost importance. The programmer needs to communicate the way things work in a clear and concise manner that allows the designers to both appreciate the causes of the problem, and to find a solution. The programmer is also in a unique position to actually detect problems with the control implementation.

Players are very adaptable. They will work around a problem so intuitively that they will not perceive that there actually IS a problem. Instead they perceive a vague quality problem. The controls “don’t feel right” or they are “sloppy”. They can’t say what the problem is. But the programmer, with his unique insights into what is going on under the hood, at the state level, at the vector level, and at the millisecond level, should be able to see these problems, and it is his responsibility to raise them as issues, and to suggest and implement solutions.

April 1, 2008

Practical Fluid Mechanics

Filed under: Game Development,Inner Product — Mick West @ 1:57 pm

(This article originally appeared in two parts in Game Developer Magazine, March and April, 2007) Fluid effects such as rising smoke and turbulent water flow are everywhere in nature, but are seldom implemented convincingly in computer games. The simulation of fluids (which covers both liquids and gases) is computationally very expensive. It is also mentally very expensive, with even introductory papers on the subject relying on the reader having math skills at least at the undergraduate calculus level.

In this article I will attempt to address both these problems from the perspective of a game programmer not necessarily conversant with vector calculus. I’ll explain how certain fluid effects work without using advanced equations and without too much new terminology. I shall also describe one way of implementing the simulation of fluids in an efficient manner, without the expensive iterative diffusion and projection steps found in other implementations. A working demonstration with source code accompanies this article; example output from it can be seen in figure 1.

Figure 1 - Sample Output

GRIDS OR PARTICLES?

There are several ways of simulating the motion of fluids. These generally divide into two common types: grid methods and particle methods. In grid methods, the fluid is represented by dividing up the space a fluid might occupy into individual cells, and storing how much of the fluid is in each cell. In particle methods the fluid is modeled as a large number of particles that each move around and react to collisions with the environment and interactions with nearby particles. Here I’m going to concentrate on simulating fluids with grids.

It is simplest to discuss the grid methods with respect to a regular two-dimensional grid, although the techniques apply equally well to three dimensions. At the simplest level, to simulate fluid in the space covered by a grid you need two grids, one to store the density of liquid or gas at each point, and another to store the velocity of the fluid. Figure 2 shows a representation of this, with each point having a velocity vector, and also containing a density value (not shown). The actual implementation of these grids in C/C++ is most efficiently done as one dimensional arrays. The amount of fluid in each cell is represented as a float. The velocity grid (also referred to as a velocity field, or vector field) could be represented as an array of 2D vectors, but for coding simplicity it is best represented as two separate arrays of floats, one for x and one for y.
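A minimal layout of these grids might look as follows. This is a sketch; the struct and member names are mine, not from the accompanying demo.

```cpp
#include <cassert>
#include <vector>

// Density plus a velocity field split into separate x and y arrays,
// all stored as flat one dimensional arrays of floats.
struct FluidGrid {
    int w, h;
    std::vector<float> density;  // amount of fluid in each cell
    std::vector<float> xv, yv;   // velocity field components

    FluidGrid(int width, int height)
        : w(width), h(height),
          density(width * height, 0.0f),
          xv(width * height, 0.0f),
          yv(width * height, 0.0f) {}

    // Map 2D cell coordinates onto the flat arrays.
    int Cell(int x, int y) const { return x + y * w; }
};
```

Any extra attribute grid (temperature, color, humidity) is then just another array of the same size, indexed with the same Cell function.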

In addition to these two grids we can have any number of other matching grids that store various attributes. Again each will be stored as a matching array of floats, and can store things such as the temperature of the fluid at each point, or the color of the fluid (whereby you can mix multiple fluids together). You can also store more esoteric quantities such as humidity, if you were simulating steam or cloud formation.

ADVECTION

The fundamental operation in grid based fluid dynamics is advection. Advection is basically moving things around on the grid, but more specifically it’s moving the quantities stored in one array by the movement vectors stored in the velocity arrays. It’s quite simple to understand what is going on here if you think of each point on the grid as being an individual particle, with some attribute (the density) and a velocity.

You are probably familiar with the process of moving a particle by adding the velocity vector to the position vector. On the grid, however, the possible positions are fixed, so all we can do is move (advect) the quantity (the density) from one grid point to another. In addition to advecting the density value, we also need to advect all the other quantities associated with the point. This would obviously include additional attributes such as temperature and color, but also includes the velocity of the point itself. The process of moving a velocity field over itself is referred to as self-advection.

The grid does not represent a series of discrete quantities, density or otherwise; it actually represents (inaccurately) a smooth surface, with the grid points just being sampled points on that surface. Think of the points as being X,Y vertices of a 3D surface, with the density field being the Z height. Thus you can pick any X and Y position on the mesh, and find the Z value at that point by interpolating between the closest four points. Similarly while advecting a value across the grid the destination point will not fall directly on a grid point, and you will have to interpolate your value into the four grid points closest to the target position.

In figure 3, the point at P has a velocity V, which, after a time step of dt, will put it in position P’ = P + dt*V. This point falls between the points A, B, C and D, and so a bit of P has to go into each of them. Generally dt*V will be significantly smaller than the width of a cell, so one of the points A, B, C or D will be P itself. Advecting the entire grid like this suffers from various inaccuracies, particularly that quantities dissipate when moving in a non-axis-aligned direction. This inaccuracy can actually be turned to our advantage.
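The "split P among A, B, C and D" step is just a bilinear distribution of the value by the fractional position of P’. Here is one way it might look; this is a sketch with my own names, and the real demo code will differ.

```cpp
#include <cassert>
#include <cmath>

// Distribute one cell's quantity into the four grid cells surrounding
// its advected position (px, py), weighted bilinearly.
void splat(float* dst, int w, int h, float px, float py, float value) {
    int x0 = (int)floorf(px);
    int y0 = (int)floorf(py);
    float fx = px - x0;  // fractional position within the cell
    float fy = py - y0;
    // Ignore cells that fall outside the grid (no boundary handling here).
    auto add = [&](int x, int y, float v) {
        if (x >= 0 && x < w && y >= 0 && y < h) dst[x + y * w] += v;
    };
    add(x0,     y0,     value * (1 - fx) * (1 - fy));  // A
    add(x0 + 1, y0,     value * fx       * (1 - fy));  // B
    add(x0,     y0 + 1, value * (1 - fx) * fy);        // C
    add(x0 + 1, y0 + 1, value * fx       * fy);        // D
}
```

The four weights always sum to one, so away from the grid edges no quantity is lost by a single splat; the dissipation mentioned above comes from repeatedly re-spreading values over neighboring cells.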

STAM’S ADVECTION

Programmers looking into grid based fluid dynamics for the first time will most often come across the work of Jos Stam and Ron Fedkiw, particularly Stam’s paper “Real-Time Fluid Dynamics for Games”, presented at the 2003 Game Developers Conference. In this paper Stam presents a very short implementation of a grid based fluid simulator. In particular he describes implementing the advection step using what he terms a “linear backtrace”, which simply means instead of moving the point forward in space, we invert the velocity and find the source point in the opposite direction, essentially back in time. We then take the interpolated density value from that source (which, again, will lie between four actual grid points), and move this value into the point P. See figure 4.
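In code, a linear backtrace for a single cell might look like the sketch below. The helper names and the simple clamp at the grid edges are my own; Stam's actual implementation differs in its details.

```cpp
#include <cassert>
#include <cmath>

// Bilinearly sample a w*h field at a fractional grid position.
float sample(const float* src, int w, int h, float px, float py) {
    if (px < 0) px = 0;
    if (py < 0) py = 0;
    if (px > w - 1.001f) px = w - 1.001f;  // clamp inside the grid
    if (py > h - 1.001f) py = h - 1.001f;
    int x0 = (int)px, y0 = (int)py;
    float fx = px - x0, fy = py - y0;
    const float* p = src + x0 + y0 * w;
    return (p[0] * (1 - fx) + p[1]     * fx) * (1 - fy)
         + (p[w] * (1 - fx) + p[w + 1] * fx) * fy;
}

// Linear backtrace: pull the value at P - dt*V into cell P.
void backtraceCell(const float* src, float* dst,
                   const float* xv, const float* yv,
                   int w, int h, int x, int y, float dt) {
    int i = x + y * w;
    dst[i] = sample(src, w, h, x - dt * xv[i], y - dt * yv[i]);
}
```

Note the direction of data flow: each destination cell is written exactly once, which is what makes this step so simple, and also why it silently assumes the velocity field is incompressible.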

Stam’s approach produces visually pleasing results, yet suffers from a number of problems. Firstly the specific collection of techniques discussed may be covered by U.S. patent #6,266,071, although as Stam notes, the approach of backtracing dates back to 1952. Check with your lawyer if this is a concern to you. On a more practical note the advection alone as described by Stam simply does not work accurately unless the velocity field is smooth in a way termed mass conserving, or incompressible.

Consider the case of a vector field where all the velocities are zero except for one. In this situation the velocity cannot move (advect) forward through the field, since there is nothing ahead of it to “pull” it forward; instead the velocity simply bleeds backwards. The resultant velocity field will terminate at the original point, and any quantities moving through this field will end up there. This problem is solved by adding a step to the algorithm termed projection, which basically smoothes out the velocity field by making it incompressible, thus allowing the backtracing advection to work perfectly, and making the paths formed by the velocity be “swirly”, as would be the case in real water.

The problem with this approach is that projection is quite expensive, requiring 20 iterations over the velocity field in order to “relax” it to a usable state. Another performance problem with Stam’s approach is that there is a diffusion step, which also involves 20 iterations over a field. This is needed to allow the gas to spread out from areas of high density to areas of low density. If the diffusion step were missing, solid blocks of the fluid would remain solid as they moved over the velocity field. The diffusion is an important cosmetic step in the process.

ACCOUNTING ADVECTION

If a velocity field is not mass conserving, then some points will have multiple velocity vectors from other points pointing towards them. This means that if we simply move our scalar quantities (like density) along these vectors, then there will be multiple quantities going to (or coming from) the same point, and the result will be a net loss or gain of the scalar quantity. So the total amount of something such as the density would either fade to zero or gradually (or perhaps explosively) increase.

The usual solution to this problem is to make sure the vector field is incompressible and mass conserving. But as mentioned before, this is computationally expensive. One partial solution is to make the advection step mass conserving, regardless of whether the velocity field actually is mass conserving. The basis of this solution is to always account for any movement of a quantity by subtracting in one place what is added in another. Advection uses a source and destination buffer to keep it independent of update order. In Stam’s implementation, the destination buffer is simply filled one cell at a time by combining a value from four cells in the source buffer, and placing this value into the destination buffer.

To properly account for compressible motion, we need to change this copying to accumulating, and initially make the destination buffer a copy of the source buffer, and as we move quantities from one place to another we can subtract them in the source and add them in the destination. With the forward advection in figure 3, we are moving a quantity from point P to points A,B,C and D. To account for this we simply subtract the original source value in P from the destination value in P, and then add it (interpolated appropriately), to A,B,C,D. The net change on the destination buffer is zero. With the reverse advection in figure 4, as used by Stam, the solution would initially seem to be symmetrically the same: just subtract the interpolated source values in E,F,G and H from the destination buffer, and add them to P.

While this works fine for signed quantities such as velocity, the problem here is that quantities such as density are positive values. They cannot go below zero as you cannot have a negative quantity of a liquid. Suppose that point E was one source point for two destinations P1 and P2, both of which wanted 0.8 of E. Now, if we follow our initial plan and subtract 0.8*E from E and add 0.8*E to both P1 and P2, the net effect is zero, but now the value at E is negative. If we clamp E to zero then there is a net gain of 0.6*E. If we subtract 0.8*E from the source value of E after updating P1, then when we update P2 it will only get 0.8*0.2*E, when clearly both P1 and P2 should both get equal amounts, and intuitively here it seems they should both get 0.5*E, and the resulting value in E should be zero, leading to a net zero change.

To achieve this result I first create a list that, for each point, records the four points that are sources for that point, and the fraction of each point they want. Simultaneously I accumulate the fractions asked of each source point. In an ideal world this would add up to one, as the entire value is being moved somewhere (including partially back where it started). But with our compressible field, the amount of the value in each point that is being moved can be greater than or less than one. If the total fraction required is greater than one, then we can simply divide each requested fraction by this total, which means the fractions will sum to one. If it is less than one, then the requesting points can have the full amount requested. We should not scale up in this case, as that would lead to significant errors.
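Here is a compact sketch of that accounting scheme. The Request structure and names are mine; the accompanying demo organizes this per grid cell rather than as an explicit list.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One movement request: dst wants the fraction frac of the value at src.
struct Request { int src, dst; float frac; };

void applyRequests(std::vector<float>& field,
                   const std::vector<Request>& reqs) {
    // Total fraction asked of each source point.
    std::vector<float> total(field.size(), 0.0f);
    for (const Request& r : reqs) total[r.src] += r.frac;

    std::vector<float> dst = field;  // destination starts as a copy
    for (const Request& r : reqs) {
        float frac = r.frac;
        if (total[r.src] > 1.0f) frac /= total[r.src];  // scale down only
        float moved = frac * field[r.src];
        dst[r.src] -= moved;  // subtract here...
        dst[r.dst] += moved;  // ...add there: net change is zero
    }
    field = dst;
}
```

Running the E, P1, P2 example through this: both requests of 0.8 are divided by the 1.6 total, so each destination receives 0.5 of E, and the source lands exactly on zero with no net change overall.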

With the mass conservation of advection fully accounted for in both directions, it turns out that neither forward nor backward linear advection alone will produce smooth results. After some experimentation I determined that applying forward advection followed by backward advection worked very well, giving a very smooth and artifact free flow of fluid over a compressible velocity field.

NOW WHAT?

So, we can now perform both forward and reverse advection in a mass-conserving manner, meaning we can move fluid around its own velocity field. But even though our velocity field does not need to be mass-conserving, we actually still want it to be, since the velocity fields of real world fluids generally are incompressible. Stam solves this problem by expensively forcing the field to be fully mass conserving after every change. This is necessary, since the reverse advection requires it. The key difference now is that since our advection step does not require the field to be mass-conserving, we are really only doing it for cosmetic purposes. To that end, any method that rapidly approaches that state over several time-steps will suit our purpose. That method, and the method of diffusion, can be found in the accompanying code, and are discussed below.

PRACTICAL FLUID DYNAMICS: PART 2

In last month’s article (above) I gave an overview of the nuts and bolts behind simple two dimensional fluid dynamics using a grid system. This month I’ll expand upon this, explaining how we can achieve a reasonable level of realism without too many expensive iterations. I’ll also continue with my goal of explaining how everything works by using no math beyond basic algebra.

Figure: a 50 x 50 velocity field.

To recap so far: we have a velocity field, which is an array of cells, each of which stores the velocity at a particular point. Remember this is a continuous field, and we can get the velocity at any point on the field surface (or in the field volume for 3D) by interpolating between the nearest points on the field. We also have a matching field of density. The density field represents how much of the fluid or gas is in a particular grid cell. Again this is a continuous field, and you can get a density value for any point in the simulated space by interpolating. I then described the process of advection, which is the moving of the values in one field (say the density field) over the velocity field.

I described both forward advection and reverse advection, where the quantities in the field are respectively pushed out of a cell, or pulled into a cell by the velocity at that cell. I noted that the advection process worked well if you perform forward advection and then follow it with reverse advection.

INCOMPRESSIBLE FIELDS

I noted that reverse advection in particular would only work if the velocity field was in a state termed incompressible. But what does this mean? Well, you might have heard that “water is incompressible”, meaning you can’t squeeze water into a smaller volume than it already occupies. Compare this with gases such as air, which clearly can be compressed. Picture, for example, a diver’s air tank. The tank contains a lot more air than the volume occupied by the tank. But if you were to take that tank, fill it with water, and then somehow push in another pint of water, the tank would explode.

Water, in fact, is actually very slightly compressible, since it’s physically impossible to have a truly incompressible form of matter. The compressibility of a material is measured by a metric called the “bulk modulus”. For air this is about 142,000 pascals, whereas for water it’s about 2,200,000,000 pascals, or approximately 15,000 times as much. By comparison, the least compressible substance known to humankind, aggregated diamond nanorods, is only around 500 times less compressible than water.

So for most practical purposes, you can imagine water as being incompressible. With water considered incompressible, a solid volume of water cannot have more water in one cell than in another. So if we start out with an equal amount of water in each cell, then after moving the water along the velocity field (advecting), the amount of water in each cell cannot have increased or decreased. A velocity field that preserves the amount in each cell like this is termed incompressible, or mass conserving.

PRESSURE

You can think of the pressure at a particular node as being the difference in density between a cell and its neighbors. Now with water being incompressible, the pressure is going to be the same throughout the density field. If we think of a node as having a series of inputs and outputs during the advection process, then in an incompressible field, the sum of the inputs is equal to the sum of the outputs (Figure 5a). When we move the water along its incompressible velocity field, the density at each node will remain constant, and hence the pressure will remain constant.

On the other hand, if the velocity field happens to be structured in such a way that for some cells more is going into them than is coming out, then the velocity field is compressible (Figure 5b). When the density of the fluid is advected across a compressible velocity field, the density in individual cells will increase or decrease. If we simply keep advecting the density, then the density will eventually all be compressed into the cells of the velocity field that have a net gain of input over output.

If we were not performing accounting in our advection step (as explained last month), then there would be an overall net loss in density (the field would not be mass conserving). Stepping back from our abstraction for a second, what prevents this from happening in real life? Well, obviously if more of a fluid flows into a cell than is flowing out, then the density of that cell increases relative to its neighbors, and hence the pressure in that cell increases. High pressure in a cell creates an acceleration force on the neighboring cells, increasing their velocity away from that cell, hence increasing the outflow rate from the cell, and evening out the imbalance. As with the atmosphere, fluid flows from an area of high pressure to an area of low pressure.

APPLYING PRESSURE

Listing 1 shows the code for applying pressure. Here mp_p0 is the array that stores the density (which is equivalent to the pressure, so I actually refer to it as pressure in the code). The arrays mp_xv1 and mp_yv1 store the x and y components of the velocity field. The function Cell(x,y) returns a cell index for a given set of x and y coordinates. The loop simply iterates over all horizontal and vertical pairs of cells, finds the difference in pressure, scales it by a constant (also scaled by time) and adds it to both cells.

The logic here is slightly unintuitive, since physics programmers are used to the Newtonian principle that every action has an equal and opposite reaction, yet here when we add a force, there is no opposing force, and we don’t subtract a force from anywhere else. The reason is clear if you consider what is actually going on. We are not dealing with Newtonian mechanics. The force actually comes from the kinetic energy of the molecules of the fluid, which are randomly traveling in all directions (assuming the fluid is above absolute zero), and the change to the velocity field actually happens evenly across the gradient between the two points. So in effect we are applying the resultant force from a pressure gradient to the area it covers, which here is two cells, and we divide it between them.

Here’s an example, looking just in the x direction: we have a flat pressure field with one cell denser than the rest. The cell densities are 4,4,5,4,4. The gradients between the four pairs of cells are 0,-1,+1,0. Adding each pair’s gradient to both of its cells (ignoring scaling), we get velocities of 0,-1,0,+1,0. See Figure 6.

Here the cells on either side of the high pressure cell end up with a velocity pointing away from that cell. Consider now what will happen in the advection step: the reverse advection combined with forward advection will move the high pressure cell outwards, reducing the pressure, and reducing the force. The fluid moves from an area of high pressure to an area of low pressure.
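That worked example can be checked in code. This is just the x-direction half of Listing 1, with the scaling constant taken as 1 (the function name is mine):

```cpp
// For each adjacent pair of cells, add the pressure difference to the
// velocity of both cells, exactly as in Listing 1 but in one dimension.
void ApplyPressure1D(const float* p, float* v, int cells) {
    for (int x = 0; x < cells - 1; x++) {
        float force = p[x] - p[x + 1];  // gradients for 4,4,5,4,4: 0,-1,+1,0
        v[x]     += force;
        v[x + 1] += force;
    }
}
```

Running this on densities 4,4,5,4,4 with zero initial velocity yields 0,-1,0,+1,0: the cells flanking the dense cell are pushed away from it.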

Effectively this makes the velocity field tend towards being incompressible and mass conserving. If there is a region that is increasing in density, then the resultant increase in pressure will turn the velocity field away from that area, and hence decrease the density in that area. Eventually the velocity field will either become mass conserving (mass just circulating without density change), or it will stop (become zero).

Listing 1 – The pressure differential between two cells creates an identical force on both cells

for (int x = 0; x < m_w-1; x++) {
    for (int y = 0; y < m_h-1; y++) {
        int cell = Cell(x,y);
        float force_x = mp_p0[cell] - mp_p0[cell+1];
        float force_y = mp_p0[cell] - mp_p0[cell+m_w];
        mp_xv1[cell]     += a * force_x;
        mp_xv1[cell+1]   += a * force_x;
        mp_yv1[cell]     += a * force_y;
        mp_yv1[cell+m_w] += a * force_y;
    }
}

INK AND SMOKE

What we are modeling here is motion within a fluid (such as air swirling around inside a room), and not the overall motion of a volume of water (such as water sloshing around in a cup). This method, as it stands, does not simulate the surface of the fluid. As such, visualizing the fluid itself is not very interesting, since a room full of air looks pretty much the same regardless of how the air is moving. Where it becomes interesting is when we introduce some substance that is suspended in the fluid, and carried around by it.

In water this could be silt, sand, ink, or bubbles. In air, it could be dust, steam, or smoke. You can even use the velocity field techniques outlined here to move larger objects, such as leaves or paper, in a highly realistic manner. Note that what we are talking about is a suspension of one substance in another. We are generally not so interested in simulating two fluids that do not mix (like oil and water).

Games have things that burn and explode, so smoke is a very common graphical effect. Smoke is not a gas, but a suspension of tiny particles in the air. These tiny particles are carried around by the air, and they comprise a very small percentage of the volume occupied by the air. So we do not need to be concerned about smoke displacing air.

In order to simulate smoke, we simply add another advected density field, where the value at each cell represents the density of smoke in that region. In the code this is referred to as “ink”. This is similar to the density of air, except that the density of smoke or ink is purely a visual thing, and does not affect the velocity field.

HEATING THINGS UP

One final ingredient that often goes along with a fluid system like this is the heat of the fluid/gas at each location. Sources of smoke are usually hot, which heats up the air the smoke initially occupies, and this causes the smoke to rise. It rises because higher temperatures mean more energy, meaning the fluid molecules are moving faster, which means higher pressure and hence lower density (remember density is only proportional to pressure at constant temperature), and the less dense hot air rises above the cooler air around it.

Now, that’s a complex sequence of events, but it’s initially simpler to just model the result, “hot air rises”, and have the relative temperature of a cell create a proportionate upwards force on the velocity field at that cell. We can do this trivially by adding a scaled copy of the heat field to the Y velocity field. Similarly, rather than attempt to model the effects of heat in the initial phases of a simulation, I found it easier to simply model the expected results.
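Here’s a minimal sketch of that “hot air rises” shortcut; the function name and tuning constant are hypothetical, and the sign depends on which direction is up in your grid:

```cpp
// Each update, add a scaled copy of the heat field to the Y velocity field,
// so hotter cells get a proportionate upward push.
void ApplyHeat(float* yv, const float* heat, int cells, float dt) {
    const float k_heat_force = 0.1f;  // tuning constant, found by experiment
    for (int i = 0; i < cells; i++)
        yv[i] += heat[i] * k_heat_force * dt;  // flip the sign if +y is down
}
```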

So, although a flame creates heat which makes the smoke rise, more pleasing results were found by “manually” giving the volume of air around the flame an initial upwards velocity, and then letting the heat take it from there. With more complex systems such as an explosion, the fiddly physics happens in the first tenth of a second, so you can just skip over that and set up something that looks visually pleasing with our simplified physics.

FILTERING THINGS OUT

The simplistic application of forces we perform for acceleration due to pressure (Figure 2) has a tendency to introduce artifacts into the system, typically presenting as unnatural looking ripples. The way these are dealt with is to smooth out the velocity and pressure fields by applying a simple diffusion filter. If you use the Stam style reverse advection with projection, then you have to use a computationally intensive filter, iterating several times. But with the inherent diffusion of forward advection, combined with the accuracy of the combined forward and backward accounted advection, we can get away with a single iteration.
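A single-pass diffusion filter can be as simple as blending each cell toward the average of its four neighbors. This sketch is my own (interior cells only, for brevity; the article’s accompanying code may differ):

```cpp
// One diffusion pass over a w*h grid: blend each interior cell toward the
// average of its four neighbors by the given rate (0 = no smoothing,
// 1 = replace the cell with the neighbor average).
void Diffuse(float* dst, const float* src, int w, int h, float rate) {
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int i = y * w + x;
            float avg = 0.25f * (src[i - 1] + src[i + 1] + src[i - w] + src[i + w]);
            dst[i] = src[i] + rate * (avg - src[i]);
        }
    }
}
```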

It’s often difficult to see exactly what effect a change can have on a fluid system such as this. The fluid is very complex looking, and small changes to parameters often have an effect that is not immediately obvious. The ideal way to solve this problem is to set up your system so you can run two copies of the same system in parallel, with one having the modified parameters. The difference can then become obvious. Figure 7 (below) shows such an A/B comparison. The image on the left has no diffusion filtering, and the image on the right has a single pass of diffusion filtering applied every update.

FLUID IDEAS

I’ve glossed over a few other important aspects here, but details of these aspects can be found in the accompanying code. You need to pay particular attention to how you handle the cells at the edge of the system, as the differing number of neighbors has a significant effect. At the edges of a system you have the option of either reflecting, wrapping or zeroing values, depending on what you want. By wrapping in one direction you essentially get a tiling animated texture in that direction, which could be used as a diffusion or displacement map for the surface of a moving stream of fluid.
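The edge options largely come down to how you map an out-of-range index back into the grid. A sketch (the helper names are mine):

```cpp
// Wrapping makes the field tile in that direction; clamping pins any
// out-of-range read to the border cell. Zeroing would instead treat
// out-of-range reads as empty cells.
int WrapIndex(int i, int n)  { return ((i % n) + n) % n; }
int ClampIndex(int i, int n) { return i < 0 ? 0 : (i >= n ? n - 1 : i); }
```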

There is also the issue of friction. Motion in a fluid is generally quite viscous. This can be implemented as a simple friction force that applies to the velocity field. If there is no friction in the fluid it can slosh around forever, which is generally not what you want. Different viscosity settings give very different visual results. There are a very large number of variables that can be tweaked to give radically different visual effects, even in my rather simple implementation. It’s worthwhile spending some time just playing with these values to see what happens.
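A simple friction force can just be an exponential damping of the velocity field each update. A sketch (the formulation and name are mine, not the article’s):

```cpp
// Damp every velocity component so motion decays over time instead of
// sloshing forever. The 1/(1+f*dt) form stays stable for any dt >= 0.
void ApplyFriction(float* v, int cells, float friction, float dt) {
    float damp = 1.0f / (1.0f + friction * dt);
    for (int i = 0; i < cells; i++) v[i] *= damp;
}
```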

Additional resources

3D Version

This approach has also been implemented in 3D by Quentin Froemke, et al, at Intel, as part of their research into Multi Threaded programming.

http://www.gamasutra.com/view/feature/4022/sponsored_feature_multithreaded_.php

March 23, 2008

Debugging Heisenbugs

Filed under: Game Development,Inner Product — Mick West @ 4:29 pm

(This article originally appeared in Game Developer Magazine, October 2007, in a slightly different format)

A Heisenbug is a type of bug that disappears or alters its behavior when you attempt to debug it. The word “Heisenbug” is a slight misnomer, referencing Heisenberg’s uncertainty principle, which describes how, in quantum physics, it is impossible to know both where something is and how fast it is moving. A related phenomenon is the “observer effect”, which says you cannot observe something without altering it; this “observer effect” is what causes the problems we call Heisenbugs.

Heisenbugs are common in game development, most frequently in lower level code. A programmer may encounter several such bugs in the course of development, and a failure to appropriately handle them can seriously derail development, as it may take many days to track down the elusive bug. This article discusses some of the causes of Heisenbugs, and gives some guidelines for avoiding them and tracking them down.

RANDOM CAUSES

The causes of Heisenbugs are as varied as the causes of regular bugs. But some factors are more likely to result in a Heisenbug. Typically those bugs are highly dependent on what are essentially random factors which are outside the programmer’s control.

The most literal example of this would be a bug that is caused by the generation of random numbers. Perhaps a table overflow bug might only occur when two particular random numbers are generated in sequence. Random number generation is really not random: you are usually just generating deterministic, but random looking, numbers in sequence. But because the quantity of numbers generated can be affected by the game state, which is in turn affected by user input, these pseudo-random numbers quickly become unpredictable. To remove this possibility, try making the random number generator return the same number, and see if the bug still occurs.
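That diagnostic can be as simple as routing all game randomness through one wrapper with a debug switch (the names here are hypothetical):

```cpp
#include <cstdlib>

// Debug switch: when set, the game's random stream becomes a constant.
// If the Heisenbug persists with the switch on, randomness is not a factor.
bool g_fix_random  = false;
int  g_fixed_value = 4;

int GameRand() {
    if (g_fix_random) return g_fixed_value;
    return std::rand();
}
```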

Other essentially random factors could be the addresses of dangling pointers, the order of data processing in multi-threaded algorithms, the contents of an unflushed cache that is underwritten by DMA, the contents of uninitialized memory (see later), the assumed state of a GPU register, user input (especially analog), read and write times for persistent storage, the persistence of values in improperly synchronized memory (volatile variables). The key diagnostic technique here is to try to eliminate all sources of randomness or indeterminism.

UNINITIALIZED MEMORY

Often when memory is allocated, or variables are instantiated, they are not set to any particular value. Generally this is not a problem, as the code that uses that memory should initialize it to some meaningful value. However, badly designed code, or code that is extended without fully understanding the implications of the extension, can introduce code paths which result in memory being used before it is initialized. This will result in a Heisenbug if the uninitialized value is usually the same, but under certain circumstances changes because of shifts in the flow of unrelated logic.

That’s a fundamental problem with Heisenbugs: they often appear to be related to some kind of game function that is in fact basically unrelated (for example: “The game glitches when I open a box”). This can result in a wild goose chase, where you focus your efforts on what seems to be the cause of the bug (code related to opening boxes), while the real problem is in something entirely unrelated.

This can cause problems with assigning bugs to the correct programmers. If a bug is assigned to the game object programmer simply because the glitch happens when boxes are opened, then you may have a programmer fruitlessly spending several days trying to track down a bug that has nothing to do with them. This is especially problematic if the assigned programmer is a junior programmer, unfamiliar with such problems. For this reason it is important that such imprecise bugs be evaluated by a more experienced programmer, and that the junior programmer is able to ask for help if the hunt for the bug leads them out of their domain.

Uninitialized memory Heisenbugs can be tracked down by initializing memory to a known value, but one that is more likely to cause a problem than zeroing the memory, such as 0x55555555. Uninitialized variables can be nipped in the bud by having your compiler not allow them. This may be a language default, such as in C#, or a warning, such as in C++. If it is an available compiler warning, then it is highly advisable to promote it to an error, so the code will not compile with this warning. While this may require a few minor annoying code changes to get around the warnings, it is generally preferable to last minute debugging of a Heisenbug, lost in a stream of compiler warnings.
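The fill-pattern trick might look like this; DebugAlloc is a hypothetical name, and filling each byte with 0x55 gives the 0x55555555 word pattern mentioned above:

```cpp
#include <cstdlib>
#include <cstring>

// Debug allocator: fill fresh memory with a conspicuous pattern, so a read
// of uninitialized memory produces a recognizable (and usually troublesome)
// value instead of a quiet zero.
void* DebugAlloc(size_t bytes) {
    void* p = std::malloc(bytes);
    if (p) std::memset(p, 0x55, bytes);  // every 32-bit word reads 0x55555555
    return p;
}
```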

MEMORY CORRUPTION

One of the hardest types of Heisenbug to track down is random memory corruption. In this bug, with random frequency, at a random point in time, a random location in memory has a random value written to it. The less randomness involved, the better for the debugger. If it happens at a particular time, you can try to determine what exactly is going on at that time. If it’s at a particular location, you can trap the write, or look into what code or data has pointers to that location. If the value written is always the same, then sometimes that holds a clue. If it’s always 0x3f800000, then that’s 1.0f in floating point, so ask what might be storing a 1.0 in memory.

If it’s totally random (but reasonably frequent) that’s actually fine too, as writes to random locations can usually be caught in the debugger: the bug will eventually write to an illegal location, and you can set a write access breakpoint on read-only data.

The worst problem comes when the memory being corrupted is randomly within a narrow range of memory that is constantly being written to by legal processes, such as the stack (used for local variables), or a dynamic heap, where memory locations are constantly being used and reused. In this situation, unless you can narrow down the precise point in time the bug occurs, you will be unable to observe the corruption happening, or set a breakpoint, as all the other writes in that memory area will obscure the moment of corruption.

If it’s difficult to see what is being corrupted, and how much, but you can see the corrupt values after the fact, then again you can try to characterize the corruption from the nature of the data. If a block of three or four words is corrupted, perhaps with values that start (in hex) with 3, followed by a bunch of very random digits, then that might be a clue. See Figure 1a.

Figure 1a – a hex dump of some ASCII data (file names) with some corruption on the second line. The numbers look like they might be floats.

5c6b6369 73636f64 6d61675c 6e697365  ick\docs\gamesin
3e6fdb1a bd0ee1b0 3f7909cd 6f635c6b  .Ûo> °Ã¡. ½Ã.y?k\co
655c6564 706d6178 5c73656c 6d617865  de\examples\exam

Figure 1b – the same data, but viewed in float mode. The numbers that are actually sensible floats are quite obvious.

2.6502369e+017 1.8019267e+031  4.3599426e+027  1.8062378e+028
0.23423424    -0.034883201     0.97280580      7.0364824e+028
6.5049435e+022 2.9386312e+029  2.7403974e+017  4.3612297e+027

Here the corruption is not immediately apparent in the hex view. But looking at the ASCII data, you can see where things are going wrong. Then looking back at the hex, we see the first three words on the second line are actually very different: they look like they might be floating point values (two of them start with 3). So we switch to floating point view (Figure 1b) and we see that yes, they are very sensible floats (most floats in games are small, usually less than one). Looking closer, we can see they actually form a unit vector.

So these are all clues. They don’t tell you where the corruption is coming from, but they do tell you a little about it. In this case, something is writing a solitary unit vector to memory, and not corrupting the memory on either side. Perhaps you already have some suspects, and this might help whittle them down. Or perhaps this is your first clue, in which case it is a valuable first step, and can help you mostly eliminate many other things from consideration (all the code that could not be writing unit vectors).
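The float-view trick relies on reinterpreting the raw bits of a word. A sketch of doing that in code (1.0f has the bit pattern 0x3f800000, which is why corrupt words beginning 0x3f... hint at small float writes):

```cpp
#include <cstdint>
#include <cstring>

// Reinterpret a 32-bit word as an IEEE 754 single-precision float.
// memcpy is the safe way to type-pun; pointer casts invoke undefined behavior.
float BitsToFloat(uint32_t bits) {
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```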

TRACKING THE UNTRACKABLE

But how do you find something that vanishes when you look at it? A Heisenbug in a game will come up with a certain frequency. The more frequently it occurs, the easier it is to track down. Even a bug that occurs as infrequently as once a week can eventually be tracked down (although hopefully you would have a few weeks left on the project).

If a bug cannot be isolated by normal means, then you must look at circumstantial evidence. What is happening when the bug occurs? What just happened? What was going to happen? Perhaps the bug occurs only on a particular level, or in a particular area of the game. Try to build up a characterization of the bug, no matter how vague.

Enlist the help of the testers here. They play the game in ways very different from the way programmers play the game. A good tester will try to make a bug happen more often, and will often come up with convoluted theories as to what sequence of events they think precipitates the bug. These theories are often wildly off the mark, and contain many red herrings, but they can also contain many valuable clues. If a tester can reproduce a bug in a reasonable period of time, even an hour or so, then it is often worth watching the tester do this, as the programmer could quite easily waste several hours or days in fruitless code speculation, when observing some gameplay might provide a clue.

The classic definition of a Heisenbug is one that goes away when you look at it. This is generally not strictly the case. While it is true that you often get bugs that only occur while playing the game, and not when you hook up the debugger, or when you recompile in debug mode, you can always make some changes to the situation that will tell you more about the nature and location of the bug.

FIXING BY NOT FIXING

Characterizing the bug by describing the gameplay situations under which it occurs (or is more or less frequent) is half the story. The other half is what modifications you can make to the code, and how they affect the bug.

If you’ve gone through the usual debugging methods, and failed to isolate this elusive bug, then you need to focus on narrowing it down. Now a Heisenbug is different from a regular bug. Heisenbugs are sensitive to changes in the total state of the program. If you remove some code, and that prevents the bug from happening, it generally tells you nothing definite about the bug; you’ve quite possibly simply modified the state so the bug is either removed or hidden. You can’t tell either way. For example, if you suspect synchronization issues, and you turn off multi-threading, and the bug goes away, this unfortunately does not mean that you have isolated the cause of the bug. It’s a clue, but turning off multi-threading alters the state of the system in so many ways that you could simply have hidden the bug.

On the other hand, if you remove some code and the Heisenbug still happens, then paradoxically this could be much more useful. You have eliminated some code that is nothing to do with the bug, meaning you don’t need to consider that code any more, and your field of possible culprits shrinks. If you turn off multi-threading, and the bug still happens, that means you can be 99% sure it’s nothing to do with multi-threading, and you can move on with confidence, having eliminated a huge range of possible causes.

As well as narrowing down the bug in this way, you can try to clarify its location (and speed your tracking) by trying to make it happen more often. You have to get quite creative here; focus on amplifying the bug. If it seems to happen when more instances of a certain object are in the level, then modify the level so there are hundreds more of those objects. Make bold sweeping moves here: if it often happens when explosions are triggered, then trigger thousands of random explosions. If it happens when running fast, then double the running speed. Stress test the game until the bug either becomes repeatable, or its nature is revealed.

MAGICAL THINKING

Mental discipline is important when tracking Heisenbugs. Their very nature makes it very difficult to discern anything concrete about them and so even quite wild theories can start to take root in your mind. Perhaps, you might think, your computer or dev-kit is malfunctioning? Perhaps there are glitches in the power supply? Perhaps that flickering light is causing EMF resonance in the CPU? Perhaps vibration from passing trucks is jigging a loose component in the motherboard? Perhaps there is a bug in the compiler?

This is magical thinking. It is tempting to ascribe some esoteric cause which absolves you from guilt, but it’s rarely true. Much time can be wasted by entertaining these remote possibilities, especially with bugs that are highly intermittent. It is important to dispense with these ideas at once. If you suspect your computer, then change it. If you think there are problems with the power supply, then install a UPS or move to a different circuit in another room. Perhaps it was a cosmic ray, but it’s vastly more likely there is something wrong with the code.

It’s also tempting to blame the compiler. Compiler bugs do exist, but they are very rare. For all the bugs where the programmer has said “that can’t possibly be a code bug, it MUST be the compiler”, in 95% of cases, in my experience, the problem has turned out to be an ordinary bug, and not a compiler problem. If it IS a compiler problem, then it may require the assistance of someone familiar with the very low level debugging needed during the final stages of tracking it down.

Heisenbugs are mentally difficult for programmers to deal with. It is very frustrating to have something that eludes clear methodical debugging, and where you are forced into speculation, experiments and even debugging based on vague statistics. But a single Heisenbug can derail a project, especially if it is not addressed as soon as possible. Some Heisenbugs crop up only when the system is stressed, which might not be until just before beta, when all the assets and systems are fully incorporated. Programmers should be familiar with the possible causes, and general debugging techniques for dealing with Heisenbugs.

RESOURCES

Why Programs Fail: A Guide to Systematic Debugging, Ch 4, by Andreas Zeller, Morgan Kaufmann Publishers, 2006

Cross Platform Game Programming, Ch 6, by Steven Goodwin, Charles River Media.

Debugging Concurrency, Philippe Paquet, June 2005, Gamasutra, http://www.gamasutra.com/features/20050606/paquet_01.shtml

March 18, 2008

Exponential Optimizing, or not.

Filed under: Game Development — Mick West @ 9:57 am

I wrote an article titled “Exponential Optimizing” for Game Developer, and since it was published a few people wrote to tell me that what I was describing was not exponential, but was actually polynomial.

They are quite correct.   I had mistakenly used “exponential” in the sense of “rapidly increasing”, which is a valid usage, but not too helpful in discussing algorithms.

To be clear, for a variable x, and a constant c,

x^c is polynomial

c^x is exponential.

This is confused a little by the use of n for the variable (as in the number of elements), as n is usually constant.
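A tiny illustration of the difference with c = 2 (sketch code of my own):

```cpp
#include <cstdint>

// x^c with c = 2: polynomial growth.
uint64_t Poly(uint64_t x) { return x * x; }

// c^x with c = 2: exponential growth. The exponential overtakes the
// polynomial for good once x passes a small crossover point.
uint64_t Expo(uint64_t x) { return 1ull << x; }
```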

February 26, 2008

Managed Code in Games

Filed under: Game Development,Inner Product — Mick West @ 11:13 am

This article originally appeared in Game Developer Magazine, January 2007.

MANAGED CODE IN GAMES

The term “Managed Code” was once considered little more than a buzzword by many game developers. Synonymous with poor performance, uncertain memory usage and the unfamiliar C# language, managed code had a bad rep that many established game programmers could not get beyond. Yet managed code is becoming increasingly relevant in the world of game development. This article explains what managed code is, how it can be used in games, and why it is important to game programmers.

WHAT IS MANAGED CODE?

Managed code can be best explained by comparing it to “native” code. Native code is the executable file that results from compiling, say, a C++ program into the .EXE file that contains actual machine code that runs “natively” on the target platform. Managed code, on the other hand, is code compiled into an intermediate language (IL) that is executed either on a virtual machine (like early Java), or semi-natively using “Just In Time” (JIT) compilation (like C#). At a more fundamental level, native code runs directly on the CPU and has direct access to system resources (particularly memory), whereas managed code has a layer insulating the code from the hardware, which “manages” the code operations and resource interactions.

Many games, especially big budget AAA games, already use some kind of home-grown managed code in the form of either an interpreted scripting language, or a language that compiles into a byte code that runs on a virtual machine. Commercial game engines often have their own scripting language, which is essentially managed code. The Unreal engine has a Java-like UnrealScript. The Quake engine has “Quake script”. But when people speak of managed code, they generally are not referring to these home-grown scripting languages, but rather to writing the actual game in managed code, which for the PC means using Managed DirectX.

Managed DirectX is not DirectX written using managed code. It’s simply an interface to DirectX that allows it to be used by managed code. This distinction is very important. The lower level DirectX layer is still just the same, and can still push polygons around just as fast as before. Just now you can call it from managed code.

Managed code does not always mean C# either. In Visual Studio, C++ can be compiled into IL simply by adding the /clr compile switch, which allows you to use managed DirectX.

WHY MANAGED CODE?

When asked what advantages you get from managed code, proponents will tell you the biggest advantage is productivity. Managed code, in theory, will allow you to write your programs faster. There are several reasons given for this.

Firstly, managed code is easier to write. Writing your code in C# generally results in shorter and more readable code. You don’t need to have header files. Compilation times are reduced. With managed DirectX using C#, the DirectX initialization code is greatly simplified. In addition the .NET framework supplies you with a lot of components you might otherwise have to write yourself.

Secondly, managed code removes the causes of many bugs. Variables are always initialized, so you can’t have bugs resulting from uninitialized memory. Memory management is automated with garbage collection so there should be no memory leaks and no dangling pointers.

Another advantage of managed code that is often touted is that of “interoperability”. This is the ability to mix and match languages, both managed and unmanaged, in developing an application. Regardless of which language a particular component is written in, it is theoretically quite easy to interface it with other components written in different languages. This is of limited application to game developers, except as it pertains to the interface between managed and unmanaged code.

A final advantage of managed code is security. Firstly, managed code removes (or makes impossible) the potential security loopholes that often exist in native code, such as buffer overruns. Secondly, “managing” code controls its access to system resources, such as the file system and memory, in such a way that even if some nefarious code was introduced into the application, it would be unable to do much damage.

WHY NOT MANAGED CODE

Managed code is obviously not without its problems, and those problems strike fear into the heart of any battle hardened game programmer. Namely: framerate and memory.

Performance is nearly always going to be worse with managed code than it is with native code. This is because JIT compilers are currently not very good at optimizing code, and because the managing of code and the facilitation of that safety and memory management introduces a significant amount of overhead that drags down the speed of your code.

As well as pure code speed, the unpredictable nature of garbage collection means it is difficult to predict CPU usage. If a lot of garbage collection happens at once, it might cause the framerate to drop.

Memory usage is another problem. Since the code is compiled into IL, the executable file can actually be smaller, which is a minor advantage. But once the program is loaded into memory and JITed, the lack of optimization means the native footprint will be larger. The additional overhead of storing the CLR, boxing, and memory management also adds to the total memory usage.

As a practical example, I took my “Blob” example (see Game Developer June/July 2006), and recompiled it in Visual C++ with the /clr option. Three effects were apparent:
The size of the executable dropped from 140K to 116K
The frame rate dropped from 160 frames/second to 60 frames/second
The memory usage jumped from 29MB to 34MB

Why so slow? Well the “Blob” example is highly CPU intensive, and involves a lot of iterating over arrays and STL vectors of atomic objects and performing fairly complex operations on them, like collision detection and Verlet integration. This is simply not something that the .NET CLR is very good at doing. The code that is generated, and then JITed, ends up not being at all optimal, and since the CPU time is the bottleneck this causes the precipitous drop in frame rate.

MANAGED CODE FOR GAMES

So, if by using managed code we get this dreadful drop in frame-rate, why would any game programmer use it?

The most obvious answer is that not all games need all the CPU power or all the memory. Consider the rapidly growing market for casual games such as Diner Dash or Luxor. These games require very little in the way of processor resources, and are necessarily small to facilitate quicker downloads. The faster development times are also a big plus, as casual games are generally low budget, with a schedule of just a few months. The robustness provided by the automated memory management is a win again here, contributing to faster development, and easing the process of debugging around release. C# has not been too popular with casual games, due to the possibility of having to download the .NET framework, but that’s increasingly installed by default on PCs or deployed automatically via Windows Update, so that objection is less relevant.

But what about games such as Half Life 2 or Neverwinter Nights 2? Is it possible to do high end games like this using managed code? The simple answer is “no, unless you want the game only playable with 2 gigs of memory and at half the frame-rate”. The more complex answer is “yes, as long as you use managed code for the right things”.

DIVISION OF LABOR

The key to successfully utilizing the benefits of managed code is to divide your code up in such a way that the code that would contribute most to performance degradation under managed code remains as unmanaged (native) code.

It’s often said that 90% of the (processing) time is spent in 10% of the code. That 10% (measured in lines of code) is code that performs large numbers of iterations, looping over data structures, performing repeated operations. These operations are things that are performed many times per frame, every frame, things such as collision detection, physics simulations and skeletal animations.

The remaining 90% of the lines of code (which takes only 10% of the processor time), is code that either is not executed every frame, contains very few iterations, or is only executed in cases where frame-rate is not an issue. Code such as user interface display, network packet marshalling, or artificial intelligence.

Managed

Player Control
Camera Motion
Combat Systems
User Interface
Game flow
State Transition AI
Saving and Loading
Data marshalling
Compositing Effects
In-game editors

Unmanaged

Collision Detection
Physics
Pathfinding
Skeletal Animation
Video Processing
Vertex Processing
Particle Systems
Visibility Determination
Fluid Dynamics

This table shows which types of code are suitable for managed code, and which are not. You might notice one thing about all the code tasks listed in the “unmanaged” column: they are all tasks that are commonly performed by commercially available engine components, or by a generic in-house engine. They are also typically components that are “close to the metal”, in that they may be hardware dependent, utilizing target specific resources. They are not game specific.

The code in the “managed” column, on the other hand, is higher level code, and generally platform independent. This code is often highly game specific, and can account for a very large portion of the actual code written for a particular game project, especially one that is based on an existing game engine.

So it’s clear how the division of labor works: low level engine components that require speed and efficient memory usage remain in unmanaged (native) code, while game specific components that generally use less of the system resources can be written in managed code to gain the productivity benefits. If a game specific component ends up being a bit too inefficient in managed code, it is probably something that can eventually be made into a core engine component down the road.

MANAGED CODE IN EDUCATION

Since managed code is simpler to develop in than unmanaged code, it is an ideal platform for initially instructing students in the craft of game programming. In addition, the easy accessibility of DirectX and XNA makes managed DirectX an obvious choice of platform for students implementing their first game. Hence the modern student’s first exposure to game programming may well be in a fully managed code environment. Certainly courses in game development that are not structured along the lines of a traditional CS degree will be more heavily oriented towards a managed environment.

This means that there is a whole generation of games programmers coming along who are not only experienced in programming for managed code languages and environments, but may actually be more experienced in writing managed code than unmanaged code. The result is that the more your engine utilizes (or allows for) managed code, the greater your talent pool of potential game programmers will be. It’s quite possible that managed code will grow in popularity on the educational and hobbyist front to such an extent that there will be shortages of programmers who can write and debug code effectively in unmanaged C++, much as you would now be hard pressed to find many young programmers comfortable programming games in ASM, or even straight old-fashioned C.

THE MICROSOFT EFFECT

Perhaps the biggest influence on the future of managed code in games will be Microsoft’s popularization of the XNA framework for game development. Microsoft is aggressively pursuing the hobbyist game developer market, to the extent of giving away for free the Express versions of XNA Game Studio, including the C# and C++ editions of Visual Studio, all tools quite capable of being used to create professional games.

Microsoft is also teaming up with educational establishments to promote the XNA framework, with several universities adding courses based on this technology. But perhaps the biggest driving factor in all this is Microsoft’s decision to allow independent game development for the Xbox 360 console, with one caveat: the games have to be written entirely in safe managed code.

Why only managed code on the 360? Two simple reasons: firstly to prevent viruses and malware, and secondly, and most importantly, to prevent the development environment being used to pirate games and other paid content.

The ramifications could be huge. Potentially a whole generation of hobbyist and student programmers will get their first experience of console programming on the Xbox 360, using XNA and C#. On the one hand this could be a great competitive advantage for Microsoft in a few years, as perhaps the majority of programmers will enter the game development industry with experience in Microsoft products. But on the other hand it could also be viewed as a great push for managed code in general. Aside from the DirectX framework, the .NET framework is portable (via the Mono project), and C# is an open standard which runs on Linux as well as Windows.

SUMMARY

Managed code can offer significant productivity gains, yet those gains come with equally significant speed and memory performance hits. For smaller games it’s quite reasonable to write the entire game in a managed language. In larger games managed code is not appropriate for engine components, but can work very well on a significant portion of the higher level code.

The popularity of managed code in education, and the easy availability of development tools, may mean that the next generation of game programmers will feel most comfortable and productive programming in a managed language, and game developers would be wise to recognize this and incorporate managed code into their programming environment.

RESOURCES

Gamasutra, Microsoft to Enable User-Created XBox 360 Game, August 14 2006
http://www.gamasutra.com/php-bin/news_index.php?story=10458

Kyle Wilson, Why C++, GameArchitect.net, July 2006
http://gamearchitect.net/Articles/WhyC++.html
