Systematic Gaming

July 23, 2009

Asset Management Checklist

Filed under: asset management — systematicgaming @ 5:45 am

Now that we’ve looked at the layers of asset management, let’s reflect upon the key points and see how we can improve our existing engine and pipelines.

Each of these questions targets a specific part of the asset pipeline that deserves consideration.  By reflecting on these points you can find ways to improve your own asset pipeline and workflow.


Are shared assets duplicated unnecessarily?

Are instanced assets handled efficiently?  With minimal overhead and waste?

Do you support generated (procedural) assets seamlessly?

Are you wasting memory on strings or filenames?

Do you avoid unnecessary loading or reloading between levels?


Are you processing data to make runtime asset management more efficient?

Do you pack referenced assets together for faster loading?

Do you optimize disc layout based on asset usage?  (Duplicating data, sorting by dependency graph, etc.)

Content & Workflow

Is your source data versioned?

Can you re-create a previous version of your data?

Can you identify who last changed an asset? And why?

Are your tools integrated with your asset management system?

How many steps are required by a user to add a new asset?  How many to update or tweak an asset?

How long does it take to view a data change in game?


July 16, 2009

Asset Management – Content

Filed under: asset management, workflow — systematicgaming @ 6:14 am

We’ve already looked at the runtime and processing layers of asset management, now we’re ready to look at the most involved and complex layer – the content layer.

This is the layer where the content producers – artists, level designers, sound designers, anyone who creates game content – truly interface with the game engine. At this level we’re not just dealing with runtime data – we also need to manage digital content creation (DCC) tool data, such as Photoshop PSD files, Maya files, etc., in addition to custom data types produced by in-house tools or game engines.

This layer is all about how content creators use and interact with data and the game engine.  The key components of this layer are:

  • Asset tracking and versioning
  • Tool integration and workflow


July 2, 2009

Asset Management – Processing

Filed under: asset management, game programming — systematicgaming @ 4:21 am

Last time we looked into the lowest levels of asset management, which can be handled by reference counting and intelligent loading.  With the next level, the processing layer, we look at how we prepare assets for our runtime.  This is an important step and where a lot of optimization occurs.  We can break the processing layer down into a few distinct stages:

  • Asset referencing
  • Building (or baking)
  • Packing


June 25, 2009

Asset Management – Runtime

Filed under: asset management — systematicgaming @ 1:00 am

The runtime layer is the lowest level of asset management, encompassing file loading, reference counting, instancing and even procedural asset generation.  To meaningfully discuss handling assets at runtime we’ll need to define the different types of assets.  Here are a few basic asset types:

  1. Raw data
  2. Instanced data
  3. Procedural assets
  4. Composite data
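To make the reference-counting idea above concrete, here is a minimal sketch of a shared asset cache.  The names and structure are my own illustration, not the blog's engine: raw data is loaded once, and any number of instances share it; the cache holds weak references so unused assets can be released.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical sketch of a reference-counted asset cache: raw data is
// loaded once and shared; instances hold shared_ptrs to the payload.
struct Asset {
    std::string name;
    // ... payload (texture bits, vertex data, etc.) would live here
};

class AssetCache {
public:
    // Returns the existing asset if already resident, otherwise "loads" it.
    std::shared_ptr<Asset> Acquire(const std::string& name) {
        auto it = cache_.find(name);
        if (it != cache_.end()) {
            if (auto existing = it->second.lock())
                return existing;  // shared: no duplicate load
        }
        auto asset = std::make_shared<Asset>(Asset{name});
        cache_[name] = asset;  // weak ref: cache does not keep assets alive
        return asset;
    }

    // How many live references exist for a given asset (0 if unloaded).
    long UseCount(const std::string& name) const {
        auto it = cache_.find(name);
        return it == cache_.end() ? 0 : it->second.use_count();
    }

private:
    std::unordered_map<std::string, std::weak_ptr<Asset>> cache_;
};
```

A real engine would key on a hashed name rather than a string, and would likely pool the payload memory, but the acquire/release flow is the same.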


June 24, 2009

Asset Management

Filed under: asset management — systematicgaming @ 2:25 am

Modern games have gigabytes of data, and thousands of individual assets.  Managing all this data well can be a very complex task, and impacts game development at all levels, from concept artist to low level bit-pushing coder.  The next few articles will look at the issues of asset management in games.  First we must clarify our goals with some simple definitions.

What exactly is an asset?

For the purpose of this series we’ll define an asset as any piece of data used by the game.  This is very inclusive, and should help illustrate the scope of the problem.  A texture, a model, a sound, an AI script – just about any data used can be considered an asset.

What is asset management?

It’s the process of tracking data used by the game: when it is used, how it is used, and how individual pieces of data relate to each other.  For example, to load a character we need: a model, the material and shaders, the textures, the animation data, and possibly more (such as AI or motion graphs).  Asset management is an integral part of workflow, data processing and runtime optimizations.
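The character example above is really a dependency-resolution problem: before the character can load, everything it references must load first.  As an illustration (the asset names and map-based representation are my own, not the blog's), a depth-first walk over a dependency map yields a valid load order:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch: resolving an asset's dependencies into a load
// order, so a "character" pulls in its model, materials, and textures
// before it is itself created.
using DependencyMap = std::map<std::string, std::vector<std::string>>;

// Depth-first walk emitting dependencies before the assets that use them.
void ResolveLoadOrder(const std::string& asset, const DependencyMap& deps,
                      std::set<std::string>& visited,
                      std::vector<std::string>& order) {
    if (!visited.insert(asset).second)
        return;  // already scheduled (a shared dependency)
    auto it = deps.find(asset);
    if (it != deps.end())
        for (const auto& dep : it->second)
            ResolveLoadOrder(dep, deps, visited, order);
    order.push_back(asset);
}
```

Because visited assets are skipped, a texture shared by two materials is scheduled only once – the same property that avoids duplicated loads at runtime.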

Proper asset management is a major issue in game development.  Solid asset management is required to handle the amount of content modern games use.

In this series we’ll take a closer look at the major layers of asset management:

  • Runtime Layer – Where the game engine deals with assets
  • Processing Layer – Where assets are processed for the engine
  • Content Layer – Where assets are created and used by designers

Each layer has its own issues to handle, and we’ll investigate these issues and look into possible management solutions.

Part 1: The Runtime Layer

Part 2: The Processing Layer

Part 3: The Content Layer

Finally we have a checklist of important points to consider when building and maintaining your asset management system.

January 31, 2009

Game-Tool Communication

Filed under: game programming, workflow — systematicgaming @ 12:20 am

Modern games are extremely content heavy – most of the people on a game team are part of the art or design departments.  The larger the team gets, the greater the proportion of content producers becomes, and the trend will only continue.

So what does this have to do with game systems programming?

The more content your game has, the better your tools have to be.  The best tools are robust, efficient and as seamless as possible.  The key to a seamless workflow is close integration between your tools and game engine.

There are basically two ways to accomplish this:

  • Direct integration of the game engine with the tool
  • Remote integration via a communication layer
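For the remote approach, the communication layer boils down to a small command protocol between tool and game.  As a sketch under my own assumptions (the command set and framing are hypothetical, and the socket transport itself is omitted), a length-prefixed message might look like:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical framing for tool-to-game commands, e.g. "reload this
// asset". Layout: [1 byte command][4 byte little-endian length][payload].
enum class Command : uint8_t { ReloadAsset = 1, SetParameter = 2 };

std::vector<uint8_t> EncodeMessage(Command cmd, const std::string& payload) {
    std::vector<uint8_t> buf;
    buf.push_back(static_cast<uint8_t>(cmd));
    uint32_t len = static_cast<uint32_t>(payload.size());
    for (int i = 0; i < 4; ++i)
        buf.push_back(static_cast<uint8_t>((len >> (i * 8)) & 0xFF));
    buf.insert(buf.end(), payload.begin(), payload.end());
    return buf;
}

bool DecodeMessage(const std::vector<uint8_t>& buf, Command& cmd,
                   std::string& payload) {
    if (buf.size() < 5) return false;  // header is always 5 bytes
    cmd = static_cast<Command>(buf[0]);
    uint32_t len = 0;
    for (int i = 0; i < 4; ++i)
        len |= static_cast<uint32_t>(buf[1 + i]) << (i * 8);
    if (buf.size() != 5 + len) return false;  // truncated or trailing data
    payload.assign(buf.begin() + 5, buf.end());
    return true;
}
```

An explicit length prefix keeps framing trivial over a stream socket, which matters when the game end of the connection has to stay cheap and robust.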


January 13, 2009

Performance: C# vs C++ – Part 2

Filed under: c++ vs c#, optimization, profiling — systematicgaming @ 1:01 pm

I was pretty content with my previous tests on C# sorting performance, which were pretty disappointing.  Normally I’d be happy to move on, but one commenter did pique my curiosity by pointing to an MSDN article about how lousy the 2.0 CLR is with structs.  So I downloaded Visual Studio 2008 Express Edition to give it a try.  I admit, I was a bit surprised and impressed with the differences.


January 4, 2009

Performance: C# vs C++ – Revisited

Filed under: c++ vs c#, optimization, profiling — systematicgaming @ 2:18 am

I thought I’d comment on my own post about C# vs C++ performance.  The purpose of the test was to compare the time to sort 128-byte objects in C++ vs C#, nothing more.  Again, if you understood that, then you can probably see that the performance measured has more to do with execution environments than language differences.

That said, I’d like to address the following issues in my testing:

  1. I used a char array in C# to simulate a similar char array in C++.  This is incorrect since C# chars are 16 bits while in C++ they are only 8 bits.  I admit this was pretty sloppy.  Replacing char with byte fixes this, and the result is the C# version runs about 15% faster.  Faster, yes, but still not fast enough to make a significant difference.  The most striking thing is that halving the memory usage only gives a 15% difference, whereas in C++ memory size had a direct linear impact on runtime.
  2. Some people complained about using the fixed keyword to make the array part of the Data struct.  It’s trivial to replace it with a number of int or float members to create a 128-byte struct, and it makes no difference in performance.  The point was to approximate C++ and to represent a larger class; fixed was a simple way to do this.
  3. Some people complained this isn’t an apples-to-apples comparison – I disagree.  It’s about as direct a way as possible of comparing two very different languages and operating environments – perform the same task and see how long it takes.
  4. I was running Visual Studio 2005 SP1 for both tests, newer/different versions may have varying results.

The purpose of the previous post isn’t to claim that C++ is better than C#, but many people seem to think that I am – I suppose this is what I get for posting such an inflammatory title. I use C# on a daily basis and like it.

There were a number of comments on my last post claiming to “debunk” my tests, referring to what is posted here.  This debunking itself admits it’s comparing apples to oranges – however it completely misses the point of creating a fixed-size array in the first place.  Replacing the fixed array with a heap pointer shows nothing about C# performance – only that copying a smaller struct is faster than copying a larger one.  This is a fact I addressed previously.

The claim is made that this test isn’t representative of what a C# programmer would write.  True, a C# programmer would very rarely use a struct with a fixed array.  However, it’s fairly easy to create objects with 128 bytes or more of data.  That’s 32 4-byte variables, which isn’t a huge number of variables to have in a class, especially one with a few parent classes each with a few member variables.

Anyways, other than the char/byte mistake, I stand by the original results – C# takes about 10 times longer to sort an array on my computer, compared to a similar C++ version.

January 3, 2009

Performance: C# vs C++

Filed under: c++ vs c#, optimization, profiling — systematicgaming @ 6:02 am

In my last post I discussed cache pressure and how important the cache is for performance.  Out of mild curiosity and a bit of boredom, I decided to see how the same code (or near enough) performed in C#.  Why C#?  A lot of tools are written in C# now, mostly because it’s a nice language with fantastic library support and solid GUI integration.  It’s a more productive environment for all those programs that don’t have to run at 60 fps.

Now given how straightforward the test is – sorting an array – I’d expected performance to not differ by much compared to the C++ version.  Before I ran this code I was guessing that C# would be maybe 50% slower, and at most half the speed of the C++ version, especially since the sorting should be mostly algorithm- and memory-bandwidth-bound.
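To give a feel for the shape of the test, here is a rough C++ reconstruction of the kind of benchmark described – sorting an array of 128-byte objects by key.  The struct layout, key field, and deterministic fill are my assumptions, not the exact code from the linked sources:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Assumed 128-byte object: a sort key plus padding standing in for the
// rest of a "fat" game object.
struct Data {
    int key;
    char padding[124];  // pads the struct to 128 bytes total

    bool operator<(const Data& other) const { return key < other.key; }
};

// Deterministic pseudo-random fill (simple LCG) so runs are repeatable.
std::vector<Data> MakeTestArray(int count) {
    std::vector<Data> items(count);
    unsigned seed = 12345;
    for (auto& item : items) {
        seed = seed * 1103515245u + 12345u;
        item.key = static_cast<int>(seed % 100000u);
    }
    return items;
}
```

Timing `std::sort` over such an array is dominated by moving those 128-byte elements around, which is exactly why element size had the linear impact on runtime mentioned in the follow-up post.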

You can get both versions of the code here: C++, C# to test for yourself. [EDIT: fixed link to C# code]

Now for the results.

[EDIT: a number of issues were raised about this test in the comments and elsewhere – here’s my response]


December 23, 2008

Patterns in Performance: Cache Pressure

Filed under: game programming, optimization — systematicgaming @ 3:16 am

I’ve been mentioning memory and the cache a lot, because proper cache utilization really is critical to good performance.  This time we’ll look at cache pressure – a term that refers to overworking the memory cache with too many or wasteful memory accesses.  Basically, putting pressure on the memory cache means you’re wasting time accessing more memory than you need to.  The main cause is inefficient data access, often from using data structures that are too large or too sparse, reducing any benefit from memory locality.
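The "structures too large" cause is easy to show in code.  In this illustrative sketch (the particle struct and field names are hypothetical), summing one field of a fat struct drags all 128 bytes of every element through the cache, while a structure-of-arrays layout touches only the bytes actually needed:

```cpp
#include <cassert>
#include <vector>

// A "fat" struct: rarely-used data inflates every element to 128 bytes.
struct FatParticle {
    float x, y, z;
    float history[29];  // cold data carried alongside the hot fields
};

// Array-of-structures: consecutive x values are 128 bytes apart, so most
// of each fetched cache line is wasted.
float SumXAoS(const std::vector<FatParticle>& particles) {
    float sum = 0.0f;
    for (const auto& p : particles) sum += p.x;
    return sum;
}

// Structure-of-arrays: x values are contiguous, cache lines are fully
// used and hardware prefetching works well.
struct Particles {
    std::vector<float> x, y, z;
};

float SumXSoA(const Particles& particles) {
    float sum = 0.0f;
    for (float v : particles.x) sum += v;
    return sum;
}
```

Both functions compute the same result; only the memory traffic differs, and on large arrays that difference is the cache pressure this post is about.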


December 6, 2008

How to write a check-in comment in Japanese

Filed under: japan — systematicgaming @ 3:43 am

Ok, you find yourself coding in Japan one day and have to check your stuff into source control.  What do you write?  You could just type it up in English, and nobody will complain.  Probably because they’re a little afraid of talking to you.  But nobody will read your English comment, so they won’t know what you’ve been doing, and you’ll have no way of validating all the time you spend browsing YouTube at work.


November 23, 2008

Patterns in Performance – Cache Manipulation

Filed under: optimization — systematicgaming @ 2:37 am

With modern processors greatly outpacing the speed of memory we need to properly utilize the memory cache to achieve high performance.  Whenever we touch a memory address, the memory is pulled into the cache from main memory.  This is a slow operation, taking hundreds of cycles on modern CPUs.

To address this problem we’ll look at another pattern in performance: cache manipulation.  Instead of waiting for the memory controller to fetch memory at the time we access it, we tell the memory controller to fetch the memory in advance.  By the time we need the memory it has already been put into the cache.

[EDIT: Updated to more correctly show load-wait timings.  Loads cause stalls when the loaded data is first accessed, not when the load instruction is invoked (which is what was previously implied).]
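A minimal sketch of the idea, assuming a GCC/Clang toolchain: `__builtin_prefetch` requests a cache line a fixed distance ahead of the current iteration, so the data has (hopefully) arrived by the time the loop reaches it.  The lookahead distance here is a made-up starting point, and MSVC or console compilers have their own equivalents (e.g. `_mm_prefetch` on x86):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum an array while prefetching a cache line several iterations ahead.
// Note: stalls happen when the loaded data is first *used*, not when the
// prefetch/load instruction issues - which is why fetching ahead helps.
float SumWithPrefetch(const std::vector<float>& data) {
    const std::size_t kAhead = 64;  // lookahead distance; tune per platform
    float sum = 0.0f;
    for (std::size_t i = 0; i < data.size(); ++i) {
#if defined(__GNUC__)
        if (i + kAhead < data.size())
            __builtin_prefetch(&data[i + kAhead]);  // hint only; no side effects
#endif
        sum += data[i];
    }
    return sum;
}
```

For a simple linear walk like this the hardware prefetcher usually wins on its own; explicit prefetching pays off most on pointer-chasing or strided access the hardware can’t predict.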


November 7, 2008

Patterns in Performance – Caching

Filed under: optimization — systematicgaming @ 4:20 am

When optimizing code there are a number of techniques that are frequently used – tried and true methods to speed up our code.  These optimization patterns occur again and again.  In this series we’ll look at various patterns in performance that we can apply in numerous situations.

“Almost all programming can be viewed as an exercise in caching” – Terje Mathisen

The first pattern we’ll look at is caching.  Caching is simply storing the result of calculations for later use.  This is a traditional space-time trade-off, where we use a limited amount of memory to avoid recalculating results, trading memory for speed.
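As a minimal sketch of the pattern (the class and the choice of `sqrt` as the "expensive" function are illustrative only), results are stored keyed by input and returned directly on a repeat request:

```cpp
#include <cassert>
#include <cmath>
#include <unordered_map>

// Memoize results of an expensive computation, trading memory for speed.
// A real game cache would bound its size and evict stale entries.
class SqrtCache {
public:
    float Get(int n) {
        auto it = cache_.find(n);
        if (it != cache_.end()) {
            ++hits_;
            return it->second;  // reuse the stored result
        }
        float result = std::sqrt(static_cast<float>(n));  // the "expensive" call
        cache_[n] = result;
        return result;
    }

    int Hits() const { return hits_; }

private:
    std::unordered_map<int, float> cache_;
    int hits_ = 0;
};
```

The trade-off is explicit: every cached entry costs memory forever (in this naive version), so caching only wins when the computation is costlier than the lookup and the same inputs actually recur.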

October 23, 2008

Game Optimization 101

Filed under: game programming, optimization — systematicgaming @ 6:23 am

In this article we’ll look at the process of optimization.  There’s no single way to optimize a game, but there are optimization methodologies to follow, which will help us prioritize our optimization strategy.

In previous articles we looked at how we can profile parts of our game.  At this point we only know how long sections of our game code are taking.  We haven’t looked into how to use this information to make our game faster.  We need to know how to target the slow parts of our game.

The process of optimization is two parts methodology, one part knowledge and one part intuition.  We’ll start with a simple process and show how it can help us optimize effectively.


October 12, 2008

Game Profiling 102

Filed under: game programming, profiling — systematicgaming @ 1:41 am

In the last article we implemented a simple stopwatch, but didn’t really do much with our timing data other than print it out.  Simply printing out our timings isn’t very helpful either: first, you can’t always see standard output on a console, and second, it’s a pretty awkward way to collect data.  We can do much better.

Our goal this time is to design a system that allows us to profile our entire game and display the results in real time.  We’re not going for detailed performance data, just a high-level picture of our game’s performance.  The key goals are:

  • Collect data from each major subsystem
  • Present our profiling data to the user in an understandable format
  • Keep a multi-frame history of our data to detect performance spikes
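The three goals above can be sketched as a small data structure – this is my own illustration of the shape such a system might take, not the implementation from the article, and the stopwatch capture itself is omitted (timings are fed in by the caller):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Per-subsystem frame timings with a bounded history window, so a
// display layer can show current cost and flag spikes.
class FrameProfiler {
public:
    explicit FrameProfiler(std::size_t historyFrames)
        : maxFrames_(historyFrames) {}

    // Record one frame's cost for a subsystem ("render", "ai", ...).
    void Record(const std::string& subsystem, double milliseconds) {
        auto& samples = history_[subsystem];
        samples.push_back(milliseconds);
        if (samples.size() > maxFrames_)
            samples.erase(samples.begin());  // drop the oldest frame
    }

    // Worst frame within the history window - useful for spotting spikes
    // that a single-frame display would miss.
    double PeakMs(const std::string& subsystem) const {
        double peak = 0.0;
        auto it = history_.find(subsystem);
        if (it != history_.end())
            for (double ms : it->second)
                if (ms > peak) peak = ms;
        return peak;
    }

private:
    std::size_t maxFrames_;
    std::map<std::string, std::vector<double>> history_;
};
```

A ring buffer would avoid the `erase` cost in production, but the point is the history window: a spike that lasts one frame still shows up in `PeakMs` for the next N frames, long enough for a human to see it on screen.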


September 30, 2008

Game Profiling 101

Filed under: game programming, profiling — systematicgaming @ 8:21 am

On this blog I keep mentioning efficiency, resource usage and optimization as key requirements for any well designed game system.  When discussing memory I put a large emphasis on tracking and profiling memory usage.

This time we’ll look at the various ways we can profile our game. We’ll also look at how to instrument our game for custom profiling, focusing on CPU usage.


September 24, 2008

Data Compression Evaluation

Filed under: file management — systematicgaming @ 1:38 pm

When discussing load times I mentioned that compression is an important way of reducing load times.  I also said that memory usage can be reduced by compressing data into smaller fixed-size blocks – instead of compressing a file as a single block.  There is also a variety of compression algorithms available, with different performance characteristics and strengths.

So how do you choose the proper compressor?  There is no single best algorithm; each has different strengths and weaknesses and is useful in different situations.  We’ll investigate a few different compression algorithms and, more importantly, evaluate what to look for when deciding how to compress your data.


September 18, 2008

Load Times: Layouts and More

Filed under: file management, game programming — systematicgaming @ 11:41 pm

We know that seek time is a major problem with optical media, so we’ll need to reduce it as much as possible to achieve fast load times. There’s really only so much we can do to solve the problem:

  • Seek less by reading fewer files
  • Seek less by organizing our files better

We’ve seen how to reduce the number of files read by using packfiles, which also helped seek times a little.  Since seek times are related to the physical distance and direction a drive head has to move, we’ll look at how to reduce seek times by arranging files on disc to minimize that distance.

We’ll also wrap up this series on load times with some tips and tricks to get that last bit of optimization.


September 16, 2008

Load Times: Packfiles

Filed under: file management, game programming — systematicgaming @ 5:27 am

A packfile is simply a single file containing one or more other files. They’re very useful for load times, and can be used in a number of ways. We can put files that get loaded together into a single packfile and load them at once, such as putting the data for a single level into one packfile and loading it in a single read. We can also put our entire filesystem into one or more packfiles, which lets us handle compression cleanly as well as making distribution easier.

So what do we need to create a packfile? Not much:

  • File name and path, usually relative to a specific root directory
  • File size and compressed size

In this article we’ll look at how to implement a packfile system.
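As a starting point, the two pieces of metadata listed above map directly onto a table-of-contents entry.  This is a hypothetical sketch (field names, linear lookup, and back-to-back packing are my assumptions, not the article's implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// One table-of-contents entry per packed file, matching the fields
// listed above plus the offset of the file's data within the pack.
struct PackEntry {
    std::string path;         // relative to the pack's root directory
    uint64_t offset;          // where the (possibly compressed) data starts
    uint32_t size;            // uncompressed size
    uint32_t compressedSize;  // == size if stored uncompressed
};

class PackTOC {
public:
    void Add(const std::string& path, uint32_t size, uint32_t compressedSize) {
        entries_.push_back({path, nextOffset_, size, compressedSize});
        nextOffset_ += compressedSize;  // files packed back to back
    }

    const PackEntry* Find(const std::string& path) const {
        // Linear search for clarity; a real system would hash the path
        // (which also avoids storing full filename strings at runtime).
        for (const auto& entry : entries_)
            if (entry.path == path) return &entry;
        return nullptr;
    }

private:
    std::vector<PackEntry> entries_;
    uint64_t nextOffset_ = 0;
};
```

Storing both sizes lets the loader allocate the decompressed buffer up front and issue a single read of exactly `compressedSize` bytes at `offset`.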


September 11, 2008

Load Times: Compression

Filed under: game programming, memory management — systematicgaming @ 11:49 am

We’ve gone over what happens when we read files, and came up with some ways to reduce stalls between our game and the OS and hardware. We designed a file manager that allows us to read files asynchronously and remove all the wait time. What next?

Well the simplest way to reduce load times is to load less data. If we simply compress our data we’ll reduce load times a lot. General compression algorithms can often get a 50% or more reduction in file size, which translates directly into reduced load time.

