On Simulators

I've mentioned a couple of times on this blog that I use simulators in my game design. I'm a software developer in my day job, and I've been obsessed with computers for just about as long as I've been obsessed with games. It only makes sense that I would combine these pursuits whenever I'm given the opportunity. I'll talk about some of the techniques I've used in this article in the hopes that it'll help some other folks.

It's worth examining, first, what sorts of things simulators can help with. The biggest area where I make use of them is when I have complex mathematical interactions in a design. While I do have the background to sort that stuff out using real mathematical tools, those skills are really rusty. My programming skills are not, so it's a lot easier to figure things out with a simulation. The next place I make use of simulations is when I'm tuning the parameters of a game. Finally, I'll use a simulator to test out alternative rules far more easily than lining up an entire playtest.

Some examples of things I've tested recently with simulators (all example code here is in the public domain, so knock yourself out if it helps):

  • Probabilities of oddball poker hands for Wozzle. While we didn't end up using them, it was interesting to see what the relative probabilities of things like three-pairs and two-threes-of-a-kind were compared to existing poker hands. You can see that script, although like all examples here, it's kind of rough.
  • Looking at the average gains of pass and run plays in Gridiron Solitaire, helping Bill Harris balance out those functions. That code is available as well.
  • A combat simulator for a tabletop adventure game, a big, complex beast of a thing that would eventually fit into the same basic space as Warhammer Quest or Descent. That game isn't really ready to be discussed publicly, but one day.
  • Examining dice probabilities in KMATTS to try and weigh different dice combinations. That code is again available for download.
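
The poker-hand example in the first bullet gives a flavor of how small these scripts can be. Here's a minimal Monte Carlo sketch (entirely my own, not the Wozzle script; I'm assuming a seven-card deal purely for illustration) that estimates how often three pairs turn up:

```python
import random
from collections import Counter

RANKS = list(range(13)) * 4  # a 52-card deck, suits ignored

def is_three_pair(hand):
    # Exactly three pairs plus a singleton -- hands that are really
    # "something better" (a set, a full house) don't count
    return sorted(Counter(hand).values(), reverse=True) == [2, 2, 2, 1]

def estimate_three_pair(trials=100_000, hand_size=7, seed=0):
    rng = random.Random(seed)
    hits = sum(is_three_pair(rng.sample(RANKS, hand_size))
               for _ in range(trials))
    return hits / trials
```

A fixed seed keeps runs repeatable, which matters when you're comparing tweaks between runs rather than chasing random noise.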

This last example is worth exploring in greater detail, in the hope it will help someone else who wants to use similar techniques. First, all of these simulators are written in Python. Python is simply the easiest language to work in for this type of prototyping: the presence of a robust numerical package, not to mention simple data handling and rapid productivity, makes the choice something of a no-brainer. I'm afraid a Python primer is somewhat beyond the scope of this article, but there are a lot of resources on the web for getting up to speed.

Getting beyond that, into the code, the KMATTS simulator features some simple command line arguments (look at the main() method in the code):

    import argparse

    # TRIALS is a module-level default defined earlier in the script
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--count', dest='trials', type=int,
                        default=TRIALS, help='How many trials to run?')
    parser.add_argument('-f', '--filename', dest='filename', type=str,
                        default='output.csv', help='Output file name')
    args = parser.parse_args()
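
As a quick illustration of how those arguments behave, here's a standalone version you can poke at (I've filled in a TRIALS default of my own, since the real script defines its own constant):

```python
import argparse

TRIALS = 100_000  # stand-in default; the actual script defines its own

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--count', dest='trials', type=int,
                    default=TRIALS, help='How many trials to run?')
parser.add_argument('-f', '--filename', dest='filename', type=str,
                    default='output.csv', help='Output file name')

# Passing an explicit argv list makes the parser easy to exercise in tests
args = parser.parse_args(['--count', '500'])
print(args.trials, args.filename)  # 500 output.csv
```

Omitted arguments fall back to their defaults, which is what makes quick short-trial runs painless.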

I always make sure that my simulators have a count argument: I'll run shorter trials until I have confidence I'm testing the right thing, and an output parameter allows me to compare runs of the program easily. After that, I open up a CSV file for output, and then I run a series of trials, with the number of dice ranging from 3 to 10. The goal there was to compare how the value of a reroll (or other power) varied depending on the number of dice I was rolling. The CSV made it easy to pull into Excel for additional analysis. One Python feature that makes things easy shows up in this section:

        for base in ['Mean Scoring (w/ {0} defense choices)']:
            for title in title_row_gen(base):
                output_row.append(title)
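
For context, the generator that loop calls could look something like this (my own sketch; the real title_row_gen is in the downloadable script and differs in detail):

```python
def title_row_gen(base, max_choices=3):
    # Yield one formatted column title per defense-choice count,
    # filling the {0} placeholder in the base string
    for choices in range(1, max_choices + 1):
        yield base.format(choices)
```

With the base string from the loop above, this would yield titles like 'Mean Scoring (w/ 1 defense choices)', 'Mean Scoring (w/ 2 defense choices)', and so on.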

That loop, driven by a generator, makes it really easy to create a header row for the output. The heart of the simulator is attack_trials(), which contains a loop running over all of my trials. The code on the site is from when I was exploring the impact of selecting a value for a defense die on attack rolls, so it starts by setting up some housekeeping. It then accumulates results in a set of arrays: one for scoring, and one for each defense die count. After finishing all of the trials, it uses numpy to compute means:

    output_row = [str(dice_count), str(numpy.array(scoring).mean())]
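
Pulled together, a stripped-down version of that structure might look like the following (my own sketch with stand-in dice and scoring rules, not the actual attack_trials code; I've used the standard library's statistics.fmean in place of numpy so the sketch runs with no dependencies):

```python
import csv
import random
import statistics

def score(dice):
    # stand-in scoring rule: count dice showing 5 or higher
    return sum(1 for d in dice if d >= 5)

def attack_trials(trials, filename, seed=0):
    rng = random.Random(seed)
    with open(filename, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['Dice', 'Mean Scoring'])
        for dice_count in range(3, 11):  # 3 through 10 dice, as above
            scoring = [score([rng.randint(1, 6) for _ in range(dice_count)])
                       for _ in range(trials)]
            writer.writerow([dice_count, statistics.fmean(scoring)])
```

One row per dice count, one column per statistic: exactly the shape that drops cleanly into Excel for further analysis.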

Those are the basics. One of the interesting things about this script is that there are a lot of parts left in there for previous things I tested. You can see three different scoring variations (scoring_version_1, scoring_version_2, and, logically enough, scoring_version_3), nine different dice manipulations, two different types of defense dice (random and selecting), and four different dice combinations (only three of which made it into the system). By running the script, saving the output, and modifying as I went, I was able to slowly accumulate the data that I wanted to try and tune my system.
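
One small pattern that helps when a script accumulates variants like that is dispatching through a dict, so switching versions is a one-line change (a sketch with made-up scoring rules, not the script's actual variations):

```python
# Hypothetical stand-ins for the script's scoring variations
def scoring_version_1(dice):
    return sum(dice)

def scoring_version_2(dice):
    return max(dice)

def scoring_version_3(dice):
    return sum(1 for d in dice if d >= 5)

SCORING = {
    'v1': scoring_version_1,
    'v2': scoring_version_2,
    'v3': scoring_version_3,
}

score = SCORING['v2']  # swap variants by editing one string
print(score([3, 5, 6]))  # 6
```

Keeping the old versions around in a table like this means every run's output can be labeled with the variant that produced it.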

Using this simulator, I was able to quickly and easily roll millions of sets of dice and see what the effects of different choices would be. While it's no substitute for playing the game, it still gave me a lot of insight into what some smart starting choices were. When it's appropriate, I'll keep building simulators to give me labs for changing up the parameters and rules of my games.