I’ve enjoyed playing games as long as I can remember. Among my earliest memories are playing Candy Land, Chutes and Ladders, Don’t Break the Ice, and Don’t Spill the Beans. When I was a child, whenever someone did not know what to get me for a birthday or Christmas present, a game was always a good choice. Today, in the back room of our house, we have a closet filled with games that my children and I have accumulated over the years. The rest of our games are either in a closet upstairs or in one of several large boxes in the attic. Periodically we rotate the location of the games for variety.
Many of the games I enjoyed playing involve a combination of strategy and randomness: card games of various sorts, backgammon, and board games like Monopoly and Parcheesi. Some games that rely exclusively on chance (like War and Candy Land) or too heavily on chance (like Sorry) quickly became uninteresting to me. In fact, for Sorry, War, and several other games, I introduced additional rules to change the balance of strategy and luck—for example, by allowing each player to hold a hand of cards rather than merely flipping a card and doing its bidding.
When my children were young, I played many games with them, especially those involving some amount of chance. I always play to win, so games of pure strategy like chess gave me too great an advantage—at least when they were still young. I still remember the first time I played the German game Mitternachtspartie with my children and some of their cousins. The game uses a die on which the number 5 has been replaced with the image of Hugo the ghost. Each player rolls the die and moves one of his figures the specified number of squares, unless Hugo is rolled, in which case Hugo moves instead.
I quickly worked out the expected distance Hugo would move for each of my turns and the expected number of squares I would get to move my own figures each turn. Using that information, I could strategically place my figures in the opening portion of the game. I fully expected to win this first game, since my young children were going to have to learn from experience what I already knew by the mathematics of probability. I lost—badly. As it turned out, the die had two Hugos on it. So compared to my expectations, Hugo moved twice as often, and my figures moved slightly less far. That combination turned the carefully calculated positioning of my figures into a disaster.
From Fun and Games to Science
I still enjoy playing games, including games that involve chance. But these days I encounter randomness even more often in my profession. I was trained as a mathematician and now work at the intersection of mathematics, statistics, and computer science. Like many scientists, I use randomness daily as part of a toolkit for modeling and investigating all sorts of phenomena. Stochastic models, models that explicitly incorporate random components (often via simulation in computer software), are used to study everything from diffusion to genetics to quantum mechanics. Insurance companies and financial institutions use stochastic models to manage risk. If we include all the applications of statistics, then almost no area of science is untouched by the use of randomness.
Most of the time, scientists and game players alike don’t devote much thought to just what makes randomness tick. But they both know that the better they understand the probabilities, the more successful they are. Nevertheless, if you ask many of them what it means for something to be random, they may struggle to put it into words. I won’t try to give a precise definition either, but it is important that we have some idea what we are talking about, so let’s consider one of the prototypical examples of randomness: the tossing of a fair coin.
If I flip a coin, the result could be heads or tails. Until I flip the coin, I don’t know which it will be. In this sense, the coin toss is unpredictable. If the coin is fair, each result is equally likely, so while I cannot say in advance whether a particular result will be heads or tails, I can say something about a large number of flips: approximately half should be heads and the other half tails.
A little mathematics even allows me to determine a range around 50% in which the percentage will almost surely lie. For example, if I flip a fair coin 1,000 times, the percentage of heads will most likely be between 45% and 55% (where “most likely” means a 99% chance). If the percentage of heads lies outside this range—especially if it is quite far outside this range—I am going to be suspicious that the coin flipping process is not fair. That’s one of the key ideas in statistics: not only can we calculate the frequency with which an event occurs, but we can compare data to a stochastic model to see if they are compatible or incompatible.
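That 99% claim is easy to check by simulation. Here is a minimal Python sketch (my illustration, not part of the original analysis) that repeats the 1,000-flip experiment many times and tallies how often the percentage of heads lands between 45% and 55%:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def heads_percentage(n_flips):
    """Flip a simulated fair coin n_flips times; return the percentage of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return 100 * heads / n_flips

# Repeat the 1,000-flip experiment and count how often the result
# falls in the 45%-55% range described above.
trials = 2000
in_range = sum(45 <= heads_percentage(1000) <= 55 for _ in range(trials))
print(f"{100 * in_range / trials:.1f}% of experiments fell in the 45-55% range")
```

Running this typically shows well over 99% of experiments inside the range, consistent with the calculation.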
There are several interesting things we can learn by considering a coin toss. First, probability calculations rely on assumptions. If the assumptions are incorrect, then the probability calculations will also be incorrect. For example, if the coin is biased (such as one that is heads 60% of the time), but we assume it is fair, then the probability calculations given above will be wrong. Of course, if the assumptions are not too far from correct, the results may still be sufficiently accurate for scientific conclusions. If we have an appropriate way to collect data, then we can test our assumptions by comparing data to predictions based on those assumptions.
Second, “random” does not imply “equally likely.” A fair coin should have equal probabilities of heads or tails, but a biased coin is no less random. It’s just different. It is not as simple to handle arithmetically as a situation in which all outcomes are equally likely, but it is not otherwise special. It is a common mistake to assume random events are equally likely when they are not (or when that assumption is not justified).
Third, randomness is about the process. It is a fun experiment to flip a penny 100 times, then spin a penny 100 times and record the side that is showing when it finally tips over, then stand the penny on end (this takes a steady hand and a little practice) and record which side is showing after pounding the table. These are three different processes, and they do not yield the same results.
Fourth, random processes produce patterns. I sometimes ask my students to mentally flip a coin and record the results as a sequence of letters (e.g., “HTTHHTHT”). Then I have them actually flip a coin and record the results. If the sequences are long enough, I can almost always tell them which is which. The sequences imagined by the students tend to have too few runs of consecutive heads or tails. The sequences based on real coin flips usually include several heads in a row. People not familiar with randomness are often surprised at the patterns that result and assume that the process must not have been random when they perceive a pattern. Our eyes and minds are drawn to similarities and patterns—even those that are produced purely randomly. This can lead us to draw false conclusions from coincidences of all sorts.
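The run-length effect is easy to demonstrate by simulation. The Python sketch below (my illustration; the cutoff of six is an arbitrary choice) estimates how often 100 honest flips contain a run of at least six identical results in a row:

```python
import random

random.seed(2)  # fixed seed so the run is reproducible

def longest_run(flips):
    """Length of the longest streak of identical results in a sequence."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# How often do 100 real flips contain a run of 6 or more?
trials = 5000
long_runs = sum(
    longest_run([random.choice("HT") for _ in range(100)]) >= 6
    for _ in range(trials)
)
print(f"{100 * long_runs / trials:.0f}% of 100-flip sequences contain a run of 6+")
```

A large majority of genuinely random 100-flip sequences contain such a run, whereas sequences people invent in their heads rarely do.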
Consider the image in Figure 1. It was constructed using a computer to randomly throw 300 darts at a square board. Every position on the board was equally likely to be hit by a dart. This does not, however, mean that the dots are evenly spaced. There are 100 smaller squares. The average is three dots per square. But your eye is likely drawn to some clusters and voids. My eye also catches a graceful downward swoop in the lower part of the upper left quarter. All of this is exactly what we should expect from this random process. If we repeated this experiment, we should expect similar results. Several of the smaller squares would be empty and some others would have two or three times the average number of dots, but these clusters and voids would appear in different places.
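A simulation along the same lines as Figure 1 can be sketched in a few lines of Python (the grid bookkeeping below is my own; the actual figure was generated by other software):

```python
import random

random.seed(3)  # fixed seed so the run is reproducible

# Throw 300 simulated darts uniformly at a unit square divided into
# a 10 x 10 grid of smaller squares (300 darts / 100 squares = 3 per
# square on average, as in the figure described in the text).
counts = [0] * 100
for _ in range(300):
    x, y = random.random(), random.random()
    counts[10 * int(10 * y) + int(10 * x)] += 1

empty = sum(c == 0 for c in counts)
crowded = sum(c >= 6 for c in counts)  # at least twice the average
print(f"empty squares: {empty}, squares with 6+ dots: {crowded}")
```

Nearly every run produces a handful of empty squares and a handful with twice the average or more; only their locations change from run to run.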
Finally, randomness can be used to produce patterns intentionally. Consider the two pictures in Figure 2. You may think the two pictures are identical, but they are not. Yet each was constructed by running the same random process:
1. Start at the lower left corner of the big triangle.
2. Randomly choose one of the three corners of the big triangle.
3. Move halfway to that corner, placing a dot at the new location.
4. Repeat steps 2 and 3 a total of 50,000 times.
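The steps above can be sketched in Python; the corner coordinates below are my own choice of a convenient triangle, not necessarily the one used for the figures:

```python
import random

random.seed(4)  # fixed seed so the run is reproducible

# Corners of the big triangle; the lower left corner is the start.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n_points):
    """Run the 'move halfway to a random corner' process n_points times."""
    x, y = corners[0]                      # step 1: start at the lower left
    points = []
    for _ in range(n_points):
        cx, cy = random.choice(corners)    # step 2: pick a corner at random
        x, y = (x + cx) / 2, (y + cy) / 2  # step 3: move halfway toward it
        points.append((x, y))
    return points

points = chaos_game(50_000)
# Every dot stays inside the big triangle; plotting the points (e.g.,
# with matplotlib) reveals the Sierpinski triangle pattern.
print(points[:3])
```

Two runs of this program place their dots in entirely different orders, yet both plots converge on the same fractal shape, which is exactly the point of the example.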
The first few steps of this process for each image are illustrated in Figure 3. Although the final images look very similar, the route taken to get there is very different. In fact, the only point the two images have in common is the starting point. As the creator of the program that generated these images, I knew full well that the result would resemble a fractal image known to mathematicians as Sierpinski’s Triangle, even though I did not know or exercise any control over how the individual points would be selected.
Despite our familiarity with children’s games and the importance of stochastic models throughout the sciences, many Christians have a reaction to randomness that falls somewhere between uneasy and antagonistic. And yet, those same Christians may well watch the evening news to learn about public opinion polls forecasting upcoming elections, take prescription drugs approved by the FDA based on statistics found in clinical trials, obtain electrical power from a nuclear power plant that uses random fission reactions, and insure their cars with companies that rely on stochastic models to set the rates. The foundation of each of these activities is a thorough understanding of randomness that begins with the simple description above.
So where does the uneasiness come from? Likely it comes from the feeling that taking randomness seriously means not taking God seriously. Or put more strongly, it comes from a fear that believing in randomness means not believing in God. Next week we’ll address that problem by asking the question, “Could God use randomness to achieve his purposes?”