- Peter Ivan Edwards

# Uniform Waves: Part 1, in which a new form of randomness is introduced

In the previous series of posts, in which *Shuffled* was created, I relied exclusively on random permutation to make pitch determinations. Before I learned about computer music, I thought that the results of randomly generated values were like those of random permutation, that is, I presumed - honestly, I'd never really thought about it - that if the process of randomly choosing a value from a collection of 10 numbers - from 0 to 9 for the sake of this example - was repeated 10 times, then a set of results would include all 10 numbers. It would look something like [5, 1, 9, 8, 4, 0, 7, 2, 3, 6]. In other words, the 10 values would be randomly permuted. It turns out that this is a particular approach toward randomness. There are many different approaches toward randomness, random permutation being just one. The method addressed in this new series of posts, which will end in the creation of a work called *Uniform Waves* for 2 pianos, is a primary one. It's called uniform distribution.

Uniform distribution is very familiar to us. A coin toss is an example of it. There are two sides to a coin and an equal chance of getting either with each new toss. A die is another example. There are 6 sides, each with a 16.67% chance of being the top side with each new roll. There is, however, a very important distinction between random permutation and uniform distribution, which can be demonstrated with a series of coin tosses.

If we allow the value 1 to represent heads and -1 to represent tails, then the series of coin tosses I just carried out before writing these words yields the following collection:

[1, 1, -1, -1, -1, -1, -1, -1, -1, 1]

If such a series were generated with random permutation, the values would essentially alternate between 1 and -1, since each round of permutation uses each of the 2 possible values exactly once. Instead, the results here repeat values consecutively; -1 is repeated 7 times, for instance. On a graph, it looks like a particularly bad day in the stock market.

You may have experienced a coin toss in which you kept choosing one side, waiting for your luck to turn, but it never did. After tails was drawn 10 times, you thought, "If each side of a coin has a 50% chance of occurring, then heads must come next!" But if that were true, then tails, in this example, would need to have a decreasing chance of occurrence with each repeated appearance. Instead, the probability is always 50/50. A toss has no memory of its ancestors, so it doesn't matter that the last 10 favored a particular side. In this regard, uniform distribution is not compatible with human expectations. We also need to remember that a sample size of 10 coin tosses is very small compared to a theoretically infinite number of coin tosses. The 50/50 probability becomes increasingly evident as we carry out increasingly more coin tosses. But who has time for that kind of highfalutin thinking when trying to win the last cherry popsicle rather than getting stuck with the lousy grape one?
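That long-run tendency toward 50/50 is easy to check empirically. The examples in this series are in Lisp, but as a quick illustration, here is a Python sketch (the function name and seed are my own) that counts heads across ever larger batches of tosses:

```python
import random

def toss_coins(n, rng):
    """Return a list of n coin tosses: 1 for heads, -1 for tails."""
    return [rng.choice([1, -1]) for _ in range(n)]

rng = random.Random(42)  # seeded so the run repeats exactly
for n in (10, 100, 10_000):
    tosses = toss_coins(n, rng)
    print(f"{n:>6} tosses: {tosses.count(1) / n:.3f} heads")
```

With only 10 tosses the proportion of heads can stray far from 0.5; by 10,000 tosses it hugs 0.5 closely, even though each individual toss remains a memoryless 50/50 event.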

A fundamental difference between random permutation and a uniform distribution lies in their generation procedures, more specifically, whether selected values are replaced or removed. In probability theory, an urn scenario is usually used to explain this. Imagine an urn filled with balls, each bearing a unique number from 0 to 9. You stick your hand in, shuffle the balls around, and then choose one, taking it out of the urn. Finally, you record the number you see on the ball. But a crucial procedural move occurs next. What do you do with this ball? Do you put it back in the urn, or set it to the side? That is, do you replace it or remove it? With random permutation, you remove it. That process could look something like this in code.

```
(setf urn-1 '(0 1 2 3 4 5 6 7 8 9))
(setf first-selection (rnd-pick urn-1)) => 3
(setf urn-2 (remove first-selection urn-1)) => '(0 1 2 4 5 6 7 8 9)
(setf second-selection (rnd-pick urn-2)) => 0
(setf urn-3 (remove second-selection urn-2)) => '(1 2 4 5 6 7 8 9)
...
(setf urn-9 (remove eighth-selection urn-8)) => '(5 9)
(setf ninth-selection (rnd-pick urn-9)) => 9
(setf urn-10 (remove ninth-selection urn-9)) => '(5)
(setf tenth-selection (rnd-pick urn-10)) => 5
```

This is a very longwinded way to do random permutation, but it demonstrates the process clearly. The variable **urn-1** is just a list of the integers from 0 to 9. One of the values is randomly picked. That is set to the variable **first-selection**. In the scenario above, it returns 3. For the next selection, we remove **first-selection** from **urn-1** and set that new list as a variable called **urn-2**. In this way, we ensure the **first-selection** value cannot be selected again. Then, a random value is selected from **urn-2**. This process continues. Showing all selections isn't necessary. Hence, only the first 3 and the last 2, **urn-9** and **urn-10**, are given. Notice **urn-9** only has 2 values. Effectively, in **ninth-selection** each value has a 50/50 chance of being selected. That's very different from **first-selection**, where each has a 10% chance of being selected. With **tenth-selection** there is a 100% chance that 5 will be selected since it is the only remaining value in **urn-10**. This increase in probability for unselected values is what we, on some level, presume is happening with a coin toss. But a coin toss is a different scenario.
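As an aside, the same pick-then-remove procedure can be sketched compactly in Python (the Lisp above is the post's own; the function name here is mine):

```python
import random

def permute_urn(urn, rng):
    """Draw every ball without replacement: a random permutation."""
    remaining = list(urn)       # copy, so the original urn is untouched
    selections = []
    while remaining:
        ball = rng.choice(remaining)
        remaining.remove(ball)  # the chosen ball is NOT put back
        selections.append(ball)
    return selections

rng = random.Random(7)
print(permute_urn(range(10), rng))
```

Because every chosen ball is removed, the result always contains each of the 10 values exactly once, just as **urn-1** through **urn-10** guarantee above.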

Only a small change to the Lisp code above is needed to demonstrate what happens with uniform distribution, but it's a crucial difference.

```
(setf urn '(0 1 2 3 4 5 6 7 8 9))
(setf first-selection (rnd-pick urn)) => 5
(setf second-selection (rnd-pick urn)) => 3
(setf third-selection (rnd-pick urn)) => 5
(setf fourth-selection (rnd-pick urn)) => 6
(setf fifth-selection (rnd-pick urn)) => 6
(setf sixth-selection (rnd-pick urn)) => 2
(setf seventh-selection (rnd-pick urn)) => 3
(setf eighth-selection (rnd-pick urn)) => 4
(setf ninth-selection (rnd-pick urn)) => 1
(setf tenth-selection (rnd-pick urn)) => 1
```

Note that there is only 1 urn variable called **urn**. There's no need for successive urns because the state of the urn never changes. In this scenario, if a value is selected from the imaginary urn, then it is replaced before selecting the next value. All 10 values always have a 10% chance of being selected. Note the repetition of the values 1, 3, 5, and 6 in the results. Such repetition is impossible with random permutation because every selected value is removed before the next selection.
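In the same Python terms as before (again, a sketch of the idea, not the post's Lisp), the with-replacement version collapses to one unchanging urn and repeated independent picks:

```python
import random

def draw_with_replacement(urn, n, rng):
    """Draw n balls, putting each one back before the next draw."""
    urn = list(urn)
    return [rng.choice(urn) for _ in range(n)]  # the urn never shrinks

rng = random.Random(3)
print(draw_with_replacement(range(10), 10, rng))
```

Here nothing is ever removed, so each draw sees the full urn and every value keeps its 10% chance; consecutive repetitions are perfectly possible.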

In the next post, we'll explore what all of this could possibly have to do with music.