## Optimized Evolutionary Algorithm for Keyboard Design Part 1

**Warning: Math-heavy post ahead. Read at your own risk.**

Trying to optimize a keyboard layout is tricky business. There are 30 factorial (30*29*28…) possible keyboard layouts. That’s about 10 to the 32nd power, which is far too many to search one by one for the best. So an evolutionary algorithm (sometimes I call it a genetic algorithm, but there are subtle differences so that’s technically wrong) is a good way to find an optimal solution. This site provides a good introduction to genetic algorithms.

A keyboard optimization consists of several steps.

Step 1: Create a pool containing *p* randomly-generated keyboard layouts.

Step 2: Score each keyboard according to a fitness function and sort the keyboards by score.

Step 3: Delete the lowest-scoring half of the pool and create a copy of each remaining keyboard.

Step 4: Mutate the copies by randomly swapping the positions of two keys *m* times.

Step 5: Repeat steps 2-4 until an optimal keyboard is reached.
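The five steps above can be sketched in a few lines of Python. This is only a sketch under my own assumptions: the scoring function is a placeholder (I assume a lower score is better, e.g. total finger travel), and the key set and helper names are mine, not part of the algorithm itself.

```python
import random

KEYS = list("abcdefghijklmnopqrstuvwxyz;,./")  # the 30 letter/punctuation keys

def random_layout():
    layout = KEYS[:]
    random.shuffle(layout)
    return layout

def mutate(layout, m):
    """Step 4: swap the positions of two randomly chosen keys, m times."""
    child = layout[:]
    for _ in range(m):
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def evolve(score, p=64, m=1, rounds=500):
    """Steps 1-5: random pool, score, cull the worst half, copy and mutate."""
    pool = [random_layout() for _ in range(p)]   # step 1
    for _ in range(rounds):
        pool.sort(key=score)                     # step 2 (lower = better here)
        pool = pool[: p // 2]                    # step 3: delete the worst half
        pool += [mutate(k, m) for k in pool]     # step 4: mutate the copies
    return min(pool, key=score)
```

Note that the best layout always survives the cull, so the best score in the pool can never get worse from one round to the next.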

This is not the complete algorithm, as you will see later. And the fitness function is an entirely different matter. But even so, there are a few questions. What values should be used for *p* and *m*? How do we know when an optimal keyboard is reached?

I’ll start by answering that last question. I know of three ways to know when to stop. The first and simplest is to just run the algorithm for some predetermined number of rounds. The advantage to this is that it’s easy and doesn’t require much computing power. The disadvantage is that you don’t actually know if the resulting keyboard is the best you can get. To be very sure that the keyboard is the best you can get, you have to do a huge number of rounds, so if you really want to be sure then it’s not fast at all.

The second way is to stop when the best keyboard in the pool has reached a certain level of fitness. This can be tricky: you have to know what level of fitness you want. If you know that, this method can get you to exactly where you want but no better. The problem here is, what if it’s impossible to reach the desired level of fitness? Sometimes a keyboard will get “stuck”: it’s not yet the optimal keyboard, but no single change improves it. Let’s imagine a scenario. Pretend, just for this exercise, that QWERTY is the perfect keyboard layout (it isn’t, of course), and that we’re trying to evolve towards it. Let’s say our algorithm has come up with this:

```
Current Optimal Keyboard      QWERTY
QSCWD PO;IL                   QWERT YUIOP
VEFBR UKYJM                   ASDFG HJKL;
ZAXGT HN,./                   ZXCVB NM,./
```

Let’s say that any possible change will make the keyboard worse. But it still hasn’t reached the optimal keyboard. So it’s stuck. It can’t change, so it can’t reach QWERTY. It has to get worse before it gets better. Are there any solutions to this problem? Perhaps; I’ll discuss this more later. What this shows for now is that the second method is not optimal.

The third method is this: look at the best keyboard layout in the pool and remember it. If the best layout stays the same for *b* rounds, that is, it isn’t replaced with any better keyboard, then we can be pretty sure that the algorithm will never find a better keyboard layout. This method can be unpredictable, but it is the most likely to efficiently find the best possible keyboard layout.
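With p = 1 and m = 1 the select-copy-mutate loop collapses to a simple hill climber, which makes this stagnation rule easy to see in code. Again a sketch, not the full pooled algorithm; the score function is a placeholder with lower = better.

```python
import random

def climb_until_stagnant(score, keys, b=435):
    """Swap two random keys; keep the swap only if it improves the score.
    Stop once the layout has survived b consecutive rounds unchanged."""
    layout = keys[:]
    random.shuffle(layout)
    best = score(layout)
    stale = 0
    while stale < b:
        i, j = random.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]
        s = score(layout)
        if s < best:
            best, stale = s, 0          # improvement: reset the counter
        else:
            layout[i], layout[j] = layout[j], layout[i]  # undo the failed swap
            stale += 1
    return layout
```

The choice b = 435 is not arbitrary; the reasoning behind it comes up later in the post.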

Now let’s revisit the problem of getting “stuck”. I know of three ways to fix this. Since these three are not mutually exclusive, any or all of them can be used.

The first method is fairly simple. Generate an optimized layout as described above. Then generate another optimized layout. Continue generating optimized layouts some number of times. Any of the three stopping methods described above can also be used here to decide when to stop. The difference here is that each resulting keyboard is created independently, so they don’t get “stuck”. Advantages: the layouts can be very different, and variety is good; getting stuck every time becomes extremely unlikely. Disadvantages: it’s very slow. To get really optimal results, this method can make the whole process take a thousand times longer or more.

The second method is to use roulette selection. Instead of deleting the worst half of the layouts in the pool, delete keyboards randomly. But not completely randomly: worse keyboards have a higher probability of being deleted. Advantages: it is much less likely that a layout gets stuck; it’s faster than the first method. Disadvantages: as with any random function, it’s unpredictable so it’s possible to completely miss a really good layout; this doesn’t matter so much, but it requires significantly more program code than there would be otherwise.
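Here is a minimal sketch of roulette deletion, under two assumptions of mine that aren’t in the text: scores are positive, and lower = better, so a layout’s score can be used directly as its deletion weight.

```python
import random

def roulette_cull(pool, score):
    """Delete half of the pool at random, with worse (higher-scoring)
    layouts proportionally more likely to be picked for deletion.
    Assumes lower score = better and all scores are positive."""
    pool = pool[:]
    target = len(pool) // 2
    while len(pool) > target:
        weights = [score(lay) for lay in pool]
        # pick a victim with probability proportional to its (bad) score
        victim = random.choices(range(len(pool)), weights=weights)[0]
        pool.pop(victim)
    return pool
```

Note that even the best layout has some chance of being deleted here; if that matters, keep a copy of the best-so-far outside the pool.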

The third method involves increasing the value of *m*, the number of mutations. If there are more mutations per iteration, it will increase the potential for change. This allows a layout to get a lot better. But it also allows a layout to get a lot worse, so a high value for m will seriously slow down the program.

For most effective optimization, the first two methods will be used. Now let’s look at our new and improved algorithm.

Step 1: Create a pool containing *p* randomly-generated keyboard layouts.

Step 2: Score each keyboard according to a fitness function and sort the keyboards by score.

Step 3: Randomly delete half of the pool (giving preference to keyboards with lower fitness) and create a copy of each remaining keyboard.

Step 4: Mutate the copies by randomly swapping the positions of two keys *m* times.

Step 5: Repeat steps 2-4 until the best keyboard in the pool has not changed for *b* rounds.

Step 6: Place this best keyboard in pool O and sort the keyboards in O by score.

Step 7: Repeat steps 2-6 until the best keyboard in pool O has not changed for *o* rounds.

Step 8: Repeat steps 2-4 using pool O until the best keyboard in the pool has not changed for *q* rounds.

This is a bit longer, but still pretty simple. The remaining question is, what should the values be for the constants *p*, *m*, *b*, *o* and *q*?
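Before getting to the constants, the eight steps might look like this in Python. Everything here is a sketch under assumptions of my own: lower score = better, the roulette deletion weights are score + 1 (the +1 tolerates a pool where every score is zero), and the helper names are mine.

```python
import random

def mutate(layout, m):
    child = layout[:]
    for _ in range(m):
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def cull(pool, score):
    """Step 3: weighted random deletion of half the pool (worse = likelier)."""
    pool = pool[:]
    target = len(pool) // 2
    while len(pool) > target:
        weights = [score(lay) + 1 for lay in pool]  # +1 guards all-zero scores
        pool.pop(random.choices(range(len(pool)), weights=weights)[0])
    return pool

def evolve_until_stagnant(pool, score, m, b):
    """Steps 2-5: loop until the best layout goes b rounds without improving."""
    best, stale = min(pool, key=score), 0
    while stale < b:
        pool = cull(pool, score)
        pool += [mutate(k, m) for k in pool]
        challenger = min(pool, key=score)
        if score(challenger) < score(best):
            best, stale = challenger, 0
        else:
            stale += 1
    return best, pool

def improved_search(score, keys, p=32, m=1, b=435, o=10, q=435):
    O, o_best, o_stale = [], None, 0
    while o_stale < o:                                   # steps 2-7
        pool = [random.sample(keys, len(keys)) for _ in range(p)]
        best, _ = evolve_until_stagnant(pool, score, m, b)
        O.append(best)                                   # step 6: bank the winner
        if o_best is None or score(min(O, key=score)) < score(o_best):
            o_best, o_stale = min(O, key=score), 0
        else:
            o_stale += 1
    best, _ = evolve_until_stagnant(O, score, m, q)      # step 8: polish pool O
    return best
```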

Let us assume that p = 1 and m = 1 (that is, there is one layout in the pool, and a mutation swaps the positions of two keys). This makes it very simple to calculate b. We want to iterate long enough to be pretty sure that no improvement will be made. The keyboard that is least likely to improve is one where every key is in the best spot, except for two which are exactly swapped. So swapping those two keys is the only possible way to make the keyboard better. The probability of swapping those two keys is one out of 435 (two out of 30 chance to select one of the two keys multiplied by a one out of 29 chance to select the matching key).

If the keys to swap are being selected randomly, then the chance of missing the one improving swap for *b* straight rounds is (434/435)^*b*; to push that below roughly 10%, you need b ≈ 1000. But if the key swaps are not actually random but instead cycle through all 435 possible pairs in a pseudorandom order, then after 435 iterations you are guaranteed to hit the one swap that’s an improvement. So if there are no changes for 435 iterations, you can be positive that the keyboard is not going to get any better.

This can be extrapolated for higher values of m. Let’s say that m = 2. Worst-case scenario, the only possible improvement is two specific swaps. The probability of swapping those two pairs of keys is one out of 94612.5 (four out of 30 chance to select one of the four keys multiplied by a one out of 29 chance to select the matching key, multiplied by a two out of 30 chance to select a key from the remaining pair multiplied by a one out of 29 chance to select the matching key). We can then create a formula for any value of m where p = 1.

`b = 1 / [ (1/29)^m * (2m / 30) * (2(m-1) / 30) * (2(m-2) / 30) * . . . * (2(1) / 30) ]`
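The probability of making exactly the *m* required swaps can be computed directly, and its reciprocal is the guaranteed bound *b*. A quick sanity check, assuming 30 keys:

```python
from math import prod

def b_for(m, keys=30):
    """Reciprocal of the probability of performing exactly the m
    specific swaps that constitute the only possible improvement."""
    p_swap = (1 / (keys - 1)) ** m * prod(2 * k / keys for k in range(1, m + 1))
    return 1 / p_swap

print(b_for(1))  # ≈ 435, matching the m = 1 case above
print(b_for(2))  # ≈ 94612.5, matching the m = 2 case
```

The bound explodes as *m* grows (for m = 3 it is already in the tens of millions), which is one argument for keeping m small.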

From this we can try to find an optimal value for m. We want to find a balance between mutability and efficiency. Perhaps the balance is as low as m = 1, since there are better ways to allow for mutability.

So we have m = 1 and b = 435. But should b really be such a large number? If it’s smaller, there will be a lower chance of reaching the optimal layout, but the program will run much faster. Instead of looking at the second best layout, let’s look at the third best. That is, there are two possible swaps that would be an improvement. The probability of getting one of these swaps in a random mutation is 1 out of 218. (This is of course an oversimplification; the second best layout might not be at all similar to the best layout. But this model works for our purposes.)

In general, if we are satisfied with a layout that has *n* possible improving swaps remaining, we should set `b = 1 / [(2n/30) * (1/29)] = 435/n`. You may say, why would we compromise? Well, remember that we are later going to move on to an optimized pool O, and in this pool we can be more stingy. The average layout has about 218 possible improvements, so a layout with only, say, 50 possible improvements is going to be one of the better ones. To reach a layout with 50 possible improvements, only about 9 iterations are necessary (435/50 ≈ 9). If we want a layout with only 5 possible improvements, 87 iterations are necessary. All things considered, that’s a pretty good equilibrium.

Now we must factor in what happens when we have p > 1; that is, there is more than one keyboard in the pool. Things become much more complicated. That will be the topic of discussion in part 2.

Here’s something I’ve had some success with in preventing early plateauing: Watch the standard deviations of the parameters you’re optimizing (in your case, keys). When a parameter’s standard deviation gets below a certain point (i.e., most individuals have the same character on a particular key), randomly choose an individual and reset that parameter (i.e., swap the “stale” key with any other key). Do this for all parameters at the beginning or end of every generation.

The nice thing about this method is, since it’s not fitness-based, it can help shake the population off a local extremum before it has the chance to get really stuck there (i.e., when only one or two parameters have settled across the population, rather than every parameter).
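Here is a sketch of that reset idea. Since keys are discrete, I measure “staleness” as the fraction of the pool sharing the same character at a position (a stand-in for the standard deviation mentioned above); the threshold value is my own assumption.

```python
import random
from collections import Counter

def reset_stale_keys(pool, threshold=0.9):
    """For each key position, if at least `threshold` of the pool agrees on
    the character there, swap that key with a random other key in one
    randomly chosen layout. Mutates the layouts in place."""
    n = len(pool)
    for pos in range(len(pool[0])):
        (_, count), = Counter(lay[pos] for lay in pool).most_common(1)
        if count / n >= threshold:
            lay = random.choice(pool)
            other = random.randrange(len(lay))
            lay[pos], lay[other] = lay[other], lay[pos]
```

Called once per generation, this perturbs at most one layout per stale position, so it nudges the population rather than scrambling it.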