Archive for the ‘Keyboarding Theory’ Category

Typing Data: Preliminary Analysis

November 27, 2011 10 comments

I have collected a large quantity of typing data using Amphetype, on both QWERTY and MTGAP 2.0 (the two layouts that I currently know). I do not have any conclusive results, but I have some interesting data that I thought worth sharing.

My most interesting discovery is that there is a statistically significant correlation between frequency of a trigram and the average speed at which it is typed. On MTGAP 2.0 the correlation is 0.34 and on QWERTY it is 0.33. This means that a trigram’s frequency accounts for about 10% of the variation in typing speed—not a lot, but still enough to merit consideration.
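For anyone who wants to check this kind of number themselves: the correlation here is an ordinary Pearson r, and squaring it gives the share of variance explained. A minimal sketch, with made-up frequency/speed pairs for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up (frequency, wpm) pairs for a handful of trigrams.
freqs = [0.012, 0.009, 0.004, 0.002, 0.001]
speeds = [125, 118, 112, 104, 99]
r = pearson_r(freqs, speeds)
print(r, r * r)  # r**2 is the share of variance explained
```

With r around 0.33, r² comes out near 0.11, which is where the "about 10% of the variation" figure comes from.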

Then I analyzed the speeds of various key combinations. For example, on MTGAP 2.0, the average speed for a trigram containing an inward roll is 121 words per minute (wpm); for a trigram containing an outward roll, the average is 110 wpm, and for a trigram containing neither, it is 111 wpm.

When all three keys are typed with one hand, the average is 104 wpm; when two are typed with one hand and one with the other, the average is 118 wpm; and when the hand alternates on every key, 107 wpm.

Where the total finger travel distance is short, the average is 120 wpm; for medium distance, 111; and for a long distance, 105.

It would be premature to draw conclusions from these data. For example, the reason why short finger travel distance is faster may be because MTGAP 2.0 intentionally places common keys on the home row, and common keys tend to be typed faster. On QWERTY, the average speeds for short, medium, and long distance are 96, 102, and 104 wpm, respectively. In this case, the short-distance keys are the slowest.

I am currently looking for anyone who uses Amphetype or is willing to contribute some time to using it. I want to get as much typing data as possible, especially on a variety of keyboard layouts. Leave a comment if you are interested.

For those who are interested, here are all the data I have acquired.


MTGAP 2.0

Average WPM: 112

near distance average: 120
medium distance average: 111
far distance average: 105

inward close keys average: 120
outward close keys average: 107
not close keys average: 110

in roll average: 121
out roll average: 110
not roll average: 111

same hand average: 104
two and one average: 118
alternation average: 107

triple finger average: 73
same finger average: 91
different finger average: 115

twice jump average: 73
home jump average: 92
home jump index average: 113
not jump average: 112

twice to center average: 105
to center average: 116
not to center average: 112


QWERTY

Average WPM: 104

near distance average: 96
medium distance average: 102
far distance average: 104

inward close keys average: 106
outward close keys average: 113
not close keys average: 102

in roll average: 104
out roll average: 116
not roll average: 102

same hand average: 98
two and one average: 106
alternation average: 105

triple finger average: 72
same finger average: 87
different finger average: 107

twice jump average: 71
home jump average: 82
home jump index average: 116
not jump average: 104

twice to center average: 100
to center average: 108
not to center average: 102


New Keyboard Layout Project: Have We Been Mistaken All Along?

Everyone who has designed a prominent keyboard layout, and I mean everyone, assumes that finger travel distance is by far the most important factor. It makes sense on an intuitive level: we should move our fingers around as little as possible. Colemak places the eight most common keys on the home row, as do Arensito, Michael Capewell’s layout, and others. I used to agree.

But have we been mistaken all along?

Enormous benefits can be gained if we are willing to sacrifice a little finger travel distance. I was running my keyboard generator program and it came up with this layout:

b l o u ; j d c p y
h r e a , m t s n i
k x ' . z w g f v q

This surprised me at first. I thought I must have gotten something wrong. The ‘o’ isn’t on the home row. That can’t be right. But then I considered further. Maybe it’s worth it to sacrifice some finger travel distance in order to gain other benefits. This layout boasts great inward rolls (notice ‘he’, ‘in’, ‘is’, ‘re’, ‘it’) and very few outward rolls. With four vowels on one hand and only one on the other, it also has pretty good hand alternation, thus pleasing both the “rolls” crowd and the “alternation” crowd. Same finger usage is amazingly low — lower than any other major keyboard layout.
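Whether a digraph is an inward roll on this layout can be checked from column positions alone. A rough sketch (it treats “inward” as column movement toward the center of the board, which glosses over per-finger details):

```python
# Rows of the generated layout above, left to right.
ROWS = ["blou;jdcpy", "hrea,mtsni", "kx'.zwgfvq"]

def col(ch):
    """Column index (0-9) of a key in the layout."""
    for row in ROWS:
        if ch in row:
            return row.index(ch)
    raise ValueError(ch)

def inward_roll(a, b):
    """True if typing a then b stays on one hand and moves toward the center."""
    ca, cb = col(a), col(b)
    if ca < 5 and cb < 5:    # both left hand: inward is left-to-right
        return cb > ca
    if ca >= 5 and cb >= 5:  # both right hand: inward is right-to-left
        return cb < ca
    return False             # hands alternate

print([d for d in ("he", "in", "is", "re", "it") if inward_roll(*d)])
# → ['he', 'in', 'is', 're', 'it']
```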

The trouble is, I’ve never tried a layout like this, nor do I have the time to. I want to stick with the layout I have and try to get faster using that one layout; remembering both it and QWERTY is not too hard, but remembering three layouts is far more difficult. It would be really nice if I had a research grant and could hire a group of 50 or so college students to learn this layout, and compare it to one where finger travel distance is valued more highly.

Perhaps a different sort of layout is better than the conventional type. The trouble is, we don’t really know. But there’s still the possibility that we’ve been mistaken all along.

Should a keyboard layout optimize for hand alternation or for rolls?

January 9, 2010 8 comments

Thanks to a really nice typing program called Amphetype, I have recently been able to collect some good data on my typing habits. I compiled some data and did a rudimentary analysis of my fastest and slowest trigraphs. I analyzed my 180 fastest trigraphs and my 156 slowest trigraphs, classifying each one into one of three categories: fully alternating, alternating and rolling, or fully rolling. If a trigraph is fully alternating, each key is typed on the opposite hand from the previous key. If it is fully rolling, each key is typed on the same hand. And if it is alternating and rolling, two consecutive keys are on one hand and the third key is on the other hand.
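The three categories can be computed mechanically from a key-to-hand map. A minimal sketch (the hand assignments here assume QWERTY’s standard split; any layout’s map would work the same way):

```python
# Key-to-hand map: left hand covers the q/a/z through t/g/b columns.
LEFT = set("qwertasdfgzxcvb")

def classify(trigraph):
    """Classify a three-letter sequence by its hand pattern."""
    hands = ["L" if c in LEFT else "R" for c in trigraph.lower()]
    if hands[0] != hands[1] and hands[1] != hands[2]:
        return "fully alternating"       # e.g. L R L
    if hands[0] == hands[1] == hands[2]:
        return "fully rolling"           # e.g. L L L
    return "alternating and rolling"     # e.g. L L R

print(classify("the"), classify("was"), classify("her"))
```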

Among the fastest trigraphs, 10% were fully alternating, 75% were alternating and rolling, and 15% were fully rolling.

Among the slowest trigraphs, 21% were fully alternating, 38% were alternating and rolling, and 40% were fully rolling.

So what does this mean? First, let us remember that there are twice as many ways for a trigraph to be alternating and rolling as to be fully alternating or fully rolling. So given a random sample, we would expect a distribution of 25%, 50%, and 25%. The data I have isn’t totally accurate, but it should be pretty close. What’s clear from this data is that fully alternating and fully rolling trigraphs are rarely very fast. Not only that, but you have to count down to the 13th fastest trigraph before you find one that isn’t alternating and rolling. So alternating and rolling is clearly the fastest possibility.
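The 25%/50%/25% baseline follows from enumerating the eight equally likely hand patterns for three keys; a quick check:

```python
from itertools import product

counts = {"fully alternating": 0, "alternating and rolling": 0, "fully rolling": 0}
for h in product("LR", repeat=3):
    if h[0] != h[1] and h[1] != h[2]:
        counts["fully alternating"] += 1        # LRL, RLR
    elif h[0] == h[1] == h[2]:
        counts["fully rolling"] += 1            # LLL, RRR
    else:
        counts["alternating and rolling"] += 1  # the remaining four
print(counts)  # 2 : 4 : 2, i.e. 25% / 50% / 25%
```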

Now let’s look at the slowest trigraphs. These are more evenly distributed. But notice that there are not as many alternating and rolling trigraphs as you’d expect, and there are a lot more trigraphs that are fully rolling. So there are a lot of very slow trigraphs that are fully rolling.

As simple as this data may be, it still gives us some useful information. To optimize our keyboard, we should try to maximize combos where you type two keys on one hand and then switch to the other hand. Getting a computer to do this in practice, though, is tricky. My program is designed to use digraphs; it can use trigraphs with a small modification, but using trigraphs is orders of magnitude slower. We still may be willing to sacrifice speed for accuracy; but is there any way to still maximize our goal of two-keys-at-a-time using digraphs and not trigraphs? I certainly don’t see any way.

Why Only 30?

December 18, 2009 2 comments

As you may have noticed, my keyboard designs have been limited to only the central 30 characters — on a traditional QWERTY keyboard these keys include the alphabet, period, comma, semicolon and slash. Why have I not expanded my program to include other keys? It is certainly not because those keys are in optimal positions already. Many of the keys outside of the main 30 have the very worst placement. So why not try to optimize them as well?

1. They are too hard to re-learn.

I have tried to learn a layout where all of the keys were optimized, but it did not go well. I found myself completely unable to switch back and forth between it and QWERTY. The layout was simply too complicated, so I ended up putting all the outlying keys back into their original positions.

2. Many of them rely on aesthetics that a computer program won’t notice.

Look at the number keys. They are neatly lined up in an easy-to-remember fashion, but their order of frequency is not so simple, so a computer algorithm would end up completely jumbling them. It would also likely not keep the open and close brackets next to each other, along with numerous other aesthetic niceties. A computer program would simply miss these little touches.

3. That program would be harder to write.

Yes, I admit it, I am somewhat driven by laziness. This new program would require modification of many parts of the program, and would make it harder to evaluate the keyboard’s score. The set of digraphs used to score the keyboards would be larger, causing both accuracy and program efficiency to suffer. Evaluating the score would require taking into account all four (or even five) rows, and the extra keys on the side. The score evaluation process would be much more complicated, and therefore harder to get right. Overall, I didn’t see the benefits as worth the effort.

New Keyboard Layout Project: Fast Typing Combinations

December 13, 2009 2 comments

It’s been a while since I posted anything about the New Keyboard Layout Project. But I recently downloaded Amphetype and have been analyzing my typing patterns, using MTGAP 2.0. So I now have some results, and will probably get more in the future.

Almost all of the fastest trigraphs are either one key on one hand followed by two keys on the other hand, or a roll in one direction on a single hand. Most of the slowest trigraphs alternate hands on every key, and a good number are typed entirely on one hand in awkward combinations. The fastest words have easy rolls on both hands: the current fastest word, “should”, averages 176 WPM (hint: my average typing speed is about 85 WPM) and uses a combination of hand alternations and easy rolls. In QWERTY, “should” would be typed as “jeaior”. The “ul”/”io” combination is very fast; “od”/”ar” is also very fast, and the gap between the finger strokes for “o” and “d” is very brief because the two letters in between are typed so fast. (Does that make sense?)

I will report more fast combinations after the program gets enough data for some better results.

Biases of Genetic Algorithms and Simulated Annealing

September 7, 2009 2 comments

Both genetic algorithms and simulated annealing have a serious problem: they get stuck. There are some possible layouts which could be very good, but which the algorithm will never get to since it requires getting past a certain hurdle. So how can this be solved?

1. Let a layout live and mutate for a while before you kill it. The problem with this is one of memory. You’re going to need some really huge arrays to hold all of those layouts.

2. Make one or a few keys inactive. This was inspired by a bug which led to some interesting results. The idea here is that you make a particular letter cost nothing, and run the algorithm as normal. If this letter was acting as a “wall” preventing the algorithm from getting to a good layout, the wall will be taken down. Then, after a few iterations, insert the letter again. I tried this out, and it had minimal effect. On the other hand, it didn’t slow the program down by much, either.

3. Make one or a few scoring criteria inactive. This could more effectively break down potential walls than #2. It is tricky to implement reliably, though. If you rotate through the scoring criteria, making each one inactive in turn, then the definition of “best” changes every generation and the program never comes to a stopping point. If you remove each criterion for a few generations but only once, then you don’t know whether it actually helped the search skip any hurdles. And then there is the added problem that the actual fitness will be reduced, and that has to be balanced somehow.
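Idea #2 can be sketched as a scoring function that simply skips any digraph touching a deactivated letter. Everything below (the digraph table, the column-as-finger model, the penalty weights) is a toy stand-in for the real program’s data:

```python
# Toy digraph frequencies; the real program uses a full corpus-derived table.
DIGRAPHS = {("e", "o"): 0.004, ("t", "d"): 0.003, ("t", "h"): 0.027}

def finger(layout, ch):
    """Column index stands in for finger assignment in this toy model."""
    return layout.index(ch) % 10

def score(layout, inactive=frozenset()):
    """Same-finger penalty, ignoring digraphs that touch an inactive key."""
    total = 0.0
    for (a, b), freq in DIGRAPHS.items():
        if a in inactive or b in inactive:
            continue  # this letter currently costs nothing: the "wall" is down
        if finger(layout, a) == finger(layout, b):
            total += freq
    return total

layout = "blou;jdcpyhrea,mtsnikx'.zwgfvq"
print(score(layout))                   # full penalty, ~0.007 here
print(score(layout, inactive={"e"}))   # with 'e' deactivated, ~0.003
```

After a few generations with the letter deactivated, you would re-run `score` with the default empty `inactive` set to reinstate it.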

Are there any other methods that could actually work?

Simulated Annealing vs. Genetic Algorithm

September 7, 2009 2 comments

When I read about simulated annealing, I considered implementing it instead of a genetic algorithm. But I decided not to. Why? Simulated annealing relies on the assumption that the best layout is next to the second best layout, which is next to the third best layout, etc. But this is likely to be untrue. The second best keyboard probably looks nothing like the best keyboard. But genetic algorithms avoid this problem, through repetition. The “all star” round contains many very good layouts, so it is more likely to converge on the best layout.

But when I saw some results, I had to reconsider. Simulated annealing is seemingly much faster. But how can we get it to converge on the best possible layout? Could we do something like simulated annealing, but then repeat it and pool all the best layouts and evolve those using a genetic algorithm?

I ran some empirical tests on source code that Chris Johnson sent me, and it turns out that simulated annealing is indeed many times faster than a comparable genetic algorithm.

Simulated annealing works sort of like a “smart” genetic algorithm with a pool size of only one. Instead of just making any old mutation, it looks for good mutations to make. This allows much faster convergence. But this strategy, as well as the genetic algorithm strategy, can sometimes skip over very good layouts, or even the best. I will explain in a later post, coming soon.
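For reference, the core of simulated annealing over key-swap mutations fits in a few lines. This is a generic sketch (cooling schedule, step count, and the toy cost function are all arbitrary choices), not the program described in these posts:

```python
import math
import random

def anneal(layout, score, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    """Minimal simulated annealing over key-swap mutations.

    `layout` is a list of keys; `score` maps a layout to a cost
    (lower is better).
    """
    rng = random.Random(seed)
    best = current = layout[:]
    t = t0
    for _ in range(steps):
        i, j = rng.randrange(len(current)), rng.randrange(len(current))
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = score(candidate) - score(current)
        # Always accept improvements; accept regressions with
        # probability e^(-delta/t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if score(current) < score(best):
                best = current[:]
        t *= cooling
    return best

# Toy usage: cost = how many keys are out of alphabetical order.
cost = lambda l: sum(1 for a, b in zip(l, sorted(l)) if a != b)
result = anneal(list("dcba"), cost, steps=3000, seed=1)
print("".join(result))
```

The GA’s “pool” is replaced by a single current layout plus the best layout seen so far, which is where most of the speedup comes from.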