via Daring Fireball
The top comment on the YouTube video sums up the failings of this new system compared to the old. User yexuyouyong writes:
A flaw with this is that you need to know the keyword before you can search. Say someone wants to create a bookmark but doesn’t know the ‘bookmark’ keyword. Instead he/she types in something like ‘remember this page’ and won’t get the bookmark option. A menu system presents all options up front, while a search-only system puts you in the wild with the whole dictionary as options. Until AI can understand natural language, I see this as a step backward.
To borrow a phrase from Microsoft, I doubt this will work because it requires a “posture change”. Consider a GUI-heavy app like a drawing app: the user’s hands are likely divided between keyboard and mouse, with the keyboard hand executing known shortcuts. For everything else, the primary mouse hand locates the menu item or button until the keyboard hand learns the shortcut listed beside it. Everything can be accomplished within reach of the wrists.
In the new HUD system, a posture change occurs whenever the user must take one hand off the mouse and move the other away from the Ctrl or Meta keys; once the command is found, both hands return. Any function whose shortcut the user can’t remember or discover requires moving outside the distance covered by the wrists.
Effectively, this increases the time to complete a task per Fitts’s law, because the user has to contend not only with GUI targets, but with repeatedly switching to and away from the home row (or a comfortable hunt-and-peck typing position).
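For reference, the Shannon formulation of Fitts’s law predicts pointing time as MT = a + b·log2(D/W + 1), where D is the distance to the target and W is its width. A minimal sketch of the formula follows; the function name `fitts_mt` and the constants a and b are illustrative placeholders, not measured values, and note that Fitts’s law covers only the pointing portion — the hand-switching the HUD demands is extra time on top of this:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    # Shannon formulation: MT = a + b * log2(D/W + 1)
    # a and b are illustrative user/device constants (seconds), not measured values.
    return a + b * math.log2(distance / width + 1)

# Same 30 px target at increasing travel distances:
# predicted acquisition time grows with distance.
for d in (100, 400, 1600):
    print(f"D={d:4d}px -> MT={fitts_mt(d, 30):.2f}s")
```

The point of the sketch is the shape of the curve: time rises with the distance-to-width ratio, so every round trip between mouse, HUD, and keyboard adds targets to acquire and time to the task.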