MUST-SEE: A Radical Vision of the Desktop’s Future
We talk a lot about mobile technology here, but what’s in the cards for desktop computing?
After all, a touchscreen desktop makes no sense. Who’d want to stand there all day with their hand in the air? Ouch.
Something radically different has to be done to bridge the gap between “app culture”, with all its self-contained, single-window tasks, and the notion of “the desktop” – you know, where you do projects.
Clayton Miller, a graphic designer from Chicago, created a video demonstrating what he sees as the obvious bridge between the current “touch craze” and the longstanding (and really obvious) issues with using touch on a desktop. In the process, he’s pretty much reinvented the desktop UI. It looks and sounds great, and definitely strikes me as a step in the right direction.
This is a fascinating concept, and a really intelligent look at the current state of computer GUIs. I’ve taken the liberty of transcribing the video below, so that all its material is laid out in one place.
In another article, we’ll take a look at what makes 10/GUI so impressive, and where it needs work. I’m working on an interview with Clayton which should be pretty interesting, too. Enjoy the transcript, and let me know what you think in the comments!
Transcript is below:
1 – The Hardware Problem
The predominant mode of spatial human-computer interaction for the past quarter-century has proven itself powerful and flexible. But is the mouse to be indefinitely the sole conduit for desktop interaction? Despite the mouse’s pedigree, its constriction to a single point of interaction greatly limits the bandwidth of information that it can convey.
A user’s entire hand, capable in the real world of intricate manipulations, is reduced to a single pair of coordinates, making actions of any greater complexity than the classic “drag and drop” require multiple divisions of action. There is a rough consensus in the industry that the future lies in the domain of multi-touch technology, by definition a vast increase in interaction bandwidth. But while its success in the mobile sector seems to portend a future on the desktop, there are obstacles that stand in the way of this transition.

The primary obstacle is disappointingly simple: a multi-touch desktop monitor isn’t compatible with the human body for day-to-day use. Studies bearing out intuition show that extended use of a traditionally-oriented touchscreen induces unacceptable levels of fatigue in the user’s arms. Even in a drafting-table configuration, there’s the potential for neck strain and, regardless of the configuration, there remains the simple matter of visual obstruction of the user’s hands.

These factors are at the heart of the mouse’s longevity. The mouse remains in a resting position while the user is free to look straight ahead, and as the mouse tracks for the on-screen pointer, there’s no obstruction outside the pointer itself. Rather than require users to adapt to interface technology in often-uncomfortable ways, 10/GUI adapts the touch interface to the proven posture of the mouse.
By splitting the touch surface from the screen, 10/GUI creates a one-to-one pad upon which all ten fingers can rest, as well as interact. A hybrid resistive-capacitive array detects the fingers’ positions, and can sense pressure registered from individual finger presses. At last, the vast ergonomic advantages of the mouse are united with the vast potential of multi-touch. This is just the beginning.
2 – The Software Problem
Ever since XEROX PARC’s STAR system, tasks in a graphical display have been managed primarily in windows arranged in two dimensions. While this was ideal for a limited number of windows, modern workflows with many windowed areas and objects make lateral organization more complex. Numerous approaches have been undertaken to ease the management of this increasing number of windows, but despite often novel ideas, none has attacked the root problem of the messy, windowed desktop.
Compounding this is the issue of touch interfaces. Whether an environment uses direct multi-touch or 10/GUI’s one-to-one interaction surface, the window-based paradigm plays only to the strength of the mouse’s single pair of coordinates, ignoring the potential of a full ten fingers. Some say that a move to three dimensions is the answer. 10/GUI looks in the opposite direction, away from the clutter caused by the arbitrary scattering of windows, and toward a more efficient system that puts the user in control. We call it con10uum.
Con10uum represents a radical rethinking of the way information and interaction spaces are managed on a flat display. Rather than creating windows at often-arbitrary positions on a two-dimensional plane, con10uum organizes them linearly. New windows arrive from one side, occupy the full height of the screen, and are manipulable through a consistent vocabulary of multi-touch interaction.
A linear window manager may sound restrictive at first, but a brief demonstration should make plain the power of this approach. There is one detail of the interaction surface that’s added for con10uum: global and local edges. A subtle ridge lets the fingers feel the edge of the pad, but the touch surface extends slightly beyond it, to the left and to the right. The right is the global edge: touching it invokes a menu to launch and manage applications. The left is the local edge: touching it invokes the current application’s menu.
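The video describes the edge zones only in behavioral terms, but the idea reduces to classifying where a touch lands on the pad. Here is a minimal sketch, assuming invented pad dimensions and function names (the video specifies neither):

```python
# Hypothetical model of the con10uum pad's edge zones. The pad
# widths below are invented for illustration; the video only says
# the touch surface extends slightly past a subtle ridge.

PAD_WIDTH = 1000   # logical units; the left ridge sits at x = 0
EDGE_WIDTH = 30    # how far the surface extends past each ridge

def classify_touch(x):
    """Map a horizontal pad coordinate to the zone it falls in."""
    if -EDGE_WIDTH <= x < 0:
        return "local-edge"    # left of the pad: current app's menu
    if PAD_WIDTH < x <= PAD_WIDTH + EDGE_WIDTH:
        return "global-edge"   # right of the pad: launch/manage apps
    if 0 <= x <= PAD_WIDTH:
        return "pad"           # normal one-to-one pointer area
    return "off-surface"       # beyond the sensing array entirely
```

The ridge gives tactile feedback, so the user can find either edge without looking down, which is why the zones sit just past the ridge rather than inside the main pad area.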
3 – In Practice
[...] The edges are where everything starts. Notice the ten circle crosshairs. They disappear once you take your fingers off the pad but, as long as they’re there, you can use them just as you would on a touchscreen – only without your fingers and palms in the way. Press with any finger, and you’ll see a confirmation that the finger is “clicking”. Let’s launch a web browser. The global edge invokes the global menu – this is the same no matter where you are. The applications are arranged to allow easy selection with your three middle fingers. Other frequently-used features, such as a systemwide search, are positioned in the “spacebar” area. Here’s our web browser, but let’s launch a few other things so we can see how con10uum manages windows. Now we’re launching a photo editor. The new app slides in from the right, moving the web browser over. Now let’s launch a text editor.
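The launch-and-slide behavior just demonstrated amounts to keeping windows in an ordered strip rather than on an x/y plane. A minimal sketch, assuming a plain list as the backing store (the class and method names are invented; the video describes behavior, not an API):

```python
# Sketch of con10uum's linear window strip. New windows join on the
# right at full screen height, pushing existing windows leftward.

class LinearWindowManager:
    """Windows occupy one horizontal strip instead of an x/y plane."""

    def __init__(self):
        self.windows = []   # (app, title) pairs in left-to-right order
        self.viewport = 0   # index of the leftmost visible window

    def launch(self, app, title):
        # A new window slides in from the right edge of the strip.
        self.windows.append((app, title))

    def scroll(self, delta):
        # Four-finger scrolling pans the strip laterally.
        self.viewport = max(0, min(len(self.windows) - 1,
                                   self.viewport + delta))

    def overview(self):
        # Zooming all the way out yields thumbnails grouped by app.
        groups = {}
        for app, title in self.windows:
            groups.setdefault(app, []).append(title)
        return groups
```

Because the strip is ordered, “finding a window” becomes a one-dimensional scroll rather than a search through an overlapping pile, which is the core of con10uum’s argument against the x/y desktop.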
Navigating through windows follows a consistent vocabulary of multi-touch: the more fingers you use, the higher the level at which they take effect. A single finger manipulates objects inside applications. Two fingers scroll, or “pinch-zoom”, inside applications. Three fingers move applications around the application space; pinching resizes the application. Four fingers scroll the application space; pinching zooms the entire application space. You can use both hands, too. For instance, you can scroll and zoom the application space with one hand while relocating a window with the other.

With these apps open, let’s open some files. The left edge opens menus based on our context which, in this case, is the text editor. Let’s open a few files. The open button is located in our “spacebar” area. In the photo editor, let’s do the same. For good measure, let’s also add a few web pages. On a traditional X-and-Y window manager, these windows would be placed overlapping at arbitrary positions on the screen. Finding any particular window would require sorting through the pile. In con10uum, they’re stacked neatly, and the finger-hierarchical navigation system makes it simple to find your way quickly.
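The finger-count vocabulary narrated above is, in effect, a dispatch table from finger count to interaction level. A sketch under that reading (the mapping comes from the video; the function and the level names are illustrative):

```python
# The finger-count hierarchy: more fingers act at a higher level.
# Level names are invented labels for the behaviors in the video.

GESTURE_LEVELS = {
    1: "object",     # manipulate objects inside an application
    2: "content",    # scroll or pinch-zoom within an application
    3: "window",     # move, or pinch to resize, the application
    4: "workspace",  # scroll or pinch-zoom the application space
}

def dispatch(finger_count, gesture):
    """Route a gesture to the level selected by the finger count."""
    level = GESTURE_LEVELS.get(finger_count)
    if level is None:
        raise ValueError(f"unsupported finger count: {finger_count}")
    return level, gesture
```

The appeal of the scheme is that the same physical motions (drag, pinch) mean the same thing at every level; only the finger count changes which level they address.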
Of course, even this approach has its limits with enough windows. That’s why there’s one more step to window management in con10uum. Keep zooming the application space, and you get an annotated thumbnail view of all windows, separated into their applications. We think it’s never been quite this easy.
10/GUI gives next-generation hardware technologies a home, with a fresh approach to the desktop model of human-computer interaction. It builds on conventions we’re accustomed to, while laying the foundations for a new generation of user interfaces. It’s a plan for taking the desktop into the next quarter-century.