sprezzatech blog #0014

Dirty South Supercomputers and Waffles
UNIX, HPC, cyberwarfare, and chasing perf in Atlanta.

New Directions in Window Management, Part I.
Wed, 13 Mar 2013 13:06:39 -0400

It's a UNIX system! I know this! One day, I hope to have a working XF86Config conffile for it!


DISCLAIMER: The author is by no means an authority on user interfaces, human-computer interaction, or computer graphics (indeed, he is something of an anti-authority on at least the first two topics). This report is likely full of errors, omissions, and balderdash. Reader discretion is advised.


See also The Desktop Environment on the SprezzOS Wiki for a possibly useful companion to this report...

  • The only thing worse than generalizing from one example is generalizing from no examples at all.
  • If a problem is not completely understood, it is probably best to provide no solution at all.
  • Isolate complexity as much as possible.
  • Provide mechanism rather than policy. In particular, place user interface policy in the clients' hands.
--Robert W. Scheifler and James Gettys
X Window System: Core and Extension Protocols: X Version 11 Release 6


The WIMP (Windows, Icons, Menus, and Pointer) paradigm, an outgrowth of Engelbart's NLS (oN-Line System) and Sutherland's Sketchpad, has dominated GUIs (Graphical User Interfaces, i.e. interfaces based on pixels or vectors rather than character cells or linear bitstreams) since its introduction by Kay and McDaniel in Xerox PARC's 1973 Alto. WIMP is, essentially, the obvious outgrowth of workstation hardware and the "desktop metaphor":



Rectilinear windows and two-dimensional navigation, and to a lesser extent the desktop metaphor in its entirety, owe their pervasiveness to the ubiquity of rectilinear, two-dimensional display technology (it is uncommon to hear the term GUI used in reference, for instance, to volumetric displays) and the rarity of input devices other than the mechanical keyboard and mouse. Icons, menus, and stackable windows are a compromise between ease-of-use (compared to command-line and keystroke-based interfaces) and limited screen real estate.


Rectilinear display technology seems to owe its ubiquity to at least four factors:
  1. the linear nature of motion picture film is most fully utilized by successive quadrilaterals
  2. human eyesight typically enjoys a roughly 9:5 field of view
  3. the widespread prevalence of tabular data
  4. rectangular walls
One wonders if hackers residing on denser planets, with more intense gravity at their surfaces, favor triangular displays for their geodesic domes and tetrahedral homes, but this takes us rather far afield...


Rectilinear displays are still with us, with a little more than an order of magnitude finer resolution (I used a 320x200 4-color CGA for most of my computing in the 80s; my current Dell U3011 features 2560x1600 resolution on a 30" diagonal), a little less than an order of magnitude more DPI (the newest Retina displays boast 250+ DPI), largely unchanged frame rates, and slightly longer latencies. More starkly changed are gamut (about nine orders of magnitude between CGA's 4 colors and 32-bit deep color) and aspect ratio (16:9 widescreen monitors and phones have become prevalent). Moore's Law doesn't appear to hold for video outputs, unfortunately.
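These order-of-magnitude claims are easy to sanity-check. A quick back-of-the-envelope sketch (the CGA and U3011 figures are the ones given above; treating 32-bit deep color as 2^32 distinct values is an assumption on my part):

```python
import math

# Pixel count: CGA (320x200) vs. Dell U3011 (2560x1600), per the text above.
cga_px = 320 * 200
u3011_px = 2560 * 1600
print(math.log10(u3011_px / cga_px))         # ~1.8 orders of magnitude more pixels

# Gamut: CGA's 4 simultaneous colors vs. 32-bit deep color (assumed 2**32 values).
cga_colors = 4
deep_colors = 2 ** 32
print(math.log10(deep_colors / cga_colors))  # ~9 orders of magnitude
```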

Drastically changed, however, are the display adapters driving these devices (as well as the general-purpose processors and I/O buses feeding these adapters). My ATARI 400's GTIA television interface adapter and MOS 6502B cranked out 8-bit values at 1.79MHz (i.e., half-pumped against the 3.58MHz NTSC color clock), sharing 16K of 4K-banked RAM operating at timings of multiple hundreds of nanoseconds. NVIDIA's current flagship single-GPU display adapter, the GTX 680, drives 1536 cores at over 1GHz with their own 256-bit interface to 2GB of 6GHz GDDR5 VRAM operating at timings less than 10ns. This represents seven orders of magnitude more processing power and about five orders of magnitude increased memory throughput.
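The processing and bandwidth ratios can be sketched the same way. The figures below are rough assumptions (a 6502 sustaining very roughly half a million instructions per second; the GTX 680's quoted 192.2GB/s memory bandwidth and ~3.09 TFLOPS single-precision peak), so treat the results as order-of-magnitude only:

```python
import math

# Memory throughput: an 8-bit bus at 1.79MHz (~1.79MB/s peak) vs. the GTX 680's
# 256-bit GDDR5 interface (192.2GB/s quoted spec). Assumed figures.
atari_mem_bps = 1.79e6
gtx680_mem_bps = 192.2e9
print(round(math.log10(gtx680_mem_bps / atari_mem_bps)))  # ~5 orders of magnitude

# Processing: a 6502B at roughly 0.5 MIPS vs. ~3.09 TFLOPS single-precision peak.
atari_ops = 0.5e6
gtx680_flops = 3.09e12
print(round(math.log10(gtx680_flops / atari_ops)))        # ~7 orders of magnitude
```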

If the fruits of technology are to be employed in better human-computer interaction, then, it is probably better to look at what can be done with this excess compute capacity. Even if displays boasting ~400km diagonals were to become price-competitive, they're awfully hard to get home without scratching something, and tend to throw off-balance whatever room into which they're placed.

On the input side, what we've neither gained nor needed in terms of throughput we have won in variety. Touchscreens exhibiting four to five degrees of freedom are commonplace on mobile devices. Advances in solid state electronics and digital signal processing have made real-time, high-fidelity speech processing a reality. Gyroscope-based mice offering six true degrees of freedom are in wide use among content creation professionals. Interfaces driven by eye movement and even EEGs are on the horizon. The humble keyboard is absent on most mobile devices, supplanted in some degree by text prediction. Modern interfaces must support multiple devices, which may come and go at any time, and must not assume any particular input device to be available.

I considered all this recently, and hid old SIGCHI reports behind copies of Intel instruction set manuals so as not to draw contemptuous comments from friends, and read a bunch of topology and differential geometry I kinda partially understood maybe, and thought upon it whilst smoking many Newports. The strewmentarium compositing window manager resulted. More about that the next time we meet. Same dank time, same dank channel. Hack on!

Part 2 will follow Real Soon Now...

