At First, Mechanical Controls Were Nowhere
In the first sci-fi film, Le voyage dans la lune, one detail that may surprise modern viewers is that it contains nothing they would recognize as an interface. When the “astronomers” open the rocket door, they simply push on it—there’s not even a handle (see below). To launch the rocket, they load it, bullet style, into an oversized gun and shoot it at the moon. That there are no interfaces isn’t really surprising, because this short movie is a vaudevillian comedy sketch put to film. But more to the point, when the film was released at the turn of the 20th century, very few interfaces existed in the modern sense. Audiences and filmmakers alike were working in an industrial-age paradigm. The few controls that did exist in the world at this time were mechanical, and people interacted with them using physical force: pulling a lever, pushing a button, or turning a knob.
Le voyage dans la lune (1902, restoration of the hand-colored release).
Then They Were Everywhere
In the 1920s and 1930s, as the developed world moved into the electric age, buttons, switches, and knobs made their way into industrial machinery and consumer goods that people used every day. As a result, these mechanical controls began to appear everywhere in sci-fi, too. In one example, the control panel from the Lower City in the 1927 dystopian film Metropolis shows an interface crowded with electric outputs and controls. As we continue to trace interfaces throughout this section, note the continued dominance of mechanical controls like momentary buttons, sliders, and knobs.
World War I played a role in shaping the physical appearance of sci-fi interfaces as well, as servicemen brought their experiences with military technology back home as consumers, audiences, and sci-fi makers. In the 1939 serial Buck Rogers, we see this in action. Knobs and buttons already inhabit the interfaces at this point; the “Tele-vi” wall viewer, for instance, is controlled by just a few knobs, like the televisions of the day.
Buck Rogers, “Tragedy on Saturn” (c. 1939).
When Captain Rankin and Professor Huer surmise that one of Killer Kane’s ships they’ve detected is being flown by Buck, they want to contact the ship. Instead of invoking audio functions right there at the screen, they move to an adjacent “radio room” where they can hail him. To modern audiences this seems silly. Why aren’t these two capabilities located in the same spot? But it reflected the state of military technology at the time, in which the radio was special equipment operated from its own dedicated room, even if that room was set far apart from the periscope or other viewing devices.
Buck Rogers, “Tragedy on Saturn” (c. 1939).
Sci-fi has long built its spacefaring notions by extending seafaring metaphors. (The word astronaut literally means “star sailor.”) By the 1940s and 1950s, sci-fi films like Forbidden Planet typically depicted their starship interfaces with large banks of mechanical controls of many types, such as those that sailors might have seen in the control rooms of the great ships of World War II.
Forbidden Planet (1956).
Lesson: Build On What Users Already Know
As the examples in Metropolis and Buck Rogers show, new interfaces are most understandable when they build on what users (and audiences) already know. If an interface is too foreign, users can easily get lost trying to understand what it is or how it works. This is especially true of novice users and of those who are not interested in technology for its own sake, and it also holds for applications meant to be used intermittently or in a state of distraction.
Make the interface easier to learn by providing familiar cues to what its elements are and how they fit together. This could mean building on current interface conventions or on controls that map to the physical world. Metaphors can also be a bridge to this kind of learning, helping users draw analogies between things they already know and the new interface elements that confront them. But take care: holding too closely to a metaphor can devolve into pointless skeuomorphism, or confuse users when the interface’s capabilities and its metaphor diverge.
Often, the mechanical controls of early sci-fi seemed disconnected from
displays and neatly ordered by type in rows as in the image from Forbidden
Planet. In some cases, like the 1951 film When Worlds Collide, production
designers imagined putting the controls around the displays, where
the user’s actions and the system’s results would be more connected. In the illustration below, the V and F knobs control the spaceship’s trajectory, seen on the
display as white points along the red and green lines.
When Worlds Collide (1951).
In Buck Rogers, the two parts of the communication interface are in separate
rooms. If Professor Huer wanted to tell Buck how to level his spaceship,
the professor would have to run to the radio room to provide spoken
instructions, or input, and back to the Tele-vi to check on Buck’s progress,
the output (see Figure 2.3b, c). Where the Buck Rogers scenario requires far
too much work to be considered an efficient feedback loop, the navigation
interface from When Worlds Collide is much tighter, with its controls
abutting the output screen and fuel gauge, making much less work for its
navigator. Imagine the disaster if the V and F controls were in the next room.
Lesson: Tighten Feedback Loops
Interaction designers use the term feedback loop for the cycle
of input and output through which a user steers a system
toward some desired state.
The faster and more fluid these loops are, the more a user
can get into the flow of use and concentrate on managing the
system to the desired state. Even when the controls in question
are all on screen rather than mechanical, the more that a
designer can do to tighten these loops, the more effective the
user’s interaction will be.
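The contrast between the two film examples can be sketched in a few lines of code (a toy illustration of the principle only; none of these class or method names come from the book or any real system). A tight loop keeps input and output in lockstep; a loose loop hides the system's state behind an extra, costly step.

```python
# Toy sketch of tight vs. loose feedback loops (invented names, for illustration).

class TightDisplay:
    """Output updates the moment the control changes (the When Worlds Collide layout)."""
    def __init__(self):
        self.shown = 0

    def set_control(self, value):
        self.shown = value          # input and output stay in lockstep


class LooseDisplay:
    """Output updates only after a separate step (the Buck Rogers radio-room layout)."""
    def __init__(self):
        self.pending = 0
        self.shown = 0

    def set_control(self, value):
        self.pending = value        # the change is invisible until...

    def apply(self):
        self.shown = self.pending   # ...the user "walks to the other room"


tight, loose = TightDisplay(), LooseDisplay()
tight.set_control(42)
loose.set_control(42)
print(tight.shown)   # the user can correct course immediately
print(loose.shown)   # still showing the old state: the user is flying blind
```

In the loose version, every correction costs an extra `apply()` round trip, which is exactly the "run to the radio room and back" overhead the Buck Rogers scenario dramatizes.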
This period also saw early sci-fi endeavors to depict the future with a
dedicated realism. For example, Destination Moon made an earnest
attempt to describe a trip to the moon grounded in real science, 19 years
before the Apollo 11 mission launched. Renowned science fiction
author Robert Heinlein acted as contributor and technical advisor to the
film. Unlike competing films of the era, which simply crammed as many
buttons, switches, and knobs as possible onto the set to make them look
sophisticated, Destination Moon portrayed a more serious and believable
story through constrained interfaces of more-considered controls and
displays. These suggested a thoughtful reality behind them, even
though at the time they were entirely fictional. They were designed as if
they could be real, much as a working spaceship prototype would be.
Destination Moon (1950).
By the 1950s, buttons, switches, and knobs were seen as a panacea for life’s
drudgeries. A delightful example appears in General Motors’ 1956
production of its annual touring auto show Motorama. It was one of the first
examples of a corporation creating speculative fiction to promote its brand
(and although not technically sci-fi, illustrative enough to bear mention here).
It included the wonderfully melodramatic industrial film Design for Dreaming, in which a near-future housewife prepares a meal while dancing around her
Frigidaire kitchen of the future. All of her once-dreary tasks, such
as baking a cake, carrying dishes, and cleaning up, were accomplished simply
by pushing buttons.
Design for Dreaming (1956).
Between the 1950s and 1980s, the trend for mechanical controls continued,
despite a few new interface paradigms appearing. There were voice
interfaces on robots, like Gort in The Day the Earth Stood Still (1951), and
Twiki with Dr. Theopolis from the 1979 TV series Buck Rogers in the 25th
Century, as well as artificial intelligences like the ship computers in Star Trek
and 2001: A Space Odyssey. There was also a gestural interface in The Day
the Earth Stood Still. These alternative interfaces were in the minority,
though, until some budget constraints introduced a new paradigm.
For a While, Mechanical Controls Started Disappearing
When Star Trek: The Next Generation was green-lighted for production in the mid-1980s, the budget didn’t allow for the same kind of jewel-like buttons as in the original series.
Star Trek: The Original Series (1968).
The money for so many buttons, individually installed and lit, simply wasn’t available. Instead, production designer Michael Okuda and his staff devised an elegant and much less expensive solution: vast backlit panels of plastic film with simply printed graphics representing controls. The result was thoughtfully futuristic as well as cost-effective, and it inadvertently launched a new paradigm in interfaces that we see throughout the sci-fi genre today, in which controls exist only on a flat touch-screen surface. In Star Trek, this interface is known as LCARS (Library Computer Access and Retrieval System). Though the characters never use this term in the shows or films, it has appeared in some of the on-screen interfaces.
Star Trek: The Next Generation (1987) LCARS interface.
Setting the stage for Okuda’s solution were new, experimental interfaces that had been developing between the first and second Star Trek series. One example is the Aesthedes computer, a graphics workstation produced by the Dutch company Claessens in 1982 (shown below). The computer’s functions were arrayed across a seamless tabletop surface, with each function given a separate button, arranged in logical groups; the result was wide enough to be a literal desktop of controls. Like the LCARS interface, these buttons had almost no depth. Instead, they were part of a seamless membrane stretched over the entire surface, printed with labels and borders, with simple contact sensors positioned underneath. They were a transitional technology, still mechanical but very different from the kinds of buttons seen and used before. They represented a half step toward interfaces like LCARS, but, unlike LCARS, they weren’t changeable, since they weren’t also displays. This style of interface, on the border between mechanical and virtual, didn’t catch on, but it anticipated much of what Star Trek: The Next Generation would later explore.
Aesthedes computer (c. 1982).
Now They Coexist with Other Interfaces
Even in modern sci-fi with advanced digital controls, mechanical
controls are still present. In the 2009 reboot of Star Trek, the interfaces on the
Enterprise use a blend of touch-screen surfaces and mechanical controls. The
throttle at the helm is mechanical and familiar to the audience, who have
experienced or seen similar controls on ships and airplanes.
Star Trek (2009).
One of the benefits of mechanical controls is that, unlike touch-screen
controls, they can be well-designed for our entire hands rather than just the
fingertips, offering ergonomic shapes and rich haptic feedback. Additionally,
mechanical controls can take advantage of a user’s finer motor control
and offer industrial design that telegraphs how they can be used.
For example, the shape of a button or knob might better communicate
optimal position or the amount of force to be used. The diameter of a knob
can make fine control easier or more difficult, depending on the exertion
needed for fingers or hands to move it. This not only increases control and
comfort, making some actions easier, but also communicates function
through the physical form itself, a property that interaction designers call
affordance.
Lesson: Use Mechanical Controls When Fine Motor Control Is Needed
Mechanical controls are more appropriate when fine motor
control is needed. It’s not that screen controls can’t accept fine
movement, but, as many users find with their trackpads, touch
interfaces are often so sensitive to movement that holding a
specific position is difficult. For example, taking your fingers off
a knob doesn’t change its position, but it can change a sensitive
touch-screen control, even if that isn’t the intent (as can
touching it again). Users can “have their finger on the button”
without actually depressing it, but on-screen buttons can be
unintentionally or prematurely activated in this way.
Lesson: Don’t Get Caught Up In The New For Its Own Sake
Even with new advances in natural gesture technologies, such
as Microsoft’s Kinect, there is comfort and ease in mechanical
controls for some operations. Take typing for example. It’s
terrible with the Kinect, OK with a controller, and much better
with a physical keyboard that has been optimized for this
purpose. Voice control may one day make even keyboards obsolete,
but it has its own limitations.
Screen-based controls can mimic some of what mechanical
controls have always offered, like a satisfying click when turned
to the desired position, or alignment between several buttons
to indicate common settings. And screen-based controls can
do many more things than mechanical controls, such as incorporating
animation, appearing only when needed, or changing
entirely based on context. Finding the right control for the job
is why even the most advanced smartphones still have a few
mechanical buttons for controls like Volume, Home, and Power.
Just as in real design and engineering, the presence of mechanical controls
followed trends pertaining to material costs and scarcity. At one time, buttons
and other components were relatively inexpensive due to their materials
and manufacturing processes, so they abounded in both real and fictional
interfaces. When the sheer number of controls and their expense (for
materials, installation, and maintenance) rose, however, they began to be
used more sparingly. We see this today as touch-screen interfaces are able to
incorporate many functions with no added cost, so many mechanical controls
are disappearing. In addition, too many undifferentiated buttons can breed
confusion and overload for audiences and users alike.
For example, we see this in the lineage of Star Trek interfaces. The original
series used many individually lit buttons, often positioned in rows circling
the user (a more complicated arrangement than simple rectilinear rows). But when
the production budget available for the next series, Star Trek: The Next
Generation, didn’t go as far, that was no longer an option. Instead, the
mechanical buttons disappeared almost entirely from control interfaces,
replaced by flat-panel touch screens. This persisted throughout the following
TV series and many of the films, until the latest film, Star Trek, created
a rebooted aesthetic based on a complement of mechanical and touch-screen controls.
Be warned, though, that hybrid controls can still seem out of place or even
laughable. A funny example comes from the climactic sequence at the end
of Star Trek: Insurrection. After an entire film of Star Trek’s signature touch-screen
(LCARS) interfaces for controlling nearly everything that happens on
the ship, Commander Riker calls for the “manual weapons interface,” and a
1990s-era joystick pops up from a console designed specifically for this one
purpose. It isn’t that this is a worse interface for this use—indeed,
it may actually be better. The joke is that everything in the film leading up
to this, including flying the ship and shooting weapons, never used such an
interface. If it was a useful and even better weapons control, why wouldn’t it
have been used in battles before this moment in the story?
Star Trek: Insurrection (1998).
Lesson: Mix Mechanical And Other Controls Where Appropriate
Mechanical controls are better for some uses, though they can’t
as easily serve multiple functions. Nonmechanical controls, like
touch-screen buttons, are easier to change into other controls
but don’t offer the same kind of haptic feedback, making them
impossible to identify without looking at them and creating
questions about whether they’ve been actuated. Design interfaces
with the combination of controls that best fits their various
uses and characteristics.