Analytical design and human factors
Please elaborate on the differences between your analytical information-presentation principles and the human factors design approach. I thought the latter was to present information in a way that was within human capabilities to perceive and process. If true, wouldn't that design imperative complement the content drives and cognitive needs which you stress underlie your universal principles? But I understood you to say your approach and human factors' were opposites in an answer at UCLA. Thank you.
-- J. D. McCubbin (email)
ET on analytical design and human factors
The purpose of analytical displays of information is to assist thinking about evidence. Consequently, in designing an analytical graphic, the first question should be: What are the evidence-thinking tasks that this display is supposed to serve?
Analytical graphics should be constructed to serve the fundamental cognitive tasks in reasoning about evidence: describing the data, making comparisons, understanding causality, assessing the credibility of data and analysis. Thus the logic of design replicates the logic of analysis; design reasoning emulates evidence reasoning.
Converting principles of thinking into principles of design helps answer the most difficult question of all in the theory of analytical design: Where do principles of analytical design come from? The deep principles of analytical design are derived from cognitive tasks of analytical reasoning. This is appropriate, for the purpose of analytical displays is to assist evidence-thinking.
All this might have something to do with the field of human factors. But in practice, nearly all the great analytical designs have come from those possessed by the content; people who have learned something important and want to tell the world about what they have learned. That is, content-driven and thinking-driven, and not at all driven by bureaucratic externalities of marketing, human factors, commercial art, focus groups, or ISO standards.
In working on 4 books on analytical design, I have often turned to the human factors literature, and then left in despair, finding few examples or ideas (beyond the common-sensical) that were useful in my own work. This contrasts with the work of scientists, artists, art historians, and architects--work overflowing with ideas about evidence, seeing, and the craft of making analytical displays.
I believe that work about analytical displays should be self-exemplifying; that is, the work should show us amazing displays of evidence. My despair about human factors began many years ago upon going through volumes and volumes of the journal Human Factors, where evidence was reported using statistical graphics of wretched quality, with thinner data and worse designs than even in corporate annual reports. Also the methodological quality of the research was poor, and so nothing was credible. The findings seemed entirely context-dependent, univariate (design and seeing are profoundly multivariate), and without scope: what did it matter if some students in freshman psychology in Iowa preferred one clunky font compared to another clunky font in an experiment conducted by a teaching assistant? Later, while consulting, I saw this naive dust-bowl empiricism fail again and again for nearly a decade in trying to design a competent PC OS interface. (And with the Mac interface sitting there, smiling, all the time. Apple's superb interface guidelines seemed to me to be a retrospective account of the beautiful hands-on craft of a few brilliant designers, not a reason to have experimental psychologists attempt to design OS/2 and Windows.)
At any rate, if this was the scientific practice and the design craft of applied psychology, I concluded the field did not have much to contribute to my own work on analytical design.
I happily fled to the classics of science, art, and architecture.
-- Edward Tufte
Interface comes first, leads the way
Here are some more thoughts (borrowed from another answer I gave on software design) on this issue.
This book by Ron Baecker and Aaron Marcus seems to me to be a notable contribution on how design arrangements might help programming: Baecker, R.M., Marcus, A., Human Factors and Typography for More Readable Programs, ACM Press, 1990.
I believe the interface should be designed FIRST, by people who deeply understand the specific content and the specific analytic tasks that the interface screens are supposed to help with. Screen after screen should be specified in intense detail by content experts, completely independently and without reference to how those screens might be created.
Only then do we turn to the technical implementation, which becomes simply a by-product of the interface screens and interface activities. The interface design, that is, the content design, should drive the entire development process. Thus the lead managers for the development of a project-management program, for example, would be people who actually manage projects and who teach courses in project management. Too often, the available software drives the design, rather than the content/analysis needs of the user.
There are a lot of software solutions around desperately looking for some kind of problem to solve--that is, inside-out design. But better tools for users will more likely be the product of outside-in design, which makes the content-substance and analytical tasks of the user the driving priority. Doing good outside-in design probably requires a thorough-going independence in specifying the interface; that is, the interface should be content-specified by people completely independent of the software development process. If not, the content specification will be governed and distorted by the needs of the already-existing software.
Content-driven design requires a radical shift in power and control. The Vice-President for Programming reports to the Senior Vice President for Content!
-- Edward Tufte
Visual clutter and interpretive error
An experiment-based argument for keeping graphical designs as simple and spare as possible:

Stefano Baldassi, Nicola Megna, and David C. Burr, "Visual Clutter Causes High-Magnitude Errors"
Perceptual decisions are often made in cluttered environments, where a target may be confounded with competing "distractor" stimuli. Although many studies and theoretical treatments have highlighted the effect of distractors on performance, it remains unclear how they affect the quality of perceptual decisions. Here we show that perceptual clutter leads not only to an increase in judgment errors, but also to an increase in perceived signal strength and decision confidence on erroneous trials. Observers reported simultaneously the direction and magnitude of the tilt of a target grating presented either alone, or together with vertical distractor stimuli. When presented in isolation, observers perceived isolated targets as only slightly tilted on error trials, and had little confidence in their decision. When the target was embedded in distractors, however, they perceived it to be strongly tilted on error trials, and had high confidence in their (erroneous) decisions. The results are well explained by assuming that the observers' internal representation of stimulus orientation arises from a nonlinear combination of the outputs of independent noise-perturbed front-end detectors. The implication that erroneous perceptual decisions in cluttered environments are made with high confidence has many potential practical consequences, and may be extendable to decision-making in general.
Ironically, and a bit humorously, there is substantial use of data-imprisoning grids and moiré effects in the paper's data graphics.
-- Alexey Merz (email)
Don't think much of human factors studies that support my views
It is not just spare design; it is also content resolution. "Simple designs, complex information" is the short form.

If possible, design architectures should be spare, straightforward, conventional, and self-effacing; AND the evidence presented should be rich, complex, multivariate, causal. The idea is to maximize the viewer's content-reasoning time, to minimize the design-figuring-out time, and to reduce impediments to content reasoning.
This theme runs through the first 3 books, with increasing elaborations over time: the "data-ink ratio" in The Visual Display of Quantitative Information (1983, 2001); micro/macro designs, and layering and separation, in Envisioning Information (1990); and the "smallest effective difference" in Visual Explanations (1997). Models include good maps, aerial photographs, and the day-to-day practices of scientific research that involve a flood of data.
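As a rough numerical gloss (my illustration, not taken from the book), the data-ink ratio is simply the share of a graphic's total ink that actually displays data, as opposed to gridlines, frames, and other decoration; a toy sketch with hypothetical ink counts:

```python
def data_ink_ratio(data_ink: float, total_ink: float) -> float:
    """Tufte's data-ink ratio: the proportion of a graphic's ink
    devoted to presenting data rather than decoration.
    Ink amounts here are hypothetical illustrative quantities."""
    if total_ink <= 0:
        raise ValueError("total ink must be positive")
    return data_ink / total_ink

# Hypothetical chart: 30 units of ink for the data marks,
# 70 units for gridlines, frames, and other non-data ink.
print(data_ink_ratio(30, 100))  # 0.3
```

Erasing non-data ink (heavy grids, redundant frames) raises the ratio toward 1.0 without touching the data.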
The study above seems to deal with a small part of the approach of VDQI: the "non-data-ink" side. It's more complicated than that.
As mentioned in an earlier answer, I have a deep skepticism of such experiments, no matter whether the results favor or disfavor my theories. One source of my skepticism is the inability of human factors researchers to produce decent statistical displays for their own research. Another is the barefoot empiricism and the naiveté of such research, which sometimes appears to be a mechanistic parody of work in behavioral science.

I do, however, greatly admire Bill Cleveland's work on graphical perception. His work is closely tied to assisting the analytical tasks of serious researchers when they look at their data. To my knowledge, his work is the most analytically aware of any of the experimental work.
-- Edward Tufte
Apple's Jonathan Ive: "We don't do focus groups."
Jonathan Ive is Apple's lead designer for the iMac, iPod, and iPhone.
From Rory Cellan-Jones, BBC dot.life, "Listening to Mr. iPhone:"
And what emerged were some fascinating insights into the culture of Apple and the craft of industrial design. Ive was insistent that the key to Apple's success was that it was not driven by money--a claim that may raise eyebrows amongst shareholders and customers--but by a complete focus on delivering just a few desirable and useful products.

"For a large multi-billion dollar company we don't actually make many different products," he explained. "We're so focused, we're very clear about our goals."

He said that Steve Jobs had always made it very clear that this focus on products was the only reason for Apple to exist--and contrasted the culture with that of other companies who talk about having similar aims: "If you have to spend time institutionalising that, talking about it, you end up chasing your tail."
So how did the company decide what customers wanted--surely by using focus groups? "We don't do focus groups," he said firmly, explaining that they resulted in bland products designed not to offend anyone.

Christopher Frayling reminded us at that point of Henry Ford's line about what his customers would have demanded if asked--"a faster horse"--and it's surely true that the point of innovative companies is to come up with products that customers don't yet know they need.
But it was the physicality of design work that Jonathan Ive was keen to stress--from the Apple design workshop full of machines, throwing off a lot of noise and dust, to visits to Japanese aluminium craftsmen to learn how that material could be crafted into a laptop casing. Yes, of course he and his team use all the latest computer-aided design tools, but he also likes to knock out a physical prototype and feel the weight of it in his hand.

He told a story about how, as a boy, he'd taken apart an old-fashioned alarm clock and inside the spare outer casing found a mass of workings, "an entire watch factory".
Extraordinary complexity wrapped in a simple, functional, touchable, beautiful case--that seems to be the Apple way.
-- Edward Tufte
Sacha Greif has an excellent essay on current digital design trends. Whether or not you agree with his conclusions, he presents 20 years of evidence thoughtfully and well.
-- Niels Olson (email)