Image manipulation in science

Two short pieces from Nature 432 (12 May 2005) raise serious issues concerning image manipulation, which, in the case of the womb movie, reach the level of an outright lie. The first is an essay by the great art historian Martin Kemp.

-- Edward Tufte


Image manipulation is a significant problem in physiology/biochemistry. It often occurs in published images of 'gels'. This is jargon for a flat sheet of polyacrylamide that has had proteins pushed into it from one edge and moved through it by an electric field. The polyacrylamide acts as a kind of obstacle course: smaller proteins move faster and larger proteins move more slowly, thus separating the proteins by molecular weight. The gel is then immersed in a stain that colors the proteins. The amount of protein present is related to the optical density of the stained spot on the gel and is correctly measured by densitometry.

Many of the current generation of science graduate students and postdoctoral fellows, having grown up with Photoshop, use it to analyse their gels. While this is not inherently bad, most students and post-docs don't understand the relationship between intensity (what the desktop scanner records and Photoshop displays) and optical density (what is really wanted), and so make supposedly quantitative measurements from the gel in ignorance of what the numbers represent. This can introduce significant errors in estimates of protein concentration. Unfortunately gel imaging has become so commonplace that the details are often omitted from the description in the paper, and it is often impossible to determine whether the quantitation made from the images is correct.
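
For concreteness, here is a minimal sketch (Python with NumPy; the 8-bit scan and the blank-region index are hypothetical) of the intensity-to-optical-density conversion that the scanner-plus-Photoshop workflow quietly skips:

```python
# Sketch only: convert scanner intensities to optical density before quantifying bands.
# Assumes a hypothetical 8-bit grayscale scan of a stained gel, loaded as a NumPy array,
# where darker pixels mean more stain.
import numpy as np

def intensity_to_od(scan, blank_region):
    """Convert pixel intensities to optical density, OD = -log10(I / I0).

    scan         : 2-D array of pixel intensities (0-255)
    blank_region : index into the scan covering only unstained gel,
                   used to estimate the incident intensity I0
    """
    i0 = scan[blank_region].astype(float).mean()            # incident-light estimate
    transmittance = np.clip(scan.astype(float), 1.0, None) / i0
    return -np.log10(transmittance)

# The amount of protein is proportional to the *integrated OD* over a band,
# not to the mean pixel intensity that Photoshop's histogram reports, e.g.:
# od = intensity_to_od(scan, np.s_[0:50, 0:50])
# band_od = od[300:340, 120:160].sum()
```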

A related problem is the manipulation of gel images to improve their appearance in print. This is not as big a problem, since many gel questions are effectively Boolean (the protein is present or it is not), but Photoshop makes such cosmetic touching-up all too easy.

I think at the core of the problem is the pervasiveness of desktop applications in science. Photoshop, unlike PowerPoint, is very good at manipulating images for designers and artists, allowing them to achieve the effect that they want. However, like PowerPoint, it encourages a style of usage that is not conducive to analytical thinking. This is not to say Photoshop cannot be used analytically, but simply that it was not designed to make such use easy. I have had success in re-educating students about gel image handling only by getting them to change tools: I ask them to use ImageJ, a freely available image-processing tool designed for quantitative image analysis.
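
By way of illustration only -- this is a rough Python approximation, not ImageJ's actual code, and the lane and band limits are made up -- the kind of quantitation a gel-analysis routine performs looks like this:

```python
# Sketch of lane-profile quantitation: collapse a lane to a 1-D profile along the
# run direction, subtract a simple straight baseline, and integrate each band.
# All indices below are hypothetical and would come from inspecting the profile.
import numpy as np

def lane_profile(od_image, lane_cols):
    """Sum optical density across a lane's width to get a 1-D profile."""
    return od_image[:, lane_cols].sum(axis=1)

def band_area(profile, start, stop):
    """Integrate one band after subtracting a baseline drawn between its endpoints."""
    band = np.asarray(profile[start:stop], dtype=float)
    baseline = np.linspace(band[0], band[-1], band.size)
    return float(np.clip(band - baseline, 0.0, None).sum())

# Example with hypothetical limits:
# profile = lane_profile(od, slice(120, 160))
# amount_ratio = band_area(profile, 300, 340) / band_area(profile, 500, 545)
```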

I think this type of mistake in image processing in science will continue until journal editors demand that the full details of the image processing, from capture to final image, are given in the methods section of scientific papers. Like statistics, image processing has fallen into habitually bad usage, in part because how you did something is relegated to the small print of journals and is often skipped by busy reviewers.

-- John Walker (email)


John is right, of course, that image manipulation software offers tremendous scope for fraud in biochemistry, which, to a greater extent than most sciences, depends on things that you cannot see directly -- almost everything is seen at second hand. Nowadays, as he says, this often involves studying stains on gels. When such software first came on the market (around 15 years ago) I heard a colleague say that we would no longer be able to believe any gel data that were published.

Nonetheless, I think this was too pessimistic, because people who want to cheat have always been able to falsify their results, and there is nothing essentially new in the capacity to "improve" one's gels. What I mean is that some degree of trust has always been needed: you have to believe that people are telling the truth (and nothing but the truth, though not necessarily the whole truth) when they report what they have observed. Sometimes they aren't, of course, and when this happens to a serious extent the truth will usually come out after other people try to repeat the observations.

-- Athel Cornish-Bowden (email)


Athel, I didn't mean to sound so pessimistic, and in my tendency to write lengthy contributions I may have obscured my point. I do believe that scientists are in general an honorable lot and that what they write they believe to be true.

What I was trying to say was that ignorance of one's tools (in this case Photoshop) can lead to mistaken claims. The extent of the mistaken claims is almost impossible to track. I don't think these mistakes are wilful or deliberate but due in part to the hiding of the messy details of image processing behind user-friendly, ignorance-producing interfaces and in part to the omission of the details of that processing from published accounts.

Teaching can only succeed if there is a perceived need. Journal editors can generate that perceived need by demanding additional details be present in papers.

Of course, as in any human endeavour, there are cheats and frauds, but there are far easier climes in which they may survive than science, and this keeps their numbers lower than in politics or law :).

-- John Walker (email)


Dear Dr. Tufte,

The July 13, 2006 issue of The New York Review of Books carries an article by Jim Hansen accompanied by two images of southern Florida, both stills taken from the movie "An Inconvenient Truth": one a recent photograph, apparently from a satellite, the other an image, also rendered as if from a satellite photograph, of what the maker of the image projects Florida will look like after future coastal erosion caused by sea levels rising as a result of global warming. But the actual image and the projected image do not appear to be at the same scale; the latter appears to be taken from a greater height, which enhances the impression of coastal erosion.

I have three related questions. Does an image that is the image-maker's projection of the future constitute scientific evidence? Has the scaling been manipulated in the projected image of southern Florida? If so, what standard should be applied to the manipulation of scientific evidence in a non-scientific forum?

Sincerely,

Dan Haines

-- Daniel Haines (email)


This is the tip of the iceberg. I have been actively engaged in training bioscientists in how to conduct QUANTITATIVE microscopical experiments for 15 years. The theory shows that what is needed are randomly sampled micrographs, yet in reality the vast majority of images shown in scientific papers are not "typical micrographs" at all but are carefully chosen to "illustrate" a point. How different is it to nudge the x-y stage of the microscope (to a more interesting field of view) before taking the picture for publication from jigging the picture with Photoshop afterwards? The technology may have changed but the (often unconscious) aim is the same.
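
To make "randomly sampled" concrete, here is a small sketch (Python; the stage extents, field spacing, and seed are hypothetical) of systematic uniform random sampling of fields -- the positions are fixed before anyone sees what is in them:

```python
# Sketch of systematic uniform random sampling of microscope fields:
# a single random offset, then a fixed grid of stage positions, imaged in order.
import random

def sample_fields(x_max_um, y_max_um, step_um, seed=None):
    rng = random.Random(seed)
    x0 = rng.uniform(0, step_um)     # one random offset for the whole grid
    y0 = rng.uniform(0, step_um)
    fields = []
    x = x0
    while x < x_max_um:
        y = y0
        while y < y_max_um:
            fields.append((round(x, 1), round(y, 1)))  # (x, y) stage positions, in micrometres
            y += step_um
        x += step_um
    return fields

# e.g. sample_fields(5000, 5000, 800, seed=1) -> list of stage positions to image, chosen blind
```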

-- Matt Reed (email)


Cornelia Dean reviews Felice Frankel in today's New York Times: "She Calls It 'Phenomena.' Everyone Else Calls It Art."

With colleagues, she is working to set up an online site where researchers can talk across disciplines about the concepts they want to convey in images.

But she does not feel that her photographs have to explain everything. "To me the idea is to engage somebody to look at something, and they don't even know it's science," she said. "People are not intimidated by pictures. It permits them to ask questions."

To achieve this goal, she sometimes alters the images. For example, when she photographed bacteria growing on agar, "the agar was cracking," she said. "But I wanted the reader to pay attention to the bacteria pattern. So I digitally deleted the cracks."

Another time, she photographed rod-shaped orange bacteria, and her film was somehow unable to reproduce the orange she could see when she looked through the microscope. "I added it," she recalled.

These practices are acceptable, she said, because their purpose is not to disguise or twist scientific information, but to make it clearer. And when images like this appear in scientific journals, Dr. Whitesides said, the "untinkered original" is posted online with supplementary material.

For Ms. Frankel, the main point is that "I always tell the reader what I do when I manipulate an image." And she negotiates with her research colleagues about how far to go.

"I think this should be part of every scientist's education, the manipulation and enhancement of images," Ms. Frankel said. "To just have a blanket statement -- `You cannot do anything to your image' -- that does not make sense."

"You can get a little crazy with objectivity. If enhancing your image gets you to see something better," it's acceptable, she said, "as long as we indicate what we are doing."

Felice Frankel's books include "Envisioning Science" (M.I.T. Press, 2002), "On the Surface of Things," (Harvard University Press, 1997), and "Modern Landscape Architecture: Redefining the Garden" (Abbeville Press, 1991).

-- Niels Olson (email)


Another piece on Frankel's work is hosted at the Apple website:

http://www.apple.com/science/profiles/frankel/

The article gives two side-by-side "before and after" examples.

Frankel's emphasis on retaining versioning information is also very interesting. Rather than keeping only the final image, one can keep the original photo together with a record of the exact sequence of changes made to that image. This would then be the most appropriate bundle of information for scientific documentation of an image's post-processing.
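
As a sketch only (the file names, operations, and JSON layout are illustrative, not any existing standard), such a bundle might look like this:

```python
# Sketch of an image-provenance bundle: the untouched original, a hash to prove it
# is untouched, and an ordered, replayable record of every post-processing step.
import datetime
import hashlib
import json

def file_sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_step(log, operation, parameters):
    """Append one processing step, with its parameters and a timestamp, to the edit log."""
    log.append({
        "operation": operation,
        "parameters": parameters,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

edit_log = []
bundle = {"original": "gel_scan.tif",                    # hypothetical original file
          "original_sha256": file_sha256("gel_scan.tif")}
log_step(edit_log, "crop", {"box": [40, 40, 1200, 900]})
log_step(edit_log, "levels", {"black_point": 12, "white_point": 240})
bundle["steps"] = edit_log

with open("gel_scan_provenance.json", "w") as f:       # ship this alongside the final image
    json.dump(bundle, f, indent=2)
```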

-- Sean Garrett-Roe (email)


Content-aware image resizing.

-- Niels Olson (email)


Wow. Fantastic technology built on beautiful mathematics.

But as with most highly powerful new technologies, it is spooky and frightening. Perhaps the biggest shudder I felt followed the implication that we need to change the image to fit the display. It is a short trip from there to changing the data to fit the conclusion.

How about larger high-resolution displays? Again, the astronomical value of printing on paper is revealed.

Rafe

-- rafe donahue (email)


While shopping around the Adobe site for the right upgrade path, I found that the Photoshop Science page jumped out as a good footnote for this thread:

http://www.adobe.com/products/photoshop/photoshopextended/science/

-- Tchad (email)


The field of astronomy confronts issues of digital manipulation every day. Telescopes collect "light" from all across the electromagnetic spectrum, from radio to microwave to visible to gamma. The concept of "colour" exists only in the narrow range of "visible" wavelengths. What colour is a microwave or an X-ray, anyway?

And so, how should an observation be coloured? The debate rages. At the heart of the issue is this conflict between science and our everyday experience: Scientifically, hotter objects emit light with shorter wavelengths, so blue stars are hotter than red stars. Artistically, though, red/orange/yellow are warm colours while blue is cold. Show a blue star to a non-expert and their first impression is that it must be a cold star, not a hot one.

This same pattern extends to other parts of the spectrum. Hotter X-rays are "bluer" (i.e., have shorter wavelengths) than colder X-rays. It's natural that they would be coloured blue in a scientific image. But that suggests they are cold and dormant, and nothing could be farther from the truth: hot X-rays are fired out of the accretion disks surrounding black holes!
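
As a toy illustration (the band names and arrays are hypothetical), the choice comes down to which exposure is assigned to which RGB channel:

```python
# Sketch of false-colour composition: three narrow-band exposures of the same field,
# mapped to RGB either in wavelength order (shortest band -> blue) or flipped to
# match the "hot = red" intuition described above.
import numpy as np

def compose_rgb(bands, assignment):
    """Stack normalized band images into an RGB image.

    bands      : dict of band name -> 2-D float array (all the same shape)
    assignment : dict mapping 'R', 'G', 'B' to band names
    """
    def norm(a):
        a = a.astype(float)
        return (a - a.min()) / (a.max() - a.min() + 1e-12)
    return np.dstack([norm(bands[assignment[c]]) for c in ("R", "G", "B")])

# Wavelength-ordered: the shortest (most energetic) band drawn as blue ...
# rgb_physical = compose_rgb(exposures, {"R": "long_band", "G": "mid_band", "B": "short_band"})
# ... versus the culturally "warm" flip that trades physics for intuition:
# rgb_cultural = compose_rgb(exposures, {"R": "short_band", "G": "mid_band", "B": "long_band"})
```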

The Hubble Heritage project strives to present the observations collected with the Hubble Space Telescope in a way that lets non-experts appreciate the scientific content. They are leading the way in adopting guidelines for adding colour, trying to find the right balance between science and culture. Surprisingly, they often find ways to reconcile these seemingly contradictory forces. Check out their scientific approach to the conflict at

http://heritage.stsci.edu/commonpages/infoindex/ourimages/index.html

The next time you see a spectacular astronomical image, remember that the colours are very likely artificial. But also remember the astronomers who produced the image fully admit they have added colour, hopefully in a way that highlights the important phenomenon without sacrificing science.

Peter

-- Peter (email)


As a student of scientific illustration I have to write an essay on a relevant subject. I am thinking about scientific images that have acquired an iconic status over the years even though they were later proven wrong. For example, the Bohr atom model, which shows electrons running in defined orbits around the nucleus, is still widely used although it is outdated. Or the "march of progress", the image showing evolution from ape to man in a direct line, which is used in lots and lots of ways although we know that evolution was not linear. It would be interesting to research the reasons such images are still in circulation.

I wonder if anyone knows of other examples and/or of literature that has already dealt with the issue of iconic scientific images that are wrong or outdated.

-- Alexandra (email)


Alexandra, you might like to consider Haeckel's embryo drawings, which are certainly iconic, and have been extremely controversial. Nick Hopwood (a Senior Lecturer in the Department of History and Philosophy of Science at the University of Cambridge) is writing a book on the topic, and his homepage has links to several relevant articles by him.

-- James Scott-Brown (email)



