Formalizing photographic aesthetics

Here are some remarks on aesthetic controls for photographers by Philip Greenspun and E.T.

Philip's essay, reproduced below, was originally posted on his site.

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

These are some research ideas that I'd like to pursue but don't (because I keep working on integrating relational databases and the Web).

Better controls for photographers

What are the appropriate controls for a camera? Traditional cameras have controls that are artifacts of the physical process of silver-halide photography, e.g., aperture and shutter speed. Computerized cameras with their "portrait" and "action" modes pull away from these direct physical controls in a crude manner. In an age of digital imaging, it is worth asking if cameras could be built with direct aesthetic controls, e.g., a "romantic/clinical" knob or an "Ansel Adams/Garry Winogrand" slider.

Before heading out to take pictures, a serious photographer makes choices that have profound aesthetic consequences given the limitations of silver-halide processes. For example, if a 1950s style is desired, Kodak Tri-X black & white film will be selected for its distinctive grain and look. Even if shot under ideal conditions in a well-lit studio, the resulting images can never be used where color is needed or a grainless modern style is desired. Perhaps with appropriate digital imaging technology, the photographer could capture detailed information on fine-grain color film or with an all-digital camera, then process the images to look as though they'd been shot on Tri-X.
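As a hedged sketch of what that digital "Tri-X look" processing might involve, here is a minimal version in Python with NumPy; the S-curve and grain constants are invented for illustration, not measured from Tri-X:

```python
import numpy as np

def trix_look(rgb, grain_sigma=0.04, seed=0):
    """Render a color capture as a Tri-X-style monochrome image.
    rgb: float array, shape (H, W, 3), values in [0, 1]."""
    # Collapse to luminance (BT.601 weights; real film spectral
    # sensitivity differs, so this is only a first approximation).
    mono = rgb @ np.array([0.299, 0.587, 0.114])
    # A gentle S-curve stands in for the characteristic curve of a
    # punchy high-speed negative film.
    mono = 0.5 + 0.5 * np.tanh(3.0 * (mono - 0.5))
    # Simulated grain: zero-mean noise, strongest in the midtones,
    # where film grain is most visible.
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, grain_sigma, mono.shape) * 4 * mono * (1 - mono)
    return np.clip(mono + grain, 0.0, 1.0)
```

A real emulation would also need to match Tri-X's tonal response and grain structure measured from actual negatives; this only shows the shape of the pipeline.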

When a photograph of a tall building that preserves the architectural lines is required, a photographer is currently forced to use a view camera, i.e., a camera where the lens and film are not fixed parallel to each other. It may be possible to simulate some view-camera movements with digital processing and thereby enable the photograph to be taken with a simple Nikon.

Achieving a grainless look and high color saturation in a traditional print requires using low-speed (e.g., ISO 25 or 100) film that may require compromises in depth-of-field, freezing action, and lens weight. If the final image is intended for distribution via the World Wide Web and for presentation on a computer screen, it may be possible to achieve indistinguishable results using ISO 400 or ISO 800 film via digital processing. It would be nice to have a formal method for answering a question of the form: What is the maximum speed film I can get away with given my desired output size?
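One way to make that question formal, under loud assumptions (grain noise grows like the square root of film speed and averages down like the square root of the capture-to-output downsampling factor; every constant below is illustrative):

```python
import math

def max_usable_iso(capture_pixels, output_pixels,
                   iso25_grain=0.01, visibility_threshold=0.02):
    """Fastest film speed whose grain stays invisible at a given
    output size. Downsampling by a factor k in pixel count averages
    roughly k pixels together, cutting noise by sqrt(k)."""
    shrink = math.sqrt(capture_pixels / output_pixels)
    # Solve: iso25_grain * sqrt(iso / 25) / shrink <= visibility_threshold
    return 25 * (visibility_threshold * shrink / iso25_grain) ** 2

# Shrinking a capture to a quarter of its pixels buys two stops of speed.
```

With these made-up constants, a capture shown at full size tolerates ISO 100, and one downsampled 4:1 tolerates ISO 400, which is at least the right shape of answer for Web-sized output.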

Sheet-film photographers such as Ansel Adams affirmatively control the contrast of each image by varying the development time. This is not possible with roll film, e.g., 35mm, where the entire roll must be developed at once. However, by using wide-latitude film and making multiple exposures at different shutter speeds, it should be possible via digital processing to assemble a final image of any desired contrast, with detail in the shadows and highlights limited only by the output medium. If the film or imaging sensor shares the property of silver-halide crystals that a certain threshold number of photons is needed before any image registers, it may be desirable to pre-flash the entire frame before trying to capture shadow detail. The question here: How many exposures, and of what kind, do I need with a static subject to completely capture the scene for all practical aesthetic purposes?
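A sketch of that digital assembly, assuming a linear sensor response and a midtone-weighting scheme of my own choosing:

```python
import numpy as np

def merge_exposures(frames, shutter_times):
    """Combine bracketed shots of a static scene into one estimate of
    scene radiance. frames: float arrays in [0, 1]; shutter_times: the
    corresponding exposure times in seconds."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for frame, t in zip(frames, shutter_times):
        # Pixels near 0 or 1 are under- or over-exposed in this frame,
        # so weight the well-exposed midtones most heavily.
        w = 1.0 - np.abs(2.0 * frame - 1.0)
        num += w * frame / t      # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

The merged radiance can then be re-curved to whatever contrast is wanted, which is exactly the varying-development-time control moved into software.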

Producing exhibitions

Why is it that a book like Manhole Covers (MIT Press) is art and a collection of snapshots, each containing a manhole, is not? If a principal reason that Manhole Covers is art is the consistency of presentation, i.e., same cropping, same proportions, same toning, same grain, then can we turn snapshots into art under computer control? Can we help the would-be artist by giving a collection of photos a series of "looks" and letting the artist choose the best one? (Note that in a photo exhibition, you usually see only one kind of photo, e.g., black and white prints of 16x20" in size, framed in silver aluminum, or vertical Cibachromes, 20x24" in size, all from Fuji Velvia originals.)

-- Philip Greenspun


Response to Formalizing Photographic Aesthetics

Imagine locating many different photographs in a multidimensional space, whose dimensions need to be figured out (character of the color, depth of field, composition, and 3 to 6 others). This space needs to characterize, gracefully and reasonably, a great variety of different photographic styles that we may seek at one time or another to emulate.

The task of the automatic-aesthetic camera is to sample that space so that a reasonable number of points (photographs) are available for post-shoot processing. One picture cannot capture all you want. So smart bracketing replaces traditional, naive bracketing (which just brackets on +/- exposure steps around the original shot): it provides a set of wildly different pictures as it systematically moves through the multidimensional space of possible photographic styles, all applied to the same scene.
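Smart bracketing amounts to enumerating sample points in that style space, one rendering recipe per point. A toy sketch (the axis names and grid resolutions are placeholders for whatever dimensions turn out to matter):

```python
import itertools
import numpy as np

# A toy style space: each axis is one aesthetic variable scaled to
# [0, 1]. Four axes here; the essay suggests 4 to 10 in practice.
AXES = {"saturation": 3, "contrast": 3, "grain": 2, "warmth": 2}

def smart_bracket():
    """Yield one rendering recipe per sample point of the style space,
    instead of bracketing on exposure alone."""
    grids = [np.linspace(0.0, 1.0, n) for n in AXES.values()]
    for combo in itertools.product(*grids):
        yield dict(zip(AXES, combo))

recipes = list(smart_bracket())  # 3 * 3 * 2 * 2 = 36 recipes per scene
```

Even this coarse grid gives 36 distinct renderings of one scene to choose among afterward, which is the point: the sampling, not the photographer, explores the space.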

This is similar to what you are now trying to do with nouns (names of different photographers), as each noun in effect describes levels of 4 to 10 variables. So you could characterize 20 photographers by where they are located on the fundamental aesthetic variables. But my proposed method of sampling a multivariate space will produce combinations of aesthetic variables that no one has yet come up with or turned into a style.

This all would displace the act of making a photographic style from camera controls to computer editing. Other than editing, the key action in making a good photograph then becomes a matter of just showing up, which it already is. But with the AAC multiples to edit afterwards, you get a lot more information out of just showing up.

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

What an interesting pair of comments. It will take me a long time to think through all the possibilities you have uncovered.

I gave up photography about 5 years ago, mainly because it was too expensive for me to do what I wanted at the quality level I wanted. I decided to wait until digital photography had advanced enough.

Several years later, it seems to be almost there.

I'm looking forward to the widespread adoption of the X3 camera chip. You can read about it here:

http://www.discover.com/issues/dec-02/features/featphoto/ (requires registration)

Thanks for the insights,

-dp-

[link updated January 2005]

-- David Person (email)


Response to Formalizing Photographic Aesthetics

The comment about consistency of style distinguishing "art" from "snapshots" is reminiscent of some ideas developed at greater length by Janet Malcolm in Diana and Nikon, where she discusses a show (compiled by John Szarkowski at MOMA if I recall correctly) many years ago in which amateurs' snapshots were presented side by side with photographs by recognized artists such as Lartigue, Cartier-Bresson, and Kertesz. Malcolm notes that the amateurs' pictures were often superior, when viewed as art, to those of the artists.

Malcolm revisits this train of thought in the latest edition of the book (Aperture, 1997) in a new chapter on the different ways two curators have presented the Storyville photographs by Bellocq (one exhibition portrays them as art; another aims to demolish the myths surrounding Bellocq and position his pictures as ordinary commercial portraits).

Another variation on these ideas is considered by Thomas Weski in his introduction to William Eggleston's Los Alamos.

-- Jonathan Gilligan (email)


Response to Formalizing Photographic Aesthetics

Image processing is still in its infancy, I believe.

Here are two products which exploit simple concepts, and package them into easy-to-use applications:

Whiteboard capture: This program takes a photo of a whiteboard, adjusts the exposure, perspective, lighting variations, etc., automatically. http://www.polyvision.com/products/wbp.asp

Stills from video: a moving video image contains a lot of information lost in a single frame capture, even with digital video. This product produces a very nice still image from a video. http://www.salientstills.com/product/product_intro.html

[link updated January 2005]

-- David Person (email)


Response to Formalizing Photographic Aesthetics

Jeffrey Berg's good point is nicely demonstrated. A lot of information is hidden in film as well. It looks like the dynamic range of any particular image is narrowed, so variations in contrast catch the full dynamic range of film and digital imaging. That dynamic range is, I believe, several orders of magnitude less than that of the human eye.

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

It appears Philip and Edward are posing two questions.

For Edward's AAC question: Given the nature of optics (Fourier transforms, Wien's law, Stefan's law, Maxwell's equations, etc.), it seems to me you need to show up at least twice, but in slightly different locations. The biggest problem is focus: the location of a point of light that becomes a circle of confusion is irrecoverable information after you place it next to all its neighbors. I simply can't figure out which bright point in 3D led to which pale disc in 2D.

So the only option for an AAC with depth-of-field control is to make stereo images. The easy version (for the photographer): take a few pictures with a lot of overlap, using a small aperture to get the maximum volume in focus, then use stitching software to correlate the overlap regions and, knowing where the camera was in a relative coordinate system, determine the distance of each point from the various camera locations. The hard version (for the photographer): make a systematic set of images of the scene like a CT scanner would. In either case you get a 3D virtual scene which can be displayed on screen, and you can choose position, lens focal length, depth of field, light sources, cropping, etc. It seems to me this is what digital animators do all day long.

Now for Philip's controls-on-a-camera question: the controls currently employed represent the fewest necessary to reach the entire aesthetic space for the exposure. John Lind does a good job relating physics to photography via the Exposure Value system http://johnlind.tripod.com/science/ ; the ISO, shutter speed, lighting, and aperture variables are orthogonal and necessary. Each CCD on Hubble has an optimal sensitivity; each Hubble image was taken over a known period of time, the light was of known intensity, the aperture diameter was known; each Hubble lens has a focal length and diameter. There are a few variables beyond the EV system: 1) medium response varies with wavelength, exposure time, and intensity, and 2) intensity distribution over frequency varies with temperature (blackbody radiation: Stefan's and Wien's laws). Everything else, warming, cropping, etc., can be accomplished in post-production.
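The EV arithmetic is compact enough to write down. A sketch in Python (the ISO term is the usual offset relative to ISO 100):

```python
import math

def exposure_value(aperture, shutter_time, iso=100):
    """Exposure Value in the system Lind describes: EV = log2(N^2 / t)
    at ISO 100, with faster media shifting the scale by log2(ISO/100).
    aperture is the f-number N; shutter_time is t in seconds."""
    return math.log2(aperture ** 2 / shutter_time) + math.log2(iso / 100)

# f/2.8 at 1/60 s, ISO 100 works out to about EV 8.9.
```

Any combination of the three variables giving the same EV yields the same exposure, which is why these few controls span the whole exposure space.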

But this issue of camera controls is really a subset of the AAC question. If you can solve the AAC problem, then the "controls" problem is easy: Mapplethorpe used his gear a few particular ways, Weston used his gear quite a few ways, but they're all replicable in post-production. Photoshop Elements has a host of "Recipes"; we would just need a version of Photoshop capable of handling a few more variables (aperture, shutter speed, focal length, sensitivity, light sources). Perhaps a relational scheme could be developed to navigate the recipes by more than a hierarchical method: I want more warmth than Eggleston, less DOF than Meyerowitz, fewer frames than Hockney, but just the same correction as Renger-Patzsch's _"Katharina" Colliery in Essen, 1954_.

-- Niels Olson (email)


Response to Formalizing Photographic Aesthetics

This is probably too simplistic an approach to be of interest; however...

There's a big difference between oils and watercolor. By layering up glazes you allow the light to go through those layers of color: a dark ground will absorb and a light ground will transmit. The luminosity of a computer screen gives you a very bright "ground" (as do slides). Unfortunately the print process does not allow for this level of luminosity unless you print onto a translucent material.

In my experience, if you create an image in Photoshop in the same way in which you create an oil painting, you are able to reveal many aspects of the image which remain hidden in a single layer.

All of that information is already in the digital capture (given a high-resolution image). It is simply up to you to reveal it in separate layers, emphasizing what you wish to emphasize and vice versa.

The power of the image is still chiefly conveyed through other types of narrative- POV and Style.

-- Ziska Childs (email)


Response to Formalizing Photographic Aesthetics

The camera probably should take multiple images (each with a different style filter) from the same position, so that there is fairly full information about each style in the sequence of bracketed images. Then the editing would involve a choice among styles or finding various combinations or intermediate points in the style space.

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

Not unlike sound recording, the "multiple" filters are there in Photoshop. Layers work almost exactly like multiple tracks. A single recording can give you multiple effects; combined, they can give an almost infinite number of effects.

-- Ziska Childs (email)


Response to Formalizing Photographic Aesthetics

More technically-minded people might be interested in the work of Columbia Computer Science professor Shree Nayar. He and his students have built a number of unusual computer-aided cameras.

They can, for example, take very high dynamic range images with a single shot, or create a 360-degree 3D image in one shot, or create photos that can be edited with scene-appropriate lighting still in place. These cameras often use mirrored or non-traditional lenses; computer post-processing then makes the images more suitable for viewing.

A gallery of some images can be found here: http://www1.cs.columbia.edu/CAVE/

-- Mike Cafarella (email)


Response to Formalizing Photographic Aesthetics

Mike,

Thank you very much for this!

-- Ziska Childs (email)


Response to Formalizing Photographic Aesthetics

http://www.geospatial-online.com/geospatialsolutions/article/articleDetail.jsp?id=125725

-- Ziska Childs (email)


Response to Formalizing Photographic Aesthetics

Thinking back to the beginning of the thread, I would love my digital camera to have settings such as: Ansel Adams, Doisneau, Koudelka, Lomo, etc. It might even encourage the younger generation to explore the past masters.

For now, one of the best and most creative attempts at getting to nuanced black and white pictures from digital originals comes from Daan van Eijndhoven [ http://www.digidaan.nl/index2.html ]

[images: Original; Photoshop Greyscale; Daan's Normal Contrast; Daan's High Contrast; Daan's Orange Filter]



-- Tchad (email)


Response to Formalizing Photographic Aesthetics

For a remarkable panorama, see

http://www.digidaan.nl/index2.html

I like to do this sort of thing, in color, for landscape photographs of sculpture installations. Any advice?

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

A Widelux would do the job. Depending on the field you wish to capture, a Brooks Veriwide may work too. You can also try putting a 2 1/4 x 3 1/4 film back on a 4x5 and using a 65mm rectilinear lens; this is essentially what a Brooks Veriwide is. But a camera like the Widelux, with a rotating lens, will do the job best and provide the greatest angle of acceptance.

If you're simply looking for a strong foreground image while capturing a fairly large field in the background, you can get by with a very wide rectilinear lens such as a 65mm for 4x5 or 18-21mm for the 35mm format. However, you must remember to keep the film plane parallel to the foreground object or you will get keystoning, etc.

-- Ed Mikula (email)


Response to Formalizing Photographic Aesthetics

-- "Traditional cameras have controls that are
-- artifacts of the physical process of silver-halide
-- photography, e.g., aperture and shutter speed."

This is the case for many recently computerized crafts. Consider music production. One could imagine controls that are truly expressive and imaginative, yet the software goes to *excruciatingly painful* lengths to duplicate the constrictive interfaces of old-school hardware. I'm talking about software that duplicates the 2-line text LCDs of the old hardware, or includes special UI code so that a mouse can be (clumsily) used to turn a knob!

One of the most popular software packages actually duplicates the tangled wires behind devices, forcing the user to plug and patch (I suppose we should be grateful that they don't make us crawl on the floor).

In case you think me guilty of hyperbole, witness here:

[screenshot]

I'd like to see cameras, and audio devices, and all kinds of computerized tools that have controls mapped to the intentions of the creator, not duplicating archaic abstractions.

-- LeMel (email)


Response to Formalizing Photographic Aesthetics

I suppose you're right (need to see the alternatives), but gosh, that patch board looks good and probably helps those making the transition from physical to electronic links. Is the schematic diagram the more appropriate metaphor?

Is there the same problem with computer-screen images of mixers with their pots and sliders?

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

Here is a 2.5 billion pixel (78,797 x 31,565) panorama made by stitching 600 photo frames together.

http://www.tpd.tno.nl/smartsite966.html

If you click on the 'Related Work' link you will find a panorama of Bryce Canyon National Park Utah.

-- Andrew Nicholls (email)


More Panoramas

Check out this wonderful series of 360-degree panoramas from John Brownlow, one of my very favorite photographers.

-- Alex Merz (email)


Response to Formalizing Photographic Aesthetics

Can these still-land wonders be compared with a video camera view?

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

I'd like to see cameras, and audio devices, and all kinds of computerized tools that have controls mapped to the intentions of the creator, not duplicating archaic abstractions.

I agree with Dr. Tufte's comment regarding the patch-cord interface. I tend to disagree with Dr. Greenspun concerning the features added to cameras. In my experience I find that I merely want as fine a resolution and as much memory and camera recording speed as I can afford. The other features seem to be shadow art and only serve to confuse the issue, which is: if an image is not on the negative or in the data file, you can't manipulate it. The works of most photographers are as much a product of the darkroom as the camera. While what Dr. Greenspun proposes would be a great addition to Photoshop or Microsoft Picture It!, do we really need more things to fiddle with while recording a camera image? Evolving electronic camera controls to exposure time, lens opening, and recording-surface speed would make sense. While an Anne Geddes or Precious Moments filter may have commercial appeal, you can't package ability. However, paint-by-numbers is enjoying a resurgence, so maybe Dr. Greenspun is on to something.

-- Ed Mikula (email)


Response to Formalizing Photographic Aesthetics

Here is a revised statement of Philip Greenspun's objective. I happened across this in a March 2003 article Philip wrote about his hypothetical lab, Labus Novus.

Commercial tools such as Adobe PhotoShop address the needs of graphic artists fitting a single photo into a spot on a printed page. Labus Novus will build a system that directly supports photographers' aesthetic goals. First and foremost is assisting with the challenge of mapping the real world's very high contrast into the narrow range of contrast that can be displayed on a monitor or printed on paper. Second we would like to give photographers aesthetic and stylistic controls rather than options such as "Gaussian blur". Third we want to allow the processing of photos in a group, e.g., "let's see whether my 85 vacation snapshots from Greece will look better in black and white." Fourth we want to support the presentation of photos on the Web and to mobile devices.
This is markedly different from the idea originally presented, which was to improve the controls on board the camera. This iteration seems more in line with the reports of the discussants above.
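Greenspun's first goal, squeezing the world's very high contrast into a display's narrow range, is the province of tone mapping. A minimal global operator, my choice of example rather than anything from the Labus Novus article:

```python
import numpy as np

def compress_contrast(radiance):
    """Map scene radiance (any dynamic range, >= 0) into the [0, 1)
    range a monitor or print can show. L / (1 + L) is the simple
    global operator of Reinhard et al.: shadows pass through nearly
    linearly while highlights roll off smoothly instead of clipping."""
    radiance = np.asarray(radiance, dtype=float)
    return radiance / (1.0 + radiance)
```

A system that "directly supports photographers' aesthetic goals" would presumably expose the shape of this curve as the control, rather than the raw pixel operations.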

-- Niels Olson (email)


Response to Formalizing Photographic Aesthetics

Niels' comment appears, appropriately, as Philip visited here today. We took a sculpture walk and Philip photographed a new sculpture piece in progress. He somehow routinely creates superbly composed documentary photographs while working at a quick snapshot pace. That's probably because he's taken hundreds of thousands of photographs.

Earlier today I took about 100 medium-resolution photographs on an old dairy farm that's being looked over for a landscape sculpture field. To document (the key word here) the farm for later contemplative review, my strategy was to take a sequence of wide-angle shots sweeping around 360 degrees at 5 or 6 elevated points scattered over the 96 acres of pastures and fields on the farm. (The medium-resolution mode, yielding some 340 images on a 1 GB memory card, helps the download memory keep up, since after about 5 TIFF- or RAW-resolution photos the camera shuts down while downloading the big files.) In many of the pictures I tried to include a person in the scene for scaling, since the dogs did not come along to the farm. The wide-angle zoom lens also allowed good close-ups, and so I avoided changing lenses on the digital SLR camera (risking specks on the CCD chip) while walking around the farm. (Camera = Nikon D100, with 20-35mm 2.8 zoom, which translates into something like a 30-50mm zoom because the lens focuses on the smaller digital chip rather than on the larger piece of 35mm film, if I have that right. The manual, if I could only find it, must give the scaling factor.)

Right now this is all talk, since the photographs have not yet been downloaded to my computer and I've only seen the ridiculously small views on the little screen on the back of the camera. (Digital cameras would be greatly improved by a larger and higher-resolution review screen, so more in-camera discarding of failed photographs could be done and so it would be easier to review a bunch of pictures with a colleague from the back of the camera, benefits I would quickly trade for several megapixels of resolution in the original photographic image itself.)

Most photographic aesthetics result from the scene and from those who hold the camera, something everyone already knows. More interestingly, it might be helpful to formalize photographic strategies that camera users can employ. I guess this is what books on photography are about. My main strategy is to fill the frame in an interesting way with a fresh and interesting scene. The importance of the scene means, for me anyway, that about 80% of photography is just showing up--that is, going out and finding good scenes. (I use the 80% slogan to motivate myself to go out and take pictures when the snow is deep and cold, or to travel to see something fresh.) My compositional strategies are largely automatic, probably because I already devote most of my time to thinking about composition and how visual elements interact in flatland and spaceland and spacetimeland.

Is there a really good book on photographic aesthetic strategies? Philip mentioned today a more general book by John Szarkowski (of MOMA) on photography. We also looked at a few prints in a series called "Moving Pictures," pictures I've taken through the window of the New Haven-Washington train (great images south of Philadelphia) and tour buses. At least this is a fine exercise in rapid composition of photographs!

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

... on photographic aesthetic strategies.

I'd say that developing these would be the same as in any of the arts: make a lot of stuff, look at a lot of stuff, think about it a lot, make a lot more stuff, see a lot more stuff, think about it a lot more, and so on. Remember Mondrian's admonition: "First comes the work," and Eliot's adage (here paraphrased): "Good poets borrow, great poets steal."

Good works of art will, over time, reveal the varied "strategies" of their makers; cultivating a study of these will bear riper fruit in developing one's own aesthetic, over time, than a tour through the how-to manual (not that that doesn't have value). One to look at is brought to mind by Dr. Tufte's final note: John Szarkowski's beautifully produced book, "The Idea of Louis Sullivan," 1956 originally and reprinted by Bulfinch in 2000. A magnificent set of black and white photographs on the work of the master American architect; as complete a synthesis of "scene," aesthetic insight, photographic skill, and commentary as I've seen.

-- Steve Sprague (email)


Response to Formalizing Photographic Aesthetics

I just reread Jeffrey Berg's post above regarding the risks in misreading a medical image. The medical community seems to be pretty up on this: they've been using the DICOM standard for some time now. If you can get your hands on some DICOM files (they're big), several viewer programs are available for free. The next time you get an x-ray, MRI, or other imaging study, take an empty Zip disk and a writable CD with you. It's quite likely your study is digital and the technician can give you the images.

-- Niels Olson (email)


Response to Formalizing Photographic Aesthetics

Flickr's Technique forum is developing a number of recipes. Faking the "Lomo Effect" is a particularly good example.

-- Niels Olson (email)


Response to Formalizing Photographic Aesthetics

Re lomo, very interesting.

PDN, my favorite photographic magazine, has lots of lomo pictures.

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

One contemporary photographer creating outstanding panoramas, and dealing with the problems of converting from spaceland to flatland, is Andreas Gursky:

http://www.moma.org/exhibitions/2001/gursky/

Although I don't know the specifics, he manipulates his photographs to remove distortion, with astonishing results.

-- robert simmon (email)


Response to Formalizing Photographic Aesthetics

Just for fun, here are some experiments using Photoshop to generate the Lomo effect. Here are 3 before/afters.

Cutting stainless steel: The lomo processing lights up the center of action in the picture and fortuitously obscures the uninteresting top one-fifth.

Bird #3: Extra brightness and contrast in the sculpture is good; the lomo treatment here is perhaps a bit overdone, creating a Hollywoodish look.

Electrical grid: I took this picture from our tour bus in the Mojave Desert, when we went from Los Angeles to Phoenix. The lomo effect here is interesting but overdone. Maybe a more subtle lomo treatment would be better.

-- Edward Tufte


A lomo control panel in Photoshop with semi-auto processing

Here's what I envision: Open Photoshop and click file/open... Select a series of photographs to lomoize, say, all the architectural pictures I took in the French Quarter, about 200. Click on batch operations, select the multivariate adjustment of choice from a short list of popular multivariate options (lomo, black&white, etc), and then the first picture opens with a custom control panel for, say, lomo, that contains sliders and adjoining numeric input windows that immediately implement on screen so you can eyeball contrast, brightness, saturation, and vignetting feather and gradient. Click save, it saves and automatically opens the next picture. Semi-auto.

If you wanted to convert a batch of silo pictures to black and white, a black-and-white panel could be selected that would contain sliders for levels, contrast, and saturation. Other adjustments would still be available through the usual interface, but this could increase throughput when the desired output falls within a well-understood category.
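The recipe behind such a slider panel can be sketched. Here is a minimal lomo-style transform in Python with NumPy (the three knob defaults are invented for illustration, not calibrated to any actual Lomo camera); the batch loop would simply call this on each opened file:

```python
import numpy as np

def lomoize(rgb, saturation=1.4, contrast=1.2, vignette=0.6):
    """Apply a lomo-style recipe to a float RGB image in [0, 1]."""
    h, w, _ = rgb.shape
    # Saturation: push each channel away from the pixel's gray value.
    gray = rgb.mean(axis=2, keepdims=True)
    out = gray + saturation * (rgb - gray)
    # Contrast: expand around middle gray.
    out = 0.5 + contrast * (out - 0.5)
    # Vignette: quadratic radial falloff, darkest at the corners.
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    out *= (1.0 - vignette * r ** 2)[..., None]
    return np.clip(out, 0.0, 1.0)
```

The three keyword arguments are exactly the sliders of the imagined panel; a different panel (black-and-white, say) would expose a different small set of knobs over a different recipe.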

-- Niels Olson (email)


Response to Formalizing Photographic Aesthetics, NOW with amazing lomo effects!

Here's a real, non-Photoshop lomo effect that resulted from the actual available light shining on the pieces Escaping Flatland 5, 6, and 7. The dark surround brightens the areas surrounded.

-- Edward Tufte


Response to Formalizing Photographic Aesthetics

The world is full of books that purport to explain how to use a camera to see better, but (no surprise) most of them are poor. Based on my own non-random sample of the genre, I would recommend the first six chapters of Bruce Barnbaum's "The Art of Photography." (The rest of the book is technical, until the last three chapters, which are philosophical; all of it is valuable, but the first six chapters are on point to the thread.) Barnbaum is an experienced photographer who has also spent a lot of time teaching photography and thinking about what he does in both photography and its teaching. The book is self-published; it is the latest revision of notes he prepared for teaching his workshops. If you would like a copy, you have to send him the money directly. All of which is a recommendation. See www.barnbaum.com

-- Lewis LaRue (email)


Digital Black and White

Another digital black and white technique http://www.prime-junta.net/pont/How_to/n_Digital_BW/a_Digital_Black_and_White.html

Look out for the awkward navigation at the bottom of the page to continue reading the article [ 1 ] > [ 2 ]

-- Tchad (email)


This may well be old news to most on these boards, but I thought this link might be of interest - http://www.cliffordross.com. It describes the photos of Clifford Ross and his incredible R1 camera.

The following is an excerpt from a New York Times article about Mr. Ross and the R1.

"Mr. Ross pulls a cloth over his head and the back of his contraption when he takes a picture. But with this camera that he concocted out of 60-year-old camera parts, mirrors, a microscope and other items - none of them digital - Mr. Ross has taken photographs on 9-by-18-inch negatives that when slowly processed by hand and digitally scanned contain 100 times as much data as the average professional digital camera.

For example, in the mountain photographs that Mr. Ross took in Colorado - of Mount Sopris, near Carbondale - shingles on a barn appear in sharp focus 4,000 feet from the camera, as does a tree on a ridge four miles away."

-- Estes (email)


The work is very interesting. I wonder if he could photograph my sculptures as the light changes? If the camera stays in the same position on a tripod, as his does, then we could bracket exposures to catch the extremes of sun reflection and shadows, and then stitch them together to get excellent dynamic range, which is a great big problem in photographing the stainless steel pieces.

-- Edward Tufte


To obtain high dynamic range images from multiple exposures, see:

http://www1.cs.columbia.edu/CAVE/software/rascal/rrhome.php

for some examples see:

http://www1.cs.columbia.edu/CAVE/software/rascal/rrslrr.php

See this paper and software for object relighting:

http://grail.cs.washington.edu/projects/photomontage/

Both work quite well for me.

Cheers, Ivan

-- itt (email)


Ross, I believe, is using the same recon camera back as the gigapxl project.

-- Niels Olson (email)


Paul Debevec has another page on high dynamic range, Recovering High Dynamic Range Radiance Maps from Photographs, and an interesting paper on the same. Presumably the HDRShop software can create these HDR images from a series of photos taken at different shutter speeds, but I have not personally used it. (I'm currently trying to work out the math by hand as detailed in the paper because it's more fun for me and because I lack Windows.)
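For anyone else working through the math by hand: assuming a linear sensor response (the paper's real contribution is recovering the nonlinear response curve g, which this sketch skips), the core radiance estimate is just a weighted average of pixel-value divided by exposure time across the bracketed shots, with a "hat" weight that discounts near-clipped values. The data below are simulated, not from any real camera:

```python
import numpy as np

def hat_weight(z, lo=0.05, hi=0.95):
    """Triangle ("hat") weight: trust mid-range pixel values, discount near-clipped ones."""
    mid = (lo + hi) / 2
    w = np.where(z <= mid, z - lo, hi - z)
    return np.clip(w, 0, None)

def recover_radiance(images, times):
    """Weighted average of pixel/exposure-time over bracketed shots (linear sensor assumed)."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for z, t in zip(images, times):
        w = hat_weight(z)
        num += w * z / t
        den += w
    return num / np.maximum(den, 1e-12)

# Simulate a high-dynamic-range scene and three bracketed exposures
rng = np.random.default_rng(0)
radiance = rng.uniform(0.5, 50.0, size=(8, 8))
times = [1 / 500, 1 / 60, 1 / 8]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]  # sensor clips at 1.0

recovered = recover_radiance(shots, times)  # matches the true radiance map
```

Every pixel blown out in the long exposure gets its estimate from the shorter ones, and vice versa, which is the whole point of bracketing.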

-- Patrick Mahoney (email)


The Fuji S3 Pro uses high and low sensitivity detectors to achieve higher dynamic range than most digital SLRs. I don't own one, so can't vouch for it, but the technology intrigued me.

-- Robert Simmon (email)


The S3 pursues a very interesting idea: building increased dynamic range into the CCD chip itself. The S3 has a Nikon lens mount (almost a necessity for me) and a digital equivalent of Fuji Velvia's intense color saturation (which I like). Not all the test comparisons in the long review (link immediately above) favor the S3, and a few of the comparisons appear ill-controlled.

Since my photographed objects that have a dynamic range problem are perfectly still (the stainless steel pieces), I'll try bracketing exposures with the camera on a tripod and then stitching the images together in Photoshop or a plug-in to Photoshop. Or how about printing a stereo pair, one under-exposed, the other over-exposed, and then have viewers do their own stitching to get detail in both the lights and the darks when they combine the images to make the apparent 3-D view? Probably a Kindly Contributor will tell us why that won't work, however.
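In outline, the merge step weights each bracketed frame by how well-exposed each pixel is and then averages. Here is a minimal sketch of that idea (a bare-bones version of the well-exposedness term from exposure fusion, not what Photoshop or any particular plug-in actually does internally; the three "brackets" are simulated):

```python
import numpy as np

def exposure_fusion(frames, sigma=0.2):
    """Blend bracketed frames (values in [0, 1]) by per-pixel well-exposedness:
    pixels near mid-gray get high weight, blown or blocked pixels get low weight."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.sum(weights, axis=0)
    fused = sum(w * f for w, f in zip(weights, frames)) / total
    return fused

# Three simulated brackets of the same grayscale scene: under, normal, over
scene = np.linspace(0.0, 1.0, 16).reshape(4, 4)
under = np.clip(scene * 0.5, 0, 1)   # -1 stop
normal = np.clip(scene, 0, 1)
over = np.clip(scene * 2.0, 0, 1)    # +1 stop; highlights clip

fused = exposure_fusion([under, normal, over])
```

The fused image draws its highlights from the underexposed frame and its shadows from the overexposed one, which is exactly what the tripod-and-bracket approach is after.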

The loss of dynamic range has become noticeable as we've been converting sculpture photos for CMYK (+ a touchplate) printing in Beautiful Evidence. Our test images were originally printed on an Epson and they look amazing--and of course set too high a standard for CMYK+ printing. And the images look good on a computer screen, for backlighting is vivid compared to reflected light off of paper. There are a lot of stages between the glowing reality seen by the eyes of the live observer and the final published flatland version seen by readers.

The other trade-off involved is coated stock vs. uncoated stock for printing. On coated stock, the ink sits up, making the image bright, saturated, crisp, and apparently of somewhat greater dynamic range. But there is a big cost: there are often all sorts of reflections of local ambient light off of the coated stock. Thus I've flirted with coated stock for the last 3 books, or at least a mix of coated and uncoated stock for different forms, but have always come around to uncoated stock because of the optically annoying glossy reflections. In Beautiful Evidence, we're using an uncoated stock (Mohawk Options), which, according to our test printing, seems to have pretty good ink hold-out. Options seems to get us about 30% to 50% of the way to coated stock (compared with our usual Dulcet stock).

-- Edward Tufte


Dynamic range is a sensor property, not a processing feature. If the information isn't in the film or the digital file, it can't be created by the absence of in-camera processing (= RAW files) or by in-camera processing, or by post-shoot Photoshop.

I've taken all sorts of pictures of stainless steel sculptures by David Smith and others and, with sun and shadow, something is always blown out--seemingly more so in digital than in Velvia. And neither comes close to direct observation by eye.

The point of the Fuji S3 is to design a chip that builds in multiple sensors to try to increase the dynamic range. That is the point of multiple exposures on a tripod as well. These efforts work at the sensor level, where the dynamic range work needs to be done, and have nothing to do with processing in RAW or not.

-- Edward Tufte


First, the actual performance of the S3; more on that after this quote. Here is the first article from a Google search on dynamic range ccd velvia, Photoshop for Astrophotographers:

Different recording media can accommodate a variety of dynamic ranges. Early model consumer digital cameras could capture detail in a brightness range of only perhaps 3 to 4 F/stops and the latest models can record about 6 F/stops. High contrast transparency films, such as Kodachrome and Velvia, can capture about 5 - 6 F/stops. Color negative film can capture 8 or more F/stops of usable detail if correctly exposed.

Note that if we measure the actual optical density of transparency film, there will be more optical dynamic range from the absolute blackest black (the D-Max) to the absolute whitest white (totally transparent) than recording dynamic range. This optical density range is greater than the range that the film can actually record detail in, so the effective useful recording dynamic range is less than the optical density would indicate. For instance, Kodak's Kodachrome has a D-Max of about 3.7, which is almost twelve stops, but its usable dynamic range is only about six stops.

Astronomical CCD cameras have a dynamic range of about 10 to 11 F/stops.

Thom Hogan reports about 7.5 stops of dynamic range with the D100 or S2. If Fuji's claims are correct, then they're realizing about a 2-stop (400%) improvement. However, it's not at all clear to me that those extra stops can be realized in prints, even from an Epson printer. I'm even more pessimistic about CMYK.
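The unit conversions behind these figures are simple: a stop is a doubling, so a contrast ratio converts to stops via log2, and film density is a base-10 logarithm, so D-Max converts via d / log10(2). A quick check of the numbers above, using only Python's standard library:

```python
import math

def ratio_to_stops(ratio):
    """A stop is a factor of two, so stops = log2(contrast ratio)."""
    return math.log2(ratio)

def density_to_stops(d):
    """Film density is log10 of opacity; convert the base-10 log to base 2."""
    return d / math.log10(2)

print(round(density_to_stops(3.7), 1))  # Kodachrome D-Max 3.7 -> 12.3 stops
print(ratio_to_stops(4))                # Fuji's claimed 400% -> 2.0 stops
print(round(2 ** 7.5))                  # 7.5 stops -> ~181:1 contrast ratio
```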

There are also certain deceits the visual system plays on the rest of the brain in real time. The eye very quickly changes its effective aperture, by dilating or constricting the pupil, based on what part of a scene falls on the fovea, the very central high-resolution region of the retina, which is only a couple of millimeters across. The subconscious also considers the ambient light of the entire scene. There are other control systems for enhancing visual acuity, lateral inhibition (the classic center-surround example) among them; it increases contrast through neural processing in the retina.

Some additional articles by Roger N Clark and others have some interesting tidbits. I found them with the search dynamic range human eye stops.

-- Niels Olson (email)


One last one, and I'm done. From the Wikipedia:

At any given instant, the retina can resolve a contrast ratio of around 100:1 (about 6 1/2 stops). As soon as your eye moves (saccades) it re-adjusts its exposure both chemically and by adjusting the iris. Initial dark adaptation takes place in approximately four seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal chemistry (the Purkinje effect) are mostly complete in thirty minutes. Hence, over time, a contrast ratio of about 1,000,000:1 (about 20 stops) can be resolved. The process is nonlinear and multifaceted, so an interruption by light nearly starts the adaptation process over again. Full adaptation is dependent on good blood flow; thus dark adaptation may be hampered by poor circulation, and vasoconstrictors like alcohol or tobacco.

There is also the phenomenon of bleaching: too much light can overwhelm the retinal pigments. Hence the spot seen after staring at a bright light, or a white sheet of paper looking faintly red after one stares at a green sheet of paper.

20 stops of resolution seems enormous to me. Racing mountain bikes in forested areas, one experiences tremendous variations in light, ranging from the darkness under the edges of roots to full sunlight in the same "frame," all at 3 to 30 miles an hour, with the central axis of the visual field regularly, forcibly ranging from 45 degrees above the horizon to 45 degrees below it. Imagine a wooden rollercoaster inside a forest, but with a terrible need to understand what one is seeing because there are no rails. The decision points come very fast and have serious consequences. Trees hurt, but not as much as the rocks at the bottom of a ravine. The sensations of blow-out and blur are quite familiar to this crowd. I suspect the five to eight stops of dynamic range that engineers seem to have been targeting in most production films and DSLRs may hint at the working range of the eye most of the time.

-- Niels Olson (email)


One solution to the dynamic range problem is to photograph the Flatland series of sculptures using extended exposure techniques. You can make the long exposure either by the light of the moon at night or by using a neutral density filter during the day.

Extended exposures can also eliminate shadows: as the shadows move with the sun or moon, you get an averaging effect.

There is a brief tutorial here -

http://www.danheller.com/tech-longexp.html
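The exposure arithmetic for the neutral density approach is simple: each stop of ND doubles the required shutter time, so a base exposure t becomes t * 2**n behind an n-stop filter. A hypothetical example (the filter strength and base exposure are made up for illustration):

```python
def extended_exposure(base_seconds, nd_stops):
    """Each stop of neutral density doubles the required exposure time."""
    return base_seconds * 2 ** nd_stops

# A 1/60 s daylight exposure behind a 10-stop ND filter (an "ND1000"):
t = extended_exposure(1 / 60, 10)
print(round(t, 1))  # -> 17.1 seconds
```

Seventeen seconds is already long enough to blur moving foliage and soften shadow edges; moonlight exposures run far longer still.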

-- Andrew Nicholls (email)


Response to Formalizing photographic aesthetics (Photosynth software)

I recently discovered the Photosynth tool from Microsoft Labs (http://labs.live.com/photosynth) via an online video demonstration from TED (http://www.ted.com/index.php/talks/view/id/129).

I could see this new technology (SeaDragon) having implications for a number of topics on the ET discussion boards (perhaps a new topic thread is in order?).

After watching the presentation and restarting my Macbook Pro into Windows XP to test it out, I was reminded of this thought experiment from ET:

"Imagine locating many different photographs in a multidimensional space, whose dimensions need to be figured out (character of the color, depth of field, composition, and 3 to 6 others)."

As a photographer, that statement has challenged my thinking about the aesthetic choices one makes with a camera. Exploring the data set of images of San Marco's Square or quickly inspecting the 80 megapixel detail of artist Gary Faigin's work (Photosynth coordinates: http://url.sepukoo.com/8Tb7) is an experience that warrants further study. I would be pleased to read other reactions to this.

In response to ET's statement above - I could see this application's ability to map camera angles, positions, and focal lengths serving as a useful tool to investigate and teach the photographic techniques discussed above. If only it ran natively on a Mac - I can only imagine its magnificence on a 30" Cinema Display.

-- Wyatt Hull (email)


Recovering information from single exposure

Back to Niels Olson's February 22, 2004 post: by using tricks during capture, enough information can be left in a single 2D photographic exposure so that post-processing can remove motion blur and change the focus via frequency-domain transforms.

-- Nathan Vander Wilt (email)


Hacking dappled light

Julius von Bismarck's Image Fulgurator: see Wired and the Cyber Arts exhibition. The physics are quite similar to those of dappled light.

-- Niels Olson (email)



