When is a photograph no longer a photograph? At what point is an image so “pimped out” that it leaves the realm of photography, and enters the province of illustration? If you clone a crumpled beer can out of a landscape shot, is it still a photograph? If you merge multiple shots into a single image, can you call it a photograph? If you heal all the pimples on your model’s face, is it still their photograph?
Where is the soft grey line between photography and illustration, and when do we cross it?
I suspect most people would say that the “line” is crossed once a photographer does something to alter the reality of a scene. But where is the reality line? The moment a photographer chooses to point the lens in one direction and not another, reality is altered. The direction in which a photographer points the camera introduces bias (at best) or outright fabrication (at worst). I shot the following photo, from my book INSTINCT, at Vancouver’s Canada Day celebration on July 1. There were thousands of happy people at that event, but I was a bit bored — so, to amuse myself, I sought and photographed only those people who appeared as bored as I.
On the left, the man’s yawn is real. The woman on the right was indeed, just as it appears, carefully scrutinizing a french fry. And the woman in the center was really and truly staring off into space. The scene, as shown in this photograph, is completely unaltered. Nothing’s been cloned out. Nothing’s been pasted in. No one’s had any wrinkles removed or blemishes tamed. And yet, the photo is an out-and-out lie. So does that mean it’s not a photograph? After all, I crossed the “line” by altering the reality of a scene — I photographed a joyful event and made it appear boring.
Many of you will probably say, “No, it’s still a photograph because you didn’t alter what was captured within the frame.”
Didn’t I?
I shot this image in RAW format, using a Leica M9 camera. That means I captured millions of bits of data onto a computer chip — data that is completely and utterly unintelligible without some type of computer program to interpolate and apply meaning to it. In the case of this photo, I used a software algorithm within Adobe Lightroom to artificially create an image from the digital data collected by the M9. Lightroom’s proprietary algorithms interpret that data in a way that’s unique to Lightroom. Had I used a different RAW converter, like Capture One, DxO or Aperture, the image each one created would look somewhat different.
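For the code-inclined, here’s a minimal sketch of that idea in Python. The tone curves below are pure invention, stand-ins for Lightroom’s (or anyone else’s) proprietary recipes, but they show how identical data can yield two noticeably different images:

```python
import numpy as np

# Hypothetical stand-in for a RAW capture: a plain array of linear sensor
# values, meaningless until some program decides how to render them.
raw_data = np.random.randint(0, 2**14, size=(1024, 1536)).astype(np.float64)
raw_data /= raw_data.max()  # normalize to the 0.0-1.0 range

# "Converter A" applies a gentle 2.2 gamma curve to the data...
image_a = raw_data ** (1 / 2.2)

# ..."Converter B" uses a steeper curve plus an S-shaped contrast boost.
gamma_b = raw_data ** (1 / 1.8)
image_b = 0.5 + 0.5 * np.tanh(4.0 * (gamma_b - 0.5))

# Identical input data, two different "photographs."
print(f"Mean brightness, converter A: {image_a.mean():.3f}")
print(f"Mean brightness, converter B: {image_b.mean():.3f}")
```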
I could just as easily have taken that same M9 data file and run it through a program like Metasynth. Had I done that, I wouldn’t have an image at all — I’d have a sound. That’s because Metasynth is a program that maps data from image files to various sonic attributes, which it then uses to create audio. So, if I ran this M9 capture through Metasynth, my computer would play a sound, rather than display an image. And that sound would bear no resemblance to anything I heard when I released the shutter on the M9. So is a straight RAW capture really a photograph when, in reality, it’s simply data that any software application can interpret in any way it wants?
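Here’s a crude sketch of that same reinterpretation, with the obvious caveat that Metasynth’s actual image-to-sound mapping is far more elaborate than this pixel-becomes-sample shortcut:

```python
import wave
import numpy as np

# The same sort of hypothetical sensor data as before, now destined for a speaker.
raw_data = np.random.rand(1024, 1536)

samples = raw_data.flatten()               # each pixel value becomes one audio sample
samples = (samples * 2.0 - 1.0) * 32767    # rescale to the 16-bit audio range
pcm = samples.astype(np.int16).tobytes()

with wave.open("m9_capture_as_sound.wav", "wb") as wav:
    wav.setnchannels(1)        # mono
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(44100)    # CD-quality sample rate
    wav.writeframes(pcm)

# Same data, different interpreter: now it's a sound, not a photograph.
```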
Many of you will probably say, “Yes, it’s still a photograph because the people who program photographic RAW conversion algorithms are striving for photographic realism.”
But the image you see above is in black & white, while the RAW data collected by the M9 contains a significant amount of color information. In fact, couldn’t one argue that most humans see color, so black & white images are unrealistic and therefore cross the line between photography and illustration?
If I had shot this same image on black and white negative film, no one would suggest I crossed the line between photography and illustration. Yet many seem to think that a digital black & white image does cross that line — mostly because the sensor itself has captured color data, so removing it from the photo seems fraudulent. But, if you accept the fact that RAW camera data is simply data that can be interpreted in any way, who’s to say that you can’t interpret color data as luminosity? Black and white film photographers do exactly this when they put color filters on their camera lenses. If a black and white film photographer wants a landscape photo to display a darker sky, they might shoot the scene with a yellow filter. The yellow filter decreases the luminosity of blue objects — meaning the black and white film photographer is using color as a luminosity control. Black & white film photographers sometimes take portraits with red filters on their lenses, since this tends to reduce tonal variations in the skin, and makes people “pop” out of the scene a bit better. The only difference between a black and white photo shot on black and white film, and a black and white photo shot on a color digital sensor is when the color filtration takes place. The film photographer makes his color filtration choices before taking the photo. The digital photographer makes his color filtration choices after taking the photo. Why does it matter when the actual color filtration and desaturation process occurs? If the photos yield identical results, why would the black and white film shot be a “photograph” and the black and white digital shot be an “illustration?” To me, this line is so soft and so light grey that it’s actually invisible.
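If you want to see “color as a luminosity control” spelled out, here’s a small sketch. The channel weights are chosen purely for illustration; they aren’t taken from any real filter’s transmission curve or any converter’s preset:

```python
import numpy as np

def bw_with_filter(rgb, weights):
    """Convert an RGB image to black & white with per-channel weights,
    the digital analogue of screwing a colored filter onto the lens."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()          # keep overall brightness roughly comparable
    return rgb @ w           # weighted mix of the R, G and B channels

# A hypothetical color image: values 0.0-1.0, shape (height, width, 3).
rgb = np.random.rand(480, 640, 3)

neutral = bw_with_filter(rgb, (0.30, 0.59, 0.11))  # plain luminance mix
yellow  = bw_with_filter(rgb, (0.45, 0.50, 0.05))  # "yellow filter": blue skies darken
red     = bw_with_filter(rgb, (0.80, 0.15, 0.05))  # "red filter": skin smooths, skies go black

for name, img in (("neutral", neutral), ("yellow", yellow), ("red", red)):
    print(f"{name:>7} mix, mean brightness: {img.mean():.3f}")
```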
Where are you going? Don’t leave yet. I’m just getting warmed up.
If I dodge and burn a photograph, is it still a photograph? Does it make any difference if I do the dodging and burning under an enlarger or if I do it digitally in Photoshop? Since the earliest days of photography, photographers have selectively lightened and darkened various areas of a photograph to help draw a viewer’s eyes to the most important elements. If the elements themselves are not altered, is there anything inherently wrong with subtly altering the lightness and darkness of an image?
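In digital terms, dodging and burning is nothing more exotic than multiplying the image by a feathered mask. A minimal sketch, with an arbitrary circular “subject” standing in for whatever you’d actually want to emphasize:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A hypothetical grayscale image, values 0.0-1.0.
image = np.random.rand(600, 900)

# Build a dodge/burn mask: 1.0 leaves a pixel alone, >1.0 dodges (lightens),
# <1.0 burns (darkens). Dodge a circular region, gently burn everything else.
yy, xx = np.mgrid[0:600, 0:900]
subject = ((yy - 300) ** 2 + (xx - 450) ** 2) < 150 ** 2

mask = np.where(subject, 1.25, 0.85)     # dodge the subject, burn the surroundings
mask = gaussian_filter(mask, sigma=40)   # feather the transition so the edit is invisible

dodged_and_burned = np.clip(image * mask, 0.0, 1.0)
# Nothing in the scene has moved; only its lightness has been nudged.
```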
Again, most photographers would probably think this is perfectly acceptable, and that a dodged and burned photograph does not cross the line into illustration. But is there a limit to how much dodging and burning one can do before crossing that line? Look, for example, at some of Sebastiao Salgado’s photographs. Or W. Eugene Smith’s. Or Ansel Adams’. Dodging and burning increases the tonal variation within their images to such a point that they no longer represent how a human eye would perceive the scene. These photographers are masters at “amplifying” tonal differences — they take a slight shadow and burn it into a deep abyss. They take a highlight and dodge it into a luminous halo. Ansel Adams said, “Dodging and burning are steps to take care of mistakes that God made in establishing tonal relationships.” Eugene Smith’s later Life Magazine photos would look nothing like the photos that would appear if you or I stuck those same negatives under an enlarger. Gene’s images were dodged, burned, and bleached into perfection. Were they still photos? Or were they illustrations?
The line grows softer when you compare these images to today’s digital manipulations. In many ways, images like Ansel Adams’ and Eugene Smith’s were progenitors of the HDR “look.” I put the word “look” in quotes because many of the images people associate with HDR are not, in fact, HDR images — rather they’re simply tone-mapped single images with freakishly pumped local contrast and copious quantities of high-pass filtering. Although these images are wildly popular, no self-respecting news magazine would ever publish one because they don’t look “realistic,” and we all know that news photos need to be “realistic.” Yet these images aren’t really doing anything that our buddy Gene didn’t do with his dodging and his burning and his chemical compounds. Maybe it’s simply a matter of “taste.” To produce this look in the analog days required great skill and dedication, so it was always applied with purpose. To produce this look with software takes a few seconds, and is usually applied liberally and without consideration. Maybe this is why I find Smith’s and Salgado’s images riveting and inspirational, while I find faux-HDR images repulsive.
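For what it’s worth, the pumped local contrast behind that look boils down to a large-radius high-pass filter added back to the image with too much enthusiasm. A sketch, with the numbers invented and deliberately set to “way too much”:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A hypothetical grayscale image, values 0.0-1.0.
image = np.random.rand(600, 900)

# Subtract a heavily blurred copy to isolate large-scale detail (a high-pass),
# then add it back with a heavy hand. A big sigma boosts *local* contrast
# rather than pixel-level sharpness, which is what gives the faux-HDR look.
low_pass  = gaussian_filter(image, sigma=30)
high_pass = image - low_pass
amount    = 2.5   # crank this up for the full tone-mapped effect

faux_hdr = np.clip(image + amount * high_pass, 0.0, 1.0)
```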
Let’s take another step in our search for the soft grey line. What if the dodging and burning isn’t used to amplify, but to obscure? What happens, for example, if you burn some part of an image to solid black, thus obscuring any detail in that part of the photo? Is it still a photo? Photographers have long used this technique to eliminate clutter from an image — burning (or dodging) insignificant background or foreground areas so as not to distract from the important parts of an image. Look, for example, at the photo I recently posted of Cannon Beach, Oregon:
In the lower left corner of the image is a hillside. The nearest point of that hillside contains a rather unsightly wooden fence — a portion of which was visible in the extreme foreground of this photograph. Because the fence was an insignificant distraction to the scene at hand, I simply burned that part of the image so severely that all details — fence, gravel, and grass — disappeared into shadow. Did I cross the line? Did the photograph become an illustration?
I suspect most photographers would say that I did not cross the line, because this has been an accepted photographic process since the early days of film. I’m not altering the reality of a scene — I’m just darkening it to such a point that no actual reality exists.
But what if, instead of darkening that area, I simply cloned out the fence by painting over it with some of the neighbouring grass and gravel? Would that cross the line?
The majority of photographers would probably think this crossed a line, if not the line. Specifically, most photojournalists would think it crossed the line of journalistic integrity, since the photograph would imply a reality that is, in fact, a lie. But many fine arts or commercial photographers would think it did not cross the line because reality is not the point of this particular picture. The line is thus as soft, grey, and arbitrary as can be.
What if a photographer retouches a single acne blemish on a model? Is the resulting image an illustration or a photo? “Photo!” I hear you shout. What if the photographer retouches 300 acne blemishes on a model? Now is the resulting image an illustration or a photo?
What if a photographer uses a computer algorithm to remove noise from a photo? Has it become an illustration? “No,” say the masses. What if a photographer uses a computer algorithm to add noise to a photo (artificial film grain, for example)? Now has it become an illustration? The chorus no longer replies in a unanimous voice. But why? With both operations, you’re essentially just painting new pixels onto the image. In the case of de-noising you’re painting with computer-generated pixels that have less pixel-to-pixel tonal variance than the ones you’re replacing. In the case of artificial film grain, you’re painting with computer-generated pixels that have more pixel-to-pixel tonal variance than the ones you’re replacing. So why is painting over an image with a “smooth” wash of color considered to be “OK,” while painting over an image with a speckled wash of color is not?
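To make the symmetry explicit, here’s a sketch of both operations side by side. A real de-noiser is far smarter than the Gaussian blur used here, but the principle, painting over pixels with lower- or higher-variance replacements, is the same:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A hypothetical grayscale image, values 0.0-1.0.
image = np.random.rand(600, 900)

# "De-noising": replace each pixel with a computer-generated one that has
# LESS pixel-to-pixel variance than its neighborhood.
denoised = gaussian_filter(image, sigma=1.5)

# "Film grain": replace each pixel with a computer-generated one that has
# MORE pixel-to-pixel variance than the original.
grain   = np.random.normal(loc=0.0, scale=0.05, size=image.shape)
grained = np.clip(image + grain, 0.0, 1.0)

# Both operations paint new pixels over old ones; only the variance differs.
print(f"original variance: {image.var():.4f}")
print(f"denoised variance: {denoised.var():.4f}")
print(f"grained  variance: {grained.var():.4f}")
```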
What happens when you take a photo using a lens that exhibits barrel distortion? Is it all right to remove the distortion with software, or have you crossed the line? On one hand, the act of correcting barrel distortion will effectively modify every single pixel in your image — which certainly sounds like “illustration” to me. But on the other hand, such distortion did not actually exist in the scene you photographed — so you’re actually helping to “right” the “wrong” created by your lens. And, if we say it’s not all right to correct lens distortion via software, does that mean we can’t use a camera that corrects optical flaws internally and stores this data as part of the RAW file? If that’s so, then neither my Leica M9 nor my Panasonic GH2 take actual photographs, since both have built-in software to correct for lens aberrations.
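If you’re curious just how much of the frame a distortion correction touches, here’s a sketch using a simple radial model. The coefficient is made up; it doesn’t come from any real lens profile:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_barrel_distortion(image, k1=-0.15):
    """Undo simple radial (barrel) distortion by resampling every pixel.
    k1 is an invented coefficient, not one from any real lens profile."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)

    # Normalized coordinates relative to the image center.
    x = (xx - w / 2) / (w / 2)
    y = (yy - h / 2) / (h / 2)
    r2 = x ** 2 + y ** 2

    # For each output pixel, look up where it came from in the distorted source.
    src_cols = x * (1 + k1 * r2) * (w / 2) + w / 2
    src_rows = y * (1 + k1 * r2) * (h / 2) + h / 2
    return map_coordinates(image, [src_rows, src_cols], order=1, mode="nearest")

image = np.random.rand(600, 900)               # a hypothetical distorted capture
corrected = correct_barrel_distortion(image)
# Every pixel has been resampled, and yet the result is arguably MORE
# faithful to the scene than what the lens delivered.
```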
And what about software perspective correction? Is that crossing the line? Anyone who’s ever tilted their camera upward to take a picture of a tall building knows this will make parallel lines appear to converge. To avoid this optical phenomenon, photographers have historically employed a mechanism that shifts their lenses vertically or horizontally in relation to the film/sensor. Most large format cameras have this capability, while medium- and small-format photographers can purchase special tilt/shift lenses for this very purpose. I’ve taken many architectural shots with a tilt/shift lens and, in the analog days, it was the only way to avoid converging lines when photographing architectural elements. No one ever suggested that using a shift lens led to an “illustration” rather than a “photograph” — quite the opposite, since such lenses render a more ‘realistic’ interpretation of the scene. What about today? With software, we have the ability to correct converging lines in programs like Photoshop and Lightroom. Just like the barrel distortion correction algorithms, these software tools alter nearly every pixel in an image — completely transforming it geometrically from the image that was actually shot. Is this crossing the line? We’re simply correcting an optical aberration in software. Is that any different than correcting the same optical aberration in hardware? And, if so, why?
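And here’s the software version of the shift lens, reduced to a toy. The tilt value is invented for illustration; real tools solve for it from the image or from reference lines:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_keystone(image, tilt=0.0004):
    """A toy perspective ("keystone") correction: progressively stretch rows
    toward the top of the frame so converging verticals become parallel again."""
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(np.float64)

    # The higher a row sits in the frame, the more the tilted camera squeezed
    # it toward the center, so we stretch it back out by a growing factor.
    scale = 1.0 + tilt * (h - rows)                 # >1 near the top, ~1 at the bottom
    src_cols = (cols - w / 2) / scale + w / 2
    return map_coordinates(image, [rows, src_cols], order=1, mode="nearest")

tilted = np.random.rand(600, 900)      # a hypothetical "camera pointed upward" shot
straightened = correct_keystone(tilted)
# As with the barrel-distortion fix, nearly every pixel gets moved; it's the
# software equivalent of the shift lens's optical trick.
```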
What’s the difference between illustration and photography? Where is the line? Truth is, it seems to shift every year. Originally, it was thought that if a digital modification had no analog parallel, then it crossed the line. For example, people dodged and burned in the old days, so dodging and burning were allowed. People diffused photos under their enlargers, so software-based diffusion was allowed. But this is far too simplistic a division. Our buddy Gene Smith could, with a single drop of white paint, make it appear as if someone’s eyes were looking in a direction they weren’t. Gene certainly lived in the analog days, and his trickery then was not only accepted but revered. Yet, if someone today used Photoshop to do this, the photo would instantly be labelled a fraud.
So where is that line? And how thick is it? And how fuzzy? When is a photograph no longer a photograph?
This blog would be rather pointless if I only posed questions, but didn’t answer them. So here we go: In my opinion, the line isn’t soft. The line isn’t fuzzy. The line isn’t even grey. Everything dealing with photography is a manipulation — some manipulations, I believe, enhance a photo’s appeal, while others detract from it. Any photo will automatically and irrevocably alter the very thing it purports to show. It is, by design, an abstraction of a moment — an illustration. Therefore, I can confidently say that each of us crosses that line the very instant we touch our cameras.
So grab your camera, and get out there and take some illustrations.
©2010 grEGORy simpson
ABOUT THESE PHOTOS: “Post Processing – Street Incarnation,” “Bean,” and “Parallel Lines” were all shot using a Panasonic DMC-GH2 with a Voigtlander 15mm f/4.5 M-mount lens. “Canada Day Festivities,” and “Cannon Beach from Ecola State Park” were both published previously, and were both shot using a Leica M9 with a v4 Leica 35mm f/2 Summicron lens.
If you find these photos enjoyable or the articles beneficial, please consider making a DONATION to this site’s continuing evolution. As you’ve likely realized, ULTRAsomething is not an aggregator site — serious time and effort go into developing the original content contained within these virtual walls.
good discussion. nice images.
i think photographers work in service to the images. the images stand on their own merits. if someone critiques an image by saying it doesn’t represent reality, or it is too manipulated, or whatever, the viewer is really trying to make the image serve the viewer’s conception of acceptability (based upon cultural, psychological and many other factors).
so an image either works or not, and it can work or not for many different reasons, but none of those reasons relate to concerns outside the four corners of the image, imo.
I believe the question is not whether an image is a photograph or an illustration, because it’s all in people’s minds: the question is actually about the way the image was produced and whether the process used is acceptable (to the person who will answer the question) or not. Then the question is what is acceptable to people, photojournalists, critics, etc. And you pointed it out: everything is judged through the prism of what was done in the analog days. So if we want a pure digital process (something that doesn’t copy an analog process result) to be accepted, we’ll have to wait say 10 to 50 years to build the academic ground on the basis of which images will be judged in the future.
Fortunately, I read the last paragraph first, your conclusion, so I didn’t have to read the beginning and the end!
Nuh, seriously, I guess, you shot yourself into your foot or knee, or how do you say that…
But I enjoyed the ride.
Keep it rollin’, bro!
Unretouched photographs don’t exist; it’s only a conservative objective (or bias) that separates photography from the rest of the Arts, so we put the label of “un-manipulated” on it. It’s like saying this image was not created by a person, it was created only by a machine; both are untruths.
But what do photographers do but embrace this untruth and create our own biases? Go figure. Nobody said photographers were revolutionaries. In fact, most photographers are conservative by nature: how long did it really take you to change to digital, and don’t you want the digital image you create to look like film? Or are you still using film because it creates the “feeling” we associate with photography?
Go to any contest website, Leica’s included, and you will see the words, “no manipulated photos accepted”. What does this really mean? Would this be the same as a painting contest saying no Cubists accepted, or no blue images accepted, landscapes only? Then it wouldn’t be a painting contest, it would be a “landscape contest,” and we would be segregating by subject, not medium. That would seem odd, wouldn’t it? But we as photographers do that… we have accepted the Art world’s bias and taken it one step further: we’ve segregated ourselves.
What is a photograph? Anything taken with a camera, pinhole, Polaroid, film or digital… it’s a way of creating an image, not to be defined by the subject or what we do to it after the machine captures the image. But that’s my bias.