By Andrew Appleton, Appleton Photo Training.
Why is the scene in front of us often so much more beautiful, more dramatic than the one on the back of the camera?
Learn the 6 main reasons and take better pictures.
You see an amazing landscape or your subject’s face bathed in soft natural light. You’re excited. You take the shot but the result is disappointing. Why? It’s a question I am asked time and time again so I decided to include a seminar on the subject at the Photovision Roadshows, the first one of which is this Tuesday in Dublin. In the meantime, I’ll share what I know with you right here and now.
There are 6 major differences between the way we see and the way the camera ‘sees’ what we point it at.
1. Dynamic Range
The human eye is amazingly sensitive to light. It has the ability to see around 24 stops of dynamic range. This means that, typically, we can see shades from deep blacks to bright highlights and this is particularly noticeable when the subject is backlit by the sun; you can make out the detail of their face but you can also see the bright background. In contrast, most cameras have a dynamic range of around 12 stops. Some are better than others at handling wide dynamic range; for example Hasselblads have an additional two stops and, although this doesn’t sound very much, it is really noticeable when working in bright harsh conditions.
Whatever camera you are using, the relative limitation, compared to the scope of the human eye, means that you either have to a) select the 12 stops out of the 24 most appropriate for the task in hand, or b) try and compress the 24 stops that the eye can see down to 12, by filling the shadows with light or c) use some form of HDR. Since the last solution involves taking multiple images, it is only suitable for genres such as landscape and architecture where the subject matter is static.
This image is a quick HDR in Lightroom from 5 separate exposures ranging from 6 seconds to 1/5th second.
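Since stops are powers of two, the gap between eye and camera can be put into plain numbers. A quick illustrative sketch (the stop counts are those quoted above; the function is my own naming):

```python
# Each stop doubles the amount of light, so a dynamic range of
# N stops corresponds to a contrast ratio of 2**N : 1.
def contrast_ratio(stops: int) -> int:
    return 2 ** stops

eye = contrast_ratio(24)      # the ~24 stops quoted for the human eye
camera = contrast_ratio(12)   # a typical camera's ~12 stops

print(f"Eye    : {eye:>12,} : 1")
print(f"Camera : {camera:>12,} : 1")
print(f"Gap    : {eye // camera:,}x the contrast range")

# 'An additional two stops' sounds small, but it is a 4x wider ratio:
print(f"14 vs 12 stops: {contrast_ratio(14) // camera}x")
```

This is why two extra stops are "really noticeable" in harsh light: the scale is exponential, not linear.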
2. Field of View
Back in my early days of photography, 35mm film cameras came with a 50mm lens as standard, and there’s a perfectly good reason for this: 50mm is the closest to the human eye’s field of view. However, we have something that a camera lacks: a brain. Your brain has the capacity to be selective; it can – and does – decide what to prioritise within that field of view. Our job, as photographers, is to make choices that help the camera to emulate how the brain allows us to see a subject. Two decisions we must make are the choice of lens and our distance from the subject. Many people get confused here; it is the distance between the lens and the subject which affects perspective, not the focal length of the lens. Whether you use a 24mm or a 200mm lens, if you photograph that headshot from the same distance, the perspective on the face remains exactly the same.
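The distance-not-focal-length point can be checked with simple pinhole-camera arithmetic. A small sketch, not from the article, with made-up object sizes and distances:

```python
def image_size_mm(object_mm: float, distance_mm: float, focal_mm: float) -> float:
    # Pinhole model: apparent size scales with focal length / distance.
    return object_mm * focal_mm / distance_mm

# A headshot: the nose is ~100mm closer to the camera than the ears.
# 'Perspective' shows up as the size ratio between near and far features.
for focal in (24, 200):
    d = 1000  # shoot from 1 m with both lenses (illustrative)
    nose = image_size_mm(30, d - 100, focal)
    ears = image_size_mm(30, d, focal)
    print(f"{focal}mm lens from {d/1000:.0f} m: nose/ear size ratio = {nose/ears:.3f}")
```

Both lenses give the same ratio (only the overall magnification differs); move the camera closer and the ratio grows, which is the exaggerated-nose look of a wide lens used too near the subject.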
3. Focus and Depth of Field
In order to steer the viewer to the element of the picture we deem most important, it’s currently very popular to blur the background. With the human eye, a far more sophisticated piece of kit altogether, the process is different. The brain does the same job by filtering out all extraneous clutter and drawing your eye to the important stuff. An example I often give is when you are watching TV, you don’t notice the various items around it (dog, magazines, washing basket etc.) which would otherwise distract. You are just watching telly – but the brain is very busy indeed doing a virtual tidy-up (even if it is only temporary). Take a picture of the scene, without using something like DOF to separate the TV from its surroundings, and everything in the image will be vying for your attention, causing unwanted distraction. Of course, a shallow DOF is only one way to separate your subject from the background. It might be tempting, but having the latest f/1.2 lens does not mean you have to shoot everything at f/1.2. And remember, DOF is not just controlled by aperture but also focal length, sensor size and distance from subject.
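The interplay of aperture, focal length, sensor size and distance can be sketched with the standard thin-lens depth-of-field approximation. This is a textbook formula rather than anything specific to the article, sensor size enters via the circle of confusion, and the numbers below are illustrative:

```python
def dof_mm(focal_mm: float, aperture: float, subject_mm: float,
           coc_mm: float = 0.030):
    """Near/far limits of acceptable sharpness (thin-lens approximation).
    coc_mm is the circle of confusion, which depends on sensor size
    (0.030 mm is the commonly used full-frame figure)."""
    h = focal_mm ** 2 / (aperture * coc_mm) + focal_mm   # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

# 50mm lens, subject 2 m away, full-frame sensor:
n1, f1 = dof_mm(50, 1.8, 2000)
print(f"f/1.8: sharp from {n1/1000:.2f} m to {f1/1000:.2f} m "
      f"({(f1-n1)/10:.0f} cm deep)")
n8, f8 = dof_mm(50, 8, 2000)
print(f"f/8  : sharp from {n8/1000:.2f} m to {f8/1000:.2f} m "
      f"({(f8-n8)/10:.0f} cm deep)")
```

At f/1.8 the sharp zone is well under 20 cm deep, which is why focus placement is so critical with fast lenses; stopping down to f/8 multiplies the zone several times over without changing the framing.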
4. White Balance
In photography speak, our brains are set to ‘Auto White Balance’ mode. Tungsten and daylight are averaged out so we don’t see a difference in colour between different light sources. Of course, your camera can also be in auto white balance but you have the freedom to be much more creative by doing things like playing with gels on your light sources and adjusting white balance accordingly. For example, at dusk the colour of the sky can be transformed into a deep blue by using a tungsten (on some cameras, incandescent) WB setting. Then, by using CTO (colour temperature orange) gels on your artificial light source, your subject can be correctly colour-balanced.
This image, taken in Oxford with just available light, looks quite different to the way my eyes perceived the scene. The combination of warm sodium street lights and a white balance setting of around 3200K has rendered the sky an intense blue.
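Gel and white-balance arithmetic is usually done in mireds (one million divided by the Kelvin temperature), because gel strengths add linearly on that scale. A small sketch; the 159-mired figure for a full CTO is the commonly quoted value for gels such as Lee 204, and the temperatures are illustrative:

```python
def mired(kelvin: float) -> float:
    # Gel strengths are specified in mireds: 1,000,000 / colour temperature.
    return 1_000_000 / kelvin

def apply_gel(kelvin: float, mired_shift: float) -> float:
    # Gels add a fixed mired shift regardless of the starting temperature.
    return 1_000_000 / (mired(kelvin) + mired_shift)

FULL_CTO = 159  # approx. mired shift of a full CTO gel (e.g. Lee 204)

# A daylight source (~6500K) gelled with full CTO lands near 3200K,
# so it matches a tungsten white balance setting on the camera:
gelled = apply_gel(6500, FULL_CTO)
print(f"6500K + full CTO -> ~{gelled:.0f}K")
```

This is the trick behind the dusk example above: set the camera to tungsten so the sky goes deep blue, then gel the flash with CTO so the subject still comes out neutral.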
5. Shutter Speed
Unlike our eyes, which act effectively as video cameras, the stills camera can freeze action or create blur. This shot of fruit being dropped into a tank of water is frozen by a very short flash duration of around 1/10,000th of a second. At the other extreme, the racing car would look static if shot with a fast shutter speed, but a slow shutter speed gives a sense of movement.
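How much blur a given shutter speed produces is just the subject's speed multiplied by the exposure time. An illustrative sketch (the car's speed is made up, and whether a given amount of travel reads as sharp also depends on how large the subject is in the frame):

```python
def blur_mm(speed_kmh: float, shutter_s: float) -> float:
    # Distance the subject travels during the exposure.
    speed_mm_per_s = speed_kmh * 1_000_000 / 3600
    return speed_mm_per_s * shutter_s

CAR_KMH = 200  # illustrative racing-car speed

print(f"1/1000 s: {blur_mm(CAR_KMH, 1/1000):.0f} mm of travel "
      f"-- close to frozen, the 'static' look")
print(f"1/30 s  : {blur_mm(CAR_KMH, 1/30)/1000:.2f} m of travel "
      f"-- strong streaking, a sense of movement")
```

The same arithmetic explains why a flash duration of around 1/10,000th of a second can freeze falling fruit: in that time even a fast-moving droplet travels a fraction of a millimetre.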
6. Composition and Point of View
Back in the day, I used medium format cameras with waist-level finders – and I loved them. Looking down into a viewfinder really helps with composition. You become detached because your plane of view is different and you are no longer part of the scene. It becomes an image, and you the observer – rather like looking at a painting. For me, this makes composition choices easier and more obvious.
When composing an image, you are faced with a range of things to consider. There are the obvious five guides: rule of thirds, leading lines, frame within a frame, fill the frame and patterns. But add to those basics decisions about angle of view, colour, balancing of elements, visual mass, contrasts and so on, and you can very quickly change how an image is perceived. The world of cinema is very fond of using colour to influence our emotional response to a scene; this is known as colour grading. The most prevalent colour combination in the industry goes by the name of the ‘orange and teal’ or ‘blockbuster’ look. These are not arbitrary labels; there’s plenty of psychology behind the success of this duo. At its simplest level, teal in the shadows and orange on the subject draws on our perception of good (orange = warmth) and bad (blue = cold). Watch out for it in movies.
Once you begin to see like a camera you can make more informed decisions about how to shoot any subject to best effect. It is, after all, just a piece of equipment. We’re the ones with the brains.
Andrew Appleton is a professional photographer and educator.
More of his work can be found at appleton.photography