How White the Whites???

BirdPhotographers.net


Thanks Dieter for your many excellent comments above. I did add some editor's notes on exposure to your post, as you were treading dangerously with your exposure advice. I do not think that we run into too many birds that are pure white...

Later and love, artie
 
The only things in an image that I don't mind being blown out (255,255,255) are strong specular reflections. The catchlight in eyes is a good example. A direct reflection of the sun (or flash) should probably be way overexposed to avoid way underexposing everything else. We have to make compromises somewhere. A real world scene may have 15 stops of exposure to cover 'correctly' while our film/transparency/sensor can only cover 8-11 or so. We have to judge how to compress this information into the available dynamic range.

For those that want to see all this in a technical manner this link explains things fairly well (http://www.normankoren.com/digital_tonality.html). I'm a chemist by trade so the technical aspects are fascinating to me. I also think it helps to understand what our cameras are doing with the light that comes through the lens. Noise is a big issue that is talked about frequently, especially in shadows. As a scientist this is simply a signal to noise ratio (S/N) problem. The thing to do is to overexpose by one or two stops with a raw capture then pull the exposure back to 'correct' after. The noise levels for a given sensor/ISO combination are fairly constant so by over exposing you've increased the signal and the apparent noise is less. The idea of "expose to the right" as talked about on the Luminous Landscapes site is based on this.
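The signal-to-noise reasoning behind "expose to the right" can be sketched in a few lines of Python (the signal level and read-noise figures here are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(signal, stops_over=0.0, read_noise=5.0):
    """Simulate a raw capture: scale the signal by the exposure
    offset, then add constant sensor read noise."""
    exposed = signal * 2.0 ** stops_over
    return exposed + rng.normal(0.0, read_noise, size=signal.shape)

# A flat mid-grey patch, in arbitrary raw units.
patch = np.full(100_000, 200.0)

normal = capture(patch)                      # metered exposure
ettr   = capture(patch, stops_over=1.0) / 2  # +1 stop, pulled back in post

# Pulling the overexposed frame back down halves its noise too,
# so the ETTR frame ends up visibly cleaner.
print(round(normal.std(), 1))  # ~5.0
print(round(ettr.std(), 1))    # ~2.5
```

The key assumption is the one stated above: the noise floor is roughly constant for a given sensor/ISO, so doubling the signal doubles the signal-to-noise ratio, as long as nothing clips.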
 
Excellent post all the way around. Tons of useful information in this. Thanks.
 
There are a few more issues: calibration and the quality of the display itself. On my poor LG I can hardly see anything above 245 :-( so I rely purely on measuring RGB. Often I must make compromises and tell myself "sometimes white is white" and keep it blown.
 
One other thing to consider amongst all this great info: this is one more reason to use 16-bit until the final output. Disk space is cheap and computers are fast. Getting 15 bits of color data per channel (2^15 = 32768 "steps" rather than 2^8 = 256) is priceless when dealing with challenging exposures like Artie's fabulous photo.
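To see why the extra bits matter, here is a quick Python sketch (the edit and the gradient are illustrative): stretching a narrow highlight range in 8-bit leaves you only the handful of levels that were there to begin with, while the 16-bit file has plenty to spare.

```python
import numpy as np

# The same smooth highlight gradient, stored two ways.
grad8 = np.linspace(200, 255, 1000).astype(np.uint8)  # 8-bit: only 56 levels
grad16 = np.linspace(200, 255, 1000) / 255 * 32768    # 15-bit+1 scale

# An aggressive "open up the whites" edit: stretch 200..255 to 0..255.
stretch8 = ((grad8.astype(float) - 200) / 55 * 255).round()
stretch16 = ((grad16 / 32768 * 255 - 200) / 55 * 255).round()

print(len(np.unique(stretch8)))   # 56 output levels: visible banding
print(len(np.unique(stretch16)))  # 256 output levels: smooth
```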
 
White, 255,255,255, is not necessarily blown. It is just the whitest white, beyond which no whiter white can be displayed. If, however, we had a light for each pixel, the flux could be dialed up to higher levels, which would simply burn our eyes. So it is an issue of the scale stopping at 255,255,255, which is perceived as white in a modestly illuminated room.

The critical thing about shooting white is a collection of rules and ideas we mostly already know but do not always integrate when thrilled by the sight of a wonderful bird in flight.

  • Cameras all have a bias, like looking through a very slightly tinted window. If the camera has already been profiled, we can remove the camera's fingerprints from the whites, record the bird's colors as seen, and keep the colors of sunsets intact. If we want to remove a cast from light reflected off foliage (or even from a sunset), use a gray-card reference shot.

  • Light cannot overwhelm the shot; that's obvious! Use a faster shutter speed and neutral density filters if needed. In almost all circumstances one can reduce the aperture and increase the shutter speed. For the very best structural detail in the whites, however, avoid apertures smaller than f/8 or f/11 on a 1DIII if you can, or you are marching into diffraction (the light waves start to overlap), which smooths out and degrades the micro detail!

  • Modest illumination is best, so that the different reflectivities of each part of the plumage can be defined, one micro detail against another. For this, a slight underexposure may be best. Otherwise the resolving power of the lens has nothing to cut into different pieces: everything records at close to the same value.

  • Next, the direction of the light should be from the side, to create micro shadows and bring out detail in three-dimensional forms that may actually have identical reflectivities of 250,250,250! If not for some directionality of the light, the dimensional information in areas of uniform reflectivity would be lost. This is part of the reason for photographing early or late in the day, but of course you all know that already!


I hope putting this together is of some value to a few.

Asher
 
Great thread. Let me recap what I've learned:

'Avoid shooting white birds. It is much, much too complicated for me'.

I'm sticking to LBJ's (little brown jobs) from here on out!!!

(But seriously folks--great thread--that image in the final iteration is just fabulous!)

Best to ya,
John
 
Wow, great info here. One thing to add is that when we start talking about prints, the numbers change once again. Paper white is usually around 247, so anything higher than that will result in no ink and therefore no detail.
 
Another factor not discussed in this thread so far is that of visual perception, which is quite complex. For example, in bright office light, small details around 1 arc-minute in apparent size need on the order of 20% contrast to be just discernible, whereas at about 20 arc-minutes in apparent size one can detect details of around 0.5% contrast. Contrast detection drops (you need higher contrast) as light levels drop. So if you want to show detail in the highlights, you need to stretch the contrast for the target illumination.

Reference:
Blackwell, R.H., Contrast Thresholds of the Human Eye, Journal of the Optical Society of America, v36, p624-643, 1946.

Roger
http://www.clarkvision.com
 

Hi Roger, Can you translate that for us? It appears in the first part that you are saying that it is easier to see contrast when viewing an image in bright light than in soft light. Is this correct? (I would have thought that it would have been the opposite...)

You wrote: "So if you want to show detail in the highlights, you need to stretch the contrast for the target illumination." What do you mean by stretch the contrast, and what is the target illumination? (Is the latter the light that you will be viewing the image in?)

Thanks and later and love, artie

ps (respectfully) to all: While several of the highly technical comments here may be entirely accurate, the goal is to provide information so that the average Joe can make their images look better. I have never been accused of being at all knowledgeable as far as the technical aspects of photography are concerned, but that has never stopped me from making good images.
 
Keeping (capturing and showing) details in the whites is a lot like keeping the details in the reds of bright red flowers. The colour of interest has texture but the texture consists of subtle variations near the maximum displayable level on your screen or print, and if it is too subtle then you cannot distinguish it.

Assuming that we have captured the details in the first place we can show them better by reducing overall exposure. However, that ruins the overall impression of the subject and other tones. That's where this thread started - with a grey egret instead of a white egret.

The next best solution may be to apply curves or contrast adjustments to accentuate the apparent difference between those whites (or reds) that have been recorded up near the maximum showable level. This keeps the middle tones near the middle and the dark tones dark, but increases the differences in the bright tones enough to be readily visible. It can work well, but be careful when doing this: if your monitor - calibrated and profiled or not - does not have sufficient colour gamut, then you will not be seeing what someone with a better monitor or printer will see, and you may actually ruin your image. Laptop screens have a small colour gamut and are prone to causing this sort of overzealous editing.

A third method, a bit like the first, involves desaturating colours. It's not so good for white subjects as it is for reds or blues or greens but by discarding colour you may leave visible tonal details. Trouble is that the colours then look wrong. Also, it may not be needed if you have a monitor with adequate colour gamut. (that, by the way, is how I ended up with an Eizo CG monitor to use with my laptop).

Robert mentioned the sunny f/22 rule. I'll have to read up on its applicability to modern digital photography, but I'm not convinced that it is appropriate because it affects the capture of mid-tones by darkening them. If you have a good digital SLR with lots of dynamic range (such as a 1Ds2 or 1D3, which can capture over 9 stops of useful dynamic range) then you ought to be able to capture the whites and keep the midtones correct if you shoot raw format. Shooting jpegs is far too limiting to get the dynamic range that you need. Displaying those captured whites is quite separate from capturing them, as the display device or print probably has less dynamic range than the camera.

The bottom line, however, touched on in the post by Dieter, is that maybe we are not meant to see the details. If you look at an egret in sunlight, how much fine detail do you actually see? I'd suggest that unless he's dead, or has been captured and you take him into the shade and study him up close, you'll see precious little other than "bright white". So then we have the photographer's dilemma: should we make the image realistic, or should we enhance it to look detailed? With some subjects we just cannot have it both ways, no matter how good our lens and camera are. I still struggle with this dilemma. Generally I prefer to see the details, but if it can only be done by making the subject "look wrong" then I avoid it.

- Alan
 

Hello Artie,

There are several implications for the effects of visual perception. You are correct: it is easier to see lower contrast details in bright light than in dim light. But it also depends on the size of the subject. For example, further up this thread are test examples with large patches (e.g. 255,255,255 versus 245,245,245), which have a contrast of (255-245)/245 = 4%. In good light that is easily discerned on a print or on screen, but that same 4% contrast difference would be difficult in dim light. More important for the example of your egret are the details in the feathers. Those details will subtend a small fraction of a degree to our eyes when viewing the image. If stretched to only 4% contrast they would not likely be visible, and one would need to stretch the image further to bring the contrast to a level that can be discerned. (I haven't measured the contrast in some of the fine detail in the egret feathers, but I bet it is 10%, 20% and greater in some of the images posted.)

When I said stretch for the target illumination, I meant, for example, the light that the print would be viewed in, or the brightness of the monitor. A print displayed in a bright office environment needs different contrast than a print viewed in typically dimmer lighting in someone's living room.

There was some discussion that 254,254,254 is not white. But the contrast difference between 254 and 255 is so small, that even with large patches it would be difficult to distinguish between the two. So to our eye, 254 appears very white.
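The contrast figures in this discussion come from a simple ratio; in Python (just the arithmetic, nothing camera-specific):

```python
def weber_contrast(brighter, darker):
    """Contrast of a patch against its surround: (L1 - L2) / L2."""
    return (brighter - darker) / darker

# The large test patches discussed earlier in the thread:
print(weber_contrast(255, 245))  # ~0.041, i.e. 4%: easy to see in good light
# The 254-versus-255 distinction:
print(weber_contrast(255, 254))  # ~0.004, i.e. 0.4%: hard to see even in big patches
```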

The bottom line, ignoring all the technical details of 255 versus 254 and contrast threshold of the human eye: stretch the details so that you can easily discern the details you want people to discern. Monitors are one thing (most people these days seem to be getting uncalibrated LCD monitors with too high a contrast) and are uncontrollable with regard to web viewing. But a print should be tested by viewing it in similar lighting where it will be displayed.

Another factor with digital capture and bringing out detail in the highlights: digital capture (in raw) is linear. The raw converter applies a variable gamma curve compressing the highlights. If you have trouble recovering details in the highlights, they are really there in the raw file (unless saturated), and a linear conversion (which many raw converters unfortunately do not have) will bring out the maximum detail in the highlights.
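A tiny Python illustration of that last point, using a plain power-law gamma of 2.2 as a stand-in for a converter's actual, more complex tone curve:

```python
import numpy as np

# Linear raw values for the top two stops below saturation;
# each step is one full stop of light.
linear = np.array([0.25, 0.5, 1.0])

# Simple gamma-2.2 encode, standing in for the converter's tone curve.
encoded = np.round(linear ** (1 / 2.2) * 255)

print(encoded)  # [136. 186. 255.]
# The top stop holds half of all the raw counts, but after encoding
# it is squeezed into only 255 - 186 = 69 of the 255 output levels.
```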

Roger
 
I would like to echo and elaborate on Alan's response: While it is necessary and beneficial to understand the technology and how it works, the ultimate goal is to produce a visually pleasing image. I am not nearly as concerned with touching the right side of my histogram as I used to be, for a number of reasons, some of which have been touched on here but all of which are based on my experience in attempting to produce visually pleasing images. When we looked at the first image posted above, we made a subjective judgement about how it appeared to us. After finding it lacking in some quality, some elected to further analyze the root cause or correction. The analysis led us to an arguably much improved result, but that again is a subjective evaluation and not necessarily dependent on all numbers reaching a certain threshold and no more.

I have taken images of egrets where areas of the plumage had no apparent detail, but it was appropriate for the lighting and location and created a visually pleasing result. I have also taken images of egrets in dense shade where pushing the whites to their absolute maximum would have created an unnatural appearance to my eye. In that situation the slightly gray egret was more visually appealing, and nobody would have argued based on that photo that I had shot an image of a gray bird. (I do not profess to be a photographer of the caliber of many posting here, but my website does have examples of the above.)

Know and understand the technology and how to manipulate it, but do not be so constrained by the rules that you destroy a visually appealing yet technically incorrect image. To beat the horse just a tad more, what we are really talking about is correcting exposure, and I think we all know that the correct exposure is the one that gives the result you are trying to achieve.

Thanx for letting me join in the discussion,
Pat
 
Following on Pat's theme, how about this image? I have had numerous emails/(other) forum posters say the image is saturated and there is no detail in the brightest areas. Yet no pixels have a value of 255 (the posted image has a max of 242, excluding the lettering). Are you able to see details in the brightest parts? I can see detail just fine in the brightest areas on my calibrated monitor.

This image was about the fullest full moon one can get without an eclipse, and the bright portions appeared quite bright to my eye, and that is what I wanted to show in the image. Of course I could pull the brights down and add contrast, but that is not what it looked like to me. What do you think?

Photo details:
Canon 1D Mark II camera, a 500 mm f/4 L IS lens with a 2x TC and IS was on. The total focal length is 1000 mm for a full scale of 1.7 arc-seconds per pixel. The image is a single frame HAND HELD 1/500 second at f/9.1, ISO 200. Exposure was manual.
More details and the full resolution image is available here:
http://www.clarkvision.com/gallerie...rk.handheld.c10.25.2007.jz3f6583f-8s-800.html

Roger
 

Hi Roger,

re:

Following on Pat's theme, how about this image? I have had numerous emails/(other) forum posters say the image is saturated and there is no detail in the brightest areas. Yet no pixels have a value of 255 (the posted image has a max of 242 (excluding the lettering). Are you able to see details in the brightest parts? I can see detail just fine in the brightest areas on my calibrated monitor.

AM: Same here; everything looks fine. And a nice image to boot. And sharp.

This image was about the fullest full moon one can get without an eclipse, and the bright portions appeared quite bright to my eye, and that is what I wanted to show in the image. Of course I could pull the brights down and add contrast, but that is not what it looked like to me. What do you think?

AM: Here is my problem. You wrote, "I could pull the whites down and add contrast." My understanding is that when you add contrast the whites get brighter so the above does not make sense to me.


Photo details:
Canon 1D Mark II camera, a 500 mm f/4 L IS lens with a 2x TC and IS was on. The total focal length is 1000 mm for a full scale of 1.7 arc-seconds per pixel. The image is a single frame HAND HELD 1/500 second at f/9.1, ISO 200. Exposure was manual.

AM: Nothing personal but why in the world would you not use a tripod when working at 26X magnification?????????????

Later and love, artie

ps: Please explain what a full scale of 1.7 arc-seconds per pixel means. I am pretty sure that less than 1% of the folks reading this (me included) have any clue as to what that means. As I stated previously, while it does not hurt to understand the technical stuff, the idea is to help the average photographer make better images. If we (obviously) have no clue as to what arc-seconds per pixel means, how can we learn anything?
 
Hello Artie,

AM: Here is my problem. You wrote, "I could pull the whites down and add contrast." My understanding is that when you add contrast the whites get brighter so the above does not make sense to me.

If I reduce the intensity of the whites and then boost the contrast, the brighter portions of the image are raised back to a higher level; one can balance the decrease in brightness against the increase in contrast so that the brightest pixels do not saturate. I bet you've done this many times, and I'm probably not explaining my idea well enough. This can be done, for example, with curves, or with a selection and levels; the Photoshop shadow/highlight tool can also do a nice job.
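A minimal numpy sketch of that balancing act (the gain, contrast and pivot values are arbitrary, chosen here just so nothing clips):

```python
import numpy as np

def darken_then_contrast(v, gain=0.85, contrast=1.4, pivot=128):
    """Pull overall brightness down, then steepen the curve around a
    mid-tone pivot so highlight separation grows back without clipping."""
    darkened = v * gain
    return np.clip((darkened - pivot) * contrast + pivot, 0, 255)

whites = np.array([248.0, 250.0, 252.0, 254.0])  # near-identical highlights
out = darken_then_contrast(whites)

print(np.round(np.diff(out), 2))  # [2.38 2.38 2.38]: more separation than the original 2.0
print(round(out.max(), 1))        # 251.1: still bright, but nothing saturates
```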

AM: Nothing personal but why in the world would you not use a tripod when working at 26X magnification?????????????

Yeah, normally I would use a tripod. I find the moon is a good test subject that can be repeated by anyone anywhere in the world. Sometimes I've found myself grabbing my 500 mm lens with no time to set up a tripod, so I'll quickly fire off a few frames before the bird/animal flees. Then if the situation is stable, I'll go ahead and set up the tripod/window mount/beanbag. So for the moon shot, I was testing my technique. It certainly was not easy to hold the 500 steady, especially with a 2X TC. But I proved to myself that I can take a sharp image when I must react fast.

AM: ps: Please explain what a full scale of 1.7 arc-seconds per pixel means.

Sorry, I should have explained it. But it does illustrate an interesting concept (well, maybe for math geeks). If one wants to achieve maximum resolution on a subject, e.g. a distant bird, the angular resolution tells you what you'll get. For example, which gives more detail on a distant bird: 1) a 500 mm lens on a 1D Mark II, or 2) a 400 mm lens on a 40D? The answer has nothing to do with crop factor and everything to do with pixel spacing and focal length. To calculate the angular spacing between pixels, use the following formula:

plate scale = 206265 * pixel_pitch / lens_focal_length, result in arc-seconds/pixel

The 206265 factor is the number of arc-seconds in one radian. Plate scale is an old astronomer term for what one got on photographic plates back in the days when glass plates were used.

1) 1D Mark II has 8.2 micron pixel spacing, so with a 500 mm lens, the angular resolution, or the plate scale is:
206265 * (8.2 microns / 1000 microns/mm) / 500 mm = 3.4 arc-seconds/pixel

2) 40D with 5.7 microns pixel spacing, 400 mm lens: plate scale is:
206265 * (5.7 microns / 1000 microns/mm) / 400 mm = 2.9 arc-seconds/pixel

So the 40D combination has the smaller angular resolution, and a bird would be about 15% taller, in pixels, in the 40D image.

(The 1D Mark II image would appear less noisy due to its larger pixels, however.)

As one approaches 1-arc-second, it is such a small angular size, that maintaining steady pointing becomes very difficult,
and if the subject is very far away, atmospheric turbulence can be an added blurring factor.
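For anyone who wants to try the arithmetic themselves, the plate-scale formula above is a one-liner in Python:

```python
def plate_scale(pixel_pitch_um, focal_length_mm):
    """Angular pixel spacing in arc-seconds per pixel.
    206265 is the number of arc-seconds in one radian."""
    return 206265 * (pixel_pitch_um / 1000.0) / focal_length_mm

print(round(plate_scale(8.2, 500), 1))   # 3.4  (1D Mark II + 500 mm)
print(round(plate_scale(5.7, 400), 1))   # 2.9  (40D + 400 mm)
print(round(plate_scale(8.2, 1000), 1))  # 1.7  (the moon shot: 500 mm + 2x TC)
```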

If the digital camera exif data includes range to the subject, then one could use the angular resolution and range to measure things in the image. For example, how long are the claws on an eagle? I've always wanted to measure the claw lengths on some of my Alaskan brown bear photos, but can't find the range data in the exif data for my 1D Mark II. People ask me how close I was to an animal when they see the photo. Any ideas if the range data is in the Canon exif data, and if so, where?
I've had people say it's there, but not where. (I'm way off topic now.)

Roger
 
Hi Roger, I'm sure you are correct in all you are explaining. I'm also sure you must know it is going over everyone's heads.

This brings to mind the time I spent doing fine art B&W photography. It seemed the only way to arrive at the right exposure had to be the most complicated and complex one.

btw, please send me a PM with your full name so I can make the update. We would like all members to have full names as user names. Thanks
 
This thread has developed beyond a discussion about exposure to include the topic of visible detail and resolution. Otherwise I would not post the following here. My intention is to make the concept easier to understand but if I have failed then perhaps you know why I'm not a teacher :)


When we look with our eyes we see an angle of view. The level of details that we can resolve depends on the level of contrast of those details and also on the angle of view they cover. This is a physiological characteristic of our eyes and brain. If the details cover a wide angle of view then we can hardly miss them. If they cover a small angle of view then they become harder to resolve.

The angle of view can be measured in various units much like temperature or distance can. Most of us know about degrees, minutes and seconds, in which a second is 1/60 of a minute and a minute is 1/60 of a degree. 360 degrees covers a full circle of view (fisheye). Another measure is gradians (400 grads to a circle) and another is radians (approximately 6.3 (2 x Pi) radians to a circle) but to keep it simple I'll stick with degrees.

There are a number of ways that we can use to increase the angle of view covered by each detail. e.g.:
1. we can get closer (but too close and we cannot focus, or the thing bites us on the nose or just flies away). As we get closer the angle of view that it covers gets bigger.
2. we can use a lens that magnifies the size of the details more than our eyes can (binoculars, telescope, etc.)
3. we can take a photo and enlarge it and look at that instead of the real thing (so long as the photo captured the necessary detail in the first place)

I've been using the term angle of view, but I could just as easily call it an angle of arc. If you imagine straight lines from the extreme sides of an object to your eye, then the angle between those lines where they meet at your eye is the angle of view covered by that object. An arc-second defines an arc that covers an angle equal to 1 second, or (1/60) x (1/60) x 1 degree. An arc-minute covers (1/60) x 1 degree. We can see things that are bigger than about 0.3 arc-minutes, or 18 arc-seconds. Roger was explaining that each pixel on the various cameras with long lenses sees a part of the image equivalent to an angle of view of about 3 arc-seconds. That means that with a tele lens on the camera we can capture details that are one sixth of the smallest size that we can see with the naked eye. Yay. But if we had a shorter lens, then each pixel may cover less detail than we can see with the naked eye. That's partly why wide angle shots never seem to capture the same level of detail that we saw - lots of items, but each one is a bit small, and so it becomes vague and undetailed. Even a standard lens on these cameras is struggling to do better than good eyes.

What Roger did not mention is that his camera / lens angle of view was only theoretical to the extent that it assumed a perfect lens. It was about lens angle of view rather than actual real-life lens resolution. A real lens has its own optical limitations that further reduce the amount of detail captured by the camera. The camera pixels still see the same angle of view through the lens but the view is somewhat clouded.

Of course it doesn't matter what the camera captured until we display it or print it. Until then it is invisible. So how much detail can we see in a print?

It has been published somewhere (probably by Norman Koren in one of his highly detailed articles on scanning and resolution - I'll try to find a reference) that people with excellent eyesight can resolve printed details in good lighting as long as the details are big enough to cover at least 0.3 arc minutes or 5 thousandths of a degree. How big is that ? Well, it depends how far away those details are. At 10 inches it is about one thousandth of an inch. It would need a printer at about 1000 dots per inch to show it. [At 100 yards it is about 0.3 inches. Have you noticed how hard it is to see a fence wire at 100 yards, other than when the sun is glinting off it ? At 10 yards we are struggling to see 0.03 inches or 0.8mm. That's why we can't see all the fine detail of a feather at that distance.] Back to our print at 10 inches... We can see something that is bigger than 0.001 inches. If we put two of those somethings beside each other then they become a big something of 0.002 inches. We need a gap between them if we are to see them separately in a group or pattern, but the gap also has to be at least 0.001 inches or we won't see it either. So now our print resolution is down to about 0.002 inches. That means we can see a repeating pattern of maybe 500 lines per inch. If the source material happens to not line up with the printer dots then we can resolve less. Say 300 lines per inch. This is why 300 pixels per inch is about as good as most of us need when sending images to the printer. We let the printer use 1200 or whatever dots per inch so that it can use a pattern of dots to make up the required colour and tone for each image pixel, but the pixels that are smaller than 1/300 inches start getting too hard to separate and resolve consistently. Sometimes it works and sometimes it doesn't. 300 ppi is pretty reliable.
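The acuity arithmetic in the paragraph above can be checked with a few lines of Python (taking the 0.3 arc-minute figure as the limit for excellent eyesight):

```python
import math

def smallest_visible_inches(distance_inches, arc_minutes=0.3):
    """Smallest detail a good eye can resolve at a given distance."""
    angle_rad = math.radians(arc_minutes / 60.0)
    return distance_inches * math.tan(angle_rad)

d = smallest_visible_inches(10)  # a print at reading distance
print(round(d, 4))               # 0.0009 in: about a thousandth of an inch
print(round(1 / (2 * d)))        # 573: line pairs per inch, at the very best

print(round(smallest_visible_inches(100 * 36), 2))  # 0.31 in: the fence wire at 100 yards
```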

Now we can get to the other factor touched on previously. As well as physical size, we need optical contrast in order to separate and resolve the details. Two adjacent dots with almost identical colour and tone are effectively one big dot; we cannot tell them apart. Our bright white egret feathers are represented in our photos by a bunch of almost identical pixels. We are limited in how much we can enlarge the detail by the resolution of the photo: too much enlargement adds no new info and doesn't help. To see the details in a smaller print or on screen we need to exaggerate the contrast between adjacent pixels that are very similar in colour and tone. We can use curves in Photoshop to do this: one of those "S" curves that enhances contrast at the bright end of the scale but reduces it in the middle tones. Alternatively we apply lots of sharpening until we start to see halos, but that looks rather too unnatural.

Mention has been made of using a maximum value of 254 instead of 255 for whites. By using something less than 255 we are sure to get at least some ink on the page. That might prevent those horrible bare patches where the glossiness of the page and the glossiness of the ink are so different that we can immediately tell where there is no ink at all. As long as some ink is there, the glossiness is consistent over the whole page.

I hope this has clarified some of the concepts for some of the readers but the fact is that many switch off as soon as they see numbers. That's partly why most photographers are point and shooters. No disrespect intended or needed, it's simply the way of the world. If you can understand the concepts then you'll be better for it than someone who does what they're told without knowing why. That's what learning is all about.

- Alan
 
