Big pixels or little pixels?

BirdPhotographers.net


Roger Clark

With many recent cameras offering wildly different pixel sizes, there are more choices than ever, and seemingly more confusion than ever. Large pixels alone do not improve high-ISO noise performance. With the same lens at the same imaging position, larger pixels put fewer pixels on the subject; each pixel sees a larger area and gathers more light, but the image records less detail. With very small pixels, one could average pixels together and improve per-pixel signal-to-noise by trading away detail: less noise and less detail, or more detail with more noise. So which to choose, a camera with larger pixels or smaller pixels?
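If you want to see that trade numerically, here is a quick sketch (made-up numbers, shot noise only): simulate a uniform patch on a small-pixel sensor, then average 2x2 blocks to mimic a sensor with pixels of twice the pitch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat patch: 100 photons per small pixel on average;
# photon arrival follows Poisson (shot-noise) statistics.
small = rng.poisson(lam=100, size=(512, 512)).astype(float)

# Per-pixel SNR of the small-pixel image (mean / std of the shot noise).
snr_small = small.mean() / small.std()

# Average each 2x2 block -- the "large pixel" collects 4x the light.
binned = small.reshape(256, 2, 256, 2).mean(axis=(1, 3))
snr_binned = binned.mean() / binned.std()

# Shot noise is uncorrelated, so averaging 4 pixels doubles per-pixel SNR.
print(round(snr_binned / snr_small, 2))  # close to 2
```

The SNR gain comes at the price of a 2x loss of linear resolution, which is exactly the trade described above.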

In the subsequent posts to this thread, I'll show a series of images taken with 3 cameras, each with differently sized pixels. The subject (the moon) was small in the frame, so this is a focal-length-limited situation. I used a 300 mm f/2.8 lens on all 3 cameras. The exposure was 1/ISO second at f/5.6 on each camera, so each sensor received the same amount of light at a given ISO. The 3 cameras have 4.3, 5.7 and 6.4 micron pixels. The 6.4 micron pixel images have the highest signal-to-noise ratio, but the least detail. All images were converted from raw with identical settings.

Your job, if interested, is to examine the series and evaluate which camera produces the best image at each ISO for this condition. (Note: a frame filling subject will produce different results.)

Roger
 
The ISO 100 set.
 

Attachments

  • moon.1d4.7d.5d2.iso0100.jpg
    237.5 KB
Very good points, Roger.

This is one of the topics where things get complicated as you factor in real-world situations. For your subject (focal-length limited, high contrast, no color detail) I'd pick the small-pixel camera and do advanced image processing to recover the detail and remove the noise.

For bird photography there are too many factors, especially when you have different sensor sizes and thus different FOVs; it really depends on how much closer you can get, the purpose of the final image, etc. For my style of photography, a larger sensor will often provide better IQ because I can get closer to the subject and collect more light at the same FOV. But that doesn't apply to everyone, so I don't think there is one answer; a larger sensor with too few pixels is not ideal either.

I think a better way is to fix the sensor size and examine pixel size alone (like the 5D3 vs. D800): both receive the same number of photons but divide them differently. In principle, since shot noise is white, you can recover the SNR of the large pixel by averaging the smaller pixels, as we all know. In practice, however, the demosaic makes things complicated. How the demosaic process shapes the noise PSD near Nyquist depends on the particular algorithm. For example, some raw converters like ACR produce color blotches or large grain a few pixels across, while others produce very fine grain near the Nyquist frequency. In the former case the spectrum is no longer white, and down-sampling doesn't quite recover the SNR of the larger pixel because of the correlation induced by the demosaic algorithm. There are also complications from read noise in the shadow areas: if the read noise is patterned (FPN), down-sampling can actually make it worse, because the low-frequency harmonics become more dominant once the random component has been reduced by low-pass filtering. So overall, especially for subjects like feathers with low-contrast detail, preserving that detail becomes difficult at high ISOs with very small pixels.
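You can demonstrate the correlation point with a toy simulation: white noise averages down by the full factor, but noise that has been spatially correlated (here a crude box blur standing in for demosaic "blotches") does not. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_blur(img, size=5):
    """Crude box filter: correlates noise over a size x size footprint,
    mimicking demosaic blotches a few pixels across (wraps at edges)."""
    out = np.zeros_like(img)
    r = size // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / size ** 2

def downsample(img, k=4):
    """Average k x k blocks (an idealized down-sample)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

white = rng.normal(size=(1024, 1024))
blotchy = box_blur(rng.normal(size=(1024, 1024)))
blotchy *= white.std() / blotchy.std()   # equal per-pixel noise power

# White noise drops by the full factor k = 4; the correlated noise
# does not, because neighboring samples are no longer independent.
print(white.std() / downsample(white).std())    # close to 4
print(blotchy.std() / downsample(blotchy).std())  # well under 4
```

Both images start with the same per-pixel noise power, yet only the uncorrelated one gives back the large-pixel SNR after down-sampling.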

On the other hand, with pixel scaling, cross talk (optical and electrical), read noise, reset noise and QE all improve, so when comparing cameras of different generations the newer camera usually comes out better. Here is an ISO 3200 file from the Nikon D800, which has been processed (poorly, because I did not have the raw file) and down-sampled to 12 Mpixels (native D700/D3 size). I think everyone would agree that it looks better than the output from those cameras: http://www.arihazeghiphotography.com/photos/D800NR.jpg The D800 is a perfect example of Moore's law for image sensors!

I think there is a crossover ISO beyond which it becomes practically impossible, with commercial image processing software, to recover the SNR of the large-pixel 5D3 from a small-pixel D800 output. My wild guess is that the crossover is just above ISO 6400 (maybe 12800). Since most wildlife images are not made at night, I think Nikon's approach is best in this case for wildlife photographers. When light is that poor, avian photos usually suck anyway, so those insanely high ISOs aren't really useful in bird photography IMHO.

I think a few years ago we concluded the optimal pixel size was 5.6 um, but now it seems that with technology scaling the optimal size is close to where the D7000/D800 land, which is 4.8 um. That is for general applications; for specific applications where absolute low-light performance is needed, it is probably better to stick with large pixels for a clean file out of the camera, minimal post-processing and small file sizes.

Anyway, good discussion. I am not sure what the future roadmap for Canon is now that they have announced their major pro bodies for the next 3 years...
 
I think a better way is to fix the sensor size and examine pixel size alone (like the 5D3 vs. D800): both receive the same number of photons but divide them differently. In principle, since shot noise is white, you can recover the SNR of the large pixel by averaging the smaller pixels, as we all know. In practice, however, the demosaic makes things complicated. How the demosaic process shapes the noise PSD near Nyquist depends on the particular algorithm. For example, some raw converters like ACR produce color blotches or large grain a few pixels across, while others produce very fine grain near the Nyquist frequency. In the former case the spectrum is no longer white, and down-sampling doesn't quite recover the SNR of the larger pixel because of the correlation induced by the demosaic algorithm. There are also complications from read noise in the shadow areas: if the read noise is patterned (FPN), down-sampling can actually make it worse, because the low-frequency harmonics become more dominant once the random component has been reduced by low-pass filtering. So overall, especially for subjects like feathers with low-contrast detail, preserving that detail becomes difficult at high ISOs with very small pixels.

Arash,

Greetings. The above is just a great paragraph. Thanks much.

Cheers,

-Michael-
 
Roger- Whether it should theoretically or not, in all the images the 7D shows more noise than the 1DIV or 5DII, which seem more or less equivalent (the 5DII appears slightly better). For me this is a big factor that outweighs the extra detail in the 7D image, and is one reason I love my 1DIV and 5DII so much! Less noise for me means much easier and more efficient processing of the raw image. I am not sure if Arash is saying this exactly, but one counter to my choice above might be that if I down-sample the 7D image to make it equivalent to the other two, this would average the noise and make the images comparable.

What is "noise PSD" BTW?
 
snip

On the other hand, with pixel scaling, cross talk (optical and electrical), read noise, reset noise and QE all improve, so when comparing cameras of different generations the newer camera usually comes out better. Here is an ISO 3200 file from the Nikon D800, which has been processed (poorly, because I did not have the raw file) and down-sampled to 12 Mpixels (native D700/D3 size). I think everyone would agree that it looks better than the output from those cameras: http://www.arihazeghiphotography.com/photos/D800NR.jpg The D800 is a perfect example of Moore's law for image sensors!

snip

Very interesting Arash. I wonder when this breaks down if you take it further? In other words I could envision a FF sensor which is massively sampled with very tiny pixels (say the size of a point-and-shoot or smaller), then down-sampled to say 20mp in-camera. Would this be a viable approach to eliminating noise at higher ISOs?
 
Roger- Whether it should theoretically or not, in all the images the 7D shows more noise than the 1DIV or 5DII, which seem more or less equivalent (the 5DII appears slightly better). For me this is a big factor that outweighs the extra detail in the 7D image, and is one reason I love my 1DIV and 5DII so much! Less noise for me means much easier and more efficient processing of the raw image. I am not sure if Arash is saying this exactly, but one counter to my choice above might be that if I down-sample the 7D image to make it equivalent to the other two, this would average the noise and make the images comparable.

What is "noise PSD" BTW?

Yes, that's what I was saying. PSD = Power Spectral Density; it's a measure of noise power as a function of spatial frequency. One other complication is that bicubic down-sampling in PS is not an ideal low-pass filter either.
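For anyone who wants a concrete picture of a noise PSD, here is a rough numerical sketch: take the 2-D FFT of a noise image, square the magnitudes, and average over rings of equal spatial frequency. For white noise the result is flat, which is exactly what "shot noise is white" means.

```python
import numpy as np

rng = np.random.default_rng(2)

def radial_psd(noise):
    """Radially averaged noise power vs. spatial frequency."""
    f = np.fft.fftshift(np.fft.fft2(noise))
    power = np.abs(f) ** 2 / noise.size
    h, w = noise.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Sum power in each ring of radius r, then divide by ring size.
    return np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())

white = rng.normal(size=(256, 256))
psd = radial_psd(white)
# White noise: low and high spatial frequencies carry roughly equal
# power (about 1.0 here, the per-pixel noise variance).
print(round(psd[5:30].mean(), 1), round(psd[80:120].mean(), 1))
```

Running the same function on demosaiced raw output would show the low-frequency bump (blotches) or Nyquist-concentrated grain Arash describes.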
 
Very interesting Arash. I wonder when this breaks down if you take it further? In other words I could envision a FF sensor which is massively sampled with very tiny pixels (say the size of a point-and-shoot or smaller), then down-sampled to say 20mp in-camera. Would this be a viable approach to eliminating noise at higher ISOs?

The overall SNR is fixed by sensor size: a FF sensor at a given QE collects x number of photons, and pixel size is just how you divide those photons into buckets (pixels).

But yes, it is possible to provide a native lower-resolution but lower-noise output using a high-res sensor. When I was dealing a little with CMOS sensors there was the idea of averaging pixels pre-demosaic in a high-res sensor to improve per-pixel SNR. This would be close to the ideal low-pass filter and would give results identical to a larger-pixel sensor in the shot-noise limit. Canon did implement this with s-RAW and m-RAW, but instead they used a cheap software method that interpolates the already-demosaiced raw data; the final results were worse than processing the images offline on your computer.

The idea of "RAW" pixel averaging was abandoned because the electronics didn't have enough bandwidth to perform this function at that time.

FYI, apparently there is a 40-megapixel Nokia phone :O that uses this technique with the right hardware!
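The pre-demosaic averaging idea can be sketched in a few lines (hypothetical flat patch, shot noise only, numbers made up): on an RGGB mosaic you average the 2x2 nearest same-color sites, producing a half-resolution mosaic before any demosaic can correlate the noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flat gray patch through an RGGB Bayer mosaic,
# 100 photons per photosite on average (shot noise only).
mosaic = rng.poisson(lam=100, size=(512, 512)).astype(float)

def bin_bayer(m):
    """Average 2x2 same-color sites of an RGGB mosaic, yielding a
    half-resolution mosaic *before* any demosaic touches the data."""
    out = np.empty((m.shape[0] // 2, m.shape[1] // 2))
    for c0 in (0, 1):          # row offset of the color site
        for c1 in (0, 1):      # column offset of the color site
            plane = m[c0::2, c1::2]          # one color plane
            ph, pw = plane.shape
            binned = plane.reshape(ph // 2, 2, pw // 2, 2).mean(axis=(1, 3))
            out[c0::2, c1::2] = binned
    return out

small = bin_bayer(mosaic)
# Shot noise is uncorrelated, so averaging 4 same-color sites doubles
# per-site SNR -- matching a sensor with pixels of twice the pitch.
print(round(mosaic.std() / small.std(), 2))  # close to 2
```

This is the "ideal low-pass" behavior in the shot-noise limit; interpolating already-demosaiced data, as the s-RAW/m-RAW approach did, cannot match it because the noise is correlated by then.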
 
Roger (or anybody else), can you do the same test (assuming you have all three cameras right now) by putting a teddy bear (or something similar with texture) at a reasonable distance, to simulate the 'photographing a duck with a 300mm lens from the bank' situation? Same ISO, shutter speed, lens and aperture please :)

One set of tests can be properly exposed images. Another set can be underexposed by a stop and then pushed one stop in raw conversion.
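The underexpose-and-push case is easy to reason about on paper, too. A tiny sketch with made-up numbers (400 electrons of signal at proper exposure, 4 electrons RMS read noise) shows why the pushed file is noisier: pushing in raw conversion scales signal and noise together, so the SNR deficit is locked in at capture.

```python
import math

# Hypothetical illustration numbers: 400 e- signal at proper exposure,
# 4 e- RMS read noise.
signal, read = 400.0, 4.0

def snr(s, r):
    # Shot noise is sqrt(s); read noise adds in quadrature.
    return s / math.sqrt(s + r * r)

proper = snr(signal, read)
# One stop under = half the photons; the raw-conversion push cannot
# add information back.
pushed = snr(signal / 2, read)
print(round(proper, 1), round(pushed, 1))  # 19.6 13.6
```

The gap widens as read noise grows relative to the signal, which is why pushed shadows are where pattern noise shows up first.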
 
Roger (or anybody else), can you do the same test (assuming you have all three cameras right now) by putting a teddy bear (or something similar with texture) at a reasonable distance, to simulate the 'photographing a duck with a 300mm lens from the bank' situation? Same ISO, shutter speed, lens and aperture please :)

One set of tests can be properly exposed images. Another set can be underexposed by a stop and then pushed one stop in raw conversion.

In focal-length-limited situations the results are similar to what Roger has already shown above, especially at lower ISOs and when you don't have to deal with pattern noise in the shadows.
 
Arash,

Wouldn't a very high res sensor increase the impact of any color filter array issues (edges, angles, gaps)?

Here is an ISO 3200 file from the Nikon D800, which has been processed (poorly, because I did not have the raw file) and down-sampled to 12 Mpixels (native D700/D3 size). I think everyone would agree that it looks better than the output from those cameras: http://www.arihazeghiphotography.com/photos/D800NR.jpg The D800 is a perfect example of Moore's law for image sensors!

Luminance noise is better in the down-sampled D800 shot, but color noise is much worse than what I get with my D3 at 3200, IMO.

Cheers,

-Michael-
 
Roger(or anybody else), can u do the same test( assuming u have all three cameras right now) by putting a teddy bear( or something similar with texture ) at a reasonable distance to simulate 'photographing a duck with 300mm lens from the bank' situation. same iso, SS, lens and aperture please :)

And kittens....got to have some kittens too :bg3:
 
Arash,

Wouldn't a very high res sensor increase the impact of any color filter array issues (edges, angles, gaps)?



Luminance noise is better in the down-sampled D800 shot, but color noise is much worse than what I get with my D3 at 3200, IMO.

Cheers,

-Michael-

You are right. Optical cross talk and aberration from the MLA become worse with scaled pixels for a given technology generation, and the fill factor drops slightly as well. But usually this falls on the Moore's-law trend, meaning that advances in semiconductor processing technology between two generations will, at least partially, compensate for these effects.

Several years ago one of the main issues in large sensors was "pixel vignetting": the angle of incidence was too large for pixels near the border of a FF sensor, so they would see less light, and there was too much optical leakage as well. Today this issue is solved by careful micro-lens design. Below is one of the old publications from that time; it's no longer relevant. It's amazing how fast things change...

http://isl.stanford.edu/groups/elgamal/abbas_publications/C074.pdf
 
Roger, I'm following John here completely with his analysis of the images regarding the noise. Especially at the high ISOs, the 5D is a clear winner, retaining much more detail than the 1D while producing less color noise.
However, wouldn't it be fair to judge detail vs. noise on images that are cropped so as to show the subject at the same size? I can't judge from the images presented how much detail will remain in the images from the full-frame cameras when enlarged to show the subject at a similar size as the 7D. It's the quality of the final image (i.e. cropped until the subject has the size you would want in your frame) that counts. I agree with Arash that when you want to study the effect of pixel size alone it would be better to compare equal-sized sensors (but even then you don't eliminate other factors that may cause differences in IQ).
 
You are right. Optical cross talk and aberration from the MLA become worse with scaled pixels for a given technology generation, and the fill factor drops slightly as well. But usually this falls on the Moore's-law trend, meaning that advances in semiconductor processing technology between two generations will, at least partially, compensate for these effects.

Several years ago one of the main issues in large sensors was "pixel vignetting": the angle of incidence was too large for pixels near the border of a FF sensor, so they would see less light, and there was too much optical leakage as well. Today this issue is solved by careful micro-lens design. Below is one of the old publications from that time; it's no longer relevant. It's amazing how fast things change...

http://isl.stanford.edu/groups/elgamal/abbas_publications/C074.pdf

Thanks for the ref... interesting. I would think, too, that as pixel size shrinks toward 1-2 microns, manufacturing tolerances alone would have an impact (you can't map bad cells out either), reducing the effectiveness of down-sampling for NR.

Cheers,

-Michael-
 
Thanks for the ref... interesting. I would think, too, that as pixel size shrinks toward 1-2 microns, manufacturing tolerances alone would have an impact (you can't map bad cells out either), reducing the effectiveness of down-sampling for NR.

Cheers,

-Michael-

Sony showed a 1.25 um pixel (on a small sensor) at the IEDM last December. Good pixel performance, but somewhat pointless at that size, since you start off diffraction limited and yield drops exponentially with larger area...

Check out this reference; it shows very nice noise measurements etc. I cannot post it here due to copyright.

BTW, Sony is the current industry leader in image sensor technology in terms of R&D bandwidth and high-profile publications.

K. Itonaga, K. Mizuta, T. Kataoka, M. Yanagita, H. Ikeda, H. Ishiwata, Y. Tanaka, T. Wakano, Y. Matoba, T. Oishi, R. Yamamoto, S. Arakawa, J. Komachi, M. Katsumata, S. Watanabe, S. Saito, T. Haruta, S. Matsumoto, K. Ohno, T. Ezaki, T. Nagano, and T. Hirayama, "Extremely-Low-Noise CMOS Image Sensor with High Saturation Capacity," Semiconductor Technology Development Division, Core Device Development Group, R&D Platform, Sony Corporation, Kanagawa, Japan.
 
