Why "crop factor" is so pervasive

John Chardine
Subject line is a little unclear- what I mean is: why do we continue to read posts claiming that crop factor is important for reach on distant subjects? The explanation that crop factor is irrelevant, and that pixel size, pitch or density is the relevant metric, is often given by Roger and others, but it seems to need repeating. This is not just a BPN phenomenon; it's all over the web.

There are several reasons, and I'm sure the fact that a crop-factor camera trims off the edges of the image (relative to FF) and makes the subject appear relatively bigger in the frame must be compelling. However, I think there is another reason, illustrated in the attached image.

I took sensor data published on Roger's web site or elsewhere, and picked some current DSLR camera models plus a modern point 'n shoot (PNS) as an outlier. The graph shows pixel size (pixel area, or pixel pitch squared) against sensor size (sensor area, or crop factor) and illustrates a simple relationship I had suspected for some time: as sensor size increases, pixel size increases. In other words, in modern cameras larger sensors tend to be sampled more coarsely than smaller sensors. Coarsely sampled sensors provide less reach than finely sampled sensors, but because of the positive relationship between pixel size and sensor size, the two variables get confused and reach is assumed, in error, to be determined by sensor size.

The line on the graph is fitted to the points from sensors smaller than FF and shows that the D4 is right where it "should" be, but that the new D800 is way off the trend, as are most of the newer FF cameras. I guess the D3x was way ahead of its time!
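For anyone who wants to play with the same relationship numerically, here is a minimal Python sketch (not the data behind the attached graph; the sensor dimensions and resolutions below are approximate published figures) that computes pixel pitch, pixel area and crop factor for a few of the bodies mentioned in this thread:

```python
# Rough sketch: pixel pitch, pixel area and crop factor for a few cameras.
# The specs below are approximate published values, not the exact data behind the graph.

cameras = {
    # name: (sensor width mm, sensor height mm, horizontal pixels)
    "Nikon D4":    (36.0, 23.9, 4928),
    "Nikon D800":  (35.9, 24.0, 7360),
    "Nikon D300s": (23.6, 15.8, 4288),
    "Canon 7D":    (22.3, 14.9, 5184),
    "Canon 1D X":  (36.0, 24.0, 5184),
}

FF_DIAGONAL_MM = (36.0 ** 2 + 24.0 ** 2) ** 0.5   # ~43.3 mm reference diagonal

for name, (w_mm, h_mm, px_across) in cameras.items():
    pitch_um = w_mm / px_across * 1000.0          # pixel pitch in microns
    area_um2 = pitch_um ** 2                      # pixel area = pitch squared
    crop = FF_DIAGONAL_MM / (w_mm ** 2 + h_mm ** 2) ** 0.5
    print(f"{name:12s}  crop {crop:4.2f}  pitch {pitch_um:4.2f} um  area {area_um2:5.1f} um^2")
```

Sorted by pitch, the crop bodies do sample more finely than the D4 and 1D X, while the D800 breaks the FF trend, which is the same pattern the fitted line shows.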
 

Attachment: Sensor.jpg
John,

Greetings. One of the differences between cropped-sensor cameras and full-frame sensor cameras is the magnification of the viewfinder. So the simple comparison of just looking through the viewfinder of one camera and then the other gives the appearance of the cropped-sensor camera having the advantage of greater magnification, which it does... but only in the viewfinder. :w3

Figuring out pixel sizes, pixels on subject and the like are much less satisfying than "seeing" the difference.

Cheers,

-Michael-
 
"having the advantage of greater magnification, which it does... but only in the viewfinder"

But so many people miss this insight. They think they can multiply the focal length of their lens by 1.6, so "my 400 becomes a 640". I have had this conversation three times in the last month.
 
But pixels on the subject is important. If I get as close to my subject as I can (and let's assume I would like to get closer but cannot, a quite common occurrence here in New England), I have to crop. With an FX Nikon I have to crop more than I would with a DX Nikon. That is where a 16 MP D4 would give me fewer pixels on the final crop than the 12 MP D300s. Right?
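As a back-of-the-envelope check on that question (a sketch only; the sensor dimensions and pixel counts are approximate published values), pixels on the subject scale with pixel density, so for the same lens at the same distance the two bodies compare like this:

```python
# Same lens, same distance: the subject covers the same physical area on either sensor,
# so pixels on the subject is proportional to pixels per unit of sensor area.

def pixels_per_mm2(width_mm, height_mm, megapixels):
    """Approximate pixel density in pixels per square millimetre."""
    return megapixels * 1e6 / (width_mm * height_mm)

d4 = pixels_per_mm2(36.0, 23.9, 16.2)       # FX body, ~16 MP
d300s = pixels_per_mm2(23.6, 15.8, 12.3)    # DX body, ~12 MP

print(f"D4    : {d4:8.0f} pixels/mm^2")
print(f"D300s : {d300s:8.0f} pixels/mm^2")
print(f"D300s puts ~{d300s / d4:.1f}x more pixels on the same subject")
```

So yes: cropped to the same framing, the 12 MP DX file ends up with roughly 1.7 to 1.8 times as many pixels on the subject as the 16 MP FX file.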
 
Hi John,

You've hit the nail squarely on the head. I put crop factor as probably the most misunderstood concept in photography these days, only slightly more misunderstood than ISO (which doesn't change sensitivity), but up and coming is the very misunderstood "big pixels are less noisy" idea. A few years ago, when there were fewer choices (e.g. before about 2006), the trend of large versus small pixels tracked crop factor more closely, so the (wrong) idea was born. Now, with a large range of pixel sizes in the same sensor size, we will see more myths exposed.

Roger
 
Along with "full frame" has to be best. "Full", obviously being better than "not full". Might as well give up, marketing always triumphs over reality.
Rant over - Tom

A larger sensor is always better than a smaller sensor for general photography because it collects more light. Plus you can pack it with a higher number of pixels (like D800). It is not marketing.
 
Good comments all.

Michael- The viewfinder shows more or less (97-100%) of what the sensor is "seeing", so I think we are talking about the same thing regarding magnification in the viewfinder and my comment that crop sensors cut the edges off a full-frame image.

Allan- Pixels on the subject is essentially what is graphed on the y-axis in the figure above, with more pixels on the subject as pixel area decreases. Smaller-sensor cameras tend to put more pixels on the subject. Correct on your last statement. Only the D800, and marginally the D3x, put as many pixels on the subject as the mid-crop cameras.

Roger- The modern trend for sure is to bring those FF points down on the graph, but I was surprised to see at least two modern cameras (D4 and 1Dx) fit the trend quite well. Overall I guess we will see the slope of the line decrease as more and more pixels are stuffed onto sensors, but I know you have said in the past that there is a point of diminishing returns once pixels get below a certain size.

Arash- Agree it's not marketing. I suppose large sensors with lots of pixels are hard to make perfect and this is why they have been relatively slow to come to the consumer market?

I don't know whether folks are familiar with gapminder.org. Hans Rosling has developed this amazing (and quite simple) data visualisation concept that animates 2-D graphs with the time-dimension creating the movement. I have often thought that visualising how camera sensors have evolved over the years using this software would be really instructive.
 
A larger sensor is always better than a smaller sensor for general photography because it collects more light.

Arash,

Couldn't one use the same argument for pixels? A larger pixel is always better than a smaller pixel for general photography because it collects more light?

Hard to make an apples-to-apples comparison. A large sensor captures more light, but also from increasingly worse parts of the lens (the edges worsening the more of the image circle you include)... while with large pixels you have fewer edge effects (of the CFA) than a collection of smaller pixels covering the same area, I think. Unless CFA performance improves at smaller sizes (which would surprise me).

John, I'm a Rosling fan... you've seen Tufte's books?

Cheers,

-Michael-
 
John,

There is a new twist to the crop factor/FOV confusion. It's in Nikon's new video options (D4). For the same 1080p output, the pixels are sampled from the full-frame, 1.5x-crop or 2.7x-crop region of the sensor (with the 2.7x crop using the actual central 1920x1080 pixels of the sensor). So instead of a "crop", it is a different spatial sampling for the same output pixel count (HD video). The talk for video is 1.0x, 1.5x and 2.7x tele-extenders with a mode change (an 18-200 zoom turns into an 18-540 :w3 ).

The extra giggle in the D4 is that with the 1920x1080 size selected one can shoot 24 frames per second (still images, not video) completely silently (no shutter at all).

Cheers,

-Michael-
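To see why the 2.7x mode corresponds to the central 1920x1080 pixels, here is a small sketch of the arithmetic (my own figures, assuming an approximate 7.3 micron D4-like pixel pitch rather than Nikon's exact specification):

```python
# Width of sensor covered by the central 1920 pixels of a D4-like sensor,
# and the resulting "crop factor" relative to the full 36 mm sensor width.

PIXEL_PITCH_MM = 0.0073        # ~7.3 um pitch (approximate)
SENSOR_WIDTH_MM = 36.0
VIDEO_WIDTH_PX = 1920

crop_width_mm = VIDEO_WIDTH_PX * PIXEL_PITCH_MM    # ~14 mm of sensor used
crop_factor = SENSOR_WIDTH_MM / crop_width_mm      # ~2.6x

print(f"1:1 video crop width : {crop_width_mm:.1f} mm")
print(f"effective crop factor: {crop_factor:.1f}x")
print(f"an 18-200 mm zoom frames like {18 * crop_factor:.0f}-{200 * crop_factor:.0f} mm")
```

That lands around 2.6x; the advertised 2.7x presumably reflects the exact pitch and how Nikon defines the crop.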
 
Arash,

Couldn't one use the same argument for pixels? A larger pixel is always better than a smaller pixel for general photography because it collects more light? .......

-Michael-

Yes Mike, larger pixels have higher per-pixel SNR, but if the pixels are scaled properly according to Moore's law (this addresses the issues you mention), like in the Nikon D800, you can combine the smaller ones to recover the SNR of the larger pixel. So you can get essentially identical high-ISO performance when you need it and, at the same time, excellent resolution at low ISO. If you look at the NEF files from the D800 and D4, the D800 is noisier at the pixel level, but when you down-sample to 16 MP the visual noise is at least comparable up to very high ISOs, with more detail in the D800 files. And needless to say, the D800 is much better than the D700 despite having smaller pixels. However, this is not always true; the Canon 7D, for example, suffers from poor FPN performance, poor CFA spectral response, etc., and was a failure of pixel scaling at its time. So it really depends.

BTW, the Nikon D800 just achieved the highest DxOMark sensor score ever given to any digital camera (including medium format): 95, while the D4 scored 89. This shows the power of Moore's-law scaling :D
http://www.dxomark.com/index.php/Publications/DxOMark-Reviews/Nikon-D800-Review/Sensor-performance
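A minimal simulation of the pixel-combining argument (an illustration of the principle only, not a model of any actual camera's pipeline): split one big pixel's light across four small pixels, apply photon (Poisson) noise to each, then sum the small pixels back together.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1_000_000        # number of "big pixel" sites simulated
MEAN_PHOTONS = 400   # mean photons collected by one big pixel per exposure
SPLIT = 4            # the same area covered instead by 4 small pixels

# Big pixels: photon (Poisson) noise only.
big = rng.poisson(MEAN_PHOTONS, N)

# Small pixels: each sees a quarter of the light; sum each group of 4 back together.
small_summed = rng.poisson(MEAN_PHOTONS / SPLIT, (N, SPLIT)).sum(axis=1)

for label, x in (("big pixels", big), ("4 small pixels summed", small_summed)):
    print(f"{label:22s} mean {x.mean():6.1f}  SNR {x.mean() / x.std():5.1f}")
```

Both cases come out with an SNR of about 20 (the square root of 400). In real sensors the small pixels add some extra read noise and lose a little area to their borders, which is where the "scaled properly" caveat comes in.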
 
Agree it's not marketing. I suppose large sensors with lots of pixels are hard to make perfect and this is why they have been relatively slow to come to the consumer market?

Yes, there are basically two issues: 1) yield, and 2) the electronics needed to process the massive files. Also, beyond a certain point the system becomes diffraction-limited, so diminishing returns.
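On the diffraction point, here is a quick sketch of the standard Airy-disk estimate (textbook formula, green light assumed), just to show the scale at which ever-smaller pixels stop buying extra detail:

```python
# Airy disk diameter (first dark ring to first dark ring): d = 2.44 * wavelength * f-number.

WAVELENGTH_UM = 0.55   # green light, ~550 nm

for f_number in (2.8, 4, 5.6, 8, 11, 16):
    airy_um = 2.44 * WAVELENGTH_UM * f_number
    print(f"f/{f_number:<4}  Airy disk ~ {airy_um:4.1f} um")
```

Once the Airy disk spans several pixels, adding more pixels mostly resolves blur, hence the diminishing returns.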
 
John,

There is a new twist to the crop factor/FOV confusion. It's in Nikon's new video options (D4). For the same 1080p output, the pixels are sampled from the full-frame, 1.5x-crop or 2.7x-crop region of the sensor. .......

-Michael-

I didn't know Nikon could do cropped video! Since the video resolution is fixed, cropped video does translate to better reach. Great feature.
 
A larger pixel is always better than a smaller pixel for general photography because it collects more light?

I mentioned this myth in my post above: "but up and coming is the very misunderstood big pixels are less noisy idea."

A larger pixel enables the collection of more light; it does not necessarily collect more light. Consider this analogy: you have two buckets, one that holds 2 gallons of water and one that holds 1 gallon. You put the 2-gallon bucket under the faucet and turn on the water for 1 second. Now you put the 1-gallon bucket under the faucet and turn on the water at the same intensity for 1 second. Assume the amount of water was not enough to overfill either bucket. Which bucket has more water? (If you answer "I hate story problems" you fail the class. :w3) If your answer is that both buckets have the same amount of water, you are correct. Now, what controls how much water is in the bucket? It is not the size of the bucket; it is the force and duration of the flow, controlled by the faucet.

In digital photography, the bucket is the pixel, the faucet is the lens, and the time the faucet is on is the exposure time. There is one thing missing from the analogy, and that is focal length, which spreads out the light: if the faucet had a spray nozzle on the end, the spray would expand with distance from the faucet. Now, if the larger bucket has a larger diameter, it collects more water because it sees a larger area of the spray. But if the smaller bucket were moved closer to the nozzle, so that it collected the same angular area, it would collect the same amount of water. People talk about the same sensor field of view, but there is also the same pixel field of view. When the pixel field of view is the same, regardless of pixel size, the two pixels collect the same amount of light in the same amount of time and produce the same signal-to-noise ratio.

So in the case of digital cameras, the amount of light collected is controlled by the lens, its focal length and the exposure time. The larger pixels only ENABLE the collection of more light when the exposure time is long enough. With digital cameras, that only happens at the lowest ISO. At higher ISO, the buckets (pixels) never get filled.

So to manage noise in digital camera images, one must manage the lens aperture, the focal length, and the exposure time. The focal length manages the pixel field of view. So it is not the pixel that controls the observed noise in an image.

Roger
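Here is a small numeric sketch of the "same pixel field of view" point (toy numbers of my own: a fixed photon flux per square micron at the focal plane for a given aperture diameter, two pixel pitches, and the focal length adjusted so each pixel covers the same angle on the subject):

```python
# Toy comparison: photons collected per pixel when the PIXEL field of view is held constant.
# Assumption: for a fixed aperture diameter and exposure time, the photon flux per unit
# area of the focal plane scales as 1 / focal_length^2.

FLUX = 50.0                      # photons per square micron per exposure (arbitrary)

def photons_per_pixel(pitch_um, flux=FLUX):
    return flux * pitch_um ** 2

big_pitch, small_pitch = 7.3, 4.9    # e.g. D4-like vs D800-like pixel pitches, microns

# Same lens and focal length: the big pixel covers a wider angle and collects more photons.
print("same lens      :", photons_per_pixel(big_pitch), "vs", photons_per_pixel(small_pitch))

# Shorten the focal length by small_pitch / big_pitch so the small pixel covers the same
# angle on the subject; the flux rises by (big_pitch / small_pitch)^2, and the counts match.
scale = big_pitch / small_pitch
print("same pixel FOV :", photons_per_pixel(big_pitch),
      "vs", photons_per_pixel(small_pitch, FLUX * scale ** 2))
```

With the same lens the big pixel collects more photons per pixel; match the pixel field of view (shorter focal length, same aperture diameter) and the counts come out identical, which is the sense in which the lens, focal length and exposure time, not the pixel, control the light.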
 
Roger,

I tend to think of noise as being an error in light measurement, specific to a pixel (rather than generic, like light drop-off from a lens). So, uh, refining your analogy... noise would be a bucket not capturing the expected amount of water given an equalized spray; that is, out of 100 buckets, the few that fall outside a small variance would be noise. The question is what would cause those few buckets to capture less water, and whether it is size dependent.

One possible way of thinking is: well, size doesn't matter; out of 100 buckets, no matter their size, you will end up with about the same number of buckets with an error. One might add that the same area would see the same amount of error, so if you see 4 flaws in 100 buckets you would only see 2 flaws in 50 buckets of twice the size (surface area).

Another might counter: but then, for a given area, the smaller buckets have twice as many errors!

No, no. You can't compare that way... you have to add two of the smaller buckets together to normalize the bucket-to-surface-area ratio. When you do that, only two of the combined buckets are still out of range, which is the same as for the larger buckets.

Oh, you can't count that! You end up reducing meaningful variance between all the buckets when you combine them.

on and on like that.

I don't think there is a universally satisfying solution to this conundrum, nor do I think the new camera releases will expose the myth (whichever one it is :w3 )... I'm inclined to think that there are enough variables between the sensor and any discriminating output to render a definitive answer incalculable.

Cheers,

-Michael-
 
Roger,

I tend to think of noise as being an error in light measurement, specific to a pixel (rather than generic like light drop off from a lens). .......
I don't think there is a universally satisfying solution for this conundrum nor do I think the new camera releases will expose the myth (which ever one it is :w3 )... I'm inclined to think that there are a sufficient number of variables between the sensor and discriminating output to render a definitive answer incalculable.


Hi Michael,
While one can consider the noise we see in our digital camera images an error in measurement, most of the noise we see is due to the light itself, not the lens, sensor and electronics recording the light.

Photons arrive at random times, and we are counting photons for a relatively short interval (the exposure time in a camera). The noise is the square root of the number of photons collected, and this photon noise (from the light itself) is the dominant noise source we see in our images. The electronics in digital cameras add a small amount of noise, but it usually only becomes a factor in the deepest shadows; it includes read noise from the sensor and noise from the electronics, including fixed-pattern noise (FPN). But on any subject you photograph (excluding very long, minutes-long exposure astrophotos), the main noise you see in images is due to photons. Photon noise is quite predictable, and so are sensor read noise and other electronic noise, so the response of a camera is quite predictable. It is mostly basic physics and engineering.

What is not predictable is how people react to noise. Different people seem to tolerate noise in images differently. One observation I find interesting: as digital cameras were emerging, people didn't like the images because they were "too smooth"; people wanted that film grain because they thought images should look that way. Now people complain about the tiniest amount of noise.

Roger
 
I tend to think of noise as being an error in light measurement, specific to a pixel (rather than generic like light drop off from a lens).

While one can consider the noise we see in our digital camera images an error in measurement, most of the noise we see is due to the light itself, not the lens sensor and electronics recording the light.

Roger, what you're saying makes sense to me (regarding the light and the recording of the light). I think I get it now. I've had this strange mental disconnect from the various web discussions about noise and noise comparisons between various new sensors/cameras (talk about noise :w3 )...

So for me, at least, here is the ah-hah: the meaning of noise gets converted along with the raw conversion. Subsequent to raw, it's all signal, selective smoothing and contrast as we go.

Cheers,

-Michael-
 
So, we have a sensor pixel bucket. And it has a filter on it so it collects only "red" wavelength photons. The noise is the square root of the number of photons collected. Does that mean that different red pixel buckets are collecting a -different number- of red photons (given the same exposure, etc.)? Thus each red pixel bucket, as it is processed by the electronics, appears different to our eyes? This we call noise? Where did I make a "wrong turn" here?
Tom
 
So, we have a sensor pixel bucket. ....... Where did I make a "wrong turn" here?
Tom

Hi Tom.

Let's try this. Say you are imaging a uniform red target and the amount of light coming to each pixel on a long-term average is 100 photons per second, but we expose for only 1 second. Say we have a 1-megapixel camera. In each pixel we would expect 100 photons, but when we look at each pixel it varies: say 93 in one pixel, 88 in another, 114 in another, and so on. If we average all 1 million pixels we will get a number very, very close to 100. But the variation, the standard deviation, across the pixels will be 10. So the noise is ten (the square root of 100) and the signal-to-noise ratio = 100 / sqrt(100) = sqrt(100) = 10.

The above was for a perfect camera. Then add electronics and sensor read noise on top of that, but that noise is pretty small. Since this was a low-signal situation, we should have been imaging at high ISO. On a 1D Mark IV, the high-ISO read noise would be about 2 electrons. So the noise seen would be the 10 electrons from the detected photons plus 2 electrons from the sensor and electronics, added in quadrature: sqrt(10*10 + 2*2) = 10.2. This illustrates why most of the noise we see in our images results from photon noise (Poisson counting statistics), except in the deepest shadows where we detect only a few photons per pixel.

Probably more than you wanted to know...

Roger
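For anyone who wants to check those numbers, here is a minimal Monte Carlo sketch (a toy model assuming Poisson photon noise plus 2 electrons of Gaussian read noise and unit gain):

```python
import numpy as np

rng = np.random.default_rng(1)

N_PIXELS = 1_000_000     # a "1 megapixel" uniform red patch
MEAN_PHOTONS = 100       # expected photons per pixel in the 1 s exposure
READ_NOISE_E = 2.0       # electrons of read noise, roughly a 1D Mark IV at high ISO

photons = rng.poisson(MEAN_PHOTONS, N_PIXELS)                  # photon (shot) noise
signal = photons + rng.normal(0.0, READ_NOISE_E, N_PIXELS)     # add Gaussian read noise

print(f"mean           : {signal.mean():6.2f}   (expect ~100)")
print(f"measured noise : {signal.std():6.2f}   (expect sqrt(100 + 2^2) ~ 10.2)")
print(f"SNR            : {signal.mean() / signal.std():6.2f}")
```

The measured standard deviation comes out around 10.2, matching sqrt(10^2 + 2^2); drop the read-noise term to see the pure photon-noise value of 10.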
 
