
Thread: Important Sharpening Information!


  1. #1
    Roger Clark
    Join Date: Feb 2008
    Location: Colorado
    Posts: 3,949
    Threads: 254

    Attached Images
     
    Quote Originally Posted by Arthur Morris View Post
    OK. Here we go. It is absolutely wrong to sharpen your full sized TIFF master files. No digital image should be sharpened until it is sized for final usage. I save my optimized master file. If we need to make a print, we open the master file, duplicate the image, close the master file, size the image to the print size and then sharpen the image. The larger the print (and the larger the file size), the more sharpening the image will need. Need a large j-peg for slide programs, downsize the copy, sharpen to taste, and then save. Same for a small j-peg. An 11 X 16 print might need sharpening in the range of 450%, .8 (or more). A large jpeg, 325, .3. A small jpeg, 225, .25.
    Art,
    I do my sharpening on my highest resolution image. I describe why below.

    Before I start, a little background. I have been doing digital imaging since 1976 (I started with experimental scientific applications using 480x512-pixel digital cameras at MIT; the system weighed about 200 pounds, not including two 3-foot racks of electronics). I routinely write my own image processing algorithms for scientific applications, usually using aircraft- and spacecraft-based sensors for studying ecosystems and mapping other planets. I understand the theory and principles of image processing.

    If anything can be said to be absolutely wrong, it is calling unsharp mask "sharpening"! ;-) Unsharp mask is a method derived from film days as an analog darkroom process for edge enhancement. As such, unsharp mask does not actually sharpen; it increases acutance. See, for example: http://en.wikipedia.org/wiki/Sharpness_(visual)
    The unsharp mask tool in Photoshop and many other image processing systems does not change resolution, and in fact, depending on the application, can actually decrease it. But the human visual system is usually fooled into thinking the image looks sharper because of the increased edge contrast.
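
    To make that concrete, here is a minimal Python sketch of the basic unsharp mask arithmetic (not Photoshop's exact implementation): the result is the original plus a scaled difference between the original and a blurred copy, so edge contrast goes up but no new detail is created.

    Code:
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=2.0, amount=1.0):
        # Classic unsharp mask: add back a scaled difference between the
        # image and a blurred copy. This raises edge contrast (acutance)
        # but cannot recover detail that was never resolved.
        blurred = gaussian_filter(img.astype(float), sigma=radius)
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)

    # A soft ramp edge: note the overshoot and undershoot "halos" on either
    # side of the transition (contrast goes up, resolution does not).
    edge = np.repeat([0.2, 0.3, 0.5, 0.7, 0.8], 3)
    print(np.round(unsharp_mask(edge, radius=1.0, amount=1.0), 2))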

    Having said that as background, I disagree with your statement that it is absolutely wrong to sharpen the full size master file. If you want to do it that way, that is fine, but I'll use the posted image to illustrate why I work on the full size image. Often there are multiple ways to achieve an image processing result, and no single method is usually right or wrong.

    Take a look at a typical full size image. Not all parts of the image are in equal focus. This is common to many photographs due, for example, to limits in depth of field, subject movement, or camera movement with too long an exposure time. Thus, if one wants to compensate for some of these problems, one must selectively sharpen different areas of the image, just as one may selectively dodge and burn an image. This is the basic reason I feel it is best and most time efficient to complete the processing on the full resolution image, so you don't have to do it again and again for each size of image one wants to produce.

    I produce a file that I believe will be my largest print size. This may start with up-sampling at the raw conversion, and/or include cubic spline interpolation to more pixels. Such an "up-rez'd" image is soft at the pixel or even couple-of-pixel level. So after increasing pixels, and after all other processing is completed (I archive this version of the image), the last processing step I do is to really sharpen. By this I mean using algorithms that actually sharpen, not simply unsharp mask. A master of Photoshop and unsharp mask can do a pretty good job of making an image look great, but in my experience the true sharpening algorithms produce better fine detail, which, when combined with a little unsharp mask, can produce stunning results with more visible fine detail.
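
    For anyone who wants to try the up-rez step outside of a raw converter, here is a rough Python sketch using scipy's cubic spline interpolation as a stand-in for whatever resampling your converter uses; the array is a placeholder for a real frame, and the result will look soft at the pixel level until the later sharpening step.

    Code:
    import numpy as np
    from scipy.ndimage import zoom

    # Placeholder luminance data standing in for a real image; replace it
    # with your own array loaded from a file.
    img = np.random.rand(400, 600)

    # Up-rez by 1.5x with a cubic spline (order=3). The extra pixels are
    # interpolated, so the result is soft until it is sharpened.
    big = zoom(img, 1.5, order=3)
    print(img.shape, "->", big.shape)   # (400, 600) -> (600, 900)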

    The algorithm I use most is Richardson-Lucy image deconvolution. This algorithm, for example, was used to fix images from the Hubble telescope before the optical fix. Given a model of the blur, the algorithm tries, through iterative estimation, to put energy back into better focus. Blur can come from many causes, from motion to lens aberrations to defocus. The algorithm can repair, for example, some motion blur: if you have an otherwise great image with a slight blur due to subject movement, you can reduce that blur. There is no free lunch, however, as the process increases noise. Fortunately, the low-noise images we get from DSLRs are generally so good that the algorithm can be used pretty effectively. Below, I'll show and discuss the results on the bird image in this thread as an example.
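
    As a rough illustration in Python, scikit-image ships a Richardson-Lucy implementation that can stand in for the software described here; the filename and the small 3x3 Gaussian blur model below are just placeholders, and the iteration count is kept low so the example runs quickly.

    Code:
    import numpy as np
    from skimage import img_as_float, io
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size, sigma):
        # Small normalized Gaussian point spread function (the blur model).
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    # 'bird.tif' is a placeholder for your own working copy (grayscale here;
    # a color image would be deconvolved one channel at a time).
    img = img_as_float(io.imread("bird.tif", as_gray=True))

    # 30 iterations with a 3x3 blur model, far fewer than the runs described
    # above, purely to keep the example fast. The iteration count is passed
    # positionally because its keyword name differs between scikit-image
    # versions.
    restored = richardson_lucy(img, gaussian_psf(3, 1.0), 30)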



    Quote Originally Posted by Arthur Morris View Post
    And there is a huge difference in sharpening for print versus sharpening for the web. When sharpening for print, your sized file should look over-sharpened (to allow for the ink soaking into the paper). When sharpening for the web, you want to sharpen without any apparent over-sharpening... What you see is what you want...
    I agree with this, with a couple of caveats. With more and more people using LCDs, which have a sharper edge response (higher Modulation Transfer Function, MTF), an image prepared on a CRT might look over-sharpened when viewed on an LCD. So use similar technology (LCDs these days) to prepare images that are going to be viewed on the web.
    If you are printing using a technology other than ink, such as a Lightjet, you should not over-sharpen, in my opinion.

    Quote Originally Posted by Arthur Morris View Post
    If you sharpen your master file and then downsize it, the image should theoretically be well over-sharpened. You are the second person here in two days who states that they sharpen their master files and then downsize for the web (yet whose jpegs looked soft...) I have no explanation for that, but best to do it right and learn to sharpen for a given size...
    As you downsize a master file, the image is not over sharpened at all. In fact, due to sampling, it again becomes under sharpened, just as expected by theory. Thus, after downsizing, one must resharpen the image. Downsizing can not over sharpen, unless perhaps you use a pretty poor algorithm.

    An example thought experiment is to consider a perfectly sharp edge, one pixel black, one white. In the resampled image, it is a very low probability that two pixels will fall exactly on each side of the edge to maintain that perfect sharpness. More likely, one pixel will fall on the edge, so it takes 3 pixels to define the edge, the one in the middle would appear a shade of gray. So what formerly took 2 pixels to define an edge degrades to 3 upon resampling. The formerly perfectly sharp image then needs to be sharpened after resampling.
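
    Here is a quick numerical illustration of that thought experiment in Python, using plain linear resampling; the half-pixel offset stands in for the usual case where the new pixel grid does not line up with the old one.

    Code:
    import numpy as np
    from scipy.ndimage import map_coordinates

    # A perfectly sharp edge: black pixels then white pixels.
    row = np.array([0., 0., 0., 0., 1., 1., 1., 1.])

    # Resample with the new pixel centers falling half a pixel off the old
    # grid, as generally happens when an image is resized.
    new_centers = np.arange(0.5, 7.5, 1.0)
    resampled = map_coordinates(row, [new_centers], order=1)

    print(resampled)   # [0. 0. 0. 0.5 1. 1. 1.]  the edge now spans 3 pixels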


    Quote Originally Posted by Arthur Morris View Post
    (Aside from capture sharpening, which is a whole other can of worms), the only time that a master file should be sharpened at all is when the eyes or the face are selectively sharpened a small bit.
    If one only sharpens when producing the final image, then one has a lot of extra work to do for each print size one wants to produce.

    Now I'll get to the example. I saved the original bird image in this thread, converted it to a 16-bit TIFF file for processing, then applied multiple runs of Richardson-Lucy image deconvolution. I used a 3x3 Gaussian point spread function with 1500 iterations, and 5x5 Gaussian point spread functions at 50 and 200 iterations, with a noise threshold of 2 standard deviations. That produced 3 output images. I then blended the images together using layers in Photoshop, using the more aggressive results for the most blurry regions (the leaves of the bush) and the least aggressive 3x3 result for the overall image; the more aggressive layers were set to 50% opacity. This is by no means a universal set of parameters, just what I used, and the final image is sharpened more than I would normally do; I just did it for illustration.
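
    In Python terms, that blending step amounts to something like the sketch below, again with scikit-image's richardson_lucy standing in for the software described here, small iteration counts so it finishes quickly, and a simple 50/50 average in place of Photoshop layers and masks; the filename and PSF sizes are placeholders.

    Code:
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import img_as_float, io
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size, sigma):
        # Gaussian blur model: an impulse blurred by a Gaussian, normalized.
        psf = np.zeros((size, size))
        psf[size // 2, size // 2] = 1.0
        psf = gaussian_filter(psf, sigma)
        return psf / psf.sum()

    img = img_as_float(io.imread("bird.tif", as_gray=True))   # placeholder file

    # Three runs of increasing aggressiveness (iteration counts reduced here
    # so the example runs in a reasonable time).
    mild       = richardson_lucy(img, gaussian_psf(3, 1.0), 30)
    medium     = richardson_lucy(img, gaussian_psf(5, 1.5), 30)
    aggressive = richardson_lucy(img, gaussian_psf(5, 1.5), 120)

    # A 50% opacity layer is just an equal-weight average of the two layers.
    overall = 0.5 * mild + 0.5 * medium
    # In practice the aggressive result would be painted in only over the
    # blurriest regions (the leaves) with a layer mask, not the whole frame.
    blurry_regions = 0.5 * overall + 0.5 * aggressive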

    Then I flattened the image, converted it to LAB mode in Photoshop, and did an unsharp mask with radius=0.3, amount=128, threshold=6 on the luminance channel only. Unsharp mask can saturate highlights, losing color; doing the unsharp mask on the luminance channel reduces that problem.
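
    A rough Python equivalent of that luminance-only step is sketched below, with scikit-image handling the LAB conversion and a hand-rolled unsharp mask applied to the L channel; the filename is a placeholder, and the radius and amount only loosely correspond to Photoshop's Unsharp Mask settings.

    Code:
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import img_as_float, io
    from skimage.color import rgb2lab, lab2rgb

    rgb = img_as_float(io.imread("bird.tif"))   # placeholder filename
    lab = rgb2lab(rgb)

    # Sharpen only the lightness channel (L, range 0..100); the a and b
    # chroma channels are left alone, so highlights keep their color.
    L = lab[..., 0]
    L_sharp = L + 1.28 * (L - gaussian_filter(L, sigma=0.3))
    lab[..., 0] = np.clip(L_sharp, 0.0, 100.0)

    sharpened = lab2rgb(lab)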

    Quote Originally Posted by Arthur Morris View Post
    OK, now I can get to the image. I love the bird and the cedar and the BKGR. And love your framing of the cedar bough. The head turn is good but a few degrees short of perfect. And yes, the whole bird could stand a good round of selective sharpening. You paint a QM of the bird, and sharpen only that layer. This avoids sharpening any noise that is present in the background... The beauty of using a QM is the seamless blending--you do not have to worry much about painting exactly between the lines...
    Quote Originally Posted by Arthur Morris View Post
    I sharpened only the bird at 228, .3, 1 and then lightened the whole thing a bit as jpegs tend to get darker when they are re-saved. Hope that you like.
    To pick out one small detail that shows the difference between methods, look at the catchlight in the bird's eye. There are two spots, one larger and a second, smaller one to the left. If you compare the spots in each image (the original, Art's unsharp mask version, and the Richardson-Lucy, RL, result), you'll see that the RL result shows the smallest spot. RL produces finer lines (e.g. hairs) and finer detail, which we perceive as texture. Unsharp mask usually increases the widths of lines, and Photoshop's implementation (which uses some additions to approximate multiplies to make the algorithm run faster) gives what I call a pasty look.

    I have an article about Richardson-Lucy Image deconvolution at:
    http://www.clarkvision.com/imagedeta...e-restoration1

    On my Core 2 Duo 1.8 GHz PC, the 3x3 deconvolution took 45 seconds per 100 iterations on the bird image here, so a large image can take a while to compute.

    Roger
    Last edited by Roger Clark; 08-31-2008 at 04:34 PM.

  2. Ed Erkes thanked for this post
  3. #2
    Tom Charles
    Guest


    This is what I was always led to believe.

    Quote Originally Posted by rnclark View Post
    As you downsize a master file, the image is not over sharpened at all. In fact, due to sampling, it again becomes under sharpened, just as expected by theory. Thus, after downsizing, one must resharpen the image. Downsizing can not over sharpen, unless perhaps you use a pretty poor algorithm.

    An example thought experiment is to consider a perfectly sharp edge, one pixel black, one white. In the resampled image, it is a very low probability that two pixels will fall exactly on each side of the edge to maintain that perfect sharpness. More likely, one pixel will fall on the edge, so it takes 3 pixels to define the edge, the one in the middle would appear a shade of gray. So what formerly took 2 pixels to define an edge degrades to 3 upon resampling. The formerly perfectly sharp image then needs to be sharpened after resampling.


    Roger, I thoroughly enjoyed your explanation of this obviously emotive subject. I have bookmarked the site you mention that discusses the Richardson-Lucy iteration.

    I have only been processing digital images for about 5 years or so, and I've reached a stage where I feel ready to be more 'clinical' in my sharpening and basic post-processing.

    Thanks for your informative and fascinating explanation of the above techniques :)

    Regards,

    Tom
