about the banner…

Water lilies at the Brooklyn Botanical Garden Orchid Show, April 5, 2014. Taken with the Nikon D610 + AF-S Zoom NIKKOR 24-85mm f/3.5-4.5 G ED VR. 1/600 s @ f/5.6 -0.67, ISO 800.

Thursday, September 23, 2010

Full Moon Tonight

Handheld shot of the moon, AF-S NIKKOR 70-300mm f/4.5-5.6 IF-ED.
Taken just moments ago. Attempting this seems to be a good test of any telephoto lens. This was taken with the Nikon AF-S NIKKOR 70-300mm f/4.5-5.6G IF-ED at 1/400 s, f/8, ISO 200, handheld. At the long end, it's an effective 450mm on the D90. Thank goodness for VR, not that shooting at 1/400 sec. is all that challenging. The moon is so bright that you can shoot this fast with a low ISO and a relatively small aperture to capture all the detail.

Saturday, September 18, 2010

Photoshop Giveth, and Photoshop Taketh Away

Black-Eyed Susans, In the Lomographic style
Yes, you can remove lots of ills with Photoshop, but you can also use the same tools in reverse to add those same ills where they didn't originally exist. This photo is an example of that.

Wednesday, September 15, 2010

Macro on the Cheap, Part II

Reversing a wide-angle lens with a reverse mount adapter is a pretty neat and inexpensive way of creating a manual-focus, manual-exposure macro setup without buying more lenses. The flip side is creating an autofocus, auto-exposure macro setup, which is more convenient, albeit at the expense of speed.

Typically this is done by reversing a 50mm lens and attaching it to an existing auto-exposure and/or autofocus lens. As with mounting a lens in reverse directly on the camera body, the wider the angle, the greater the magnification. However, this technique is prone to vignetting, which may increase as the angle gets wider.

What we need to do this is known as a reverse coupler, a simple device with two male filter threads that allows you to attach two lenses front to front. The reversed lens behaves just like a closeup lens that screws onto your filter threads: a multi-element, multi-coated, high-diopter closeup lens.

It works like this: if you add a magnifying (convex) lens in front of your main lens, it shortens the minimum focusing distance, so in order to achieve focus you need to move closer to the subject. When you do, the subject is magnified in the process. Since a normal lens gathers light from the scene and reduces it to fit onto the camera's film frame (or sensor), flipping it around has the opposite effect… it becomes a magnifier!

To figure out the diopter value, divide 1000 by the lens's focal length in millimeters. So, a 50mm lens would have a diopter of 20. Compare that to a stack of three typical closeup lenses, which would give you a total diopter of 8 and degrade image quality with significant chromatic and spherical aberration.
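If you want to play with the numbers, here's a quick back-of-the-envelope sketch in Python (the focal lengths are just examples):

    def diopter(focal_length_mm):
        # Approximate strength of a lens used as a closeup "filter"
        return 1000.0 / focal_length_mm

    print(diopter(50))   # 20.0 -- a reversed 50mm acts like a +20 closeup lens
    print(diopter(28))   # ~35.7 -- wider angle, stronger magnifier
    print(diopter(125))  # 8.0 -- about what a typical +1/+3/+4 closeup filter stack gives you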

Since this technique often produces vignetting, you might want to test your intended "diopter" lens first by "freelensing": hold the reversed lens up to your main lens with the aperture wide open to see if the image circle is reduced significantly. Setting the "diopter" lens to its shortest focus distance will enlarge the image circle, but reduce the working distance. Also, make sure all filters are removed from the facing filter threads to keep the lens-to-lens distance as short as possible.

The Nikon BR-6 Auto Diaphragm Ring helps here as well. Even though you won’t be using the aperture of the “diopter” lens, you can use the BR-6 to set it to its widest position, and add a filter to protect the rear element.


Tuesday, September 14, 2010

Color Management

In the world of professional imaging, known in these high-tech times as premedia, color management is king. And whether you're a consumer, an amateur, or a professional photographer, it's equally important to know your way around this subject in order to achieve the level of quality you expect.

Color management, simply put, is the ability to adapt different capture devices with different color characteristics to different output devices, also with different color characteristics. This is achieved through ICC color profiles: data that tells the device how the color actually looks, and how to make it look the way it should.

It works this way: your camera captures and stores images, which exist in the native color space of the camera, determined by its sensor and imaging engine. But your computer, and the application you plan on using to view or post-process your images, know nothing of these characteristics. So, when you import the images from your camera, the computer doesn't know how to make them look. So, they all have a color cast (let's say blue), and they're all a bit flat (have low contrast). Enter color management.

Most cameras export images to a standard color space, which can be interpreted by the computer. So, the job of the profile is to apply a curve that counteracts the blue cast, and restores the correct level of contrast. Use the correct profile, and the image looks as it should. Use the incorrect profile, and things begin to go horribly wrong. Use no profile, and you take your chances either way.
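If it helps to see that in code, here's a rough sketch using Python and Pillow's ImageCms module; the file name is made up, and the logic simply mirrors the idea above (use the embedded profile if there is one, otherwise assume sRGB):

    from io import BytesIO
    from PIL import Image, ImageCms

    im = Image.open("from_camera.jpg")          # hypothetical file name

    # Use the embedded profile if the camera provided one; otherwise
    # assume sRGB, since that's what most consumer cameras export.
    icc_bytes = im.info.get("icc_profile")
    if icc_bytes:
        src = ImageCms.ImageCmsProfile(BytesIO(icc_bytes))
    else:
        src = ImageCms.createProfile("sRGB")

    # Convert into a known, agreed-upon space so every application
    # downstream interprets the numbers the same way.
    dst = ImageCms.createProfile("sRGB")
    managed = ImageCms.profileToProfile(im, src, dst)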

The Input Color Space
So, which profile should you be using? Naturally, there isn’t a straightforward answer to this question. But for now, let’s just say that practically all modern consumer capture devices (which includes not only cameras but scanners as well) export the images to the sRGB color space. sRGB IEC61966-2.1 to be exact. So, it’s likely that you’re already using this color space. No problem there.

The problem comes when you import the images to your computer, but it doesn't know that they're in the sRGB color space because the images were not tagged as such. I'll pass along a little story to explain this.

A few years ago, I had a color management epidemic, whereby most of the color work was being kicked back by our clients for the same reason: the skintones were far too red, and the shadow detail far too dark. As it turned out, clients were supplying us with digital images that were untagged, meaning information about which color space the images were exported to was not included in the image file. So, when the images were first opened, Photoshop asked that an appropriate color profile be chosen. Naturally, having learned a little about digital color along the way, most of my colleagues chose Adobe RGB (1998), the superior color space.

A little knowledge can be a dangerous thing.

The fact is, Adobe RGB (1998) is a superior color space. Its color gamut (the range of colors it can store) represents both what the eye can see and what can be printed using the CMYK four-color process. Sadly, sRGB falls a little short. But speaking English doesn't help you when others understand only French. If you don't speak the native tongue, you can't interpret the information correctly.

The other, less common half of the epidemic was that color was kicked back for being "flat and dirty": flat, meaning too little contrast, and dirty, meaning that the fleshtones leaned toward cyan, the complement of red. As it turned out, this is exactly what happens when you assign the wrong ICC color profile to an image.

So all this translated to two scenarios:

sRGB images which were not tagged sRGB were being interpreted in the Adobe RGB (1998) color space, forcing them to gain contrast and shift toward red.

Adobe RGB (1998) images (far less common) which were not tagged Adobe RGB (1998) were being interpreted in the sRGB color space, forcing them to lose contrast and shift toward cyan.

No one likes little green people, so even without the knowledge that Adobe RGB (1998) is a superior color space, folks tend to prefer the warmer look of Adobe '98. That is, until it finds its way onto a contract proof.

The biggest problem in color management is the fact that you can easily assign a different profile to an image than the actual color space in which it exists, or not assign one at all, leaving everyone guessing. And if the latter is the case, it takes a very experienced professional to know which is the correct profile to assign. Therefore, unless the image comes from a reliable source, never trust an embedded profile.

The Output Color Space
Incorrectly assigned profiles are easily corrected by simply assigning and embedding the correct profile, because assigning a profile (tagging an image file) is non-destructive (does not permanently affect image pixel values).

However, the real problem happens once the image is converted to another color space, such as the working space or the output space, which makes the interpreted color more or less permanent, or at the very least not easily undoable. So, you don’t want to archive your images in the output space, because it may change depending on the service provider or destination media. If the initial color space is interpreted correctly, you can freely convert from one color space to another, including L*a*b*, RGB’s close cousin.
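As a rough illustration of the difference (Python with Pillow again, file names made up): assigning a profile just embeds a tag next to the untouched pixels, while converting recalculates them.

    from PIL import Image, ImageCms

    im = Image.open("untagged.tif")             # hypothetical untagged sRGB image
    srgb = ImageCms.ImageCmsProfile(ImageCms.createProfile("sRGB"))

    # ASSIGN (tag): pixel values untouched, we just declare what they mean.
    im.save("tagged_srgb.tif", icc_profile=srgb.tobytes())

    # CONVERT: pixel values are recalculated for the destination space.
    # "AdobeRGB1998.icc" stands in for wherever that profile lives on your system.
    adobe = ImageCms.getOpenProfile("AdobeRGB1998.icc")
    converted = ImageCms.profileToProfile(im, srgb, adobe)
    converted.save("converted_adobe.tif", icc_profile=adobe.tobytes())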

As it happens, sRGB is the most common RGB output space, since it's the native color space of computer displays, consumer photo printers, commercial photo printers, and the web, all of which are output destinations.

The other type of output profile would be for commercial printing, which will most likely be a CMYK output profile such as U.S. Web Coated (SWOP) v2.

When an RGB image is converted to CMYK, it becomes a 32-bit image, each of the four channels containing 8 bits of grayscale information. The gray component of the three RGB additive primaries (colors which are neutral and dark) is removed from each of the RGB channels and combined into the black channel. Each of these channels is then inverted to reflect the change from an additive (transmissive) color space to a subtractive (reflective) color space. The specifics of these conversions differ greatly depending on the press profile. Therefore, converting to a CMYK space is not reversible.
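In practice you never do that channel math by hand; the color engine does it from the press profile. Here's a hedged sketch, again with Pillow (the SWOP profile path is an assumption, since press profiles aren't bundled with the library):

    from PIL import Image, ImageCms

    rgb = Image.open("tagged_srgb.tif")
    srgb = ImageCms.createProfile("sRGB")

    # Point this at wherever your system keeps U.S. Web Coated (SWOP) v2.
    swop = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")

    # One-way trip: the engine builds the black plate and inverts the
    # separations according to the press profile.
    cmyk = ImageCms.profileToProfile(rgb, srgb, swop, outputMode="CMYK")
    cmyk.save("for_press.tif")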


The Working Space
This is an intermediate color space that you work in, and the intended purpose of Adobe RGB (1998). The reason to convert to (NOT assign) a working space is to expand the color gamut, allowing enough room for the enhancements you are likely to make during post-processing. For example, in the process of adding saturation, you'll be pushing existing pixels to more saturated colors than can be stored in the sRGB space. Since the working space is the space you are going to archive your images in, you will always have all the information you need for the best possible conversion to the output space, be it now or in the future.

Many new cameras include the ability to store images in the Adobe space, which works equally well as both a capture and working space. This has the advantage of simplifying workflow, since only one conversion to the output space is required.

On the other hand, if you don't intend to do any post-processing, or are planning on printing directly from your camera or memory card, sRGB is probably the best choice. Most commercial photo printers work with this color space exclusively, so if you hand them a card full of Adobe '98 images, they'll come back flat and dirty.

ProPhoto is a color space designed with the professional photographer in mind. Its large gamut is designed to store every conceivable color from the widest variety of capture sources. It's an excellent space in which to process and archive images, but conversion to an output space is an absolute must.

Color Gamut vs. Bit Depth
The other caveat to the working space is bit depth, or the amount of tonal information stored in the original file. A standard "24-bit" RGB image file has 8 bits per channel, for a total of 256 levels of tonality per channel. A 32-bit CMYK image has the same number of levels per channel; it just has an extra channel for black.

But a 16-bit file has 65,536 levels of gray. It is said that a print can only reproduce about 100 levels. The question is, which 100 of the 65,536 will you choose? In other words, if you have a grossly underexposed digital image with 65,536 levels of information, lightening it so those 100 levels fit on a paper print with no degradation of image quality is a snap.
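A few lines of Python make the level counts concrete; the exposure numbers below are purely illustrative.

    import numpy as np

    print(2 ** 8)    # 256 levels per channel in an 8-bit file
    print(2 ** 16)   # 65,536 levels per channel in a 16-bit file

    # A grossly underexposed frame: every tone crammed into the bottom
    # tenth of the range, sampled 1,000 times across the image.
    dark_8  = np.linspace(0, 25,   1000).astype(np.uint8)     # 8-bit capture
    dark_16 = np.linspace(0, 6553, 1000).astype(np.uint16)    # 16-bit capture

    # "Lighten" by stretching those tones across the full range.
    fixed_8  = (dark_8.astype(float)  / 25   * 255).astype(np.uint8)
    fixed_16 = (dark_16.astype(float) / 6553 * 65535).astype(np.uint16)

    print(len(np.unique(fixed_8)))   # only 26 distinct tones survive the stretch
    print(len(np.unique(fixed_16)))  # all 1,000 sampled tones stay distinct -- far more than a print needs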

The downside to a 16-bit working space is that it takes up a lot of space when archiving, so use it for only the most worthwhile images. Also, many Photoshop filters will not work with 16-bit images.

Summary
That’s a lot to digest, so here’s a quick summary for reference:

Use the sRGB color space when:
Capturing images that are to be printed directly from the camera or memory card.

Converting images for use on the web, or being handed off to a commercial photo printer.

Use the Adobe RGB (1998) color space when:
Capturing images that will be post-processed and archived with layers intact.

Converting images from sRGB that will be post-processed and archived with layers intact.

Use the ProPhoto RGB color space when:
Converting images from any capture color space that will be post-processed and archived with layers intact.

Use 16-bit color when:
Converting 12- or 14-bit Camera Raw images that will be heavily post-processed, such as with extreme exposure adjustments.

Working with high-quality grayscale images.

Converting images from any color space that will have gradients added to them in Photoshop.



Wednesday, September 8, 2010

Holga, The Lomography “System” Camera

Here’s my growing Holga outfit so far...
Clockwise from center: Holga 120FN, 6x4.5cm mask, 6x6cm mask, 35mm mask,
35mm back, filter adapter, cable release adapter, 46-52mm step-up ring
Being the camera geek that I am, I couldn’t resist putting together a system for the Holga, just as I did for the Nikon Coolpix P5100. With this setup, I can shoot two different formats of 120 film, and two different formats of 35mm film.

Monday, September 6, 2010