
Questions for Andrew Rodney

rutt Registered Users Posts: 6,511 Major grins
edited May 3, 2009 in Finishing School
Andrew, I don't think you are being fair either to yourself or to Dan. Let me see if I can summarize both your ideas and his fairly. (Please feel free to correct or fine tune. I'm sure I understand Dan's philosophy better than yours, but I'm eager to learn both.)

Your philosophy:
Be a good photographer. Expose carefully. Give yourself the best possible starting point for post processing. Then use the best, most recent technology to help you use your eyes to perfect the image.

Dan's philosophy:
The human visual system captures in a very different way from the camera and adjusts in complex and different ways to viewing images in different media, for example, on the screen vs in print. What people remember seeing is often quite different than what the camera captures. What people see on even the best monitor with the best possible calibration may be quite different than what they see on a print, even one which "matches" the monitor perfectly. In particular, people adjust for casts best in real situations, second best on monitors, and least of all in print. Often it requires comparison with a color corrected version before people even notice the cast on their monitors.

Given all of this, the job of post processing is to make the image match the memory of the scene, express what we would have remembered seeing if we had been there. In pursuit of this goal, we'd like to have the largest possible tool box and best possible understanding of the actual theory behind the various techniques. The better this understanding, the better we'll be at applying the tools, and even improvising new ones as needed.

Just by the way I stated it, you can see that I have spent a lot more time thinking and writing about Dan's ideas than about yours. I'm eager for a better understanding of your ideas; I have your book and have been reading it; but I haven't yet found (or come to understand) your ideas at this level. So help me out.

But let's climb up and look down on this discussion from an even higher level. These are two radically different ways of thinking. I don't think anyone could deny that. And, I contend, they result in quite different looking images, at least in theory. Dan's approach would lead one to take more liberties with the image in order to make it seem more natural. Your approach will respect the original more.

I'd like to compare this to art traditions and theory. Impressionism was a very different theory of color and image sharpness than the classical school it supplanted. And the results were indeed very different. But I love both Goya and Monet. Cartier-Bresson and Ansel Adams had very different ideas of equipment, shooting, and the darkroom, but I love them both. In a way, what matters is not what they thought, but that they thought and devised coherent sets of ideas which in turn resulted in unique styles.

In short, it's more important to develop your own ideas and improve their internal integrity than to show that they are superior to others. We are not really operating in an area where it's clear that "better" has much meaning. Different people will find their own favorite way of working, of thinking about what they are doing, and finally their own style. And the variety of results and ideas is what makes it all so fascinating. Imagine a world where there was only Goya and no Monet!
arodney wrote:
I have the book! I stand by my points, especially when dealing initially with raw data. 90%+ of all such corrections can be accomplished faster, with better quality, from the raw converter (assuming a good converter like ACR, LR or Raw Developer, the latter of which provides Lab-like controls over the raw rendering).

Much of the Lab like work can be done in RGB using Luminosity blend modes without spending the time to convert while throwing away a good deal of data (If you must, at least do it on 16-bit files).
If not now, when?

Comments

  • arodney Registered Users Posts: 2,005 Major grins
    edited July 3, 2007
    rutt wrote:
    Dan's philosophy:
    The human visual system captures in a very different way from the camera and adjusts in complex and different ways to viewing images in different media, for example, on the screen vs in print. What people remember seeing is often quite different than what the camera captures. What people see on even the best monitor with the best possible calibration may be quite different than what they see on a print, even one which "matches" the monitor perfectly. In particular, people adjust for casts best in real situations, second best on monitors, and least of all in print. Often it requires comparison with a color corrected version before people even notice the cast on their monitors.

    I've never heard Dan say this and in fact, it's more my philosophy in that the digital camera and our visual system ARE very different. This goes back to the idea of scene referred versus output referred colorimetry, on which I co-authored this paper for the ICC:

    http://www.color.org/ICC_white_paper_20_Digital_photography_color_management_basics.pdf

    There's only one accurate color: the measured color of the scene. That's scene referred, and it doesn't look at all nice on an output referred device (a display or a print). The job of the raw converter and user is to make output referred images that express what they want to represent on some output device. This isn't image correction. It's image creation, just as the color negative wasn't a reference to the photographed scene; someone had to print it, using some set of color filters to produce the desired color appearance and deal with the orange mask.

    Given all of this, the job of post processing is to make the image match the memory of the scene, express what we would have remembered seeing if we had been there. In pursuit of this goal, we'd like to have the largest possible tool box and best possible understanding of the actual theory behind the various techniques.

    Did Dan say that? Because I have lots of posts of his where he dismisses the use of a wide gamut working space like ProPhoto RGB, which is absolutely necessary if you want that larger toolbox. Same with working with high bit data: for years Dan has dismissed the use of 16-bit files in Photoshop and challenged (unfairly, see URL below) whether this has any advantage for our bigger toolbox, when the math is undeniable.

    This is useful to review in the context of the so-called 16-bit challenge:

    http://www.brucelindbloom.com/DanMargulis.html
    Dan's approach would lead one to take more liberties with the image in order to bring make it seem more natural. Your approach will respect the original more.

    Not at all, just the opposite. In fact, if you subscribe to his list, you'll recall an exercise submitted by a photographer who had issues with an image, captured as a JPEG, of a night scene and buildings. Dan discussed setting the cement to be BTN (his term for By The Numbers) neutral. But it made the image look ridiculously poor, IMHO and in the opinion of others, because the scene was supposed to have a color cast; it was shot at night! My take would be: use whatever rendering controls are available in your raw converter to produce the color and tone you prefer and wish to express about the image. Let's not forget raw is grayscale data; you have to build the color (tone). Even if you have a converter that would provide scene referred rendering, it would look pretty awful on screen (that's output referred). You have to apply toning here at the very least, probably some saturation boost and WB.
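    The "by the numbers" neutralization under discussion boils down to simple channel arithmetic. A minimal sketch, assuming a patch that is supposed to be neutral (the function names are invented for illustration, not Dan's actual recipe):

```python
# Hypothetical sketch of a "by the numbers" gray balance:
# scale R and B so a patch that *should* be neutral reads R = G = B.
# Whether the cement in a night scene *should* be neutral is exactly
# the point under debate here.

def gray_balance_gains(patch_rgb):
    """Given the average (R, G, B) of a supposedly neutral patch,
    return per-channel gains that make it neutral (green is the anchor)."""
    r, g, b = patch_rgb
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Apply the gains to one pixel, clipping to the 8-bit range."""
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

# A warm-cast patch (sodium-vapor street light, say):
gains = gray_balance_gains((180, 150, 110))
print(apply_gains((180, 150, 110), gains))  # the patch itself becomes neutral
```

    Applied to the whole image, the same gains neutralize the cast everywhere, which is precisely why a deliberately warm night scene ends up looking wrong.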

    Dan's also of the impression that JPEG is the way to go rather than messing with raw, or else to lock down the raw converter to give you flat data and correct in Photoshop. This is time consuming, not at all good for the quality of the data, and dismisses a lot of the power in metadata editing of raw, linear data. But he really doesn't (yet) understand raw workflows (one file I submitted to the list as a DNG was rejected by Dan because it's not the raw file according to him; I have the posts where he says this). But let me leave you with his quote about ACR and JPEG, since we really disagree here on both the toolset and the file formats one should be editing initially:
    On 2/5/07 5:52 PM, "DMargulis@aol.com" wrote:
    Lee Varis writes,

    << Given the limitations of the interface (and please, this is obvious
    regardless of features like Vibrance) Dan has suggested taking a
    conservative approach to the adjustments you apply in ACR. ( Andrew,
    before you jump on this, please READ Dan's book) This makes perfect
    sense in a "one image at a time" workflow. However, for many
    photographers this is not practical when dealing with a large volume
    of images that have to be delivered to a client. Fortunately, if
    you've done your homework and tested your camera to optimize your ACR
    settings (at least visually) to your shooting style, you shouldn't
    have to do that much in ACR to get a reasonably good image. >>

    Quite right. I write only to point out that the book clearly states that it is, as you say, only discussing "one image at a time" workflows. Camera Raw has nice features for processing batches of images but they are beyond the scope of what I write about.

    Similarly, we all have occasions when we are unwilling to spend time on
    images even when we know we might get better quality if we took a few minutes more. In such cases, of course I have no issue with someone who tries to get a fast result in Camera Raw and call it quits there.

    It does, however, beg the question: if saving time is so important that
    quality compromises need to be made, why is the raw format being used at all? With rare image-specific exceptions, essentially anybody who is not a beginner will get better final results by shooting JPEG and correcting in Photoshop than an expert can who shoots raw but is not allowed to do any manipulation outside of the acquisition module. And in less time, too.** The idea of a raw module is to *empower* the image-manipulation program, not replace it.

    Dan Margulis

    **This is silly, I've challenged him to do this at PhotoPlus or Photoshop world, he's ignored it.

    We fundamentally disagree on the toolbox itself.
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
  • rutt Registered Users Posts: 6,511 Major grins
    edited July 3, 2007
    I liked the beginning of your post more than the end. You put yourself in the best light when you express your ideas positively instead of explaining why Dan is wrong. Convince me that you are right instead. That's how Dan hooked me in the first place. I had problems, went in search of solutions, and found that Dan's books contained ideas that worked to solve them.
    Over the years, I found many solutions to my problems in his writings.

    My least favorite part of PP5E is those middle chapters which dwell on how Dan was right and the other guys were wrong about 8 vs 16 bits, color management, color spaces, etc. It just doesn't matter to me who is right and who is wrong. As I said, I'm pretty sure there isn't any such thing in this domain.

    So please let's avoid that trap, take the high road, and explain what you think and show how well it works. That will keep my attention.

    A good place to start would be my statement of your philosophy. I was really hoping you'd flesh it out and come up with something that more accurately represents what you think. I'm very familiar with Dan's work, but less with yours. Help me understand.
    If not now, when?
  • Duffy Pratt Registered Users Posts: 260 Major grins
    edited July 3, 2007
    The first URL in Andrew's post is broken.

    Duffy
  • patch29 Registered Users, Retired Mod Posts: 2,928 Major grins
    edited July 3, 2007
    Duffy Pratt wrote:
    The first URL in Andrew's post is broken.

    I think the link can be found on this page.

    Look for

    Digital photography color management basics
  • nikos Registered Users Posts: 216 Major grins
    edited July 3, 2007
    arodney wrote:
    Much of the Lab like work can be done in RGB using Luminosity blend modes without spending the time to convert while throwing away a good deal of data (If you must, at least do it on 16-bit files).

    If I remember correctly, Dan Margulis tested out a file and converted it from RGB to Lab to RGB etc. numerous times with no or inconsequential signs of degradation. This was in his Lab book.

    You're stating quite the contrary by saying that "a good deal of data" is being tossed. Have you experienced this first hand or are you making an assumption?

    Thanks,
    Nikos


  • arodney Registered Users Posts: 2,005 Major grins
    edited July 3, 2007
    nikos wrote:
    If I remember correctly, Dan Margulis tested out a file and converted it from RGB to Lab to RGB etc. numerous times with no or inconsequential signs of degradation. This was in his Lab book.

    Well, the test is far from conclusive and the damage isn't always inconsequential. First off, any data loss in an image adds up to the point where, depending on the image and output device, the result is banding. When will this happen? We don't know. We do know that these kinds of edits (all edits, in fact) amount to data loss. So doing a few rounds of back and forth and printing a single image ink on paper at a coarse linescreen doesn't tell us a lot. What about a much finer, contone device? What about after applying more edits? If you do this in high bit, something Dan dismisses, it's not an issue. You have far more data than you need, at least until more device drivers accept and use more than 8 bits per channel (some today do; for example, any Epson that's driven by the ImagePrint RIP).

    You're stating quite the contrary by saying that "a good deal of data" is being tossed. Have you experienced this first hand or are you making an assumption?

    Good deal? That's not something I can define for you specifically, but I can give you exact numbers and you can decide if that's a good deal or not (and just for this one set of edits). For me, it's a good deal percentage-wise, especially in ProPhoto RGB, the space I use (and always in high bit).

    It depends on the working space. Going from Adobe RGB, which has 256 levels available per channel, converting to 8-bit Lab reduces the data to 234 levels. The net result is a loss of 22 levels. Doing the same conversion from ProPhoto RGB reduces the data to only 225 levels, producing a loss of 31 levels.
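    Those figures come from full three-channel conversions (see Bruce Lindbloom's pages); a toy model of just the neutral axis shows the same mechanism. A sketch, assuming a gamma 2.2 working space and the standard CIE L* formulas, so the exact surviving count will differ from the numbers above:

```python
# Toy model of the level loss described above: push an 8-bit gray ramp
# through a Lab-style L* encoding and back, quantizing at each step.
# This follows only the neutral axis of a gamma-2.2 working space, so the
# exact count differs from Bruce Lindbloom's full 3-channel figures, but
# the mechanism -- levels merging under double quantization -- is the same.

def f(t, eps=216 / 24389, kappa=24389 / 27):   # CIE L* forward function
    return t ** (1 / 3) if t > eps else (kappa * t + 16) / 116

def f_inv(ft, eps=216 / 24389, kappa=24389 / 27):
    t3 = ft ** 3
    return t3 if t3 > eps else (116 * ft - 16) / kappa

def to_L_code(g):       # 8-bit gray -> 8-bit L* code (0..255), gamma 2.2 space
    Y = (g / 255) ** 2.2
    L = 116 * f(Y) - 16
    return round(L * 255 / 100)

def from_L_code(code):  # 8-bit L* code -> 8-bit gray
    L = code * 100 / 255
    Y = f_inv((L + 16) / 116)
    return round(255 * Y ** (1 / 2.2))

survivors = {from_L_code(to_L_code(g)) for g in range(256)}
print(len(survivors))   # fewer than 256: some levels merged and are gone
```

    The merging happens wherever the encoding's slope drops below 1, which for this curve is in the highlights; doing the same round trip on 16-bit codes leaves all 256 original levels recoverable.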

    Will you see this, and when? Hard to say. The math is undeniable. Say you start out with an 8-bit file with smooth gradients, like a sky or the bumper on a car. At what point will banding result on your output device? It might not, but it does happen, and it's something you don't have to worry about if you simply work in high bit:

    http://www.digitalphotopro.com/articles/2007/janfeb/bitdepth.php
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
  • arodney Registered Users Posts: 2,005 Major grins
    edited July 3, 2007
    patch29 wrote:
    I think the link can be found on this page.

    Look for

    Digital photography color management basics

    Direct link:

    http://www.color.org/documents/ICC_white_paper_20_Digital_photography_color_management_basics.pdf
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
  • Duffy Pratt Registered Users Posts: 260 Major grins
    edited July 3, 2007
    Thanks for posting a working link.

    I'm a bit confused by the jargon, or even the need for jargon here. I think I have something of a handle on "output referred." That seems to mean what the image ultimately looks like when displayed by some output device.

    But I don't understand "scene referred" at all. Does it mean what the photographer saw when shooting? Or does it mean what a colorimeter, or some other machine, would record at the scene when shooting? Or is it something else? If it's either of the first two, then how could you ever show what was "scene referred" to anyone? The only way to display something that was scene referred, I think, is on some monitor or in some print. And once you do that, the image has been rendered and is no longer scene referred. If I'm right about this, wouldn't the idea of the scene-referred image simply drop out as irrelevant?

    Or am I missing something here?

    Duffy
  • arodney Registered Users Posts: 2,005 Major grins
    edited July 3, 2007
    Duffy Pratt wrote:
    Thanks for posting a working link.

    I'm a bit confused by the jargon, or even the need for jargon here. I think I have something of a handle on "output referred." That seems to mean what the image ultimately looks like when displayed by some output device.

    Exactly.
    But I don't understand "scene referred" at all. Does it mean what the photographer saw when shooting?

    It's what the capture device recorded. So if you went into the field, measured the illuminant and some colors in the scene, calculated the dynamic range, and could record as much of that as possible with the current technology, what the device 'saw' and captured would be scene referred.

    Some raw converters allow you to get to this data. The linear encoded data, when not output referred, looks pretty ugly (there's an example in the PDF). Flat, dark; it needs a saturation boost, a tone curve, etc. This is what the in-camera processing does (based on a matrix or look setting) when it creates the JPEG. You have no control, other than the look settings, over the scene to output referred conversions.
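    A quick illustration of why linear data looks dark, assuming a simple 2.2 gamma for the display encoding (real converters use more elaborate tone curves):

```python
# Why linear (scene-referred) data looks dark on a display: displays
# expect gamma-encoded values. An 18% mid-gray sent straight to an
# 8-bit display lands at code 46 -- deep shadow -- while the same tone
# gamma-encoded at 1/2.2 lands near code 117, a believable middle gray.

linear_mid_gray = 0.18                                    # 18% scene reflectance

raw_code = round(linear_mid_gray * 255)                   # no tone curve applied
encoded_code = round(linear_mid_gray ** (1 / 2.2) * 255)  # simple 2.2 gamma

print(raw_code, encoded_code)
```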

    The only way to display something that was scene referred, I think, is on some monitor or in some print. And once you do that, the image has been rendered and is no longer scene referred.

    You could send the scene referred numbers to a display or print but it would look pretty ugly. But this is accurate color! When photographers say they want accurate color, they don't really know what they are asking for. Accurate can only really be defined by the measured color of the scene. This is scene referred. What we really want is pleasing color, color that appears as we wish to express what we think (or remember if you buy that) of the scene. That's output referred.

    So when people talk about accurate color, or working by the numbers, you have to ask them just what they really want. If you're doing copy work of fine art in the studio, you control the dynamic range, the color of the lighting, etc. You DO want scene referred color. It might be exactly right for reproduction of the artwork, with perhaps minimal tweaking for output referred. But for just about anything else, the results are not pleasing color, not color anyone would say appeared as they or you saw it at the scene. Look at the huge dynamic range differences between scene and print. At noon, full sun, you might have a 10,000:1 contrast ratio. On a print, even a fine art ink jet, you might be lucky to get a 450:1 ratio. Got to squeeze one into the other. That's what rendering is all about in a raw converter. That and more.
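    A minimal sketch of that squeeze, assuming a plain log-domain remapping (real raw converters use far more sophisticated, often adaptive, tone curves):

```python
# Remap a 10,000:1 scene luminance range into a 450:1 print range by
# compressing in the log domain. This only shows the scale of the
# compression involved, not how any particular converter does it.
import math

def squeeze(L, scene_min=0.01, scene_max=100.0,   # 10,000:1 scene
            print_ratio=450.0):                   # 450:1 print
    """Log-linearly map scene luminance into the print's range (1..450)."""
    t = math.log(L / scene_min) / math.log(scene_max / scene_min)
    return print_ratio ** t                       # 1.0 at black, 450.0 at white

print(squeeze(0.01), squeeze(100.0))   # print black and print white
```

    Every stop of scene range gets less than half a stop on the print, which is why the rendering stage has so much latitude in deciding which tones to favor.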
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
  • Duffy Pratt Registered Users Posts: 260 Major grins
    edited July 3, 2007
    arodney wrote:
    You could send the scene referred numbers to a display or print but it would look pretty ugly. But this is accurate color! When photographers say they want accurate color, they don't really know what they are asking for. Accurate can only really be defined by the measured color of the scene. This is scene referred. What we really want is pleasing color, color that appears as we wish to express what we think (or remember if you buy that) of the scene. That's output referred.

    If a process yields colors which so obviously look wrong, I think it's not a good idea to insist on calling those colors "accurate." There may not be any purely scientific way to prove that some other outcome is "accurate" or not. In that case, what we are left with is a) the client rules or b) the majority rules. I'm not a scientist and I can live with that flexibility. It makes a lot more sense to me than taking some machine-produced vision that is obviously wrong and declaring it "accurate" for some reason.

    Basically, if what machines see, without further massaging, is both ugly and looks very different from what the scene looked like to the photographer, then I see even less reason to call the machine's rendition "accurate" than I would the photographer's unreliable memory. But whatever you call it, as a practical matter, I'm not very interested in it. The intellectual side of me thinks it's a pretty cool curiosity, but I fail to see how any of this knowledge is going to help me make better pictures. (And that may well be my failing.)

    Duffy
  • rutt Registered Users Posts: 6,511 Major grins
    edited July 3, 2007
    Here is a different way to think about this. Van Gogh's "output device" compares pretty poorly to a good modern monitor in terms of gamut and dynamic range. Yet we don't feel that when we look at The Bridge at Arles. Walk into the Venice room of the Turner exhibit at the Tate and you can almost feel the warmth of the sun. Yet he also worked with a very limited dynamic range and gamut.

    Go to a museum, buy a print of one of these paintings, take it back into the gallery, and hold it up next to the original. Not so good.

    Something very subtle is going on. Somehow, these guys manage to trick your visual system into seeing something very different than what is there on the canvas. And it's very fragile. Change the colors just a little bit and the effect is ruined. Reproduce at a different size and the effect is diminished.

    Another, more practical question, Andrew. Here is an experience I've had dozens of times here on dgrin. Someone posts a shot for critique. Looks pretty good on my calibrated monitor. But just to check I download it and discover that the squirrel is blue, the horse is purple, the face is magenta, the dog is pink, etc. You know what I mean. I use curves of some sort to push the colors away from the impossible and toward the possible. And presto-chango, it looks 100% better; the photographer thinks so, all the other viewers think so. But nobody saw it from looking at the original. If you like, I'll dig up some of these dgrin threads. There are a lot of them.

    I know I've told this story in Margulis-inspired language. Let's not waste words saying what was wrong about this. Instead tell me what happens in your workflow. What detects the possibility of this kind of improvement?
    If not now, when?
  • arodney Registered Users Posts: 2,005 Major grins
    edited July 3, 2007
    Duffy Pratt wrote:
    If a process yields colors which so obviously look wrong, I think it's not a good idea to insist on calling those colors "accurate."

    They are not wrong; they are simply not optimized for the current viewing media. Is a color neg wrong? It doesn't look like the scene.

    If you shoot both Velvia and Ektachrome of the exact same scene, do they look identical? No. Which is accurate? Actually neither.

    I'd prefer to call it preferred or pleasing. It's really difficult to put a delta on accurate unless you use some way to measure the color and provide some numeric way of defining how close those colors are. Of course, if you want to use accurate, fine, but I think it dilutes its meaning; it makes talking about such conditions ambiguous and, in the end, is always up to interpretation.

    When you measure something, using some kind of reference grade measuring device, it's pretty easy to express whether what you're measuring is accurate or not. Now if I were building a deck, I'd use a measuring tape instead of my foot, even though my foot is pretty close to 12 inches. I don't need a ruler that's accurate to 1/10000 of an inch, although measuring such distances can be done. But saying that using your foot provides an accurate way to build the deck is a bit of a stretch.
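    For what that numeric definition typically looks like: color difference is usually expressed as a delta-E between measured Lab values. A sketch using the simple CIE76 formula (modern practice prefers CIEDE2000, but the idea is the same; the sample values here are invented):

```python
# "Putting a delta on accurate": the instrument-based approach is a
# color-difference metric over measured Lab values. This is CIE76
# delta-E, a Euclidean distance in Lab space; below roughly delta-E 1,
# two colors are hard to tell apart.
import math

def delta_e76(lab1, lab2):
    """Euclidean distance between two CIE Lab measurements."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

measured = (52.0, 4.0, -3.0)   # what the print actually measures
target = (50.0, 5.0, -1.0)     # what the file says it should be

print(round(delta_e76(measured, target), 2))  # 3.0: a visible mismatch
```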
    There may not be any purely scientific way to prove that some other outcome is "accurate" or not.

    Yet there are ways of doing this.
    In that case, what we are left with is a) the client rules or b) the majority rules. I'm not a scientist and I can live with that flexibility. It makes a lot more sense to me than taking some machine-produced vision that is obviously wrong and declaring it "accurate" for some reason.

    Again, this boils down to semantics. But others do want accurate color, or they need a way to define what that means, and we have the instruments and processes to do this. So I don't see why we should dilute the term. Pleasing color, color the client loves, color you love: that's all fine.
    Basically, if what machines see, without further massaging, is both ugly and looks very different from what the scene looked like to the photographer, then I see even less reason to call the machine's rendition "accurate" than I would the photographer's unreliable memory.

    One we can measure, one we can't. One is based on empirical data, the other isn't. That doesn't make one right for you, but if you want to really know, based on some unit of measure, whether two colors match (without getting into human perception and the effect of optical illusions), then it's useful to define accurate based on the measurements of an instrument.

    Try looking at this optical illusion. The two patches are the same, they measure the same, they don't look the same. Are they accurate based on what they really are or how you see them?

    http://web.mit.edu/persci/people/adelson/checkershadow_illusion.html
    But whatever you call it, as a practical matter, I'm not very interested in it. The intellectual side of me thinks it's a pretty cool curiosity, but I fail to see how any of this knowledge is going to help me make better pictures. (And that may well be my failing.)

    If you understand the process and the limitations of the capture device, if you understand what a raw processor is actually doing, and you don't use incorrect terms to ask for something (like accurate color), you have a better chance of getting what you wish. Yes, it's a very practical matter.
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
  • Duffy Pratt Registered Users Posts: 260 Major grins
    edited July 3, 2007
    Wouldn't it be fairer to say that with 16-bit editing it will take more edits, and more severe edits, before banding becomes an issue? The mathematical principles are the same whether you are dealing with 8-bit or 16-bit. At some as-yet-unknown point, the manipulation of the data will lead to visible banding.

    Right now, there are some who bemoan banding in the manipulation of 8 bit images (though I have never encountered any in the work I've been doing). I can readily imagine that when 32 bit becomes the norm, a bunch of people will say that you should do all your editing there, "just to be safe," because you never know when the banding will become visible, or when it will become an issue with the newest, latest, coolest output devices.

    The people who advocate using higher bit processing are definitely correct on their theory. The interesting thing is that that theory has so little real world practical application. Someday it might. I've tried the repeated conversion from RGB to LAB and back, making edits in between. I've done it on several pictures and have not been able to see any meaningful differences between the two. (By that, I mean that I haven't been able to find any areas where I could definitely say that one was superior to the other.) As a result, I typically don't hesitate to make a conversion to LAB and back.

    I also sometimes use Fade/Luminance to save time. I feel confident in doing this as an alternative, because I know the types of images where this move will not be the same as moving to LAB, and thus know when moving to LAB will make an actual difference. (This happens where extremely light areas of color will get halos that are completely blown. That will give a light colored object with a pure white halo when using Fade, but will give colored halos when taking the trip to LAB and back.)

    As long as I'm saving the original files, I'm not too concerned with intermediate data for its own sake. If losing data makes the final output better, then I'm all for chucking the data. If losing data will save time and give the same output, then fine. The only time I would really be concerned with throwing away data through edits is when it makes the final product I'm aiming for look worse.

    And I'm not concerned with other possible final products that I might make in the future. I'm still learning this stuff and the technology is improving all the time. If, in a couple of years, I want to make a big print of something I took this year, I will start from scratch on the original and use the best I know how to get the result. I suspect that I will be able to do better in two years than I could do now. And since I'm improving my skills, and the number of old pictures that I'm interested in tends to dwindle over time, I am not all that concerned with saving the "how" of my current work.

    Duffy
  • arodney Registered Users Posts: 2,005 Major grins
    edited July 3, 2007
    Wouldn't it be fairer to say that with 16-bit editing it will take more edits, and more severe edits, before banding becomes an issue? The mathematical principles are the same whether you are dealing with 8-bit or 16-bit. At some as-yet-unknown point, the manipulation of the data will lead to visible banding.

    Absolutely true. But let's look at the huge difference. An 8-bit file has 256 levels per color. A 12-bit file has 4096. Most cameras provide at least 12 bits of data. It's providing this data from the get-go. Why throw it away? Sending the best 256 values from 4096 is all we need at the very least.
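    The arithmetic can be sketched directly. A hypothetical darken-then-brighten edit run once with 8-bit intermediates and once with 16-bit intermediates (x * 257 is one common way to map 8-bit codes to 16-bit):

```python
# Halve and then double a gradient -- a stand-in for any
# darken-then-brighten edit -- once in 8-bit and once with 16-bit
# intermediates. The 8-bit path posterizes to 128 levels; the
# high-bit path hands back all 256.

ramp = list(range(256))                      # a smooth 8-bit gradient

# 8-bit path: the halved values are quantized, so doubling can only
# reproduce even codes -- half the levels are gone for good.
edited_8 = [min(255, (x // 2) * 2) for x in ramp]

# High-bit path: same edit on 16-bit codes, rounded back to 8-bit
# only at the end -- every level survives the round trip.
edited_16 = [(((x * 257) // 2) * 2 + 128) // 257 for x in ramp]

print(len(set(edited_8)), len(set(edited_16)))  # 128 vs 256
```

    This is the banding mechanism in miniature: the gaps a real workflow opens up are smaller per edit, but they accumulate the same way.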
    I can readily imagine that when 32 bit becomes the norm, a bunch of people will say that you should do all your editing there, "just to be safe," because you never know when the banding will become visible, or when it will become an issue with the newest, latest, coolest output devices.

    We'll need that for true HDR capture, which will be amazing. PS supports it. The Adobe engineers don't put this stuff in for no reason. Look at all the increased high bit capabilities starting in Photoshop 5.
    The people who advocate using higher bit processing are definitely correct on their theory. The interesting thing is that that theory has so little real world practical application.

    Says who? How many high bit captures have been made, and images edited in that fashion, since PS5?
    I've tried the repeated conversion from RGB to LAB and back, making edits in between. I've done it on several pictures and have not been able to see any meaningful differences between the two.

    I don't know what meaningful means to you (I know what it means to me), so the statement doesn't wash; it kind of sounds like Dan.

    We also don't know what future output device or edits will be used.

    I'm not suggesting you don't edit the numbers because there is resulting data loss. The very reason we use applications like Photoshop is to alter the numbers to provide better color appearance at the price of data loss. Of course, if you do all this at the raw rendering stage, there truly is no data loss! But nonetheless, once in Photoshop in high bit, no worries. If you want to lose 31 of your 4096 levels by going into and out of LAB, so be it. If you want to do this on 8-bit, I'd have to wonder when, and why, you tossed all the other data.
    I also sometimes use Fade/Luminance to save time.

    And bits. But really, with high-bit data, the data your capture device is providing anyway, no worries. But it is useful to know the benefits of Luminosity blend modes.
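    Photoshop's exact Luminosity blend math isn't published in this simple form, so the following is only a rough stand-in: it shifts the bottom layer's channels so their Rec.601 luma matches the top layer's, preserving the bottom layer's color relationships:

```python
# Approximate Luminosity blend: keep the bottom pixel's color, adopt the
# top pixel's luma. Rec.601 weights are used as a simple luma estimate.
def luma(px):
    r, g, b = px
    return 0.299 * r + 0.587 * g + 0.114 * b

def luminosity_blend(bottom, top):
    shift = luma(top) - luma(bottom)  # how far the luma must move
    return tuple(min(255.0, max(0.0, c + shift)) for c in bottom)

# A neutral gray top layer relights the colored bottom pixel without
# changing its channel-to-channel (color) differences:
result = luminosity_blend((100, 150, 200), (180, 180, 180))
```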
    I'm still learning this stuff and the technology is improving all the time.

    Which is why I question Dan's negativity towards both high-bit data and wide-gamut spaces, both of which ensure the data you started with can be used as technology improves. Go back, say, 3-4 years: the prevailing logic was that Adobe RGB (1998) would be fine as an encoding color space given the output technology and gamut of the day. Today, we have both Canon and Epson inks that exceed the Adobe RGB gamut.
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
    pathfinderpathfinder Super Moderators Posts: 14,696 moderator
    edited July 3, 2007
    I am late to the party, and underdressed. (rutt, please notice the lack of a smiley here, out of respect for you.)

    I was initially not very pleased with my digital images, until I gave up the idea that they were "captured by the camera". I gradually came to accept that the images we see published are not "as captured by the camera", despite many statements by photographers that they have not been manipulated, that they were just as captured. This is like folks who drop off a Kodachrome to be printed in a book and say "do not manipulate it". It is impossible to match the color and the brightness ratios of a Kodachrome slide on a printed (CMYK) page. Someone has to do the pre-press prepping to get the slide's lighting ratios to fit onto the page, just as the engineers at Kodak created the curves that allow Kodachrome to capture an illusion of the lighting ratios of reality on film itself.

    I KNOW that the colors and the lighting ratios in my images captured by my digital camera NEED to be altered when post processed by me as I edit them for printing or posting on the web. My images are creations of my mind, written into the digitally captured image data caught by my camera. I do not want to make this sound like I am suggesting that my images are Photoshop artistic drawings - they are photographs, not drawings or renderings - that have been altered to more closely match what I saw in my mind's eye when I pressed the shutter.

    I do not know about "accurate " color - I do know how I think my images should look in order to be more pleasing to the eye of the viewer.

    I do confess to using the RGB numbers to evaluate how neutrals should be represented. But I also feel free to ignore this data if I so choose. Usually I find it helpful to understand what the pixel data is saying about colors.

    I love Adelson's Illusion - is that the right word?? Are the colors "accurate" here? What does the word 'accurate' really mean in this context? Colorimeter-accurate, or "looks" accurate?

    Interesting discussion - I look forward to learning more as it progresses.
    Pathfinder - www.pathfinder.smugmug.com

    Moderator of the Technique Forum and Finishing School on Dgrin
    ruttrutt Registered Users Posts: 6,511 Major grins
    edited July 4, 2007
    Andrew, I don't think you addressed this question, perhaps because I buried it beneath a diffuse and not very practical observation.

    But for me, this is a key question. It's maybe the most important thing I have learned from Dan. Do you do the same thing? Something else?
    rutt wrote:
    Another more practical question, Andrew. Here is an experience I've had dozens of times here on dgrin. Someone posts a shot for critique. Looks pretty good on my calibrated monitor. But just to check I download and discover that the squirrel is blue, the horse is purple, the face is magenta, the dog is pink, &etc. You know what I mean. I use curves of some sort to push the colors away from the impossible and toward the possible. And presto-chango, it looks 100% better, the photographer thinks so, all the other viewers think so. But nobody saw it from looking at the original. If you like I'll dig up some of these dgrin threads. There are a lot of them.

    I know I've told this story in Margulis-inspired language. Let's not waste words saying what was wrong about this. Instead tell me what happens in your workflow. What detects the possibility of this kind of improvement?
    If not now, when?
    arodneyarodney Registered Users Posts: 2,005 Major grins
    edited July 4, 2007
    pathfinder wrote:
    I do confess to using the RGB numbers to evaluate how neutrals should be represented. But I also feel free to ignore this data if I so choose. Usually I find it helpful to understand what the pixel data is saying about colors.

    Neutrality defined numerically, along with values for highlight and shadow (and saturation clipping on either end), are about the most important, some would say the only useful, numbers you need to memorize.

    Note that not all RGB color spaces behave this way! RGB color spaces from capture devices (scanners) and printers do not always define a neutral where R=G=B. All RGB working spaces, which are mathematically constructed, do; we call these 'well behaved' color spaces. For fun, make a neutral in your working space, convert to an RGB print space (for your Epson, Canon, etc.) and then view the RGB numbers. They are not equal.
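    The 'well behaved' property can be shown numerically. A sketch using the published sRGB-to-XYZ matrix (sRGB standing in here for any mathematically constructed working space): every R=G=B gray projects to the same chromaticity as the D65 white point, which is exactly what makes equal RGB values a reliable neutral.

```python
# sRGB-to-XYZ matrix (D65 white point, rounded published values).
M = [(0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505)]

def xy_chromaticity(rgb_linear):
    # Project a linear-RGB triplet to CIE xy chromaticity coordinates.
    X, Y, Z = (sum(m * c for m, c in zip(row, rgb_linear)) for row in M)
    total = X + Y + Z
    return X / total, Y / total

x, y = xy_chromaticity((0.25, 0.25, 0.25))  # any equal-RGB gray
# lands at the D65 white point, about (0.3127, 0.3290)
```

A printer's RGB space has no such guarantee, which is why the converted print-space numbers for a neutral come out unequal.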
    I love Adelson's Illusion - Is that the right word?? Are the colors "accurate" here? What does the word 'accurate' really mean in this context. Colorimeter accurate or "looks" accuarate?

    I have lots of URLs with similar optical illusions. It's helpful in showing users the necessity of using instruments instead of our eyes for SOME tasks. Our visual system is poor at viewing solids like this (where a colorimeter would, of course, measure accurately by defining both squares as the same color). That would be accurate color. That our visual system is fooled and we think we see two different colors (even if we prefer this rendering) is clearly not accurate. Instruments are found in our kitchens, cars and airplanes, among other places, for good reason.

    Our visual system is much better than instruments at viewing colors in context. An image is a perfect example. One could look at a night shot and gray-balance the cement in the foreground only to see that this ruins the color appearance desired by the image creator. This BTN (By The Numbers) approach doesn't work. Same with a sunset scene.

    So, pleasing color may not be, and often isn't, accurate (measured) color, but most of the time we want pleasing color. Using the term accurate color without defining what is meant by accurate, and without backing that up with some measurement, isn't a useful way to describe color. Lest we forget, all these computers understand are numbers. What is Red? It's a word we use to describe a sensation in our brain. What color is R234/0/0? Well, it's some shade of red, but is that tomato red? Numbers alone don't define color appearance unless you're using LAB, which is based on human vision.
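    As a sketch of what "R234/0/0 in LAB" looks like, here is a pure-Python conversion using the standard sRGB transfer curve and CIE LAB formulas (D65 white, rounded constants; this assumes the triplet is sRGB, which the bare numbers alone don't tell us):

```python
def srgb_to_lab(r8, g8, b8):
    def lin(c):  # undo the sRGB transfer curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r8), lin(g8), lin(b8)
    # linear RGB -> XYZ (D65)
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):  # CIE LAB nonlinearity
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(X / 0.9505), f(Y / 1.0), f(Z / 1.0891)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(234, 0, 0)  # roughly (49, 75, 63): a strong red
```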
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
    arodneyarodney Registered Users Posts: 2,005 Major grins
    edited July 4, 2007
    rutt wrote:
    Another more practical question, Andrew. Here is an experience I've had dozens of times here on dgrin. Someone posts a shot for critique. Looks pretty good on my calibrated monitor. But just to check I download and discover that the squirrel is blue, the horse is purple, the face is magenta, the dog is pink, &etc. You know what I mean.
    Not really. First you say you view the image on a calibrated display and it looks good. Then you download it and it looks bad, and you feel you need to fix it. This is clearly a color management issue from the start. Why are the same set of numbers producing two grossly different color appearances?

    You have to figure out what's broken before you just dig in, Dan style, and alter the numbers.

    I can provide you an image that appears at least 2 stops underexposed, and I can fix this without altering a single color number. You can view an image that is numerically perfect (on your calibrated display), then open it in a web browser and it looks totally different and wrong. Do you alter the numbers when, in fact, there's nothing wrong with them?

    If you opened an image on your system and it looked too dark, you wouldn't crank up the luminance on your display to fix the problem, right? Initially you'd want to alter the numbers. But the fact is, there are plenty of situations where the numbers are fine and the application simply isn't providing you the correct preview. This is often seen when users don't post sRGB to the web, but it extends to other areas, even Photoshop.

    So I can't answer your question, because something other than the numbers in the document seems to be wrong. Why do they look OK when properly viewed on a calibrated and profiled display within a color-managed application, but not elsewhere (an elsewhere you haven't defined)? This could be a color management issue, an application issue or a number issue, but I'd only be guessing without further info from you on how you've handled the document in both cases.

    With proper color management, the numbers and associated color space (you have to have a color space or the numbers are meaningless) should provide the same color previews to all users who handle them properly, meaning in a color managed application using a profiled display. Outside that, all bets are off.

    Avoid the temptation to change the numbers until you know that's what is required to fix the color appearance.
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
    ruttrutt Registered Users Posts: 6,511 Major grins
    edited July 4, 2007
    Nice post, Andrew. It helps me understand your position.

    But I still have the question. Do you have some way to test for impossible colors which we might not notice until they are "corrected", after which everyone prefers the new version? Maybe you don't accept the premise that the image looks better after the squirrels no longer measure negative in one or both of A and B, &etc. I've had a lot of experiences which lead me to believe that people nearly always prefer these corrections over the originals, except in rare cases. (Those rare cases are when we are trying to show interesting lighting, or even more interesting interactions of lighting. That's an interesting topic all by itself, but please let's focus on the common case first.)
    If not now, when?
    arodneyarodney Registered Users Posts: 2,005 Major grins
    edited July 4, 2007
    rutt wrote:
    But I still have the question. Do you have some way to test for impossible colors which we might not notice until they are "corrected" after which everyone prefers the new version?

    Impossible colors. That's a Dan term; you'd better tell me what you feel it means.

    There are out of gamut colors.

    There are colors in some color spaces that are defined yet not visible to the human observer (this is rare and not really a big problem unless you go out of your way to define them numerically, which isn't useful).
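    Out-of-gamut colors are easy to demonstrate numerically. A sketch, using the published (rounded) matrices for Adobe RGB (1998) and sRGB: converting Adobe RGB's purest green into linear sRGB produces a negative red channel, meaning sRGB simply cannot represent that color.

```python
A2X = [(0.5767, 0.1856, 0.1882),    # Adobe RGB (1998) -> XYZ, D65
       (0.2973, 0.6274, 0.0753),
       (0.0270, 0.0707, 0.9911)]
X2S = [(3.2406, -1.5372, -0.4986),  # XYZ -> linear sRGB, D65
       (-0.9689, 1.8758, 0.0415),
       (0.0557, -0.2040, 1.0570)]

def mul(M, v):
    # Multiply a 3x3 matrix by a 3-vector.
    return tuple(sum(m * c for m, c in zip(row, v)) for row in M)

srgb = mul(X2S, mul(A2X, (0.0, 1.0, 0.0)))  # Adobe RGB's greenest green
out_of_gamut = any(c < 0 or c > 1 for c in srgb)  # True: R channel < 0
```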
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
    ruttrutt Registered Users Posts: 6,511 Major grins
    edited July 4, 2007
    arodney wrote:
    Not really. First you say you view the image on a calibrated display and it looks good. Then you download it and it looks bad, and you feel you need to fix it. This is clearly a color management issue from the start. Why are the same set of numbers producing two grossly different color appearances?

    No, that's not what happens at all. Here is what happens:
    1. Someone posts an image on dgrin for critique.
    2. It gets basically positive responses.
    3. It looks OK to me on my calibrated monitor
    4. Just for fun I download and open in Photoshop
    5. Looks the same as in the browser
    6. I measure the face, sky, squirrel, horse, whatever and find a reading that I don't think should be possible. For example, the squirrel measures negative in either A and/or B.
    7. I "correct" somehow to push the reading toward what I think is possible.
    8. Now comparing before/after, it's obviously a big improvement. In fact, I wonder why I didn't notice it at first.
    9. I post the "corrected" version. Everyone prefers it.

    So I don't think this is a color management issue, at least not one I understand.
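    Step 6 of the workflow above can be sketched as a simple plausibility check on measured LAB readings. The rules follow what this thread describes (faces should never measure negative in a* or b*; known neutrals should sit near a*=b*=0), but the thresholds are illustrative guesses, not canonical values from either Dan or Andrew:

```python
# Flag LAB readings that shouldn't be possible for a known subject.
def implausible(subject, lab):
    L, a, b = lab
    if subject == "skin":
        return a < 0 or b < 0            # faces should not read green/blue
    if subject == "neutral":
        return abs(a) > 4 or abs(b) > 4  # tolerance is a judgment call
    return False

implausible("skin", (60, -3, 12))    # a* negative: evidence of a cast
implausible("neutral", (85, 1, -2))  # close enough to neutral: no flag
```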
    If not now, when?
    arodneyarodney Registered Users Posts: 2,005 Major grins
    edited July 4, 2007
    rutt wrote:
    No, that's not what happens at all. Here is what happens:
    1. Someone posts an image on dgrin for critique.
    2. It gets basically positive responses.
    3. It looks OK to me on my calibrated monitor
    4. Just for fun I download and open in Photoshop
    5. Looks the same as in the browser
    6. I measure the face, sky, squirrel, horse, whatever and find a reading that I don't think should be possible. For example, the squirrel measures negative in either A and/or B.
    7. I "correct" somehow to push the reading toward what I think is possible.
    8. Now comparing before/after, it's obviously a big improvement. In fact, I wonder why I didn't notice it at first.
    9. I post the "corrected" version. Everyone prefers it.

    So I don't think this is a color management issue, at least not one I understand.

    First of all, the image is probably not previewing identically in the browser and Photoshop unless you're viewing it in Safari, but I digress.

    As to why it didn't occur to anyone at first, you'd have to ask the originator of the image. Did they post it because they didn't like the color appearance? Did they work with a calibrated display as well? Are you just a lot better at altering the color appearance? Why wasn't the color fixed on acquisition (at the scan stage or raw conversion)? If it's a film scan, did it match the film, and was the purpose to match or to improve the original?

    What you're telling me is that you see images you don't think appear as good as they should be, you alter the numbers, and everyone (everyone?) agrees it's better. You are validating your ability to use an image editor to improve the image, which I heartily buy. It doesn't tell us much about the original person handling the image and what issues they had or didn't have.

    I guess I'm not sure what your point is, other than you looked at the image (or looked at the numbers) and altered them and improved the image.
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
    ruttrutt Registered Users Posts: 6,511 Major grins
    edited July 4, 2007
    Yes, I agree, Dan uses this term in two different ways.
    1. To describe LAB colors such as L=0, A=0, B=-128. We could discuss the semantics of the word, "impossible" as applied to these, but it's not what I mean in this instance, so it would be a digression.
    2. To describe evidence of a cast: for example, human faces which measure negative in A and/or B, midday skies which measure strongly negative in A, wedding dresses that are not neutral. This is what I was trying to describe. As I said, I apologize for using Dan's language, but he taught me to do this and it has proved to be a useful thing to do.
    arodney wrote:
    Impossible colors. That's a Dan term, you better tell me what you feel it means.

    There are out of gamut colors.

    There are colors in some color spaces that are defined that are not visible to the human observer (this is rare and not really a big problem unless you go out of your way to define them numerically which isn't useful).
    If not now, when?
    arodneyarodney Registered Users Posts: 2,005 Major grins
    edited July 4, 2007
    rutt wrote:
    Yes, I agree, Dan uses this term in two different ways.
    1. To describe LAB colors such as L=0, A=0, B=-128. We could discuss the semantics of the word, "impossible" as applied to these, but it's not what I mean in this instance, so it would be a digression.
    2. To describe evidence of a cast: for example, human faces which measure negative in A and/or B, midday skies which measure strongly negative in A, wedding dresses that are not neutral. This is what I was trying to describe. As I said, I apologize for using Dan's language, but he taught me to do this and it has proved to be a useful thing to do.

    There's no such thing as an impossible color when defined in LAB; it defines (with some holes) human vision, based on its daddy, CIEXYZ.

    We've talked about describing a cast using the original RGB working space, and the fact that lots of images need a color cast.

    It's quicker and easier to simply LOOK at the image in context, something I've discussed, than to fiddle around looking at LAB numbers that tell you the image you like is wrong. I'd frankly be looking at the source color space numbers (in my case, ProPhoto RGB).

    I would submit that not all wedding dress images should be neutral, while some should be; but it seems easy just to look at the image and make the decision.

    If you're asking me whether the first thing I recommend you do is mouse around an image looking for odd color values, my answer would be no. I'd look at the entire image first and see if I think it needs work or not. Much of this work, BTW, would take place in my raw converter (or, going back in time, on my film scanner).
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
    ruttrutt Registered Users Posts: 6,511 Major grins
    edited July 4, 2007
    arodney wrote:
    First of all, the image is probably not previewing identically on the browser and Photoshop unless you're viewing that image in Safari but I digress.

    Yes, it's a digression, but for a majority of images it's so close that I can't tell the difference. Maybe my eyes aren't good enough. Sometimes, though, you are right, and it really doesn't look the same. You've taught me to understand why in private correspondence. But we digress.
    arodney wrote:
    I guess I'm not sure what your point is, other than you looked at the image (or looked at the numbers) and altered them and improved the image.

    My point is that I wouldn't have known to try that particular improvement (a better word than correction, thanks) unless I had measured and had some idea about what the numbers should be. Not just the originator but everyone(!) prefers the improved version, yet until I measured, nobody(!) was particularly dissatisfied with the image.

    I have a theory about why this happens. But this thread isn't about my ideas, it's about yours. I'd like to know if there is a way of understanding this story which you buy into. I'd like to know if you have an alternative way of detecting that an image is a candidate for this kind of improvement.
    If not now, when?
    arodneyarodney Registered Users Posts: 2,005 Major grins
    edited July 4, 2007
    rutt wrote:
    My point is that I wouldn't have known to try the particular improvement (better word than correction, thanks) unless I had measured and had some ideas about what the numbers should be.

    Numbers are important, but not to be used without looking at the color appearance they produce. And numbers based on what? There are no capture or output devices that handle LAB. There's only one 'device' it defines: the human observer (you and I).

    In an RGB working space, an editing space that is well behaved for editing and archiving, you have some useful numbers to look at, as I described, but you don't need to go overboard and rely on them alone. The best approach in my book is to use the numbers where useful, along with the calibrated display.

    Then, if you want to output the numbers to any device other than that display, you need an output profile, which will produce the correct numeric conversions from the original and handle the gamut-mapping issues. The numbers provided for the output device are correct (assuming a good output profile). At some point you have to soft proof this: look at the image with the dynamic range being used (paper white, ink black) and decide whether what you're seeing is what you want to reproduce on the output device. The LAB numbers, or any numbers other than those for the output device, are immaterial.
    Andrew Rodney
    Author "Color Management for Photographers"
    http://www.digitaldog.net/
    patch29patch29 Registered Users, Retired Mod Posts: 2,928 Major grins
    edited July 4, 2007
    Rutt, can you link to some of the examples you describe?

    and/or can Andy pick an image and each of you can post your version?
    edgeworkedgework Registered Users Posts: 257 Major grins
    edited July 4, 2007
    I just read all the posts in this thread and quite probably have missed some points here and there. Nonetheless, here are a few observations, off the top of my head:

    Andrew: your knowledge and credentials are obvious and need no qualification, but forgive me if I say that you sound like you're all strategy and no tactics. Anyone who can acknowledge that "accurate color" will nonetheless be ugly color that pleases no one, yet not realize that the argument is essentially meaningless beyond that point, is mistaking theory for results.

    I've been producing color for magazine covers, photo spreads, photographers and web sites for nearly 20 years, and doing it well. I've plied my craft in shops that spit on the notion of color management, and I've done it in shops that worship the concept like it was the salvation of western civilization. I'm currently working in an environment that typifies most such environments: they pay lip service to the idea but really don't do it well.

    When we acquired a new proofing system a while back, our production wizard ordered up a proof from our old provider to compare with ours. Shock of shocks, they didn't match. So he embarked on a city-wide quest, ordering contract proofs from about a dozen different services. All were different. There's nothing more pathetic than a so-called production wizard staring at a pile of proofs that don't match and wailing "Which one is correct?" I told him, "The one with the client's initials in the lower right corner, the one a printer contracts to match on press. That's your correct color." That's tactics.

    Rutt makes a passionate case for Dan Margulis' approach, by the numbers; you make a passionate case for dismissing Rutt's case, also based on numbers... numbers, schmumbers. What we have here are photographers looking for tactics, and theory be damned. Personally, I don't believe you fail to understand the point Rutt made in his last post; you're too smart. Talk all you want about this space or that profile: in the real world, when you need results, taking a look at the numbers and noting some general patterns in the relationships between the different channels is not only warranted; to refuse to do so is amateurish, and to deny its usefulness ceases to be an artistic, or scientific, argument and becomes mere politics.

    The myth of color management rests on the notion that there exists, somewhere, a device-independent definition of color from which all others can be measured, and which can serve as a standard for translating from space to space. The fatal flaw is that, by definition, we have no access to that mythical color save through, you guessed it, a device. So it becomes an act of faith, not science, to talk about accurate color. It's all a compromise, a quest for pleasing color. There is no other. The only question is who you please, and usually that's the guy who signs the check.
    There are two ways to slide through life: to believe everything or to doubt everything; both save us from thinking.
    —Korzybski
    ruttrutt Registered Users Posts: 6,511 Major grins
    edited July 4, 2007
    arodney wrote:
    There's no such thing as an impossible color when defined in LAB, it defines (with some holes) human vision based on its daddy, CIEXYZ.

    Let's put this aside for the purposes of this discussion. It only bears on the definition of the word "impossible". We can quibble, but it's a digression.
    arodney wrote:
    We've talked about describing a cast using the original RGB working space and the fact that, lots of images need a color cast.

    Agreed. I have lots of images like this. Most recently:

    153965064-L.jpg
    The Queen of the Damned (actually the Wilis), Boston Ballet's 2007 production of Giselle, Kathleen Breen Combes

    Also mixed casts, such as:

    69321630-L.jpg
    Balanchine's Serenade, Boston Ballet, 2006
    arodney wrote:
    Its quick and easy to simply LOOK at the image in context, something I've discussed, then fiddle around looking at LAB numbers that tell you that the image you like is wrong. I'd frankly be looking at the source color space numbers (in my case, ProPhoto RGB).

    I would submit that all wedding dress images should not be neutral while some should be, but it seems easy just to look at the image and make the decision.

    If you're asking me whether the first thing I recommend you do is mouse around an image looking for odd color values, my answer would be no. I'd look at the entire image first and see if I think it needs work or not. Much of this work, BTW, would take place in my raw converter (or, going back in time, on my film scanner).

    Your eyes must be much better than mine (and many others'). Lots (and I do mean LOTS) of times I don't see it. Once I measure, I know there is a cast of some sort and can then make a decision about what to do about it. I don't always neutralize the cast (see the images above), but after measuring I'm aware of it. And often, at least partially neutralizing the cast does make an improvement which everyone prefers.

    Here are some examples of this phenomenon:
    There are many more, some of which happened via private emails. Try searching for my posts containing the word "cast".
    If not now, when?
    ruttrutt Registered Users Posts: 6,511 Major grins
    edited July 4, 2007
    Edgework is better at measuring and figuring out what to do than I am. Here are some examples where he did this:
    If not now, when?