I'm with you Liesa, I don't get it either, they are lovely though.
John, did you say you downloaded something to get it on the Mac? I used to have Photoshop on my old PC but haven't got around to getting anything for my Mac yet.
Easiest way to explain, I guess, is that when our eyes look at a scene, they automatically adjust so that you see detail in the shadowed areas, detail in the bright areas, and everything in between.
They constantly change the exposure on the fly, so we effectively get a composite of lots of images all at once, which the brain then processes into one view.
When a camera "sees" this same scene, it can capture the detail in the shadows but then lose the highlight detail (think of a white sky when you know it was blue).
Or you can get the camera to "see" the detail in the highlights/sky, but then you end up with silhouettes of people, etc.
This is because the camera can only capture a certain range from dark to light; if the scene varies too much between the dark and the light, something has to give.
In a lot of cases this is why you get the "but it didn't look like that when I was there" reaction... same reason the moon looks bigger when you view it live than in a photo; your eyes and brain effectively zoom in on it without you realising.
This process merges several photos taken at different exposures to mimic what the eye sees... a composite of dark and light exposures and a few in between.
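If you're curious what "merging exposures" actually looks like under the hood, here's a toy sketch in Python. This is not how Photoshop or any particular program does it; it's just a minimal, made-up example of the idea: each pixel is weighted by how well-exposed it is (close to mid-grey), then the bracketed shots are blended using those weights, so shadow detail comes from the bright shot and highlight detail from the dark one.

```python
import numpy as np

def merge_exposures(images, sigma=0.2):
    """Toy exposure fusion: weight each pixel by how close it is to
    mid-grey (i.e. well exposed), then take the weighted average.
    `images` is a list of float arrays with values in [0, 1]."""
    stack = np.stack(images).astype(float)            # shape (n, H, W)
    # Gaussian weight peaking at 0.5: crushed shadows and blown
    # highlights get little say in the final pixel.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)     # normalise per pixel
    return (weights * stack).sum(axis=0)

# A "dark" and a "bright" exposure of the same tiny 1x2 scene:
dark   = np.array([[0.05, 0.45]])   # shadow pixel crushed, highlight OK
bright = np.array([[0.55, 0.95]])   # shadow pixel OK, highlight blown
merged = merge_exposures([dark, bright])
```

In the merged result, each pixel ends up close to whichever exposure captured it best, which is the whole trick: no single shot has both ends of the range, but the blend does.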
If any of that makes sense? It did when I started...lol