I was reading somewhere (Discover Magazine, I think) about CCDs in video cameras and the like, and it said that, on a pixel-by-pixel basis, a good portion of the color assigned to each pixel is based on the colors next to it.
Imagine a line of sensors, each one sensitive only to red, green, or blue, so you get a line reading RGBRGBRGB and so on. Now a bit of yellow light hits three of those sensors. It registers a value in the red sensor and in the green sensor next to it, but not in the blue sensors on either side. The camera's software then makes an estimation based on the value of each group of three sensors and the groups of sensors surrounding it. If a group is heavily red and the group to its left is heavily blue, a border is established; if the group to the right is similar to the middle group, it is assumed that they are probably the same. That seems to be why red/pink and red/orange combinations are so hard to scan.
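Just to make the idea concrete, here is a toy sketch in Python of that fill-in-from-the-neighbors estimation, using the one-dimensional RGBRGB line of sensors described above. This is my own simplified illustration, not any camera's actual algorithm (real cameras use a 2-D color mosaic and much more sophisticated interpolation), and the function name and averaging rule are assumptions of mine.

```python
# Toy sketch: each sensor measures only one of R, G, B; the two missing
# channels at each position are estimated from the nearest sensors that
# did measure them. Purely illustrative, not a real camera's algorithm.

def demosaic_line(samples):
    """samples: list of (channel, value) pairs, channels in repeating
    R, G, B order. Returns one estimated (r, g, b) triple per sensor,
    filling each missing channel with the average of the measured
    values of that channel within two positions on either side."""
    out = []
    for i, (ch_i, val_i) in enumerate(samples):
        rgb = {}
        for ch in "RGB":
            if ch == ch_i:
                rgb[ch] = val_i            # this channel was measured here
            else:
                # average the nearby sensors that measured this channel
                near = [v for j, (c, v) in enumerate(samples)
                        if c == ch and abs(j - i) <= 2]
                rgb[ch] = sum(near) / len(near) if near else 0
        out.append((rgb["R"], rgb["G"], rgb["B"]))
    return out

# A patch of yellow light (strong red and green, no blue) across six sensors:
line = [("R", 200), ("G", 200), ("B", 0), ("R", 200), ("G", 200), ("B", 0)]
for px in demosaic_line(line):
    print(px)  # every position comes out yellow: full red and green, no blue
```

Note that each sensor only measures one channel, so two-thirds of every output pixel is estimated rather than measured, which matches the one-third figure mentioned below.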
The details are probably a little foggy, but one statement in the article stands out in my memory: a digital picture is based on only one-third of the information accumulated in the CCD; the other two-thirds is discarded (or rather, filled in by estimation).
Does that make sense to you?
Paul Hegge