In article <B7A41FE6.809Ffirstname.lastname@example.org>, Wire Moore
<email@example.com> writes
>I suggest that the term pixel is not abstract in the sense of the term
>point.
>
Neither do I. Just because a pixel does not possess dimensional
information does not make it a point, just as a square is not a point
even though it, too, carries no inherent dimensions. In this respect,
pixels are certainly more like squares, triangles, circles etc. in that
their size is defined by the user under certain conditions.

>It is appropriate to talk about substantive pixels. I submit we find
>pixels in the structure of camera CCDs, we find pixels on the faces of
>CRT displays, we find pixels in printed output, we find pixels in the
>data representation of images. These are all substantive, true,
>appropriate and worthy uses of the term pixel. These all can be
>located, measured, quantified and compared by any of us. Even within
>Photoshop, pixels have definite extent, and this extent varies
>depending on what mode the image is in, RGB vs. Lab, vs CMYK. Without
>such extent, a computer could not operate, let alone represent images.
>Without extent, the term pixel would be useless because it would refer
>to nothing. This is not true of the term point.
>
That is completely missing the point (pun intended)! The term pixel
denotes neither a point nor a dimension. It is a picture element, pure
and simple - the smallest element which conveys meaningful information
about the image which contains it: the image equivalent of the "bit",
from binary digit, the smallest unit of logical information. Certainly
monitors, displays, prints, CCDs etc. contain elements which are
capable of supporting pixels - isolating them from the rest of the
image, identifying them - but it is completely wrong to consider those
elements to *contain* pixels. Let's consider your argument concerning
CCDs, for example.
You consider the CCD to contain pixels, and that must, by definition,
determine the number of pixels in the image that the CCD can produce -
but you would be completely wrong, at both the upper and lower ends of
the comparison. Binning of CCD elements, on chip or off, means that the
image produced by the CCD can have substantially fewer pixels than CCD
elements, so there is no match at the lower end.

More importantly, each CCD element has a resolution which extends well
beyond the capability of the array structure to reproduce. Using a
technique known as microscan, this additional resolution can be
extracted, resulting in many more pixels in the image than there are
CCD elements. This is a well-established fact - I know because I hold
the patents for many of the practical methods of achieving it. It has
been deployed on the Hubble telescope and on the imager on the Mars
rover. More recently, my group developed the world's highest-resolution
(confirmed by actual measurements) uncooled thermal imaging camera by
producing images with more than twice as many pixels as there are
elements in the sensor array. Similar, though not identical, techniques
are implemented in many commercial systems, such as Epson scanners and
Fuji digicams - all produce more pixels than the number of elements in
the CCD would suggest was possible. These are real pixels which can be
measured, so they are not simply obtained by interpolation, although
many who do not understand the issues consider that the only
description that makes sense. So there is no match between the number
of CCD elements and the number of pixels in the image at the upper end
of the comparison either.

In short, there is NO relationship between CCD elements - or pixels, as
you would wish to call them - and pixels, other than the wish of the
simpleton that all other issues would disappear.
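To make the two directions of the mismatch concrete, here is a minimal
Python sketch (not the patented methods described above - the function
names bin2x2 and microscan2x2 are hypothetical, and the frames are toy
lists-of-lists). Binning merges 2x2 blocks of sensor elements into one
image pixel; 2x2 microscan interleaves four captures taken with
half-element shifts into an image with four times as many pixels as the
sensor has elements.

```python
def bin2x2(frame):
    """2x2 binning: sum each 2x2 block of sensor elements into one
    output pixel, so the image has 1/4 as many pixels as elements."""
    h, w = len(frame), len(frame[0])
    return [[frame[2*r][2*c] + frame[2*r][2*c+1]
             + frame[2*r+1][2*c] + frame[2*r+1][2*c+1]
             for c in range(w // 2)]
            for r in range(h // 2)]

def microscan2x2(frames):
    """2x2 microscan: interleave four captures taken with sub-element
    shifts (0,0), (0,1/2), (1/2,0), (1/2,1/2) into an image with 4x
    as many pixels as the sensor has elements."""
    f00, f01, f10, f11 = frames
    h, w = len(f00), len(f00[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            out[2*r][2*c] = f00[r][c]      # unshifted sample
            out[2*r][2*c+1] = f01[r][c]    # half-element right
            out[2*r+1][2*c] = f10[r][c]    # half-element down
            out[2*r+1][2*c+1] = f11[r][c]  # diagonal shift
    return out
```

Either way, the pixel count of the image is set by the processing, not
by the element count of the sensor - which is the point being made.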
So whilst the term may well be abstract and dimensionless, the issues
caused by its misuse and needless fixation to dimensions are
substantial. I am quite happy that many people, some highly qualified
engineers, cannot grasp the difference between pixels (picture
elements) and CCD cells or monitor dots: that is why my sensor designs
consistently outperform my competitors' in the resolution stakes. When
the accountants ask how I can get more pixels in my camera's images
than the sensors used to produce them have, I simply refer them to
Clarke's third law: "any sufficiently advanced technology is
indistinguishable from magic".

>Kennedy's definition of pixel obliterates my reality.
>
Your failure to understand the abstraction says more about you than
about the concept you are attempting to refute. See the previous
paragraph! Breaking the link between sensor and pixel may be your first
step to understanding image reality.

>Kennedy and Laurie: if you would put forth a more formal definition of
>pixel, then your points could serve some purpose. I predict that if
>you do provide such a definition, you will find your previous points
>suddenly uninteresting.
>
As I previously posted, Wire, there is no need of an additional
definition. The original definition of "picture element" serves
perfectly well.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers

Turn off HTML mail features. Keep quoted material short. Use accurate
subject lines. http://www.leben.com/lists for list instructions.