Re: Image quality on the web

Chris Lilley, Computer Graphics Unit (lilley@v5.cgu.mcc.ac.uk)
Fri, 18 Nov 1994 17:23:24 GMT


Joel Crisp said:

>Chris Lilley said :
>>
>> Yechezkal-Shimon Gutfreund said
>>
>> > I would think that accurate platform independent color reproduction
>> > should be the main priority. For many kinds of commercial sales
>> > (clothing, flowers, etc) color reproduction is key.
>>
>> OK, no complaints from me about that one.
>
>This is also important on monitors. The big problems are the degree of
>user control over the parameters at client end, user ignorance about
>viewing and display conditions, background illumination.

I agree that user ignorance is a problem. However, the current situation is that
even experienced users are unable to share images with any sort of colour
fidelity. Once we have that cracked, making it foolproof for naive users is
indeed the next step.

> The colour of an object depends on the texture of the surface, the
> reflectance and adsorption of the surface, the angle of incidence of
> the light source AND viewer, and the spectrum of the light source.

> All of these generate problems. Texture is obvious. Most colours
> which people wish to represent on the screen are originally adsorption
> based ( mostly ), whereas screens are emmision based ( and non-linear ).
> This causes another mapping problem.

Yes and no. We are talking about screen display here, not physical objects. We can
either assume that the light has been modelled correctly to produce the image
(see, for example,
<http://info.mcc.ac.uk/CGU/research/raytracing/reflectance.html>) or we can
assume that the image has been satisfactorily colour balanced using something
like Photoshop, so that the LAB values are known for each pixel.
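
For what it is worth, here is a rough sketch (in Python, purely illustrative) of
getting from known LAB values back to XYZ, which is the form the display
calculations further down need. The D65 white point is an assumption of the
sketch, not something the image tells you:

    # Rough sketch of CIELAB to XYZ, assuming a D65 white point (Xn, Yn, Zn);
    # substitute whatever white the image was actually balanced to.
    def lab_to_xyz(L, a, b, Xn=95.047, Yn=100.0, Zn=108.883):
        def f_inv(t):
            return t ** 3 if t > 6.0 / 29.0 else 3.0 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0)
        fy = (L + 16.0) / 116.0
        return (Xn * f_inv(fy + a / 500.0),
                Yn * f_inv(fy),
                Zn * f_inv(fy - b / 200.0))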

The absorption vs emission question is a bit of a red herring in this instance.
Things we can see give off light. Sometimes this light is generated by the
object, sometimes it is a modification of light falling on it, and often it is
both. Either way, things give off light with a defined spectrum, which can thus
be reduced to an XYZ triple.
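
To make "reduced to an XYZ triple" concrete, here is a minimal sketch of the
integration. The CIE 1931 colour matching functions and the spectral power
distribution, sampled on the same wavelength grid, are assumed inputs; I am not
supplying the tables here:

    # Sketch: reduce a spectral power distribution to an XYZ triple by
    # integrating it against the CIE 1931 colour matching functions.
    # spd, xbar, ybar, zbar are lists sampled at the same wavelengths.
    def spectrum_to_xyz(spd, xbar, ybar, zbar, step_nm=5.0):
        X = sum(p * x for p, x in zip(spd, xbar)) * step_nm
        Y = sum(p * y for p, y in zip(spd, ybar)) * step_nm
        Z = sum(p * z for p, z in zip(spd, zbar)) * step_nm
        return X, Y, Z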

That being the case, we can largely ignore the factors you mention, except
insofar as background illumination falling on the screen affects the light we
see coming from the screen.

The light entering our eyes when viewing an on-screen image has three sources:

1) from the phosphors the electron beam is currently firing at. This we can
affect; given the phosphor chromaticities, white point, and transfer functions
(or gamma) we can not just affect it but control it.

2) from the phosphors the beam has previously excited, the general dark grey
glow of a monitor displaying black.

3) from reflected background light. Remember that the monitor image forms the
primary adaptive stimulus except in cases of very high background illumination,
when users will notice that they cannot see properly because of glare on the
screen. So adaptation is helping us here.

2 and 3 are collectively known as the back raster, and can be accounted for in
accurate XYZ to RGB calculations. See relevant portions of

Oshima, Yuasa, Sakanoshita and Ogata, "A CAD System for Color Design of a Car",
Proceedings of Eurographics '92, Computer Graphics Forum 11(3), 1992,
pp. C-381 to C-390.
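
To be clear about what "accounted for" means, the idea (my own sketch, not the
calculation from that paper) is to measure the XYZ of the back raster once and
subtract it from the target XYZ before converting to monitor RGB. The matrix and
the simple power-law gamma below are placeholders for real calibration data:

    import numpy as np

    # Sketch only: subtract the measured XYZ of the back raster (screen black
    # plus reflected ambient) so the light leaving the screen adds up to the
    # target XYZ. M_rgb_from_xyz and gamma stand in for monitor calibration data.
    def xyz_to_monitor_rgb(xyz_target, xyz_black, M_rgb_from_xyz, gamma=2.2):
        xyz = np.asarray(xyz_target, float) - np.asarray(xyz_black, float)
        rgb_linear = M_rgb_from_xyz @ xyz           # linear light per gun
        rgb_linear = np.clip(rgb_linear, 0.0, 1.0)  # crude out-of-gamut handling
        return rgb_linear ** (1.0 / gamma)          # normalised drive values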

> The angle of incidence is
> essentially uncontrollable,

Considering the magnitude of the effect, rather than just cataloguing possible
effects, puts this fairly well down the list of things I would worry about in
the present context.

> as is the spectra of the light source.

See comments on adaptation above.

> Any file format which wishes to produce a 'true' representation of
> a colour should be capable of specifing all of these at 'preferred' values -
> however, there are so many different ( and conflicting ) specifications
> of colour spaces and colour transformation equations that this is
> almost impossible.

Unless you have some direct experimental evidence to cite, I think you overstate
the case. What I want to avoid is a collective response of "oh, this is too
hard, it is impossible, let's not bother, here is an RGB file".

Well-understood solutions do exist that would greatly improve on the current
situation. Let us implement these, and implement them well, and proceed from
there, rather than give up in despair.

[About specified display sizes for images]
>And a standard for representing the units which these are specified in,

Yes, absolutely. The HTML 3 DTD seems to use ems, which is fair enough.

>along with prefered re-sampling method

That could be left to the client, really. Bicubic interpolation would be fine,
in whatever colour space the image is expressed in, given the fine granularity
of an image and the fairly limited resampling needed to cope with the range of
screen resolutions.
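
As a sketch of how little is involved on the client side, here is a cubic
resampler using scipy.ndimage.zoom as a stand-in for whatever interpolation
routine the client actually has:

    import numpy as np
    from scipy.ndimage import zoom

    # Sketch: per-channel cubic (order-3 spline) resampling of an H x W x C image.
    # The choice of working colour space is left to the client, as noted above.
    def resample(image, scale):
        return zoom(np.asarray(image, float), (scale, scale, 1), order=3)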

> and dither method.

DITHER!! This is accurate image display we are talking about here. If a client
is displaying on an 8 bit visual into the standard colour map, whatever dither
the client chooses will be just fine. Netscape does well here.

> Copyright and distribution rights ( and PGP authentication of content ? )
> are notable ommissions here.

Perhaps you misunderstand. I am not producing a catalogue of indexing terms
which I think all images should have. (If I were, the terms you suggest would be
excellent additions.)

I am bringing to the attention of potential client writers that inline TIFF, if
implemented well, could provide a ready means to browse this handy information,
for which tags are already defined in the spec, by constructing an on-the-fly
document to hold it.

If you want to suggest your tags as future enhancements to the TIFF spec, the
address is tiff-input@aldus.com

On the other hand, if you are suggesting that images should have this
information in the HTML document, then putting it as a link from the copyright
symbol in the HTML 3 caption would seem like a good move.

> This would also be nice to be able to encapsulate, so that source 1
> supplies a content authenticated image to source 2, who then encapsulate
> it with additional meta-information with overall content authentication
> without having to affect the auth info on the original image.

Sounds good, this would mean inline multipart/parallel I suppose.

> We found that a number of companies we were suppling to ( in my previous
> job ) were not happy with CIE-LAB for colour representation on screen.

I am not all that happy with it either, but it sure as hell beats anonymous RGB,
which is the current situation. Do you have a pointer to a better spec that is
implementable in the near future?

> Particularly car paint manufactures, who have a high gloss on the
> final surface.

That is because a single flat colour is different from a curved car panel
painted that colour and then viewed in normal daylight. So, specifying a
particular red in LAB does not tell you what a car will look like when painted
that colour. Fine, but wide of the current context.

Whereas what we are discussing is sending around a photo, or (as in the paper I
cited earlier) a daylight simulation of the car painted that colour, and
ensuring that the *image* is viewed with much better fidelity than the present
situation.

> I suppose what I'm really trying to say, is that colour
> representation is less important than 'appearance' representation.

I will agree with you for objects, and for printout (the Carisma project at
Loughborough University of Technology springs to mind here). I would like to see
some evidence that this is a big enough win over LAB for on-screen viewing of
images before bypassing LAB and going for something new that perhaps has little
track record.

[About calibrated RGB spaces]
>This is going in the right direction, if the user is given the
>ability to correctly set up a display system using these parameters.

1) Surely you realise that a calibrated RGB space is just an alternative
representation of XYZ, and that if you are not happy with LAB (and by
implication XYZ), then you are not happy with calibrated RGB either.

2) Users do not "correctly set up a display system using these parameters".
How, pray, would you adjust your monitor chromaticities unless you have a
phenomenally specialised monitor? Calibrated RGB can be displayed in one of two
ways:

- the quick and inaccurate way: call it RGB and throw it at the screen
- convert from the known RGB space to XYZ and then display as with any other
  CIE-based colour space.
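
For the second way, the conversion matrix is entirely determined by the
primaries and white point that make the RGB space "calibrated" in the first
place - which is exactly why it is just XYZ in other clothes. A sketch (the
chromaticities in the comment are an example, not a proposal):

    import numpy as np

    # Sketch: build the linear-RGB-to-XYZ matrix of a calibrated RGB space
    # from its primary chromaticities and white point.
    def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
        def col(x, y):
            return np.array([x / y, 1.0, (1.0 - x - y) / y])
        P = np.column_stack([col(*xy_r), col(*xy_g), col(*xy_b)])
        S = np.linalg.solve(P, col(*xy_white))  # relative luminances of the guns
        return P * S                            # scale each primary's column

    # e.g. CCIR-709 primaries with D65 white:
    # M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))
    # xyz = M @ linear_rgb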

> But then, people don't tweak dot gain to
> compensate for miscalibrated imagesetters in this day and age, do they ;-P

I hope you spotted that was sarcasm.

> People do tweak monitor caliabration tho'

Tweak it for what purpose? What is your point here?

[About YCbCr colour space and subsampling]
> This is one of ( many ) my biggest problems with Photo-CD.

Kodak YCC, although based on YCbCr, is not necessarily subject to the same
limitations. In particular, it is not limited by broadcast video conventions
designed to stop transmitter overload and multiplexing artefacts. There is
subsampling on the domestic version, but you can get a better image by taking
the chrominance components from the next image resolution up. I don't know if
the Pro version has subsampling.

> Agreed. There is also the problem that in digitised images of any type,
> the backgound is not often uniform - black translates to about 48 shades

in RGB ;-) which is not perceptually uniform. Those 48 shades all look much the
same, don't they? There would be far fewer than 48 distinct shades in, for
example, LAB.
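
A quick back-of-the-envelope check, assuming an 8-bit framebuffer with a plain
gamma 2.2 power law (an assumption on my part, not a description of your TARGA
setup): the bottom 48 or so RGB codes collapse to a much smaller number of
distinct L* levels.

    # Count how many distinct integer L* levels the bottom 48 RGB codes span,
    # assuming 8-bit codes and a simple gamma 2.2 transfer (both assumptions).
    def lstar(Y):  # Y is relative luminance in 0..1
        return 116.0 * Y ** (1.0 / 3.0) - 16.0 if Y > (6.0 / 29.0) ** 3 else (29.0 / 3.0) ** 3 * Y

    codes = range(49)
    levels = {round(lstar((c / 255.0) ** 2.2)) for c in codes}
    print(len(codes), "RGB codes ->", len(levels), "distinct integer L* levels")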

> when digitising from our videodisk via a TARGA board. The noise inherent in
> the system makes automatic background detection difficult

Are you using a CCIR-709 style gamma function with a toe-in slope limit near the
origin, or a simple power law? If the latter, changing your gamma transfer
function will help with noise a lot.
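
The difference, roughly: the CCIR-709 curve is linear (slope 4.5) near black,
whereas a pure power law has infinite slope at the origin, so camera noise
sitting just above zero gets hugely amplified. A sketch of the two:

    # The two transfer functions being contrasted (scene light L in 0..1).
    def oetf_709(L):
        # linear toe below 0.018, power segment above; slope near black is 4.5
        return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

    def oetf_power(L, gamma=2.2):
        # simple power law: derivative tends to infinity as L approaches 0
        return L ** (1.0 / gamma)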

[About output quality consistency problems]
Commercial houses deal with this by calibrating to quality standards daily or
hourly.

People with computer printers in the UK £5k to £7k range should not be
surprised if they do not get magazine quality with zero effort on their part.
Especially if they throw raw RGB data at it.

Again, I am not sure whether you are saying the problem is insoluble and the
attempt should be abandoned, or what.

[About Photoshop]
>No comment. ( Other than it is a commercial package ).

Sure. Also a de facto standard of astonishing uniformity across that whole
industry. If there is already a MIME type for MS Word, I don't see a problem
with another one for Photoshop files. It's just one more piece of technology to
throw into the pot.

> I am more inclined to talk about power values at frequencies within a
> 10nm spectra.....with the conditions of sampling heavily specified. This
> may be converted into CIE et al

Why stop at 10? Values are available at 2 nm and 1 nm steps too - is this just a
counsel of perfection, or do you honestly think it will give better image
display on the monitor in a Web browser? I assume you are familiar with
trichromatic theory, and aware that wildly different spectra can give
indistinguishable colours.

> BTW, all of this is great, but the first problem is persuading the
> content experts to give you decent samples in the first place. No matter
> how good your file format is, it can't correct for a pathologist taking
> a photo with the wrong aperture or the harsh artifacts introduced by
> flash lights. Or indeed, of the wrong bit of body being photographed. ;-(

Sure. But when they do get it right, currently there is no way to make use of
that. I would like to change this situation.

--
Chris