Chromatic adaptation is the ability of the human visual system to discount the colour of a light source and to approximately preserve the appearance of an object. For example, a white piece of paper appears white when viewed under both sky light and tungsten light. However, the measured tristimulus values (the product of the surface reflectance, the spectral power distribution of the light source and the cone sensitivities, integrated over the visible spectrum) are quite different for the two viewing conditions: sky light is ``bluer''; it contains more short-wavelength energy than tungsten light.
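The paper example can be sketched numerically. The following is an illustrative calculation only: the 5-sample spectra and cone sensitivities below are invented, not measured data, but they show how the same surface yields different tristimulus values under two illuminants.

```python
import numpy as np

# Coarse, made-up spectra sampled at five wavelengths across the visible range.
reflectance = np.array([0.90, 0.91, 0.92, 0.91, 0.90])   # white paper, nearly flat
sky_light   = np.array([1.4, 1.2, 1.0, 0.8, 0.6])        # more short-wavelength energy
tungsten    = np.array([0.4, 0.7, 1.0, 1.3, 1.6])        # more long-wavelength energy

# Hypothetical cone sensitivities, one row per sensor (short, medium, long wave).
cones = np.array([
    [0.8, 0.4, 0.1, 0.0, 0.0],   # short-wave sensor
    [0.1, 0.5, 0.8, 0.4, 0.1],   # medium-wave sensor
    [0.0, 0.1, 0.4, 0.8, 0.5],   # long-wave sensor
])

def tristimulus(reflectance, illuminant, sensors):
    """Discrete form of the integral: sum reflectance * illuminant * sensitivity."""
    return sensors @ (reflectance * illuminant)

paper_under_sky = tristimulus(reflectance, sky_light, cones)
paper_under_tungsten = tristimulus(reflectance, tungsten, cones)
# Same paper, different illuminants: the tristimulus values differ markedly,
# with the short-wave response larger under the bluer sky light.
```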
Digital imaging systems, such as digital cameras and scanners, do not have the ability to adapt to the light source. Scanners usually use fluorescent light sources. For digital cameras, the light source varies with the scene, and sometimes within a scene. Therefore, to achieve the same appearance of the original document or scene under different display conditions (such as a computer monitor or a light booth), the captured image tristimulus values have to be transformed to take into account the light source of the display viewing conditions. Such transformations are called chromatic adaptation transforms (CATs). There has been a significant amount of research [1,7,8,6] into determining accurate CATs: transforms that can accurately predict colour appearance across a change in illumination.
Many chromatic adaptation transforms described in the literature [1,7,6] are based on a modified form of the von Kries chromatic adaptation model, which states that chromatic adaptation is an independent gain regulation of the three sensors in the human visual system. Mathematically the von Kries model can be written as:
\begin{equation}
\left[\begin{array}{c} R_2 \\ G_2 \\ B_2 \end{array}\right] =
\left[\begin{array}{ccc} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{array}\right]
\left[\begin{array}{c} R_1 \\ G_1 \\ B_1 \end{array}\right]
\end{equation}
where the subscripts $1$ and $2$ denote a pair of illuminants and $\alpha$, $\beta$ and $\gamma$ are the independent gain control factors. A more general model of chromatic adaptation is the so-called generalised linear model. In this case chromatic adaptation is once again controlled by independent gain factors, but these operate not on the vision system's sensors but on a linear combination thereof:
\begin{equation}
\left[\begin{array}{c} R_2 \\ G_2 \\ B_2 \end{array}\right] =
T^{-1}
\left[\begin{array}{ccc} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{array}\right]
T
\left[\begin{array}{c} R_1 \\ G_1 \\ B_1 \end{array}\right]
\end{equation}
where $T$ is a $3 \times 3$ matrix mapping the sensor responses onto the combined sensor basis on which the gains act.
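Both models can be sketched in a few lines. The white-point responses and the matrix below are invented for illustration (the matrix is a hypothetical sharpening-style combination, not a published transform); the gains are taken, as is conventional, as ratios of the two illuminants' white-point responses.

```python
import numpy as np

white1 = np.array([0.9, 1.0, 1.2])   # made-up sensor response to white, illuminant 1
white2 = np.array([1.1, 1.0, 0.8])   # made-up sensor response to white, illuminant 2

def von_kries(rgb, white1, white2):
    """Diagonal (von Kries) adaptation: an independent gain per sensor."""
    gains = white2 / white1
    return gains * rgb                         # equivalent to diag(gains) @ rgb

def generalised_cat(rgb, white1, white2, T):
    """Generalised linear model: gains act on the combined sensors T @ rgb."""
    gains = (T @ white2) / (T @ white1)
    return np.linalg.solve(T, gains * (T @ rgb))   # T^{-1} D T rgb

# Hypothetical 'sharpened' sensor combination (invertible 3x3 matrix).
T = np.array([[ 1.2, -0.2,  0.0],
              [-0.1,  1.1,  0.0],
              [ 0.0,  0.0,  1.0]])
# By construction, both models map the illuminant-1 white exactly onto the
# illuminant-2 white; with T equal to the identity the two models coincide.
```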
In the Colour Group we are interested in determining which linear transformation of the sensors provides the best chromatic adaptation; that is, what values the entries of the matrix $T$ in the above equation should take. The recommended transforms currently in use are based on minimising the perceptual error over experimental corresponding-colour data sets, but in our initial work on this subject we have shown that a chromatic adaptation transform derived through spectral sharpening performs as well as the most popular CAT, the Bradford transform, and better than most other transforms.
In later work we expanded the original study and determined a large set of sensor transforms which perform similarly to existing chromatic adaptation transforms. Given that there is such a large set of transformations with similar performance, we have recently been investigating alternative error criteria on which to derive and test chromatic adaptation transforms. In particular, we have been looking at the performance of transforms derived on the premise that the ratios of all pairs of colours should remain stable under different illumination conditions. This work is ongoing, and our aim is to develop a deeper understanding of how the human visual system operates with regard to chromatic adaptation.
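The ratio-stability idea can be illustrated with a small sketch (all numbers invented): a purely diagonal transform applied to a pair of surfaces leaves their per-channel sensor-response ratios exactly unchanged, which is why a sensor basis in which adaptation is well modelled by diagonal gains also keeps colour ratios stable across illuminants.

```python
import numpy as np

gains = np.array([1.2, 1.0, 0.7])          # made-up diagonal gain factors
surface_a = np.array([0.40, 0.50, 0.60])   # sensor responses under illuminant 1
surface_b = np.array([0.20, 0.60, 0.30])   # a second surface, same illuminant

a2 = gains * surface_a                     # both surfaces mapped by the same
b2 = gains * surface_b                     # diagonal transform to illuminant 2

ratios_before = surface_a / surface_b
ratios_after = a2 / b2                     # the gains cancel channel-by-channel
```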