The human visual system can handle dynamic ranges several orders of magnitude larger than those of conventional acquisition and visualization devices. To fill the gap between the direct observation of a scene and its digital representation, high-dynamic-range sensors have recently been devised. Techniques also exist to build high-dynamic-range images from the output of low-dynamic-range devices, e.g. by means of multi-exposure, as sketched below.
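As a rough illustration of the multi-exposure approach, the following sketch merges a bracketed exposure stack into a single radiance map. It assumes a linear camera response and an illustrative hat weighting; real pipelines typically first estimate the camera response curve (e.g., Debevec and Malik), so this is a minimal sketch rather than a complete implementation.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed exposure stack into one high-dynamic-range
    radiance map. Assumes a linear camera response; real pipelines
    first recover the response curve before merging.

    images: list of float arrays in [0, 1], all the same shape.
    exposure_times: exposure time (seconds) for each image.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tones, discount near-black and
        # near-saturated pixels, which carry little information.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t  # each frame estimates radiance as value / exposure
        den += w
    return num / np.maximum(den, 1e-6)
```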
Recently, a number of systems have been proposed to extend the dynamic range of visualization devices, but the problem is far from solved. As a consequence, the dynamic range of high-dynamic-range images must still be reduced to fit that of the visualization device at hand.
Several solutions exist for this tone mapping problem, but most of them cope only with still images and cannot be straightforwardly extended to video. Moreover, in driving-assistance applications, video processing is usually performed on low-cost hardware, with devices often embedded in the camera box itself (e.g., smart cameras).
This project addresses the problem of tone mapping high-dynamic-range video sequences. Special attention is paid to temporal illumination variations.
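To illustrate why temporal illumination variations matter, the sketch below applies a global photographic operator (in the spirit of Reinhard et al.) to each frame, while low-pass filtering the adaptation luminance across frames so that sudden illumination changes (e.g., entering a tunnel) do not cause flicker. The class, the `key`, and the `smoothing` parameter are hypothetical illustrations, not the project's actual method.

```python
import numpy as np

class TemporalToneMapper:
    """Global tone mapping with temporally smoothed adaptation.
    The log-average luminance is filtered with a leaky integrator
    across frames to attenuate flicker under changing illumination.
    """
    def __init__(self, key=0.18, smoothing=0.9):
        self.key = key              # target mid-grey of the mapped frame
        self.smoothing = smoothing  # 0 = no memory; closer to 1 = slower adaptation
        self.log_avg = None         # running estimate of scene luminance

    def map_frame(self, lum):
        """Map one frame of scene luminance (positive floats) to [0, 1)."""
        # Log-average (geometric mean) luminance; epsilon avoids log(0).
        frame_log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
        if self.log_avg is None:
            self.log_avg = frame_log_avg
        else:
            # Leaky integrator: adapt gradually to the new illumination level.
            self.log_avg = (self.smoothing * self.log_avg
                            + (1.0 - self.smoothing) * frame_log_avg)
        scaled = self.key * lum / self.log_avg
        return scaled / (1.0 + scaled)  # compress into displayable range
```

Per-frame application of a still-image operator would recompute the adaptation level independently each frame, which is exactly what produces visible flicker; the running estimate above is one simple way to restore temporal coherence.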