The advance of technology continuously enables new luminaire designs and concepts. Evaluating such designs has
traditionally been done using actual prototypes, in a real environment. The iterations needed to build, verify, and
improve luminaire designs incur substantial costs and slow down the design process. A more attractive alternative is to evaluate designs using simulations, which are cheaper and faster and can cover a wider variety of prototypes. However, the value of such simulations is determined by how closely they predict the outcome of actual perception experiments.
In this paper, we discuss an actual perception experiment covering several lighting settings in a normal office environment. The same office environment has also been modeled using different software tools, and photo-realistic renderings of these models have been created. These renderings were subsequently processed using various tonemapping operators in preparation for display. The complete imaging chain can be considered a simulation setup, and we have run perception experiments on several such setups. Our real interest is in finding which imaging chain gives the best result, in other words, which of them yields the closest match between the virtual and the real experiment.
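To make the tonemapping step in this imaging chain concrete, here is a minimal sketch of one widely used global operator (the photographic operator of Reinhard et al.). The operator choice, key value, and synthetic input are illustrative assumptions, not the specific operators evaluated in the experiments.

```python
import numpy as np

def reinhard_global(hdr, key=0.18, eps=1e-6):
    """Global Reinhard tonemapping: scale scene luminance to a
    target key, then compress with L_d = L_s / (1 + L_s)."""
    # Log-average luminance of the scene (its "key").
    l_avg = np.exp(np.mean(np.log(hdr + eps)))
    # Scale so the log-average maps onto the chosen key value.
    l_scaled = (key / l_avg) * hdr
    # Compress to [0, 1) for display.
    return l_scaled / (1.0 + l_scaled)

# Synthetic HDR luminance map standing in for a rendering.
hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(480, 640))
ldr = reinhard_global(hdr)
assert 0.0 <= ldr.min() and ldr.max() < 1.0
```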
To answer this question, we first need a way to determine which simulation setup matches the real world best. As there is no unique, widely accepted measure for the performance of a given setup, we consider a number of options and discuss the reasoning behind each, along with its advantages and disadvantages.
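As one example of such an option, here is a minimal sketch assuming each experiment yields a mean rating per lighting setting: Pearson correlation between real and virtual ratings captures agreement in the pattern across settings, while the root-mean-square error captures absolute agreement. Both measures and the rating values below are illustrative assumptions, not data or conclusions from the experiments.

```python
import numpy as np

def agreement(real, virtual):
    """Two candidate performance measures for a simulation setup:
    Pearson correlation (pattern agreement across settings) and
    RMSE (absolute agreement) between mean ratings."""
    real = np.asarray(real, dtype=float)
    virtual = np.asarray(virtual, dtype=float)
    r = np.corrcoef(real, virtual)[0, 1]
    rmse = np.sqrt(np.mean((real - virtual) ** 2))
    return r, rmse

# Hypothetical mean ratings (e.g. on a 1-7 scale) for four settings.
real_ratings    = [5.1, 3.4, 6.0, 4.2]
virtual_ratings = [4.8, 3.9, 5.7, 4.5]
r, rmse = agreement(real_ratings, virtual_ratings)
print(f"correlation={r:.2f}, rmse={rmse:.2f}")
```

A high correlation combined with a large RMSE, for instance, would indicate a setup that preserves the ranking of lighting settings but shifts all appraisals, a distinction the two measures weigh differently.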
We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed from these images using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics; in combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience.

To identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. For each term, representative colors are extracted from the collected images using either a histogram-based or a mean-shift-based algorithm; the extraction exploits the non-uniform distribution of the colors found in the large repositories. The images ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room.

We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the color distance between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a KL-divergence-based measure of a term's suitability for color extraction is proposed. Finally, we compare the performance of the algorithm on the automatically indexed Google Images repository and on the manually annotated Flickr.com. Based on the results of these experiments, we conclude that the presented method can compute a relevant color for a term using a large image repository and image processing.
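To illustrate the noun-phrase selection step, here is a minimal sketch using NLTK's part-of-speech tagger with a simple chunk grammar. The toolkit, the grammar, and the example line are illustrative assumptions, not necessarily the tagger used in the prototype.

```python
import nltk  # assumes the required tokenizer and tagger models are downloaded

def noun_phrases(line):
    """Select candidate query terms: simple noun phrases found by
    part-of-speech tagging followed by shallow chunking."""
    tagged = nltk.pos_tag(nltk.word_tokenize(line))
    # Chunk grammar: optional determiner, any adjectives, nouns.
    chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")
    tree = chunker.parse(tagged)
    return [" ".join(word for word, _ in subtree.leaves())
            for subtree in tree.subtrees() if subtree.label() == "NP"]

# A lyrics-like line yields candidate terms such as "a yellow submarine".
print(noun_phrases("We all live in a yellow submarine"))
```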
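The histogram-based color extraction can be sketched as follows: pool the pixels of the images collected for a term, build a coarse RGB histogram, and report the centre of the most populated bin, relying on the non-uniform color distribution noted above. The bin count and color space are assumptions for illustration; a mean-shift variant would instead climb to the densest mode directly in color space.

```python
import numpy as np

def representative_color(pixels, bins=8):
    """Histogram-based variant: quantize pooled RGB pixels into a
    coarse 3-D histogram and return the centre of the fullest bin."""
    pixels = np.asarray(pixels, dtype=float)      # shape (n, 3), 0..255
    hist, edges = np.histogramdd(pixels, bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
    idx = np.unravel_index(np.argmax(hist), hist.shape)
    # Centre of the winning bin along each channel.
    return tuple((edges[c][i] + edges[c][i + 1]) / 2.0
                 for c, i in enumerate(idx))

# Toy pixel pool: a dominant reddish cluster plus uniform noise.
pool = np.vstack([np.random.normal([220, 40, 40], 12, (900, 3)),
                  np.random.uniform(0, 255, (100, 3))]).clip(0, 255)
print(representative_color(pool))   # a reddish bin centre
```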
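The KL-divergence-based suitability measure can be sketched in the same spirit: compare a term's color histogram with a generic background histogram, taking strong divergence as evidence of a distinctive, extractable color. The histogram layout and the background model below are hypothetical stand-ins; the exact formulation used in the evaluation may differ.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete color histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# A term dominated by one hue bin diverges strongly from a
# near-uniform background, suggesting it suits color extraction.
term_hist       = [120, 10, 5, 3, 2]
background_hist = [30, 28, 27, 29, 26]
print(kl_divergence(term_hist, background_hist))
```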