Museums all over the world store a large variety of digitized paintings and other works of art with significant historical value. Over time, these works of art deteriorate and lose their original splendour. For paintings, cracks and paint losses are the most prominent types of deterioration, caused mainly by environmental factors such as fluctuations in temperature or humidity, improper storage conditions and even physical impacts. We propose a neural network architecture for the detection of crack patterns in paintings, using visual acquisitions from different modalities. The proposed architecture is composed of two neural network streams: one a fully connected neural network and the other a multiscale convolutional neural network. The convolutional neural network plays the leading role in the crack classification task, while the fully connected neural network plays an auxiliary role. To reduce the overall computational complexity of the proposed method, we use morphological filtering as a pre-processing step to safely exclude areas of the image that do not contain cracks and therefore do not need further processing. We validate the proposed method on a multimodal visual dataset from the Ghent Altarpiece, a world-famous polyptych by the Van Eyck brothers. The results show encouraging performance of the proposed approach compared to traditional machine learning methods and the state-of-the-art Bayesian Conditional Tensor Factorization (BCTF) method for crack detection.
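The morphological pre-filtering step mentioned above can be illustrated with a black top-hat transform, which highlights thin dark structures such as cracks so that smooth background regions can be excluded from further processing. The sketch below is not the authors' implementation; it is a minimal numpy-only illustration of the idea, with an assumed 3x3 structuring element and an assumed threshold value.

```python
import numpy as np

def _window_filter(img, k, reduce_fn):
    """Apply a min or max filter with a k x k window (edge padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = reduce_fn(p[i:i + k, j:j + k])
    return out

def crack_candidates(img, k=3, thresh=0.2):
    """Candidate crack mask via a black top-hat transform.

    Morphological closing = dilation (max filter) followed by
    erosion (min filter); subtracting the original image leaves
    thin dark structures (crack candidates). Pixels below the
    threshold are safely excluded from further classification.
    """
    closed = _window_filter(_window_filter(img, k, np.max), k, np.min)
    tophat = closed - img        # large only on thin dark details
    return tophat > thresh       # boolean candidate mask

# Toy example: a bright panel with a one-pixel-wide dark "crack"
panel = np.ones((8, 8))
panel[:, 4] = 0.0
mask = crack_candidates(panel)   # True only along the dark line
```

Only the pixels flagged in `mask` would then be passed to the two-stream classifier, which is what reduces the overall computational cost.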
In the restoration process of classical paintings, one of the tasks is to map paint loss for documentation and analysis purposes. Because this is such a sizeable and tedious task, automatic techniques are in high demand. The currently available tools allow only rough mapping of the paint loss areas while still requiring considerable manual work. We develop here a learning method for paint loss detection that makes use of multimodal image acquisitions, and we apply it within the current restoration of the Ghent Altarpiece. Our neural network architecture is inspired by a multiscale convolutional neural network known as U-Net. In our proposed model, the downsampling of the pooling layers is omitted to enforce translation invariance, and the convolutional layers are replaced with dilated convolutions. The dilated convolutions lead to denser computations and improved classification accuracy. Moreover, the proposed method is designed to make use of multimodal data, which are nowadays routinely acquired during the restoration of master paintings and which allow more accurate detection of features of interest, including paint losses. Our focus is on developing a robust approach with minimal user intervention. Adequate transfer learning is crucial here in order to extend the applicability of pre-trained models to paintings that were not included in the training set, with only modest additional re-training. We introduce a pre-training strategy based on a multimodal convolutional autoencoder and fine-tune the model when applying it to other paintings. We evaluate the results by comparing the detected paint loss maps to manual expert annotations, and also by running virtual inpainting based on the detected paint losses and comparing the virtually inpainted results with the actual physical restorations. The results clearly indicate the efficacy of the proposed method and its potential to assist in art conservation and restoration processes.
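The key architectural idea above, replacing pooling with dilated convolutions so that the receptive field grows without any downsampling, can be illustrated in isolation. The following is a minimal numpy sketch of a single 2D dilated convolution with "same" output size, not the paper's network; the kernel, padding mode and dilation rates are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, w, dilation=1):
    """2D dilated convolution with 'same' output size (edge padding).

    A k x k kernel with dilation d covers an effective window of
    d*(k-1)+1 pixels, so stacking layers with dilations 1, 2, 4, ...
    expands the receptive field rapidly while the output keeps the
    full input resolution -- no pooling/downsampling is needed.
    """
    k = w.shape[0]
    span = dilation * (k - 1) + 1        # effective kernel extent
    pad = span // 2
    p = np.pad(x, pad, mode="edge")
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            # sample the input on a dilated grid of stride `dilation`
            patch = p[i:i + span:dilation, j:j + span:dilation]
            out[i, j] = float((patch * w).sum())
    return out

# The output has the same spatial size as the input for any dilation,
# which is what preserves dense, per-pixel paint-loss predictions.
x = np.arange(16, dtype=float).reshape(4, 4)
w = np.zeros((3, 3)); w[1, 1] = 1.0      # identity kernel
y = dilated_conv2d(x, w, dilation=2)     # same shape as x
```

With an identity kernel the output equals the input for any dilation rate, which is a convenient sanity check that the dilated sampling grid is centred correctly.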