Object-to-features vectorisation is a hard problem for objects that are difficult to distinguish. Siamese and Triplet neural networks are among the more recent tools for this task. However, most networks used are very deep, which makes them costly to compute in Internet of Things settings. In this paper, we propose a computationally efficient neural network for real-time object-to-features vectorisation into a Euclidean metric space. We use the L<sub>2</sub> distance to reflect feature-vector similarity during both training and testing, so the resulting feature vectors can be easily classified with a K-Nearest Neighbours classifier. This approach can be used to train networks to vectorise “problematic” objects such as images of human faces and keypoint image patches, e.g. keypoints on maps of the Arctic and surrounding marine areas.
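The classification step described above can be sketched with a minimal K-Nearest Neighbours vote over L<sub>2</sub> distances. This is an illustrative NumPy implementation, not the paper's code; the toy embeddings and the `knn_classify` helper are assumptions for demonstration.

```python
import numpy as np

def knn_classify(query, vectors, labels, k=3):
    """Classify a query embedding by majority vote among its k
    nearest neighbours under the L2 (Euclidean) distance."""
    dists = np.linalg.norm(vectors - query, axis=1)  # L2 distance to each stored vector
    nearest = np.argsort(dists)[:k]                  # indices of the k closest vectors
    votes = labels[nearest]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]                 # majority label

# Toy example: two well-separated clusters standing in for learned feature vectors
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(10, 4))
cluster_b = rng.normal(loc=1.0, scale=0.1, size=(10, 4))
vectors = np.vstack([cluster_a, cluster_b])
labels = np.array([0] * 10 + [1] * 10)

print(knn_classify(np.full(4, 0.9), vectors, labels))  # query near cluster 1
```

Because similarity is measured in the same L<sub>2</sub> metric used during training, no additional learned classifier is needed at inference time.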
Computing image patch descriptors for correspondence problems has relied heavily on hand-crafted feature transformations such as SIFT and SURF. In this paper, we explore a Siamese pairing of fully connected neural networks for learning discriminative local feature descriptors. The resulting network computes 128-D descriptors and demonstrates a consistent speedup over state-of-the-art methods such as SIFT and FREAK on PCs as well as on embedded systems. We use the L<sub>2</sub> distance to reflect descriptor similarity during both training and testing, so the proposed feature descriptors can be compared directly with their hand-crafted counterparts. We also created a dataset augmented with synthetic data for learning local features, and it is available online. The augmentations provide training data that help our descriptors generalise well to scaling, rotation, shift, Gaussian noise, and illumination changes.
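Since the learned 128-D descriptors live in an L<sub>2</sub> metric space, they plug into the same brute-force nearest-neighbour matching used for hand-crafted descriptors. The sketch below, a NumPy assumption rather than the paper's implementation, matches two descriptor sets by minimising squared L<sub>2</sub> distance; the synthetic shuffled-and-noised descriptors stand in for descriptors of corresponding patches.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """For each 128-D descriptor in desc_a, return the index of its
    nearest neighbour in desc_b under the L2 distance (brute force)."""
    # Squared L2 distance matrix via broadcasting: shape (len(a), len(b))
    diff = desc_a[:, None, :] - desc_b[None, :, :]
    d2 = np.einsum('ijk,ijk->ij', diff, diff)
    return d2.argmin(axis=1)  # index of best match in desc_b per row of desc_a

# Synthetic check: desc_a is a noisy, shuffled copy of desc_b,
# so matching should recover the permutation.
rng = np.random.default_rng(1)
desc_b = rng.normal(size=(5, 128))
perm = np.array([3, 0, 4, 1, 2])
desc_a = desc_b[perm] + 0.01 * rng.normal(size=(5, 128))

print(match_descriptors(desc_a, desc_b))
```

In practice a ratio test or cross-check is usually added on top of the raw nearest neighbour to reject ambiguous matches, exactly as with SIFT descriptors.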