We introduce a simple but effective deep fully connected neural network (FNN) to solve edit propagation as a multiclass pixel-level classification task. We construct the feature space from three-dimensional normalized RGB (or one-dimensional grayscale) vectors together with spatial coordinates. Our deep FNN-based model consists of color feature extraction, spatial feature extraction, feature combination, and classifier estimation. We train the model using only the features within the regions labeled by user strokes in a single image. The method then outputs the edit propagation result directly after a single forward pass, without any refinement step. It automatically determines the importance of each image feature across the whole image by jointly considering the feature vectors from the user strokes. Extensive experiments demonstrate that the proposed algorithm achieves superior performance over state-of-the-art methods while remaining simple and efficient.
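To make the described architecture concrete, the following is a minimal PyTorch sketch of a per-pixel FNN with separate color and spatial branches, a feature-combination stage, and a classifier head. The class name, layer widths, depths, and activations are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class EditPropagationFNN(nn.Module):
    """Sketch of the described pipeline: color feature extraction,
    spatial feature extraction, feature combination, and classification.
    All sizes are placeholder assumptions."""

    def __init__(self, num_classes, color_dim=3, spatial_dim=2, hidden=64):
        super().__init__()
        # Color branch: normalized RGB (or 1-D grayscale) vector per pixel.
        self.color_branch = nn.Sequential(
            nn.Linear(color_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Spatial branch: normalized (x, y) pixel coordinates.
        self.spatial_branch = nn.Sequential(
            nn.Linear(spatial_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Combine both feature sets, then predict a class score per pixel.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, color, coords):
        # color: (N, color_dim), coords: (N, spatial_dim) for N pixels.
        feat = torch.cat(
            [self.color_branch(color), self.spatial_branch(coords)], dim=-1
        )
        return self.classifier(feat)  # (N, num_classes) scores per pixel

# Usage: classify every pixel of a 256x256 RGB image into 3 edit classes.
model = EditPropagationFNN(num_classes=3)
pixels = torch.rand(256 * 256, 3)   # normalized RGB features
coords = torch.rand(256 * 256, 2)   # normalized spatial coordinates
labels = model(pixels, coords).argmax(dim=-1)
```

In practice, training such a network would use only the pixels covered by the user strokes as labeled samples, then the forward pass over all pixels produces the propagated edit map directly.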