In this paper the idea of a deep learning classifier is developed. The effectiveness of discriminative classifiers, such as the multilayer perceptron or the support vector machine, can be improved by adding data preprocessing blocks: orthogonal feature selection (the Gram-Schmidt method) and nonlinear principal component analysis. We present a case study of various structures (scenarios) of deep learning systems.
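A minimal sketch of the orthogonal feature selection idea described above (greedy Gram-Schmidt deflation; the function name and the correlation-style scoring are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def gram_schmidt_select(X, y, k):
    """Greedily pick k features: at each step choose the feature most
    correlated with the (deflated) target, then project that feature's
    direction out of the remaining features and out of the target."""
    X = X.astype(float).copy()
    y = y.astype(float).copy()
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(X, axis=0)
        norms[norms < 1e-12] = 1e-12            # avoid division by zero
        scores = np.abs(X.T @ y) / norms        # correlation-style ranking
        scores[selected] = -np.inf              # never re-pick a feature
        j = int(np.argmax(scores))
        selected.append(j)
        q = X[:, j] / np.linalg.norm(X[:, j])   # unit direction of the pick
        X -= np.outer(q, q @ X)                 # Gram-Schmidt deflation
        y -= q * (q @ y)
    return selected
```

Because each picked direction is projected out, later picks carry information orthogonal to what is already selected, which is the point of the Gram-Schmidt approach.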
This article presents a novel combination of the Recursive Auto-Associative Memory (RAAM) model with the Sensitivity-Based Linear Learning Method (SBLLM). Training results on a dataset of syntactic trees are presented, confirming that applying SBLLM to the RAAM model results in very fast learning and yields clustering results of the same quality as the original RAAM model.
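To make the pairing concrete, the RAAM half can be sketched structurally: an encoder compresses two fixed-size child representations into one parent vector of the same size, applied bottom-up over a tree, and a decoder reconstructs the children. The weights below are untrained and all names and sizes are illustrative; the SBLLM training step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8  # size of every node representation

# Untrained RAAM-style compressor/reconstructor (structural sketch only):
W_enc = rng.standard_normal((H, 2 * H)) * 0.1
W_dec = rng.standard_normal((2 * H, H)) * 0.1

def compress(left, right):
    """Encode two child vectors into one fixed-size parent vector."""
    return np.tanh(W_enc @ np.concatenate([left, right]))

def reconstruct(parent):
    """Decode a parent vector back into its two children."""
    out = np.tanh(W_dec @ parent)
    return out[:H], out[H:]

# Encode the tree ((a b) c): leaves are fixed-size vectors.
a, b, c = (rng.standard_normal(H) for _ in range(3))
root = compress(compress(a, b), c)
```

Training would adjust `W_enc` and `W_dec` so that `reconstruct(compress(l, r))` approximates `(l, r)`; the article's contribution is doing that adjustment with SBLLM instead of backpropagation.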
This article summarises the results of implementing a Graph Neural Network (GNN) classifier. The GNN model is a connectionist model capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function that is a contraction map, which is assured by imposing a penalty on the model weights. This article presents research results concerning the impact of the penalty parameter on the model training process, along with the practical decisions made during the GNN implementation process.
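One simple way to realize such a penalty can be sketched as follows, under the assumption that the spectral norm of the transition weights bounds the map's Lipschitz constant; the threshold `mu` and the quadratic form are illustrative, not the article's exact formulation.

```python
import numpy as np

def contraction_penalty(W, mu=0.9):
    """Penalize the transition weights when the spectral norm of W
    (the Lipschitz constant of the linear map x -> W x, and an upper
    bound for x -> tanh(W x)) exceeds the contraction threshold mu < 1."""
    lip = np.linalg.norm(W, 2)          # largest singular value of W
    return max(0.0, lip - mu) ** 2

contraction_penalty(0.5 * np.eye(2))    # 0.0: already a contraction
contraction_penalty(2.0 * np.eye(2))    # positive: pushes weights back
```

Added to the training loss, a term like this leaves contractive weights untouched and grows quadratically once the bound is violated, which is the behavior a penalty parameter would then scale.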
This article describes processing methods used for the classification of short amino acid sequences. The data processed are 9-symbol string representations of amino acid sequences, divided into 49 data sets, each containing samples labeled as reacting or not reacting with a given enzyme. The goal of the classification is to determine, for a single enzyme, whether an amino acid sequence would react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The feature selection method consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees (CART). Afterwards, symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase, the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data are used for training LS-SVM classifiers.
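The substitution step of the first phase can be sketched as a simple table lookup. The property values below are hydropathy-like numbers used purely for illustration; the real pipeline draws them from the AAindex database, and the selected positions come from the CART phase.

```python
import numpy as np

# Illustrative property table (hydropathy-like values; a stand-in for
# one AAindex entry, not the database itself).
PROPERTY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5,
            "G": -0.4, "K": -3.9, "L": 3.8}

def encode(seq, positions, table=PROPERTY):
    """Replace the symbols at the selected positions with numeric
    property values, yielding a real-valued feature vector."""
    return np.array([table[seq[p]] for p in positions])

encode("ARNDGKLAR", positions=[0, 3, 6])   # array([ 1.8, -3.5,  3.8])
```

Each 9-symbol string thus becomes a short numeric vector, which the correlation ranking and Gram-Schmidt orthogonalization of the second phase can then reduce further.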
SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as the error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting classifiers with the best values of the performance measures.
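A hedged sketch of this idea: a plain DE/rand/1/bin loop stands in for SPDE (which additionally self-adapts its control parameters), the fitness function is a toy stand-in for cross-validated classifier error, and the additive feasibility penalty on C and the kernel parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Toy stand-in for cross-validated error over (C, gamma), with a
    simple additive score penalty for infeasible values (C, gamma <= 0),
    steering the search toward valid classifier configurations."""
    C, gamma = params
    penalty = 1e3 * (max(0.0, -C) + max(0.0, -gamma))
    return ((np.log(max(C, 1e-12)) - 1) ** 2
            + (np.log(max(gamma, 1e-12)) + 2) ** 2
            + penalty)

# Minimal DE/rand/1/bin loop over a population of (C, gamma) pairs:
pop = rng.uniform(0.1, 10.0, size=(20, 2))
for _ in range(200):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        mutant = a + 0.8 * (b - c)                       # F = 0.8
        trial = np.where(rng.random(2) < 0.9, mutant, pop[i])  # CR = 0.9
        if fitness(trial) < fitness(pop[i]):             # greedy selection
            pop[i] = trial
best = pop[min(range(len(pop)), key=lambda i: fitness(pop[i]))]
```

In the article's setting the fitness would be a real validation score of the trained LS-SVM, with the score penalty playing the same role as the feasibility term here.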