KEYWORDS: Computer intrusion detection, Statistical modeling, Databases, Network security, Computing systems, Data mining, Matrices, Local area networks, Detection and tracking algorithms, Systems modeling
Signature-based intrusion detection systems look for known, suspicious patterns in the input data. In this paper we explore compression of labeled empirical data using threshold-based clustering with regularization. The main goal of the clustering is to compress the training dataset into a limited number of signatures and, as a result, to minimize the number of comparisons needed to determine the status of an input event. Essentially, the clustering process merges clusters that are close enough, reducing the original dataset to a limited number of labeled centroids. Combined with the k-nearest-neighbor (kNN) method, this set of centroids can be used as a multi-class classifier. Clearly, different attributes have different importance depending on the particular training database; this importance can be regulated in the definition of the distance via linear weight coefficients. The paper introduces a special procedure to estimate these weight coefficients. Experiments on the KDD-99 intrusion detection dataset confirm the effectiveness of the proposed methods.
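The pipeline described above — threshold-based merging into labeled centroids, then kNN over the centroids with an attribute-weighted distance — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the greedy single-pass merging rule, the function names, and the incremental-mean update are assumptions, and the weight-estimation procedure from the paper is not reproduced (weights are taken as given).

```python
import numpy as np

def weighted_distance(x, y, w):
    # Weighted Euclidean distance; w holds per-attribute weight coefficients.
    return np.sqrt(np.sum(w * (x - y) ** 2))

def compress_to_centroids(X, labels, w, threshold):
    """Greedy threshold-based compression: each point is merged into the
    nearest centroid of the same label if it is closer than the threshold;
    otherwise it starts a new centroid."""
    centroids, centroid_labels, counts = [], [], []
    for x, lab in zip(X, labels):
        best, best_d = None, threshold
        for i, (c, cl) in enumerate(zip(centroids, centroid_labels)):
            if cl != lab:
                continue
            d = weighted_distance(x, c, w)
            if d < best_d:
                best, best_d = i, d
        if best is None:
            centroids.append(x.astype(float))
            centroid_labels.append(lab)
            counts.append(1)
        else:
            counts[best] += 1
            # Incrementally update the running mean of the merged cluster.
            centroids[best] += (x - centroids[best]) / counts[best]
    return np.array(centroids), centroid_labels

def knn_classify(x, centroids, centroid_labels, w, k=1):
    # kNN over the compressed set of labeled centroids: far fewer
    # comparisons than kNN over the full training dataset.
    d = [weighted_distance(x, c, w) for c in centroids]
    idx = np.argsort(d)[:k]
    votes = [centroid_labels[i] for i in idx]
    return max(set(votes), key=votes.count)
```

The compression step is what bounds the classifier's cost: the number of distance computations per query drops from the training-set size to the number of surviving centroids.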
Dimensionality reduction can be effective for compressing data without loss of essential information; it can also smooth data and reduce random noise. The model presented in this paper was motivated by the structure of the msweb web-traffic dataset from the UCI archive. We propose to reduce the dimension (the number of used web-areas, or vroots) through an unsupervised learning process that maximizes a specially defined average log-likelihood divergence. Two web-areas are merged if they frequently appear together during the same sessions. Importantly, the roles of the web-areas in the merging process are not symmetrical: the web-area or cluster with the bigger weight acts as an attractor and stimulates merging, while the smaller cluster tries to keep its independence. In both cases, the power of attraction or resistance depends on the weights of the corresponding clusters. This strategy prevents the creation of one super-big cluster and helps to reduce the number of non-significant clusters. The proposed method is illustrated with two synthetic examples. The first is based on an ideal vlink matrix, which characterizes the weights of the vroots and the relations between them; the vlink matrix for the second example was generated using a specially designed web-traffic simulator.
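The weight-asymmetric merging over a vlink matrix can be sketched as below. This is only a schematic illustration under stated assumptions: the paper's average log-likelihood divergence is not defined in the abstract, so the merge score here (co-occurrence relative to the weight of the smaller cluster, so a heavy cluster attracts while a light partner resists) is a hypothetical stand-in, as are the function name and threshold parameter.

```python
import numpy as np

def merge_vroots(V, threshold=0.5):
    """Greedy pairwise merging of vroots/clusters.

    V is a symmetric "vlink" matrix: V[i, j] counts sessions where
    areas i and j occur together, and V[i, i] is the weight of area i.
    At each step the best-scoring pair above the threshold is merged.
    """
    V = V.astype(float).copy()
    groups = [[i] for i in range(len(V))]
    merged = True
    while merged:
        merged = False
        n = len(V)
        best = (None, None, threshold)
        for i in range(n):
            for j in range(i + 1, n):
                # Asymmetric rule: the smaller cluster's weight is the
                # resistance that the co-occurrence link must overcome.
                small = min(V[i, i], V[j, j])
                score = V[i, j] / small if small > 0 else 0.0
                if score > best[2]:
                    best = (i, j, score)
        i, j, _ = best
        if i is not None:
            # Fold cluster j into cluster i by summing its row and column;
            # the merged diagonal also absorbs the cross-link weight.
            V[i, :] += V[j, :]
            V[:, i] += V[:, j]
            V = np.delete(np.delete(V, j, axis=0), j, axis=1)
            groups[i] += groups.pop(j)
            merged = True
    return groups, V
```

Because the score normalizes by the smaller weight rather than the larger one, a dominant cluster cannot cheaply absorb everything: each candidate partner's own weight raises the bar, which is the behavior the abstract describes for avoiding one super-big cluster.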
KEYWORDS: Data modeling, Model-based design, Curium, Detection and tracking algorithms, Expectation maximization algorithms, Prototyping, Fuzzy logic, Bismuth, Internet, Chemical elements
Parametric, model-based algorithms learn generative models from the data, with each model corresponding to one particular cluster. Accordingly, a model-based partitional algorithm selects the most suitable model for each data object (Clustering step) and then recomputes the parametric models using the data from the corresponding clusters (Maximization step). This Clustering-Maximization framework has been widely used and has shown promising results in many applications, including complex variable-length data. The paper proposes the Experience-Innovation (EI) method as a natural extension of the Clustering-Maximization framework. The method includes three components: (1) keep the best past experience, which makes the empirical likelihood trajectory monotonic; (2) find a new model as a function of the existing models so that the corresponding cluster splits existing clusters with a larger number of elements and lower uniformity; (3) heuristic innovations, for example, several trials with random initial settings. We also introduce clustering regularization based on a balance of two conditions: (1) the significance of any particular cluster; (2) the difference between any two clusters. We illustrate the effectiveness of the proposed methods using a first-order Markov model applied to a large web-traffic dataset, where the aim is to explain and understand the way people interact with web sites.
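The Clustering-Maximization loop with first-order Markov models, plus EI component (1) (keep the best past experience), can be sketched as follows. This is a minimal sketch under assumptions: the smoothing constant, function names, and random initialization are illustrative, and EI components (2) and (3) (model splitting and heuristic restarts) are omitted for brevity.

```python
import numpy as np

def fit_markov(seqs, n_states, alpha=1.0):
    # Maximization step: estimate a transition matrix from the cluster's
    # sequences, with additive (Laplace) smoothing so empty clusters
    # fall back to a uniform model.
    T = np.full((n_states, n_states), alpha)
    for s in seqs:
        for a, b in zip(s[:-1], s[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def loglik(seq, T):
    # Log-likelihood of one variable-length sequence under model T.
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

def clustering_maximization(seqs, n_clusters, n_states, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.integers(n_clusters, size=len(seqs))
    best_z, best_L = z.copy(), -np.inf
    for _ in range(n_iter):
        # Maximization step: refit one Markov model per cluster.
        models = [fit_markov([s for s, c in zip(seqs, z) if c == k],
                             n_states) for k in range(n_clusters)]
        # Clustering step: assign each sequence to its best model.
        z = np.array([int(np.argmax([loglik(s, T) for T in models]))
                      for s in seqs])
        L = sum(loglik(s, models[c]) for s, c in zip(seqs, z))
        # EI component (1): keep the best assignment seen so far, so the
        # reported likelihood trajectory is monotonic across iterations.
        if L > best_L:
            best_L, best_z = L, z.copy()
    return best_z, best_L
```

Keeping the best past assignment matters because hard (partitional) reassignment, unlike soft EM, does not guarantee that the likelihood improves at every iteration.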
Conference Committee Involvement (1)
Data Mining, Intrusion Detection, Information Assurance, and Data Networks Security 2006
17 April 2006 | Orlando (Kissimmee), Florida, United States