This PDF file contains the front matter associated with SPIE Proceedings Volume 10185 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Decisions about intrusion detection and/or prevention system (IDPS) enhancements are often based solely on tool effectiveness (i.e., predictive performance). However, in many cases, the tools in an IDPS operate in information environments where malicious behavior is difficult to discern and computational resources are limited. We develop three novel IDPS performance models motivated by the return on investment (ROI) metric, where each model is designed to compare each tool’s relative contribution to system-level performance over multiple scenarios and configurations. Each of our approaches combines statistical accuracy metrics and computational resource costs into one model to facilitate decision making on IDPS configurations.
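One way such a combined accuracy-and-cost model could look is sketched below. This is a minimal illustration of the benefit/cost idea behind an ROI-style metric, not the paper's actual models; the function name, weighting, and all numeric values are assumptions for the example.

```python
def roi_score(true_positives, false_positives, benefit_per_tp, cost_per_fp, compute_cost):
    """Illustrative ROI-style score for one IDPS tool: detection benefit
    minus error and resource costs, normalized by total cost.
    (Hypothetical formula, not the models from the paper.)"""
    gain = true_positives * benefit_per_tp
    loss = false_positives * cost_per_fp + compute_cost
    return (gain - loss) / loss if loss > 0 else float("inf")

# Compare two hypothetical tool configurations on the same traffic sample:
# tool A catches more attacks, but tool B is cheaper and more precise.
tool_a = roi_score(true_positives=90, false_positives=10,
                   benefit_per_tp=5.0, cost_per_fp=1.0, compute_cost=40.0)
tool_b = roi_score(true_positives=80, false_positives=2,
                   benefit_per_tp=5.0, cost_per_fp=1.0, compute_cost=10.0)
```

Under this toy weighting, tool B's lower resource cost outweighs its lower detection count, showing how a combined metric can reorder tools relative to accuracy alone.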
We examine how the hardware-level security features in the OS Friendly Microprocessor Architecture improve cybersecurity against a rootkit attack. A rootkit (root + kit) is a malicious program or tool-“kit” of programs designed to obtain “root”-level privileges (root for Unix, admin for Windows). Rootkits operate at the same security ring level as an operating system, which gives them access to kernel-level data structures. Even with state-of-the-art security technologies, it is very difficult to detect a rootkit. Rootkits have been used for digital rights management and copy protection; however, the 2005 CD copy-protection scandal illustrates how poor computer security can leave an open door for other malware. We present a security model of the OS Friendly Microprocessor Architecture along with a short introduction to rootkits. For this paper, we focus on OS-kernel-level rootkits and illustrate how the hardware security features of the OS Friendly Microprocessor Architecture increase the difficulty for rootkit malware to compromise a computer system.
Previous work has demonstrated that machine learning-based network intrusion detection systems (IDS) can be constructed to provide a significant proportion of the accuracy of a conventional signature-based IDS while using a fraction of the resources. Such systems are ideally suited to mobile tactical networks, which typically require much denser sensor coverage to ensure complete network protection and have relatively limited size, weight, and power budgets within which to both protect and operate the network. In this study, we extend previous work on the Extremely Lightweight Intrusion Detection system (ELIDe) and examine its ability to both store a wide range of signatures and generalize to new data. We also demonstrate the following: (1) ELIDe weight vectors are capable of storing multiple signatures while not significantly affecting the false-positive rate; (2) such weight vectors can detect packets that match the signatures on which they were trained with a high degree of accuracy (low false-negative rate); and (3), in addition to approximating the output of a conventional set of signatures, ELIDe weight vectors can also weakly generalize to novel malicious traffic. We show that, despite the significant challenges mobile tactical networks pose for intrusion detection, the use of machine learning allows the deployment of approximate signature-based intrusion detection in such networks.
Cybersecurity threats to autonomous robots present a particular danger, as compromised robots can directly and catastrophically affect their surroundings. A two-stage intrusion detection system is proposed that consists of a signature detection component and an anomaly detection component. The anomaly detection component utilizes a deep neural network that is trained to detect commands that deviate from expected behavior. This paper presents ongoing work on the development and testing of this system and concludes with a discussion of directions for future work.
The Internet of Things (IoT) and Internet of Everything (IoE) have driven the proliferation of processors into nearly every powered device around us: from thermostats to refrigerators to light bulbs. From a security perspective, IoT/IoE creates a new layer of signals and systems that can be exploited to access supporting network layers. Our research focuses on leveraging the analog side channels of IoT/IoE processors for defensive purposes. We apply signal-processing and machine-learning techniques to collected RF emissions to detect whether the code running on the processor has been modified (i.e., corrupted or injected with malware). The paper describes our process for positioning a wide-bandwidth RF probe over the device under test (DuT). Classifiers are implemented for identifying the code running on the device. We demonstrate the ability to detect, identify, and isolate instructions based on signatures learned during initial DuT characterization. The probe is positioned to capture RF signals from which support-vector machine (SVM) classifiers can accurately discriminate between instructions, rather than relying on raw power leakage. At this well-discriminated location, the signatures of each instruction are extracted by applying principal component analysis (PCA) to separate its signal into components (fetch, opcode, operands, and values). These signatures are used to identify instructions in the test code. Additionally, this paper discusses applying our methodology to blocks of code/algorithms using sequence-learning algorithms. These techniques enable a significant reduction in feature dimensions, improving the speed and accuracy of instruction-level classification of low-SNR RF side-channels.
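The idea of matching a captured emission trace against per-instruction signatures can be sketched with a nearest-centroid classifier, used here as a self-contained stand-in for the SVM stage. All trace values and instruction names are synthetic assumptions for illustration; the paper's actual PCA feature extraction and SVM training are not reproduced.

```python
import math

def centroid(traces):
    """Per-sample mean trace for one instruction class (its 'signature')."""
    n = len(traces)
    return [sum(t[i] for t in traces) / n for i in range(len(traces[0]))]

def classify_trace(trace, centroids):
    """Assign a trace to the instruction whose signature is closest in
    Euclidean distance (a simple stand-in for the SVM decision)."""
    return min(centroids, key=lambda name: math.dist(centroids[name], trace))

# Hypothetical averaged RF emission traces for two instructions,
# collected during an imagined DuT characterization phase.
training = {
    "NOP": [[0.10, 0.20, 0.10], [0.12, 0.18, 0.11]],
    "MUL": [[0.80, 0.90, 0.70], [0.82, 0.88, 0.72]],
}
centroids = {name: centroid(ts) for name, ts in training.items()}
label = classify_trace([0.79, 0.91, 0.69], centroids)
```

A real pipeline would first project raw traces through PCA to reduce dimensionality before classification; this sketch only conveys the signature-matching step.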
Side-Channel Analysis (SCA) is an increasingly well-known method for non-invasively extracting information from unintended “side-channel” emissions given off by electronic devices. The common method for extracting side-channel information is via a near-field antenna probe placed in the vicinity (i.e., millimeters) of the target device. The antenna detects and amplifies the radio-frequency (RF) emissions given off by the device and transmits the information for analysis and testing. Side-channel attacks are best known for their utility in cryptanalytics; however, they can also be used to fingerprint devices or even determine the digital state of the system. In this work, characterization studies on a 1-GHz antenna using Riscure’s RF probe station are performed. For RF-SCA, the ultimate limits of signal sensitivity and frequency response are determined by the antenna characteristics. In addition, the effective source-receiver distance (SRD), cross-talk, and spatial signal averaging at various SRDs have to be characterized for signal attenuation and normalization. From our testing, it appears that the Riscure probe has a peak frequency response at about 200 MHz. For example, the 418-MHz antenna had multiple peaks at 130 MHz, 172 MHz, 213 MHz, and 370 MHz, as well as multiple less significant protrusions at higher frequencies. The BeeHive100C probe peaked at exactly 200 MHz but had a couple of side-lobes in the 600-800 MHz range. The Pharad 30-512 MHz antenna peaked at a slightly lower 193 MHz, although some response was observed in the 600-800 MHz range, as in the other antennas. The Pharad 225-6000 MHz antenna exhibited a similar peak but less roll-off and an elevated response at higher frequencies than its predecessor.
Many research efforts have been devoted to applying machine learning (ML) algorithms to the task of Automatic Target Recognition (ATR). In the 1990s, ML techniques such as neural networks were less popular due to various technological barriers: computational resources were scarce and expensive. Today, computational resources are far less expensive; however, an abundance of sensor and business data needs to be analyzed in real time. High performance computing (HPC) enables ML-based decision making in real time or near real time. This research explores the application of deep learning algorithms, specifically convolutional neural networks, to the task of ATR in synthetic aperture radar (SAR) imagery. We developed a Convolutional Neural Network (CNN) architecture for achieving ATR in SAR imagery and found that classification accuracy levels of 99% can be achieved through the application of neural networks. We used graphics processing units (GPUs) to accomplish the computational tasks.
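The core operation of the convolutional layers described above can be sketched in a few lines. This is a minimal "valid"-mode 2-D convolution (technically cross-correlation, as implemented in most deep-learning frameworks) applied to a tiny synthetic image; the paper's actual CNN architecture, layer counts, and SAR preprocessing are not reproduced here, and the edge-detecting kernel is an illustrative assumption.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image and take
    the elementwise product-sum at each position. Output shrinks by
    (kernel size - 1) in each dimension."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel applied to a tiny synthetic "image" whose right
# half is bright: the feature map lights up along the edge.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
fmap = conv2d_valid(img, edge)
```

In a real CNN, many such kernels are learned from data and stacked with nonlinearities and pooling; GPUs accelerate exactly this product-sum inner loop.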
In this research, we propose to apply the Jaccard Similarity measure to quantify the extent of change in the neighborhood of a node in mobile sensor networks (MSNs) whose topology changes dynamically with time. We determine the weighted average of the Jaccard Similarity (WJS) scores of the neighborhood of a node over a period of time and claim that nodes with larger WJS values (0 ≤ WJS ≤ 1) are more likely to have a stable neighborhood, as well as to be preferred for inclusion as intermediate nodes in communication topologies (such as paths, trees, and connected dominating sets) for MSNs.
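The WJS computation described above can be sketched as follows. The pairing of consecutive neighborhood snapshots and the uniform-weight default are illustrative assumptions; the paper's exact weighting scheme is not specified here.

```python
def jaccard(a, b):
    """Jaccard similarity of two neighbor sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def weighted_jaccard_stability(neighborhoods, weights=None):
    """Weighted average of Jaccard scores between consecutive snapshots of
    a node's neighborhood; values closer to 1 indicate a more stable
    neighborhood over the observation period."""
    scores = [jaccard(neighborhoods[i], neighborhoods[i + 1])
              for i in range(len(neighborhoods) - 1)]
    if weights is None:
        weights = [1.0] * len(scores)  # uniform weighting (an assumption)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# A node whose neighbor set changes slowly over three time steps.
snapshots = [{1, 2, 3}, {1, 2, 3, 4}, {1, 2, 4}]
wjs = weighted_jaccard_stability(snapshots)
```

Ranking nodes by this score would then let a routing layer prefer high-WJS nodes as intermediate nodes in paths, trees, or connected dominating sets.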
Cloud computing-based cognitive radio networks (CCCRN) have been an eye-catching research area in recent years for improving spectrum sensing and spectrum management. Cognitive radio networks (CRN) are capable of adaptive learning and reconfiguration to provide consistent communications in dynamic environments. Adaptation and learning in CRN demand fast processing of big data. The performance and security of CRN do not meet such requirements, particularly on devices with low computational power. The advent of cloud capabilities mitigates these constraints. For this reason, we suggest steganography combined with the Advanced Encryption Standard (AES) cryptography technique to protect the cloud data. We identify the critical issues and challenges of implementing CCCRN and provide possible solutions. Although both techniques have the same objective, protecting cloud data in a cognitive radio network requires their combination to keep hackers away from both classified and unclassified data.
Integration of cloud computing and cognitive radio increases performance but adds the security threats of cloud computing. If the integration overcomes these security threats, CCCRN will replace traditional methods of radio operation. The proposed security model incorporated in CCCRN can help mitigate primary user emulation and many other jamming problems. Integrating cognitive radio into the cloud introduces security problems along with real-time processing and energy-supply problems. Cloud integration provides resource pooling, with additional antennas to meet real-time performance requirements. The cloud is therefore one solution to the challenges faced by CRN. We discuss these problems in the current research paper.
Major features and system-level design considerations for 3-D array apertures with hemispherical coverage are presented. First, an ideal 3-D dome-like hemispherical aperture is simulated using physical optics. Second, the smooth 3-D aperture shape is approximated by several planar facets, each presenting identical 2-D aperture arrays. Optimal division of the hemispherical field of view into sectors of regard with similar maximum angular scan extent is discussed, along with optimization of the major electrical features of the planar array facets, their number, and the total component count.
Subarray modules are introduced for face arrays used to create 3-D aperture antenna systems. Several representative topologies and major electrical features are reviewed for the beam-forming networks of subarray modules. Major performance measures such as the gain-to-temperature ratio (G/T) are discussed. Three key components are identified, and their impact on G/T is studied using circuit models.
Modern computer and communication infrastructures are highly vulnerable to malicious code and activities. There are many different ways malicious code such as viruses, worms, and Trojan horses can damage a multitude of services, computers, financial structures, cyber infrastructure, and data privacy. Signature-based detection is more prevalent in preventing these types of attacks than machine learning-based detection. Anti-virus vendors face huge quantities (thousands) of suspicious files every day, collected from various sources including dedicated honeypots, third-party providers, and files reported by customers either automatically or explicitly. The large number of files makes efficient and effective inspection of code particularly challenging. In this paper, we propose a hybrid detection system with two parts: a misuse detection system and an anomaly detection system. Misuse detection is based on a random forest classifier, and anomaly detection is based on a one-class SVM with a bagging technique. We depart from the usual approach by using the Correlation-based Feature Selection (CFS) algorithm for feature selection. Our experiments show that our hybrid detection system outperforms existing hybrid systems based on other machine learning algorithms.
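The two-part decision logic can be sketched as below. These are deliberately trivial stand-ins, not the paper's models: the signature tokens, the z-score rule, and all thresholds are hypothetical, whereas the actual system uses a random forest for misuse detection and a one-class SVM with bagging for anomaly detection.

```python
# Hypothetical known-bad signature tokens (illustrative only).
KNOWN_BAD_SIGNATURES = {"exec_shellcode", "registry_hijack"}

def misuse_stage(features):
    """Stage 1: flag samples matching a known-bad signature token
    (stand-in for the random forest misuse classifier)."""
    return any(tok in KNOWN_BAD_SIGNATURES for tok in features["tokens"])

def anomaly_stage(features, normal_mean=120.0, normal_std=15.0, threshold=3.0):
    """Stage 2: flag samples whose payload size deviates strongly from a
    'normal' profile (stand-in for the one-class SVM)."""
    z = abs(features["payload_size"] - normal_mean) / normal_std
    return z > threshold

def classify(features):
    """Hybrid verdict: known signatures first, then anomaly screening."""
    if misuse_stage(features):
        return "malicious (known)"
    if anomaly_stage(features):
        return "suspicious (anomalous)"
    return "benign"

verdicts = [classify({"tokens": ["exec_shellcode"], "payload_size": 118}),
            classify({"tokens": ["http_get"], "payload_size": 900}),
            classify({"tokens": ["http_get"], "payload_size": 125})]
```

The appeal of the hybrid layout is visible even in this toy: the misuse stage catches known attacks cheaply, while the anomaly stage covers samples no signature matches.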
Anti-virus software based on unsupervised hierarchical clustering (HC) of malware samples has been shown to be vulnerable to poisoning attacks. In this kind of attack, a malicious player degrades anti-virus performance by submitting to the database samples specifically designed to collapse the classification hierarchy utilized by the anti-virus (and constructed through HC) or otherwise deform it in a way that would render it useless. Though each poisoning attack needs to be tailored to the particular HC scheme deployed, existing research seems to indicate that no particular HC method by itself is immune. We present results on applying a new notion of entropy for combinatorial dendrograms to the problem of controlling the influx of samples into the database and deflecting poisoning attacks. In a nutshell, effective and tractable measures of change in hierarchy complexity are derived from the above, enabling on-the-fly flagging and rejection of potentially damaging samples. The information-theoretic underpinnings of these measures ensure their indifference to which particular poisoning algorithm is being used by the attacker, rendering them particularly attractive in this setting.
Deep learning (DL) is a set of methods that automatically classify the raw data fed into the machine. Deep convolutional nets are composed of multiple processing layers that learn representations of data with multiple levels of abstraction to process images, video, speech, and audio. The H2O deep learning architecture has many features, including a supervised training protocol, a memory-efficient Java implementation, adaptive learning, and related CRAN packages. H2O uses a supervised training protocol with a uniform adaptive option, an optimization based on the size of the network. It can use clusters of computing nodes to train on the entire data set while automatically shuffling the training examples for each iteration locally. The framework supports regularization techniques to prevent overfitting. H2O in R has an intuitive web interface accessible via localhost and IP address, and using the H2O package in R is easy. The computations are performed in the H2O cluster and initiated by REST calls (in highly optimized Java code) from R. Since Spark is available in R, H2O uses a single R session and communicates with the H2O Java cluster via REST calls; H2O runs inside the Spark executor JVM. Using these packages in R, we demonstrate the classification and automatic recognition of objects. Further, we use the H2O deep learning package in R to classify the NOAA VIIRS night fires data to detect persistent fire activity at a given location around the globe.
Security information and event management (SIEM) data and non-event-based raw data (NERD) are feed activities for modern cyber-domain network architectures. Each type of cyber domain, such as software-defined networks, virtualization, service orchestration, or cloud/elastic computing, has essential carryover characteristics, and each cyber domain might have slightly different properties. Enriching NERD and SIEM models with raw activity event data allows transforming the raw sensor data flowing through the system into enriched data elements that are both descriptive and predictive in nature. This paper details some scenarios for evidence collection, parsing, enrichment, and the implementation of a k-Nearest Neighbor (kNN) classifier as a proof of concept (POC) for the Apache Metron cyber security framework. For anomaly detection on Hadoop, our use of a data lake, data science, and machine learning algorithms indicates that this is a viable approach toward collecting and analyzing sensor data and performing analytical grid processing in a complex and ambiguous environment.
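A kNN classifier of the kind used in the POC can be sketched in a few lines. The enriched flow features (bytes/sec and distinct-port counts, scaled to [0, 1]) and all data points are hypothetical; a production deployment would run the equivalent logic over the Metron/Hadoop pipeline rather than in-memory lists.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest labeled
    neighbors (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical enriched flow records: (scaled bytes/sec, scaled distinct ports).
events = [((0.10, 0.20), "normal"),
          ((0.15, 0.22), "normal"),
          ((0.12, 0.18), "normal"),
          ((0.90, 0.80), "anomaly"),
          ((0.85, 0.90), "anomaly")]
label = knn_classify(events, (0.88, 0.85), k=3)
```

Because kNN is non-parametric, new enriched events can simply be appended to the training set, which fits a streaming enrichment pipeline well.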
This paper investigates the fusion process of combining cyber sensors on a network to detect and classify cyber behaviors, good and bad. Some bad cyber activity can be confused with appropriate (good) activity and vice versa. Wrongly blocking good activity is an error; likewise, allowing bad cyber activity to continue in the belief that it is good activity is an error. We wish to minimize these errors. Some bad cyber activity can be classified according to its severity, and confusing an extremely severe cyber activity with a mildly bad one can also be a costly mistake. We assume there are several classification systems present on the network, that is, at a minimum a sensor, processor, and exploiter for each system. The sensors may be disparate. We assume each system has a ROC manifold that is known, or for which a good approximation exists. The goal of this paper is to demonstrate that there is a best combining rule.
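To make the notion of a combining rule concrete, the sketch below computes the fused operating points of two detectors under the simplest Boolean rules and an independence assumption. These closed-form rules are illustrative only; the paper's best combining rule is derived from the systems' ROC manifolds and need not reduce to Boolean fusion. All operating-point numbers are hypothetical.

```python
def or_fusion(tpr1, fpr1, tpr2, fpr2):
    """Boolean OR rule for two independent detectors: declare an alarm
    if either detector alarms."""
    tpr = 1 - (1 - tpr1) * (1 - tpr2)
    fpr = 1 - (1 - fpr1) * (1 - fpr2)
    return tpr, fpr

def and_fusion(tpr1, fpr1, tpr2, fpr2):
    """Boolean AND rule: declare an alarm only if both detectors alarm."""
    return tpr1 * tpr2, fpr1 * fpr2

# Two hypothetical independent cyber sensors at known operating points.
tpr_or, fpr_or = or_fusion(0.80, 0.10, 0.70, 0.05)
tpr_and, fpr_and = and_fusion(0.80, 0.10, 0.70, 0.05)
```

The OR rule raises detection at the cost of more false alarms, while the AND rule does the opposite; choosing among such rules (and richer ones) against a cost model is exactly the optimization the fusion problem poses.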
A theoretical possibility of non-resonant, fast, and efficient (up to 40 percent) heating of very thin conducting cylindrical targets by broad electromagnetic beams was predicted in [Akhmeteli, arXiv:physics/0405091 and 0611169] based on a rigorous solution of the diffraction problem. The diameter of the cylinder can be orders of magnitude smaller than the wavelength (for the transverse geometry) or the beam waist (for the longitudinal geometry) of the electromagnetic radiation. Experimental confirmation of the above results is presented [Akhmeteli, Kokodiy, Safronov, Balkashin, Priz, Tarasevitch, arXiv:1109.1626 and 1208.0066, Proc. SPIE 9097, Cyber Sensing 2014, 90970H (June 18, 2014); doi:10.1117/12.2053482].
An algorithm was created that identifies the number of unique clusters in a dataset and assigns the data to the clusters. A cluster is defined as a group of data that share similar characteristics. Similarity is measured using the dot product between two vectors, where the data are input as vectors. Unlike other clustering algorithms such as K-means, no knowledge of the number of clusters is required, allowing an unbiased analysis of the data. The automatic cluster detection (ACD) algorithm is executed in two phases: an averaging phase and a clustering phase. In the averaging phase, the number of unique clusters is detected. In the clustering phase, data are matched to the cluster to which they are most similar. The ACD algorithm takes a matrix of vectors as input and outputs a 2-D array of the clustered data. The indices of the output correspond to clusters, and the elements in each cluster correspond to the positions of the data in the dataset. Clusters are vectors in N-dimensional space, where N is the length of the input vectors that make up the matrix. The algorithm is distributed, increasing computational efficiency.
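A greedy, single-pass sketch of dot-product clustering without a preset cluster count is shown below. It is not the two-phase ACD algorithm itself (the averaging phase and the distributed execution are omitted), and the similarity threshold of 0.95 is an illustrative assumption; it only conveys how clusters can emerge from pairwise dot products alone.

```python
import math

def cosine(u, v):
    """Normalized dot product between two nonzero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def auto_cluster(vectors, threshold=0.95):
    """Greedy clustering: each vector joins the first cluster whose seed
    vector it is sufficiently similar to; otherwise it seeds a new
    cluster. The number of clusters is discovered, not specified."""
    clusters = []  # each cluster: list of indices into `vectors`
    seeds = []     # seed vector per cluster
    for i, v in enumerate(vectors):
        for c, seed in enumerate(seeds):
            if cosine(v, seed) >= threshold:
                clusters[c].append(i)
                break
        else:
            clusters.append([i])
            seeds.append(v)
    return clusters

# Two obvious groups of 2-D vectors; the algorithm finds both unaided.
data = [(1.0, 0.0), (0.99, 0.05), (0.0, 1.0), (0.02, 0.98)]
groups = auto_cluster(data)
```

As in the ACD output format described above, each inner list holds the positions of the clustered data in the input dataset.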
Previously, we proposed and implemented a Self-structuring Data Learning Algorithm. The implemented software package and the underlying concept are still progressing. Earlier, it was tested with synthetic data and exhibited interesting results. The objectives of this paper are testing the algorithm with raw infrared and visual images and updating the algorithm as required. We first performed registration transformation and detection on the images with an existing software package. We then registered the detections using the registration transformations from both the infrared and visual images. The registered detections were delivered to the algorithm for target detection and tracking without modification. Results revealed an inability to handle very noisy infrared image features. To overcome this problem, we developed multiscale grid processing to improve detection classification in the algorithm. The updated algorithm shows much better target detection and tracking with the real-world data. More algorithm enhancements are in progress, such as incorporating pattern recognition, classification, and fusion.