How can the human brain uncover patterns, associations, and features in real-time, real-world data? There must be a general strategy for transforming raw signals into useful features, yet our information-extraction tool set lacks a representation of that generalization. In contrast to Big Data (BD), Large Data Analysis (LDA) has become a reachable multi-disciplinary goal in recent years, due in part to high-performance computers and algorithm development, as well as the availability of large data sets. However, the experience of the Machine Learning (ML) and information communities has not been generalized into an intuitive framework that is useful to researchers across disciplines. The data exploration phase of data mining is a prime example of this unspoken, ad-hoc nature of ML
– the computer scientist works with a Subject Matter Expert (SME) to understand the data and then builds tools (e.g., classifiers) that benefit the SME and the other researchers in that field. We ask: why is there no tool that represents information in a way that is meaningful to the researcher asking the question? Meaning is subjective and contextual across disciplines, so to ensure robustness we draw examples from several disciplines and propose a generalized LDA framework for independent data understanding of heterogeneous sources, which contributes to Knowledge Discovery in Databases (KDD). We then explore the concept of adaptive information resolution through a 6W unsupervised-learning feedback system. In this paper, we describe the general process of man-machine interaction in terms of an asymmetric directed graph (digging for embedded knowledge), and we model the inverse machine-man feedback (digging for tacit knowledge) as an Artificial Neural Network (ANN) unsupervised-learning methodology. Finally, we propose a collective learning framework that uses a 6W semantic topology to organize heterogeneous knowledge and diffuse information to entities within a society in a personalized way.
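To make the 6W organization concrete, a minimal sketch follows. It assumes (as an illustration only; the names `KnowledgeGraph`, `add`, and `diffuse` are hypothetical, not from the paper) that each knowledge item is a node tagged along the six dimensions who/what/when/where/why/how, with an inverted index linking items that share a facet, so that information can diffuse to an entity whose 6W profile matches those facets:

```python
# Hypothetical sketch of a 6W semantic topology: knowledge items are nodes
# tagged along six dimensions; a facet index lets information diffuse to
# entities whose 6W profile matches, in a personalized way.
SIX_W = ("who", "what", "when", "where", "why", "how")

class KnowledgeGraph:
    def __init__(self):
        self.items = {}   # item id -> {dimension: value}
        self.index = {}   # (dimension, value) -> set of item ids

    def add(self, item_id, **facets):
        """Register a knowledge item with any subset of 6W facets."""
        tags = {w: facets[w] for w in SIX_W if facets.get(w) is not None}
        self.items[item_id] = tags
        for w, value in tags.items():
            self.index.setdefault((w, value), set()).add(item_id)

    def diffuse(self, profile):
        """Return ids of items matching any facet of an entity's 6W profile."""
        matched = set()
        for w, value in profile.items():
            matched |= self.index.get((w, value), set())
        return matched

kg = KnowledgeGraph()
kg.add("report-1", who="SME", what="EEG", where="lab")
kg.add("report-2", what="EEG", why="authentication")
kg.add("report-3", who="analyst", what="fingerprint")
hits = kg.diffuse({"what": "EEG"})   # personalized diffusion by topic
```

A richer implementation would weight edges and rank matches, but the facet index already captures the asymmetry described above: the graph is traversed from an entity's profile toward knowledge, not the reverse.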
We seek to augment current Common Access Card (CAC) and Personal Identification Number (PIN) verification systems with an additional layer of classified-access biometrics. Alongside proven devices such as fingerprint readers and cameras that sense the human eye's iris pattern, we introduced a number of users to a sequence of 'grandmother images' (emotionally evocative stimulus-response images drawn from other users, plus one of their own) for the purpose of authentication. We performed testing and evaluation of Authenticity, Privacy and Security (APS) brainwave biometrics, which, like the iris of the human eye, cannot easily be altered. 'Aha' recognition through stimulus-response habituation can serve as a biomarker, much as keystroke-dynamics analysis uses the inter- and intra-key fluctuation time of a memorized PIN number (FIST). Using a non-tethered, wireless Electroencephalogram (EEG) smartphone/PC monitor interface, we explore the appropriate stimulus-response biomarker present in DTAB low-frequency group waves. Prior to login, the user is shown a series of images on a computer display, having been primed to click the mouse when an image is presented. DTAB waves are collected with a wireless EEG and sent via smartphone to a cloud-based processing infrastructure. There, we measure fluctuations in DTAB waves from a wireless, non-tethered, single-node EEG device between the Personal Graphic Image Number (PGIN) stimulus image and the response time relative to an individual's mental-performance baseline. Toward that goal, we describe an infrastructure that supports distributed verification for web-based EEG authentication. Machine learning applied to the relative Power Spectral Density (PSD) of the EEG data may uncover the features required for subsequent access to web or media content.
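The pipeline above ends with machine learning on relative Power Spectral Density features. As a minimal sketch of that feature-extraction step only (assuming, as an illustration, that DTAB denotes the standard delta, theta, alpha, and beta EEG bands, and using a naive DFT in place of a production PSD estimator such as Welch's method), relative band power over a one-second window might be computed as:

```python
import math

def dft_power(signal):
    """Naive DFT power spectrum, adequate for a short illustrative window."""
    n = len(signal)
    power = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append((re * re + im * im) / n)
    return power

def relative_band_power(signal, fs):
    """Fraction of 0.5-30 Hz power falling in each assumed DTAB band."""
    power = dft_power(signal)
    n = len(signal)
    freqs = [k * fs / n for k in range(len(power))]
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    total = sum(p for f, p in zip(freqs, power) if 0.5 <= f < 30) or 1.0
    return {name: sum(p for f, p in zip(freqs, power) if lo <= f < hi) / total
            for name, (lo, hi) in bands.items()}

# Synthetic 1-second window at 128 Hz dominated by a 10 Hz (alpha) rhythm.
fs = 128
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
features = relative_band_power(signal, fs)
```

The resulting four-element relative-power vector, computed per stimulus window and compared against the individual's baseline, is the kind of feature a downstream classifier would consume; the exact bands, window length, and estimator are assumptions for illustration, not the paper's specification.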
Our approach provides a scalable framework wrapped in a robust neuro-informatics toolkit, viable for use in the biomedical and mental-health communities as well as in numerous consumer applications.