This PDF file contains the front matter associated with SPIE Proceedings Volume 12542, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
In this paper, we systematically investigate the Maximum-On-Ground (MOG) problem space and explore candidate solutions. MOG optimization refers to the management of transport aircraft in and around an airfield. Effective and efficient daily MOG management enables the U.S. Air Force (USAF) Air Mobility Command (AMC) to rapidly deploy and sustain equipment and personnel anywhere in the world. However, the seemingly solved problem can quickly grow out of hand once the number of interruptions exceeds a certain point; this is due to the combinatorial nature of the scheduling problem, in which the order and the mission dependencies matter.
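A minimal sketch of the capacity constraint at the heart of MOG management (this is not the authors' formulation; the aircraft names, times, and greedy assignment rule below are illustrative): each sortie occupies a parking/working spot for its ground-time interval, and a day's schedule breaks down once arrivals outpace the available spots.

```python
# Illustrative only: greedy assignment of sorties to a limited number of spots.
from dataclasses import dataclass

@dataclass
class Sortie:
    name: str
    arrive: float   # hours after midnight
    depart: float

def assign_spots(sorties, num_spots):
    """Greedily assign each sortie to a spot that is free at its arrival time."""
    free_at = [0.0] * num_spots                  # time each spot becomes free
    plan = {spot: [] for spot in range(num_spots)}
    for sortie in sorted(sorties, key=lambda s: s.arrive):
        candidates = [i for i, t in enumerate(free_at) if t <= sortie.arrive]
        if not candidates:
            raise RuntimeError(f"MOG exceeded at t={sortie.arrive}: {sortie.name}")
        spot = min(candidates, key=lambda i: free_at[i])
        plan[spot].append(sortie)
        free_at[spot] = sortie.depart
    return plan

if __name__ == "__main__":
    demo = [Sortie("C17-1", 6.0, 9.5), Sortie("C5-1", 7.0, 12.0),
            Sortie("C17-2", 9.6, 11.0), Sortie("KC46-1", 12.5, 14.0)]
    for spot, sorties in assign_spots(demo, num_spots=2).items():
        print(spot, [s.name for s in sorties])
```

Even this toy version hints at the combinatorial blow-up: once interruptions force re-ordering, the number of feasible assignments to check grows rapidly with the number of sorties and dependencies.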
As cities grow, the risks of local crises grow with them. Researchers and practitioners alike seek to better understand how dense urban communities differentially prepare for, respond to, and recover from natural and anthropogenic shocks and stresses. Community resilience is a function of physical infrastructure, like levees and hospitals, and social capital, the valuable networks of human relationships that allow communities to thrive. In times of crisis, disparate access to these resources means that even adjacent neighborhoods can experience radically different outcomes. Unfortunately, while highly granular data about physical infrastructure is readily available, most research on social capital is limited to coarser, sparser survey data. To address this limitation, we present RESIDENT (Resilience and Stability in Dense Urban Terrain), a web application and data analysis framework that combines open-source and remote-sensed geographic data to characterize the resilience of urban neighborhoods. The user specifies a city, and RESIDENT identifies relevant infrastructure to calculate potential for social capital, visualizing this data with neighborhood- and city-level heat maps and histograms. To validate our approach, we compared RESIDENT’s social capital estimates to Nighttime Lights (NTL) data from the Visible and Infrared Imaging Suite, an established indicator of economic activity and disaster recovery. We found that increased potential for social capital predicted brighter NTL. Our results show that RESIDENT produces reliable estimates of social capital and may be used by social scientists as well as industry, government, and defense agencies to analyze, identify, and support vulnerable neighborhoods in dense urban areas.
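A hedged sketch of the kind of validation described above, using synthetic per-neighborhood values rather than RESIDENT's actual data pipeline (the neighborhood scores and radiance values below are hypothetical): a correlation and linear fit indicate whether higher estimated potential for social capital predicts brighter nighttime lights.

```python
# Illustrative validation sketch with synthetic data; RESIDENT's real inputs
# (neighborhood polygons, NTL rasters) are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
social_capital = rng.uniform(0, 1, size=200)                  # hypothetical neighborhood scores
ntl = 5.0 + 12.0 * social_capital + rng.normal(0, 2, 200)     # hypothetical NTL radiance

# Pearson correlation between potential social capital and NTL brightness
r = np.corrcoef(social_capital, ntl)[0, 1]

# Ordinary least-squares fit: ntl ~ slope * social_capital + intercept
slope, intercept = np.polyfit(social_capital, ntl, deg=1)

print(f"Pearson r = {r:.3f}, slope = {slope:.2f}, intercept = {intercept:.2f}")
```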
Advancements in Decentralized Data and Network Technologies
Blockchain technology has gained prominence as the foundation for cryptocurrencies like Bitcoin. However, its possibilities go well beyond that, enabling the deployment of new applications that were not previously feasible as well as enormous improvements to already existing technological applications. Several factors impacting the consensus mechanism must fall within a specific range for a blockchain network to be efficient, sustainable, and secure. The long-term sustainability of current networks, like Bitcoin, is in jeopardy due to their rigid reconfiguration mechanisms, which tend to be inflexible and largely independent of environmental circumstances. To provide a systematic methodology for integrating a sustainable and secure adaptive framework, we propose the amalgamation of cognitive dynamic systems theory with blockchain technology, specifically regarding varying network difficulty. A corresponding architecture was designed that employs a Long Short-Term Memory (LSTM) network to control the difficulty of a network with Proof-of-Work consensus.
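A minimal sketch of the kind of architecture the abstract describes, assuming a PyTorch LSTM that maps a window of recent block statistics to the next difficulty adjustment (the input features, window length, and layer sizes are illustrative, not the authors' design):

```python
# Illustrative only: an LSTM regressor that predicts the next Proof-of-Work
# difficulty adjustment from a window of recent block statistics.
import torch
import torch.nn as nn

class DifficultyController(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # predicted multiplicative difficulty change

    def forward(self, x):                   # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # use the last time step's hidden state

if __name__ == "__main__":
    model = DifficultyController()
    # Hypothetical batch: 4 chains, 20 recent blocks, features = [block time, hash rate]
    window = torch.randn(4, 20, 2)
    adjustment = model(window)
    print(adjustment.shape)                 # torch.Size([4, 1])
```

In a cognitive-dynamic-systems framing, such a controller would sit in a perceive–act loop: observe recent network behavior, predict an adjustment, apply it, and learn from the outcome.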
All too often we find ourselves presented with the next technology that promises to solve all our problems. In the end, it becomes just another buzzword attached to research efforts without actually solving those problems or applying the technology in the ways it was intended. 5G, if we treat its innovations correctly, can break this pattern.
Decentralized approaches play an increasingly critical role in dispersed, distributed, multi-stakeholder activities such as manufacturing supply chains, space, and national security, which require (a) shared situational awareness, (b) shared coordination points, and (c) a historical record of activities. These activities require stakeholders to mutually coordinate and cooperate by sharing information with each other. The common modes of information sharing are bilateral exchanges and ‘walled garden’ sharing among a subset of stakeholders. Problems arise due to incomplete information sharing and a lack of trust among stakeholders. The origins of mistrust are: (a) stakeholders may not trust (for political, technical, or risk reasons) a single stakeholder to be fully in control of information flow and storage; (b) the risk of a central point of failure is too high; and (c) data integrity is uncertain when stakeholders do not all trust each other. Multi-stakeholder data sharing challenges can be addressed with decentralized approaches and associated decentralized data technology such as ledgers and blockchain. Decentralized data technology includes decentralized identity (e.g., W3C DID), decentralized file storage, and blockchains/ledgers that provide cryptographically provable data integrity properties (e.g., tamper evidence), as well as increased resilience to accident and attack. Fortunately, the decentralized data technology sector is maturing and entering a phase of wider adoption, with industry pursuing these technologies at an increasing rate. This paper starts with a brief review of decentralized data technology, then describes multi-stakeholder examples in space sensor tasking and manufacturing supply chains where decentralized approaches and data technologies can improve coordination and cooperation.
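To make the tamper-evidence property concrete, here is a toy hash-chained ledger (illustrative only; real deployments would use a distributed ledger, W3C DIDs, and digital signatures rather than this single-process sketch): altering any stored record changes its hash and breaks every later link.

```python
# Toy hash-chained ledger illustrating tamper evidence; not a real blockchain.
import hashlib, json

def record_hash(record, prev_hash):
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, record):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "prev": prev, "hash": record_hash(record, prev)})

def verify(ledger):
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"stakeholder": "A", "msg": "sensor tasking request"})   # hypothetical entries
append(ledger, {"stakeholder": "B", "msg": "parts shipment logged"})
print(verify(ledger))                      # True
ledger[0]["record"]["msg"] = "tampered"    # any edit breaks the chain
print(verify(ledger))                      # False
```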
Ever since human society entered the age of social media, every user has had a considerable amount of visual content stored online and shared in various virtual communities. Because images circulate so efficiently, disastrous consequences are possible if their contents are tampered with by malicious actors. Specifically, we are witnessing the rapid development of machine learning (ML) based tools like DeepFake apps, which are capable of exploiting images on social media platforms to mimic a potential victim without their knowledge or consent. These content manipulation attacks can lead to the rapid spread of misinformation that may not only mislead friends or family members but also has the potential to cause chaos in public domains. Therefore, robust image authentication is critical to detect and filter out manipulated images. In this paper, we introduce a system that accurately AUthenticates SOcial MEdia images (AUSOME) uploaded to online platforms, leveraging spectral analysis and ML. Images from DALL-E 2 are compared with genuine images from the Stanford image dataset. The Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) are used to perform a spectral comparison. Additionally, based on the differences in their frequency response, an ML model is proposed to classify social media images as genuine or AI-generated. Using real-world scenarios, the AUSOME system is evaluated on its detection accuracy. The experimental results are encouraging, and they verify the potential of the AUSOME scheme in social media image authentication.
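A hedged sketch of the spectral-feature step described above (library choices and feature design are illustrative, not the AUSOME implementation): compute 2-D DFT and DCT magnitudes of a grayscale image, summarize them as radial-band energies, and feed those features to a standard classifier such as logistic regression.

```python
# Illustrative spectral features for real vs. AI-generated image classification.
import numpy as np
from scipy.fft import dctn          # 2-D discrete cosine transform

def radial_energy(mag, origin, n_bands=8):
    """Average spectral magnitude in concentric frequency bands around `origin`."""
    yy, xx = np.indices(mag.shape)
    r = np.hypot(yy - origin[0], xx - origin[1])
    r = (r / r.max() * (n_bands - 1)).astype(int)
    return np.array([mag[r == b].mean() for b in range(n_bands)])

def spectral_features(gray):        # gray: 2-D float array in [0, 1]
    h, w = gray.shape
    dft_mag = np.abs(np.fft.fftshift(np.fft.fft2(gray)))       # low frequencies at center
    dct_mag = np.abs(dctn(gray, norm="ortho"))                  # low frequencies at (0, 0)
    return np.concatenate([radial_energy(np.log1p(dft_mag), (h / 2, w / 2)),
                           radial_energy(np.log1p(dct_mag), (0, 0))])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))    # placeholder for a real or generated image
    print(spectral_features(img).shape)   # (16,)
```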
The information era has gained momentum through the abundance of digital media content distributed over modern broadcasting channels. Among information providers, social media platforms remain popular venues for the widespread reach of digital content. Along with accessibility and reach, social media platforms are also a huge venue for spreading misinformation, since the data is not curated by trusted authorities. With many malicious participants involved, artificially generated media or strategically altered content could potentially affect the integrity of targeted organizations. Popular content generation tools like DeepFake have allowed perpetrators to create realistic media content by manipulating the targeted subject with a fake identity or actions. Media metadata like time and location-based information are altered to create a false perception of real events. In this work, we propose a Decentralized Electrical Network Frequency (ENF)-based Media Authentication (DEMA) system to verify media metadata and digital multimedia integrity. Leveraging the environmental ENF fingerprint captured by digital media recorders, altered media content is detected by exploiting the ENF consistency based on its time and location of recording, along with its spatial consistency throughout the captured frames. A decentralized and hierarchical ENF map is created as a reference database for time and location verification. For digital media uploaded to a broadcasting service, the proposed DEMA system correlates the underlying ENF fingerprint with the stored ENF map to authenticate the media metadata. With the media metadata intact, the embedded ENF in the recording is compared with a reference ENF based on the time of recording, and a correlation-based metric is used to evaluate the media authenticity. In case of missing metadata, the frames are divided spatially to compare the ENF consistency throughout the recording.
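A minimal sketch of a correlation-based check of the kind described above, assuming the ENF trace has already been extracted from the media and a reference trace retrieved from the ENF map for the claimed time and location (the extraction step and DEMA's actual metric are not reproduced): a normalized cross-correlation peak near 1 at small lag supports the claimed metadata.

```python
# Illustrative ENF consistency check via normalized cross-correlation.
import numpy as np

def normalized_xcorr(extracted, reference, max_lag=30):
    """Return (best lag, peak correlation) between two ENF traces in Hz."""
    a = (extracted - extracted.mean()) / extracted.std()
    b = (reference - reference.mean()) / reference.std()
    best = (0, -1.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:lag], b[-lag:]
        n = min(len(x), len(y))
        c = float(np.dot(x[:n], y[:n]) / n)
        if c > best[1]:
            best = (lag, c)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reference = 60.0 + 0.02 * np.cumsum(rng.normal(0, 0.1, 600))   # hypothetical grid ENF
    extracted = reference + rng.normal(0, 0.002, 600)              # recording's ENF estimate
    print(normalized_xcorr(extracted, reference))                  # lag near 0, correlation near 1
```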
Federated machine learning (FML) for training of deep neural network models is a useful technique where insufficient sample data is available at a local level. In applications where data privacy must be preserved, such as in health care, financial services, and defense contexts, it is important that there is no exchange of data between constituents of the distributed network. It may also be desirable to protect the integrity and secrecy of the algorithms and trained models deployed within the network. Demonstrating the privacy-enhancing technology of Confidential Computing, we present a novel solution for FML implementation that supports extensible graph-based network topology configuration under federated, distributed, or centralized training regimes. The presented solution provides for policy-based control of model training and automated monitoring of model convergence and network performance. Owners of private datasets can retain independent control of their data through local encryption, while global data anonymization policies can be applied over the sample data. Full auditability of the model training process is provided to distributed data owners and the model owner using hardware-based cryptographic secrets that underpin zero-trust implementation of the training network. Operation of the proposed secure FML solution is discussed in the context of model training over distributed radiological image data for weakly-supervised learning and classification of common thorax diseases. Cross-domain adaptation of the proposed solution and integrated model integrity protection against adversarial attacks reflects a breakthrough technology for data science teams working with distributed datasets.
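As a hedged illustration of the federated training loop, here is generic federated averaging over numpy weight vectors; this is not the presented Confidential Computing solution, which wraps hardware-backed attestation, encryption, and policy control around steps like these.

```python
# Generic federated averaging sketch: each site trains locally on private data
# and only model weights (never raw data) are aggregated by the coordinator.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few epochs of least-squares gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Aggregate site updates weighted by local sample counts (FedAvg)."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in sites]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    sites = []
    for n in (40, 60, 25):                       # e.g., three hospitals with private data
        X = rng.normal(size=(n, 3))
        sites.append((X, X @ true_w + rng.normal(0, 0.05, n)))
    w = np.zeros(3)
    for _ in range(50):
        w = federated_round(w, sites)
    print(np.round(w, 2))                        # approaches true_w without pooling data
```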
In this paper, the authors present the results of an experiment conducted at the 2022 NATO Cyber Coalition Exercise in Tallinn, Estonia, wherein a prototype Autonomous Intelligent Cyberdefense Agent (AICA) was evaluated for its contributions to cyber operator efficacy. Six teams were given a list of objectives to accomplish, including preventing other teams from accomplishing theirs. This included monitoring the operations of a simulated power micro-grid supporting each team’s network. Half of the teams (assigned at random) were given an AICA prototype. Evaluation of efficacy included self-reports from participating teams and measurement of total system uptime. Testing highlighted areas where future development can further enhance the prototype to improve automated responses and interactions with operators, as well as enhancements for future exercises of this type.
This paper reviews prior work demonstrating the efficacy of a new artificial intelligence technique that is based on optimizing expert systems’ rule-fact networks. Systems of this type can learn from presented data and operations; however, they cannot learn any changes that ‘jump out of’ the human-created or validated pathways, ensuring that they do not learn invalid or non-causal associations. This paper presents a review and assessment of the functionality provided by the base gradient descent-trained expert system, the functionality provided by an enhancement that facilitates automated network development, and several other enhancements. The benefits of each system variant are discussed.
Advancements in Computer Algorithms and Artificial Intelligence
A technical solution is described for implementing a bridging system for Joint Analog & Digital Human-Computer Inter-connected messaging, through “Real” and “Imaginary” neural networks. The ‘real electroencephalogram’ neural network portion inputs sequenced patterns of Striatal Beat Frequencies (SBF) from an EEG while the ‘imaginary convolutional’ neural network (CNN) portion inputs digitized imagery. We will demonstrate our (real) SBF work in the context of epileptic seizures and our (imaginary) CNN work in the context of overcoming compromised sensors by using associative memory matrices to form inter-layer (bridging) connections between the left eye and right eye.
Feature classification and regression tasks for high-dimensional data have been handled by many well-known algorithms such as the feed-forward multilayer perceptron, decision tree, support vector machine, and many others. Recently, a new approach called the subspace learning machine (SLM) has been proposed that strikes a balance between simplicity and effectiveness by partitioning an input feature space into multiple discriminant subspaces in a hierarchical manner. The technique has been extended in many directions to handle high-dimensional data. We will emphasize the significance of these developments and present experimental results.
With the increasing reliance on collaborative and cloud-based systems, there is a drastic increase in attack surfaces and code vulnerabilities. Automation is key for fielding and defending software systems at scale. Researchers in Symbolic AI have had considerable success in finding flaws in human-created code. Also, run-time testing methods such as fuzzing do uncover numerous bugs. However, the major deficiency of both approaches is the inability of the methods to fix the discovered errors. They also do not scale and defy automation. Static analysis methods also suffer from the false positive problem – an overwhelming number of reported flaws are not real bugs. This brings up an interesting conundrum: Symbolic approaches actually have a detrimental impact on programmer productivity, and therefore do not necessarily contribute to improved code quality. What is needed is a combination of automation of code generation using large language models (LLMs), with scalable defect elimination methods using symbolic AI, to create an environment for the automated generation of defect-free code.
In recent years, engineering intelligent systems has become closely related to integrating artificial intelligence (AI) or machine learning (ML) components into systems. There is a general sense that as the availability of computational resources scales, the intelligence of AI and ML components will scale, thus scaling the intelligence of the system as a whole. While this bottom-up approach has merit, it takes the task of engineering intelligence out of the hands of systems engineers and gives it to AI and ML engineers. In this paper, an alternative approach to engineering intelligence is outlined based on combining the concepts of automated theorem proving (ATP) and digital engineering. Instead of using AI and ML at the component-level, ATP can be applied at the systems-level to digital models and environments. By systematically solving proofs related to properties, functional requirements, and performance, ATP can contribute to the design, operation, and regulation of intelligent systems. This paper substantiates the use of ATP in digital engineering by using model-based systems engineering as an interface between the two. This paper illustrates this interface with a descriptive example in unmanned aerial systems. Ultimately, the use of ATP with digital engineering provides a top-down, systems-centric alternative to using AI and ML components as the primary means of engineering intelligence.
Traditional approaches using Deep Neural Networks for classification, while unquestionably successful, struggle with more general intelligence tasks such as “on the fly” learning as demonstrated by biological systems. Organisms possess myriad sensory organs for interacting with their environment. By the time these diverse sensory signals reach the brain, however, they are all converted into a spiking information representation, over which the brain itself operates. In a similar manner, myriad machine learning (ML) algorithms today compute on equally diverse data modalities, but without a consistent information representation for their respective outputs, these algorithms are frequently used independently of each other. Consequently, there is growing interest in information representations to unify these algorithms, with the larger goal of designing ML modules that may be arbitrarily arranged to solve larger-scale ML problems, analogous to digital circuit design today. One promising information representation is that of a “symbol” expressed as a high-dimensional vector, thousands of elements long. Hyperdimensional computing (HDC) is an algebra for the creation, manipulation, and measurement of correlations among “symbols” expressed as hypervectors. Towards this goal, an external plexiform layer (EPL) network, an echo state network (ESN), and a modern Hopfield network were adapted to implement the mathematical operations of complex phasor-based HDC. Further, since symbol error correction is an important consideration for computing with networks of ML modules, a task-agnostic minimum query similarity for complete symbol error correction was measured as a function of hypervector length. Based on these results, problem-independent similarities have been established within which HDC equations should be designed. Lastly, these ANNs were tested against several tasks representative of online and “plug & play” ML among expeditionary robots. For all criteria considered, the modern Hopfield network was the most capable ANN evaluated for use with complex phasor-based HDC, providing 100% symbol recovery in a single time step for nearly all parameter settings.
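For concreteness, here is a small numpy sketch of complex phasor HDC of the kind the paper builds on: random phasor hypervectors, elementwise binding, superposition bundling, and a cosine-like similarity. The dimension and symbols are arbitrary, and the ANN implementations (EPL, ESN, modern Hopfield) evaluated in the paper are not shown.

```python
# Complex phasor HDC sketch: symbols are unit-magnitude complex hypervectors.
import numpy as np

D = 4096
rng = np.random.default_rng(0)

def symbol():
    """Random phasor hypervector: e^{i*theta} with uniform phases."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, D))

def bind(a, b):          # binding = elementwise phase addition
    return a * b

def unbind(c, a):        # inverse binding with the conjugate
    return c * np.conj(a)

def bundle(*vs):         # superposition, re-normalized to unit phasors
    return np.exp(1j * np.angle(np.sum(vs, axis=0)))

def sim(a, b):           # similarity in [-1, 1]; near 0 for unrelated symbols
    return float(np.real(np.vdot(a, b)) / D)

color, shape = symbol(), symbol()
red, square = symbol(), symbol()
record = bundle(bind(color, red), bind(shape, square))   # {color: red, shape: square}

# Querying the record: unbinding the role recovers a noisy copy of the filler,
# which a cleanup/error-correction stage (the paper's ANNs) would snap back to "red".
noisy_red = unbind(record, color)
print(round(sim(noisy_red, red), 2), round(sim(noisy_red, square), 2))  # clearly positive vs. ~0.0
```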
Barriers and Enablers of Innovation and Transformative Technologies
The Air Force Research Laboratory's Information Directorate has a rich history of developing advanced computing technology for the warfighter, guiding emerging technologies from the laboratory to the field. Memristors, also known as resistive random-access memory, are one such computing technology. This paper details AFRL's technical maturation of memristors for neuromorphic computing from early concept through device fabrication and architectural implementation using a combination of in-house programs, contractual efforts, and collaborative partnerships. It additionally explores recent DoD architectural advancements to further enable low size, weight, and power computationally efficient intelligent computing at the edge.
Discussions of future defense technology often call for new technology development and frequently use such words as “Disruptive”, “Transformative”, “Radical”, and “Revolutionary”. However, there is often a mismatch between what was discussed and what is put forward as potential solutions. This is due in many cases to using words that do not have agreed-upon definitions. This discussion presents definitions for each, aligns them to an emerging Innovation Model (the 4GIM) which provides expected metrics, and ends with the use of a Modified Strategy Canvas that enables strategy gaps to be readily compared to new capability contributions.
A United States Department of Defense Metaverse could provide significant benefit to advance the missions of the military services, but there are significant challenges to realizing its full potential. The DoD Metaverse could quickly upskill warfighters, allow for the integration of technologies and services across the DoD, enable unprecedented command and control capabilities, inform new design and deployment decisions for technologies coming out of the research laboratories, and serve as a training tool to equip warfighters for real-world scenarios. Digital prototyping could improve the adoption capacity of impactful disruptive technologies and help propel their transition path to the physical world. For these benefits to be fully realized, however, key obstacles to wide-scale adoption need to be addressed. The DoD needs interoperable and secure architectures to be able to share investment into the development of these synthetic environments. Siloed networks, slow acquisition pipelines, and a lack of understanding are preventing the DoD from adopting the very tools that will help improve its adoption capacity of other critical technologies. As we learn to overcome these challenges, we will need to address the legal, moral, ethical, and security considerations of having service members immersed into these virtual worlds. This paper will help the reader better understand metaverse technologies and their fundamental concepts. It will highlight essential components, outstanding required functionality, and specific opportunities for early adoption.
Facing the issue of fake news, numerous researchers have devoted effort to different approaches, ranging from technical to social and behavioral perspectives. This paper proposes a machine-learning-based framework that considers characteristics or features of the various stakeholders and components of the fake news context across technical, social, and behavioral views, including the fake news messages, users, contexts, creators or senders, and mitigators. With end products consisting of news classified as real or fake and suggested action plans based on all of those features, the system promises to adapt flexibly to contextual changes over time, which is a struggle for most purely technical systems. Such a framework not only contributes to the literature but also provides decision-support tools to fake news mitigators, helping to predict, prevent, eliminate, or minimize the impacts of fake news.
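To ground the classification component, a hedged sketch follows, assuming a toy dataset with one text feature plus numeric stakeholder/context features; the example texts, context columns, and model choices are illustrative placeholders for the multi-view features the framework describes.

```python
# Toy sketch of a multi-view fake news classifier: message text features are
# combined with stakeholder/context features before classification.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

texts = ["officials confirm new policy", "miracle cure doctors hate",
         "city council meeting rescheduled", "shocking secret they hide from you"]
# Hypothetical context features: [sender account age (years), prior flags]
context = np.array([[6.0, 0], [0.2, 7], [4.0, 1], [0.1, 9]])
labels = np.array([0, 1, 0, 1])            # 0 = real, 1 = fake

text_features = TfidfVectorizer().fit_transform(texts)
X = hstack([text_features, csr_matrix(context)])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))                      # sanity check on the toy training set
```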
With the explosive growth in the amount of information exchanged over the Internet, we have witnessed fast propagation of mis/disinformation. This trend of mis/disinformation must be detected early and curbed effectively in order to mitigate its potential harm to the nation and society. Our previous work successfully identified distinctive patterns of the propagation of true and fake news in the form of text over social media, with Twitter as a case study. In this work, our goal is to extend the target to include multimedia mis/disinformation and study the characteristics of their dissemination using machine learning based techniques. We also aim to investigate countermeasures that can be employed to slow down or prevent further propagation based on the identified characteristics.
Individuals are easily exposed to fake news on social media, which could lead to devastating consequences. With its unique transparency and reliability features, such as smart contracts, securely stored hashtags, and reputation systems, blockchain technology is a promising solution to combat fake news. Although a few studies have examined this novel application of blockchain, none of them have empirically and systematically tested the applicability of such a framework in real-life contexts and the acceptance of related users. This paper contributes to the literature by combining three theories (Technology Acceptance Model, Social Capital Theory, and Social Cognitive Theory) into a proposed conceptual framework to empirically examine how people would accept the introduced blockchain-based fake news systems.
Deciding how to invest money is complicated by the trade-offs between risks and returns. High Yield Investment Programs (HYIPs) promise extremely high return rates, sometimes more than 100% daily, which can attract many investors. As a result, investing in cryptocurrencies like Bitcoin, in HYIPs, and through their websites is booming. Prior researchers have already classified HYIPs as scams. However, some websites still last for many years or constantly pay investors as promised. Therefore, we propose that the trustworthiness of HYIP websites is questionable and is affected by different factors. Using a large secondary dataset, we empirically tested the relationships between the response variable (trustworthiness) and two groups of explanatory variables: stability factors and media/platform factors. Overall, the effects of the media/platform group on trustworthiness are much smaller than those of the stability group. This research provides theoretical and practical insights into HYIP investment.
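As an illustration of the analysis design (hypothetical variable names and synthetic data, not the paper's dataset or models), one can regress a trustworthiness measure on each explanatory group and compare the variance explained:

```python
# Sketch: compare how much variance stability vs. media/platform factors explain.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
stability = rng.normal(size=(n, 2))        # e.g., lifetime (days), payout consistency
media = rng.normal(size=(n, 2))            # e.g., forum mentions, platform count
# Synthetic trustworthiness driven mostly by the stability group
trust = 1.5 * stability[:, 0] + 0.8 * stability[:, 1] + 0.2 * media[:, 0] \
        + rng.normal(0, 0.5, n)

r2_stability = LinearRegression().fit(stability, trust).score(stability, trust)
r2_media = LinearRegression().fit(media, trust).score(media, trust)
print(f"R^2 stability-only: {r2_stability:.2f}, media/platform-only: {r2_media:.2f}")
```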
A lower extremity exoskeleton for the right leg is assembled using 3D printed soft plastic parts within a semi-rigid frame and structure, balancing the rigid and soft flexible components. One leg has been assembled at this time, allowing us to test the function of the mechanical system’s frame and structure. The comfort and safety of the exoskeleton are important for increasing the time the exoskeleton can be tolerated by a patient. The knee is the largest and most complicated of the lower extremity joints. This exoskeleton joint accommodates the rotational and sliding movement of the femur at the knee to increase the comfort level by allowing for the inherent anatomical motion of the knee. This is often not taken into consideration for most lower extremity orthopedic braces, exoskeletons, or prosthetics. Surface EMGs and IMUs are utilized as input sensors for the exoskeleton control system. Myo-Ware EMG/IMU sensors from Thalmic Labs, originally designed for the upper extremity, must be adapted for our lower extremity exoskeleton. Pneumatic artificial muscles and Bowden cables controlled by electric motors are incorporated to assist movement as needed. The EMG/IMU input data is modified and synchronized by a supervised machine learning sensor fusion algorithm. EMG and IMU characteristics help reduce noise and synchronize input data. A reinforcement learning (RL) algorithm determines intention and controls the exoskeleton actuators. This depends on adequately implementing the RL network algorithm; thus, tests of the activities of daily living (standing, sitting, and squatting while maintaining balance) are needed.
Remaining competitive in future conflicts with technologically-advanced competitors requires us to continue to invest in developing robust artificial intelligence (AI) for wargaming. Although deep reinforcement learning (RL) continues to show promising results in intelligent agent behavior development, it has yet to perform at or above the human level in the long-horizon, complex tasks typically found in combat modeling and simulation. Capitalizing on the proven potential of RL and recent successes of hierarchical reinforcement learning (HRL), our research aims to extend the use of HRL to create intelligent agents capable of performing effectively in these large and complex simulation environments. We plan to do so by developing a scalable HRL agent architecture and training framework, developing a dimension-invariant dynamic abstraction engine, and demonstrating scalability by incorporating our approach into a high-fidelity combat simulation.
Explainable artificial intelligence (XAI) is an area of ongoing research for a variety of machine learning applications that aims to describe the decision making employed by many artificial neural networks (ANNs) in a manner intuitive to humans. Another seemingly unsolved topic generating research interest is AI conceptual understanding (such as in large language models), which falls within the more general problem of AI model generalizability and integrability. How might computational problems of that difficulty be solved, and more importantly, which approach will result in genuine advancement in the fields of neuroscience and/or artificial intelligence? An often-used strategy in machine learning is to use knowledge about biological brains to inspire machine learning model design. That strategy can continue to be useful and novel because both neuroscience and artificial intelligence are rapidly growing fields of research; the same unification strategy between the two fields can yield significantly different models as neuroscience knowledge and machine learning capabilities both grow and develop. In this paper, we focus on the usefulness of using current neuroscience knowledge to design biologically hierarchical and modular architecture (BHMA) models of the brain that can solve spatial learning tasks. We discuss the potential implications of biologically constrained hierarchical neural networks for the future of human-computer interaction, computational neuroscience, and machine learning, given their potential for generating more explainable models with verifiable aspects of conceptual understanding.
The Software-Defined Networking (SDN) paradigm has become one of the most studied networking concepts thanks to its main characteristics: it is open and programmable. SDN takes a centralized approach to network intelligence, decoupling the packet-forwarding process (data plane) from the routing process (control plane) in network devices. Hence, the switches only forward packets and cannot make any routing decisions; decision making is done by the controller. OpenFlow is the most popular protocol used for communication between the switches and the controller. The controller can thus instruct the forwarding devices through flow-table logic, moving from traditional destination-based forwarding to more efficient generalized forwarding. This paper presents an example of a software application that runs on the controller and instructs the network to use the shortest path between each pair of nodes. An implementation of the Dijkstra and Bellman-Ford algorithms on a Ryu SDN controller is presented, and a comparison between the two approaches is provided.
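As a hedged sketch of the controller-side logic (the topology, link costs, and switch names are illustrative; in a Ryu application the graph would be built from topology-discovery events and the resulting path installed as OpenFlow flow entries), here is a compact Dijkstra shortest-path computation:

```python
# Dijkstra shortest-path sketch over a toy switch topology; a Ryu app would
# build `topology` from discovered links and push the path as flow-table entries.
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: {neighbor: cost}}; returns (total cost, path as node list)."""
    pq = [(0, src, [src])]
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph[node].items():
            if nbr not in visited:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

topology = {                                   # hypothetical switch graph (link costs)
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 2, "s4": 5},
    "s3": {"s1": 4, "s2": 2, "s4": 1},
    "s4": {"s2": 5, "s3": 1},
}
print(dijkstra(topology, "s1", "s4"))          # (4, ['s1', 's2', 's3', 's4'])
```

Bellman-Ford would replace the priority queue with repeated edge relaxation, trading higher complexity for the ability to handle negative edge weights, which is one axis of the comparison the paper describes.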