HeartID biometric authentication technology is integrated into the multi-faceted steering wheel and car seat, allowing only authorized personnel to operate the vehicle and to access its connected devices and computers. HeartID is intended for applications such as law enforcement and ride-sharing services, where a person gains access to the car through keyless entry technology. In this study, we investigate incorporating the human heart signal, the electrocardiogram (ECG), into autonomous cars. Our platform provides secure authentication for end users, using their heart signal to enable entry to the car. In this paper, we present ECG-based biometric authentication for connected autonomous vehicles, which can act as an interface between humans and sensors for authentication purposes. We also turn ECG noise into a useful feature: the noise drives a true random number generator (TRNG) with high entropy. To evaluate HeartID, the NIST test suite is applied to assess the randomness of the TRNG.
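The noise-to-entropy step can be pictured with a minimal sketch: subtract a moving-average baseline from the ECG samples, keep the least-significant bit of each residual, and screen the resulting bitstream with the NIST frequency (monobit) test. The window size, scaling factor, and absence of any conditioning stage are illustrative simplifications, not the paper's actual TRNG design.

```python
import math


def ecg_noise_bits(samples, window=8):
    """Extract candidate random bits from the high-frequency noise of an
    ECG trace: subtract a moving-average baseline, then keep the
    least-significant bit of each (scaled, rounded) residual.
    Illustrative only; a real design adds debiasing/conditioning
    before any NIST evaluation."""
    bits = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        residual = int(round((samples[i] - baseline) * 1000))
        bits.append(residual & 1)
    return bits


def monobit_pass(bits, alpha=0.01):
    """NIST SP 800-22 frequency (monobit) test: for a random sequence
    the proportion of ones should be close to 1/2."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    p_value = math.erfc(s / math.sqrt(2))
    return p_value >= alpha
```

The monobit test is only the first of the fifteen NIST SP 800-22 tests; a full evaluation runs the entire suite on much longer sequences.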
Wearable IoT monitoring devices are becoming increasingly popular in a market where heart-rate monitors and pulse-oximeter sensors are integrated into a single device and already play an important role in everyday life. With these considerations in mind, it is important to maintain the security and privacy of users. Biometric authentication offers several benefits, such as improved facilitation, enablement, and automation. However, traditional biometric modalities such as fingerprint, face, and iris require specific hardware or sensors to capture the biometric. In this paper, we introduce a next-generation biometric, the photoplethysmogram (PPG), which is internal to the body and therefore offers a number of advantages. First, by the very fact that they are internal, such signals are harder to clone, to harvest, and to hack. Other benefits include liveness detection and interoperability, which traditional modalities do not necessarily provide. In this study, we developed PPG-based biometric key generation, in which key bits are extracted by our adaptive quantization approach. Experimental results show that 175 key bits with 99.9% average reliability and 0.89 min-entropy can be achieved.
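A bare-bones version of the quantization idea (not the paper's exact adaptive scheme) can be sketched as follows: each PPG feature is compared with the population mean, and features that fall inside a guard band around the mean are discarded, trading key length for reliability.

```python
import statistics


def adaptive_quantize(features, population, guard=0.25):
    """Map each feature to a key bit by comparing it with the population
    mean: above-mean -> 1, below-mean -> 0. Features within
    +/- guard * std of the mean are dropped as unreliable. The guard
    factor of 0.25 is an illustrative choice, not the paper's value."""
    key = []
    for i, x in enumerate(features):
        mu = statistics.mean(p[i] for p in population)
        sigma = statistics.stdev(p[i] for p in population)
        if abs(x - mu) < guard * sigma:
            continue  # feature too close to the threshold: skip it
        key.append(1 if x > mu else 0)
    return key
```

Widening the guard band raises reliability (bits flip less often across sessions) but shrinks the key, which is the trade-off behind the 175-bit / 99.9% figure reported above.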
Wearable technology is growing exponentially and becoming part of our lives, with daily activities tracked and monitored. As more and more wearable devices connect to the internet, the need for authentication becomes more urgent. Biometrics is rapidly gaining popularity as a powerful authenticator to meet this challenge, enabling users to identify themselves quickly and securely. However, because of the nature of IoT devices, continuous authentication is needed. Cardiovascular biometrics such as ECG and PPG are already moving forward as continuous-authentication modalities. However, it was recently shown that the ECG signal is vulnerable to presentation attacks, and since PPG is widely used in wearable devices, it is even more exposed to such attacks. In this paper, we introduce a systematic presentation attack on the PPG biometric, in which an attacker collects a short template of the victim's PPG and uses it to map the adversary's PPG onto the victim's.
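The mapping idea can be sketched as a first-order moment-matching transform, assuming the attacker only needs the mean and variance of the stolen template; the systematic attack described here is more sophisticated than this toy version.

```python
import statistics


def spoof_ppg(attacker_ppg, victim_template):
    """Map the attacker's PPG waveform onto the victim's signal
    statistics: z-normalize the attacker's samples, then rescale them
    to the mean and standard deviation estimated from a short stolen
    template. A first-order sketch of the mapping idea only."""
    a_mu = statistics.mean(attacker_ppg)
    a_sd = statistics.stdev(attacker_ppg)
    v_mu = statistics.mean(victim_template)
    v_sd = statistics.stdev(victim_template)
    return [(x - a_mu) / a_sd * v_sd + v_mu for x in attacker_ppg]
```

Even this crude transform defeats any matcher that relies chiefly on amplitude statistics, which is why morphology-aware features and liveness checks matter for PPG systems.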
Physical Unclonable Functions (PUFs) act as functions encoded in hardware that produce a unique output, referred to as a response, for a specific input, called a challenge. PUFs provide varying levels of security and can therefore be used in different applications, depending on the number of their available input-output pairs, referred to as Challenge-Response Pairs (CRPs). For example, a PUF with only a single challenge-response pair can be used for identification, while a PUF with multiple CRPs can provide multiple different session keys for authentication [1, 2]. In the first case, the response needs to be secret, while in the second, responses can also be used without any secrecy, as long as the related CRPs are not reused. Generally, PUFs are vulnerable to modeling and machine learning attacks. In this paper, we investigate and show the resiliency of DRAM-based PUFs against machine learning attacks (Naive Bayes (NB), Logistic Regression (LR), and Support Vector Machine (SVM)) as well as deep learning attacks (in particular, convolutional neural networks (CNNs)). We are the first to provide a detailed analysis of on-board DRAM startup values for the purpose of generating unique IDs, and of their vulnerability to attacks. We performed our experiments on the Digilent Atlys board (Xilinx Spartan 6 FPGA), using the on-board DRAM memories (MIRA P3R1GE3EGF G8E DDR2). Our results indicate that the three startup-value-based DRAM PUFs (DRAM1, DRAM2, and DRAM3) are robust against machine learning attacks.
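A modeling attack of the kind evaluated here can be sketched with a from-scratch logistic-regression attacker trained on observed CRPs. The `train_lr`/`predict` helpers and the linear toy PUF used in the example are illustrative assumptions: a linear-threshold PUF is learnable from CRPs, which is exactly the behavior the startup-value DRAM PUFs are shown to resist.

```python
import math


def _sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))


def train_lr(crps, epochs=100, lr=0.1):
    """Fit a logistic-regression model r ~ sigmoid(w . c) to observed
    challenge-response pairs by plain stochastic gradient descent --
    the basic modeling attack considered above."""
    n = len(crps[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for c, r in crps:
            p = _sigmoid(sum(wi * ci for wi, ci in zip(w, c)))
            for i in range(n):
                w[i] += lr * (r - p) * c[i]
    return w


def predict(w, c):
    """Predict the response bit for a fresh challenge."""
    return 1 if sum(wi * ci for wi, ci in zip(w, c)) > 0 else 0
```

Against a linear toy PUF the attacker reaches high test accuracy from a few hundred CRPs; against responses that are statistically independent of the challenge (as with DRAM startup values), the same attacker cannot do better than guessing.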
Dental caries is a microbial disease that results in localized dissolution of the mineral content of dental tissue. Despite a considerable decline in its incidence, dental caries remains a major health problem in many societies. Early detection of incipient lesions at the initial stages of demineralization allows the implementation of non-surgical preventive approaches to reverse the demineralization process. In this paper, we present a novel approach combining deep convolutional neural networks (CNNs) and the optical coherence tomography (OCT) imaging modality for classification of human oral tissues to detect early dental caries. OCT images of oral tissues with various densities were input to a CNN classifier to determine variations in tissue density resembling the demineralization process. The CNN automatically learns a hierarchy of increasingly complex features, and a related classifier, directly from the training data sets. The initial CNN layer parameters were randomly selected. The training set is split into minibatches of 10 OCT images each. Given a batch of training patches, the CNN employs two convolutional and pooling layers to extract features and then classifies each patch based on the probabilities from the softmax classification layer (output layer). Afterward, the CNN computes the error between the classification result and the reference label, and uses backpropagation to fine-tune all layer parameters, minimizing this error with the batch gradient descent algorithm. We validated our proposed technique on ex-vivo OCT images of human oral tissues (enamel, cortical bone, trabecular bone, muscular tissue, and fatty tissue), which attested to the effectiveness of our proposed method.
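The two-stage architecture described above can be sketched as a plain-Python forward pass: two convolution + ReLU + max-pooling stages, a flattening step, a dense layer, and a softmax output over the five tissue classes. The kernels and weights below are placeholders, and training (backpropagation, batch gradient descent) is omitted; this is only the inference shape of the classifier.

```python
import math


def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]


def relu(fm):
    return [[max(0.0, v) for v in row] for row in fm]


def maxpool2(fm):
    """2x2 max pooling with stride 2."""
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]


def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]


def forward(img, k1, k2, w, b):
    """Two conv+pool stages, flatten, dense layer, softmax output --
    the classifier shape described above, with placeholder weights."""
    x = maxpool2(relu(conv2d(img, k1)))
    x = maxpool2(relu(conv2d(x, k2)))
    flat = [v for row in x for v in row]
    logits = [sum(wi * vi for wi, vi in zip(wr, flat)) + bi
              for wr, bi in zip(w, b)]
    return softmax(logits)
```

In the actual system, the weights `k1`, `k2`, `w`, and `b` are the quantities fine-tuned by backpropagation over the 10-image minibatches, and the argmax of the softmax probabilities selects one of the five tissue labels.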