Open Access
Deep inner-knuckle-print recognition using lightweight Siamese network
2 August 2024
Hongxia Wang, Hongwu Yuan
Abstract

Texture features and stability have attracted much attention in the field of biometric recognition. The inner-knuckle print is unique and hard to forge, so it is widely used in personal identity authentication, criminal investigation, and other fields. In recent years, the rapid development of deep learning technology has brought new opportunities for inner-knuckle print recognition. We propose a deep inner-knuckle print recognition method built around a network named LKSNet. By establishing a lightweight Siamese network model and combining it with a robust cost function, we realize efficient and accurate recognition of the inner-knuckle print. Compared with traditional methods and other deep learning methods, the network has lower model complexity and computational resource requirements, which enables it to run on lower hardware configurations. In addition, this paper also uses the knuckle prints of all four fingers for concatenated fusion recognition. Experimental results demonstrate that this method achieves satisfactory results in the task of inner-knuckle print recognition.

1.

Introduction

In the information-based network society, we often need a reliable way to verify an individual’s true identity. Biometric recognition technology is such a solution. It uses the inherent physical or behavioral characteristics of the human body to verify personal identity by means of image processing and pattern recognition. Compared with traditional authentication methods based on passwords or identity cards, biometric identification has unique advantages: it is always carried with the person, imposes no additional memory burden, and is difficult to fake. These characteristics give biometric identification higher security, reliability, and practicability. Therefore, biometric technology has been widely applied in various fields, providing an efficient and reliable solution for identity verification.

With the rise of the “Internet of Things,” biometric identification technology is showing broad application prospects. In the field of the intelligent visual Internet of Things, biometric identification, as one of the core technologies, is mainly applied to acquiring human identity. At present, fingerprint recognition, face recognition, and iris recognition are the three most successful biometric recognition technologies. In addition to these, academia and industry are also actively researching and promoting other biometric identification technologies with great market potential. These constantly evolving technologies will bring more application opportunities and value to various industries and help build an intelligent and efficient Internet of Things ecosystem.

In recent years, researchers have shown widespread interest in emerging biometric identification technologies based on human hand features.1–5 In addition to the traditional palmprint recognition, palmar vein recognition, and finger vein recognition, knuckle print recognition has also become one of the most closely watched technologies. Knuckle print recognition has unique advantages. First, the texture and line features of the knuckle print are rich, which enables high recognition accuracy.6 Second, knuckle prints are convenient to collect and can be captured with an ordinary low-resolution camera. Nowadays, such cameras are low-cost and widely available, which facilitates the promotion and application of knuckle print recognition.7,8 In addition, the knuckle print can be combined with the palmprint, hand shape, and finger vein to form a high-precision recognition system.9–11 Finally, knuckle prints exhibit distinct features such as line distance and direction, which could potentially enhance their suitability for large-scale retrieval tasks, particularly in scenarios where extensive data collection is feasible.

Knuckle prints refer to the curved muscle lines or textured areas located on the first, second, and third joints of a person’s fingers. They follow their own distinctive patterns and are clearly distinguished from other biometric features such as fingerprints and palmprints. These regions contain the fine structure and texture features of the knuckles of the hand, which can be used for individual identification and recognition. Compared with fingerprints and palmprints, knuckle prints present certain differences in morphology and features, making them an independent and valuable biometric modality. Compared with fingerprints, their lines are slightly thicker, and the small furrows between the lines are slightly wider. Overall, the texture structure of the knuckle print is not complicated; most of it is composed of horizontal or oblique straight lines, wavy lines, and curved lines. Compared with the palmprint, its lines are generally shorter, without lines as long as the principal lines of the palmprint, and the line directions are also less varied.

All ten fingers of the human hands have knuckle prints, which can be divided into two types: knuckle prints on the back of the hand and knuckle prints on the palm side. The former are also called dorsal knuckle prints, whereas the latter are called inner knuckle prints. The two types differ in position and features, providing more sources of information for biometric identification. More accurate and comprehensive individual recognition can be achieved by analyzing and comparing the dorsal and inner knuckle prints. Finger textures are considered unique and stable over time,12 and, like fingerprints, they differ even between identical twins.13,14

On a finger, there are usually three distinct areas of flexor muscle lines, corresponding to the three knuckles. Among them, the knuckle prints in the middle region contain rich information and are called the main knuckle print. The flexor-line area closest to the fingertip is called the first little knuckle print, and the area closest to the palm is called the second little knuckle print. These different knuckle areas differ in position and features, providing more detailed information for individual finger recognition. By analyzing and comparing the main knuckle print, the first little knuckle print, and the second little knuckle print, more accurate and comprehensive finger feature recognition can be achieved.

This paper mainly studies the recognition algorithm of inner-knuckle print. The main contributions of this research are as follows:

  • 1. the first method to utilize similarity as a deep network metric for knuckle print recognition

  • 2. propose a fast and universal method for obtaining the region of interest (ROI) of knuckle prints

  • 3. provide a self-collected dataset in the absence of a public dataset on the inner-knuckle prints

  • 4. propose a lightweight network (LKSNet) as a branch of the Siamese network model to extract the similarity of knuckle prints, which improves both speed and accuracy compared to the original twin network

  • 5. propose robust loss to improve training accuracy and eliminate the imbalance between the categories of some knuckle databases, which solves the problem of difficult case mining to a certain extent

  • 6. propose the multi-inner-knuckle-print fusion network (MIKPF) algorithm, which fuses the ROIs of the four fingers to achieve the best recognition rate.

2.

Related Work

In this section, we briefly review ROI extraction algorithms for the inner-knuckle print and the main families of feature extraction and matching methods used for inner-knuckle print and palmprint images. In general, the inner-knuckle print recognition process is illustrated in Fig. 1.

Fig. 1

Recognition process of inner-knuckle print.


First, the knuckle print image is collected. Second, the image is preprocessed, for example by image normalization and ROI cutting. Features are then extracted for matching and recognition. For a multi-modal recognition scheme, multiple features also need to be fused before output. Finally, the recognition result is obtained.

2.1.

ROI Extraction Algorithm for Inner-Knuckle Print

The preprocessing of knuckle prints mainly includes knuckle print ROI image cutting and image quality assessment. For the preprocessing of the inner knuckle lines, the ROI of the knuckle lines is generally determined according to the energy intensity of the inner-knuckle lines. Kang et al.15 proposed a representative preprocessing algorithm for cutting ROI images of inner knuckles. The specific processing flow is illustrated in Fig. 2. This paper performs a similar ROI extraction operation on the provided original palm images and fine-tunes some of the details to achieve better extraction results, as described in detail in Sec. 4.1.

Fig. 2

Inner-knuckle print ROI extraction process.


2.2.

Inner-Knuckle Print Feature Extraction and Matching

Inner-knuckle print recognition follows the same general pipeline as palmprint recognition, so we classify inner-knuckle print recognition algorithms along similar lines. One approach is structure-based identification. Liu et al.16 used the Gabor filter and a derivative line feature extraction algorithm to extract the line features of the knuckle prints, fused the lines extracted by the two, and used normalized cross-correlation for line feature distance matching. Xu et al.17 first used competitive code to calculate the Gabor energy map and extracted line features from it. They then constructed a structure-context descriptor (SCD) representation for the line features and used the Earth Mover’s Distance (EMD) for matching.

In addition, recognition methods based on subspace learning have also been applied in related research. Savič and Pavešić18 proposed a knuckle-print recognition scheme based on LDA and regularized linear discriminant analysis (RDLDA). Sanches19 proposed a recognition method based on PCA+LDA, which first used principal component analysis (PCA) to reduce the dimensionality of knuckle print images and then used linear discriminant analysis (LDA) to increase discriminability. Zhang et al.20 proposed a recognition algorithm based on locality preserving projection (LPP), which first performed a wavelet transform on the knuckle-print image and then used LPP to reduce the dimensionality of the wavelet coefficients.

Recognition methods based on direction coding have long been an important family in the field of biometric recognition, with high robustness and stability. Meraoumia et al.21 proposed a knuckle-print recognition algorithm based on competitive code that extracts directional coding features from the main knuckle print and the first little knuckle print of the inner-knuckle print for matching. Michael et al.22 proposed a directional coding recognition method for knuckle prints that performs a wavelet transform on the knuckle prints to obtain a low-resolution representation. They then used the Sobel gradient operator for edge detection in the horizontal, vertical, 45-deg, and negative 45-deg directions and compared the energy levels in the four directions at the same location. The directional index value is used as the directional feature of the position, and the Hamming distance is used for feature matching. For the first little knuckle region, Kumar and Zhou23 proposed a knuckle-print recognition method based on competitive code and the local Radon transform and designed a matching scheme that fuses global and local matching values to obtain robust recognition results.

Local features capture the local structure of an image, are robust to scale and rotation changes, and give compact feature representations, which makes them an important tool in image processing and computer vision tasks. Recognition based on local image descriptors is also one of the common approaches in inner-knuckle print recognition. Liu et al.24 proposed an enhanced LBP for knuckle-print recognition that does not encode the 3×3 neighborhood but instead encodes the four neighborhoods on the left and right sides of a horizontal line. After obtaining the coded image, the real-valued coded image is decomposed into multilayer binary images, and cross-correlation is used for feature matching. Nanni et al.25 proposed an inner-knuckle print recognition algorithm based on a multi-resolution local ternary pattern (LTP) and compared its recognition performance with LBP. Bahmed and Mammar26 proposed an improved feature extraction method, the average line local binary pattern (ALLBP), which improves feature extraction in the finger inner-knuckle print region.

The inner-knuckle prints have clear texture features and, unlike palmprints and fingerprints, are not cluttered with fine lines that cause interference, so recognition methods based on texture features can often extract important image information. Goh et al.13 proposed a texture feature extraction method based on the Ridgelet transform, which first divides the knuckle print image into blocks, then applies the Ridgelet transform to each block, and takes the normalized Ridgelet coefficients of each block as texture features. Nezhadian and Rashidi27 adopted two feature extraction methods, Gabor wavelet filtering and wavelet energy, and a forward feature selection algorithm then selected the 50 most discriminative features for recognition.

Because the inner knuckle print is located on the palm side of the hand, some researchers perform multi-modal recognition by simultaneously extracting other features of the hand. Kanhangad et al.28 proposed a unified recognition framework that fuses the 2D palmprint, 2D knuckle print, 3D hand shape, and 3D palmprint, in which the 2D knuckle print uses the competitive code algorithm for feature extraction and matching. Zhu and Zhang29 proposed a hierarchical multi-modal recognition scheme in which the first layer uses the geometric features of the fingers for matching, the second layer extracts inner knuckle print features for matching, and the third layer extracts palmprint features for matching. Guan et al.30 proposed a multi-modal recognition algorithm that fuses the inner knuckle print and finger veins, where line features are extracted from both and feature-level fusion is performed on the line features. Arulalan et al.31 proposed a multi-modal biometric recognition system based on the iris and inner knuckle prints. Bahmed et al.32 proposed a multi-modal hand recognition system that utilizes the finger inner-knuckle print and finger geometric features; in addition, both the main and little knuckle prints are used.

In recent years, deep learning has flourished in the field of computer vision, and much new research in biometric recognition has achieved good results. Xue et al.33 fed inner knuckle print images into a convolutional neural network for feature extraction and studied the influence of the learning rate, the number of convolution kernels, the number of neurons in the fully connected layer, the number of convolutional layers, and different optimization algorithms on the recognition results, obtaining the best network parameters. Prasanna and Deepika34 trained the same convolutional neural network topology on palmprints and inner knuckle prints separately to adapt the model to different biometric features and then carried out feature-level fusion. Most studies, however, do not focus on inner-knuckle print recognition; they often use the knuckle prints on the back of the hand, either alone or cascaded with other features such as palmprints.6,35–38 Using the knuckle prints on the back of the hand requires collecting an additional image of the back of the hand, which is inconvenient and time-consuming. Moreover, the knuckle prints on the back of the hand are complex, which can lead to errors in extracting the ROI.

3.

Method

3.1.

Lightweight Knuckle Print Siamese Network for Feature Extraction (LKSNet)

Figure 3 illustrates the algorithm flowchart of this paper. We form an RGB input image by concatenating, at the channel level, the knuckle prints of three different parts of a single finger, which alleviates the limited feature content of a single-region grayscale image. Different from the Siamese network in Ref. 39, we propose a new backbone network, LKSNet, as the branch network of the Siamese model, which can fully extract highly discriminative features. At the same time, we use two fully connected layers for multi-feature prediction, used for feature extraction and category classification, respectively. In addition, the depthwise separable module and inverted residual module from MobileNet-V340 are widely utilized in LKSNet, which gives the network a good balance of accuracy and speed. Table 1 presents the specific structure of LKSNet, where mb_i denotes an inverted-residual bottleneck module from MobileNet-V3.

Fig. 3

Algorithm flow chart.


Table 1

Model architecture.

Input | Operator | Out-channel | Stride
100 × 100 × 3 | (stem) Conv2d | 32 | 2
50 × 50 × 32 | (separable_conv) Conv2d | 32 | 1
25 × 25 × 32 | (separable_conv) Conv2d | 16 | 1
25 × 25 × 16 | mb_0 | 32 | 2
13 × 13 × 32 | mb_1 | 32 | 2
7 × 7 × 32 | mb_2 | 80 | 2
7 × 7 × 80 | mb_3 | 80 | 1
7 × 7 × 80 | mb_4 | 80 | 1
7 × 7 × 80 | mb_5 | 80 | 1
4 × 4 × 80 | mb_6 | 192 | 2
4 × 4 × 192 | mb_7 | 192 | 1
4 × 4 × 192 | mb_8 | 192 | 1
4 × 4 × 192 | mb_9 | 192 | 1
4 × 4 × 192 | mb_10 | 320 | 1
4 × 4 × 320 | (conv_before_pooling) Conv2d | 1280 | 1
4 × 4 × 1280 | AvgPooling | 1280 | 1
1 × 1 × 1280 | FC_1 | 500 | —
1 × 1 × 500 | FC_2 | 10 | —
1 × 1 × 10 | FC_3 | 1 | —
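
To make this structure concrete, the following is a minimal PyTorch-style sketch of a weight-sharing Siamese pair built from a simplified LKSNet-like branch. The block settings, module names (SeparableConv, InvertedResidual, LKSNetBranch, SiameseLKSNet), and the way the auxiliary prediction head is attached to the feature difference are our own illustrative choices; the sketch does not reproduce the exact configuration of Table 1 or the authors' implementation.

```python
# Minimal PyTorch sketch of a weight-sharing Siamese pair built on a
# simplified LKSNet-style branch. Layer settings are illustrative and do
# not reproduce the exact Table 1 configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SeparableConv(nn.Module):
    """Depthwise separable convolution block (MobileNet-style)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))


class InvertedResidual(nn.Module):
    """Simplified MobileNet-V3 style inverted-residual bottleneck (mb_i)."""
    def __init__(self, in_ch, out_ch, stride=1, expand=4):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.Hardswish(),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.Hardswish(),
            nn.Conv2d(hidden, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y


class LKSNetBranch(nn.Module):
    """One branch of the Siamese model: 100x100x3 RGB ROI -> 500-d feature."""
    def __init__(self, feat_dim=500):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1, bias=False),
                                  nn.BatchNorm2d(32), nn.Hardswish())
        self.body = nn.Sequential(
            SeparableConv(32, 32, 2), SeparableConv(32, 16, 1),
            InvertedResidual(16, 32, 2), InvertedResidual(32, 80, 2),
            InvertedResidual(80, 80, 1), InvertedResidual(80, 192, 2),
            InvertedResidual(192, 320, 1),
        )
        self.head = nn.Sequential(nn.Conv2d(320, 1280, 1, bias=False),
                                  nn.BatchNorm2d(1280), nn.Hardswish(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(1280, feat_dim))

    def forward(self, x):
        return self.head(self.body(self.stem(x)))


class SiameseLKSNet(nn.Module):
    """Weight-sharing Siamese wrapper: distance d_w plus an auxiliary match score."""
    def __init__(self):
        super().__init__()
        self.branch = LKSNetBranch()
        # Auxiliary classifier on the absolute feature difference (our reading
        # of the FC_2/FC_3 prediction head used for the BCE term).
        self.classifier = nn.Sequential(nn.Linear(500, 10), nn.ReLU(), nn.Linear(10, 1))

    def forward(self, x1, x2):
        f1, f2 = self.branch(x1), self.branch(x2)          # shared weights
        d = torch.norm(f1 - f2, p=2, dim=1)                # Euclidean distance d_w
        pred = torch.sigmoid(self.classifier(torch.abs(f1 - f2))).squeeze(1)
        return f1, f2, d, pred


if __name__ == "__main__":
    model = SiameseLKSNet()
    a, b = torch.randn(2, 3, 100, 100), torch.randn(2, 3, 100, 100)
    f1, f2, d, pred = model(a, b)
    print(d.shape, pred.shape)  # torch.Size([2]) torch.Size([2])
```

Because the two branches share all weights, the Euclidean distance between their feature vectors can be used directly as the similarity measure trained with the cost function of Sec. 3.2.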

3.2.

Robust Loss

To train the LKSNet network well, it is necessary to define a differentiable cost function. Because Siamese networks are not designed to classify their inputs directly, a pure classification cost function (such as cross-entropy) is not suitable. We propose a cost function named Robust_Loss, which consists of two parts, as shown in Eq. (1). The first part is a contrastive loss with modulation factors, and the second part uses the binary cross-entropy cost function (BCE loss) as an auxiliary term. Let x1 and x2 be the inputs to the LKSNet network, and let label denote the binary indicator of whether x1 and x2 match, with label ∈ {0,1}. If x1 and x2 are similar, the label is 0; if not, the label is 1.

Eq. (1)

$$\mathrm{Robust\_Loss}(\omega,\mathrm{label},x_1,x_2,\mathrm{pred}) = \tfrac{1}{2}\,\alpha\,(1-\mathrm{label})\,d_\omega^{2} + \tfrac{1}{2}\,\beta\,\mathrm{label}\,\big[\max(\mathrm{margin}-d_\omega,\,0)\big]^{2} + \mathrm{BCE}(\mathrm{pred},\mathrm{label}),$$
where $\alpha = \mathrm{num}_1/(\mathrm{num}_0+\mathrm{num}_1)$ and $\beta = \mathrm{num}_0/(\mathrm{num}_0+\mathrm{num}_1)$ are the modulation factors, which help mitigate class imbalance (num_0 and num_1 denote the numbers of sample pairs with label 0 and label 1, respectively). $d_\omega$ is the Euclidean distance between the two feature vectors output by the LKSNet network, namely $d_\omega = \lVert F(x_1) - F(x_2)\rVert$, where F denotes the mapping performed by LKSNet from the inputs x1 and x2 to their feature vectors, and ω denotes the network weights. The margin defines a boundary on F such that only negative samples within that range contribute to the cost function. For all training samples, the total cost function is given by Eq. (2)

Eq. (2)

$$L(\omega) = \frac{1}{N}\sum_{i=1}^{N}\mathrm{Robust\_Loss}\big(\omega,(\mathrm{label},x_1,x_2,\mathrm{pred})_i\big).$$
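
As a minimal illustration of Eqs. (1) and (2), the PyTorch sketch below computes the loss for a batch of pairs. It assumes that num_0 and num_1 are counted within the current batch and that pred is the auxiliary match probability produced by the network; the function and variable names are ours, and the snippet is an illustrative reading of the loss rather than the authors' released code.

```python
# Minimal PyTorch sketch of the Robust_Loss in Eqs. (1) and (2).
# num_0 and num_1 are assumed to be counted within the current batch.
import torch
import torch.nn.functional as F


def robust_loss(f1, f2, pred, label, margin=1.0, eps=1e-8):
    """f1, f2: (N, D) branch features; pred: (N,) match probability in (0, 1);
    label: (N,) with 0 = similar pair, 1 = dissimilar pair."""
    d = torch.norm(f1 - f2, p=2, dim=1)                     # d_w

    # Modulation factors alpha and beta computed from the batch label counts.
    num_1 = label.sum()
    num_0 = label.numel() - num_1
    alpha = num_1 / (num_0 + num_1 + eps)
    beta = num_0 / (num_0 + num_1 + eps)

    contrastive = 0.5 * alpha * (1 - label) * d.pow(2) \
                + 0.5 * beta * label * torch.clamp(margin - d, min=0).pow(2)

    # Auxiliary binary cross-entropy term on the predicted match probability.
    bce = F.binary_cross_entropy(pred, label.float(), reduction="none")

    # Eq. (2): average over the N training pairs.
    return (contrastive + bce).mean()


if __name__ == "__main__":
    f1, f2 = torch.randn(4, 500), torch.randn(4, 500)
    pred = torch.sigmoid(torch.randn(4))
    label = torch.tensor([0, 1, 1, 0])
    print(robust_loss(f1, f2, pred, label).item())
```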

3.3.

Multi-Inner-Knuckle Print Fusion Network

This section uses the inner knuckle prints of all four finger regions for fusion recognition to achieve higher recognition accuracy. As shown in Fig. 4, we propose a simple fusion network framework that feeds the RGB ROIs of the four fingers into LKSNet to obtain four similarity scores and then makes a fusion decision on these four scores to obtain the final prediction. For a positive (match) prediction, the decision fusion rule is given by Eq. (3).

Eq. (3)

(Score1 < 1 && Score2 < 1 && Score3 < 1) || (Score1 < 1 && Score2 < 1 && Score4 < 1) || (Score2 < 1 && Score3 < 1 && Score4 < 1),

Eq. (4)

((Score1 < 1 && Score2 < 1) || (Score1 < 1 && Score3 < 1) || (Score2 < 1 && Score3 < 1)) && (Score1 + Score2 + Score3 + Score4 < 4).

Fig. 4

MIKPF framework.

JEI_33_4_043034_f004.png

Since the results of the inner knuckle prints of the four fingers are fused at the decision layer and the number of fingers is even rather than odd, a simple majority vote is not sufficient; a constraint must therefore be added to handle the case in which only two fingers predict correctly [i.e., Eq. (4)]. When the decision rule of Eq. (3) or Eq. (4) is satisfied, we consider the pair a correct match. Eq. (3) effectively prevents decision errors caused by an excessively large similarity weight of a single finger, and Eq. (4) covers the case in which only half of the fingers predict correctly while preventing the negative cases from taking an excessive proportion.
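
As a concrete reading of this decision rule, the short Python sketch below accepts a probe when either the three-finger rule of Eq. (3) or the constrained two-finger rule of Eq. (4) holds, using a match threshold of 1 on each similarity score. Treating the two rules as alternatives is our interpretation of the text, and the function name is ours.

```python
# Illustrative sketch of the MIKPF decision fusion in Eqs. (3) and (4).
# A pair is accepted when either the three-finger rule (3) or the constrained
# two-finger rule (4) holds; this is our interpretation, not released code.
def mikpf_match(s1, s2, s3, s4, thr=1.0):
    """s1..s4: similarity scores of the four fingers (smaller = more similar)."""
    m1, m2, m3, m4 = s1 < thr, s2 < thr, s3 < thr, s4 < thr

    # Eq. (3): three fingers agree (the listed combinations).
    rule3 = (m1 and m2 and m3) or (m1 and m2 and m4) or (m2 and m3 and m4)

    # Eq. (4): two of the first three fingers agree, plus a global sum
    # constraint that keeps the overall similarity low enough.
    rule4 = ((m1 and m2) or (m1 and m3) or (m2 and m3)) and (s1 + s2 + s3 + s4 < 4 * thr)

    return rule3 or rule4


if __name__ == "__main__":
    print(mikpf_match(0.4, 0.6, 0.9, 1.3))   # True: three fingers below threshold
    print(mikpf_match(0.4, 0.6, 1.2, 1.4))   # decided by the sum constraint in Eq. (4)
```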

4.

Experiment

In this section, we conduct the relevant experiments. First, we introduce the algorithm for extracting the ROI of the knuckle print and obtain relatively accurate ROI images. Second, we introduce the relevant experimental settings. Finally, we selected nine methods for comparison, including the non-Siamese methods competitive code,41 ordinal code,42 RLOC,43 LLDP,44 EEPNet,45 CCNet,46 and CO3Net,47 and the Siamese methods FK-Siamese48 and CHKM-Siamese.49 The effectiveness of the LKSNet and MIKPF algorithms is verified through experimental comparisons from multiple angles and levels.

4.1.

Performance Metrics

Generally speaking, the performance evaluation metrics for inner knuckle print recognition systems are summarized in Table 2. The average recognition rate (ARR), equal error rate (EER), genuine acceptance rate (GAR), false acceptance rate (FAR), and receiver operating characteristic (ROC) curve are used as evaluation indexes in the following experiments.

Table 2

The main performance metrics of biometric recognition systems.

Performance metric | Abbreviation | Description
Average recognition rate | ARR | Proportion of correctly predicted results among all predictions.
False acceptance rate | FAR | Probability of mistakenly accepting biometric features from a non-A individual as features of A.
Genuine rejection rate | GRR | Probability of judging non-A biometric features as non-A features, where FAR + GRR = 1.
False rejection rate | FRR | Probability of rejecting the biometric feature from A as the feature of another individual.
Genuine acceptance rate | GAR | Probability of judging the biometric features of A as belonging to A, where FRR + GAR = 1.
Equal error rate | EER | Error rate at the operating point where FAR and FRR are equal.
Receiver operating characteristic curve | ROC curve | Curves of FAR versus FRR (or FAR versus GAR) as the decision threshold varies.

4.2.

ROI Extraction

4.2.1.

Dataset introduction

This study received the necessary ethical approval from the Institutional Review Committee of Anhui Xinhua University, and informed consent for the palm images was obtained from each participant or their authorized representative.

The knuckle crease region of interest, and knuckle crease analysis more broadly, is a vital area of study in various fields, including biometrics, forensic science, and medical diagnostics. Knuckle creases, or dermatoglyphics, refer to the intricate patterns formed by the folds and ridges on the skin’s surface around the finger joints, which are particularly prominent at the knuckles.

Because published inner-knuckle print datasets are very few and do not include the inner-knuckle prints of all fingers other than the thumb, the datasets used in this article were obtained by extracting the required ROIs from original hand images.

XINHUA is a dataset collected by ourselves, which contains 2000 hand images of 50 subjects, including 41 males and 9 females, all aged between 20 and 30. The database was collected in two stages from January 2022 to April 2022, with each stage providing 10 left-hand and 10 right-hand images per person. The images were collected indoors with an iPhone XR smartphone.

The IIT Delhi50 contact palmprint database contains 2601 images collected from 460 palms, with a total of 230 people providing data; 5 to 7 palmprint images per palm were collected under different hand postures. In addition to the original images, the Indian Institute of Technology (IIT) Delhi palmprint database also provides 150×150 pixel normalized and cropped palmprint images.

BJTU-V251 contains 2663 hand images of 148 volunteers, including 91 males and 57 females, ranging in age from 8 to 73. The database was collected in two stages from November 2015 to December 2017, with each person providing 3 to 5 left-hand images and 3 to 5 right-hand images in each stage. BJTU-V2 was built in both indoor and outdoor scenes via smartphones such as the iPhone 6, Nexus 6P, Huawei Mate 8, Nubia Z9, and Xiaomi Redmi 1S.

Figure 5 presents 12 hand images, where Fig. 5(a) is a XINHUA image, Fig. 5(b) is an IIT Delhi image, and Fig. 5(c) is a BJTU-V2 image.

Fig. 5

Palm images of various datasets: (a) XINHUA, (b) IIT Delhi, and (c) BJTU-V2.


4.2.2.

ROI extraction method

For the preprocessing of the inner knuckle print, the position of the region of interest is generally determined based on the energy intensity of the inner knuckle lines. To obtain the approximate position of each finger region, it is necessary to accurately locate the four boundaries of the finger region: top, bottom, left, and right. Here, the position of the finger area is preliminarily located in the form of a rectangular box, as illustrated in Fig. 6. To obtain the rectangular box, it is necessary to traverse the coordinate sequence of the finger outline; as long as the coordinates of the two points P1 and P2 can be obtained, the final ROI can be obtained through simple subsequent preprocessing.

Fig. 6

Cut the finger image region.


Next, we need to locate the starting point P1 and the ending point P2 of the finger contour sequence. We first use the palmprint ROI linear cluster algorithm52 to find each finger-gap point. Suppose we have found the two key points K1 between the thumb and index finger and K3 between the index and middle finger. The position information of these two key points can be used to help us obtain the starting point P1 and the ending point P2 of the different finger contour sequences.

Here, we take the region of interest of the inner-knuckle prints of the index finger as an example; the extraction procedure for the other fingers is similar and is not described again. The steps are illustrated in Fig. 7.

Fig. 7

Process of locating the finger region target rectangular box: (a) traversal of contour sequence, (b) contour point location, and (c) finger region processing.


Let the starting point of the contour sequence be P1 and the ending point be P2. Assume that the key point between the thumb and index finger is K1, the key point at the tip of the index finger is K2, and the key point between the index and middle finger is K3. As the contour coordinate sequence F1(x,y) is traversed from K1 to K2, the contour point with the minimum Euclidean distance to K3 is taken as P1. Similarly, as the contour coordinate sequence F2(x,y) is traversed from K2 to K3, the contour point with the minimum Euclidean distance to K1 is taken as P2.

To keep the inner knuckle print information of the second little knuckle, we take the points P3 and P4 at a certain distance (50 contour coordinates here) from the key points P1 and P2 along the contour line and extend the lines P1P3 and P2P4 by the same length (50 coordinate distances here) to obtain P5 and P6. At this point, all the finger-outline coordinates needed to describe the rectangular frame have been collected. By traversing the coordinate sequence, it is easy to locate the top, bottom, left, and right boundaries of the target rectangular frame of the finger region.

Based on the principle of image connectivity, the image containing only the target finger is extracted from the finger image that has already been cut. The previous cutting step ensures that the connected component belonging to the finger is the largest connected component (using the default eight-neighborhood connectivity). Therefore, we only need to find the largest connected component in the cut finger image, which is exactly the finger region we need to extract. The image is first low-pass filtered and binarized, the largest connected region is then selected, and finally the corresponding target region is cropped from the original image according to the coordinates of that connected region.
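
A minimal OpenCV sketch of this connectivity step is given below; the Gaussian smoothing kernel, the Otsu thresholding, and the function name are illustrative choices of ours rather than the authors' exact settings.

```python
# Sketch of the largest-connected-component step using OpenCV; filter sizes
# and the thresholding method are illustrative.
import cv2
import numpy as np


def extract_finger_by_connectivity(gray):
    """gray: single-channel cut finger image; returns the cropped finger region."""
    # Low-pass filter and binarize.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Label 8-connected components and keep the largest foreground one.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if num <= 1:
        return gray  # no foreground component found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip background label 0

    # Crop the corresponding region from the original image.
    x, y = stats[largest, cv2.CC_STAT_LEFT], stats[largest, cv2.CC_STAT_TOP]
    w, h = stats[largest, cv2.CC_STAT_WIDTH], stats[largest, cv2.CC_STAT_HEIGHT]
    return gray[y:y + h, x:x + w]
```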

Although the extracted finger image already contains the whole target finger region, direction normalization is still required before the ROI of the inner knuckle print can be extracted. As illustrated in Fig. 8, this step finds the centroid of the finger and the position of the fingertip and calculates the angle needed to correct the orientation.
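
The sketch below illustrates one way this correction can be implemented with OpenCV, assuming a binary mask of the finger is available. Estimating the fingertip as the contour point farthest from the centroid is our simplification and not necessarily the authors' exact procedure.

```python
# Sketch of direction normalization: find the finger centroid and a fingertip
# estimate, measure the angle to the vertical axis, and rotate about the
# centroid. The fingertip estimate (farthest contour point) is an assumption.
import cv2
import numpy as np


def normalize_finger_direction(gray, binary):
    """gray: finger image; binary: its binary mask; returns the rotated image."""
    m = cv2.moments(binary, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]          # centroid

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    tip = contour[np.argmax(np.hypot(contour[:, 0] - cx, contour[:, 1] - cy))]

    # Signed angle between the centroid-to-tip direction and the vertical axis
    # (image y axis points downward, hence the minus sign).
    angle = np.degrees(np.arctan2(tip[0] - cx, -(tip[1] - cy)))

    h, w = gray.shape[:2]
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(gray, rot, (w, h))
```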

Fig. 8

Direction normalization of finger images: (a) original image, (b) calculate the angle between the finger direction and the vertical axis, and (c) finger image after direction normalization.


In the inner finger image, there are generally three distinct flexor-line areas, corresponding to the three knuckle prints: the first knuckle print, the main knuckle print, and the second knuckle print. This section extracts a rectangular ROI for each of these three regions. The main idea is that, in the gray image, the gray value of the knuckle print area shows an obvious gradient change in the horizontal direction, so its position can be determined from this feature, and a coordinate system can then be established to extract the ROI of the knuckle print.

In the inner knuckle print region, especially in the main knuckle print region, the knuckle lines are spread over many pixels, so much edge information that the human eye can still find is easily missed by the gradient convolution of traditional edge extraction operators (such as Sobel and Prewitt). In addition, such convolution kernels only consider the gradient change in the vertical direction, whereas most of the knuckle print lines are in fact tilted at an angle rather than strictly horizontal.

To solve this problem, we constructed a 9×9 modified finite Radon transform (MFRAT)33 filter template suitable for edge detection in the knuckle print region, as illustrated in Fig. 9. The line width L is set to 3, which addresses the wide pixel distribution of the knuckle lines and their tilt away from the horizontal direction. The edge detection result after convolution with the MFRAT template is illustrated in Fig. 10.
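
For illustration, the sketch below builds a small bank of 9×9 line templates with line width 3 at several orientations and takes, at each pixel, the strongest dark-line response. The chosen angles, the sampling of the line, and the zero-mean normalization are our own assumptions rather than the exact MFRAT construction used in the paper.

```python
# Sketch of a 9x9 MFRAT-style directional line filter bank with line width
# L = 3: each template marks a 3-pixel-wide line through the window center,
# and the edge response is the strongest dark-line match over orientations.
import numpy as np
import cv2


def mfrat_bank(size=9, line_width=3, angles=(0, 15, 30, 45, 135, 150, 165)):
    """Return a list of (size x size) zero-mean line templates."""
    templates = []
    c = size // 2
    for deg in angles:
        t = np.zeros((size, size), np.float32)
        theta = np.deg2rad(deg)
        dx, dy = np.cos(theta), np.sin(theta)
        for s in np.linspace(-c, c, 4 * size):          # sample points along the line
            x, y = int(round(c + s * dx)), int(round(c + s * dy))
            for w in range(-(line_width // 2), line_width // 2 + 1):
                yy = min(max(y + w, 0), size - 1)
                t[yy, min(max(x, 0), size - 1)] = 1.0
        templates.append(t - t.mean())                  # zero mean: respond to lines, not brightness
    return templates


def mfrat_response(gray):
    """Edge map from the maximum dark-line response over the orientation bank."""
    img = gray.astype(np.float32)
    responses = [cv2.filter2D(img, -1, k) for k in mfrat_bank()]
    # Knuckle lines are darker than the surrounding skin, so a matched dark
    # line gives a strongly negative correlation; negate the minimum response.
    return -np.min(np.stack(responses), axis=0)
```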

Fig. 9

MFRAT filter template.


Fig. 10

MFRAT template detection result: (a) original image and (b) edge detection image.


According to the gray-level energy of the inner knuckle print in the binary image, an energy curve is plotted [Fig. 11(a)], the positions of the inner knuckle prints are determined [Fig. 11(b)], and a coordinate system is finally established to cut out the regions of interest of the inner knuckle prints [Fig. 11(c)].
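
A compact sketch of this localization step is shown below. After direction normalization the finger is vertical, so summing each row of the binarized edge map yields an energy curve whose strongest, well-separated peaks mark the knuckle lines; the smoothing window and minimum peak spacing are illustrative values of ours.

```python
# Sketch of locating the knuckle-line positions from the binarized edge map:
# row-wise energy of the normalized finger, smoothed, then peak picking.
import numpy as np


def locate_knuckle_rows(edge_binary, num_peaks=3, min_gap=20, smooth=15):
    """edge_binary: 0/1 edge map of the normalized finger; returns peak row indices."""
    energy = edge_binary.sum(axis=1).astype(np.float64)        # row-wise gray energy
    kernel = np.ones(smooth) / smooth
    energy = np.convolve(energy, kernel, mode="same")          # smooth the curve

    peaks = []
    order = np.argsort(energy)[::-1]                           # rows by descending energy
    for r in order:
        if all(abs(r - p) >= min_gap for p in peaks):
            peaks.append(int(r))
        if len(peaks) == num_peaks:
            break
    return sorted(peaks)   # e.g., first little, main, second little knuckle rows
```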

Fig. 11

Inner knuckle print ROI extraction: (a) a curve graph, (b) location graph, and (c) region of interest.


4.3.

Experimental Setting

The database setting is particularly important, and this paper uses two kinds of settings, namely staged and mixed. For the staged setting, the data are divided into two parts according to the collection stage, with the first-stage data used as the training set and the second-stage data used as the testing set. It should be noted that, because the number of negative sample pairs (pairs that do not belong to the same class) greatly exceeds the number of positive pairs, we set the ratio of positive to negative samples to nearly 1:1. To enhance generalization, we do not force the positive-to-negative ratio to be exactly 50% but instead generate positive and negative labels with a random function.
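
The sketch below shows one way such pair lists can be generated, with a random coin flip deciding between a positive and a negative partner for each anchor so that the ratio stays near, but not exactly, 1:1. The data structures and function name are ours.

```python
# Sketch of the pair-generation scheme described above: a random coin flip
# decides whether each anchor gets a positive (same subject) or a negative
# (different subject) partner, giving a ratio near, but not exactly, 1:1.
import random
from collections import defaultdict


def make_pairs(samples, num_pairs):
    """samples: list of (image_path, subject_id); returns (path1, path2, label)
    with label 0 = same subject, 1 = different subject (loss convention)."""
    by_subject = defaultdict(list)
    for path, sid in samples:
        by_subject[sid].append(path)
    subjects = list(by_subject)
    assert len(subjects) > 1, "need at least two subjects to build negative pairs"

    pairs = []
    for _ in range(num_pairs):
        anchor_sid = random.choice(subjects)
        anchor = random.choice(by_subject[anchor_sid])
        if random.random() < 0.5 and len(by_subject[anchor_sid]) > 1:   # positive pair
            partner = random.choice([p for p in by_subject[anchor_sid] if p != anchor])
            pairs.append((anchor, partner, 0))
        else:                                                            # negative pair
            other_sid = random.choice([s for s in subjects if s != anchor_sid])
            pairs.append((anchor, random.choice(by_subject[other_sid]), 1))
    return pairs
```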

In addition, our experiments were carried out on a host with an NVIDIA GTX2080 GPU (8 GB of graphics memory) and an Intel Core i7-8700 CPU. The network learning rate in the experiments is 0.0005, the optimizer is Adam, the batch size is 4, and the number of epochs is 150.

4.4.

Results

Figure 12 illustrates some examples of similarity matching, where Figs. 12(a) and 12(b) are data pairs of the same class and Figs. 12(c) and 12(d) are data pairs of different classes. When the similarity score is less than 1, we consider the pair to belong to the same class; otherwise, it is considered to belong to different classes. The smaller the similarity score, the more similar the pair.

Fig. 12

Example of similarity matching.


4.4.1.

LKSNet experiment results and comparison with non-Siamese methods

Tables 3, 4, and 5 present the experimental results on the XINHUA, IIT Delhi, and BJTU-V2 datasets, respectively. It can be seen that the recognition rate of LKSNet is the best on every dataset and every finger, and LKSNet needs only 150 training epochs to achieve good generalization, whereas the other deep models were trained for 300 epochs; this also shows that LKSNet is a lightweight model that can be trained quickly. For each method, we focus on its ARR and EER values for each finger’s inner knuckle print. ARR represents the correct recognition rate of the model’s predictions, whereas EER is the error rate at the operating point where the false acceptance rate equals the false rejection rate.

Table 3

Experimental results of XINHUA dataset.

XINHUA (values are ARR (%) / EER (%))
Method | Middle finger | Ring finger | Index finger | Little finger
Competitive code | 88.30 / 7.8990 | 97.80 / 2.8000 | 95.80 / 3.9005 | 94.60 / 6.1025
Ordinal code | 86.10 / 8.6015 | 98.00 / 1.9586 | 96.00 / 3.0869 | 96.50 / 2.8121
LLDP | 76.50 / 11.4880 | 95.20 / 5.9940 | 87.40 / 7.8920 | 95.30 / 6.0240
RLOC | 88.10 / 6.8636 | 97.90 / 2.4035 | 96.00 / 3.6374 | 97.10 / 2.4520
EEPNet | 91.30 / 4.0453 | 92.90 / 3.7111 | 90.90 / 4.4845 | 92.80 / 5.3809
CCNet | 78.90 / 9.7481 | 95.90 / 3.9598 | 91.90 / 6.9133 | 96.50 / 3.4997
CO3Net | 79.10 / 8.4101 | 91.50 / 6.3632 | 84.80 / 7.6501 | 94.80 / 5.1475
LKSNet (ours) | 97.30 / 2.9825 | 98.30 / 1.4714 | 97.60 / 2.6875 | 97.80 / 2.1442

Table 4

Experimental results of the IIT Delhi dataset.

IIT Delhi (values are ARR (%) / EER (%))
Method | Middle finger | Ring finger | Index finger | Little finger
Competitive code | 97.85 / 1.3578 | 96.72 / 2.0124 | 94.71 / 2.8056 | 88.16 / 5.3449
Ordinal code | 98.46 / 1.1093 | 98.27 / 1.3571 | 97.01 / 2.6641 | 96.39 / 2.4787
LLDP | 96.21 / 1.9663 | 95.09 / 2.3965 | 94.24 / 3.5969 | 90.73 / 4.2525
RLOC | 97.75 / 1.5755 | 97.10 / 1.7349 | 95.84 / 3.0341 | 94.90 / 3.3851
EEPNet | 95.42 / 2.4279 | 94.02 / 2.8695 | 92.34 / 3.9346 | 84.40 / 8.6311
CCNet | 89.35 / 6.2705 | 87.53 / 5.8264 | 84.34 / 6.5301 | 75.44 / 8.4102
CO3Net | 84.35 / 6.8902 | 81.64 / 7.9720 | 76.34 / 8.7950 | 68.95 / 12.6605
LKSNet (ours) | 99.78 / 0.2496 | 99.13 / 0.5332 | 99.78 / 0.1834 | 99.57 / 0.4688

Table 5

Experimental results of BJTU-V2 dataset.

BJTU-V2 (values are ARR (%) / EER (%))
Method | Middle finger | Ring finger | Index finger | Little finger
Competitive code | 99.55 / 0.6761 | 99.21 / 1.5209 | 98.70 / 1.1952 | 92.68 / 4.9465
Ordinal code | 99.77 / 0.2816 | 99.61 / 0.4563 | 99.15 / 0.8561 | 97.24 / 2.6742
LLDP | 99.10 / 0.9558 | 98.14 / 1.5765 | 98.14 / 1.7016 | 95.38 / 3.5863
RLOC | 99.72 / 0.6755 | 98.87 / 1.2294 | 98.65 / 1.3882 | 96.28 / 2.8996
EEPNet | 100 / 0.0985 | 99.77 / 0.4021 | 99.77 / 0.3495 | 97.84 / 2.2894
CCNet | 98.82 / 0.9635 | 97.67 / 1.6173 | 97.69 / 1.9266 | 94.59 / 4.1719
CO3Net | 96.00 / 1.9003 | 93.97 / 2.8048 | 93.80 / 3.5197 | 86.14 / 4.5206
LKSNet (ours) | 100 / 0.0778 | 99.83 / 0.1995 | 99.93 / 0.0826 | 98.37 / 0.8977

The following conclusions can be drawn from Table 3. First, it is notable that different methods show performance disparities across different fingers. For instance, on the middle finger, LKSNet (our proposed method) achieves the best performance, with an ARR of 97.30% and an EER of 2.9825%. Other methods show somewhat different results, but overall, LKSNet demonstrates superior performance on this finger, and it exhibits similar advantages for the other fingers’ knuckle prints. Second, differences between methods are observed in terms of ARR and EER. Compared with the competing methods, LKSNet achieves higher ARR values for all fingers, indicating that it identifies users more effectively, and it also achieves lower EER values for all fingers, implying greater reliability in rejecting unauthorized users (Fig. 13).

Fig. 13

ROC curve of knuckle prints in XINHUA: (a) middle finger, (b) ring finger, (c) index finger, and (d) little finger.


From Table 4, finger recognition experiments were conducted on the IIT Delhi dataset, and the performance of different algorithms on this dataset was analyzed. We observed that the image quality of the IIT Delhi dataset is relatively poor, which poses challenges for finger recognition tasks. However, a comparative analysis of the experimental results yields some interesting findings. First, most algorithms exhibited relatively similar average recognition rates (ARR) and equal error rates (EER) on this dataset, but there were noticeable differences in certain cases. In particular, the LKSNet algorithm performed the best on the IIT Delhi dataset, owing to its combination of depthwise separable convolution, lightweight convolution, and an attention mechanism, which optimizes the network structure and improves its feature representation ability. Second, some algorithms, such as CCNet and CO3Net, exhibited poorer performance on the IIT Delhi dataset, with relatively higher EER; they were weaker when dealing with the noisy, low-quality images in this dataset. This further underscores the challenges posed by the IIT Delhi dataset and highlights the sensitivity of algorithms to dataset characteristics (Figs. 14 and 15).

Fig. 14

ROC curve of knuckle prints in IIT Delhi: (a) middle finger, (b) ring finger, (c) index finger, and (d) little finger.


Fig. 15

ROC curve of knuckle prints in BJTU-V2: (a) middle finger, (b) ring finger, (c) index finger, and (d) little finger.


Analyzing the results in Table 5, several observations can be made. Overall, the algorithms achieved high average recognition rates (ARR) across all finger positions, indicating that they were generally effective at identifying fingers in the BJTU-V2 dataset. Although the ARR is high for most algorithms, there are variations in the equal error rates (EER) across different finger positions. Some algorithms, such as CCNet and CO3Net, show relatively high EER for some finger positions compared with the other algorithms. This is because CCNet and CO3Net use competitive coding with adjustable parameters, and their automatically adjusted Gabor filter parameters are not as good as the manually set empirical values. Notably, LKSNet, the algorithm developed in this study, achieved the highest ARR and lowest EER across all finger positions, indicating its superior performance on the BJTU-V2 dataset. LKSNet consistently demonstrated excellent accuracy and robustness in identifying fingers, showcasing its effectiveness for finger recognition tasks.

Overall, traditional methods and deep learning methods each have their own advantages in recognition performance, but traditional methods tend to be more stable. On the one hand, deep learning requires large amounts of data to improve model generalization; on the other hand, because of the limited resolution of the finger images, they have to be rescaled to fit the model input, which loses some of the original information. In addition, a good training strategy is often the key to success. In the comparative experiments in this paper, the deep-learning-based methods all adopt the experimental settings in Ref. 53 to pursue the best effect, but training for only 300 epochs may not reach the optimal solution of the model.

4.4.2.

LKSNet experiment results and comparison with Siamese methods

This section comprehensively compares LKSNet with Siamese-network-based methods. Table 6 offers a comprehensive comparison between our proposed LKSNet method and two Siamese network-based approaches, FK-Siamese and CHKM-Siamese, across different finger positions and datasets. Several crucial insights emerge from a careful examination of the outcomes. LKSNet consistently outperforms both FK-Siamese and CHKM-Siamese in terms of both average recognition rate (ARR) and equal error rate (EER) across all datasets and finger positions. This is because FK-Siamese and CHKM-Siamese have simple neural network structures that only combine a few convolutional and fully connected layers, whereas our method combines multiple separable convolution operations. In terms of the loss function, FK-Siamese uses a contrastive loss, CHKM-Siamese uses a binary cross-entropy loss, and LKSNet uses a combination of the two, which allows our method to better exploit the correlation and nonlinear characteristics of the data. In addition, our model uses more sophisticated optimization strategies and regularization techniques during training to improve its generalization ability and stability.

Table 6

Experimental results and comparison with Siamese methods.

(Values are ARR (%) / EER (%))
Method | Middle finger | Ring finger | Index finger | Little finger
XINHUA
FK-Siamese | 87.50 / 7.3348 | 93.60 / 3.9326 | 91.80 / 5.3328 | 94.30 / 3.7214
CHKM-Siamese | 92.40 / 4.9710 | 95.50 / 3.3472 | 94.90 / 3.6825 | 95.20 / 3.3892
LKSNet (ours) | 97.30 / 2.9825 | 98.30 / 1.4714 | 97.60 / 2.6875 | 97.80 / 2.1442
IIT Delhi
FK-Siamese | 96.37 / 2.8542 | 96.18 / 3.0516 | 96.86 / 2.6715 | 96.55 / 2.7044
CHKM-Siamese | 99.05 / 0.9773 | 98.04 / 1.7526 | 98.33 / 1.6524 | 98.19 / 1.6822
LKSNet (ours) | 99.78 / 0.2496 | 99.13 / 0.5332 | 99.78 / 0.1834 | 99.57 / 0.4688
BJTU-V2
FK-Siamese | 99.26 / 0.9012 | 94.67 / 4.0328 | 96.38 / 2.8345 | 92.15 / 5.0716
CHKM-Siamese | 99.45 / 0.7236 | 97.56 / 1.9783 | 98.12 / 1.7124 | 96.44 / 2.8044
LKSNet (ours) | 100 / 0.0778 | 99.83 / 0.1995 | 99.93 / 0.0826 | 98.37 / 0.8977

4.4.3.

MIKPF experiment results

In this section, the inner knuckle prints of the four fingers are fused for the final output. The recognition score of each finger’s inner knuckle print is obtained by the LKSNet network, and the decision output is made by voting fusion. Table 7 presents the recognition results of the MIKPF network, and it can be seen that the fusion results are further improved. It should be noted that the number of training epochs for MIKPF was reduced from 150 to 100, which indicates that the branch models work together to accelerate training and achieve a good balance between accuracy and speed. The experimental results show that a 100% recognition rate is achieved on the IIT Delhi and BJTU-V2 datasets, which is a satisfactory result and fully demonstrates the effectiveness of the MIKPF strategy. Although the recognition rate on the XINHUA dataset is not as high, it is still acceptable.

Table 7

MIKPF experiment results.

Dataset | ARR (%) | EER (%)
XINHUA | 98.90 | 0.9374
IIT Delhi | 100 | 0.1034
BJTU-V2 | 100 | 0.0635

4.5.

Properties

Here, we mainly compare indicators such as model convergence speed and model size among the deep learning methods. A model with good generalization ability is characterized by fast convergence, high accuracy, and a small number of parameters. In this section, we focus on this performance evaluation for the deep learning methods.

Table 8 presents several common model indicators. It can be seen that LKSNet outperforms the other methods in terms of total memory, computational complexity (total MAdd), floating-point operations (total Flops), and latency. Latency here refers to the time required to complete one prediction, which is particularly important. Although total MAdd and total Flops reflect the cost of a model to a certain extent, a model that looks efficient on paper may still be a poor fit for the target hardware; therefore, it is necessary and reasonable to compare latency as well.

Table 8

Depth model performance indicators.

Method | Total MAdd (G) | Total Flops (G) | Latency (ms)
EEPNet | 0.77 | 0.39 | 7.8
CCNet | 0.32 | 0.17 | 5
CO3Net | 0.41 | 0.24 | 4.8
LKSNet (ours) | 0.31 | 0.15 | 3
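
For reference, single-prediction latency of the kind listed in Table 8 can be measured with a simple timing loop such as the sketch below, applied here to a single-input branch (for example, the LKSNetBranch sketch in Sec. 3.1). The warm-up count, run count, and input shape are illustrative choices of ours.

```python
# Sketch of measuring per-prediction latency: warm up, synchronize the GPU if
# present, and average the wall-clock time over repeated forward passes.
import time
import torch


@torch.no_grad()
def measure_latency_ms(model, input_shape=(1, 3, 100, 100), warmup=20, runs=100):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)

    for _ in range(warmup):                  # warm-up iterations (not timed)
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000.0
```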

Figure 16 illustrates the corresponding indicator histogram. There are obvious differences in total MAdd, total Flops, and latency among the four models. Total MAdd and total Flops are usually considered important indicators of a model’s computational complexity, whereas latency directly affects the speed and efficiency of the model in practical applications. First, comparing EEPNet, CCNet, and CO3Net, we can observe that their values differ considerably in total MAdd and total Flops. EEPNet shows the highest computational complexity, whereas CCNet and CO3Net are relatively low; this suggests that CCNet and CO3Net may be more attractive choices when computing resources are limited because they can reduce computing costs while maintaining good performance. LKSNet shows even lower total MAdd and total Flops, which means that it needs fewer computing resources for the same task, saving time and energy costs. On the other hand, the latency values also differ markedly. EEPNet has the highest latency, indicating that it takes the longest to complete a prediction on one image. In contrast, CCNet and CO3Net show lower latency values and may therefore respond faster in practical applications. LKSNet has the lowest latency, which indicates that our proposed model is very efficient at completing the prediction task and may be of great value for real-time applications.

Fig. 16

Histogram of performance indicators.


5.

Conclusion

In conclusion, this paper introduces LKSNet, a novel deep inner-knuckle print recognition method that leverages a lightweight Siamese network model and a robust cost function. Our method represents a significant advancement in knuckle print recognition for several reasons. First, it is the first approach to utilize similarity as a deep network metric for knuckle print recognition, enhancing recognition accuracy. Second, we propose a fast and universal method for obtaining the region of interest (ROI) of knuckle prints, simplifying the preprocessing step. Third, in the absence of a public dataset on inner-knuckle prints, we provide a self-collected dataset, facilitating further research in this domain. Fourth, our lightweight network, LKSNet, outperforms traditional methods and other deep learning approaches in terms of both speed and accuracy. In addition, by introducing the robust loss function, we improve training accuracy and address the imbalance between categories in knuckle databases, enhancing the model’s robustness. Finally, the proposed MIKPF algorithm demonstrates the effectiveness of fusing the ROIs of the four fingers, achieving the best recognition rate. Overall, our contributions advance the field of inner-knuckle print recognition by providing an efficient, accurate, and robust method for recognition tasks.

One of the difficulties in large-scale retrieval is retrieval speed. In future research, we will search the inner finger knuckles hierarchically according to gender and age to improve retrieval speed and recognition accuracy, which will help promote inner knuckle prints as biometric features for large-scale retrieval scenarios. In addition, multimodal recognition has received increasing attention recently, and we will explore multimodal fusion recognition schemes, such as fusion of inner knuckle prints, palmprints, and faces, to achieve higher recognition accuracy.

Code and Data Availability

To replicate or interpret the findings reported in the paper, access to the computer code, data, and materials is necessary. The computer code used in the study can be found at the GitHub repository: https://github.com/HewelXX/LKSNet. The code and database will be publicly accessible and can be downloaded or cloned from the repository.

Acknowledgments

The work was partially supported by the Natural Science Foundation of Anhui Xinhua University (2023zr003), the Anhui Provincial Quality Engineering Project (2020ylzy01), and the Anhui Province Universities’ Excellent Scientific Research and Innovation Team (2022AH010099).

References

1. 

M. Oudah, A. Al-Naji and J. Chahl, “Hand gesture recognition based on computer vision: a review of techniques,” J. Imaging, 6 (8), 73 https://doi.org/10.3390/jimaging6080073 (2020). Google Scholar

2. 

W. Wu et al., “Review of palm vein recognition,” IET Biom., 9 (1), 1 –10 https://doi.org/10.1049/iet-bmt.2019.0034 (2020). Google Scholar

3. 

R. V. Adiraju et al, “An extensive survey on finger and palm vein recognition system,” Mater. Today:. Proc., 45 1804 –1808 https://doi.org/10.1016/j.matpr.2020.08.742 (2021). Google Scholar

4. 

B. Hou, H. Zhang and R. Yan, “Finger-vein biometric recognition: a review,” IEEE Trans. Instrum. Meas., 71 1 –26 https://doi.org/10.1109/TIM.2022.3200087 IEIMAO 0018-9456 (2022). Google Scholar

5. 

G. Jaswal, A. Kaul and R. Nath, “Knuckle print biometrics and fusion schemes–overview, challenges, and solutions,” ACM Comput. Surv. (CSUR), 49 (2), 1 –46 https://doi.org/10.1145/2938727 (2016). Google Scholar

6. 

A. S. Tarawneh et al., “DeepKnuckle: deep learning for finger knuckle print recognition,” Electronics, 11 (4), 513 https://doi.org/10.3390/electronics11040513 ELECAD 0013-5070 (2022). Google Scholar

7. 

K. H. M. Cheng and A. Kumar, “Deep feature collaboration for challenging 3D finger knuckle identification,” IEEE Trans. Inf. Forensics Secur., 16 1158 –1173 https://doi.org/10.1109/TIFS.2020.3029906 (2020). Google Scholar

8. 

A. Attia et al., “Deep learning-driven palmprint and finger knuckle pattern-based multimodal Person recognition system,” Multimedia Tools Appl., 81 (8), 10961 –10980 https://doi.org/10.1007/s11042-022-12384-3 (2022). Google Scholar

9. 

J. Khodadoust et al., “A multibiometric system based on the fusion of fingerprint, finger-vein, and finger-knuckle-print,” Expert Syst. Appl., 176 (8), 114687 https://doi.org/10.1016/j.eswa.2021.114687 ESAPEH 0957-4174 (2021). Google Scholar

10. 

L. Zhu and S. Zhang, “Multimodal biometric identification system based on finger geometry, knuckle print and palm print,” Pattern Recognit. Lett., 31 (12), 1641 –1649 https://doi.org/10.1016/j.patrec.2010.05.010 PRLEDG 0167-8655 (2010). Google Scholar

11. 

L. Jiang et al., “Finger vein and inner knuckle print recognition based on multilevel feature fusion network,” Appl. Sci., 12 (21), 11182 https://doi.org/10.3390/app122111182 (2022). Google Scholar

12. 

S. Ribaric and I. Fratric, “A biometric identification system based on eigenpalm and eigenfinger features,” IEEE Trans. Pattern Anal. Mach. Intell., 27 (11), 1698 –1709 https://doi.org/10.1109/TPAMI.2005.209 (2005). Google Scholar

13. 

M. K. O. Goh, C. Tee and A. B. J. Teoh, “Bimodal palm print and knuckle print recognition system,” J. IT Asia, 3 (1), 85 –106 https://doi.org/10.33736/jita.37.2010 (2010). Google Scholar

14. 

B. Bhaskar and S. Veluchamy, “Hand based multibiometric authentication using local feature extraction,” in 4th Int. Conf. Recent Trends in Inf. Technol. (ICRTIT), (2014). https://doi.org/10.1109/ICRTIT.2014.6996136 Google Scholar

15. 

W. Kang, X. Chen and Q. Wu, “The biometric recognition on contactless multi-spectrum finger images,” Infrared Phys. Technol., 68 19 –27 https://doi.org/10.1016/j.infrared.2014.10.007 (2015). Google Scholar

16. 

M. Liu, Y. Tian and L. Lihua, “A new approach for inner-knuckle-print recognition,” J. Vis. Lang. Comput., 25 (1), 33 –42 https://doi.org/10.1016/j.jvlc.2013.10.003 (2014). Google Scholar

17. 

X. Xu et al, “Illumination-invariant and deformation-tolerant inner knuckle print recognition using portable devices,” Sensors, 15 (2), 4326 –4352 https://doi.org/10.3390/s150204326 SNSRES 0746-9462 (2015). Google Scholar

18. 

T. Savič and N. Pavešić, “Personal recognition based on an image of the palmar surface of the hand,” Pattern Recognit., 40 (11), 3152 –3163 https://doi.org/10.1016/j.patcog.2007.03.005 PTNRA8 0031-3203 (2007). Google Scholar

19. 

T. Sanches, “Hand surface biometrics for personal recognition,” (2008). Google Scholar

20. 

Y. Zhang, D. Sun and Z. Qiu, “Hand-based feature level fusion for single sample biometrics recognition,” in Int. Workshop Emerg. Tech. and Challenges for Hand-Based Biometrics (ETCHB), (2010). https://doi.org/10.1109/ETCHB.2010.5559289 Google Scholar

21. 

A. Meraoumia et al, “Finger-Knuckle-Print identification based on histogram of oriented gradients and SVM classifier,” in First Int. Conf. New Technol. of Inf. and Commun., (2016). https://doi.org/10.1109/NTIC.2015.7368749 Google Scholar

22. 

M. G. K. Ong et al., “Realizing hand-based biometrics based on visible and infrared imagery,” in 17th Int. Conf., ICONIP 2010, (2010). Google Scholar

23. 

A. Kumar and Y. Zhou, “Human identification using finger images,” IEEE Trans. Image Process., 21 (4), 2228 –2244 https://doi.org/10.1109/TIP.2011.2171697 IIPRE4 1057-7149 (2012). Google Scholar

24. 

M. Liu, Y. Tian and Y. Ma, “Inner-knuckle-print recognition based on improved LBP,” in Proc. 2012 Int. Conf. Inf. Technol. and Software Eng.: Software Eng. & Digital Media Technol., (2013). Google Scholar

25. 

L. Nanni, S. Brahnam and A. Lumini, “A user dependent multi-resolution approach for biometric data,” Int. J. Inf. Technol. Manage., 11 (1), 112 –121 https://doi.org/10.1504/IJITM.2012.044068 (2012). Google Scholar

26. 

F. Bahmed and M. O. Mammar, “Basic finger inner‐knuckle print: A new hand biometric modality,” IET Biom., 10 (1), 65 –73 https://doi.org/10.5120/18522-9716 (2021). Google Scholar

27. 

F. K. Nezhadian and S. Rashidi, “Inner-knuckle-print for human authentication by using ring and middle fingers,” in Int. Conf. Signal Process., (2016). Google Scholar

28. 

V. Kanhangad et al., “A unified framework for contactless hand verification,” IEEE Trans. Inf. Forensics Secur., 6 (3), 1014 –1027 https://doi.org/10.1109/TIFS.2011.2121062 (2011). Google Scholar

29. 

L. Q. Zhu and S. Y. Zhang, “Multimodal biometric identification system based on finger geometry, knuckle print and palm print,” Pattern Recognit. Lett., 31 (12), 1641 –1649 https://doi.org/10.1016/j.patrec.2010.05.010 PRLEDG 0167-8655 (2010). Google Scholar

30. 

F. Guan et al., “Research of dual-model recognition algorithm based on finger vein and finger crease,” in Int. Conf. Biomed. Eng. & Inf., (2012). https://doi.org/10.1109/BMEI.2012.6513014 Google Scholar

31. 

V. Arulalan, N. Geetha and V. Premanand, “Multimodal biometric system using iris and inner-knuckle print,” Int. J. Comput. Appl., 106 (6), 5 –9 https://doi.org/10.5120/18522-9716 IJCTEK 0952-8091 (2014). Google Scholar

32. 

F. Bahmed, M. O. Mammar and A. Ouamri, “A multimodal hand recognition system based on finger inner-knuckle print and finger geometry,” J. Appl. Secur. Res., 14 (1), 48 –73 https://doi.org/10.1080/19361610.2019.1545271 (2019). Google Scholar

33. 

Y. Xue et al., “Research on inner knuckle pattern recognition method based on convolutional neural network,” in IEEE Adv. Inf. Technol., Electron. and Autom. Control Conf., (2021). https://doi.org/10.1109/IAEAC50856.2021.9390 Google Scholar

34. 

Y. S. D. L. Prasanna and R. M. Deepika, “Palm print recognition using inner finger deep learning using neural network,” Int. J. Adv. Sci. Res. Eng. Trends, 5 (12), 48 –55 (2020). Google Scholar

35. 

S. Daas et al, “Multimodal biometric recognition systems using deep learning based on the finger vein and finger knuckle print fusion,” IET Image Proc., 14 (15), 3859 –3868 https://doi.org/10.1049/iet-ipr.2020.0491 (2020). Google Scholar

36. 

R. Chlaoua et al, “Deep learning for finger-knuckle-print identification system based on PCANet and SVM classifier,” Evolv. Syst., 10 (2), 261 –272 https://doi.org/10.1007/s12530-018-9227-y (2019). Google Scholar

37. 

M. Benmalek et al, “A semi-supervised deep rule-based classifier for robust finger knuckle-print verification,” Evolv. Syst., 13 (6), 837 –848 https://doi.org/10.1007/s12530-021-09417-x (2022). Google Scholar

38. 

A. Zohrevand, Z. Imani and M. Ezoji, “Deep convolutional neural network for finger-knuckle-print recognition,” Int. J. Eng., 34 (7), 1684 –1693 https://doi.org/10.5829/IJE.2021.34.07A.12 IFENFD (2021). Google Scholar

39. 

S. Chopra, R. Hadsell and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit. (CVPR’05), (2005). https://doi.org/10.1109/CVPR.2005.202 Google Scholar

40. 

A. Howard et al., “Searching for MobileNetV3,” in IEEE/CVF Int. Conf. Comput. Vision (ICCV), (2019). Google Scholar

41. 

D. Zhang et al., “Online palmprint identification,” IEEE Trans. Pattern Anal. Mach. Intell., 25 (9), 1041 –1050 https://doi.org/10.1109/TPAMI.2003.1227981 ITPIDJ 0162-8828 (2003). Google Scholar

42. 

Z. Sun et al., “Ordinal palmprint represention for personal identification [represention read representation],” in Comput. Vision and Pattern Recognit., (2005). Google Scholar

43. 

W. Jia, D. S. Huang and D. Zhang, “Palmprint verification based on robust line orientation code,” Pattern Recognit., 41 (5), 1504 –1513 https://doi.org/10.1016/j.patcog.2007.10.011 PTNRA8 0031-3203 (2008). Google Scholar

44. 

Y. T. Luo et al, “Local line directional pattern for palmprint recognition,” Pattern Recognit., 50 26 –44 https://doi.org/10.1016/j.patcog.2015.08.025 PTNRA8 0031-3203 (2016). Google Scholar

45. 

W. Jia et al., “EEPNet: an efficient and effective convolutional neural network for palmprint recognition,” Pattern Recognit. Lett., 159 140 –149 https://doi.org/10.1016/j.patrec.2022.05.015 PRLEDG 0167-8655 (2022). Google Scholar

46. 

Z. Yang et al., “Comprehensive competition mechanism in palmprint recognition,” IEEE Trans. Inf. Forensics Secur., 18 5160 –5170 https://doi.org/10.1109/TIFS.2023.3306104 (2023). Google Scholar

47. 

Z. Yang et al, “CO3Net: coordinate-aware contrastive competitive neural network for palmprint recognition,” IEEE Trans. Instrum. Meas., 72 2514114 https://doi.org/10.1109/TIM.2023.3276506 IEIMAO 0018-9456 (2023). Google Scholar

48. 

J. C. Joshi et al., “Finger Knuckleprint based personal authentication using Siamese network,” in 6th Int. Conf. Signal Process. and Integr. Networks (SPIN), (2019). https://doi.org/10.1109/SPIN.2019.8711663 Google Scholar

49. 

S. Hammami et al., “Contactless hand knuckle modality for identity verification using Siamese network,” in Int. Conf. Cyberworlds (CW), (2023). Google Scholar

50. 

A. Kumar and S. Shekhar, “Personal identification using multibiometrics rank-level fusion,” IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.), 41 (5), 743 –752 https://doi.org/10.1109/TSMCC.2010.2089516 (2011). Google Scholar

51. 

T. Chai, S. Prasad and S. Wang, “Boosting palmprint identification with gender information using DeepNet,” Future Gener. Comput. Syst., 99 41 –53 https://doi.org/10.1016/j.future.2019.04.013 FGSEVI 0167-739X (2019). Google Scholar

52. 

Q. Xiao et al., “Extracting palmprint ROI from whole hand image using straight line clusters,” IEEE Access, 7 74327 –74339 https://doi.org/10.1109/ACCESS.2019.2918778 (2019). Google Scholar

53. 

H. Touvron et al, “Training data-efficient image transformers & distillation through attention,” in 38th Int. Conf. Mach. Learn. PMLR, 10347 –10357 (2020). Google Scholar

Biography

Hongxia Wang is an associate professor at the School of Big Data and Artificial Intelligence, Xinhua University, Anhui. She received her bachelor of science in computer science and technology from Anhui Agricultural University, Hefei, China, in 2004. She received her master’s degree in computer applications from the University of Science and Technology of China in Hefei, China, in 2011 and has been studying for her PhD in computer science at the National University of the Philippines since 2021. She is the author of more than 20 papers. Her research interests include big data, pattern recognition, and computer vision.

Hongwu Yuan received his BS and MS degrees in computer technology from Hefei Artillery Academy in 2001 and 2004, respectively, and his PhD in optics from Chinese Academy of Sciences in 2011. Since 2017, as an associate professor, he has been engaged in teaching and research in the School of Big Data and Artificial Intelligence, Anhui Xinhua University. He is the author of more than 40 papers, and his current research interests include computer vision, image processing, and polarization image processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Hongxia Wang and Hongwu Yuan "Deep inner-knuckle-print recognition using lightweight Siamese network," Journal of Electronic Imaging 33(4), 043034 (2 August 2024). https://doi.org/10.1117/1.JEI.33.4.043034
Received: 6 April 2024; Accepted: 11 July 2024; Published: 2 August 2024
KEYWORDS: Printing, Feature extraction, Detection and tracking algorithms, Biometrics, Education and training, Deep learning, Databases
