Electronics 2023, 12, 3503
Article
Identity Recognition System Based on Multi-Spectral Palm
Vein Image
Wei Wu *, Yunpeng Li, Yuan Zhang and Chuanyang Li
Abstract: A multi-spectral palm vein image acquisition device based on an open environment has
been designed to achieve a highly secure and user-friendly biometric recognition system. Furthermore,
we conducted a study on a supervised discriminative sparse principal component analysis algorithm
that preserves the neighborhood structure for palm vein recognition. The algorithm incorporates label
information, sparse constraints, and local information for effective supervised learning. By employing
a robust neighborhood selection technique, it extracts discriminative and interpretable principal
component features from non-uniformly distributed multi-spectral palm vein images. The algorithm
addresses challenges posed by light scattering, as well as issues related to rotation, translation, scale
variation, and illumination changes during non-contact image acquisition, which can increase
intra-class distance. Experimental tests are conducted using databases from the CASIA, Tongji University,
and Hong Kong Polytechnic University, as well as a self-built multi-spectral palm vein dataset. The
results demonstrate that the algorithm achieves the lowest equal error rates of 0.50%, 0.19%, 0.16%,
and 0.1%, respectively, using the optimal projection parameters. Compared to other typical methods,
the algorithm exhibits distinct advantages and holds practical value.
Keywords: palm vein recognition; multispectral image; feature extraction; dimensionality reduction
of block means (DBM) [7], democratic voting down-sampling (DVD) [8], and various local
binary pattern (LBP) [9] variants mentioned in [10], extract information about the direction,
frequency, and phase of palm vein texture as features for matching and recognition.
However, these methods are limited by the inadequate richness and clarity of texture
information in palm vein images, which can result in decreased recognition performance.
Structure-based methods, such as the speeded-up robust feature (SURF) operator [11],
histogram of oriented gradient (HOG) [12], and maximum curvature direction feature
(MCDF) [13], extract point- and line-based structural features to represent palm veins.
Recognition performance may be adversely affected in cases of poor image quality, as
certain point and line features might be lost.
Deep learning-based methods employ various neural networks to automatically extract
features and perform classification and recognition, overcoming the limitations of
traditional feature extraction methods. For instance, Wu et al. [1] selectively emphasize
classification features using the SER model and weaken less useful features, thereby addressing
issues related to rotation, translation, and scaling. Similarly, Wei et al. [14] applied neural
architecture search (NAS) techniques to overcome the drawbacks of manually designed
CNNs, expanding the application of NAS technology in palm vein recognition. However,
these methods may require large palm vein databases, limiting their applicability.
Sub-space-based methods, such as two-dimensional principal component analysis
(2D-PCA) [15], neighborhood-preserving embedding (NPE) [16], two-dimensional Bhattacharyya
bound linear discriminant analysis [17], and variants [18–20] of classical methods
such as PCA, treat palm vein images as high-dimensional vectors or matrices. These
methods transform the palm vein images into low-dimensional representations through
projection or mathematical transformations for image matching and classification.
Sub-space methods offer advantages, such as high recognition rates and low system resource
consumption, compared to other approaches. However, due to their disregard for the
texture features of the images, they may exhibit a certain degree of blindness in the
dimensionality reduction process. This could lead to the omission of some discriminative features
that are crucial for classification, particularly in non-contact acquisition methods in open
environments, where the impact on recognition performance becomes more pronounced.
Non-contact palm vein image acquisition in open environments has garnered significant
research attention due to its hygienic and convenient nature, offering promising
prospects for various applications. Nevertheless, the scarcity of non-contact acquisition
devices and publicly available datasets in open environments has impeded progress in
non-contact palm vein image recognition research. Consequently, this study focuses on
three key contributions: Firstly, the proposal of a multi-spectral palm vein image
acquisition device specifically designed for open environments. Secondly, the establishment
of a non-contact palm vein image dataset utilizing the developed acquisition device.
Finally, addressing the existing challenges in the field, the study introduces a supervised
discriminative sparse principal component analysis algorithm with a preserved
neighborhood structure (SDSPCA-NPE) for palm vein recognition. As a sub-space method,
this approach combines supervised label information with sparse constraints, resulting in
discriminative and highly interpretable palm vein features. It mitigates issues related to
unclear imaging and poor texture quality, expands the inter-class distance of projected
data, and enhances discriminability among different palm vein samples. During projection,
the concept of neighborhood structure information, commonly employed in non-linear
dimensionality reduction methods, is introduced. Robust neighborhood selection
techniques are utilized to preserve similar local structures in palm vein samples before and after
projection. This approach captures the non-uniform distribution of palm vein images and
improves the drawbacks arising from increased image variations within the same class due
to rotation, scaling, translation, and lighting changes. Experimental evaluations conducted
on self-built palm vein databases and commonly used public multi-spectral palm vein
databases, including the CASIA (Institute of Automation, Chinese Academy of Sciences)
database [21], the Tongji University database, and the Hong Kong Polytechnic University
database [22], demonstrate the superior performance of the proposed method compared to current typical methods.

The remaining sections of this paper are organized as follows: Section 2 introduces the self-developed acquisition device; Section 3 presents the proposed algorithm; Section 4 describes the experiments and results analysis; and Section 5 concludes the paper.

2. Multi-Spectral Image Capture Device

When near-infrared light (NIR) in the range of 720–1100 nm penetrates the palm, the different absorption rates of NIR radiation by various components of biological tissues result in a high absorption rate of blood hemoglobin (including oxygenated and deoxygenated hemoglobin). This leads to the formation of observable shadows, allowing the identification of vein locations and the generation of palm vein images [3]. Due to the reflection, scattering, or fluorescence in different tissues of the palm, the optical penetration depth varies from 0.5 mm to 5 mm. Vein acquisition devices [2] can only capture superficial veins, and palm vein images are typically obtained using a reflection approach. To improve user acceptance and enhance comfort during palm vein recognition, open environment capture is employed, which unavoidably introduces visible light (390–780 nm) illuminating the palm and entering the imaging system, resulting in the acquisition of multi-spectral palm vein images. As shown in Figure 1, visible light entering the skin increases light scattering, thereby interfering with clear palm vein imaging [22].
Figure 1. Palm vein image schematic diagram.
The self-developed non-contact palm vein image acquisition device in an open environment is shown in Figure 2. To enhance the absorption of near-infrared light by palm veins and minimize interference from visible light, the device employs two CST brand near-infrared linear light sources, model BL-270-42-IR, with a wavelength of 850 nm. These light sources are equipped with an intensity adjustment controller. The device uses an industrial camera, model MV-VD120SM, for image capture. The captured images are grayscale with a resolution of 1280 pixels × 960 pixels and 8 bits.
3. Method

The proposed methodology consists of the following steps: (1) image pre-processing, (2) feature extraction (SDSPCA-NPE), and (3) feature matching and recognition.
3.1. Image Pre-Processing
The most important issue in image pre-processing is the localization of the region of interest (ROI). ROI extraction normalizes the feature area of different palm veins, significantly reducing computational complexity. In this study, the ROI localization method proposed in reference [2] was adopted. This method identifies stable feature points on the palm, namely the valleys between the index and middle fingers and between the ring finger and little finger. Through ROI extraction, it partially corrects image rotation, translation, and scaling issues caused by non-contact imaging.

The ROI extraction process is illustrated in Figure 3. Firstly, the original image (Figure 3a) is denoised using a low-pass filter. Then, the image is binarized, and the palm contour is extracted using binary morphological dilation, refining it to a single-pixel width. Vertical line scanning is performed from the right side to the left side of the image, and the number of intersection points between the palm contour and the scan line is counted. When there are 8 intersection points, it indicates that the scan line passes through four fingers, excluding the thumb. From the second intersection point, p2, to the third intersection point, p3, the palm contour is traced to locate the valley point, point A, between the index finger and the middle finger (Figure 3c), using the disk method [2]. Similarly, between p6 and p7, the valley point, point B, between the ring finger and the little finger is located. Points A and B are connected, forming a square ABCD on the palm with the side length equal to the length of AB, denoted as d. This square is then grayscale normalized and resized to a size of 128 pixels × 128 pixels, resulting in the desired ROI, as shown in Figure 3d.
Figure 3. Flow chart of palm vein image ROI extraction: (a) denoising; (b) determine the cross point; (c) determine the valley point; and (d) extract the ROI region.
3.2. Feature Extraction (SDSPCA-NPE)
Palm vein images often encounter interference in the form of partial noise and deformation during the non-contact acquisition process. These disturbances not only increase the difficulty of processing palm vein data but also pose challenges to dimensionality reduction and classification, which hinder palm vein image recognition. To address these unique characteristics of palm vein images, this study employs supervised discriminative sparse principal component analysis (SDSPCA) [23] for dimensionality reduction and recognition. SDSPCA combines supervised discriminative information and sparse constraints into the PCA model [15], enhancing interpretability and mitigating the impact of high inter-class ambiguity in palm vein image samples. By projecting palm vein images using SDSPCA, the integration of sparse constraints and supervised learning achieves more effective dimensionality reduction for classification tasks, ultimately improving the recognition performance of palm vein images. The SDSPCA model is depicted below:
min_Q ‖X − QQᵀX‖²_F + α‖Y − QQᵀY‖²_F + β‖Q‖_{2,1}   (1)
s.t. QᵀQ = I_k
Optimize as follows [23]:

Step 1:

‖X − QQᵀX‖²_F
= Tr((X − QQᵀX)ᵀ(X − QQᵀX))
= Tr(XᵀX − XᵀQQᵀX)
= Tr(XᵀX) − Tr(QᵀXXᵀQ)

Tr(XᵀX) is a fixed value, independent of the final minimization problem solution.

Step 2:

min_Q ‖X − QQᵀX‖²_F = min_Q −Tr(QᵀXXᵀQ)

By simple algebraic calculation [24], the above equation can be optimized as follows:

min_Q ‖X − QQᵀX‖²_F + α‖Y − QQᵀY‖²_F + β‖Q‖_{2,1}
= min_Q −Tr(QᵀXXᵀQ) − αTr(QᵀYYᵀQ) + βTr(QᵀDQ)
= min_Q Tr(Qᵀ(−XXᵀ − αYYᵀ + βD)Q)   (2)
s.t. QᵀQ = I_k
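The Step 1 identity above can be checked numerically for any column-orthonormal Q; a minimal sketch with randomly generated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 12, 8, 3
X = rng.standard_normal((n, d))

# Column-orthonormal Q (so Q^T Q = I_k), obtained via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Step 1 identity: ||X - QQ^T X||_F^2 = Tr(X^T X) - Tr(Q^T X X^T Q).
lhs = np.linalg.norm(X - Q @ Q.T @ X, "fro") ** 2
rhs = np.trace(X.T @ X) - np.trace(Q.T @ X @ X.T @ Q)
assert np.isclose(lhs, rhs)
```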
In the proposed method, α and β are weight parameters. The training data matrix
is X = [x1 , . . . , xn ] T ∈ Rn×d , where n is the number of training samples, and d is the
feature dimension. Using Y = [y1 , . . . , yn ] T ∈ Rn×c as the label matrix of the dataset X, Y is
constructed as follows:
Y_{i,j} = 1 if c_j = i (j = 1, 2, . . . , n; i = 1, 2, . . . , c), and Y_{i,j} = 0 otherwise.   (3)
where c represents the number of classes in the training data, and c j ∈ {1, . . . , c} represents
the class labels. The optimal Q consists of k-tail eigenvectors of Z = −XXT − αYYT + βD,
where D ∈ Rn×n is a diagonal matrix, and the i-th diagonal element is:
D_ii = 1 / (2 √(Σ_{j=1}^{k} Q²_{ij} + ε))   (4)

where ε is a small positive constant that prevents division by zero.
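Equations (3) and (4) translate directly into code. The sketch below uses hypothetical helper names, and the small constant `eps` in the denominator of Eq. (4) is an assumption to guard against all-zero rows of Q:

```python
import numpy as np

def label_matrix(labels, c):
    """One-hot label matrix Y per Eq. (3): entry 1 where sample j
    carries class label i (labels are assumed to lie in {1, ..., c})."""
    labels = np.asarray(labels)
    Y = np.zeros((len(labels), c))
    Y[np.arange(len(labels)), labels - 1] = 1.0
    return Y

def reweight_diag(Q, eps=1e-8):
    """Diagonal matrix D per Eq. (4): D_ii = 1 / (2 * sqrt(sum_j Q_ij^2 + eps)).
    eps is the small constant in the denominator (an assumption here)."""
    row_norms_sq = np.sum(np.asarray(Q) ** 2, axis=1)
    return np.diag(1.0 / (2.0 * np.sqrt(row_norms_sq + eps)))
```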
Figure 4. Flowchart of the NPE.
Because palm vein data exhibits a non-uniform distribution within a class due to the influence
of outliers, linear dimensionality reduction methods that seek the final projection space
through global linear transformations often fail to preserve the non-linear and non-uniform
distribution structure of the high-dimensional palm vein dataset. Consequently, they
demonstrate low tolerance towards outliers during dimensionality reduction, resulting in
the misclassification of such samples. In contrast, by applying NPE’s non-linear mapping
and utilizing robust neighborhood selection techniques, the method encompasses the
outliers within the neighborhood range. This allows the outliers to be pulled closer to
samples of the same class during the dimensionality reduction process, ultimately resulting
in a more compact distribution of palm vein samples within the low-dimensional space for
the same class and larger separations from samples of other classes.
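A standard LLE/NPE-style construction of the weights W, and hence of the matrix M = (I − W)ᵀ(I − W) that appears in the δ term of Eq. (7), can be sketched as follows. Plain k-nearest neighbors stand in for the robust neighborhood selection described above; the function name and the `reg` parameter are illustrative assumptions.

```python
import numpy as np

def npe_weights(X, n_neighbors=2, reg=1e-3):
    """LLE/NPE-style reconstruction weights: each sample x_i is rebuilt
    from its nearest neighbors by minimizing ||x_i - sum_j W_ij x_j||^2
    subject to sum_j W_ij = 1. `reg` regularizes the local Gram matrix."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    W = np.zeros((n, n))
    # Pairwise squared distances for neighbor selection.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    for i in range(n):
        idx = np.argsort(d2[i])
        idx = idx[idx != i][:n_neighbors]      # nearest neighbors of x_i
        Z = X[idx] - X[i]                      # neighbors centered on x_i
        G = Z @ Z.T + reg * np.eye(len(idx))   # regularized local Gram
        w = np.linalg.solve(G, np.ones(len(idx)))
        W[i, idx] = w / w.sum()                # enforce sum-to-one
    return W

# M = (I - W)^T (I - W) then enters the δ term of Eq. (7).
```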
The proposed method is as follows:
min_Q ‖X − QQᵀX‖²_F + α‖Y − QQᵀY‖²_F + β‖Q‖_{2,1} + δ Σ_i ‖x_i − Σ_j W_ij x_j‖²
= min_Q −Tr(QᵀXXᵀQ) − αTr(QᵀYYᵀQ) + βTr(QᵀDQ) + δTr(QᵀXXᵀMXXᵀQ)
= min_Q Tr(Qᵀ(−XXᵀ − αYYᵀ + βD + δXXᵀMXXᵀ)Q)   (7)
s.t. QᵀQ = I_k
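With D and M held fixed, the minimizer of Eq. (7) follows from a single symmetric eigen-decomposition of Z, taking the eigenvectors with the smallest ("k-tail") eigenvalues. This is a sketch under that assumption; the full algorithm would alternate this eigen-solve with the D update of Eq. (4), and `sdspca_npe_step` is a hypothetical name.

```python
import numpy as np

def sdspca_npe_step(X, Y, D, M, alpha, beta, delta, k):
    """One eigen-solve of Eq. (7) with D and M held fixed: the minimizing
    Q is formed from the k eigenvectors of Z with the smallest eigenvalues,
    so Q^T Q = I_k holds by construction."""
    XXt = X @ X.T
    Z = -XXt - alpha * (Y @ Y.T) + beta * D + delta * (XXt @ M @ XXt)
    eigvals, eigvecs = np.linalg.eigh(Z)  # eigh returns ascending eigenvalues
    return eigvecs[:, :k]
```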
Figure 6. Curves of matching distribution for intra-class and inter-class.
For matching, the Euclidean distance between the two image feature vectors in the test set is computed; if they satisfy the following:

Distance < t   (9)

it is considered to belong to the same person and is accepted; otherwise, it is rejected.
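The accept/reject rule of Eq. (9) amounts to a single comparison; a minimal sketch with a hypothetical helper name:

```python
import numpy as np

def is_same_person(f1, f2, t):
    """Accept when the Euclidean distance between the two feature
    vectors is below the trained threshold t, per Eq. (9)."""
    return float(np.linalg.norm(np.asarray(f1) - np.asarray(f2))) < t
```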
4. Experimental Results and Analysis

The proposed algorithm was validated for its feasibility through experiments conducted on a self-built image database, the image database of the Institute of Automation, the Chinese Academy of Sciences, the image database of Hong Kong Polytechnic University, and the image database of Tongji University.
4.1. Feature Matching and Recognition
Four palm vein databases collected by heterogeneous devices under different conditions are considered to evaluate the proposed method's recognition accuracy.

(1) Self-built image databases: The self-developed device for palm vein image acquisition shown in Figure 2 was used for shooting, and the acquisition environment is shown in Figure 7. Palm images of the left and right hands of 265 people were collected, and the left and right hands of the same person were regarded as different samples. In total, 530 palms were captured, with 10 images taken for each hand, resulting in a total of 5300 images. In the scope of the 5300 images we collected, the FTE rate of our device is 0%.

(2) CASIA (Chinese Academy of Sciences Institute of Automation) databases: Multi-spectral Palm Vein Database V1.0 contains 7200 palm vein images collected from 100 different hands. Its palmprint images taken at 850 nm wavelength can clearly show the palm veins, making it a universal palm vein atlas.

(3) Hong Kong Polytechnic University databases (PolyU): The PolyU multi-spectral database collects palmprint images under blue, green, red, and near-infrared (NIR) illumination. The CCD camera and high-power halogen light source form a contact device for contact collection. Palm vein samples are extracted from the palmprint images collected under near-infrared illumination; 6000 palm vein images were collected from 250 users under the near-infrared light source.

(4) Tongji University databases: Tongji University's non-contact collection of palm vein galleries has a light source wavelength of 940 nm. It contains 12,000 palm vein image samples from individuals between 20 and 50 years of age. These images were captured using proprietary non-contact acquisition devices. The data were collected in two stages, including 600 palms, and each palm had 20 palm vein images.
Figure 7. Acquisition environment of the self-built database.
Figures 8–11 show the basic situation of each database sample. As shown in the figures, the collected images are affected by the palm vein itself and external factors, and there are different degrees of blurring.
Figure 8. Self-built database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.

Figure 9. CASIA database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.

Figure 10. PolyU database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.

Figure 11. Tongji database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.
4.2.Performance
4.2. PerformanceEvaluation
Evaluationand andError
ErrorIndicators
Indicators
4.2. Performance Evaluation and Error Indicators
4.2. Performance
Eachofofthe
4.2. Performance theEvaluationEvaluation
databases and and
consists Error
of100
ErrorofIndicators Indicators
100classes,
classes, withsix
siximages
imagesper perclass.
class.ForForeach
eachclass,
class,
Each
Each of the databases
databases consists
consists of 100 classes, with
with six images per class. For each class,
the first four
Eachfour
thefirst
first Each
of the images
of the
databases
images are
areused used
databases for
consists
used training,
consists of
of 100 classes,
fortraining,
training, while
100
whilethe the
classes, with six images per class. For each class, the first four images are used for training, while the remaining two images are used for testing. After feature extraction, a total of 40,000 matches are performed among the 200 test palm vein images. Among these matches, 400 matches are performed for samples of the same class, while 39,600 matches are performed for samples of different classes [2]. The threshold value ‘t’ is determined based on the distribution curve of intra-class and inter-class samples in the training set. The performance of the recognition system is evaluated using the following metrics: false rejection rate (FRR), false acceptance rate (FAR), correct recognition rate (CRR), and recognition time.
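The match counts above can be sanity-checked with a short sketch. This is our own illustrative bookkeeping, not the authors' code; the figure of 100 classes is an inference from 400 = 100 × 2 × 2 genuine pairs and is not stated explicitly here.

```python
# Hypothetical protocol check: N classes with two test images each;
# every test image is matched against every test image (self-matches included).
n_classes, per_class = 100, 2
labels = [c for c in range(n_classes) for _ in range(per_class)]

total = len(labels) ** 2                                    # 200 * 200 = 40,000
genuine = sum(1 for a in labels for b in labels if a == b)  # same-class pairs
impostor = total - genuine                                  # different-class pairs

print(total, genuine, impostor)  # 40000 400 39600
```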
$$\mathrm{FRR} = \frac{N_{FR}}{N_{AA}} \times 100\% \tag{10}$$

$$\mathrm{FAR} = \frac{N_{FA}}{N_{IA}} \times 100\% \tag{11}$$

where $N_{FR}$ is the number of falsely rejected genuine attempts, $N_{AA}$ the total number of genuine attempts, $N_{FA}$ the number of falsely accepted impostor attempts, and $N_{IA}$ the total number of impostor attempts.
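As an illustration of Equations (10) and (11), and of choosing the threshold ‘t’ near the crossing point of the two error curves, a minimal sketch follows. This is our own example, assuming similarity scores where a higher score means a better match; it is not the authors' implementation.

```python
import numpy as np

def frr_far(genuine, impostor, t):
    """Eq. (10): FRR = share of genuine attempts rejected at threshold t.
    Eq. (11): FAR = share of impostor attempts accepted at threshold t."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    frr = np.mean(genuine < t) * 100.0
    far = np.mean(impostor >= t) * 100.0
    return frr, far

def eer(genuine, impostor):
    """Scan observed scores as candidate thresholds; return the error rate
    where FRR and FAR are (approximately) equal."""
    ts = np.unique(np.concatenate([genuine, impostor]))
    t = min(ts, key=lambda v: abs(np.subtract(*frr_far(genuine, impostor, v))))
    return float(np.mean(frr_far(genuine, impostor, t)))

# Perfectly separated toy scores give an EER of 0%.
print(eer([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # 0.0
```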
Electronics 2023, 12, 3503
The conclusion drawn is that SDSPCA-NPE is robust to β within the range of [0.01, 100], but sensitive to α and δ. Specifically, within a certain range, the weights assigned to class information and local information have a significant impact on the classification ability.
4.4. Ablation Experiments
In the experiment, the proposed method integrates global information, category information (supervised), and local information, aiming to verify the performance improvement achieved by combining these pieces of information. To validate this, individual experiments were conducted using the NPE, SDSPCA, and SDSPCA-NPE algorithms on the same image database. The specific performance results can be found in Figure 13.
[Figure 13: bar chart of EER (%) for NPE, SDSPCA, and SDSPCA-NPE on the self-built, CASIA, PolyU, and Tongji databases.]

Figure 13. Performance of different components in the database.
From Figure 13, it can be observed that the proposed method achieves the lowest EER across all four datasets. Furthermore, NPE and SDSPCA exhibit the expected performance differences when applied to datasets that adhere to their respective dimensionality reduction principles. In conclusion, the SDSPCA-NPE algorithm combines the strengths of each individual algorithm, effectively integrating class-specific, global, and local information. It exhibits better applicability compared to SDSPCA and NPE alone, resulting in more desirable performance outcomes.
4.5. Performance Comparison
A comparison of our proposed algorithm with several typical algorithms is presented here, evaluating their performance on the four databases. Table 1 displays the performance results (CRR/EER) of the different algorithms. The corresponding ROC curves are illustrated in Figure 14.
Brief introductions to the compared algorithms follow:
(1) PCA: This method extracts the main information from the data, avoiding the comparison of redundant dimensions in palm vein images. However, it may result in data points being mixed together, making it difficult to distinguish between similar palm vein image samples, leading to sub-par performance.
(2) NPE: NPE retains the local information structure of the data, ensuring that the projected palm vein data maintains a close connection among samples of the same class. It effectively reduces the intra-class distance of similar palm vein samples. However, this method assumes the effective existence of local structures within the palm vein samples. It lacks robustness for samples that do not satisfy this data characteristic, such as palm vein images with significant deformation.
(3) SDSPCA: SDSPCA incorporates class information and sparse regularization into PCA. It exhibits a certain resistance to anomalous samples (e.g., blurry or deformed images) in palm vein images. However, its classification capability still cannot overcome the inherent limitations of PCA, resulting in the loss of certain components crucial for classification and unsatisfactory performance, especially for similar palm vein image samples.
(4) DBM: DBM utilizes texture features extracted from divided blocks, offering a simple structure, easy implementation, and fast speed. However, its performance is significantly compromised when dealing with low-quality or deformed palm vein images. Nevertheless, it performs reasonably well on high-quality palm vein data.
(5) DGWLD: DGWLD consists of an improved differential excitation operator and dual Gabor orientations. It better reflects the local grayscale variations in palm vein images, enhancing the differences between samples of different classes. However, it still struggles with sample rotation and deformation issues in non-contact palm vein images, and it incurs higher computational costs.
(6) MOGWLD: MOGWLD builds upon the dual Gabor framework by extracting multi-scale Gabor orientations and improving the original differential excitation by considering grayscale differences in multiple neighborhoods. This method enhances the discriminative power for distinguishing between samples from different classes. However, despite the improvement over the previous method, it increases the computational time and does not fundamentally enhance the classification ability for blurry and deformed samples within the same class.
(7) JPCDA: JPCDA incorporates class information into PCA, effectively reducing inter-class ambiguity. However, it does not perform well with non-linear palm vein data.
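To make the subspace baselines concrete, item (1) can be sketched in a few lines of NumPy. This is an illustrative PCA projection under our own toy setup, not the implementation benchmarked in Table 1.

```python
import numpy as np

def pca_project(X, k):
    """Project n vectorized palm vein images (rows of X, shape (n, d))
    onto the top-k principal components via SVD of the centered data."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: directions
    W = Vt[:k].T                                       # (d, k) projection basis
    return Xc @ W, W, mean

# Toy data: 20 "images" of 64 pixels reduced to 5-dimensional features;
# matching then becomes nearest-neighbor search in this subspace.
rng = np.random.default_rng(0)
Z, W, mu = pca_project(rng.normal(size=(20, 64)), k=5)
print(Z.shape, W.shape)  # (20, 5) (64, 5)
```

The same projection-matrix interface carries over conceptually to NPE and SDSPCA-NPE, which replace the plain covariance objective with neighborhood-preserving and supervised sparse terms.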
From Table 1 and Figure 14, it can be observed that sub-space methods, such as
the SDSPCA-NPE algorithm, outperform other texture-based methods in terms of time
efficiency. In terms of specific performance, the algorithm achieves superior results across
four databases, with the best CRR and EER performance. It also exhibits better time
complexity compared to the majority of methods. However, it should be noted that certain
methods show lower time complexity and even better EER performance on individual
image databases. Nonetheless, these algorithms lack universality and are not applicable for
distinguishing palm vein images, especially when dealing with non-uniformly distributed
palm vein databases.
[Figure 14: ROC curves (FAR % vs. FRR %) for DBM, DGWLD, JPCDA, MOGWLD, NPE, PCA, SDSPCA, and the proposed method; panels (a)–(d).]

Figure 14. ROC curves. (a) Self-built database. (b) CASIA database. (c) PolyU database. (d) Tongji database.
It can be concluded that SDSPCA-NPE, as a supervised algorithm, effectively combines local structural information and global information for dimensionality reduction, yielding better overall performance than the other algorithms across the four databases.

5. Conclusions

We have designed an open-environment palm vein image acquisition device based on multi-spectral imaging to achieve a high-security palm vein recognition system. Additionally, we have established a non-contact palm vein image dataset. In this study, we propose
a supervised discriminative sparse principal component analysis (SDSPCA-NPE) algorithm
that preserves the neighborhood structure to improve recognition performance. By utilizing
sparse constraints in supervised learning, the SDSPCA-NPE algorithm obtains interpretable
principal component features that contain class-specific information. This approach reduces
the impact of issues such as unclear imaging and low image quality during the acquisition process. It expands the inter-class distance and enhances the discriminability between different palm vein samples. Moreover, we introduce the neighborhood structure information
into the projection step using robust neighborhood selection techniques, which ensure the
preservation of similar local structures in the palm vein samples before and after projection.
This technique captures the uneven distribution of palm vein images and addresses the
drawbacks of increased image differences within the same class caused by rotation, scale
variation, translation, and illumination changes. Experimental results demonstrate the
effectiveness of the proposed method on the self-built database, the CASIA database,
the Hong Kong Polytechnic University database, and the Tongji University database. The
equal error rates achieved are 0.10%, 0.50%, 0.16%, and 0.19%, respectively. Our approach
outperforms other typical methods in terms of recognition accuracy. The system achieves
real-time performance with an identification time of approximately 0.0019 s, indicating its
practical value. Future work will focus on miniaturizing the palm vein acquisition device
and developing recognition algorithms to accommodate large-scale palm vein databases.
References
1. MacGregor, P.; Welford, R. Veincheck: Imaging for security and personnel identification. Adv. Imaging 1991, 6, 52–56.
2. Wu, W.; Wang, Q.; Yu, S.; Luo, Q.; Lin, S.; Han, Z.; Tang, Y. Outside Box and Contactless Palm Vein Recognition Based on a
Wavelet Denoising Resnet. IEEE Access 2021, 9, 82471–82484. [CrossRef]
3. Wu, W.; Elliott, S.J.; Lin, S.; Sun, S.; Tang, Y. Review of Palm Vein Recognition. IET Biom. 2019, 9, 1–10. [CrossRef]
4. Lee, Y.P. Palm vein recognition based on a modified (2D)2LDA. Signal Image Video Process. 2013, 9, 229–242. [CrossRef]
5. Wang, H.B.; Li, M.W.; Zhou, J. Palmprint recognition based on double Gabor directional Weber local descriptors. Electron. Inform.
2018, 40, 936–943.
6. Li, M.W.; Liu, H.Y.; Gao, X.J. Palmprint recognition based on multiscale Gabor directional Weber local descriptors. Prog. Laser
Optoelectron. 2021, 58, 316–328. [CrossRef]
7. Almaghtuf, J.; Khelifi, F.; Bouridane, A. Fast and Efficient Difference of Block Means Code for Palmprint Recognition. Mach. Vis.
Appl. 2020, 31, 1–10. [CrossRef]
8. Leng, L.; Yang, Z.; Min, W. Democratic Voting Downsampling for Coding-based Palmprint Recognition. IET Biom. 2020, 9,
290–296. [CrossRef]
9. Karanwal, S. Robust Local Binary Pattern for Face Recognition in Different Challenges. Multimed. Tools Appl. 2022, 81, 29405–29421.
[CrossRef]
10. El Idrissi, A.; El Merabet, Y.; Ruichek, Y. Palmprint Recognition Using State-of-the-art Local Texture Descriptors: A Comparative
Study. IET Biom. 2020, 9, 143–153. [CrossRef]
11. Kaur, P.; Kumar, N.; Singh, M. Biometric-Based Key Handling Using Speeded Up Robust Features. In Lecture Notes in Networks
and Systems; Springer Nature: Singapore, 2023; pp. 607–616.
12. Kumar, A.; Gupta, R. Futuristic Study of a Criminal Facial Recognition: A Open-Source Face Image Dataset. Sci. Talks 2023, 6,
100229. [CrossRef]
13. Yahaya, Y.H.; Leng, W.Y.; Shamsuddin, S.M. Finger Vein Biometric Identification Using Discretization Method. J. Phys. Conf. Ser.
2021, 1878, 012030. [CrossRef]
14. Jia, W.; Xia, W.; Zhao, Y.; Min, H.; Chen, Y.-X. 2D and 3D Palmprint and Palm Vein Recognition Based on Neural Architecture
Search. Int. J. Autom. Comput. 2021, 18, 377–409. [CrossRef]
15. Rida, I.; Al-Maadeed, S.; Mahmood, A.; Bouridane, A.; Bakshi, S. Palmprint Identification Using an Ensemble of Sparse
Representations. IEEE Access 2018, 6, 3241–3248. [CrossRef]
16. Sun, S.; Cong, X.; Zhang, P.; Sun, B.; Guo, X. Palm Vein Recognition Based on NPE and KELM. IEEE Access 2021, 9, 71778–71783.
[CrossRef]
17. Guo, Y.-R.; Bai, Y.-Q.; Li, C.-N.; Bai, L.; Shao, Y.-H. Two-Dimensional Bhattacharyya Bound Linear Discriminant Analysis with Its
Applications. Appl. Intell. 2021, 52, 8793–8809. [CrossRef]
18. Jolliffe, I.T. Principal Component Analysis and Factor Analysis. In Principal Component Analysis; Springer: New York, NY, USA,
1986; pp. 115–128.
19. Liu, J.-X.; Xu, Y.; Gao, Y.-L.; Zheng, C.-H.; Wang, D.; Zhu, Q. A Class-Information-Based Sparse Component Analysis Method to
Identify Differentially Expressed Genes on RNA-Seq Data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2016, 13, 392–398. [CrossRef]
[PubMed]
20. Multilinear Principal Component Analysis. In Multilinear Subspace Learning; Chapman and Hall/CRC: Boca Raton, FL, USA, 2013;
pp. 136–169.
21. Al-jaberi, A.S.; Mohsin Al-juboori, A. Palm Vein Recognition, a Review on Prospects and Challenges Based on CASIA’s Dataset.
In Proceedings of the 2020 13th International Conference on Developments in eSystems Engineering (DeSE), Virtual Conference,
14–17 December 2020.
22. Salazar-Jurado, E.H.; Hernández-García, R.; Vilches-Ponce, K.; Barrientos, R.J.; Mora, M.; Jaswal, G. Towards the Generation of
Synthetic Images of Palm Vein Patterns: A Review. Inf. Fusion 2023, 89, 66–90. [CrossRef]
23. Feng, C.-M.; Xu, Y.; Liu, J.-X.; Gao, Y.-L.; Zheng, C.-H. Supervised Discriminative Sparse PCA for Com-Characteristic Gene
Selection and Tumor Classification on Multiview Biological Data. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2926–2937.
[CrossRef]
24. Jiang, B.; Ding, C.; Luo, B.; Tang, J. Graph-Laplacian PCA: Closed-Form Solution and Robustness. In Proceedings of the 2013
IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013.
25. Feng, D.; He, S.; Zhou, Z.; Zhang, Y. A Finger Vein Feature Extraction Method Incorporating Principal Component Analysis and
Locality Preserving Projections. Sensors 2022, 22, 3691. [CrossRef]
26. Wang, X.; Yan, W.Q. Human Identification Based on Gait Manifold. Appl. Intell. 2022, 53, 6062–6073. [CrossRef]
27. Roweis, S.T.; Saul, L.K. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326.
[CrossRef] [PubMed]
28. Zhao, X.; Guo, J.; Nie, F.; Chen, L.; Li, Z.; Zhang, H. Joint Principal Component and Discriminant Analysis for Dimensionality
Reduction. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 433–444. [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.