INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,  
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)  
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue X, October 2025  
A Robust Multi-Modal Biometric Recognition System Using Iris,  
Fingerprint and Palmprint based on Cuckoo Search Algorithm  
Dr. P. Aruna Kumari  
Assistant Professor, Department of CSE, JNTU-GV, CEV, Vizianagaram, AP, India.  
Abstract: Authentication enables individuals to be automatically recognized based on their behavioral or physiological traits.  
Biometrics is extensively utilized in many commercial and official identifying systems to facilitate automated access control. This  
research presents a model for multimodal biometric recognition that utilizes a feature level fusion method. The suggested method  
encompasses a series of five processes, namely pre-processing, feature extraction from all attributes, feature level fusion, feature  
space reduction, and recognition via machine learning techniques. The initial stage involves the pre-processing of three distinct  
modalities, namely iris, palmprint, and fingerprint. Next, feature extraction is conducted for each modality. Following this, the features extracted from the three modalities are combined at the feature level. The utilization of feature level fusion in integrating multiple biometric data presents several advantages in comparison to alternative fusion procedures, albeit accompanied by the notable limitation of creating feature vectors of substantial dimensions. The main objective of
this study is to analyze the difficulties related to the management of high-dimensional data and investigate several methods of  
feature reduction that can be applied to multimodal biometric systems.  
This study presents a novel approach that employs Cuckoo Search (CS) optimization technique for the purpose of feature selection.  
The objective is to address the challenges related to integrating the Iris, palmprint, and fingerprint feature spaces at the feature level.  
Normalization is applied to bring all the feature spaces into same domain during integration of features at feature level. Machine  
Learning approaches are utilized to assess the effectiveness of feature selection based on Cuckoo Search Algorithm (CSA) and  
feature space reduction using Principal Component Analysis (PCA) on the CASIA, IITD, and FVC databases. Additionally,  
matching is performed using the Euclidean distance. The trials undertaken in this study indicated a significant reduction in the  
feature space when iris, palmprint, and fingerprint characteristics were merged at the feature level. Specifically, the use of CS  
resulted in a greater reduction compared to PCA. The decrease in size led to an improvement in the accuracy of recognition.  
Keywords: Cuckoo Search optimization, Multi-modal biometric systems, Feature Level Fusion, Palmprint, Iris, Fingerprint, Feature selection.
I. Introduction  
In recent years, the utilization of biometrics has become a prominent element of modern security systems. Biometrics can be categorized into two primary classifications: image-based systems and signal-based systems. Biometrics is a technique utilized to accurately differentiate individuals by analyzing their behavioral or physical attributes [1]. Signal-based systems encompass the detection and analysis of signals such as Electrocardiography (ECG) and speech for speaker identification. Image-based systems cover a wide range of biometric modalities, such as gestures, hand-written signature, hand geometry, gait, iris, and face [2, 3].
The biometric system is a rapidly developing and dynamic technology utilized in automated systems for the precise and unique  
identification of persons, eliminating the need for memory or physical tokens like identity cards and passwords [4]. Numerous  
studies have provided evidence supporting the superior attributes of iris recognition technology compared to alternative biometric  
systems, including those reliant on face characteristics [5] and fingerprints [6]. The iris system has been widely accepted in various  
applications due to its excellent accuracy and reliability in biometric systems [4, 7]. The biometric system can be categorized into  
two primary classifications: multimodal biometric systems and unimodal biometric systems. The unimodal biometric framework is  
employed for the purpose of verifying an individual's identity by depending on a single source of data, such as the left iris, right  
iris, or face [4]. Within the framework of a multimodal biometric system operating in identification mode, the classifier produces a series of ranks over the candidates; these ranks serve as a representation of prospective matches [8]. The comprehensive performance of a multimodal biometric system is influenced by
various factors that must be taken into account throughout its development and implementation [4].  
Iris, palmprint and fingerprint identification are widely regarded as the major and most reliable biometric modalities currently being  
researched, demonstrating a decreased error rate [9]. Since its inception in 1987, iris recognition has become a frequently utilized  
biometric recognition method, as evidenced by the research conducted by Aran and Leonard [10, 2]. In recent years, there has been  
a notable surge in scholarly investigations centered on iris recognition [12], rendering it the most rapidly expanding area of academic  
inquiry [11, 12]. The main goal of iris recognition is to capture and analyze images in order to facilitate the process of identification.  
The primary and pivotal phase in iris identification entails the localization of the iris region, which includes the determination of  
both the outside and inner boundaries [8, 13]. The utilization of iris recognition has been observed in several biometric applications,  
including border control to enhance security measures, intelligent unlocking systems, and crime screening procedures [14].  
Furthermore, it has also been utilized in operations pertaining to the regulation of border crossings [15].  
In recent times, there has been a surge in the utilization of face biometric characteristics, which have gained significant popularity.  
However, their implementation in practical situations has also revealed certain limitations. Amidst the COVID-19 pandemic, it became prevalent for individuals to wear masks in outdoor environments, while also displaying reluctance towards the utilization of contact-based biometric technologies. As a result, the effectiveness of existing face recognition
methods is reduced [16]. Moreover, it is important to acknowledge that an individual's facial characteristics and fingerprints can be  
acquired unlawfully through either non-consensual or physical contact methods. As a result, there has been an increasing emphasis  
on the progress of palmprint-based biometrics technology, as evidenced by the creation of Amazon One [17]. This technology is  
highly regarded for its unique discriminatory characteristics, as well as its non-contact and hygienic approach to data collection. In  
addition, the process of obtaining palm images requires the active cooperation of individuals, making them difficult to capture covertly and thus increasing their effectiveness in protecting privacy.
Various approaches are utilized in the iris localization process, such as Distance Regularized Level Set Evolution (DRLSE),  
integrodifferential operator, Circular Hough Transform (CHT), and Active Contour (AC). Tsai (2018) utilizes a fuzzy matching  
methodology to determine the unique attributes of iris points in her research. Daugman's original work on the iris identification  
system entailed the implementation of an integro-differential operator for the purpose of iris localization and eyelid removal [19].  
Boles [20] presented a novel iris recognition technique that exhibits invariance to translation, rotation, and scaling. Wildes [21] utilized a methodology in his research that incorporated the fusion of edge detection techniques with the Hough Transform algorithm to effectively discern iris patterns. The authors of [22] presented an alternative methodology for the task of iris
localization and pattern matching in their study. This study introduces a method for localizing non-cooperative iris identification  
that utilizes clustering techniques [23].  
Nevertheless, unimodal biometric systems currently face obstacles such as spoof attacks and intra-class variations [24]. Researchers have built multimodal systems in order to address these challenges [24]. These systems encompass
the amalgamation of fragmented evidence acquired from several samples, instances, sensors, algorithms, or qualities. One potential  
strategy for the development of a dependable person recognition system entails the construction of a multimodal system that  
integrates various biometric characteristics.  
The implementation of an effective method for integrating diverse sources of biometric data is of paramount importance in order  
to maximize the advantages of employing a multibiometric approach. The four layers of fusion in multibiometrics encompass sensor  
level fusion, feature level fusion, match score level fusion, and decision level fusion. Fusion at the feature level is considered more advantageous due to its ability to incorporate richer information about the biometric sample, hence enhancing the accuracy of the system's identification capabilities. This methodology is expected to yield improved performance and processing time, resulting in enhanced recognition outcomes [25]. Systems that possess a solitary (unimodal) biometric
characteristic have the capability to employ intra-modal feature fusion for the corresponding biometric trait. During the stages of  
feature extraction or feature selection, it is feasible to perform fusion at the feature level [26]. The scalability of feature selection is a challenging problem because the time complexity of several feature selection techniques grows rapidly with dimensionality [27, 28].
Nevertheless, this fusion technique presents two noteworthy concerns, akin to the aforementioned strategies. There are two primary  
concerns that require addressing. The fundamental concern pertains to the compatibility [29] between two separate feature spaces,  
a matter that can be effectively resolved through the implementation of a normalization technique [24]. The second concern revolves  
around the increased complexity of the feature space [30], which clearly requires significant memory and processing capabilities.  
Hence, it is crucial to develop a sophisticated classifier that can effectively operate inside the integrated feature space [31]. The  
resolution of this issue can be achieved by the implementation of either feature transformation or feature selection approaches.  
Feature selection is a methodical procedure that entails choosing a subset of features based on their efficacy in achieving a resilient  
and reliable categorization of the feature space. The application of this approach offers a potential avenue for enhancing the  
effectiveness of classification (recognition) by eliminating unnecessary, disruptive, and insignificant elements [32]. Feature  
transformation refers to the process of converting an initial vector space of features into a subsequent feature space that better  
captures the underlying characteristics of the data.  
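As an illustration of the two concerns above, the sketch below resolves the compatibility issue by normalizing each modality's feature vector into a common [0, 1] domain before concatenation. The function names and the tiny example vectors are hypothetical, not the paper's actual feature extractors:

```python
import numpy as np

def min_max_normalize(features):
    """Scale one modality's feature vector into the common [0, 1] domain."""
    lo, hi = features.min(), features.max()
    if hi == lo:                       # degenerate constant vector
        return np.zeros_like(features, dtype=float)
    return (features - lo) / (hi - lo)

def fuse_at_feature_level(iris_feat, palm_feat, finger_feat):
    """Normalize each modality separately, then concatenate into one vector."""
    parts = [min_max_normalize(np.asarray(f, dtype=float))
             for f in (iris_feat, palm_feat, finger_feat)]
    return np.concatenate(parts)

# Tiny illustrative vectors (not real biometric features)
fused = fuse_at_feature_level(np.array([10.0, 20.0, 30.0]),
                              np.array([0.1, 0.9]),
                              np.array([100.0, 50.0, 75.0, 25.0]))
```

The length of the fused vector is the sum of the modality dimensions, which is exactly the dimensionality problem the subsequent feature reduction step addresses.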
In a prior investigation [33], the researchers employed PCA and Independent Component Analysis (ICA) to conduct iris feature  
extraction by projecting the data into a new dimensional space. PCA has been extensively utilized in the field of palmprint and face  
biometrics to efficiently decrease the dimensionality of the feature space, both prior to and after feature level fusion [34-36]. Several  
academic articles [37-40] in the literature field have extensively examined the subject of feature selection methods after feature  
level fusion, indicating a significant level of interest and drive in this particular area of research. In a prior investigation [36], the  
amalgamation of feature sets obtained from the hand and face was executed by feature-level fusion. Following that, the SFFS  
feature selection technique was utilized in order to reduce the dimensionality of the feature space. At the feature level, the process  
of normalizing and combining the scale-invariant feature transform (SIFT) features of the face and the minutiae features of a  
fingerprint is conducted. The utilization of K-Means clustering has been employed to select significant features from the fused  
feature space, as outlined in the study referenced as [37]. Furthermore, the use of the Genetic Algorithm is utilized to determine the  
most significant features from the combined feature space. The process of integrating eigen-features derived from a visual face  
image and an infrared facial image is accomplished through feature level fusion, as described in reference [38]. The integration is  
accomplished by utilizing the Genetic Algorithm to select the most influential attributes. In reference [40], the application of a  
Genetic algorithm was reported for the purpose of facilitating the selection of ideal features following the fusion of iris, palmprint,  
and fingerprint at the feature level. The ABC algorithm, along with its various adaptations, has been successfully applied in multiple  
fields, such as network configuration optimization [45], neural network training [44], image contrast enhancement [41],  
optimization of neural network image deblurring [42], and iris segmentation for multilevel thresholding application [43].  
Based on the information presented above, it can be observed that the focus of research endeavors has primarily been on multimodal  
systems that employ score level fusion techniques. Furthermore, it is commonly recognized that feature level fusion offers a broader  
spectrum of biometric inputs in comparison to fusion at the score level. Nevertheless, the exploration of feature level fusion in the  
domain of iris identification has been constrained as a result of the formidable obstacle presented by the complex and extensive  
attributes inside the feature space. The extant literature contains a range of techniques that have been documented with the aim of  
reducing the dimensionality of the feature space. The strategies discussed in this context include several techniques for feature  
selection and data transformation, such as PCA. In spite of the numerous ways available for reducing the feature space, the  
identification of an appropriate solution requires a comprehensive comprehension of the prioritization of features within the  
combined feature space. Additional investigation and examination are required to determine the potential application of  
optimization methodologies in lowering the feature space, hence improving the performance of the recognition system.  
To address these concerns, the present study has investigated swarm intelligence-based techniques, specifically focusing on the  
Cuckoo Search algorithm, as a means of feature selection to diminish the combined feature space. The implementation of CSA on  
practical engineering problems was promptly carried out, and its exceptional performance on various test functions was  
subsequently documented in reference [309]. When comparing CSA to other evolutionary algorithms such as Genetic Algorithm  
and Particle Swarm Optimization, it becomes evident that CSA exhibits superiority in handling multimodal objective functions. On  
the one hand, the number of pre-determined parameters in CSA is lower compared to GA and PSO [305]. On the other hand, by the utilization of a hybrid approach involving global search and local search techniques, the CSA algorithm demonstrates the ability to effectively navigate the entirety of the search space and precisely identify the local minima within a specific region.
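A minimal binary Cuckoo Search sketch for feature selection is given below. The Lévy-flight step follows Mantegna's algorithm; the fitness function is a toy Fisher-style separability score with a size penalty, standing in for the recognition-accuracy objective, and all names and parameter values are illustrative assumptions rather than the study's actual configuration:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def levy_step(size, beta=1.5):
    """Mantegna's algorithm for Levy-flight step sizes."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def fitness(mask, X, y):
    """Toy separability score of the selected features, penalizing subset size."""
    if mask.sum() == 0:
        return -np.inf
    sel = X[:, mask.astype(bool)]
    m0, m1 = sel[y == 0].mean(0), sel[y == 1].mean(0)
    s = sel[y == 0].std(0) + sel[y == 1].std(0) + 1e-9
    return np.mean(np.abs(m0 - m1) / s) - 0.01 * mask.sum()

def cuckoo_select(X, y, n_nests=15, iters=60, pa=0.25):
    d = X.shape[1]
    nests = rng.integers(0, 2, size=(n_nests, d))   # binary feature masks
    fits = np.array([fitness(n, X, y) for n in nests])
    for _ in range(iters):
        # Global search: a Levy flight stochastically flips bits of one nest
        i = rng.integers(n_nests)
        prob = 1.0 / (1.0 + np.exp(-np.abs(levy_step(d))))
        cand = np.where(rng.random(d) < prob, 1 - nests[i], nests[i])
        j = rng.integers(n_nests)                    # compare to a random nest
        f = fitness(cand, X, y)
        if f > fits[j]:
            nests[j], fits[j] = cand, f
        # Abandon a fraction pa of the worst nests (local diversification)
        worst = np.argsort(fits)[:max(1, int(pa * n_nests))]
        nests[worst] = rng.integers(0, 2, size=(len(worst), d))
        fits[worst] = [fitness(n, X, y) for n in nests[worst]]
    best = np.argmax(fits)
    return nests[best], fits[best]

# Synthetic two-class data: features 0-2 are informative, the rest are noise
X = rng.normal(size=(80, 10))
y = np.repeat([0, 1], 40)
X[y == 1, :3] += 2.0
mask, score = cuckoo_select(X, y)
```

In the actual system the fitness would be the recognition accuracy of a classifier evaluated on the fused feature subset.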
The problem of integrating biometric features at the feature level, particularly in scenarios with a large number of dimensions, has  
been effectively tackled through the utilization of an algorithm known as CSA for feature selection in Biometric Feature Level  
Fusion. The study employed PCA as a technique to reduce the number of features. The effectiveness of PCA was then compared to that of the CSA feature selection method to ascertain the relative merits of feature selection and feature transformation approaches.
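For comparison, feature transformation with PCA can be sketched in a few lines: centre the fused feature matrix, take the eigenvectors of its covariance matrix, and project onto the top-k components. This is a generic PCA sketch, not the study's exact configuration:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # centre the data
    cov = np.cov(Xc, rowvar=False)             # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 12))                  # 50 fused vectors of dimension 12
Z = pca_reduce(X, 4)                           # reduced to 4 dimensions
```

Unlike CSA-based selection, PCA transforms the feature space rather than choosing a subset of the original features, so the reduced dimensions no longer correspond to individual extracted features.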
Fig 1: System Architecture  
This article examines the reduction procedures indicated above, which aim to lower the amount of data in multimodal systems after  
feature level fusion. Previous studies have undertaken experiments on six discrete multi-modal systems. The process of feature  
level fusion was employed to extract and integrate various biometric features, including 2D-Gabor, 2D-LogGabor palmprint and  
iris texture features, Haar wavelet-based iris features, and fingerprint Minutiae features. The experiments made use of the CASIA  
iris, palmprint, and fingerprint databases, the IIT Delhi iris and palmprint databases, and the FVC fingerprint database. The
system architecture is depicted in Figure 1.  
The primary contribution of the research paper is the following:  
• Proposed the CSA optimization algorithm, which is utilized to select the most effective features for an individual's authentication.  
The subsequent section, Section 2, delves into the preprocessing and feature extraction techniques employed for the iris, palmprint,  
and fingerprint data. The discussion in Section 3 pertains to the integration of feature spaces. The options for reducing the features  
in the fused feature space are elaborated upon in Section 4. Section 5 of the manuscript encompasses the outcomes and analysis of  
the proposed methodology, whilst Section 6 is dedicated to providing the last remarks and conclusions of the research work.  
II. Preprocessing and Feature Extraction
The process of preparing a biometric characteristic for the extraction of features is usually known as preprocessing. The present  
work involved the execution of a methodology for the identification and extraction of the iris region from an image of an eye, which  
was subsequently followed by a normalization process aimed at achieving a consistent and standardized dimension for the extracted  
iris regions. Section 2.1 provides a detailed explanation of the iris preparation technique. Preprocessing is performed on the  
palmprint picture to isolate the region of interest (ROI) corresponding to the core area. This is necessary as the complete image is  
unsuitable for the recognition procedure. The aforementioned objective is accomplished by the utilization of a tangent-based  
methodology, as expounded upon in section 2.2. Additionally, this part provides a detailed account of the techniques employed to  
improve the quality of the palm image. The procedure of preparing fingerprint images encompasses multiple stages, namely  
segmentation, normalization, filtering, and thinning, as elucidated in section 2.3. The extraction of features from preprocessed images is essential for differentiating genuine users from impostors. Three different approaches have been utilized to extract texture characteristics and phase features from preprocessed iris samples: the 2D-Gabor filter, the 2D-Log Gabor filter, and Haar wavelets. The
methodology for extraction has been discussed in section 2.4. The process of extracting textural information from the area of interest  
(ROI) of palmprints is achieved by employing the 2D-Gabor filter and 2D-Log Gabor filter, as described in section 2.4. The process  
of extracting minutiae features from fingerprints is elaborated upon in section 2.5.  
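Since the 2D-Gabor filter recurs in both the iris and palmprint pipelines, a minimal kernel construction is sketched below: a Gaussian envelope modulating a complex sinusoid. The size, sigma, and frequency values are illustrative, not the parameters used in this study:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, freq=0.2):
    """Complex 2D Gabor kernel: Gaussian envelope times a complex carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * xr)        # complex sinusoid
    return envelope * carrier

g = gabor_kernel()
```

Convolving an image with such kernels at several orientations and frequencies, then quantizing the phase of the response, yields the texture/phase feature codes used for iris and palmprint matching.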
Iris Preprocessing  
The act of extracting the iris area from a given eye image, with the purpose of enabling subsequent feature extraction, is commonly  
known as iris preprocessing. Within the realm of literature, scholars have utilized a wide range of approaches to compile and analyze  
iris data [19-21]. The procedure of iris extraction in our investigation encompassed multiple steps, as seen in Figure 2. Initially, the  
iris was detected through the utilization of the Canny operator and subsequently localized using the Hough transform. Subsequently,  
the process of normalization was conducted utilizing Daugman's Rubber Sheet Model. The quality of the normalized image was  
improved. The complexities of this methodology have been explicated in the next section.  
Localization  
The process of iris localization entails the detection of the iris inside an image of the eye. This particular phase holds significant  
importance in the process of iris identification. One possible approach to accomplish this task is by the utilization of techniques  
such as Daugman's method [20], which employs an integro-differential operator. According to Wildes [21], the detection of iris can  
be achieved through the identification of edges and subsequent application of the Hough transform. In their study, Boles [20]  
employs edge detection techniques to localize both the pupil and the outer boundary. Subsequently, they extract features by utilizing  
the pupil as a reference point. In a separate investigation [19], the identification of the pupil is achieved using the utilization of pixel  
intensity projections, thresholding techniques, and the circular Hough transform. Numerous additional methods have been proposed  
[24]. The present work employed an initial Canny operator for the purpose of edge tracking and detection. The limits of the iris and  
pupil were thereafter identified by the utilization of the Hough transform. The pupil and iris radii were determined.  
Canny Edge Detection  
Canny edge detection is widely used in image processing and computer vision. The operator that maximizes the signal-to-noise ratio and localization is closely approximated by the first derivative of a Gaussian [46]. The Canny algorithm applies smoothing, gradient estimation, non-maximum suppression, and hysteresis thresholding to accurately identify and track image edges. The iris image I[a, b] was convolved with a Gaussian filter G for smoothing; this reduces noise and spurious boundaries. Let S[a, b] denote the smoothed image. The gradient of S[a, b] is estimated using 2x2 first-difference approximations, with P[a, b] and Q[a, b] denoting the resulting partial derivatives in the x and y directions, respectively. From these, the gradient magnitude and orientation arrays are computed. The magnitude array exhibits large values where the intensity changes sharply; however, broad ridges in the magnitude array must be thinned to locate edge points, retaining only the local maxima. This step is non-maximum suppression. A single threshold applied to the non-maximum-suppressed image N[a, b] cannot eliminate all erroneous edges, so hysteresis (double) thresholding is used to remove them.
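The smoothing and 2x2 first-difference gradient stages can be sketched as follows (non-maximum suppression and hysteresis are omitted; the kernel size and sigma are assumptions):

```python
import numpy as np

def smooth(img, sigma=1.0, radius=2):
    """Separable Gaussian smoothing of the input image."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, tmp)

def gradients(S):
    """2x2 first-difference estimates P, Q, then magnitude and orientation."""
    P = (S[1:, 1:] - S[1:, :-1] + S[:-1, 1:] - S[:-1, :-1]) / 2.0  # d/dx
    Q = (S[1:, 1:] - S[:-1, 1:] + S[1:, :-1] - S[:-1, :-1]) / 2.0  # d/dy
    return np.hypot(P, Q), np.arctan2(Q, P)

img = np.zeros((16, 16)); img[:, 8:] = 1.0      # synthetic vertical step edge
mag, orient = gradients(smooth(img))
```

On this synthetic step edge the magnitude peaks along the column of the intensity transition, which is exactly what the subsequent non-maximum suppression stage would thin to a one-pixel edge.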
Segmentation  
The process of iris segmentation involves the detection of both the inner and external boundaries [47]. The Circular Hough  
transform was employed to determine the coordinates and radius of the iris and pupil centers. The computation of the initial  
derivative of intensity in the eye image resulted in the generation of an edge map. The process involves examining each edge pixel  
in the edge map and casting votes for candidate circle centers at various radii; the accumulated votes yield the optimal circle parameters in the Hough space. The parameters computed are the radius 'r' and the center coordinates (x, y), which define the boundary circle in Hough space. Both the Parabolic and Linear Hough Transforms
have the capability to detect and identify both the upper and lower eyelids. To detect the border between the iris and sclera, the  
vertical derivatives (gradients) are computed in order to minimize the impact of horizontally aligned eyelids. To detect eyelids,  
horizontal gradients are computed. The Hough space approach achieves accurate localization of the iris boundary by considering a  
reduced number of edge points, rather than all edge pixels.  
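A compact voting implementation of the Circular Hough Transform over a set of edge points might look as follows; the accumulator resolution and the synthetic circle are illustrative:

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Accumulate votes for circle centres (cy, cx) at each candidate radius."""
    radii = list(radii)
    acc = np.zeros((len(radii), shape[0], shape[1]), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for k, r in enumerate(radii):
        for (y, x) in edge_points:
            # Each edge point votes for all centres that would place it on
            # a circle of radius r
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
            np.add.at(acc[k], (cy[ok], cx[ok]), 1)
    k, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[k], (cy, cx)

# Synthetic circular edge: centre (20, 20), radius 8
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(int(round(20 + 8 * np.sin(a))), int(round(20 + 8 * np.cos(a)))) for a in t]
r, (cy, cx) = circular_hough(pts, (40, 40), radii=range(6, 11))
```

Restricting the vote to Canny edge points, as the text notes, keeps the accumulator small compared with voting over all image pixels.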
Normalization  
Normalization converts the iris region into a standard size. Illumination fluctuations cause pupil dilation, so the dimensions of the captured iris vary from image to image. Further inconsistency arises when the head, eyes, or camera rotate during image capture. Representing the iris in the Cartesian system therefore distorts distance calculations during iris matching. The doughnut (annular) shape of the iris region around the pupil is another concern [19]. An effective normalization method produces consistent representations for images of the same iris and maintains stable iris region dimensions under different conditions. Several normalization strategies have been proposed in the literature [20, 21]. Daugman's rubber sheet model remaps iris points into polar coordinates (r, θ), where r represents the normalized radial distance (0-1) and θ the angle (0-2π). Iris region points are converted from Cartesian (p, q) to normalized polar coordinates. Pixel coordinates for the circles bounding the iris and pupil were generated using a radial resolution of 20 and an angular resolution of 240.
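The rubber-sheet unwrapping onto the 20 x 240 polar grid can be sketched as below, under the simplifying assumption that the pupil and iris circles are concentric, and using nearest-neighbour sampling:

```python
import numpy as np

def rubber_sheet(img, centre, pupil_r, iris_r, radial_res=20, angular_res=240):
    """Daugman rubber-sheet model: sample the iris annulus onto a fixed
    radial x angular grid (nearest-neighbour sampling, concentric circles)."""
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    rs = np.linspace(0, 1, radial_res)          # normalized radial coordinate
    out = np.zeros((radial_res, angular_res))
    cy, cx = centre
    for i, r in enumerate(rs):
        radius = pupil_r + r * (iris_r - pupil_r)   # interpolate pupil->iris
        ys = np.clip(np.round(cy + radius * np.sin(thetas)).astype(int),
                     0, img.shape[0] - 1)
        xs = np.clip(np.round(cx + radius * np.cos(thetas)).astype(int),
                     0, img.shape[1] - 1)
        out[i] = img[ys, xs]
    return out

eye = np.fromfunction(lambda y, x: (x + y) % 7, (120, 120))  # synthetic texture
strip = rubber_sheet(eye, (60, 60), pupil_r=15, iris_r=50)
```

In practice the pupil and iris circles found by the Hough transform are not concentric, and the radial sampling is taken along the line between the two boundaries at each angle; the concentric version above keeps the sketch short.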
Palmprint Preprocessing  
The procedure involves aligning palmprint photos and segmenting the region of interest (ROI) for subsequent processing. One of  
the primary preprocessing procedures involves the establishment of a coordinate system that is derived from significant anatomical  
landmarks situated amidst the fingers. The preprocessing stage includes several steps such as binarization, contour extraction of the  
hand or palm, key point identification, coordinate system design, and ROI extraction [48]. Starting from the third step, the methods  
of preparation undergo variations until they ultimately converge and become identical [48]. There are various methods by which  
key points can be discerned. In this study, palm image regions of interest (ROIs) were obtained by employing tangents to extract square-shaped regions. The implementation of pre-processing techniques on the image prior to feature extraction serves to mitigate
the presence of noise, as well as minimize the occurrence of misconnections and isolated regions that may disrupt the subsequent  
analysis. The original image has been modified to boost the contrast of the palm feature. The binary representation of the image is  
generated by enhancing the original image in order to facilitate the identification of unique characteristics. The detection of edges  
is accomplished by the utilization of the Sobel filter, which indicates that edges manifest at points of discontinuity in the intensity  
function or regions with sharp intensity gradients. The tangent-based method is employed to extract regions of interest (ROIs) from  
images.  
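The Sobel edge step mentioned above amounts to correlating the image with a pair of 3x3 gradient kernels and taking the gradient magnitude; a plain NumPy sketch (the padding mode is an assumption):

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude: large where intensity changes sharply."""
    Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    Ky = Kx.T
    pad = np.pad(img.astype(float), 1, mode='edge')
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for dy in range(3):                 # correlate with both kernels
        for dx in range(3):
            win = pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += Kx[dy, dx] * win
            gy += Ky[dy, dx] * win
    return np.hypot(gx, gy)

img = np.zeros((10, 10)); img[:, 5:] = 1.0   # synthetic vertical edge
edges = sobel_edges(img)
```

The response peaks on either side of the step discontinuity, matching the text's characterization of edges as points of sharp intensity gradient.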
Tangent Based Approach  
The reliability of palm print identification systems is largely influenced by the precision attained in the ROI segmentation operation  
of palm images. The center region of the palm is segregated with the application of pre-processing procedures employing various  
algorithms. The aforementioned region is subsequently divided into several geometric shapes, including circles, half ellipses, and  
squares, in order to enable the extraction of pertinent features. The square area is a fundamental and frequently employed geometric  
form. The blurred image is obtained by applying a low pass filter (LPF) to the clipped image. The image exhibits a reduction in visual fidelity, leading to the attenuation of fine lines while prominent lines remain visible. The region of interest (ROI) of the palm image is computed by isolating and extracting a sub-image that encompasses significant and distinctive features.
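The low-pass filtering and square ROI extraction can be approximated as below. This is a deliberate simplification: a mean filter stands in for the LPF, and a centred square crop stands in for the tangent-aligned square, which in practice is positioned from the finger-valley key points:

```python
import numpy as np

def low_pass(img, radius=2):
    """Mean (low-pass) filter: fine lines attenuate, major lines remain."""
    pad = np.pad(img.astype(float), radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def central_square_roi(img, side):
    """Crop a centred square ROI (proxy for the tangent-aligned square)."""
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    h = side // 2
    return img[cy - h:cy + h, cx - h:cx + h]

palm = np.random.default_rng(1).random((128, 128))   # synthetic palm image
roi = central_square_roi(low_pass(palm), 64)
```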
Fingerprint Preprocessing  
This section provides an examination of two distinct preprocessing processes utilized in unimodal fingerprint recognition systems,  
as illustrated in Figure 2. The complete fingerprint system encompasses the following stages: the preparation of fingerprint images involves a series of sequential operations, namely segmentation, normalization, filtering, thinning, and the extraction and matching of Minutiae features.
Fig 2: Unimodal Fingerprint Recognition System  
The existence of non-optimal conditions introduces isolated regions and disconnections within fingerprint ridges as a result of noise and disturbances, hence impacting the extraction of minutiae details. To enhance the fidelity of the fingerprint image and minimize noise, it is important to apply pre-processing techniques that remove undesirable regions.
Preprocessing often encompasses a sequence of operations, including region masking, binarization, thinning, segmentation, filtering, ridge frequency analysis, normalization, and image orientation correction [52]. The fingerprint image in this work has undergone several preprocessing steps, namely segmentation based on morphological processing [53], normalization, orientation estimation, filtering, ridge frequency analysis, region masking, and thinning.
Normalization  
The fingerprint image acquisition technique may result in variations in gray level values along the ridges and valleys of the resulting  
image. This scenario may arise if the finger establishes an inaccurate connection with the sensor. Therefore, it is imperative to  
perform a normalization phase in order to mitigate the impact of these differences by regulating the range of gray level values. The  
present methodology employs a predetermined mean and variance in order to standardize a finger image. The intensity values of  
the finger image provided and the normalized image at pixel (p, q) can be denoted as Im(p, q) and Nm(p, q), respectively. The normalized image is obtained using Equation (1):

Nm(p, q) = M0 + sqrt( V0 * (Im(p, q) - M)^2 / V ),  if Im(p, q) > M
Nm(p, q) = M0 - sqrt( V0 * (Im(p, q) - M)^2 / V ),  otherwise        (1)

where M and V denote the estimated mean and variance of Im(p, q), and M0 and V0 denote the target (desired) mean and variance.
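The normalization equation translates directly into a pixel-wise NumPy operation; the target values M0 and V0 below are arbitrary illustrative choices:

```python
import numpy as np

def normalize_fingerprint(Im, M0=100.0, V0=100.0):
    """Pixel-wise normalization to a target mean M0 and variance V0."""
    Im = Im.astype(float)
    M, V = Im.mean(), Im.var()                  # estimated mean and variance
    dev = np.sqrt(V0 * (Im - M) ** 2 / V)
    return np.where(Im > M, M0 + dev, M0 - dev) # the two cases of Eq. (1)

rng = np.random.default_rng(0)
Im = rng.integers(0, 256, size=(32, 32))        # synthetic grey-level image
Nm = normalize_fingerprint(Im)
```

Because the two branches together amount to a linear rescaling about the mean, the output has mean M0 and variance V0 exactly.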
Segmentation  
The fingerprint image commonly encompasses the region of interest (ROI), denoted as the foreground, which is comprised of 
ridges, bifurcations, and valleys. In addition, it may contain background regions as well as corrupted or noisy portions of the 
pattern. To address the challenge of extracting minutiae from a noisy environment, it is necessary to isolate the 
region of interest (ROI) of the fingerprint from the surrounding background. Segmentation is the process by which the 
ROI is extracted from an image. There exist various 
strategies that can be employed to execute this procedure. These techniques encompass segmentation methods that rely on statistical  
characteristics and orientation field, segmentation methods that rely on ridge orientation and frequency features, as well as a strategy  
that involves extracting regions of interest (ROI) from fingerprints utilizing a neural network-based methodology. The present  
investigation utilized a morphological processing segmentation technique [53] to extract the region of interest (ROI) from a  
fingerprint.  
Ridge identification should be carried out following the extraction of the region of interest (ROI). The fingerprint image 
first undergoes a normalization step. The straightforward identification of minutiae in an optimal fingerprint 
image is facilitated by the regular orientations of ridgelines, bifurcations, and valley lines present within it. However, within the  
practical setting, there are several factors that provide difficulties in the extraction of intricate particulars. The aforementioned  
concerns encompass cutaneous injuries, sensor noise, suboptimal image resolution, skin moisture, and inadequate contact with  
finger sensors. The normalization of the image is a crucial step in reducing the extraction of erroneous minutiae features and  
minimizing the risk of losing important minutiae points. This technique ultimately enhances the overall clarity of the image. The  
mean and standard deviation are employed in the procedure of producing the normalized image. In the present work, the method of 
one-dimensional masking is utilized to detect ridges based on ridge orientation. 
Thinning  
Thinning is a procedure that entails the removal of superfluous edge pixels while preserving the interconnectedness of the original  
ridge patterns. This technique successfully decreases the width of ridgelines to a single pixel. The main goal of this morphological  
process is to attain skeletonization. The application of the thinning technique leads to the generation of a thinned image, sometimes  
known as a skeleton image, that serves as a simplified depiction of a given pattern [2]. The preprocessing module utilizes the  
thinning method to enhance the analysis and identification of advanced features in several applications, including optical character  
recognition, fingerprint analysis, and picture understanding. Thinning has been achieved through the application of two distinct  
algorithms, specifically the Zhang-Suen thinning algorithm and the Stentiford thinning algorithm. The Zhang-Suen thinning 
algorithm, as outlined in the cited reference [54], is a very efficient and parallel thinning methodology that comprises two sub-  
iterations. The Stentiford thinning algorithm [55] is an iterative technique utilized for the process of skeletonization, which is based  
on the utilization of a mask.  
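As an illustration, a minimal sketch of the two-sub-iteration Zhang-Suen procedure [54] on a binary ridge array (1 = ridge, 0 = background) might look as follows; this is a generic textbook implementation, not the paper's code:

```python
import numpy as np

def zhang_suen_thinning(img):
    """Zhang-Suen parallel thinning: iteratively delete boundary pixels
    in two sub-iterations until the pattern is one pixel wide."""
    img = img.copy().astype(np.uint8)

    def neighbours(y, x):
        # P2..P9, clockwise from the pixel directly above (y-1, x)
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):               # the two sub-iterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    P = neighbours(y, x)
                    B = sum(P)            # number of ridge neighbours
                    # A: 0 -> 1 transitions in the circular sequence P2..P9,P2
                    A = sum(P[i] == 0 and P[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0
                    else:
                        cond = P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:        # parallel deletion after the scan
                img[y, x] = 0
                changed = True
    return img
```

Thinning a thick bar reduces it toward a one-pixel-wide skeleton while preserving connectivity.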
Iris and Palmprint Feature Extraction  
Different feature extraction algorithms can be utilized to extract the information from normalized iris [49]. The categorization of  
methodologies predominantly encompasses Phase and Texture-Based techniques, Zero-crossing Representation, Key point  
Descriptors, and Intensity Variation Analysis [49]. The efficacy of employing Gabor filter-based feature extraction for iris and  
palmprint recognition has been substantiated by previous research works [19, 20]. Furthermore, within the domain of iris feature  
extraction, the research conducted by the authors in study [23] effectively utilized Haar Wavelet and attained remarkable levels of  
accuracy while simultaneously reducing computing complexity. The adoption of Phase and Texture-based techniques for iris feature  
extraction has been driven by the benefits provided by Gabor filters and Haar Wavelets. Three different methods, specifically the  
Haar wavelet [51], 2D-Gabor filter [50], and 2D-LogGabor filter [50], have been utilized for this objective.  
A. Haar Wavelets  
The Haar wavelet technique has been utilized to derive phase information from the iris [22]. The iris feature pattern was subjected 
to a five-level decomposition, from which the approximation coefficients were retained, resulting in the final feature vector [23]. 
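A minimal sketch of such an approximation-only Haar decomposition (keeping the low-low band at each level) could look as follows; the exact vector length in the paper depends on its boundary handling, which is not specified here:

```python
import numpy as np

def haar_approx(img, levels=5):
    """Multi-level Haar approximation: keep only the approximation (LL)
    coefficients at each level, as in [23]. A sketch, not the paper's code."""
    a = img.astype(float)
    for _ in range(levels):
        if a.shape[0] < 2 or a.shape[1] < 2:
            break
        # trim to even size, then take the orthonormal Haar LL band:
        # sum of each 2x2 block divided by 2
        a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]
        a = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 2.0
    return a.ravel()   # row feature vector
```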
B. 2D - Gabor Filter  
The utilization of Gabor-based feature extraction has been extensively documented in many pattern recognition applications, as  
seen in the existing literature. The Gabor function has been found to be effective in mitigating the instability of visual 
contrast and brightness, since it localizes spatial frequency with a high degree of accuracy [50]. 
The Gabor filter bank has been employed in the extraction of iris and palmprint textures owing to its beneficial characteristics [30,  
31].  
g(x, y; λ, θ, φ, σ, γ) = exp( −(a^2 + γ^2 b^2) / (2σ^2) ) × cos( 2π a / λ + φ )        (2) 
Where, 
a = x cos θ + y sin θ 
b = −x sin θ + y cos θ 
The symbol θ is used to represent the direction of the normal to parallel stripes in a Gabor function. The symbol φ denotes the phase  
offset, while λ specifies the wavelength of the sinusoidal factor. Additionally, σ represents the standard deviation of the Gaussian  
envelope, and γ represents the spatial aspect ratio [50].  
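Equation (2) can be realized directly as a spatial-domain kernel; the window size below is an assumed parameter, and the parameter names mirror the symbols in the text:

```python
import numpy as np

def gabor_kernel(ksize, lam, theta, phi, sigma, gamma):
    """2D Gabor kernel following Eq. (2): Gaussian envelope times a
    cosine carrier. ksize is an assumed odd window size."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    a = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    b = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(a**2 + (gamma**2) * b**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * a / lam + phi)
    return envelope * carrier
```

A filter bank is then obtained by varying θ (and optionally λ) and convolving each kernel with the iris or palmprint image.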
C. 2D-Log Gabor Filter  
The Log-Gabor filter has been extensively researched and employed in the field of texture-based feature extraction. This is mostly  
attributed to its notable characteristics of time/space and frequency invariance, together with its symmetry on the log frequency  
axis [51]. The application of the Log Gabor filter is performed using the mathematical expression as described in the reference [50].  
G(ρ, θ) = exp( −(ρ − ρk)^2 / (2σρ^2) − (θ − θpk)^2 / (2σθ^2) )        (3) 
The log-polar coordinates are denoted as (ρ, θ), with a and b indexing scale and orientation, respectively. The pair 
(ρk, θpk) corresponds to the frequency center of the filter, while σρ and σθ represent the radial and angular 
bandwidths. 
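In the frequency domain, Equation (3) can be sketched on a log-polar grid as follows; the center frequency f0, orientation theta0, and bandwidth defaults are assumed values, not the paper's parameters:

```python
import numpy as np

def log_gabor_2d(rows, cols, f0, theta0, sigma_rho=0.55, sigma_theta=0.4):
    """Frequency-domain 2D Log-Gabor filter following Eq. (3):
    a Gaussian on the log of the radial frequency times a Gaussian
    in orientation. A sketch with assumed default bandwidths."""
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)
    radius = np.sqrt(U**2 + V**2)
    radius[0, 0] = 1.0                    # avoid log(0) at the DC term
    theta = np.arctan2(V, U)
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * sigma_rho ** 2))
    # wrap the angular distance into [-pi, pi]
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    G = radial * angular
    G[0, 0] = 0.0                         # Log-Gabor has no DC component
    return G
```

Texture is then extracted by multiplying this filter with the FFT of the normalized iris or palmprint image and inverting the transform.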
Minutiae Extraction from Fingerprint  
The precision of minutiae feature extraction plays a crucial role in determining the reliability of fingerprint recognition. The CN  
methodology is extensively utilized for the extraction of minutiae points from fingerprint data. In Rutovitz's study [56], the crossing  
number of a pixel was defined.  
CN = 0.5 Σ (i = 1 to 8) | Pi − Pi+1 | 
Where Pi is the neighborhood binary pixel value of P with Pi = (0 or 1) and P1 = P9.  
Based on the definition presented in reference [56], the CN value is utilized to identify minutiae within the 
thinned image. In a 3×3 window of the thinned image, a pixel whose central value is 1 is classified as a bifurcation 
point if exactly three of its neighboring pixels have the value 1 (CN = 3), and as a ridge ending if exactly one 
neighboring pixel has the value 1 (CN = 1). The validity of the detected minutiae depends on the quality of the 
preprocessed fingerprint; a post-processing phase is therefore implemented to eliminate minutiae that are deemed incorrect. 
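The crossing-number test described above can be sketched as follows; this is a generic implementation of Rutovitz's definition [56], with function names of our own choosing:

```python
import numpy as np

def crossing_number(skel, y, x):
    """Crossing number of pixel (y, x) in a thinned binary image:
    CN = 0.5 * sum |P_i - P_{i+1}| over the 8 neighbours, with P_9 = P_1."""
    # neighbours in circular order around (y, x)
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    P = [int(skel[y + dy, x + dx]) for dy, dx in offs]
    return 0.5 * sum(abs(P[i] - P[(i + 1) % 8]) for i in range(8))

def classify_minutia(skel, y, x):
    """CN = 1 marks a ridge ending, CN = 3 a bifurcation."""
    if skel[y, x] != 1:
        return None
    cn = crossing_number(skel, y, x)
    return {1: "ending", 3: "bifurcation"}.get(cn)
```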
Post-processing  
The retrieved minutiae features obtained from the preprocessed binary fingerprint image include both authentic and false minutiae  
points. Post-processing is employed to obtain precise and intricate details. This methodology investigates the immediate  
surrounding area of a specified location and validates the finer details inside the reduced image. The Euclidean distance method is  
employed to measure the distance between the termination and bifurcation locations. The incorporation of inaccurate and  
inconsequential particulars will lead to a rise in the rates of False Acceptance (FAR) and False Rejection (FRR) in the context of  
fingerprint matching. The algorithm is utilized for the aim of identifying bifurcation points and ridge ends in order to eliminate  
these erroneous minutiae points.  
Integration of Feature Vectors  
This section presents the integration of iris, palmprint, and fingerprint features. The Haar wavelet decomposition of the 20×240 iris 
image produces a 1×114 Haar feature vector. The Gabor features (2D-Log Gabor or 2D-Gabor) of the iris image comprise 12 different 
images G of size 20×240 each; by horizontal and vertical down-sampling they are combined into a single image GF of size 20×240, 
which is then converted to a vector of size 1×4800. 
The texture analysis of the 100×100 palmprint ROI image, obtained by applying the 2D-Gabor filter and the 2D-Log Gabor filter, produces 12 
different images of size 100×100 each; these are combined into a single image of size 100×100 using horizontal and vertical 
down-sampling, and then converted into a row feature vector of size 1×10000. 
The fingerprint minutiae feature vector size varies with the fingerprint database, from 52 to 112. The palmprint, iris, and fingerprint feature 
spaces are normalized to bring them into the same domain. These feature spaces are then simply concatenated to form the 
fused feature space, whose size varies from 14852 to 14912 depending on the fingerprint and iris features. 
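A sketch of this normalize-and-concatenate fusion follows; min-max normalization is one common choice, as the text does not specify the normalization formula here:

```python
import numpy as np

def fuse_features(iris_vec, palm_vec, finger_vec):
    """Feature-level fusion sketch: min-max normalize each modality to
    [0, 1] and concatenate into one fused row vector."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)
    return np.concatenate([minmax(iris_vec), minmax(palm_vec),
                           minmax(finger_vec)])
```

With a 1×4800 Gabor iris vector, a 1×10000 palmprint vector, and a 52-element minutiae vector, the fused vector has the 14852 dimensions quoted above.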
Here, 12 multi-modal systems based on fingerprint, iris, and palmprint are designed. Each integrates one palmprint feature type, one 
iris feature type, and minutiae features extracted from a thinned fingerprint image: 
MM_Finger_Iris_Palm_sys1: Log-Gabor palmprint features, Log-Gabor iris features, and minutiae from Zhang-Suen thinning. 
MM_Finger_Iris_Palm_sys2: Log-Gabor palmprint features, Gabor iris features, and minutiae from Zhang-Suen thinning. 
MM_Finger_Iris_Palm_sys3: Log-Gabor palmprint features, Haar iris features, and minutiae from Zhang-Suen thinning. 
MM_Finger_Iris_Palm_sys4: Gabor palmprint features, Log-Gabor iris features, and minutiae from Zhang-Suen thinning. 
MM_Finger_Iris_Palm_sys5: Gabor palmprint features, Gabor iris features, and minutiae from Zhang-Suen thinning. 
MM_Finger_Iris_Palm_sys6: Gabor palmprint features, Haar iris features, and minutiae from Zhang-Suen thinning. 
MM_Finger_Iris_Palm_sys7: Log-Gabor palmprint features, Log-Gabor iris features, and minutiae from Stentiford thinning. 
MM_Finger_Iris_Palm_sys8: Log-Gabor palmprint features, Gabor iris features, and minutiae from Stentiford thinning. 
MM_Finger_Iris_Palm_sys9: Log-Gabor palmprint features, Haar iris features, and minutiae from Stentiford thinning. 
MM_Finger_Iris_Palm_sys10: Gabor palmprint features, Log-Gabor iris features, and minutiae from Stentiford thinning. 
MM_Finger_Iris_Palm_sys11: Gabor palmprint features, Gabor iris features, and minutiae from Stentiford thinning. 
MM_Finger_Iris_Palm_sys12: Gabor palmprint features, Haar iris features, and minutiae from Stentiford thinning. 
Features Space Reduction  
The existing body of literature demonstrates that the process of feature selection or reduction has been found to enhance the accuracy  
of classifier prediction, improve scalability, and enhance generalization capabilities. The process of knowledge discovery heavily  
relies on the technique of feature reduction, which serves to decrease the computational complexity, storage requirements, and  
overall cost associated with the analysis. This study utilized both PCA for feature extraction and a feature selection method based  
on CSA to reduce the dimensionality of the fused feature space.  
Feature Space Reduction using PCA  
The technique of PCA is frequently employed in the field of image processing for the purpose of reducing dimensionality and  
projecting the data onto a lower-dimensional subspace. This methodology has effectively addressed issues pertaining to image 
compression and recognition. PCA is commonly employed in the field of biometrics for the purpose of feature extraction from 
facial images [57], palmprint data [58], and footprint data [59]. Prior to the classification of fingerprints, faces, and signatures,  
researchers employed a hybrid methodology that involved the integration of PCA and Linear Discriminant Analysis (LDA) to  
effectively reduce the dimensionality of these biometric features. As stated by the source referenced as [61], PCA is employed to  
decrease the dimensionality of vectors, hence enhancing the process of image identification. PCA is a widely employed technique  
for the identification and characterization of patterns within datasets that possess many dimensions [62]. PCA was employed to  
reduce dimensionality in three multi-biometric systems, which integrate eye, palm, and finger prints, following feature fusion [50].  
PCA is a commonly employed technique for linear data reduction. The process involves the transformation of data into a novel  
dimensional space, wherein the dimensions with the most variability encapsulate the data. PCA is a statistical technique utilized to  
transform visual data into its principal components. The orthogonal principal components of the picture data are arranged in  
descending order based on their variance.  
PCA is utilized to assess the variability of feature vectors extracted from iris, palmprint, and fingerprint data across different 
orientations [50]. Let T represent the training dataset consisting of p iris, palmprint, and fingerprint templates. One-dimensional 
templates can be represented as matrices with dimensions of 1 x q. The dataset T, which has dimensions p×q, undergoes  
dimensionality reduction through the utilization of the PCA algorithm. The dimensions of the new dataset T' are p×k, where k ≤ q.  
The approach uses the eigen() function to solve equation 4 and determine the eigenvalues and eigenvectors. 
[cov − λI] e = 0        (4) 
Here cov is the covariance matrix, I the identity matrix, λ an eigenvalue, and e an eigenvector. The eigenvectors 
(e1, e2, e3, ..., eq) are sorted by the Sort() function in decreasing order of their associated 
eigenvalues λ1, λ2, ..., λq. 
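In code, the PCA projection of the p×q dataset T onto a p×k dataset T' can be sketched with NumPy; eigh stands in for the eigen()/Sort() pair, since it returns eigenvalues of a symmetric matrix in ascending order:

```python
import numpy as np

def pca_reduce(T, k):
    """Project the p x q training set T onto the k eigenvectors of its
    covariance matrix with the largest eigenvalues (Eq. 4)."""
    Tc = T - T.mean(axis=0)                    # center the data
    cov = np.cov(Tc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: cov is symmetric
    order = np.argsort(eigvals)[::-1]          # decreasing eigenvalue order
    return Tc @ eigvecs[:, order[:k]]          # p x k reduced dataset
```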
Cuckoo Search Algorithm for Feature Selection  
In standard CS, the solutions are updated in the search space towards continuous-valued positions. In contrast, in the BCS for feature 
selection [63], the search space is modelled as an n-dimensional boolean lattice, in which the solutions are updated across the corners 
of a hypercube. In addition, since the problem is whether or not to select a given feature, a binary solution vector is employed, 
where 1 indicates that a feature will be selected to compose the new dataset and 0 otherwise. To build this binary vector, 
Equations 5 and 6 are employed, which restrict the new solutions to binary values in the boolean 
lattice: 
S(x_ij(t)) = 1 / (1 + e^(−x_ij(t)))        (5) 

x_ij(t+1) = 1,  if S(x_ij(t)) > σ 
x_ij(t+1) = 0,  otherwise        (6) 
In which σ ~ U(0, 1) and x_ij(t) denotes the new egg's value at time step t. Algorithm 1 presents the proposed BCS algorithm for 
feature selection. 
The algorithm starts by initializing each nest with a vector of binary values. Each nest is then evaluated in order to update its 
fitness value. The nests with the worst solutions are replaced with probability p by new nests generated randomly, as described in 
[64]. Finally, the binary vector of each nest is updated, restricting the solutions generated via Lévy flights through the sigmoid function. 
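The sigmoid binarization of Equations 5 and 6 can be sketched as follows:

```python
import numpy as np

def binarize(x):
    """Map a real-valued CS solution onto the boolean lattice via the
    sigmoid of Eq. 5 and the threshold rule of Eq. 6 (sigma ~ U(0, 1))."""
    s = 1.0 / (1.0 + np.exp(-x))                        # Eq. 5
    sigma = np.random.uniform(0.0, 1.0, size=x.shape)
    return (s > sigma).astype(int)                      # Eq. 6
```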
The cuckoo's host nest represents a potential solution array xi. Hence xi is encoded as a binary array of dimension 1×d, 
where d denotes the total number of features in the Feature Selection (FS) problem. The n cuckoos work in parallel, 
evolving these n nests by making a series of global and local flights, finding and evaluating new host nests until finally finding the 
fittest host-nest to lay their eggs. The fittest host-nest found will represent the optimal FS solution. Various steps involved in the  
CSA are as follows:  
Initialization of Population  
Each host nest xi is initialized as follows  
x_ij = LL_j + U(0, 1) × (UL_j − LL_j)        (7) 
where x_ij is the jth dimension of the ith host nest, and LL_j and UL_j are the lower and upper bounds for the jth feature, 
defining the range of random weight factors for that feature. U(0, 1) is a uniformly distributed random number in the 
range [0, 1]. Since the host nests are to be initialized as binary arrays, x_ij is subsequently binarized using equations 5 and 6. 
Computation of Fitness  
The cuckoos compute the fitness of their initial host nests. The fitness of each host nest xi is computed using the objective function 
defined in equation 8 below. A host nest whose selected feature vector yields higher fitness is fitter than a host nest with 
lower fitness. 
The fitness function drives the CSA to pick a subset of features and plays a crucial role in generating able nests of the new generation 
from the current generation. This fitness value evaluates the performance of the feature subset. The fitness of each nest is calculated 
based on the C4.5 machine learning algorithm: the selected features in the given nest are identified, a new biometric dataset is 
produced from the given biometric dataset based on those features, and the classification or recognition accuracy α is obtained by 
applying the C4.5 algorithm to the new dataset. The fitness function used to assess the current 
nest is defined as: 
fit = α + β + γ        (8) 
Where α is the accuracy of biometric recognition based on the selected features, β is the impact of the selected features on biometric recognition, and 
γ gives the effect of the non-selected features on recognition. 
There are three components to the fitness function fit. The first component, α, quantifies the degree to which the selected 
features can accurately identify the biometric image. The second component weighs the subset of selected features relative to all 
features in the collection of biometric images. This may be assessed as 
β = n_selected / (n × N_DB)        (9) 
Here, nselected is the cardinality of selected features, n represents the total number of features, NDB denotes the total number of  
biometric images in the given dataset.  
The third part reflects the impact of the non-selected features, i.e., the weight factor of the features not present in the nest, on 
recognition. This is calculated as 
γ = n_non-selected / (n × N)        (10) 
Where n_non-selected is the cardinality of the non-selected features of the nest, n denotes the total number of features, and N represents 
the number of trained biometric images. The three parts of the function seek optimal features in the biometric feature space 
with a complete description of each biometric, and the calculations are performed on training 
data only, without the use of test data. 
Lévy Global Flight  
The cuckoos then start their search for even fitter host nests. Each cuckoo i makes a flight from its initial nest 
xi to a new host nest xi_new using the discrete Lévy/global flight equation shown below. 
x_i_new = x_i + α ⊕ Lévy(λ)        (11) 
New solutions are generated randomly using Lévy flight for successive iterations. If the solution generation is made adaptive rather 
than random, the search process of the algorithm can be enhanced. 
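The Lévy-distributed step of Equation 11 is commonly drawn with Mantegna's algorithm; a sketch follows, where the exponent and scale factor are assumed defaults rather than the paper's settings:

```python
import math
import numpy as np

def levy_step(dim, lam=1.5):
    """Levy-distributed step vector drawn via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2) /
               (math.gamma((1 + lam) / 2) * lam *
                2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def global_flight(x, alpha=0.01):
    """Global flight of Eq. 11 followed by the binarization of Eqs. 5-6,
    so the new solution stays on the boolean lattice."""
    step = alpha * levy_step(x.size)
    s = 1.0 / (1.0 + np.exp(-(x + step)))       # sigmoid of Eq. 5
    return (s > np.random.uniform(size=x.size)).astype(int)
```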
Discrete Local Walk  
After the discrete global walk, some of the cuckoos make a further local flight to see if they can find an even better host nest in the 
neighbourhood. This local walk around the best solution found so far is carried out using the Lévy flight of equation 12. 
x_i(t+1) = x_i(t) + α ε ⊗ L(λ) ⊗ (x_i(t) − x_best)        (12) 
Where α is a constant scale factor, ε is a vector of random numbers drawn from the standard normal distribution, L(λ) is a vector of 
values generated by Lévy flight, the operator ⊗ denotes element-wise multiplication, and 
x_best is the best solution obtained from the beginning up to the current iteration [65]. 
The cuckoos which made the local flight again evaluate the fitness of their new host nests to see if the fitness has improved. If there 
is no improvement in the fitness, they revert to the previous nest obtained using equation 11. 
In this way the n cuckoos work in parallel, making a series of Lévy/global and local flights and discovering new host nests iteratively 
until they find their fittest host nests. 
II. Discussion of Results  
This section describes the experimental findings of a multi-modal recognition system, wherein the outcomes are compared between  
the deployment of reduction methodologies and the absence thereof. The primary focus of the study revolves around three major  
dimensions: the rate of recognition, the computational time needed for dataset processing, and the decrease in feature space resulting  
from the fusion of features at the feature level. Recognition is considered to occur when the false acceptance rate (FAR) reaches a 
level of 0.01%. To determine the recognition rate, two matching systems utilizing Euclidean distance were employed, in conjunction 
with four classification algorithms: SMO, C4.5, NB, and RF. 
The experiment was carried out utilizing two separate databases. The first iris database is CASIA Version 1.0 [66], which 
consists of a collection of 756 iris images acquired from 108 unique eyes. Seven images were obtained for each eye over two 
sessions: three iris samples in the first session and four in the second. Six samples were chosen from each eye in this database. 
The second is the iris image database version 1.0 of the Indian Institute of Technology Delhi 
(IITD) [67]. This collection consists of a total of 2240 images obtained from 224 unique individuals, with 10 eye 
samples collected from each participant: the first five samples from the left eye and the subsequent five from the right eye. 
Within the framework of our experimental design, the left eye and right eye of a single participant are treated as separate 
entities owing to the inherent dissimilarities between the two eyes of an individual. A total of 448 unique subjects were thus 
obtained, with three samples used for each. 
The research was carried out utilizing the CASIA and IIT Delhi palmprint databases, which encompassed a total of 100 subjects. 
A set of 5 samples was gathered for each subject, resulting in a cumulative total of 500 samples. The texture features extracted from 
palmprints using the 2D-Gabor and 2D-Log Gabor filters are not directly comparable with the minutiae features found 
in fingerprints. The texture of a region of interest (ROI) palmprint image is analyzed using a 2D-Gabor filter 
and a 2D-Log Gabor filter, yielding twelve unique images, each of dimensions 
100×100. To combine the texture information, horizontal and vertical downsampling techniques are utilized, producing a 
final single image of dimensions 100×100. This is then converted into a one-dimensional feature vector of 
dimensionality 10,000. 
Two different fingerprint databases are used in the experiments. The first is the CASIA Version 1.0 fingerprint image database, which 
consists of left-hand and right-hand fingerprint images of 100 individuals; for each hand, 4 samples are collected from each of 4 
fingers. Since each finger of the same person differs from the others, 2 fingers from the left hand 
and 2 fingers from the right hand of each individual are considered for the experiments, giving in total 400 different subjects with 4 
samples per subject. From the FVC 2004 DB1_B fingerprint database, the fingerprint images of 10 individuals 
are selected, with 6 samples chosen per finger. 
Each virtual entity within our multi-modal biometric databases is characterized by a unique biometric trait derived from one 
database, combined with another trait acquired from a separate database. In constructing a multimodal database 
utilizing iris, palmprint, and fingerprint data, each virtual human entity was generated by selecting a single iris sample 
from the iris CASIA database, one palmprint sample from the palmprint CASIA database, and one fingerprint sample from the 
fingerprint CASIA database. A supplementary database was created to support the multimodal systems: simulated humans in 
this database combine iris data sourced from the iris IITD DB, palmprint data acquired 
from the palmprint IITD DB, and fingerprint data collected from the FVC DB. This methodology produces 
datasets that encompass a wide range of biometric information modes. 
The experiments were carried out with a personal computer that was outfitted with a 1.8 GHz i7 processor, 16 GB of RAM, and  
operated on the Windows 10 operating system. The effectiveness of two reduction methodologies, namely PCA and CSA, is  
assessed on multi-modal systems in order to ascertain the optimal approach. The efficacy of these two approaches can be ascertained  
based on the operational capabilities of the proposed systems. The rates of true positives and false positives are computed using a  
Euclidean distance metric and a measure based on a supervised algorithm. The suggested systems utilize four supervised algorithms,  
including the C4.5 decision tree, Random Forest, SMO, and Naive Bayes.  
Result Analysis  
This section presents the results of our examination of multimodal systems that incorporate iris, palmprint, and fingerprint biometric data, 
both with and without the implementation of reduction strategies. The calculations under consideration primarily relate to the 
recognition rate, the time taken to process the datasets, and the decrease in the number of features resulting from 
the combination of features at the feature level. 
The current work aims to demonstrate the results of matching multimodal systems by employing Euclidean distance across various  
reduction procedures. Table 2 displays the recognition rate attained by six systems on two datasets, utilizing data reduction  
techniques. Based on the specified parameters, a false acceptance rate (FAR) of 0.01% is established for the recognition rate. PCA  
exhibits a higher processing speed compared to CSA, albeit with a slightly reduced identification rate across all six systems. In  
terms of the rate of recognition, CSA demonstrated superior performance in comparison to PCA. The results suggest that CSA 
exhibits enhanced performance when applied to datasets of considerable scale. 
Table 1 Number of Features Selected in Multi-Modal System Based on Fingerprint, Iris, and Palmprint for Various DB’s  
Table 1 presents the number of reduced features in the eigen space (PCA) and under CS for the fingerprint, iris, and palmprint-based multi- 
modal systems. Table 2 presents the PCA and CS performance under the Euclidean distance measure. Achieving a high 
identification rate and effectively reducing the feature space are crucial components of any system. The results presented in Tables  
2 and 1 demonstrate that PCA exhibits superior performance in reducing the feature space compared to CS. However, it is important  
to note that PCA does not yield an improvement in recognition accuracy, whereas CS produces superior results in this regard.  
The findings for twelve multi-modal systems utilizing iris, palmprint and fingerprint are displayed in Table 3. In the context of 
multi-modal systems, it was shown that the SMO and C4.5 classifiers exhibited significantly higher levels of recognition accuracy  
when compared to the NB and RF classifiers. Among the various classifiers examined, the NB classifier has demonstrated superior  
performance compared to distance measures. However, it exhibits subpar performance among supervised classifiers due to its  
limitations in effectively handling continuous data. The SMO and C4.5 classifiers demonstrated the greatest accuracy rates of 96.5%  
and 96.3%, respectively, among all multi-modal datasets of iris, palmprint and fingerprint. These results were superior to those  
achieved by the other classifiers utilized in the proposed CS algorithm.  
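The NB weakness mentioned above stems from its per-feature Gaussian independence assumption, which fused multimodal feature vectors rarely satisfy. A minimal Gaussian naive Bayes in NumPy (the class name and toy data are illustrative, not the paper's experimental setup) makes that assumption explicit:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes for continuous features.

    Each feature is modelled as an independent per-class Gaussian;
    correlated fused biometric features violate this assumption,
    which is one reason NB trails SMO and C4.5 here."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # Log-likelihood under a diagonal Gaussian per class, plus log prior.
        ll = -0.5 * (((X[:, None, :] - self.mu_) ** 2) / self.var_
                     + np.log(2 * np.pi * self.var_)).sum(axis=2)
        return self.classes_[np.argmax(ll + np.log(self.prior_), axis=1)]

# Two well-separated synthetic classes of 4-dimensional feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
model = GaussianNB().fit(X, y)
print((model.predict(X) == y).mean())  # high accuracy on separable data
```

SMO (an SVM training procedure) and C4.5 (a decision tree inducer) make no such independence assumption, which is consistent with their higher accuracies in Table 3.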
Table 2 Recognition Accuracy using Euclidean Distance Measure in Multi-Modal Systems Based on Fingerprint, Iris, and Palmprint for Various DBs  
The MM_Palm_Finger_Iris_sys1 system achieved an accuracy of 96.9% using the C4.5 algorithm. In the  
MM_Palm_Finger_Iris_sys2 system, the SMO algorithm achieved a recognition accuracy of 96.3%. Similarly, the  
MM_Palm_Finger_Iris_sys3 system achieved an accuracy of 96.8% using the C4.5 algorithm. The MM_Palm_Finger_Iris_sys4  
system attained a recognition rate of 97.3% using the SMO algorithm. In the MM_Palm_Finger_Iris_sys5 system, the C4.5  
algorithm produced an accuracy of 96.9%. Lastly, the SMO algorithm achieved a recognition level of 96.8% in the  
MM_Palm_Finger_Iris_sys6 system.  
The MM_Palm_Finger_Iris_sys7 system achieved an accuracy of 96.8% using the SMO algorithm. In the  
MM_Palm_Finger_Iris_sys8 system, the SMO algorithm achieved a recognition accuracy of 96.4%. Similarly, the  
MM_Palm_Finger_Iris_sys9 system achieved an accuracy of 96.8% using the SMO algorithm. The MM_Palm_Finger_Iris_sys10  
system attained a recognition rate of 97.5% using the SMO algorithm. In the MM_Palm_Finger_Iris_sys11 system, the C4.5  
algorithm produced an accuracy of 96.8%. Lastly, the SMO algorithm achieved a recognition level of 97.2% in the  
MM_Palm_Finger_Iris_sys12 system.  
Analysis of Computation Time: The feature selection techniques, namely PCA and CSA, were employed in tests using identical databases and conducted under consistent conditions. Although CSA requires a longer training time than the other algorithms, it yields the shortest testing time. In biometric systems, training is typically conducted during the enrollment phase and performed offline, whereas testing is performed online; the duration of testing therefore carries greater weight than the duration of training. Because the proposed approach selects fewer features than the alternative algorithms, it consistently exhibits a shorter processing time for classifying a test biometric template as either genuine or imposter.
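The dependence of online testing time on the number of retained features can be sketched directly: scoring one probe against the enrolled gallery scales with feature count, so a smaller selected subset shortens the genuine/imposter decision. The gallery size and dimensionalities below are illustrative assumptions:

```python
import time
import numpy as np

def match_time(n_features, n_templates=2000, trials=10):
    """Average wall-clock time to score one probe against the gallery
    with Euclidean distance (the online testing step)."""
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(n_templates, n_features))
    probe = rng.normal(size=n_features)
    t0 = time.perf_counter()
    for _ in range(trials):
        np.linalg.norm(gallery - probe, axis=1).argmin()
    return (time.perf_counter() - t0) / trials

# Fewer selected features -> less work per online matching decision.
for d in (1024, 256, 32):
    print(f"{d:5d} features: {match_time(d):.6f} s per probe")
```

This is why a feature selector that yields a compact subset, even at some extra offline training cost, pays off at verification time.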
Table 3 Recognition Accuracy using Various Classifiers in Multi-Modal Systems Based on Iris, Palmprint, and Fingerprint for Various DBs  
III. Conclusion  
The present study employed the CS algorithm to reduce the dimensionality of the feature space following the integration of biometric features derived from multiple modalities. Principal component analysis (PCA) is capable of handling large datasets; however, it may fail to capture crucial information in every instance. CSA was therefore adopted, as it effectively addresses this issue by utilizing an exponential step function to broaden the exploration of the feature space. CSA outperforms PCA in reducing the feature space of the iris, palmprint, and fingerprint benchmark datasets, namely CASIA, IITD, and FVC.
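A compact sketch of cuckoo-search feature selection as applied above is given below; the Lévy-flight step size, nest count, abandonment fraction, and the toy fitness function are illustrative assumptions rather than the exact configuration used in the experiments:

```python
import math
import numpy as np

def levy(dim, beta, rng):
    """Mantegna's Levy-flight step: heavy-tailed jumps that broaden
    exploration of the search space."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_select(fitness, dim, n_nests=15, pa=0.25, iters=50, seed=0):
    """Binary cuckoo search: each nest encodes a 0/1 mask over the fused
    feature vector; `fitness(mask)` should reward recognition accuracy
    and penalise large masks. Returns the best mask found."""
    rng = np.random.default_rng(seed)
    nests = rng.random((n_nests, dim))                  # continuous positions
    masks = nests > 0.5
    scores = np.array([fitness(m) for m in masks], dtype=float)
    for _ in range(iters):
        for i in range(n_nests):
            step = np.clip(0.01 * levy(dim, 1.5, rng), -10, 10)
            pos = nests[i] + step
            cand = 1.0 / (1.0 + np.exp(-pos)) > rng.random(dim)  # binarise
            s = fitness(cand)
            j = rng.integers(n_nests)                   # compare with random nest
            if s > scores[j]:
                nests[j], masks[j], scores[j] = pos, cand, s
        worst = np.argsort(scores)[: int(pa * n_nests)]  # abandon worst nests
        nests[worst] = rng.random((len(worst), dim))
        masks[worst] = nests[worst] > 0.5
        scores[worst] = [fitness(m) for m in masks[worst]]
    return masks[np.argmax(scores)]

# Toy fitness: pretend only the first 5 of 20 fused features are informative.
fit = lambda m: m[:5].sum() - 0.2 * m[5:].sum()
best = cuckoo_select(fit, dim=20)
print(best[:5].sum(), best[5:].sum())
```

In the actual system the fitness would be the recognition accuracy of the classifier on the masked fused feature vector, with a penalty proportional to the number of selected features.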
CSA demonstrated strong performance across various scenarios, encompassing feature space reduction, the accuracy of distance-based identification, and the effectiveness of supervised classifiers. For the multi-modal systems of Section 3, CSA achieves superior recognition accuracy compared to PCA: with supervised classifiers, PCA yields an accuracy of 87.8%, whereas CSA achieves 96.3%. Better identification of distinctive features would further improve classification precision. The findings indicate that supervised algorithms achieve a higher degree of accuracy than Euclidean distance matching.