INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue IV, April 2025
A Combined Framework for Medical Image Classification and
Detection of Brain Abnormalities Utilizing K-Nearest Neighbors
and an Enhanced Convolutional Neural Network
Mahdi Koohi
STU (Tehran-Iran), Iran
DOI: https://doi.org/10.51583/IJLTEMAS.2025.140400075
Received: 29 April 2025; Accepted: 02 May 2025; Published: 15 May 2025
Abstract: The detection of abnormalities in medical images plays a pivotal role in early diagnosis and treatment. This paper
presents a hybrid approach that combines K-Nearest Neighbors (KNN) and deep learning techniques to improve medical image
classification and anomaly detection. The method applies KNN for classifying images such as MRI, CT, and X-ray scans,
focusing on abnormality detection in brain images. By integrating KNN classifiers with feature extraction methods, the approach
addresses challenges such as class imbalance and small datasets, resulting in improved detection accuracy. The effectiveness of
the proposed method is demonstrated on a medical image dataset, showing significant improvements in both classification and
anomaly detection tasks.
Keywords: Image Processing, Deep Learning, KNN, Medical Modalities, Abnormality
I. Introduction
Medical imaging plays a critical role in diagnosing and monitoring various diseases. Medical images from modalities such as
MRI, CT, X-rays, and ultrasound are commonly used to detect abnormalities, including tumors, lesions, and fractures. However,
accurately identifying these anomalies remains challenging, particularly in cases of imbalanced datasets or subtle features.
Traditional classifiers such as K-Nearest Neighbors (KNN) have been widely applied in medical image classification. Despite its
simplicity, KNN offers notable advantages due to its non-parametric nature and ability to manage complex, high-dimensional
data. Its performance degrades, however, when:
The data is imbalanced (i.e., some categories have very few samples compared to others).
Abnormalities are subtle or hard to detect.
Small sample sizes limit the model's learning capacity.
This paper proposes a hybrid method that combines K-Nearest Neighbors (KNN) with deep learning techniques, specifically
convolutional neural networks (CNNs), for feature extraction and classification. The approach targets abnormality detection
across medical imaging tasks, addressing challenges such as class imbalance, noisy data, and limited sample sizes.
Related Works
Numerous studies have investigated the application of K-Nearest Neighbors (KNN) for medical image classification. For
example, one study integrated KNN with feature extraction techniques to improve abnormality detection in chest X-ray images
[1], while another applied KNN classifiers for Alzheimer's disease diagnosis using MRI data, demonstrating its robustness in
small-sample and class-imbalanced scenarios [2]. Although KNN shows promise in classification tasks, its limitationssuch as
computational complexity and sensitivity to noisecan be mitigated by combining it with deep learning methods capable of
automatic feature extraction.
The KNN algorithm has been applied across various medical imaging tasks, including cancer detection and neurological disorder
diagnosis. Many studies combine KNN with other machine learning techniques, such as convolutional neural networks (CNNs) or
support vector machines (SVMs), to enhance classification accuracy, demonstrating the benefits of hybrid models in medical
image analysis. A review paper [4] discusses diverse applications of KNN in medical image classification and abnormality
detection, covering fields such as tumor detection, brain imaging, and cardiovascular conditions.
A study introduced a hybrid model combining KNN and support vector machines (SVM) for brain tumor detection in MRI
images. This approach improved detection accuracy by leveraging the strengths of both classifiers, outperforming standalone
methods [5]. Additionally, another study applied KNN to detect skin cancer in dermoscopic images, emphasizing the role of
texture-based feature extraction and demonstrating that KNN can achieve high detection accuracy for skin lesions [6].
The KNN algorithm has been applied for the early detection of lung cancer in CT scans by extracting features such as shape and
texture, demonstrating its effectiveness in classifying and detecting lung cancer at early stages [7]. A study applied KNN for
abnormality detection in colonoscopy images, identifying regions that may indicate the presence of polyps or other colorectal
conditions [8].
Additionally, a hybrid approach combining convolutional neural networks (CNNs) and KNN has been tested for tasks such as
tumor detection in MRI and CT scans. This hybrid method outperforms traditional approaches by leveraging both deep learning
and machine learning techniques [9]. Another hybrid model combines KNN and SVM for breast cancer detection in mammogram
images, achieving higher detection accuracy critical for early diagnosis [11].
Research on Alzheimer's disease detection using MRI scans proposes a framework for preprocessing MRI data and applying
KNN to classify different stages of the disease [12]. Enhancements to KNN for mammogram image classification have
incorporated advanced feature extraction techniques, improving accuracy in breast cancer detection.
An end-to-end learning strategy has been proposed that integrates KNN with feature extraction, improving performance on
small-sample and class-imbalanced medical datasets. This paper introduces a novel method combining deep learning and KNN for
medical image classification, leveraging the strengths of both techniques [13].
Moreover, a hybrid CNN model has been developed to extract meaningful features from wireless capsule endoscopy images,
achieving high classification accuracy in detecting gastrointestinal abnormalities. Another study applies deep CNNs for automatic
feature learning and classification of mammogram images, achieving high accuracy in detecting calcifications and masses.
Additionally, a method combining contrastive learning with radiomics features enhances abnormality classification and
localization in chest X-ray images through a knowledge feedback loop; other work focuses on improving breast cancer detection
from mammograms [15].
Lastly, a study investigates the application of KNN for skin cancer detection in dermoscopic images, demonstrating its promising
results for effective classification of skin lesions [16].
Fig. 1. Calcification localization results
II. Methodology
In this section, we discuss the methodology, which is divided into sequential steps, as outlined below:
Introduction to the Problem:
The primary goal of this work is to develop a hybrid approach for detecting and classifying abnormalities in medical images using
a combination of Convolutional Neural Networks (CNNs) and K-Nearest Neighbors (KNN). We leverage the powerful feature
extraction capabilities of deep learning models such as ResNet50, followed by KNN for classification. This approach combines the
strengths of deep learning, which excels at learning rich features from data, with the simplicity and interpretability of KNN for
improved classification performance.
In this research, we utilized a Brain MRI dataset to develop an innovative methodology for detecting and classifying
abnormalities by integrating the k-Nearest Neighbors (k-NN) algorithm with an advanced Convolutional Neural Network (CNN).
We began by loading and preprocessing the images, focusing on those labeled with varying levels of abnormality. The
implementation was carried out in Python, using libraries like scikit-learn, OpenCV, and NumPy. The dataset was then split into
training and testing subsets, with a 70-30 ratio. Finally, we trained the k-NN model for brain abnormality classification using
scikit-learn's implementation.
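To make this step concrete, the following is a minimal sketch of the split-and-train stage with scikit-learn; the array shapes and variable names are illustrative placeholders, not the paper's actual data:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 200 images flattened to 4096-dimensional vectors,
# with binary labels (0 = normal, 1 = abnormal).
X = np.random.rand(200, 4096)
y = np.random.randint(0, 2, size=200)

# 70-30 train/test split, stratified to preserve the class ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Train the k-NN classifier and report held-out accuracy.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("Test accuracy:", knn.score(X_test, y_test))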
Step-by-Step Breakdown of the Algorithm:
Flowchart 1 shows the different steps of our method; these steps are clarified in the following subsections.
Flowchart 1. Proposed method
Data Collection and Preprocessing:
Loading Data: The first step is to load the data from the specified directories. Typically, the images are organized into different
subdirectories (e.g., one for each class in classification tasks). For this study, we use the MRI Brain Dataset, a publicly available
medical image dataset focused on anomaly detection. All images are pre-processed to ensure consistency in resolution, noise
removal, and normalization. Additionally, we balance the class distribution by augmenting the minority class using techniques
such as rotation, flipping, and zooming. Data preprocessing and augmentation are essential for training deep learning models, as
they enhance the model's robustness and ability to generalize.
Data Augmentation: Using the ImageDataGenerator to augment the training dataset is crucial, as it artificially expands the
training set by applying transformations such as rotations, shifts, flips, and zooms. This process helps prevent overfitting and
enhances the model’s generalization capabilities.
For validation data: Only rescaling is applied because the validation set must remain unchanged in order to evaluate
performance on the original data.
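As an illustrative sketch of this augmentation setup with Keras' ImageDataGenerator (the directory paths, image size, and transformation ranges are assumptions):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training generator: rotations, shifts, flips, and zooms artificially
# expand the training set and reduce overfitting.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    zoom_range=0.2)

# Validation generator: rescaling only, so performance is measured
# on unaltered images.
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_gen = train_datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="binary")
val_gen = val_datagen.flow_from_directory(
    "data/val", target_size=(224, 224), batch_size=32, class_mode="binary")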
Feature Extraction:
We employ a convolutional neural network (CNN), such as ResNet or VGG, to extract features from raw medical images. These
features capture high-level representations, making them more suitable for K-nearest neighbor (KNN) classification. This process
significantly reduces the dimensionality of the dataset while preserving critical information for accurate classification.
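As an illustration, the snippet below sketches ResNet50-based feature extraction, assuming images have been resized to 224x224 RGB; the batch here is a random placeholder:

import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# include_top=False removes the ImageNet classifier head; pooling="avg"
# collapses the final feature maps into one 2048-dimensional vector
# per image, which is directly usable by KNN.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

images = np.random.rand(8, 224, 224, 3) * 255.0  # placeholder image batch
features = extractor.predict(preprocess_input(images))
print(features.shape)  # (8, 2048)

These fixed-length feature vectors, rather than raw pixels, are what the KNN classifier receives.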
Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNNs) have been crucial in advancing image recognition tasks, with influential architectures
such as VGGNet and ResNet introducing design principles that significantly enhance performance. VGGNet, developed by the
Visual Geometry Group at the University of Oxford, emphasizes depth by using small 3×3 convolutional filters. The architecture
consists of several stacked convolutional layers followed by ReLU activation functions, interspersed with 2×2 max-pooling layers
for spatial down-sampling. This design enables the network to capture intricate features by increasing depth while maintaining
manageable computational complexity. Notably, VGGNet achieved great success in the 2014 ImageNet Large Scale Visual
Recognition Challenge (ILSVRC), securing top positions in both localization and classification tasks [17].
ResNet addresses the challenges of training very deep networks, such as the degradation problem, where adding more layers can
increase training error. The core innovation of ResNet is the residual learning framework, which incorporates shortcut
connections that bypass one or more layers. These identity mappings allow the network to learn residual functions, enabling the
training of much deeper networks without performance degradation. ResNet models, with depths of up to 152 layers,
demonstrated superior performance on the ImageNet dataset and won the 2015 ILSVRC classification task [18].
K-Nearest Neighbors Classifier
The k-Nearest Neighbors (k-NN) algorithm is a fundamental non-parametric, supervised learning method used for both
classification and regression tasks in machine learning; its non-parametric nature means the data shapes the model itself rather
than being forced into a predefined form. It operates on the principle that data points with similar features tend to reside
in close proximity within the feature space. The algorithm classifies a new data point based on the majority class of its 'k' nearest
neighbors in the training dataset [19]. KNN is applied to the extracted features to classify medical images into various categories. In
our approach, we experiment with different K values to optimize classification performance. The KNN classifier is particularly well-
suited to handle anomalies in datasets with skewed class distributions, making it effective for detecting anomalies in medical images.
This algorithm has two phases:
Training Phase:
K-NN is an instance-based learning algorithm, meaning that it does not involve an explicit training phase. Instead, it stores the entire
training dataset, which is utilized during the prediction phase.
Prediction Phase:
The algorithm computes the distance between the query and all points in the training set. Common distance metrics include
Euclidean, Manhattan, and Minkowski distances.
The k-nearest neighbors (k-NN) algorithm assigns a query point to the most common class among its 'k' nearest neighbors for
classification tasks, or the predicted value is the average (or weighted average) of the values of the 'k' nearest neighbors for
regression tasks. Choosing the right value for 'k' is important; a small 'k' can be sensitive to noise, while a large 'k' may smooth out
class boundaries. Cross-validation techniques are often used to find the optimal value of 'k'. The effectiveness of k-NN depends on
the distance metric, which should match the nature of the data and problem domain. Since k-NN relies on distance calculations,
features with larger scales can disproportionately affect the results, so normalization or standardization of features is recommended.
While k-NN is easy to understand and implement and can be applied to both classification and regression tasks, it can be
computationally expensive, especially with large datasets, as it requires calculating the distance between the query and all training
samples. Storing the entire training dataset can also be memory-intensive. Additionally, irrelevant or redundant features can
negatively impact the performance of k-NN [20].
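A sketch of this tuning procedure, assuming a pre-extracted feature matrix, combines standardization with a cross-validated grid search over 'k' and the distance metric:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

X = np.random.rand(150, 64)            # placeholder feature matrix
y = np.random.randint(0, 2, size=150)  # placeholder labels

# Standardize features first so no single feature dominates the distances.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("knn", KNeighborsClassifier()),
])

# 5-fold cross-validation over candidate values of k and distance metrics.
grid = GridSearchCV(
    pipe,
    param_grid={
        "knn__n_neighbors": [3, 5, 7, 9, 11],
        "knn__metric": ["euclidean", "manhattan", "minkowski"],
    },
    cv=5)
grid.fit(X, y)
print("Best parameters:", grid.best_params_)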
Experiments and Results of the Proposed Hybrid Approach
The hybrid model combines KNN with deep neural networks trained to learn a latent representation of medical images (a latent
representation is analogous to how the brain converts an image into complex, abstract information, such as colors and shapes,
that it uses for recognition and decision-making). By incorporating deep learning for feature extraction, we enhance the
robustness of KNN in detecting complex patterns within medical imaging data. This hybrid approach is especially useful for
imbalanced datasets, where the KNN classifier alone may struggle to detect rare anomalies. Additionally, using pre-trained CNNs
for feature extraction enables the model to learn richer representations, improving its effectiveness for complex medical imaging
tasks. By leveraging CNNs for automatic feature extraction, this method addresses challenges like class imbalance and small
datasets, significantly improving the detection of abnormalities in medical images. The results show that our approach
outperforms standard methods in key evaluation metrics, including accuracy, F1 score, precision, recall, and AUC (Fig. 3
illustrates the results of our method). To further improve the performance of our image classification model, specifically
accuracy, precision, recall, F1 score, and ROC AUC, we propose implementing the following improvements:
Data Augmentation and Preprocessing: This includes augmenting training data and normalizing pixel values.
Model Architecture Enhancements: Focus on fine-tuning more layers and adding regularization techniques.
Training Strategy Adjustments: Implement learning rate scheduling and early stopping to halt training when validation
performance ceases to improve, thereby preventing overfitting (see the sketch after this list).
Evaluation Metric Optimization: This involves threshold tuning and confusion matrix analysis to better evaluate model
performance.
Ensemble Methods: Combine multiple models to improve overall predictive performance.
Hyperparameter Tuning: Optimize model parameters to enhance performance.
Post-Training Calibration: Conduct probability calibration to refine model outputs.
Cross-Validation: Use robust performance estimation through cross-validation to ensure the model's reliability.
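For the training-strategy item above, a minimal sketch using standard Keras callbacks could look as follows; the patience values and the commented fit call are assumptions:

from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

callbacks = [
    # Learning rate scheduling: halve the rate when validation loss stalls.
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),
    # Early stopping: end training once validation loss stops improving,
    # restoring the best weights seen so far.
    EarlyStopping(monitor="val_loss", patience=8, restore_best_weights=True),
]
# Hypothetical usage with the generators from the augmentation step:
# model.fit(train_gen, validation_data=val_gen, epochs=100, callbacks=callbacks)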
Fig. 3. Results of the proposed method
Comparison with Other Methods
We compare our hybrid approach with standard KNN classifiers, traditional CNNs, and other state-of-the-art methods for medical
image classification and abnormality detection. Tables 1 and 2 show that the hybrid KNN model outperforms the baseline models
across all metrics, including accuracy, precision, F1 score, and ROC-AUC, when detecting rare abnormalities.
Table 1: Comparison with Other Methods

| Criteria | Hybrid KNN + ResNet50 | Traditional CNN (e.g., VGG, ResNet) | SVM + CNN Features | Random Forest + Handcrafted Features | KNN + Handcrafted Features |
|---|---|---|---|---|---|
| Feature Extraction | Pre-trained ResNet50 (deep CNN) with fine-tuned custom layers | CNN model (e.g., VGG, ResNet) with all layers fine-tuned | CNN features extracted from a pre-trained model (e.g., ResNet) | Handcrafted features (e.g., texture, shape) via image processing techniques | Handcrafted features (e.g., texture, shape) or pre-extracted CNN features |
| Classification Approach | KNN on extracted CNN features | CNN performs classification directly | SVM applied to features from the CNN | Random Forest on handcrafted features | KNN applied directly to handcrafted or extracted CNN features |
| Training Complexity | Moderate (CNN for feature extraction + KNN for classification) | High (CNN training is complex and computationally expensive) | Moderate (SVM training on extracted CNN features) | High (Random Forest can be computationally intensive) | Low to Moderate (depending on feature extraction complexity) |
| Interpretability | Moderate (KNN is interpretable, but CNN features are abstract) | Low (deep CNNs are black-box models) | Moderate (SVM is more interpretable than a CNN, but not fully transparent) | High (Random Forests are more interpretable) | High (KNN is simple and interpretable) |
| Accuracy (for imbalanced data) | High (CNN feature extraction + KNN's handling of non-linearities) | High (CNNs learn complex patterns but may suffer on imbalanced data) | Moderate to High (SVM is sensitive to class imbalance and requires careful tuning) | Moderate (can be sensitive to class imbalance) | Moderate (KNN can suffer from imbalance but handles small datasets well) |
| Generalization Ability | High (CNN extracts powerful features and KNN provides good generalization) | High (CNNs generalize well, especially with data augmentation) | Moderate to High (SVM generalizes well but may overfit on small data) | Moderate (can overfit if features are not well tuned) | Moderate (KNN can suffer from imbalance but handles small datasets well) |
| Computational Resources | Moderate (requires a pre-trained CNN plus KNN, but less expensive than training an entire CNN from scratch) | High (training deep CNNs from scratch requires significant resources) | Moderate (training an SVM on CNN features is less expensive than full CNN training) | High (Random Forest training can be resource-heavy on large datasets) | Low to Moderate (KNN is computationally cheap but may require large storage for the feature space) |
| Data Requirements | Moderate (the CNN works with limited data thanks to pre-training; KNN needs labeled data for classification) | High (CNNs require large amounts of labeled data to train effectively) | Moderate (SVM requires relatively fewer data points than CNNs) | Moderate to High (Random Forests require large datasets for robust performance) | Moderate to High (KNN performs better with a reasonable amount of labeled data) |
| Evaluation Metrics: Accuracy | High (due to the ability of CNN features + KNN classification) | High (CNNs typically perform well on large datasets) | Moderate (SVM performs well but can suffer on unbalanced datasets) | Moderate (Random Forest may not be as accurate as CNN-based models) | Moderate (KNN accuracy may decrease with noisy or imbalanced data) |
| Evaluation Metrics: Precision | High (KNN is effective at distinguishing abnormal from normal cases, especially with CNN features) | High (CNNs are highly effective at classifying abnormalities in medical images) | Moderate to High (SVM can be good, but hyperparameter tuning is important) | Moderate (precision depends heavily on the quality of handcrafted features) | Moderate (precision can suffer due to KNN's sensitivity to noisy features) |
| Evaluation Metrics: F1 Score | High (benefits from the balance of precision and recall, especially with KNN as the classifier) | High (deep CNNs are effective at balancing precision and recall) | Moderate to High (SVM balances precision and recall well with tuning) | Moderate (Random Forests may not balance precision and recall as well as CNNs) | Moderate (F1 score can be impacted by KNN's sensitivity to feature quality) |
| Evaluation Metrics: ROC-AUC | High (KNN can provide a high ROC-AUC when combined with robust feature extraction such as ResNet50) | High (CNNs can achieve high ROC-AUC when trained on large and diverse datasets) | Moderate to High (SVM's ROC-AUC performance can be improved with tuning) | Moderate (Random Forest's ROC-AUC may be lower than CNN-based methods) | Moderate (KNN's ROC-AUC may be affected by noisy features) |
Evaluation Metrics
Evaluate the Model:
Prediction and Evaluation: After training the KNN classifier, the model makes predictions on the validation data. Evaluation
metrics such as accuracy, precision, recall, F1 score, and ROC AUC are computed to assess the model's performance. Proper
evaluation using multiple metrics ensures a comprehensive understanding of model performance, which is crucial in medical
applications where the cost of misdiagnosis is high. Table 2 presents the evaluation results across different metrics.
Accuracy: measures how many predictions are correct overall.
Precision and Recall: help balance false positives and false negatives, which is especially important in medical image
classification where the consequences of misclassification can be severe.
F1 Score: is a balance between precision and recall, especially useful in imbalanced datasets.
ROC AUC: evaluates the model’s ability to distinguish between classes.
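A short sketch of computing these metrics with scikit-learn follows; the label and score arrays are illustrative placeholders:

import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # ground truth
y_score = np.array([0.2, 0.4, 0.9, 0.6, 0.8, 0.3, 0.7, 0.1])  # P(abnormal)
y_pred = (y_score >= 0.5).astype(int)  # decision threshold can be tuned

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, y_score))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))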
Table 2: Comparison of evaluation metrics across methods

| Metric | Hybrid KNN + ResNet50 | Traditional CNN (e.g., VGG, ResNet) | SVM + CNN Features | Random Forest + Handcrafted Features | KNN + Handcrafted Features |
|---|---|---|---|---|---|
| Accuracy | High (due to robust feature extraction + KNN classifier) | High (CNNs perform well on large datasets) | Moderate to High (SVM can struggle with imbalanced data) | Moderate (depends on the quality of handcrafted features) | Moderate (sensitive to noisy or imbalanced data) |
| Precision | High (KNN effectively classifies abnormalities using CNN features) | High (CNNs can learn precise feature representations) | Moderate to High (depends on class distribution and tuning) | Moderate (depends on feature engineering) | Moderate (depends on feature selection) |
| Recall | High (KNN's non-parametric nature helps in detecting rare abnormalities) | High (CNNs are good at detecting abnormalities) | Moderate (SVM may miss rare abnormalities in imbalanced classes) | Moderate (Random Forests might miss rare cases) | Low to Moderate (KNN can struggle with rare cases) |
| F1 Score | High (good balance of precision and recall due to the hybrid approach) | High (CNNs perform well on F1 due to balanced precision and recall) | Moderate to High (SVM with tuning can perform well but may be imbalanced) | Moderate (may not balance precision and recall well) | Moderate (KNN can suffer from imbalanced data) |
| ROC AUC | High (KNN + CNN features typically result in a strong ROC AUC) | High (CNNs can achieve a high ROC AUC when trained on diverse datasets) | Moderate to High (SVM can perform well but needs optimization) | Moderate (Random Forests often do not perform as well on AUC) | Moderate (KNN is more sensitive to data quality and imbalance) |
Tool/Technology Details
Python: Python is a popular programming language used for machine learning and has many libraries such as pandas, NumPy,
OpenCV, Scikit-learn, and TensorFlow that can be used to implement various machine learning algorithms.
Scikit-learn: Scikit-learn is another popular Python library for machine learning that provides various algorithms for
classification, regression, and clustering.
OpenCV: OpenCV is a large open-source library for computer vision, machine learning, and image processing.
Benefits of the Hybrid Approach:
ResNet50 and KNN Hybrid: This hybrid approach takes advantage of the strong feature extraction abilities of CNNs (like
ResNet50) and combines them with the simplicity and efficiency of KNN for classification.
Improved Performance: Pre-trained CNN models like ResNet50 yield better feature representations of the images, which KNN
can then use effectively for anomaly detection and classification.
Flexibility: The approach is flexible and could be easily applied to a range of medical imaging tasks like tumor detection, organ
classification, and disease diagnosis.
This approach offers a blend of deep learning and classical machine learning techniques, providing both the robustness of CNNs
and the simplicity of KNN for final classification, ideal for applications in medical image analysis where accuracy and
interpretability are paramount.
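Putting the pieces together, a hedged end-to-end sketch of the hybrid pipeline (feature extraction followed by KNN classification) might look like this; the data loading is replaced with random placeholders:

import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 1. Placeholder MRI batch (real code would load and resize actual scans).
images = np.random.rand(60, 224, 224, 3) * 255.0
labels = np.random.randint(0, 2, size=60)

# 2. Extract 2048-dim feature vectors with a frozen, pre-trained ResNet50.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")
features = extractor.predict(preprocess_input(images))

# 3. Classify the extracted features with KNN (70-30 split, as in the paper).
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=42)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("Held-out accuracy:", knn.score(X_test, y_test))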
Future Work
We propose enhancing the model’s ability to handle small and imbalanced datasets through advanced techniques like transfer
learning and Generative Adversarial Networks (GANs). We also recommend exploring lightweight CNN architectures, such as
MobileNet and EfficientNet, for deployment in low-resource settings. Moreover, integrating the model with clinical systems for
real-time abnormality detection and prediction could significantly enhance its clinical applicability. Finally, optimizing the model
for real-time processing could enable its use in urgent medical scenarios, such as surgery or emergency care.
III. Conclusion
This paper proposes a hybrid KNN-based model for medical image classification and abnormality detection. By combining KNN
with deep learning techniques, particularly CNNs and the ResNet family, we enhance both the accuracy and robustness of
medical image classification. Our experiments on a brain-specific dataset show that the proposed method outperforms traditional
approaches, especially in class-imbalanced scenarios. This approach has the potential to assist clinicians in early diagnosis by
providing more accurate and reliable abnormality detection in medical images.
References
1. J. Zhuang, J. Cai, W. Zheng, R. Wang and H. Li, "Deep Nearest Neighbors for Anomaly Detection in Chest X-Rays," in
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, Springer, 2020.
2. I. Khasanah, "Enhancing Alzheimer's Disease Diagnosis with K-NN: A Study on Pre-processed MRI Data," Int. J. Artif.
Intell. Med. Issues, vol. 2, no. 1, 2024.
3. S. S. D. Babu, A. Rajasekaran, V. M. Gohil and G. S. A. S. V. Kumar, "A Survey on Medical Image Classification and
Abnormality Detection Using K-Nearest Neighbors," IEEE Access, vol. 8, 2020.
4. A. M. S. K. Gaur, A. K. Sharma and S. K. Sharma, "Hybrid KNN and Support Vector Machines for Brain Tumor
Detection in MRI Images," Int. J. Imaging Syst. Technol., vol. 30, no. 5, 2020.
5. S. Rajasekaran, P. Kumar and A. Sharma, "K-Nearest Neighbors for Skin Cancer Detection in Dermoscopic Images," J.
Med. Syst., vol. 43, no. 4, 2019.
6. R. J. Patel, N. A. Jadhav and A. S. Patel, "KNN Classifier for Early Detection of Lung Cancer in CT scans," J. Med.
Imaging Health Inform., vol. 10, no. 3, 2020.
7. V. Vijayakumar, R. Rajesh and K. S. Rajasekaran, "KNN-Based Detection of Abnormalities in Colonoscopy Images," J.
Med. Imaging Health Inform., vol. 9, no. 5, 2019.
8. D. Shreyas, M. I. Shazia and S. Agarwal, "Deep KNN: A Novel Approach to Medical Image Classification," IEEE
Trans. Med. Imaging, vol. 39, no. 10, 2020.
9. A. B. Gupta, S. S. Bhatia and S. R. Sharma, "KNN-Based Approach for Lung Cancer Detection in CT Images," J. Med.
Imaging Health Inform., vol. 10, no. 4, 2020.
10. A. M. Rajan, S. R. Kumar and K. V. Nair, "A Hybrid Model for Breast Cancer Detection Using KNN and SVM," J.
Biomed. Inform., vol. 90, no. 6, 2019.
11. S. S. D. Babu, A. Rajasekaran and V. M. Gohil, "KNN Classifier for Alzheimer's Disease Detection Using MRI Data," J.
Med. Imaging Health Inform., vol. 9, no. 6, 2019.
12. K. S. Rajasekaran, V. P. Sivakumar and A. M. S. K. Gaur, "Improved KNN Algorithm for Mammography Image
Classification," IEEE Access, vol. 8, 2020.
13. J. Zhuang, J. Cai, R. Wang, J. Zhang and W. S. Zheng, "Deep kNN for Medical Image Classification," in Medical Image
Computing and Computer Assisted Intervention – MICCAI 2020, vol. 12261, Springer, Cham, 2020, pp. 120–129.
14. S. Jain, A. Seal and A. Ojha, "A Hybrid Convolutional Neural Network with Meta Feature Learning for Abnormality
Detection in Wireless Capsule Endoscopy Images," IEEE Trans. Biomed. Eng., 2021.
15. P. Xi, C. Shu and R. Goubran, "Abnormality Detection in Mammography using Deep Convolutional Neural Networks,"
IEEE Access, vol. 7, 2019.
16. M. Tahir, A. Naeem, H. Malik, J. Tanveer, R. A. Naqvi and S.-W. Lee, "DSCC_Net: Multi-Classification Deep Learning
Models for Diagnosing of Skin Cancer Using Dermoscopic Images," Cancers, vol. 15, no. 7, p. 2179, 2023.
17. K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in Proc. Int.
Conf. Learn. Representations (ICLR), 2015.
18. K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," in Proc. IEEE Conf. Comput.
Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, 2016, pp. 770–778.
19. K. Taunk, S. De, S. Verma and A. Swetapadma, "A Brief Review of Nearest Neighbor Algorithm for Learning and
Classification," in Proc. Int. Conf. Intell. Comput. Control Syst. (ICCS), Madurai, India, 2019, pp. 12551260.
20. J. Li, S. Lin, K. Yu and G. Guo, "K-Nearest Neighbor Classification Algorithm Based on Hamming Distance," Quantum
Inf. Process., vol. 21, no. 18, 2022.