INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue IX, September 2025
www.ijltemas.in Page 188
A Comparative Study of Deep Learning Models for Fake News
Classification
1 Amal Yadav, 1 Manoj Kumar, 1 Sharad Kumar, 1 Sachin Kumar, 1 Jagdeep Singh, 2 Vikas Sharma
1 School of Engineering & Technology, Shri Venkateshwara University, Gajraula, U.P. India
2 Department of Computer Applications, SRM Institute of Science and Technology, Delhi NCR Campus, Ghaziabad, U.P.
India
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1409000026
Abstract—The rapid growth of online media platforms has enabled the widespread dissemination of misinformation, making the accurate classification of fake news a significant problem in natural language processing (NLP) and social media analysis. Within NLP, deep learning models have become a standard and effective methodology, as they can learn rich linguistic and contextual representations from large datasets. This paper contributes a comparative analysis of several deep learning architectures for fake news identification: Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, and transformer-based models such as BERT. All models are compared on fake news detection datasets using common measures, and their outcomes are presented in terms of accuracy, precision, recall, F1-score, and overall computational efficiency. The analysis reveals that transformer-based models offer the best performance reported in the academic literature, owing to their contextual awareness, while RNN and CNN models offer the best computational efficiency and training times. These findings highlight the respective advantages and disadvantages of each architecture and shed light on useful design approaches for building effective and operationally efficient fake news detection systems for academics and practitioners alike.
Keywords—Fake News Detection, Deep Learning, Natural Language Processing, Fake News Classification, Transformer Models.
I. Introduction
In the current digital age, reliance on the internet and social media has changed how information is produced, shared, and received. Billions of individuals can now acquire information instantaneously via platforms such as Facebook, Twitter (X), Instagram, and online news websites. Fake news generally describes fabricated or misleading information portrayed as legitimate news rather than genuine journalism. It has become an issue of societal concern: it may influence public opinion, shape political discourse, and even affect the course of world events. The ramifications have included not only mistrust in journalistic institutions but also violence, economic downturns, and harm during pandemics. Accordingly, researchers must develop trustworthy and scalable methods for the detection of fake news. Early approaches to fake news detection relied on rule-based techniques, statistical models, and manual fact-checking. While these methods have merits, each struggles with the never-ending influx of new online misinformation: they require direct human labour and cannot scale to the extreme volume and velocity of content shared on social media platforms. For this reason, machine learning has proven useful for fake news classification tasks, particularly supervised classifiers such as Support Vector Machines (SVM), Decision Trees, and Naïve Bayes. Although these techniques advanced beyond rule-based systems by exploiting linguistic and stylistic cues, they remained limited in their ability to model the complex semantics, contextual dependencies, and long-distance correlations of news articles and social media text. The rise of deep learning has opened new directions for addressing many of these challenges. In recent years, deep learning models, which automatically learn hierarchical feature representations directly from raw data, have demonstrated considerable success in many NLP tasks, such as sentiment analysis, machine translation, and text summarization. For fake news detection, deep learning models can capture complex textual patterns, semantic features, and even multimodal features when images, videos, or metadata are considered. Their ability to extract features automatically makes them more appealing than traditional approaches, which tend to rely on hand-crafted features. A number of deep learning architectures have been applied to the fake news detection problem, each with its own benefits and drawbacks. While Convolutional Neural Networks (CNNs) have traditionally been popular for processing images, they have been adapted for text classification, extracting local n-gram features to detect patterns associated with misinformation. Nandan et al. [1] proposed a framework for real-time fake news detection utilizing deep learning methods and showed that CNNs could effectively identify textual patterns associated with misinformation. Recurrent Neural Networks (RNNs) and their more advanced form, Long Short-Term Memory (LSTM) networks, offer great promise for modelling sequential dependencies and capturing temporal syntactic and semantic structure in textual data. More recently, the emergence of transformer-based architectures, including Bidirectional Encoder Representations from Transformers (BERT), has reshaped the NLP landscape, providing powerful contextual embeddings that capture dependencies in both directions and setting new state-of-the-art results in a variety of classification tasks. These advances raise important questions regarding the degree to which the accuracy, efficiency, and generalizability of different deep learning models support their adoption for fake news detection. Despite numerous efforts, detecting fake news remains challenging for several reasons. First, fake news is designed to look like real news, rendering surface-level classification deceptive. Second, the language of misinformation is highly diverse, with
different language appearing in different domains of misinformation. For example, the language of misinformation in political posts differs from that in health, economics, or entertainment posts. Third, multimodal misinformation poses additional barriers to detection, as misleading text may appear together with images or video, or captions may reframe what an image or video shows. Additionally, scalability and computational efficiency must be considered for deployment on real-world platforms where millions (or billions) of posts can be made every second. Each of these challenges highlights the value of comparative studies and systematic evaluations of different deep learning approaches in the fake news detection literature. This study makes three contributions. First, it provides a systematic evaluation of widely used deep learning architectures for fake news detection, offering empirical evidence on their relative performance. Second, it identifies the contexts in which simpler models, such as CNNs and RNNs, may be preferable because of their lower computational cost and only modest loss of accuracy relative to transformer models. Similarly, Benaouda et al. [2] compared CNN models against traditional machine learning classifiers and found that CNNs outperformed the classical methods in both accuracy and reliability for fake news detection tasks. Third, the paper outlines future directions in fake news detection, such as multimodal learning, cross-lingual adaptation, and explainable AI, which are increasingly relevant to countering misinformation at a global level.
II. Literature Review
The growth of online content has increased the dissemination of misinformation, establishing fake news detection as a vital area of research. Recent work has highlighted that machine learning and deep learning approaches can reliably recognize fake news articles. Evolutionary ensemble deep learning approaches have emerged as promising alternatives.
Kaushik and Nadeem [3] presented an ensemble framework compatible with multiple deep learning models for fake news detection,
demonstrating an enhancement in robustness and generalization versus a standard approach over a variety of datasets. Similarly,
Khelil et al. [4] analysed fake news in Arabic, studying an array of deep learning strategies, while noting distinctive challenges that
non-English languages encounter, including limited annotated datasets and morphological complexity. While deep learning models are on the rise, classical machine learning approaches also deliver useful detection performance. Tamang et al. [5] described a random forest approach that uses feature importance to separate fake news from legitimate news, concluding that ensemble tree methods can be competitive when combined with customized feature engineering. Dongre and Kalaiarasi [6]
described and examined hybrid frameworks that utilize multivariate feature selection combined with deep learning, showing that
hybrid frameworks better exploit complementary information from text, metadata, and social contexts to improve classification
results. Comparison studies have illustrated differences in performance between machine learning models and deep learning models.
For example, Roslan et al. [7] compared multiple machine learning models (SVM, logistic regression) against multiple deep learning
models (LSTM, CNN), and reported that the deep learning models consistently produced greater precision, recall, and F1-scores. In
a similar vein, Brinda et al. [8] employed an array of heterogeneous deep learning architectures to detect the emergence of fake news
on social media, addressing the complex issues of concept drift and temporal dynamics associated with online misinformation. At
the same time, applications of pre-trained transformer models (e.g., BERT) in improving contextual understanding of text have
emerged. Davis et al. [9] showed that BERT-based models generally capture semantic and syntactic structure better, proving useful for timely fake news detection. In addition, Saha and Puthran [10] conducted an
extensive review and devised a new framework for fake news detection based on content, propagation, and user features that
emphasized multi-dimensional feature analysis to improve performance. More recently, initiatives have extended fake news detection to related areas such as online review and sentiment analysis. Sharma et al. [12] used machine learning models, and Mercy et al. [13] proposed the LDCP model, to recognize fraudulent reviews on e-commerce and social media platforms respectively, demonstrating that techniques from fake news detection can help in adjacent applications. Further, Kumar and Tomar [14] examined and developed hybrid detection models that combine classical classifiers with deep learning, improving performance on news articles and social media posts. Overall, the literature shows a clear trend
toward hybrid and ensemble deep learning approaches, with and without feature selection, alongside the use of transformer models
and adaptations based on context or data source. These studies further indicate the need for robust, scalable, language-independent
frameworks to support real-time detection of misinformation across multiple social media platforms.
III. Proposed Methodology
The proposed methodology is purposefully designed to evaluate the performance of multiple deep learning models for the classification and detection of potential fake news. It is framed around a five-stage framework: dataset collection and preprocessing; feature representation; model selection and architecture design; training and evaluation; and comparative analysis. Each stage is significant for ensuring equivalent comparisons and reproducible results. Fig. 1 outlines the overall approach to detecting fraudulent reviews and news. Before moving on to the individual models, it should be noted that this is a comparative exercise that seeks to demonstrate that deep learning approaches outperform older methods in both performance and practicality.
Fig. 1. Fake Review and News Workflow
1. Collection and Preprocessing of Datasets: This research utilizes benchmark datasets frequently used in fake news detection research, including publicly available collections such as LIAR, FakeNewsNet, and Kaggle's Fake News dataset, which contain labelled news articles and social media posts from varying domains (e.g., politics, health, entertainment). Each dataset is split into training, validation, and testing subsets to avoid data leakage and guarantee unbiased evaluation. Preprocessing is conducted to clean and standardize the text data: removal of stopwords, punctuation, and special characters, followed by tokenization, lowercasing, and lemmatization. In addition, noise reduction includes filtering URLs, hashtags, and mentions in social media posts. For transformer-based models, text sequences are tokenized using the appropriate model-specific tokenizer (e.g., the BERT tokenizer) to produce input embeddings.
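The cleaning steps above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the stopword list is a toy subset, and the lemmatization step mentioned in the text (typically done with NLTK or spaCy) is omitted here.

```python
import re

# Toy stopword subset; a real pipeline would use a full list (e.g., NLTK's).
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "on", "at"}

def preprocess(text):
    """Clean a raw post: strip URLs, mentions, hashtags, and punctuation;
    lowercase; tokenize on whitespace; drop stopwords."""
    text = re.sub(r"https?://\S+", " ", text)   # filter URLs
    text = re.sub(r"[@#]\w+", " ", text)        # filter mentions and hashtags
    text = re.sub(r"[^a-zA-Z\s]", " ", text)    # remove punctuation/special chars
    tokens = text.lower().split()               # lowercase + tokenize
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("BREAKING: Miracle cure found! Read more at http://example.com #health @newsbot"))
# → ['breaking', 'miracle', 'cure', 'found', 'read', 'more']
```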
2. Feature Representation: The study examines both traditional word embeddings and contextual embeddings. For the CNN, RNN, and LSTM models, pre-trained word embeddings such as GloVe or Word2Vec are used to produce dense vector representations of words that encapsulate their semantic meaning. The transformer-based models, by contrast, rely on BERT, which produces contextual embeddings that encode semantic and syntactic features bidirectionally, capturing the context around each word in the sentence.
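For the non-transformer models, the embedding lookup amounts to mapping each token to a pre-trained vector and padding to a fixed length. The sketch below uses a tiny hand-made dictionary as a stand-in for GloVe/Word2Vec (real vectors are 100-300 dimensions); the vector values and sizes are purely illustrative.

```python
import numpy as np

# Toy stand-in for pre-trained GloVe/Word2Vec vectors (values are made up).
EMB = {
    "fake":  np.array([0.9, -0.2, 0.1]),
    "news":  np.array([0.4,  0.7, -0.3]),
    "today": np.array([0.1,  0.1,  0.5]),
}
DIM, MAX_LEN = 3, 5

def embed(tokens):
    """Map tokens to a fixed (MAX_LEN, DIM) matrix:
    out-of-vocabulary words become zero vectors, short texts are zero-padded."""
    mat = np.zeros((MAX_LEN, DIM))
    for i, tok in enumerate(tokens[:MAX_LEN]):
        mat[i] = EMB.get(tok, np.zeros(DIM))
    return mat

x = embed(["fake", "news", "today", "unseen"])
print(x.shape)  # → (5, 3)
```

In practice the same idea is realized by an embedding layer initialized from the pre-trained matrix, with an index reserved for unknown and padding tokens.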
3. Model Architectures: The research studies several deep learning architectures, each with unique advantages for classifying fake news content. Convolutional Neural Networks (CNNs) can be adapted for text by applying convolutional filters over word embeddings to extract local patterns, such as key phrases or local n-grams indicative of fake news. These features are then pooled into a summary representation and passed to fully connected layers that classify the text as fake or real. Recurrent Neural Networks (RNNs) are designed to capture the sequential dependencies in text: they maintain hidden states that are carried across the input sequence, capturing the sequential flow of information. To address limitations of standard RNNs, such as vanishing gradients, Long Short-Term Memory (LSTM) networks are used to model long-range dependencies in text sequences. Bidirectional LSTMs (BiLSTMs) can additionally consider both past and future context rather than a single direction. Finally, transformer models, most notably BERT, generate deep contextual embeddings for each token by attending to all other tokens in the sequence bidirectionally. A pre-trained BERT model with a classification layer on top is fine-tuned on the fake news datasets, allowing it to adapt its general language knowledge to the classification task at comparatively modest additional training cost.
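The CNN-for-text idea described above (filters over n-gram windows, ReLU, max-over-time pooling, a dense classifier) can be shown as a single forward pass in NumPy. This is a didactic sketch with random, untrained weights and toy dimensions, not the trained models used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim, n_filters, win = 10, 8, 4, 3   # toy sizes: 10 tokens, 8-dim embeddings

X = rng.normal(size=(seq_len, dim))          # word-embedding matrix for one article
W = rng.normal(size=(n_filters, win, dim))   # convolution filters over 3-gram windows
b = np.zeros(n_filters)
w_out, b_out = rng.normal(size=n_filters), 0.0

def cnn_forward(X):
    # Slide each filter over every 3-gram window of the embedding matrix.
    conv = np.array([[np.sum(W[f] * X[i:i + win]) + b[f]
                      for i in range(seq_len - win + 1)]
                     for f in range(n_filters)])
    feat = np.maximum(conv, 0).max(axis=1)   # ReLU + max-over-time pooling
    logit = feat @ w_out + b_out             # fully connected classification layer
    return 1 / (1 + np.exp(-logit))          # sigmoid -> P(article is fake)

p = cnn_forward(X)
print(0.0 <= p <= 1.0)  # → True
```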
4. Training and Evaluation: Consistent with principled research practice, all models were trained on identical splits of the datasets, with distinct subsets designated for training, validation, and testing. Hyperparameters of each model, including learning rate, batch size, number of layers, and dropout rate, were tuned using grid search or random search. Regularization techniques and early stopping were used throughout training, as appropriate, to prevent overfitting and promote generalization. Assessment combined several metrics: accuracy for overall correctness, precision for the correctness of predicted fake news, recall for the model's ability to detect all true instances of fake news, and F1-score for the trade-off between precision and recall. In addition to these performance metrics, computational efficiency, including training and inference times, was recorded for later deployment analysis.
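The four evaluation metrics follow directly from the confusion-matrix counts. As a minimal self-contained reference (the labels below are made-up toy data, not the paper's results):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1 for binary labels (1 = fake, 0 = real)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0          # correctness of "fake" calls
    rec = tp / (tp + fn) if tp + fn else 0.0           # coverage of actual fakes
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0  # harmonic mean
    return acc, prec, rec, f1

y_true = [1, 1, 1, 0, 0, 1, 0, 1]   # toy ground truth
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]   # toy predictions
print(classification_metrics(y_true, y_pred))  # → (0.75, 0.8, 0.8, ~0.8)
```

In practice libraries such as scikit-learn provide equivalent functions; the explicit version makes the definitions used in Section IV unambiguous.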
5. Comparative Analysis: After training and assessment, the models are compared systematically across the evaluation metrics. Transformer-based architectures such as BERT would be expected to achieve higher accuracy and F1-scores than CNNs and RNNs owing to their deep contextual understanding. CNNs and RNNs, in turn, may train faster than transformer-based models at a lower computational expense, making them viable in constrained settings. The analysis of the models
will expose the trade-offs between performance and efficiency, helping decision-makers, researchers, and practitioners select an appropriate model for specific applications based on intended use.
IV. Result & Analysis
This section presents the experimental results obtained from testing the convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), and Bidirectional Encoder Representations from Transformers (BERT) deep learning models on the benchmark fake news datasets. The same dataset splits were used for training and testing of all models, and the models were evaluated with several metrics to allow a thorough comparison of performance and computational cost. The experiments used the benchmark LIAR, FakeNewsNet, and Kaggle Fake News datasets, which contain labelled news articles and social media posts from several domains such as politics, health, and entertainment. The data was thoroughly preprocessed for consistency and quality: tokenization, lowercasing, lemmatization, and removal of punctuation, special characters, URLs, hashtags, and mentions. The CNN, RNN, and LSTM models used pre-trained word embeddings such as GloVe to convert text into dense vectors that capture meaning. The BERT models used their native tokenizers to create contextual embeddings for each input sequence, providing stronger bidirectional semantic and syntactic representation. Each dataset was split into training, validation, and testing subsets using random sampling in a 70-15-15 ratio. Hyperparameter tuning for all models used grid search or random search to find the best learning rate, batch size, number of hidden layers, dropout ratio, and sequence length. Dropout and early stopping were applied to prevent overfitting and increase generalization. All experiments were implemented in Python with TensorFlow and PyTorch backends on a workstation with an NVIDIA GPU for fast, reproducible computation. The models were trained until convergence or until early stopping criteria were met, and were then assessed on a held-out test set using accuracy, precision, recall, and F1-score, together with computational efficiency. This consistent configuration provides a fair comparison of the CNN, RNN, LSTM, and BERT models while clarifying both their performance and their practical viability in real-world fake news detection.
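The 70-15-15 random split mentioned above can be reproduced with a short helper. This is an illustrative sketch (the seed value is arbitrary); the paper's actual splits may have been produced by other tooling.

```python
import random

def split_70_15_15(items, seed=42):
    """Shuffle and split a dataset into 70% train / 15% validation / 15% test."""
    items = list(items)
    random.Random(seed).shuffle(items)       # fixed seed for reproducibility
    n = len(items)
    n_train, n_val = int(0.70 * n), int(0.15 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]           # remainder, so every item is used once
    return train, val, test

train, val, test = split_70_15_15(range(100))
print(len(train), len(val), len(test))  # → 70 15 15
```

Splitting once, with a fixed seed, before any model-specific processing is what prevents the data leakage the methodology warns about.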
1. Accuracy: Accuracy indicates the overall correctness of the model when classifying news articles as fake or real, computed as the ratio of correctly predicted instances to all classified instances. Within this study, accuracy indicates each model's performance across the entire dataset: high accuracy reflects that the model discriminates well between fake and real news articles, whilst low accuracy reflects a higher misclassification rate. Of the models assessed, transformer-based models such as BERT would be expected to achieve the greatest accuracy because of their contextual modelling, while CNNs, RNNs, and LSTMs would demonstrate moderate performance depending on their architecture and ability to extract semantic patterns sequentially. TABLE I shows that BERT attained the highest accuracy owing to its deep contextual understanding, and that LSTM achieved higher accuracy than RNN and CNN because it captures long-term dependencies. CNN showed the lowest overall accuracy, reflecting its reliance on local features alone. Fig. 2 visualizes these accuracies: CNN (0.84), RNN (0.85), LSTM (0.87), and BERT (0.92).
TABLE I. Accuracy of Deep Learning Models for Fake News Classification
Model Accuracy
CNN 0.84
RNN 0.85
LSTM 0.87
BERT 0.92
Fig. 2. Accuracy of Deep Learning Models for Fake News Classification
2. Precision: Precision is the proportion of articles predicted as fake news that are actually fake. Precision is especially important for minimizing false positives (real news labelled as fake). High precision suggests the model is confident when it predicts fake news and rarely mistakes real news for fake; it complements recall by emphasizing the dependability of the reported predictions. TABLE II measures the correctness of predicted fake news. The data show that BERT outperforms the other models, minimizing false positives, while LSTM and RNN reach moderate precision and CNN rates lowest, suggesting that its simpler architecture leads to more misclassifications. Fig. 3 compares the precision of CNN (0.82), RNN (0.83), LSTM (0.85), and BERT (0.91), with BERT highest.
TABLE II. Precision of Deep Learning Models for Fake News Classification
Model Precision
CNN 0.82
RNN 0.83
LSTM 0.85
BERT 0.91
Fig. 3. Precision of Deep Learning Models for Fake News Classification
3. Recall: Recall measures the proportion of actual fake news articles the model identifies. High recall is important for ensuring that most misinformation is detected, even at the cost of occasionally flagging real news. In this study, BERT has the highest recall because of its bidirectional attention mechanism and deep semantic understanding, outperforming the CNN models, whose contextual modelling is limited. Recall and precision together provide a balanced view of how well a model detects fake news. TABLE III displays each model's ability to detect all fake news instances: BERT again had the highest recall and captured most of the fake news articles, LSTM had a good recall rate, and CNN missed some instances because of its focus on local patterns rather than sequential context. Fig. 4 shows the recall of CNN (0.80), RNN (0.81), LSTM (0.84), and BERT (0.90), with BERT highest.
TABLE III. Recall of Deep Learning Models for Fake News Classification
Model Recall
CNN 0.80
RNN 0.81
LSTM 0.84
BERT 0.90
Fig. 4. Recall of Deep Learning Models for Fake News Classification
4. F1-Score: The F1-score is the harmonic mean of precision and recall, showing how well the model balances avoiding false positives against capturing all fake news articles. It is a good evaluation metric for imbalanced datasets, which fake news datasets typically are. A high F1-score means the model achieves high precision without sacrificing recall. In this study, BERT has the highest F1-score, indicating the best overall classification performance. LSTM and RNN have moderate scores; CNN is faster but has a slightly lower F1-score, reflecting its simpler architecture. TABLE IV summarizes how well each model trades off precision and recall in a single metric. Fig. 5 shows the F1-scores of CNN (0.81), RNN (0.82), LSTM (0.84), and BERT (0.91), with BERT highest.
TABLE IV. F1-Score of Deep Learning Models for Fake News Classification
Model F1-Score
CNN 0.81
RNN 0.82
LSTM 0.84
BERT 0.91
Fig. 5. F1-Score of Deep Learning Models for Fake News Classification
5. Training Time: Training time represents the time and computational resources needed to train each model until convergence. Although transformer-based models such as BERT achieved the best accuracy and F1-scores, they required significantly more time and GPU resources to do so. This metric assesses the feasibility of deploying these models in real-life settings where computational efficiency must be prioritized. CNNs, with their simpler, highly parallelizable architecture, trained much faster, whereas RNNs and LSTMs took longer because they require sequential
computation. BERT had superior accuracy but also required the most computational resources, highlighting the trade-off between performance and efficiency illustrated in TABLE V. Fig. 6 shows the training times: CNN (120 s), RNN (150 s), LSTM (220 s), and BERT (450 s), with BERT taking the longest.
TABLE V. Training Time of Deep Learning Models for Fake News Classification
Model Training Time (s)
CNN 120
RNN 150
LSTM 220
BERT 450
Fig. 6. Training Time of Deep Learning Models for Fake News Classification
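The accuracy-versus-cost trade-off can be made concrete by combining the reported F1-scores and training times. The snippet below is simple arithmetic on the figures from the tables above, shown only to quantify the trade-off; it involves no new measurements.

```python
# Reported results from the tables above: (F1-score, training time in seconds).
results = {"CNN": (0.81, 120), "RNN": (0.82, 150), "LSTM": (0.84, 220), "BERT": (0.91, 450)}

base_f1, base_t = results["CNN"]  # CNN as the cheapest baseline
for model, (f1, t) in results.items():
    # Relative training cost and absolute F1 gain over the CNN baseline.
    print(f"{model}: {t / base_t:.2f}x CNN training time for {f1 - base_f1:+.2f} F1")
# e.g. BERT costs 3.75x the CNN training time for +0.10 F1
```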
The comparative analysis reveals that transformer-based models, particularly BERT, are better suited to fake news detection because they can understand deep context and more complex semantic patterns. Their disadvantage is that they require significantly more compute time; in resource-constrained environments, CNN or LSTM models that balance compute time with reasonable accuracy may therefore be preferable. The results also suggest that future improvements in fake news detection may come from hybrid approaches that incorporate multimodal features (e.g., images, videos, or metadata) and from transfer learning to improve generalization across domains and languages.
V. Conclusion
This paper has presented a comparison of various deep learning models for classifying fake news, evaluating CNN, RNN, LSTM, and BERT models on benchmark datasets. The experiments showed that the transformer-based model performed best across all measurements, with accuracy of 92%, precision of 0.91, recall of 0.90, and an F1-score of 0.91, attributable to its ability to model deep contextual and semantic relationships in text. Though LSTM networks performed well in learning sequential information, they trailed on the other metrics; RNN and CNN models performed moderately but trained quickly. The analysis also showed that ambiguous, sarcastic, or partially true news articles were difficult for every model, demonstrating the complexity of detecting fake news in the wild. Several directions remain for improving fake news detection systems. Introducing explainable AI could provide reasoning for why an article is classified a certain way, which would be useful for users and policymakers. Future work may also explore lightweight transformer variants or hybrid architectures that retain accuracy while being efficient enough for low-resource settings.
References
1. K. P. Nandan, B. Pakruddin, S. Afridi, S. K. R. and S. V. Pati, "Real-Time Detection of Fake News Articles Using Deep
Learning Techniques," 2025 International Conference on Next Generation Communication & Information Processing
(INCIP), Bangalore, India, 2025, pp. 687-691, doi: 10.1109/INCIP64058.2025.11019208.
2. W. Benaouda, S. Ouamour and H. Sayoud, "Comparison of CNN Model with Different Machine Learning Models for
Fake News Detection," 2024 1st International Conference on Electrical, Computer, Telecommunication and Energy
Technologies (ECTE-Tech), Oum El Bouaghi, Algeria, 2024, pp. 1-5, doi: 10.1109/ECTE-Tech62477.2024.10851137.
3. D. Kaushik and M. Nadeem, "Fake News Detection Using Evolutionary Ensemble Deep Learning," 2024 International
Conference on Communication, Computer Sciences and Engineering (IC3SE), Gautam Buddha Nagar, India, 2024, pp.
161-166, doi: 10.1109/IC3SE62002.2024.10593429.
4. Y. Khelil, S. Mechti and R. Faiz, "Detecting Arabic Fake News Using Deep Learning: A Review," 2024 International
Symposium of Systems, Advanced Technologies and Knowledge (ISSATK), Kairouan, Tunisia, 2024, pp. 1-6, doi:
10.1109/ISSATK62463.2024.10808291.
5. M. T. Tamang and M. S. Sharif, "An Innovative Random Forest-Based Approach for Enhancing Fake News Detection,"
2025 International Conference on Communication, Computing, Networking, and Control in Cyber-Physical Systems
(CCNCPS), Dubai, United Arab Emirates, 2025, pp. 25-30, doi: 10.1109/CCNCPS66785.2025.11135784.
6. A. K. Dongre and G. Kalaiarasi, "A Survey on Fake News Detection Using Multivariate Feature Selection and Hybrid
Deep Learning Approach," 2025 1st International Conference on AIML-Applications for Engineering & Technology
(ICAET), Pune, India, 2025, pp. 1-9, doi: 10.1109/ICAET63349.2025.10932142.
7. M. N. S. Roslan, M. Mohd and K. Shirai, "Investigating the Performance of Machine Learning and Deep Learning Models
in Fake News Detection," 2024 16th International Conference on Knowledge and System Engineering (KSE), Kuala
Lumpur, Malaysia, 2024, pp. 231-236, doi: 10.1109/KSE63888.2024.11063621.
8. B. M. Brinda, C. Rajan and K. Geetha, "Detecting Evolving Fake News in Social Media by Leveraging Heterogeneous
Deep Learning Model," 2024 Second International Conference on Advances in Information Technology (ICAIT),
Chikkamagaluru, Karnataka, India, 2024, pp. 1-5, doi: 10.1109/ICAIT61638.2024.10690338.
9. J. Davis, R. K R, S. D, S. A S and R. Jose, "Fake News Detection using BERT Model," 2025 2nd International Conference
on Trends in Engineering Systems and Technologies (ICTEST), Ernakulam, India, 2025, pp. 1-5, doi:
10.1109/ICTEST64710.2025.11042689.
10. I. Saha and S. Puthran, "Fake News Detection: A Comprehensive Review and a Novel Framework," 2024 OITS
International Conference on Information Technology (OCIT), Vijayawada, India, 2024, pp. 463-469, doi:
10.1109/OCIT65031.2024.00087.
11. T. K. Vashishth, Vikas, B. Kumar, R. Panwar, S. Kumar and S. Chaudhary, "Exploring the Role of Computer Vision in
Human Emotion Recognition: A Systematic Review and Meta-Analysis," 2023 Second International Conference on
Augmented Intelligence and Sustainable Systems (ICAISS), Trichy, India, 2023, pp. 1071-1077, doi:
10.1109/ICAISS58487.2023.10250614.
12. R. Sharma, V. Sharma, T. K. Vashishth, S. Shashi, A. Pandey and S. Chaudhary, "Revealing the Reliability of Amazon
Products via Innovative Fake Review Detection using Machine Learning," 2025 6th International Conference on
Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 2025, pp. 217-221,
doi: 10.1109/ICICV64824.2025.11086089.
13. S. S. Mercy, S. Venkatesan, B. M, N. S, L. Venkateswarlu and N. Padmaja, "LDCP: A Novel Approach to Predict Fake
Reviews in Online Social Network by using Learning based Data Classification Principle," 2024 International Conference
on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), Chennai, India, 2024, pp.
1-6, doi: 10.1109/ICSES63760.2024.10910317.
14. S. Kumar and P. Tomar, "Fake News Identification using Hybrid Detection Models," 2025 Global Conference in Emerging Technology (GINOTECH), Pune, India, 2025, pp. 1-6, doi: 10.1109/GINOTECH63460.2025.11077102.