Geometric Deep Learning: Understanding Graph Neural Networks through the Lens of Mathematics


Harshda C. Gore
Shailesh P. Dhome

Abstract: Geometric Deep Learning (GDL) extends traditional neural network paradigms to non-Euclidean data structures, enabling the effective processing of data that lies on manifolds or graphs. Among GDL techniques, Graph Neural Networks (GNNs) have emerged as powerful tools for modelling relational data by leveraging principles from graph theory and algebraic topology. This paper explores GNNs through the lens of mathematics, focusing on how geometric and topological insights drive the architecture and functionality of these networks. By framing GNNs in terms of graph signal processing and spectral theory, we illuminate how GNNs capture dependencies across nodes and edges, offering a structured approach to learning on graph-structured data. We further examine the theoretical underpinnings that make GNNs particularly suited for applications in social networks, molecular biology, and recommendation systems. In doing so, this study provides a mathematical perspective on the capabilities and limitations of GNNs, underscoring the role of invariance, equivariance, and generalization within graph-based learning models.
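The graph-signal-processing view described above can be made concrete with a single graph-convolution layer of the Kipf–Welling (GCN) form, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), where the symmetrically normalized adjacency acts as a low-pass filter on node signals. The sketch below is a minimal, illustrative implementation in plain NumPy (not code from the paper); the function name and the toy graph are our own assumptions.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One illustrative GCN layer: H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W).

    adj      : (n, n) symmetric adjacency matrix (0/1 entries)
    features : (n, d_in) node feature matrix H
    weight   : (d_in, d_out) learnable weight matrix W (hypothetical values here)
    """
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops: A + I
    deg = a_hat.sum(axis=1)                       # degrees of the self-looped graph
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)  # propagate, then ReLU

# Toy example: a 3-node path graph with 2-dimensional node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3, 2)       # one-hot-ish input features
w = np.eye(2)          # identity weights, for illustration only
out = gcn_layer(adj, x, w)
```

Because the normalized adjacency commutes with node permutations, applying a permutation matrix P to the inputs permutes the outputs the same way — exactly the equivariance property the abstract highlights.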

Geometric Deep Learning: Understanding Graph Neural Networks through the Lens of Mathematics. (2025). International Journal of Latest Technology in Engineering Management & Applied Science, 14(13), 215-218. https://doi.org/10.51583/IJLTEMAS.2025.1413SP043

