Geometric Deep Learning: Understanding Graph Neural Networks through the Lens of Mathematics
Abstract: Geometric Deep Learning (GDL) extends traditional neural network paradigms to non-Euclidean data structures, enabling the effective processing of data that lies on manifolds or graphs. Among GDL techniques, Graph Neural Networks (GNNs) have emerged as powerful tools for modelling relational data by leveraging principles from graph theory and algebraic topology. This paper explores GNNs through the lens of mathematics, focusing on how geometric and topological insights drive the architecture and functionality of these networks. By framing GNNs in terms of graph signal processing and spectral theory, we illuminate how GNNs capture dependencies across nodes and edges, offering a structured approach to learning on graph-structured data. We further examine the theoretical underpinnings that make GNNs particularly suited for applications in social networks, molecular biology, and recommendation systems. In doing so, this study provides a mathematical perspective on the capabilities and limitations of GNNs, underscoring the role of invariance, equivariance, and generalization within graph-based learning models.
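To make the graph-signal-processing view described above concrete, the following is a minimal sketch of a single GCN-style propagation step using the symmetric normalization of Kipf and Welling. The graph, node features, and weight matrix are invented for illustration and are not drawn from the paper; the layer shown is one common instance of the message-passing scheme the abstract discusses, not the authors' specific architecture.

```python
import numpy as np

# Toy undirected graph on 4 nodes (hypothetical adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Node features X: a 2-dimensional signal per node (hypothetical values).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 0.5]])

# Symmetrically normalized adjacency with self-loops:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(A.shape[0])
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# A random weight matrix stands in for learned parameters (assumption).
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))

# One propagation step: each node aggregates features from its
# neighbours (and itself), then applies a shared linear map and ReLU.
H = np.maximum(A_hat @ X @ W, 0.0)
print(H.shape)  # (4, 3): 4 nodes, 3 output channels
```

Because the weight matrix W is shared across all nodes and the aggregation depends only on graph structure, this layer is equivariant to node relabelling, illustrating the invariance and equivariance properties the abstract highlights.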

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.
All articles published in this journal are licensed under CC BY 4.0: authors retain copyright of their work, and the license permits unrestricted use, sharing, and reproduction of the articles, provided proper credit is given to the original authors and the source.