INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue XII, December 2025
HyperNova++: A Novel Adaptive Activation Function for High-Accuracy Neural Learning on Nonlinear Synthetic Decision Manifolds
Sourish Dey, Sunil Kumar Sawant, Arunima Dutta, Abhradeep Hazra
KIIT University, Bhubaneswar, Odisha, India
Received: 27 December 2025; Accepted: 01 January 2026; Published: 10 January 2026
ABSTRACT
Activation functions are at the heart of how deep neural networks perform non-linear transformations. An activation function allows a neural network to approximate highly complex functions, train with gradient-based optimization, and generalize to new data. However, existing activation functions such as ReLU, GELU, and Swish have limitations that restrict their use in practice. Specifically, their inherent structure can saturate gradients during training, cause vanishing gradients in deeply stacked architectures, and leave them inefficient at learning periodic dependencies and poor at modeling highly heterogeneous non-linear interactions. These limitations are of particular importance for scientific, financial, and engineering use cases where data exhibit polynomial, periodic, saturating, and exponential behavior on the same manifold.
This paper introduces HyperNova++, a smooth, adaptive, parameterized activation function that unifies bounded saturation, periodic oscillation, and unbounded growth in a single learnable formula. HyperNova++ is designed to overcome the expressive constraints of existing activations by enabling dynamic, data-driven modulation of curvature, frequency, and growth behavior through three trainable parameters (α, β, γ). These parameters respectively govern the contributions of the hyperbolic tangent (tanh) for bounded saturation, sine (sin) for periodic oscillation, and Softplus (log(1 + e^x)) for smooth monotonic growth. The resulting function ensures non-vanishing gradients, smooth transitions, and controlled Lipschitz continuity, while maintaining computational efficiency comparable to contemporary activations.
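The precise formula is not stated in the abstract; a minimal NumPy sketch, assuming the natural additive combination f(x) = α·tanh(x) + β·sin(x) + γ·softplus(x) implied by the description above, would be:

```python
import numpy as np

def softplus(x):
    # Numerically stable Softplus: log(1 + e^x).
    return np.logaddexp(0.0, x)

def hypernova(x, alpha=1.0, beta=0.5, gamma=1.0):
    # Assumed additive form: bounded saturation (tanh) +
    # periodic oscillation (sin) + smooth monotonic growth (Softplus).
    # alpha, beta, gamma would be trainable parameters in practice.
    return alpha * np.tanh(x) + beta * np.sin(x) + gamma * softplus(x)

def hypernova_grad(x, alpha=1.0, beta=0.5, gamma=1.0):
    # Analytic derivative: alpha*sech^2(x) + beta*cos(x) + gamma*sigmoid(x).
    sech2 = 1.0 / np.cosh(x) ** 2
    sigmoid = 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))
    return alpha * sech2 + beta * np.cos(x) + gamma * sigmoid
```

Note that the Softplus term alone contributes a strictly positive derivative (the sigmoid), which is consistent with the non-vanishing-gradient claim; the exact parameterization used in the paper may differ.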
We conduct a rigorous, large-scale evaluation on a meticulously crafted synthetic dataset with a known ground-truth decision boundary that simulates realistic linear, polynomial, and periodic interactions. This controlled environment enables precise, unbiased comparisons against ReLU, GELU, and Swish under identical architectural, optimization, and hyperparameter settings. HyperNova++ achieves statistically significant superior performance across all baselines, exceeding 99% accuracy (0.9903) compared to 98.34% for ReLU, 98.08% for GELU, and 97.60% for Swish, while also attaining the highest F1-score (0.9906) and ROC-AUC (0.9997). Gradient analyses confirm stable, non-vanishing gradients and accelerated convergence.
We supplement these empirical results with comprehensive theoretical analysis, establishing HyperNova++'s universal approximation guarantee, Lipschitz properties, gradient bounds, and optimization landscape characteristics. Practical implementation guidelines, computational complexity breakdowns, and prospective applications in scientific machine learning, time-series analysis, and multimodal inference are also discussed. Collectively, this work positions HyperNova++ as a potent, versatile activation function for advanced deep learning architectures confronting intricate nonlinear manifolds.
Index Terms—Activation Function, Deep Learning, HyperNova++, Neural Networks, Nonlinear Modeling, Synthetic Dataset, ROC-AUC Curve, Optimization, Adaptive Activation, Mixed Nonlinearities, Universal Approximation, Lipschitz Continuity