INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue II, February 2026
Conversely, invertibility requires that the innovations, $\varepsilon_t$, admit a convergent linear combination of current and
past observations of $X_t$, which requires that all roots of the moving average polynomial,
$\theta(z) = 1 + \theta_1 z + \cdots + \theta_q z^q$, lie outside
the unit circle. An ARMA process is stable if the AR polynomial satisfies $\phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p \neq 0$ for all
$|z| \le 1$; equivalently,
in state-space (companion) form, stability holds if all eigenvalues of the companion matrix lie strictly inside the
unit circle, i.e., formally, if $F$ is the companion matrix,
$\max_i |\lambda_i(F)| < 1$.
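These root and eigenvalue conditions are straightforward to verify numerically. The following sketch (the function name and example coefficients are illustrative, not taken from this study) builds the companion matrix of the AR part and tests the MA roots using NumPy:

```python
import numpy as np

def arma_conditions(phi, theta):
    """Return (stable, invertible) for ARMA coefficients.

    phi   : AR coefficients [phi_1, ..., phi_p] of phi(z) = 1 - phi_1 z - ... - phi_p z^p
    theta : MA coefficients [theta_1, ..., theta_q] of theta(z) = 1 + theta_1 z + ... + theta_q z^q
    """
    # Stability: all eigenvalues of the p x p companion matrix of the AR part
    # must lie strictly inside the unit circle.
    p = len(phi)
    F = np.zeros((p, p))
    F[0, :] = phi               # first row holds the AR coefficients
    F[1:, :-1] = np.eye(p - 1)  # sub-diagonal shifts the state vector
    stable = bool(np.all(np.abs(np.linalg.eigvals(F)) < 1))

    # Invertibility: all roots of theta(z) must lie outside the unit circle.
    # np.roots expects coefficients ordered from the highest power of z down.
    ma_roots = np.roots(list(theta)[::-1] + [1.0])
    invertible = bool(np.all(np.abs(ma_roots) > 1))
    return stable, invertible

print(arma_conditions([0.5, 0.3], [0.4]))   # stationary AR(2), invertible MA(1)
print(arma_conditions([1.2, 0.3], [0.4]))   # explosive AR part
```

Checking the eigenvalues of the companion matrix and the roots of $\theta(z)$ directly are equivalent routes to the same conditions stated above; the companion form is the one used in state-space treatments.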
Generally, patterns in a random set of data can be modelled to draw inferences about the process or population
being observed. The science that deals with the collection and interpretation of these patterns is popularly known
as time series analysis. As conceptualised in the recent literature on stochastic processes (Montgomery et al., 2024), a time
series is not merely a data sequence but a realisation of a generating mechanism in which temporal dependency
dictates the information value of each successive observation. In a discrete-time series, observations are
recorded at specified time intervals, e.g., $t = 0, \pm 1, \pm 2, \ldots$; in a continuous-time series, on the other hand,
observations are recorded continuously over some time interval, e.g., $t \in [0, T]$.
The analysis of random variables, $\{X_t\}$, as outlined in Kitagawa (2020), is captured in the selection of a
suitable probability model for the data. A complete probabilistic time series model for the sequence $\{X_1, X_2, \ldots\}$ would
specify all the joint distributions of the random vectors $(X_1, \ldots, X_n)'$, $n = 1, 2, \ldots$, or equivalently, all the
probabilities $P(X_1 \le x_1, \ldots, X_n \le x_n)$, $-\infty < x_1, \ldots, x_n < \infty$, $n = 1, 2, \ldots$.
Exact Maximum Likelihood (EML) has long been the gold standard for ARMA estimation (Wei, 2006). Recent
studies by Shumway and Stoffer (2025) have highlighted its computational limitations in high-
dimensional settings. Consequently, there has been a resurgence of interest in the Generalised Estimation
Procedure (GEP), as seen in the work of Athanasopoulos et al. (2024). An ARMA(p, q) process is a sequence
of random variables, $\{X_t\}$, capturing the present and future dynamics of the series.
Several existing procedures exhibit distinctive asymptotic properties; recent comparative studies have focused
primarily on large-scale datasets, often overlooking the nuanced transition from small to moderate sample sizes
(Chen & Yao, 2024). The focus of this research is to bridge this gap by examining the comparative asymptotic
convergence properties of GEP, GLS, and EML estimators specifically within the Fibonacci sequential
framework. By identifying the precise thresholds where these estimators achieve numerical reconciliation,
particularly in higher-order ARMA (2, 2) specifications, this study establishes a structural blueprint for estimator
selection in environments characterised by varying data availability.
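The experimental design described above can be sketched in code. In the sketch below, the Fibonacci sample-size grid, the ARMA(2,2) coefficients, and the crude least-squares AR fit are all illustrative assumptions; the least-squares step merely stands in for the GEP, GLS, and EML estimators compared in the study, to show how estimates settle as the Fibonacci sample size grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arma22(n, phi=(0.5, -0.3), theta=(0.4, 0.2), burn=200):
    """Simulate a stationary, invertible ARMA(2,2) path (coefficients are illustrative)."""
    e = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(2, n + burn):
        x[t] = (phi[0] * x[t - 1] + phi[1] * x[t - 2]
                + e[t] + theta[0] * e[t - 1] + theta[1] * e[t - 2])
    return x[burn:]  # drop the burn-in so the retained series is approximately stationary

def fibonacci_sizes(k, a=34, b=55):
    """First k sample sizes from a Fibonacci grid (seed pair chosen for illustration)."""
    sizes = []
    for _ in range(k):
        sizes.append(a)
        a, b = b, a + b
    return sizes

for n in fibonacci_sizes(6):  # 34, 55, 89, 144, 233, 377
    x = simulate_arma22(n)
    # Crude least-squares fit of the AR coefficients on lagged values -- a
    # stand-in estimator used only to display convergence across sample sizes.
    X = np.column_stack([x[1:-1], x[:-2]])
    phi_hat, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
    print(n, np.round(phi_hat, 3))
```

In an actual replication, the least-squares step would be swapped for the GEP, GLS, and exact-likelihood routines under comparison, with the Fibonacci grid supplying the small-to-moderate sample-size transition the study targets.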
The use of finitely parameterized models to characterize the behaviour of time series data assumed to be
generated by a stochastic process has received a wide coverage in both statistical and engineering literature
(Salau, 2000). An important class of such stochastic models is the autoregressive moving average (ARMA)
models. One reason for the great empirical importance of ARMA models is that every regular stationary process
can be approximated with arbitrary accuracy by an ARMA process, and that only a finite number of (real-valued)
parameters is needed to describe (up to second moment) an ARMA process.
ARMA Representation
In modelling applications, such as forecasting and control, it is commonplace to represent such a process using
a stationary and invertible autoregressive moving average (ARMA) model of the form:
$X_t - \phi_1 X_{t-1} - \cdots - \phi_p X_{t-p} = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$, [1.5]
where $\varepsilon_t$ is an unobservable stochastic disturbance, assumed to be white noise or, more generally, a sequence of
stationary martingale differences, so that, almost surely,
$E(\varepsilon_t \mid \mathcal{F}_{t-1}) = 0$, $E(\varepsilon_t^2 \mid \mathcal{F}_{t-1}) = \sigma^2$, [1.6]
and the symbol $E$ is the expectation operator. Here, $\mathcal{F}_{t-1}$ is the $\sigma$-field of events determined by the
$\varepsilon_s$, $s \le t-1$, and $\sigma^2$ is the prediction variance. The condition in [1.6] provides a natural relaxation of the model
assumption that the innovations, $\{\varepsilon_t\}$, are serially independent. For a complete exposition of these
moment assumptions, see, for example, Box et al. (1994) and Tsay (2019). The essential feature of [1.6] in our