INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue X, October 2025
www.ijltemas.in Page 241
Solving Unconstrained Optimization Problem Using Hybrid CG
Method with Exact Line Search
Nurul ‘Aini*, Ain Aqiela Azamuddin
Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA Johor, Segamat, Johor, Malaysia
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1410000033
Received: 01 October 2025; Accepted: 07 October 2025; Published: 07 November 2025
Abstract: The conjugate gradient (CG) method is one of the common approaches for solving unconstrained optimization
problems, notably known for its suitability for large-scale problems. Many recent studies show that this method is also useful for
problems of smaller scale. One way to improve the performance of a CG method is the hybrid approach, where one
CG method is combined with another. In this study, the ARM CG method is combined with the SMR CG method and
tested under exact line search. The resulting hybrid algorithm is globally convergent under exact line search and is shown to
perform well numerically in comparison to other tested CG methods.
Keywords: Conjugate gradient method, exact line search, hybrid conjugate gradient method, unconstrained optimization
I. Introduction
The unconstrained optimization problem is formulated as
\[ \min_{x \in \mathbb{R}^n} f(x), \tag{1} \]
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function whose gradient at the point $x_k$ is denoted by $g_k$. An iterative method of the form
\[ x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, \tag{2} \]
is commonly used, where $\alpha_k > 0$ denotes the step size, $x_k$ is the iteration point and $d_k$ is the search direction. In this study, the exact line search is used to determine the step size, which is given as follows:
\[ f(x_k + \alpha_k d_k) = \min_{\alpha \geq 0} f(x_k + \alpha d_k). \tag{3} \]
The exact line search determines the optimal step size at each iteration, giving the best possible reduction of the objective
function. However, it is often argued to be very slow and may even fail if the initial point is chosen far from the solution point [1]. As
a result, inexact line search methods such as the strong Wolfe and Armijo line searches are usually preferred, as they are more
practical and easier to implement [2,3]. In recent years, the development of faster processors has helped overcome the traditional
slowness of exact line search [4,5]. In addition, several studies suggest that the exact line search has stronger convergence
properties, with fewer iterations in some cases and a simpler analysis structure [6,7]. This makes exact line search still a viable
approach for solving unconstrained optimization problems alongside the various inexact approaches.
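Because the paper applies (3) directly, it may help to see one way the exact line search can be realised numerically. The sketch below minimises $\phi(\alpha) = f(x_k + \alpha d_k)$ by golden-section search; the bracketing interval `alpha_max`, the tolerance, and all names are illustrative assumptions rather than part of the method.

```python
import math

def exact_line_search(f, x, d, alpha_max=10.0, tol=1e-10):
    """Approximate the exact line search (3): minimize
    phi(alpha) = f(x + alpha*d) over [0, alpha_max] by
    golden-section search (assumes phi is unimodal there)."""
    phi = lambda a: f([xi + a * di for xi, di in zip(x, d)])
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1 / golden ratio
    lo, hi = 0.0, alpha_max
    a = hi - invphi * (hi - lo)
    b = lo + invphi * (hi - lo)
    fa, fb = phi(a), phi(b)
    while hi - lo > tol:
        if fa < fb:                 # minimum lies in [lo, b]
            hi, b, fb = b, a, fa
            a = hi - invphi * (hi - lo)
            fa = phi(a)
        else:                       # minimum lies in [a, hi]
            lo, a, fa = a, b, fb
            b = lo + invphi * (hi - lo)
            fb = phi(b)
    return 0.5 * (lo + hi)

# For f(x) = x1^2 + 4*x2^2 at x = (2, 1) along d = -grad f(x) = (-4, -8),
# the exact minimiser is alpha = 5/34:
f = lambda x: x[0] ** 2 + 4.0 * x[1] ** 2
step = exact_line_search(f, [2.0, 1.0], [-4.0, -8.0])
```

For the quadratic above the returned `step` agrees with the closed-form minimiser to high accuracy; in practice any reliable one-dimensional minimiser can play this role.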
The conjugate gradient (CG) algorithm is widely regarded as one of the most effective methods for solving unconstrained
optimization problems, particularly in large-scale settings where memory and computational efficiency are critical. CG
methods are iterative and require relatively low storage, making them well-suited for large optimization tasks. This has made CG
methods a popular choice in fields such as machine learning, data fitting, engineering design, and scientific computing. The
search direction of the CG method is given by
\[ d_k = \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1}, & \text{if } k \geq 1, \end{cases} \tag{4} \]
where $\beta_k$ is known as the CG coefficient. The types of CG methods can be classified based on the design of their CG
coefficient. Notable examples include the classical Fletcher-Reeves (FR) [8] and Polak-Ribière-Polyak
(PRP) [9] methods, which usually serve as the base for the development of current CG methods. The PRP
method is one of the most effective CG techniques, largely because it often solves optimization problems in
fewer iterations than classical methods such as FR. However, Powell [10] later showed that despite its practical
efficiency, the PRP method is not guaranteed to be globally convergent. This finding highlighted the importance of ensuring that
the CG coefficient remains positive, and various modifications of the CG formula have since been proposed to maintain a
positive CG coefficient [11-13]. This study proposes a hybrid CG method with improved performance in solving unconstrained
optimization problems under exact line search.
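As a concrete instance of (4), the sketch below forms the search direction with the classical Fletcher-Reeves coefficient $\beta_k^{FR} = \|g_k\|^2 / \|g_{k-1}\|^2$ [8]; the quadratic test function and the helper names are illustrative assumptions.

```python
import math

def grad(x):
    # Gradient of the illustrative quadratic f(x) = x1^2 + 4*x2^2.
    return [2.0 * x[0], 8.0 * x[1]]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def cg_direction(g, g_prev, d_prev):
    """Search direction (4): steepest descent at k = 0, then
    d_k = -g_k + beta_k * d_{k-1} with beta_k^FR = ||g_k||^2 / ||g_{k-1}||^2."""
    if d_prev is None:
        return [-gi for gi in g]
    beta = norm(g) ** 2 / norm(g_prev) ** 2
    return [-gi + beta * di for gi, di in zip(g, d_prev)]

# The first direction from x0 = (2, 1) is the negative gradient:
x0 = [2.0, 1.0]
d0 = cg_direction(grad(x0), None, None)   # [-4.0, -8.0]
```

Under exact line search $g_{k+1}^T d_k = 0$, so every direction produced this way satisfies $g_k^T d_k = -\|g_k\|^2$ and is therefore a descent direction.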
II. Methodology
The ARM-SMR method proposed by Aini et al. [14] is a hybrid CG method that combines the CG coefficients of the ARM and SMR
methods. This hybrid method incorporates the properties of the ARM method while eliminating its drawback of generating negative
CG coefficients, whose presence could cause the method to fail to converge [10].
When tested with the strong Wolfe line search, the ARM-SMR method showed the most efficient results in comparison to the other
CG methods it was compared with, i.e., the ARM, AMR and FR methods [14]. While most recent studies generally favour
inexact line search over exact line search, there are still cases where exact line search is preferred, such as when a
method with stronger convergence properties is needed [6]. In this section, the ARM-SMR method is presented under exact line search. The
hybrid method is defined as
\[
\beta_k^{ARM\text{-}SMR} =
\begin{cases}
\beta_k^{ARM}, & \text{if } m\|g_k\|^2 \geq g_k^T g_{k-1}, \\
\beta_k^{SMR}, & \text{otherwise},
\end{cases}
\tag{5}
\]
where
\[
\beta_k^{ARM} = \frac{m\|g_k\|^2 - g_k^T g_{k-1}}{m\, g_k^T d_{k-1} + \|d_{k-1}\|^2},
\qquad
m = \frac{\|d_{k-1}\|}{\|g_k\|},
\]
and
\[
\beta_k^{SMR} = \max\left\{0, \frac{\|g_k\|^2 - g_k^T g_{k-1}}{-g_{k-1}^T d_{k-1}}\right\}.
\]
The proposed hybrid
approach combines the ARM method with the SMR method by Mohamed et al. [15], which ensures the CG coefficient remains
positive. In the hybrid method, the SMR method is used as a fallback whenever the ARM method generates a negative CG
coefficient.
The resulting hybrid CG method, referred to as the ARM-SMR method, is implemented according to the following algorithm.
Algorithm 1: Hybrid ARM-SMR Method
Step 1: Given an initial point $x_0 \in \mathbb{R}^n$, set $k = 0$.
Step 2: If $\|g_k\| \leq 10^{-6}$ or $k = 10000$, then stop.
Step 3: Calculate $d_k$ by using (4) and (5).
Step 4: Determine the step size $\alpha_k$ by using (3).
Step 5: Compute the new iterative point $x_{k+1}$ using (2).
Step 6: Set $k := k + 1$ and return to Step 2.
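Algorithm 1 can be sketched as follows. This is a minimal illustration, assuming the coefficient forms stated in (5) (the authoritative definitions are in [14] and [15]); the demo restricts itself to a quadratic $f(x) = \tfrac{1}{2} x^T A x$, for which the exact step size (3) has the closed form $\alpha_k = -g_k^T d_k / (d_k^T A d_k)$.

```python
import math

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def beta_arm_smr(g, g_prev, d_prev):
    """Hybrid coefficient (5): use beta^ARM when it is nonnegative,
    otherwise fall back to beta^SMR (coefficient forms as assumed in (5))."""
    m = norm(d_prev) / norm(g)
    if m * norm(g) ** 2 >= dot(g, g_prev):          # beta^ARM >= 0
        return (m * norm(g) ** 2 - dot(g, g_prev)) / (
            m * dot(g, d_prev) + norm(d_prev) ** 2)
    return max(0.0, (norm(g) ** 2 - dot(g, g_prev)) / -dot(g_prev, d_prev))

def arm_smr(x0, A, tol=1e-6, max_iter=10000):
    """Algorithm 1 on the quadratic f(x) = 0.5 * x^T A x (so g = A x and
    the exact step size is alpha = -g^T d / (d^T A d))."""
    mat = lambda M, v: [dot(row, v) for row in M]
    x, g = list(x0), mat(A, x0)
    g_prev = d_prev = None
    k = 0
    while norm(g) > tol and k < max_iter:                  # Step 2
        if d_prev is None:
            d = [-gi for gi in g]                          # Step 3, k = 0
        else:
            b = beta_arm_smr(g, g_prev, d_prev)
            d = [-gi + b * di for gi, di in zip(g, d_prev)]
        alpha = -dot(g, d) / dot(d, mat(A, d))             # Step 4
        x = [xi + alpha * di for xi, di in zip(x, d)]      # Step 5
        g_prev, d_prev, g = g, d, mat(A, x)
        k += 1                                             # Step 6
    return x, k

x_star, iters = arm_smr([2.0, 1.0], [[2.0, 0.0], [0.0, 8.0]])
```

For general objective functions, Step 4 would call a one-dimensional minimiser in place of the closed-form quadratic step.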
Consider the following assumptions for the objective function:
1. The function $f(x)$ is bounded below on the level set $Q = \{x \in \mathbb{R}^n : f(x) \leq f(x_0)\}$, where $x_0$ is the initial point.
2. In some neighbourhood $W$ of $Q$, the function $f(x)$ is continuously differentiable and its gradient is Lipschitz
continuous; that is, there exists a constant $L > 0$ such that $\|g(x) - g(z)\| \leq L \|x - z\|$ for all $x, z \in W$.
The proposed method is said to converge provided it meets the conditions for sufficient descent and global convergence. These
conditions ensure that a descent direction is produced at each iteration, ultimately leading the algorithm to a solution. The
sufficient descent condition holds when
\[ g_k^T d_k \leq -c \|g_k\|^2 \quad \text{for all } k \geq 0, \text{ where } c > 0. \]
Next, the method must also satisfy the global convergence property, which is based on the following condition:
\[ \liminf_{k \to \infty} \|g_k\| = 0. \]
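One reason the exact line search is attractive here is that it makes the sufficient descent condition immediate for directions of the form (4): minimising (3) gives $g_{k+1}^T d_k = 0$, so taking the inner product of (4) with $g_k$ yields

```latex
g_k^T d_k = -\|g_k\|^2 + \beta_k \, g_k^T d_{k-1} = -\|g_k\|^2 ,
```

that is, the sufficient descent condition holds with $c = 1$ regardless of the choice of $\beta_k$.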
The convergence properties of the ARM-SMR method are analysed based on two cases, namely when $\beta_k^{ARM} \geq 0$ and
when $\beta_k^{ARM} < 0$. This can be further explained as follows.
Case 1: $\beta_k^{ARM} \geq 0$. In this case, the CG method applies $\beta_k^{ARM}$ in the calculation. Refer to the proof in [13].
Case 2: $\beta_k^{ARM} < 0$. In this case, the CG method uses $\beta_k^{SMR}$. Refer to the proof in [14].
III. Numerical Results and Discussion
This section presents the numerical results obtained from evaluating the proposed ARM-SMR method in comparison with several
well-established conjugate gradient (CG) methods under exact line search. For benchmarking purposes, the selected methods
are the FR, ARM and SMR methods. The FR method is included as it is one of the earliest CG methods and served
as a foundation for the development of many modern CG algorithms. The performance of each method is tested on a set of
standard unconstrained optimization problems taken from [17], with problem dimensions ranging from 2 to 1000 variables to test
their applicability to small- and large-scale problems. All methods employ the exact line search for step size calculation. Numerical
experiments are carried out using MATLAB 2021.
The stopping criteria are defined as reaching a gradient norm below a pre-specified tolerance or exceeding 10,000 iterations.
Next, the performance of the CG methods is evaluated based on the number of iterations and the CPU time required to reach
convergence. To provide a comprehensive comparison, performance profiles are constructed following the approach of Dolan and
Moré [16], which offers a robust framework for assessing the relative efficiency of optimization algorithms across a diverse test
set. The complete list of test functions used is provided in Table 1.
Table 1: List of test functions

No. | Test Problems        | No. of Variables
 1  | Three Hump           | 2
 2  | Six Hump             | 2
 3  | Zettl                | 2
 4  | Trecanni             | 2
 5  | Colville             | 4
 6  | Dixon and Price      | 2, 4
 7  | Hager                | 2, 10
 8  | Shalow               | 2, 100, 1000
 9  | Extended Himmelblau  | 2, 100, 1000
10  | Extended Strait      | 2, 100, 1000
11  | Qing                 | 2, 100, 1000
12  | FLETCHR              | 2, 100, 1000
Figures 1 and 2 illustrate the performance of the four tested CG methods, measured in terms of iteration count and CPU time,
respectively. The left side of the figure shows the percentage of test problems for which a solver achieves the lowest iteration
count or CPU time. The right side of the figure reflects the overall success rate of each method. Therefore, the solver positioned
furthest to the top-right of the plot can be considered the most effective overall, demonstrating both high efficiency and consistent
performance across the test set.
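The profiles in Figures 1 and 2 follow the Dolan and Moré construction [16]: for solver $s$ on problem $p$ with cost $t_{p,s}$ (iterations or CPU time), the ratio $r_{p,s} = t_{p,s} / \min_s t_{p,s}$ is formed, and $\rho_s(\tau)$ is the fraction of problems with $r_{p,s} \leq \tau$. A minimal sketch, with an illustrative cost matrix rather than the paper's data:

```python
import math

def performance_profile(costs, taus):
    """Dolan-More performance profile. costs[s][p] is solver s's cost on
    problem p (math.inf marks a failure). Returns, for each solver, the
    fraction of problems whose performance ratio is <= each tau."""
    n_solvers, n_probs = len(costs), len(costs[0])
    best = [min(costs[s][p] for s in range(n_solvers)) for p in range(n_probs)]
    ratios = [[costs[s][p] / best[p] for p in range(n_probs)]
              for s in range(n_solvers)]
    return [[sum(r <= tau for r in ratios[s]) / n_probs for tau in taus]
            for s in range(n_solvers)]

# Two solvers on three problems; solver B fails on problem 3:
costs = [[10.0, 20.0, 30.0],
         [20.0, 20.0, math.inf]]
rho = performance_profile(costs, [1.0, 2.0, 4.0])   # rho[0] == [1.0, 1.0, 1.0]
```

The curve height at $\tau = 1$ gives the share of problems on which a solver is fastest, while the limiting height as $\tau$ grows gives its overall success rate, exactly the left and right regions discussed above.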
Figure 1: Performance profile based on number of iterations
Figure 2: Performance profile based on CPU time
As shown in Figure 1, the proposed ARM-SMR method outperforms the other tested CG methods in terms of iteration count,
consistently appearing at the top of the performance profile in both the left and right regions, an indicator of both efficiency and robustness.
The same result is shown in Figure 2, although the performance gap in CPU time between the best method (ARM-SMR) and the
second-best (ARM) is smaller. Compared to ARM-SMR, the original ARM method demonstrates slightly lower performance,
with its curve positioned beneath that of ARM-SMR in both figures. This suggests that, for most test problems, ARM requires
more iterations and more CPU time to converge. The SMR method is the third best in overall efficiency, with its curve located below
those of the ARM and ARM-SMR methods. On the other hand, the FR method exhibits the lowest performance among all methods tested,
with its performance profile appearing at the bottom in both figures. It also has the lowest success rate, solving 91.3% of the test
problems, whereas the other methods, including the proposed ARM-SMR, successfully solved 100% of the test set.
Given its consistent placement at the top of the performance profiles and its perfect success rate, the ARM-SMR method
demonstrates the most reliable and efficient overall performance, confirming its effectiveness as a hybrid CG approach for
unconstrained optimization.
IV. Conclusion
This paper presents a modified CG method using a hybrid approach that combines the strengths of the ARM and SMR methods
under exact line search. The hybrid, referred to as the ARM-SMR method, integrates the SMR strategy to address a known
limitation of the ARM method, namely the occurrence of negative CG coefficients that can hinder solver performance. By
incorporating SMR when such values arise, the proposed algorithm ensures greater numerical stability and robustness. To evaluate
its effectiveness, the ARM-SMR method was tested against the original ARM, SMR, and FR methods using a standard set of
unconstrained optimization problems. The numerical results demonstrate that the ARM-SMR method consistently achieves
superior performance in terms of iteration count, CPU time and problem-solving success rate. This result is consistent with the
findings of [14], indicating that the ARM-SMR method is suitable for both exact and inexact line searches.
Acknowledgements
This research was not funded by a grant.
References
1. W. Sun & Y. X. Yuan (2006), Optimization Theory and Methods: Nonlinear Programming. New York: Springer Science
and Business Media.
2. J. Jian, P. Liu, X. Jiang, & B. He (2022). Two improved nonlinear conjugate gradient methods with the strong Wolfe
line search. Bulletin of the Iranian Mathematical Society, 48(5), 2297-2319.
3. Q. Jin, R. Jiang, & A. Mokhtari (2024). Non-asymptotic global convergence analysis of BFGS with the Armijo-
Wolfe line search. Advances in Neural Information Processing Systems, 37, 16810-16851.
4. M. Rivaie, M. Mamat, L. W. June & I. Mohd (2012), A new class of nonlinear conjugate gradient coefficient with global
convergence properties, Applied Mathematics and Computation, 218, pp. 11323-11332.
5. M. Rivaie, M. Mamat & A. Abashar (2015), A new class of nonlinear conjugate gradient coefficients with exact and
inexact line searches, Applied Mathematics and Computation, 268, pp. 1152-1163.
6. Q. Jin, R. Jiang & A. Mokhtari (2025). Non-asymptotic global convergence rates of BFGS with exact line search. Math.
Program.
7. N. H. Fadhilah, M. Rivaie, F. Ishak, & N. Idalisa (2020). New Three-Term Conjugate Gradient Method with Exact
Line Search. MATEMATIKA, 36(3), 197-207.
8. R. Fletcher & C. Reeves (1964), The Computer Journal 7, 149-154.
9. E. Polak & G. Ribiere (1969), Rev. Francaise Informat Recherche Operationelle 3, 35-43.
10. M. J. D. Powell (1984), Nonconvex Minimizations Calculations and the Conjugate Gradient Methods. Lecture Notes in
Mathematics, 1066, Springer-Verlag, Berlin.
11. T. Diphofu, Kaelo & A. R. Tufa (2023). A modified nonlinear conjugate gradient algorithm for unconstrained
optimization and portfolio selection problems. RAIRO-Operations Research, 57(2), 817-835.
12. M. Awwal, I. M. Sulaiman, M. Malik, M. Mamat, P. Kumam & K. Sitthithakerngkiet (2021), A Spectral RMIL+
Conjugate Gradient Method for Unconstrained Optimization With Applications in Portfolio Selection and Motion
Control, in IEEE Access, vol. 9, pp. 75398-75414.
13. M. Malik, I. M. Sulaiman, A. B. Abubakar, G. Ardaneswari, Sukono (2023). A new family of hybrid three-term
conjugate gradient method for unconstrained optimization with application to image restoration and portfolio selection,
AIMS Mathematics, 8(1)
14. N. ‘Aini, N. Hajar, M. Rivaie, S. N. Ahmad, & A. A. Azamuddin (2024). A hybrid of conjugate gradient method in
modelling number of road accidents in Malaysia. In AIP Conference Proceedings (Vol. 3189, No. 1, p. 060002). AIP
Publishing LLC.
15. N. S. Mohamed, M. Mamat, M. Rivaie, & S. M. Shaharudin (2020). A new hybrid coefficient of conjugate gradient
method. Indonesian Journal of Electrical Engineering and Computer Science, Vol. 18, No. 3.
16. E. D. Dolan & J. J. Moré (2002). Mathematical Programming 91, 201-213.
17. M. Jamil & X. S. Yang (2013), International Journal of Mathematical Modelling and Numerical Optimisation 4, 150-194.