A faster version of MATLAB’s fsolve using the NAG Toolbox for MATLAB

September 21st, 2010 | Categories: math software, matlab, NAG Library, programming | Tags:

Part of my job at the University of Manchester is to help researchers improve their code in systems such as Mathematica, MATLAB, NAG and so on.  One of the most common questions I get asked is ‘Can you make this go any faster please?’ and it always makes my day if I manage something significant, such as a speed-up by a factor of 10 or more.  The users, however, are often happy with a lot less and I was once bought a bottle of (rather good) wine for a mere 25% speed-up.  (Note: gifts of wine aren’t mandatory.)

I employ numerous tricks to get these speed-ups, including vectorisation, mex files, modifying the algorithm to something more efficient, picking the brains of colleagues and so on.  Over the last year or so, I have managed to help several researchers get significant speed-ups in their MATLAB programs by doing one simple thing: swapping the fsolve function for an equivalent from the NAG Toolbox for MATLAB.

The fsolve function is part of MATLAB’s optimisation toolbox and, according to the documentation, it does the following:

“fsolve solves a system of nonlinear equations of the form F(X) = 0, where F and X may be vectors or matrices.”

The NAG Toolbox for MATLAB contains a total of 6 routines that can perform calculations similar to those of fsolve.  These 6 routines are split into 2 sets of 3, as follows.

For when you have function values only:
c05nb – Solution of system of nonlinear equations using function values only (easy-to-use)
c05nc – Solution of system of nonlinear equations using function values only (comprehensive)
c05nd – Solution of system of nonlinear equations using function values only (reverse communication)

For when you have both function values and first derivatives:
c05pb – Solution of system of nonlinear equations using first derivatives (easy-to-use)
c05pc – Solution of system of nonlinear equations using first derivatives (comprehensive)
c05pd – Solution of system of nonlinear equations using first derivatives (reverse communication)

So, the NAG routine you choose depends on whether or not you can supply first derivatives and exactly which options you’d like to exercise.  For many problems it will come down to a choice between the two ‘easy-to-use’ routines, c05nb and c05pb (at least, those are the ones I’ve used most of the time).

Let’s look at a simple example.  You’d like to find a root of the following system of non-linear equations:

F(1)=exp(-x(1)) + sinh(2*x(2)) + tanh(2*x(3)) - 5.01;
F(2)=exp(2*x(1)) + sinh(-x(2) ) + tanh(2*x(3) ) - 5.85;
F(3)=exp(2*x(1)) + sinh(2*x(2) ) + tanh(-x(3) ) - 8.88;

i.e. you want to find a vector X containing three values for which F(X)=0.

To solve this using MATLAB and the optimisation toolbox you could proceed as follows.  First, create a .m file called fsolve_obj_MATLAB.m (henceforth referred to as the objective function) that contains the following:

function F=fsolve_obj_MATLAB(x)
F(1)=exp(-x(1)) + sinh(2*x(2)) + tanh(2*x(3)) - 5.01;
F(2)=exp(2*x(1)) + sinh(-x(2) ) + tanh(2*x(3) ) - 5.85;
F(3)=exp(2*x(1)) + sinh(2*x(2) ) + tanh(-x(3) ) - 8.88;

Now type the following at the MATLAB command prompt to actually solve the problem:

options=optimset('Display','off'); %Prevent fsolve from displaying too much information
startX=[0 0 0]; %Our starting guess for the solution vector
X=fsolve(@fsolve_obj_MATLAB,startX,options)

MATLAB finds the following solution.  (Note that it has found only one of the solutions, not all of them, which is typical of the type of algorithm used by fsolve.)

X =
0.9001     1.0002    1.0945

Since we are not supplying the derivatives of our objective function and we are not using any special options, it turns out to be very easy to switch from using the optimisation toolbox to the NAG toolbox for this particular problem by making use of the routine c05nb.

First, we need to change the header of our objective function’s .m file from

function F=fsolve_obj_MATLAB(x)

to

function [F,iflag]=fsolve_obj_NAG(n,x,iflag)

so our new .m file, fsolve_obj_NAG.m, will contain the following

function [F,iflag]=fsolve_obj_NAG(n,x,iflag)
F(1)=exp(-x(1)) + sinh(2*x(2)) + tanh(2*x(3)) - 5.01;
F(2)=exp(2*x(1)) + sinh(-x(2) ) + tanh(2*x(3) ) - 5.85;
F(3)=exp(2*x(1)) + sinh(2*x(2) ) + tanh(-x(3) ) - 8.88;

Note that the ONLY change we made to the objective function was the very first line.  The NAG routine c05nb expects to see some extra arguments in the objective function (iflag and n in this example) and it expects to see them in a very particular order but we are not obliged to use them.  Using this modified objective function we can proceed as follows.

startX=[0 0 0]; %Our starting guess
[X, fvec, ifail] = c05nb('fsolve_obj_NAG', startX)

NAG gives the same result as the optimisation toolbox:

X =
0.9001    1.0002    1.0945

Let’s look at timings.  I performed the calculations above on an 800 MHz laptop running Ubuntu Linux 10.04, MATLAB 2009 and Mark 22 of the NAG toolbox and got the following timings (averaged over 10 runs):

Optimisation toolbox: 0.0414 seconds
NAG toolbox: 0.0021 seconds

So, for this particular problem NAG was faster than MATLAB by a factor of 19.7 times on my machine.
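For anyone who wants to reproduce this kind of comparison, a minimal timing harness might look like the following.  (A sketch only: it assumes both objective-function files from this post are on the MATLAB path, and the numbers will of course differ from machine to machine.)

```matlab
% Rough timing harness: average each solver's wall-clock time over 10 runs.
% Assumes fsolve_obj_MATLAB.m and fsolve_obj_NAG.m are on the path.
startX = [0 0 0];
options = optimset('Display','off');
runs = 10;

tic
for i = 1:runs
    X = fsolve(@fsolve_obj_MATLAB, startX, options);
end
fprintf('fsolve: %.4f seconds per run\n', toc/runs);

tic
for i = 1:runs
    [X, fvec, ifail] = c05nb('fsolve_obj_NAG', startX);
end
fprintf('c05nb:  %.4f seconds per run\n', toc/runs);
```

A more careful comparison would also discard the first run of each solver to avoid warm-up effects.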

That’s all well and good, but this is a simple problem. Does this scale to more realistic problems I hear you ask? Well, the answer is ‘yes’. I’ve used this technique several times now on production code and it almost always results in some kind of speed-up. Not always 19.7 times faster, I’ll grant you, but usually enough to make the modifications worthwhile.

I can’t actually post any of the more complicated examples here because the code in question belongs to other people but I was recently sent a piece of code from a researcher that made heavy use of fsolve and a typical run took 18.19 seconds (that’s for everything, not just the fsolve bit).  Simply by swapping a couple of lines of code to make it use the NAG toolbox rather than the optimisation toolbox I reduced the runtime to 1.08 seconds – a speed-up of almost 17 times.

Sadly, this technique doesn’t always result in a speed-up and I’m working on figuring out exactly when the benefits disappear. I guess that the main reason for NAG’s good performance is that it uses highly optimised, compiled Fortran code compared to MATLAB’s interpreted .m code. One case where NAG didn’t help was for a massively complicated objective function and the majority of the run-time of the code was spent evaluating this function. In this situation, NAG and MATLAB came out roughly neck and neck.

In the meantime, if you have some code that you’d like me to try it on then drop me a line.


  1. September 22nd, 2010 at 10:54

    that’s an excellent example, Mike. Do you mind me re-posting this article on my blog to let more readers know it? for sure I will leave the original link and mention your blog as the resource, thanks.

  2. Nate T
    April 18th, 2011 at 21:01


    I am wondering if there’s a way to pass through additional arguments. I tried using the function handle, but I get an error message. For example,

    startX=[0 0 0];
    alph = [5.01 5.85 8.88];
    f = @(n,x,iflag) header(n,x,iflag,alph);
    X = c05nb('f',startX);

    function [F,iflag]=header(n,x,iflag,alph)
    F = fsolve_obj(x,alph);

    function F = fsolve_obj(x,alph)
    F(1)=exp(-x(1)) + sinh(2*x(2)) + tanh(2*x(3)) - alph(1);
    F(2)=exp(2*x(1)) + sinh(-x(2) ) + tanh(2*x(3) ) - alph(2);
    F(3)=exp(2*x(1)) + sinh(2*x(2) ) + tanh(-x(3) ) - alph(3);


    ??? Undefined function or method 'g' for input arguments of type

    ??? Error using ==> c05nb
    Execution of NAG routine halted


  3. April 21st, 2011 at 08:59

    Hi Nate

    The current version of the NAG toolbox (Mark 22) doesn’t allow you to pass function handles to ANY of its routines. This is a royal pain and I mention it to NAG regularly. Hopefully they will add this functionality in the near future.

    One way of doing it is via global variables. I hate using global variables in any situation but it’s a hack that works. More details at
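    For what it’s worth, here is a sketch of that hack applied to the example from the post, with the constants moved into a global variable (the name alph is just for illustration):

```matlab
% fsolve_obj_global.m - reads its extra parameters from a global variable.
function [F, iflag] = fsolve_obj_global(n, x, iflag)
global alph   % must also be declared global in the calling workspace
F(1) = exp(-x(1))  + sinh(2*x(2)) + tanh(2*x(3)) - alph(1);
F(2) = exp(2*x(1)) + sinh(-x(2))  + tanh(2*x(3)) - alph(2);
F(3) = exp(2*x(1)) + sinh(2*x(2)) + tanh(-x(3))  - alph(3);
```

    and then at the command prompt:

```matlab
global alph
alph = [5.01 5.85 8.88];
startX = [0 0 0];
[X, fvec, ifail] = c05nb('fsolve_obj_global', startX)
```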



  4. Nate T
    April 21st, 2011 at 12:50

    Hah, pity that we can’t use global variables in MATLAB’s parallel computing toolbox.

    Thanks for the info. And thanks for nagging NAG.

  5. April 21st, 2011 at 14:27

    Nate, I had a thought….

    Does your institution have access to any other NAG products? The C library would be best but I guess we could also try something in Fortran.

    I’m thinking that maybe we could Mex something up in such a way that it would be suitable for the parallel toolbox.

    Email me if you’d like to take this further. My details are in the ‘contact me’ section at the top of this page.


  6. robert
    July 6th, 2011 at 12:25


    I am using a code which is solved by fsolve in Matlab with a jacobian.

    options=optimset('Display','iter','JacobPattern',Jsparse,'MaxIter',500);

    Using the Jacobian, it is possible to rapidly reduce the number of function calls made by fsolve. Is there a way to use the Jacobian with the NAG toolbox too?

    Thanks for your comments,

  7. July 6th, 2011 at 13:46

    Hi Robert

    Yes you can use the Jacobian in NAG too. Use one of c05pb, c05pc or c05pd. These are described in the post under ‘For when you have both function values and first derivatives’. The NAG documentation has more details.
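    For the system in the post, a c05pb version might look something like this. (A sketch only: the callback argument order shown here follows the pattern of c05nb with the Jacobian added, which matches my reading of the Mark 22 documentation, but do check the c05pb docs for your Mark before relying on it.)

```matlab
% fsolve_obj_jac.m - the objective from the post plus its Jacobian,
% in the form I believe c05pb expects (check the docs for your Mark).
function [F, fjac, iflag] = fsolve_obj_jac(n, x, fjac, iflag)
F(1) = exp(-x(1))  + sinh(2*x(2)) + tanh(2*x(3)) - 5.01;
F(2) = exp(2*x(1)) + sinh(-x(2))  + tanh(2*x(3)) - 5.85;
F(3) = exp(2*x(1)) + sinh(2*x(2)) + tanh(-x(3))  - 8.88;

% fjac(i,j) = dF(i)/dx(j)
fjac(1,1) = -exp(-x(1));   fjac(1,2) = 2*cosh(2*x(2)); fjac(1,3) = 2*sech(2*x(3))^2;
fjac(2,1) = 2*exp(2*x(1)); fjac(2,2) = -cosh(x(2));    fjac(2,3) = 2*sech(2*x(3))^2;
fjac(3,1) = 2*exp(2*x(1)); fjac(3,2) = 2*cosh(2*x(2)); fjac(3,3) = -sech(x(3))^2;
```

    which would then be called with something like [X, fvec, fjac, ifail] = c05pb('fsolve_obj_jac', startX).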

    Hope this helps.

  8. Grace
    October 14th, 2011 at 17:54

    Thanks Mike! It is indeed much faster than fsolve, at least 8 times faster for my equation system.

  9. Drazick
    August 5th, 2012 at 13:24

    What about optimizing sums of squares?

    Something like ‘lsqnonlin’ in MATLAB.

  10. August 7th, 2012 at 13:52

    Hi Drazick

    The NAG toolbox has a function called e04fy which does the same job as lsqnonlin. The tests I have done suggest that it is faster.


  11. Arun
    August 14th, 2012 at 20:10

    syms T E L1 L2 I1 w m

    A= [1 0 1 0;
    0 T 0 T;
    -sin(T*L1) cos(T*L1) Sinh(T*L1) cosh(T*L1);
    -E*I1*(T^3)*cos(T*L1)+m*L2*(w^2)*sin(T*L1) E*I1*(T^3)*sinh(T*L1)+m*L2*(w^2)*cosh(T*L1) E*I1*(T^3)*cosh(T*L1)+m*L2*(w^2)*sinh(T*L1)]

    % A_val=subs(A,[T E L1 L2 I1 w m ],[2 4])
    Hi friends, when I input this into MATLAB it shows “??? Undefined function or method ‘Sinh’ for input arguments of type ‘sym’”. How can I solve this? Can anyone explain, please?
    Thanks, Arun

  12. Julian
    June 20th, 2013 at 22:11

    Hi Michael,

    I am trying to solve a non-linear equation which should have real roots but I keep getting imaginary answers using fsolve. Is there a way to limit fsolve only to real roots?


  13. Nich
    September 5th, 2013 at 00:17

    I am trying to implement fsolve (or NAG’s equivalent) in the forward kinematics of a Stewart Platform robot as described in this link:

    My system of equations to solve is viewable here:

    The constants A-F and K change with each new robot position (I have them hard coded in this example). G will have to be solved for many thousands of robot positions, so speed is of paramount importance. The ideal state would be to solve each position in real time (up to 2000 Hz), but I don’t know if that would be possible. Post processing is an option.

    Unfortunately, I have neither the optimization toolbox nor the NAG toolbox. I will be submitting a proposal to purchase one of them, but I would like to know which is faster before I do so. I’d like your input on which way to go.