Archive for the ‘matlab’ Category

September 23rd, 2008

Introduction

MATLAB is an incredibly powerful piece of software that is used by students and researchers in a wide variety of disciplines. If you have some maths to do then MATLAB can probably help you out – curve fitting, statistics, plotting, signal processing, optimization, linear algebra, symbolic calculus – the list just goes on and on. There is one big problem though – much of this stuff doesn’t actually come with the basic install of MATLAB.

Let’s say that you are a new academic researcher and you have just purchased a copy of MATLAB at a cost of several hundred pounds. At first it seems that it can do everything that you require and so you are happy. One day though, you find yourself needing to find the minimum of a constrained nonlinear multivariable function. After a bit of searching you realise that the function you need is called fmincon but, when you try to use it, you discover that it is part of an add-on called the Optimization Toolbox which costs a couple of hundred pounds. You duly pay for the toolbox and are happy once again.

Some time later in your career you find yourself in need of a good quasi-random number generator to help you implement a Monte Carlo integration scheme. MATLAB can help you out in the form of the sobolset and haltonset functions, but you have to pay yet more money to get access to the Statistics Toolbox.

MATLAB has a toolbox for almost every eventuality:

  • Need to symbolically integrate a function? – buy the Symbolic Toolbox
  • Need to do some advanced curve fitting? – buy the Curve Fitting Toolbox
  • Working with splines? – buy the Spline Toolbox
  • Financial mathematics? – buy the Financial Toolbox

You get the idea. I guess the Mathworks do this for a good reason – by splitting off these more specialised functions into separate products they can keep the price of the basic version of MATLAB down at a reasonable level. This is a good thing for many users. After all, who wants to pay for a lot of functionality they will never use?

There are problems with this model though. If you are unlucky enough to have a problem that requires several of these specialised functions then MATLAB can be a very expensive solution. If you happen to be the head of a research group with, say, 20 members, all of whom require access to several of these specialised toolboxes, then MATLAB can be an extremely expensive option indeed.

Finally, if you are someone like me and you need to maintain network licenses for over 50 different MATLAB toolboxes with varying numbers of seats for an entire large university, co-ordinate who pays for what, and give recommendations on which toolboxes academics might choose for teaching and research, then the whole MATLAB toolbox system can become, well… tiring. The license-related issues alone are enough to give you sleepless nights – trust me on this!

NAG – A new kind of toolbox for MATLAB

Wouldn’t it be nice if there was a MATLAB toolbox that did the work of several other toolboxes which also had an amazingly straightforward licensing system? Well, the Numerical Algorithms Group (NAG) have come up with a contender for such a toolbox – The NAG toolbox for MATLAB.

NAG have got history – they have been around for a long time. They published the first version of their numerical libraries (which they refer to as Mark 1) back in 1971, so they are even older than I am. Mark 1 of the NAG libraries contained 98 different functions and could be had in two flavours – ANSI Fortran or Algol 60! Fast forward to 2008 and they are up to Mark 21 of their Fortran library, which contains around 1,500 different functions – a lot by anyone's measure. Subjects covered by these functions include optimization, statistics, splines, curve fitting, numerical quadrature, differential equations and a whole lot more. The recent release of the NAG toolbox for MATLAB allows you to call most of these functions directly from MATLAB – not a whiff of Fortran anywhere.

Did you notice anything about the subjects covered by the NAG library in the paragraph above? That’s right – they look just like the names of various MATLAB toolboxes. A very quick survey of the NAG functions on offer suggests to me that you might be able to substitute the NAG toolbox for MATLAB for the following standard MATLAB toolboxes:

  • Statistics toolbox
  • Optimization toolbox
  • Curve fitting toolbox
  • Spline toolbox
  • Partial Differential Equation toolbox

That’s at least 5 toolboxes in one – which is great news and it gets better. If your institution has a site license for the NAG Fortran library then, at the time of writing at least, it will also have a site license for the NAG toolbox for MATLAB. That’s a genuine site license as well – one that you could use to install the toolbox on every machine owned by your institution. No mucking around with concurrent network licenses here – which is one major point in its favour from my point of view.

As you might expect, however, there are complications. The NAG toolbox may well offer similar functionality to the MATLAB toolboxes listed above, but you cannot use it as a simple drop-in replacement for them. For example, if you want to create quasi-random numbers using the Sobol sequence then the function you would use from the MATLAB Statistics Toolbox is sobolset, whereas if you use the NAG toolbox then the function you need is called G05YAF. The exact properties of these two functions may differ as well, since they have probably been implemented differently and the two companies have rather different ways of doing things.
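To give a flavour of what these quasi-random sequences are actually for, here is a minimal sketch of the radical-inverse idea behind haltonset – written in plain Python purely for illustration (the function names are mine, not NAG's or the Mathworks'), since the underlying mathematics is the same whichever toolbox evaluates it:

```python
def van_der_corput(n, base=2):
    """Radical-inverse sequence: the 1-D building block of a Halton set."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton(n_points, bases=(2, 3)):
    """First n_points of a 2-D Halton sequence (quasi-random, not pseudo-random)."""
    return [tuple(van_der_corput(i, b) for b in bases)
            for i in range(1, n_points + 1)]

pts = halton(1000)
# Quasi Monte Carlo estimate of the area of the quarter disc x^2 + y^2 <= 1
# inside the unit square (exact value: pi/4 = 0.7853981...)
estimate = sum(1 for x, y in pts if x * x + y * y <= 1.0) / len(pts)
```

Because the points fill the square evenly rather than randomly, the estimate settles down faster than a plain pseudo-random Monte Carlo run of the same size – which is exactly why one pays for sobolset, haltonset or G05YAF in the first place.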

Function Naming

While on the subject of function naming – I have to say that I simply don’t like the naming convention used by the NAG toolbox for MATLAB (which is identical to the one used by their Fortran library). Let’s take the function mentioned above as an example – G05YAF. It’s hardly memorable, is it? The MATLAB equivalent – sobolset – is much more reasonable in my opinion. Of course your mileage may vary, but I was brought up on systems such as Mathematica, MATLAB and, more recently, Python, and in all of these the naming conventions are almost self-explanatory. Let’s take the function for calculating the eigenvalues of a real general matrix as another example:

  • Mathematica: Eigenvalues[]
  • Matlab: eig()
  • NAG toolbox: F02EBF()

Of course the NAG naming convention isn’t completely random, even though it may look like it at first sight of the function names above. In fact it is very well organised, and you can quickly get used to it once you understand the system, but in my opinion it definitely looks out of place when put up against its competitors.

Functionality Differences

Although I keep comparing the NAG product with those of the Mathworks, it’s probably worth mentioning that NAG have not set out to build a product that directly competes with the MATLAB toolboxes. They are very different beasts, so if you are looking for a one-to-one correspondence between the NAG toolbox and MATLAB then you are going to be disappointed. NAG were around long before the Mathworks and they have their own particular way of doing things. I’m not saying that one approach is better than the other, but they are certainly different.

As a particular example of what I mean consider finding the eigenvalues of a matrix again. The approach taken by the Mathworks is to provide you with one function in MATLAB – eig(). The algorithm that MATLAB uses to actually find the eigenvalues depends upon the type of matrix you give it. The system takes a look at your matrix at run-time and does its best to come up with the optimum algorithm to find the solution.

NAG take a rather different approach. They provide lots of individual functions (click here to see them) that can find eigenvalues for various matrix types and they expect the user to choose the most appropriate one.
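The split between the two philosophies can be sketched in miniature (in Python for brevity; every function name here is invented for illustration and belongs to neither NAG nor the Mathworks): a family of specialised routines that the caller picks from explicitly, NAG-style, versus a single front end that inspects the matrix and dispatches, eig()-style.

```python
import cmath
import math

def eig2_symmetric(a, b, d):
    """Specialised routine: eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]].
    Always real -- the kind of routine a NAG user would choose explicitly."""
    mean, disc = (a + d) / 2.0, math.hypot((a - d) / 2.0, b)
    return mean - disc, mean + disc

def eig2_general(a, b, c, d):
    """Specialised routine: eigenvalues of a general 2x2 matrix [[a, b], [c, d]],
    which may be complex."""
    mean, disc = (a + d) / 2.0, cmath.sqrt(((a - d) / 2.0) ** 2 + b * c)
    return mean - disc, mean + disc

def eig2(a, b, c, d):
    """eig()-style front end: look at the matrix at run-time and dispatch to
    the most appropriate specialised routine on the user's behalf."""
    if b == c:  # symmetric: take the cheaper, always-real path
        return eig2_symmetric(a, b, d)
    return eig2_general(a, b, c, d)
```

The MATLAB user only ever calls the front end; the NAG user calls one of the specialised routines directly and takes responsibility for the choice.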

These differences can make a direct comparison between NAG and MATLAB rather difficult – if it were easy I would have done it by now. In my opinion you should not be thinking “What is the NAG equivalent to the MATLAB function xyz?”, as there might not be a direct equivalent. Instead you should be thinking “I need a function that does abc – which routine in the NAG toolbox might help me?”

If you find that you need to do something that is not contained within the NAG toolbox then you essentially have three options.

  1. Try to find something in the Mathworks’ set of toolboxes that does what you need and use that instead.
  2. Code the algorithm yourself – or find someone else’s solution on the web.
  3. Email NAG and tell them what you want to do.

I have taken option 3 on behalf of myself and several users at my university and am very pleased to say that NAG have always responded. In some cases they told us exactly which function within the NAG toolbox we needed to use, and in other cases they have actually implemented a suitable algorithm for us.

There will be several functions in the next release of the NAG library (and hence the NAG toolbox for MATLAB) simply because we asked for them. That’s what I call customer service! Of course I can’t promise that they will do this in all cases but I have personally found them to be very approachable.

Quality

I might not like NAG’s naming convention but I seriously like the quality of their routines. If you came to me with two solutions to a problem – one which came from the NAG libraries and one which came from some other source – then I would almost always trust the NAG result. Writing numerical code is pretty much the main focus of their business and they have been doing it for over 30 years, so they know their onions (as my Australian friend Barrie would say). Take a quick look at the scientific literature on Google Scholar and you will find thousands of references to the NAG libraries – if thousands of academics feel that they can trust them then so can you.

Another high-quality part of the NAG toolbox for MATLAB is the documentation. Every single function has been meticulously documented and contains details of the algorithms involved, references to the original literature and fully working example programs. It integrates well with MATLAB’s standard documentation system and so appears to the user as if it were any other MATLAB toolbox. The only thing that is missing is a set of demos that can be run directly from the help system. Every standard Mathworks toolbox has a nice set of illustrative demo applications that you can start running with just a few mouse clicks, but the NAG toolbox has none. This is a shame but isn’t exactly a show-stopper, and I am reliably informed that a whole set of demos will be included in the next version.

Conclusions

Pros:

  • Truly massive set of robust, accurate numerical routines that could potentially negate the need for a whole set of Mathworks toolboxes (write me for more details)
  • Superb documentation and first class customer support.
  • Easy license administration for academic institutions.

Cons:

  • Esoteric naming convention.
  • I couldn’t find a price for an individual who wants to buy the toolbox. NAG’s license model seems to be geared more towards site licenses.

The NAG toolbox for MATLAB is a great piece of software that deserves to be in the toolkit of everyone who is in the business of writing numerical code.

Full disclosure and the usual disclaimers

  • I work for the University of Manchester in the UK but these are my opinions alone and do not necessarily reflect the policy of the University.
  • NAG once bought me lunch but so have the Mathworks so it all evens out nicely.
  • I have never been paid by either company to do anything – I’m just a customer.
  • Comments are welcomed. Even if you disagree with me.


June 30th, 2008

If you are a user of MATLAB’s Symbolic Toolbox then you are a user of the Maple 10 kernel, since this is what MATLAB uses ‘under the hood’ to perform symbolic calculations. As of September 28th 2008 the Mathworks will be switching the kernel of the Symbolic Toolbox from Maple to MuPAD. So should you care?

The answer is almost certainly yes. MuPAD is completely different from Maple, with a different set of abilities, behaviours and, inevitably, bugs. For example, things that didn’t work in Maple-based versions of the Symbolic Toolbox will start to work in the MuPAD version. On the flip side, some things may stop working where there was once no problem.

I have been through this before, when Mathcad switched from the Maple kernel to MuPAD, and there were a few issues. I support a much larger number of MATLAB Symbolic Toolbox users, so I am fearing the worst – but then I always do when something major changes like this.

An example of the kind of issue I came across when dealing with the Mathcad change: equation solvers changing their behaviour because they use different algorithms internally. This sort of thing cropped up in problems that had multiple solutions, such as when you try to find the roots of certain equations. Lecturers’ notes suddenly started disagreeing with the output because, for a given starting value, Mathcad converged to a different solution. No big deal in a class situation (in fact it might be a good learning experience), but not good if you have legacy code that depends upon that result.

Other bugs were a lot more embarrassing.

I’ll be honest – I do not know enough about either Maple or MuPAD (in fact I don’t even have access to MuPAD) to decide whether this change is going to be a good thing or not, but one thing is for certain – it will be different, and that difference will need to be managed.

The practical upshot is: when you upgrade, you should check every script that depends on the Symbolic Toolbox.
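For what it's worth, here is a sketch of the kind of low-tech regression check I mean, in plain Python (nothing here is Maple- or MuPAD-specific, and the helper is my own invention): store known-good results for each script and compare the new kernel's answers order-insensitively and to a tolerance, since a new solver may legitimately return the same roots in a different order.

```python
import math

def roots_match(new, reference, tol=1e-9):
    """True if the two lists contain the same roots, irrespective of order.
    A new kernel may converge to, or list, equally valid roots differently,
    so a naive element-by-element comparison would raise false alarms."""
    if len(new) != len(reference):
        return False
    unmatched = list(reference)
    for r in new:
        hit = next((x for x in unmatched
                    if math.isclose(r, x, rel_tol=tol, abs_tol=tol)), None)
        if hit is None:
            return False
        unmatched.remove(hit)
    return True

# e.g. roots of x^2 - 2 = 0: the ordering should not matter
assert roots_match([1.4142135623730951, -1.4142135623730951],
                   [-math.sqrt(2), math.sqrt(2)])
```

Run a check like this over every symbolic result your legacy code depends on before and after the kernel switch, and the surprises at least arrive on your schedule rather than your users'.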

June 16th, 2008

A few years ago, while working through a degree in theoretical physics at Sheffield University, I took a course on special functions in physics that was given by the legendary lecturer Dr Stoddart (saviour of many a physics undergraduate, including me, during his many years there – please leave a comment if you studied at Sheffield and remember him).

This course introduced me to the fascinating world of the so called ‘higher transcendental functions’ of mathematical physics. I remember that we covered topics such as Bessel functions, Laguerre polynomials, Hermite Polynomials and the Gamma function among others but in a one semester course we only really scratched the surface of the subject.

Since then I have come across several other special functions during the course of my work such as the LambertW function, Mathieu functions, Chebyshev polynomials and more. I used to be a physicist and so, despite the fact that the theory behind these functions can often be fascinating, all I had time to consider back then was how to evaluate them.

In fact, as far as my professional life goes, the question of evaluation is still the only thing that I get asked about regarding special functions. Questions such as ‘How can I evaluate the LambertW function in MATLAB?’ (Answer – by using this user-defined function) or ‘Do you know of a free, open source, implementation of Bessel’s function?’ (Answer – the GNU Scientific Library).

The idea for this post came to me while reading an article written in 1994 (and subsequently updated in 2000) where the authors discussed the Numerical Evaluation of Special Functions. One of the features of this document was a list of various special functions combined with a list of software packages that could evaluate them. For example it lists Dawson’s integral and tells us that if you need to evaluate this then you can use various software packages such as the NAG libraries or Numerical Recipes.
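Incidentally, some of the classics need no library at all these days. As a toy illustration (in Python; the Dawson implementation below is a simple Simpson's-rule stand-in of my own, not production code – the libraries catalogued in that article exist precisely because robust implementations are harder than this):

```python
import math

def dawson(x, n=1000):
    """Dawson's integral F(x) = exp(-x^2) * integral_0^x exp(t^2) dt,
    evaluated by composite Simpson's rule with n (even) panels."""
    if x == 0.0:
        return 0.0
    h = x / n
    total = math.exp(0.0) + math.exp(x * x)   # endpoint terms
    for i in range(1, n):
        total += (4 if i % 2 else 2) * math.exp((i * h) ** 2)
    return math.exp(-x * x) * total * h / 3.0

# Python's standard library already covers a few of the special functions
# listed in the survey:
print(math.gamma(5))   # the Gamma function: Gamma(5) = 4! = 24
print(math.erf(1.0))   # the error function
print(dawson(1.0))     # Dawson's integral at x = 1
```

For anything beyond toy accuracy, of course, reach for one of the packages the survey lists – that is rather the point of the document.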

I thought that this was a very useful document, but it has one major problem: it is rather out of date! Wouldn’t it be great if someone were to create an updated version that included all of the latest advances in software libraries and applications? I even idly thought of attempting to do this myself and publishing the results here, but it turns out that I have (thankfully) been beaten to it.

It’s not finished yet but the NIST Digital Library of Mathematical Functions looks like it is going to be exactly what I need. Apparently this project aims to be a sort of modern rewrite of Abramowitz and Stegun’s Handbook of Mathematical Functions, a book that almost every physicist I knew had a copy of. The preview looks very promising to say the least! For example, take the section on the Gamma Function. The library contains everything you might want to know about this function such as its definition, 2D and 3D plots of its graphs, its series expansion and, of course, a list of software packages and libraries that can be used to evaluate it. I note that, for the Gamma function, one can choose from MATLAB, Mathematica, MAPLE, NAG, Maxima, PARI-GP, the GSL, Numerical Recipes and several others – not exactly short of Gamma function implementations, are we?

When it’s finished, the work will be published as a book called ‘Handbook of Mathematical Functions’ but will also be available freely online as a digital library – fabulous!

May 23rd, 2008

Yesterday I wrote about Mathematica 6.0.2 not working properly on Ubuntu 8.04 and today I discover that Matlab 2007b doesn’t like it much either. When you run Matlab you might get nothing more than an empty gray window and Matlab will essentially be unusable. Apparently the problem stems from a bug with Java applications when using Compiz or Beryl effects so just turn them off and all will be well.

To fix the problem just click on System -> Preferences -> Appearance in the GNOME menu and select the Visual Effects tab. Set the effects to None and then click on close. Matlab 2007b should now work properly and you won’t be wasting any CPU cycles on eye candy.

Of course you might not want to disable all of the pretty visual effects. If so then try the workarounds detailed in the Mathworks’ article on this issue – I haven’t tried any of these extra tricks since they are unsupported and I am more interested in a working Matlab than a pretty desktop.

May 13th, 2008

One of the newsletters I subscribe to described the following ‘bug’ in Excel. If you sum the following numbers by hand then you get a result of zero:

-127551.73
103130.41
1807.75
7390.11
9028.59
2831.26
1568.90
1794.71

but if you use Excel to sum these numbers then you get a result of about 8.6402e-12 which, shock, horror, is not zero. So, clearly there is a bug, Excel sucks and we can all have a happy rant about Microsoft’s incompetence. Right?

Wrong! Excel is behaving exactly as I would expect it to, and you should expect it to behave this way too. First of all, let us demonstrate that this ‘bug’ doesn’t just occur in Excel. Fire up your copy of Matlab (or the open-source equivalent, Octave) and type

-127551.73+103130.41+1807.75+7390.11+9028.59+2831.26+1568.90+1794.71

The result?

8.6402e-12 – exactly the same as Excel.

Either you conclude that three different development teams have produced software that can’t add up, or that something more subtle is going on.

The ‘something subtle’ is the fact that computers represent numbers internally using binary, and when you only have a limited number of binary digits to play with you cannot represent all decimal numbers exactly. A classic example is the decimal number 0.1. The binary representation of 0.1 requires an infinite number of digits, so if you store only a finite number of them you will always be working with an approximation (just as when you write 0.33333333 as the decimal expansion of 1/3).

In fact, when working in double precision, 0.1 is approximated to

0.1000000000000000055511151231257827021181583404541015625

Which you can see in Matlab by typing

fprintf('%.55f\n',0.1)
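The same stored value can be inspected from Python too (shown here only because it makes the point that this is a property of IEEE doubles, not of Matlab): constructing a Decimal from a float reveals the exact binary value being stored.

```python
from decimal import Decimal

# Decimal(0.1) converts the *binary* double to its exact decimal expansion,
# revealing the value that 0.1 is actually stored as:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```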

You can see the effect of this if you do the following calculation in something like Octave or Matlab

(0.1 + 0.1 + 0.1) - 0.3

the result of which is 5.551115123125783e-17
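The practical lesson is never to compare floating-point results for exact equality; compare against a tolerance instead. A sketch in Python (the same idea in Matlab would be abs(a - b) < tol):

```python
import math

values = [-127551.73, 103130.41, 1807.75, 7390.11,
          9028.59, 2831.26, 1568.90, 1794.71]
total = sum(values)  # summed left to right, just like the Matlab expression

print(total == 0)                              # False: exact comparison fails
print(math.isclose(total, 0.0, abs_tol=1e-9))  # True: tolerance comparison succeeds
```

Note the absolute tolerance: when the expected answer is zero, a purely relative tolerance can never be satisfied, so abs_tol (or its equivalent) is the right tool here.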

If you need to learn more about this sort of thing then the Wikipedia page on IEEE arithmetic is quite good and so is this article from the Mathworks.

March 3rd, 2008

I thought I would quickly mention a blog I recently discovered and added to my blogroll – blinkdagger. The main reason I decided to start reading this one is because of the excellent Matlab tutorials that they have written. They have also just started a math contest in collaboration with Wild About Math.

February 29th, 2008

I love playing around with computer algebra software – which is handy, since supporting it is part of my day job – and I also love reading well-written blogs that offer little tidbits of advice and interesting examples. Fairly recently, the companies and groups that actually produce the software have started writing blogs of their own.

Of these, some of my favorites include Doug’s Pick of the Week (Mathworks’ Matlab), Wolfram’s Blog (Mathematica), Loren on the Art of MATLAB, and the Sage Math blog.

The Mathworks have just launched a new one, Seth on Simulink, which looks like it has real potential. Simulink is a part of Matlab that has been on my ‘stuff to learn’ list for a while now, so I look forward to reading what Seth has to say about it.

February 29th, 2008

I read a lot of blogs concerning topics such as numerical computing, programming and computer algebra and most of the time the subject matter can be a bit…well…hard! The articles concerned can often be interesting and very useful but it can take a heck of a lot of concentration on my part before I can properly grok what the author is trying to say.

Sometimes, though, I come across something that is both useful and interesting and also so mind-numbingly simple that I wonder why on earth I didn’t think of it before. A case in point is Loren’s recent “Art of Matlab” article called “Should min and max marry?”.

I won’t reiterate what she has written since she has done a great job and makes her point well. I am also currently busy implementing her idea in a piece of code I am writing, while pretending to everyone around me that it’s such an obvious idea that I had thought of it well in advance!

December 31st, 2007

While catching up on some news items that I missed over the festive period I noticed that version 3.0 of Octave was released on December 21st 2007. For those of you who have never heard of it before, Octave is an open-source project that attempts to emulate much of the core functionality of Matlab – an extremely popular commercial mathematics application centered around linear algebra and numerical analysis. Octave has been around for some time now – version 1.0 was released back in 1994 so it is certainly not just a flash in the pan.

Octave aims to be source compatible with Matlab wherever possible, which means that in many cases you can take code written for Matlab, feed it to Octave and it will just work. As you might expect, this compatibility is far from perfect, but it is good enough for many purposes. Some core Matlab functions have not yet been implemented in Octave, and there are also some syntactic differences between the programming languages of the two packages, but in many situations the compatibility is quite good – I used it very successfully myself back in the days before I had access to Matlab. A more detailed discussion of Octave–Matlab compatibility can be found on the Octave website.

You can read about some of the changes made to Octave for the version 3.0 release over at Octave’s news page. One of the most interesting updates seems to be that Octave now has increased compatibility with Matlab’s Handle Graphics system. It’s been a while since I last used Octave, so I will be having a play with it on my daily commute over the next few weeks to see what I can see.

If you hit this page from google while looking for open-source Matlab alternatives you might also want to check out Scilab, Freemat and Sage.

December 12th, 2007

The Mathworks have a fantastic blog written by Loren Shure called The Art of Matlab which should be on the reading list of every Matlab user. Yesterday she invited a guest blogger, Jiro Doke, to write about how to use Matlab to generate publication-quality graphs. In the article, Jiro starts with a very basic Matlab figure and walks through the commands required to turn it into a polished, publication-ready one.

The full article can be found by clicking here.