What’s new in MATLAB 2011a?

June 16th, 2011 | Categories: math software, matlab | Tags:

Every time there is a new MATLAB release I take a look to see which new features interest me the most and share them with the world.  If you find this article interesting then you may also enjoy similar articles on 2010b and 2010a.

Simpler random number control

MATLAB 2011a introduces the function rng which allows you to control random number generation much more easily.  For example, in older versions of MATLAB you would have to do the following to reseed the default random number stream with something based upon the system time.

RandStream.setDefaultStream(RandStream('mt19937ar','seed',sum(100*clock)));

In MATLAB 2011a you can achieve something similar with

rng shuffle
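The same function also handles fixed seeds and saving/restoring the generator state, which is handy for reproducible experiments.  Here's a minimal sketch based on the documented rng interface:

```matlab
rng(42)           % seed the default generator with a fixed value
s = rng;          % save the current generator settings
x = rand(1,5);    % draw some random numbers
rng(s)            % restore the saved settings...
y = rand(1,5);    % ...so this reproduces x exactly
isequal(x,y)      % returns 1 (true)
```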

Faster Functions

I love it when The MathWorks improves the performance of some of its functions because you can guarantee that, in an organisation as large as the one I work for, there will always be someone who'll be able to say 'Wow! I switched to the latest version of MATLAB and my code runs faster.'  All of the following timings were performed on a 3GHz quad-core machine running Ubuntu Linux with the CPU frequency selector turned up to maximum for all 4 cores.  In each case the command was run 5 times and an average taken.  Some of the faster functions include conv, conv2, qz, complex eig and svd. The speedup on svd is astonishing!

a=rand(1,100000);
b=rand(1,100000);
tic;conv(a,b);toc

MATLAB 2010a: 3.31 seconds
MATLAB 2011a: 1.56 seconds

a=rand(1000,1000);
b=rand(1000,1000);
tic;q=qz(a,b);toc

MATLAB 2010a: 36.67 seconds
MATLAB 2011a: 22.87 seconds

a=rand(1000,1000);
tic;[U,S,V] = svd(a);toc

MATLAB 2010a: 9.21 seconds
MATLAB 2011a: 0.7114 seconds
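For reference, each figure above is the mean of 5 runs, gathered with a simple tic/toc harness along these lines (a sketch; the built-in timeit function didn't exist yet in these releases):

```matlab
% Average the wall-clock time of 5 svd runs, as in the timings above
a = rand(1000,1000);
t = zeros(1,5);
for k = 1:5
    tic;
    [U,S,V] = svd(a);
    t(k) = toc;
end
fprintf('mean time: %.4f seconds\n', mean(t));
```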

Symbolic toolbox gets beefed up

Ever since its introduction back in MATLAB 2008b, The MathWorks has been steadily improving the MuPAD-based symbolic toolbox.  Pretty much all of the integration failures that I and my readers identified back then have been fixed, for example.  MATLAB 2011a sees several further improvements but I'd like to focus on those for non-algebraic equations.

Take this system of equations:

solve('10*cos(a)+5*cos(b)=x', '10*sin(a)+5*sin(b)=y', 'a','b')

MATLAB 2011a finds the (extremely complicated) symbolic solution whereas MATLAB 2010b just gave up.

Here's another one:

syms an1 an2;
eq1 = sym('4*cos(an1) + 3*cos(an1+an2) = 6');
eq2 = sym('4*sin(an1) + 3*sin(an1+an2) = 2');
eq3 = solve(eq1,eq2);

MATLAB 2010b only finds one solution set and it’s approximate

>> eq3.an1
ans =
-0.057562921169951811658913433179187

>> eq3.an2
ans =
0.89566479385786497202226542634536

MATLAB 2011a, on the other hand, finds two solutions and they are exact

>> eq3.an1

ans =
 2*atan((3*39^(1/2))/95 + 16/95)
 2*atan(16/95 - (3*39^(1/2))/95)

>> eq3.an2

ans =
 -2*atan(39^(1/2)/13)
  2*atan(39^(1/2)/13)

MATLAB Compiler has improved parallel support

Lifted directly from the MATLAB documentation:

MATLAB Compiler generated standalone executables and libraries from parallel applications can now launch up to eight local workers without requiring MATLAB® Distributed Computing Server™ software.

Amen to that!

GPU Support has been beefed up in the parallel computing toolbox

A load of new functions now support GPUArrays.

cat
colon
conv
conv2
cumsum
cumprod
eps
filter
filter2
horzcat
meshgrid
ndgrid
plot
subsasgn
subsindex
subsref
vertcat

You can also index directly into GPUArrays now and the amount of MATLAB code supported by arrayfun for GPUArrays has also been increased to include the following.

&, |, ~, &&, ||,
while, if, else, elseif, for, return, break, continue, eps

This brings the full list of MATLAB functions and operators supported by the GPU version of arrayfun to

abs
acos
acosh
acot
acoth
acsc
acsch
asec
asech
asin
asinh
atan
atan2
atanh
bitand
bitcmp
bitor
bitshift
bitxor
ceil
complex
conj
cos
cosh
cot
coth
csc
csch
double
eps
erf
erfc
erfcinv
erfcx
erfinv
exp
expm1
false
fix
floor
gamma
gammaln
hypot
imag
Inf
int32
isfinite
isinf
isnan
log
log2
log10
log1p
logical
max
min
mod
NaN
pi
real
reallog
realpow
realsqrt
rem
round
sec
sech
sign
sin
single
sinh
sqrt
tan
tanh
true
uint32
+
-
.*
./
.\
.^
==
~=
<
<=
>
>=
&
|
~
&&
||

Scalar expansion versions of the following:

*
/
\
^

Branching instructions:

break
continue
else
elseif
for
if
return
while
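Putting some of this together, here is a minimal sketch of the new GPU capabilities: direct indexing into a GPUArray and an elementwise computation via arrayfun, which runs as a single GPU kernel (this assumes a CUDA-capable GPU and the Parallel Computing Toolbox are available):

```matlab
A = gpuArray(rand(1000));          % copy data to the GPU
B = gpuArray(rand(1000));
firstCol = A(:,1);                 % direct indexing into a GPUArray, new in 2011a
C = arrayfun(@(x,y) sqrt(x.^2 + y.^2), A, B);  % elementwise GPU kernel
result = gather(C);                % bring the result back to host memory
```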

The Parallel Computing Toolbox is not the only game in town for GPU support in MATLAB.  One alternative is Jacket by AccelerEyes, who have put up a comparison between the PCT and Jacket.  At the time of writing it compares against 2011a.

More information about GPU support in various mathematical software packages can be found here.

Toolbox mergers and acquisitions

There have been several license-related changes in this version of MATLAB, comprising 2 new products, 4 mergers and one name change.  Sadly, none of my toolbox-merging suggestions have been implemented, but let's take a closer look at what has been done.

  • The Communications Blockset and Communications Toolbox have merged into what’s now called the Communications System Toolbox. This new product requires another new product as a prerequisite: the DSP System Toolbox.
  • The DSP System Toolbox isn’t completely new, however, since it was formed out of a merger between the Filter Design Toolbox and Signal Processing Blockset.
  • Stateflow Coder and Real-Time Workshop have combined their powers to form the new Simulink Coder which depends upon the new MATLAB Coder.
  • The new Embedded Coder has been formed from the merging of no less than 3 old products: Real-Time Workshop Embedded Coder, Target Support Package, and Embedded IDE Link. This new product also requires the new MATLAB Coder.
  • MATLAB Coder is totally new and, according to The MathWorks’ blurb, it “generates standalone C and C++ code from MATLAB® code. The generated source code is portable and readable.”  I’m looking forward to trying that out.
  • Next up, is what seems to be little more than a renaming exercise since the Video and Image Processing Blockset has been renamed the Computer Vision System Toolbox.

Personally, few of these changes affect me but professionally they do since I have users of many of these toolboxes.  An original set of 9 toolboxes has been rationalized into 5 (4 from mergers plus the new MATLAB Coder) and I do like it when the number of MathWorks toolboxes goes down.  To counter this, there is another new product called the Phased Array System Toolbox.

So, that rounds up what was important for me in MATLAB 2011a.  What did you like/dislike about it?


  1. MySchizoBuddy
    June 16th, 2011 at 17:49

    what happened to the cost after the mergers?

  2. MySchizoBuddy
    June 16th, 2011 at 18:38

    Jacket now includes CULA from EM Photonics. That gives it a major edge over PCT.

  3. June 16th, 2011 at 19:35

    Didn’t know that Jacket included CULA; thanks for letting me know.
    I recently got funding for enough network licenses for the PCT to supply our uni. I considered Jacket (it’s a great product) but didn’t go with it for the following reasons.

    - PCT provides explicit multicore support. This will benefit far more users than a CUDA-only product.
    - Network licenses for Jacket are more expensive than network licenses for PCT. I could support more users by going with PCT.
    - I fully expect PCT CUDA to become the dominant CUDA solution for MATLAB in the long run. PCT is behind Jacket right now but surely this won’t last given The MathWorks’ resources.

    Cheers,
    mike

  4. Michal Kvasnicka
    June 17th, 2011 at 08:18

    The above mentioned hard symbolic problem:
    solve('10*cos(a)+5*cos(b)=x', '10*sin(a)+5*sin(b)=y', 'a','b')

    produces on MATLAB 2011a (Ubuntu 10.10 64bit) the following result:

    >> solve('10*cos(a)+5*cos(b)=x', '10*sin(a)+5*sin(b)=y', 'a','b')
    Warning: The solutions are parametrized by the symbols:
    z = (Dom::ImageSet(arccos(x/5 + 2) + 2*PI*k, k, Z_) union Dom::ImageSet(- arccos(x/5 + 2) + 2*PI*k,
    k, Z_)) intersect (Dom::ImageSet(PI - arcsin(y/5) + 2*PI*k, k, Z_) union Dom::ImageSet(arcsin(y/5) +
    2*PI*k, k, Z_))
    z12 = (Dom::ImageSet(arccos(x/10 + 1/2) + 2*PI*k, k, Z_) union Dom::ImageSet(- arccos(x/10 + 1/2) +
    2*PI*k, k, Z_)) intersect (Dom::ImageSet(PI - arcsin(y/10) + 2*PI*k, k, Z_) union
    Dom::ImageSet(arcsin(y/10) + 2*PI*k, k, Z_))

    > In solve at 94

    ans =

    a: [4×1 sym]
    b: [4×1 sym]

    So, I am not sure this result counts as a real improvement.

    On the other hand, MuPAD provides the full solution.

  5. June 17th, 2011 at 13:27

    Are they using 128-bit floats? That’s an awful lot of decimal digits to print in “-0.057562921169951811658913433179187”

    128-bit floats would be very cool.

  6. June 17th, 2011 at 13:32

    I don’t think so. It’s probably arbitrary precision arithmetic.

  7. MySchizoBuddy
    June 17th, 2011 at 21:47

    Yes, eventually PCT will catch up with Jacket. That’s why Jacket has now opted for libjacket for C/C++ code, so they are now looking outside MATLAB for their product. The MathWorks, to me, is a slow-moving giant. The fastest way for them would be to just buy CULA.

    AMD users are still left out either way. OpenCL supports both CPUs and GPUs; sadly we haven’t seen any good product targeting OpenCL.

  8. MySchizoBuddy
    June 17th, 2011 at 22:04

    Just found out that Jacket 1.7 now allows you to run the code on the CPU as well. Here is their blog post explaining it: http://blog.accelereyes.com/blog/2011/03/17/write_once_run_everywhere/

  9. June 18th, 2011 at 08:16

    It’s a nice trick but it’s not actually doing anything to parallelise on the CPU. What the info in that link does is allow you to write more portable functions. If the user of a function has Jacket and a GPU then all they need do is pass a garray and they get GPU acceleration. If the user doesn’t have Jacket or a GPU then they just pass a normal array and the function runs on the CPU with no acceleration or parallelisation.

    As for OpenCL, Mathematica has support for that.

    Finally, news just in, if you buy PGI Accelerator products then you can write code that parallelises over CPU or GPU using CUDA.

    http://www.pgroup.com/resources/cuda-x86.htm

  10. November 11th, 2011 at 22:05

    Hi! Just stumbled upon your awesome blog looking for info on smart seeding of independent Mersenne Twister rngs. Just wanted to add my interest in a “part 2” to your series! Thanks! David

  11. najol
    December 31st, 2012 at 03:04

    I have a problem with solve in MATLAB 2011. My code runs in version 2010a without any problem, but when I run it in the newer version it fails with the warning: “The solutions are parametrized by the symbols”.
    What is the solution to this problem?