<h1>Which MATLAB functions are multicore aware?</h1>
<p><em>Published 22nd November 2009; last updated 11th May 2012.</em></p>
<p>In order to write fully parallel programs in MATLAB you have a few choices, but they are either hard work, expensive or both.  For example you could:</p>
<ul>
<li>Write <a href="https://www.walkingrandomly.com/?p=1795">parallel mex files using C and OpenMP</a></li>
<li>Drop a load of cash on the <a href="http://www.mathworks.co.uk/products/parallel-computing/">Parallel Computing Toolbox</a> from The Mathworks</li>
<li>Use the free parallel toolbox, <a href="http://www.ll.mit.edu/mission/isr/pmatlab/pmatlab.html">pMATLAB</a></li>
</ul>
<p>Wouldn't it be nice if you could do no work at all and yet STILL get a speedup on a multicore machine?  Well, you can&hellip; sometimes.</p>
<p>Slowly but surely, The Mathworks are parallelising some of the built-in MATLAB functions to make use of modern multicore processors.  So, for certain functions, you will see a speed increase in your code simply by moving to a dual or quad core machine.  But which functions?</p>
<p>On a recent train journey I trawled through all of the MATLAB release notes and did my best to come up with a definitive list; the result is below.</p>
<p>Alongside each function is the version of MATLAB in which it first became parallelised (if known).  Any extra detail is included in brackets, typically conditions such as 'this function is only run in parallel for input arrays of more than 20,000 elements'.</p>
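<p>If you want to check whether this implicit multithreading is actually doing anything on your machine, a quick experiment is to time one of the functions in the list with multithreading effectively switched off and then again with the defaults.  The following is just a sketch, not part of the original list: <code>maxNumCompThreads</code> is the function MATLAB provides for querying and setting the number of computational threads, and the timings will obviously vary from machine to machine.</p>
<pre>
% Compare single-threaded and default (one thread per core) timings
% for a multithreaded built-in.  Rough sketch - results are machine dependent.
n = maxNumCompThreads;      % current number of computational threads
A = rand(5000);             % 25 million elements, well over the size thresholds

maxNumCompThreads(1);       % force single-threaded execution
tic; exp(A); t1 = toc;

maxNumCompThreads(n);       % restore the default
tic; exp(A); tn = toc;

fprintf('1 thread: %.3f s, %d threads: %.3f s\n', t1, n, tn);
</pre>
<p>On a quad core you would hope to see the second timing come in at something like a quarter of the first, although memory bandwidth usually stops you getting the full factor.</p>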
<p>I am almost certainly missing some functions and details, so if you know something that I don't then please drop me a comment and I'll add it to the list.</p>
<p>Of course, when I came to write all of this up I did some googling and discovered that <a href="http://www.mathworks.com/support/solutions/en/data/1-4PG4AN/?solution=1-4PG4AN">The Mathworks have already answered this question themselves</a>!  Oh well&hellip; I'll publish my list anyway.</p>
<p><strong>Update 11th May 2012:</strong> While optimising a user's application, I discovered that the pinv function makes use of multicore processors, so I've added it to the list.  pinv makes use of svd, which is probably the bit that's multithreaded.</p>
<p><strong>Update 18th February 2012:</strong> Added a few functions that I had missed earlier: erfcinv, rank, isfinite, lu and schur.  I'm not sure when they became multithreaded, so I've left the version number blank for now.</p>
<p><strong>Update 16th June 2011:</strong> Added new functions that became multicore aware in version 2011a, plus a couple that have been multicore for a while but I just didn't know about them!</p>
<p><strong>Update 8th March 2010:</strong> Added new functions that became multicore aware in version 2010a.  Also added multicore aware functions from the Image Processing Toolbox.</p>
<pre>
abs (for double arrays &gt; 200k elements),2007a
acos (for double arrays &gt; 20k elements),2007a
acosh (for double arrays &gt; 20k elements),2007a
applylut,2009b (Image Processing Toolbox)
asin (for double arrays &gt; 20k elements),2007a
asinh (for double arrays &gt; 20k elements),2007a
atan (for double arrays &gt; 20k elements),2007a
atand (for double arrays &gt; 20k elements),2007a
atanh (for double arrays &gt; 20k elements),2007a
backslash operator (A\b for double arrays &gt; 40k elements),2007a
bsxfun,2009b
bwmorph,2010a (Image Processing Toolbox)
bwpack,2009b (Image Processing Toolbox)
bwunpack,2009b (Image Processing Toolbox)
ceil (for double arrays &gt; 200k elements),2007a
conv,2011a
conv2 (two input form),2010a
cos (for double arrays &gt; 20k elements),2007a
cosh (for double arrays &gt; 20k elements),2007a
det (for double arrays &gt; 40k elements),2007a
edge,2010a (Image Processing Toolbox)
eig
erf,2009b
erfc,2009b
erfcinv
erfcx,2009b
erfinv,2009b
exp (for double arrays &gt; 20k elements),2007a
expm (for double arrays &gt; 40k elements),2007a
fft,2009a
fft2,2009a
fftn,2009a
filter,2009b
fix (for double arrays &gt; 200k elements),2007a
floor (for double arrays &gt; 200k elements),2007a
gamma,2009b
gammaln,2009b
hess (for double arrays &gt; 40k elements),2007a
hypot (for double arrays &gt; 200k elements),2007a
ifft,2009a
ifft2,2009a
ifftn,2009a
imabsdiff,2010a (Image Processing Toolbox)
imadd,2010a (Image Processing Toolbox)
imclose,2010a (Image Processing Toolbox)
imdilate,2009b (Image Processing Toolbox)
imdivide,2010a (Image Processing Toolbox)
imerode,2009b (Image Processing Toolbox)
immultiply,2010a (Image Processing Toolbox)
imopen,2010a (Image Processing Toolbox)
imreconstruct,2009b (Image Processing Toolbox)
int16 (for double arrays &gt; 200k elements),2007a
int32 (for double arrays &gt; 200k elements),2007a
int8 (for double arrays &gt; 200k elements),2007a
inv (for double arrays &gt; 40k elements),2007a
iradon,2010a (Image Processing Toolbox)
isfinite
isinf (for double arrays &gt; 200k elements),2007a
isnan (for double arrays &gt; 200k elements),2007a
ldivide,2008a
linsolve (for double arrays &gt; 40k elements),2007a
log,2008a
log2,2008a
logical (for double arrays &gt; 200k elements),2007a
lscov (for double arrays &gt; 40k elements),2007a
lu
Matrix multiply (X*Y - for double arrays &gt; 40k elements),2007a
Matrix power (X^N - for double arrays &gt; 40k elements),2007a
max (for double arrays &gt; 40k elements),2009a
medfilt2,2010a (Image Processing Toolbox)
min (for double arrays &gt; 40k elements),2009a
mldivide (for sparse matrix input),2009b
mod (for double arrays &gt; 200k elements),2007a
pinv
pow2 (for double arrays &gt; 20k elements),2007a
prod (for double arrays &gt; 40k elements),2009a
qr (for sparse matrix input),2009b
qz,2011a
rank
rcond (for double arrays &gt; 40k elements),2007a
rdivide,2008a
rem,2008a
round (for double arrays &gt; 200k elements),2007a
schur
sin (for double arrays &gt; 20k elements),2007a
sinh (for double arrays &gt; 20k elements),2007a
sort (for long matrices),2009b
sqrt (for double arrays &gt; 20k elements),2007a
sum (for double arrays &gt; 40k elements),2009a
svd
tan (for double arrays &gt; 20k elements),2007a
tand (for double arrays &gt; 200k elements),2007a
tanh (for double arrays &gt; 20k elements),2007a
unwrap (for double arrays &gt; 200k elements),2007a
various operators such as x.^y (for double arrays &gt; 20k elements),2007a
Integer conversion and arithmetic,2010a
</pre>
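<p>The array-size conditions in the list matter in practice: call one of these functions on an array below its threshold and you get no multithreading at all.  Here is a hypothetical way to see the effect.  Again this is just a sketch, with made-up sizes either side of the 20,000-element cutoff; the exact cutoff and any speedup depend on your MATLAB version and CPU.</p>
<pre>
% Per-element cost of sin() below and above the 20k-element threshold.
small = rand(100);                     % 10,000 elements: below the threshold
large = rand(2000);                    % 4,000,000 elements: above it

repsS = 1000;                          % many repetitions so overhead doesn't dominate
tic; for k = 1:repsS, sin(small); end; tS = toc;

repsL = 25;
tic; for k = 1:repsL, sin(large); end; tL = toc;

fprintf('ns per element - small: %.2f, large: %.2f\n', ...
    1e9*tS/(repsS*numel(small)), 1e9*tL/(repsL*numel(large)));
</pre>
<p>If the large-array figure comes out noticeably smaller than the small-array one, then the extra cores are earning their keep.</p>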