Maple 14 released

May 17th, 2010 | Categories: Maple, math software

Version 14 of Maple was released a couple of weeks ago and it appears to have some very cool stuff in it.  Some of the highlights that stand out for me include:

  • Accelerated linear algebra using graphics cards via NVIDIA’s CUDA.  Maple’s advertising blurb says that they have implemented matrix multiplication and that this will help speed up many linear algebra routines since it is such a fundamental operation.  I think Maplesoft are the first of the big general-purpose mathematical packages to offer direct CUDA integration out of the box and this development is well worth watching (there’s a rough sketch of how it looks just after this list).  The speed-ups that are possible with CUDA technology are nothing short of astonishing – hundreds of times in some cases.  However, Maplesoft are going to need to add a lot more than matrix multiplication for this to be truly useful.  A set of fast random number generators would be nice, for example (I’m thinking super-fast Monte Carlo simulations – the finance people would love it).
  • Maple uses Intel’s Math Kernel Library (MKL) for many of its low-level numerical linear algebra routines and this has been updated to version 10.0 in Maple 14.  For 32-bit Windows users this has sped certain operations up quite a lot, but it is 64-bit Windows users who will really see the benefit since 64-bit Maple 13 only used a set of generic BLAS routines.  The practical upshot is that certain basic linear algebra routines, such as matrix multiplication, can be around 10 times faster in 64-bit Windows Maple 14 compared to the previous version.  I couldn’t find any mention of the Linux version.
  • A shed-load of updates to their differential equation solvers, including a new numerical method based on the Cash-Karp Runge-Kutta pair (again, see the sketch after this list).
  • The Maple toolbox for MATLAB is no longer a separate product and is now included with Maple itself.  This is great news if you, like me, tend to work with several mathematical packages simultaneously.  Of course you need to have a copy of MATLAB installed to make use of this functionality – you don’t get a copy of MATLAB for free :)
  • You can now import MATLAB binary files (compressed and uncompressed) directly into Maple using the ImportMatrix command.
  • Another product, the Maple-NAG Connector, has also been integrated into Maple itself.  This allows you to easily call the NAG C Library directly from Maple but, as with the MATLAB toolbox, you’ll have to purchase the NAG C Library separately to make use of it.
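
To make some of this concrete, here’s a rough sketch of what the CUDA package, the new Cash-Karp solver and the MATLAB-related features might look like in use.  I haven’t got my hands on Maple 14 yet so this is pieced together from the documentation rather than tested code – the file name and matrix sizes are made up and the option names should be treated as assumptions rather than gospel.

    # GPU-accelerated linear algebra: enable the CUDA package and
    # hardware-datatype matrix products are offloaded to the graphics
    # card.  As I understand it, Maple 14's GPU support is single
    # precision only, hence the float[4] datatype.
    with(LinearAlgebra):
    CUDA:-Enable();
    CUDA:-IsEnabled();    # true if the GPU is actually being used
    A := RandomMatrix(2000, outputoptions = [datatype = float[4]]):
    B := RandomMatrix(2000, outputoptions = [datatype = float[4]]):
    C := A . B:           # runs on the GPU when CUDA is enabled

    # The new Cash-Karp Runge-Kutta 4(5) pair in the numerical ODE
    # solvers (I believe the method option is called ck45).
    sol := dsolve({diff(y(x), x) = -y(x), y(0) = 1}, numeric, method = ck45):
    sol(1.0);

    # Importing a MATLAB binary file directly (hypothetical file name).
    M := ImportMatrix("mydata.mat", source = MATLAB):

    # With a local MATLAB installation, the Matlab package lets you push
    # work out to a MATLAB session and pull the results back into Maple.
    with(Matlab):
    setvar("x", 4.2):
    evalM("y = sin(x)"):
    getvar("y");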

As you can see, I tend to favour new features that lead to improved performance or better interoperability with other software packages in the first instance.  New mathematics and usability features take a little longer to sink in (for me at least).

I’ve not got a copy of Maple 14 yet but will try to write more if I upgrade (finances permitting).

For a full list of changes check out Maple’s online help section.


  1. martin cohen
    May 17th, 2010 at 19:05

    It would be nice if Maple had an affordable version like Mathematica has (their Home Edition is fully functional and costs $300).

  2. May 18th, 2010 at 09:11

    Hi Martin

    I completely agree!

    Cheers,
    Mike

  3. Samir
    May 18th, 2010 at 13:56

    Hi

    A fully functional personal (or “home”) edition of Maple is available. Take a look at http://www.maplesoft.com/products/Maple/personal_edition/

    Samir,
    Maplesoft

  4. May 18th, 2010 at 14:02

    Hi Samir

    Thanks for that – $239 at the time of writing – quite a bargain!

    Cheers,
    Mike

  5. May 19th, 2010 at 07:30
  6. May 19th, 2010 at 10:36

    Hi rp

    Very impressive! Some of those benchmarks are stunning and blow away the competition. I also like the fact that you are making good use of multicore processors without bothering the user with the details. Do you have any numbers that show how your algorithms scale with the number of cores?

    Cheers,
    Mike

  7. May 19th, 2010 at 10:49

    It’s superlinear on the Core i5/i7 up to very sparse problems, and generally a linear speedup on the Core 2.
    http://www.cecm.sfu.ca/~rpearcea/sdmp/sdmp_pmul.pdf

    However, in Maple 14 you don’t get all of that due to sequential overhead. We have to convert to and from Maple’s data structures and in particular, building the result in Maple’s format can be almost as expensive as computing it in parallel using our code. We have plans to address this in a future release.

  8. Student
    May 19th, 2010 at 14:17

    What are the differences between the Maple 14 student edition and the professional?

  9. May 19th, 2010 at 14:43

    @Student

    Apart from the price, I don’t think there are any differences in the functionality. I believe that the student version doesn’t come with printed manuals and that’s about it. Can anyone from Maplesoft confirm?

    Cheers,
    Mike

  10. May 19th, 2010 at 15:35

    There’s no difference between the versions of the software.

  11. MySchizoBuddy
    June 26th, 2010 at 10:51

    You mentioned matrix multiplication for both CUDA and MKL. Which one does it use? Or does it use MKL if an NVIDIA GPU isn’t detected and switch to CUDA when one is?

    Request: a benchmark comparing CULA (CUDA-optimized linear algebra) for MATLAB against Jacket for MATLAB.

  12. MySchizoBuddy
    June 26th, 2010 at 10:59

    There are benchmarks on CULA’s website comparing their CUDA-optimized linear algebra to Intel MKL.
    http://www.culatools.com/features/performance/

  13. June 26th, 2010 at 11:00

    It uses MKL unless you specifically call the CUDA package.
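
    For reference, switching the GPU path on looks something like this (an untested sketch based on the Maple 14 help pages):

        CUDA:-Enable();     # route supported operations through the GPU
        CUDA:-IsEnabled();  # check that acceleration is actually active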

    Would love to do the CULA/Jacket benchmark – don’t have the software though :(

  14. MySchizoBuddy
    June 27th, 2010 at 13:10

    Has Maple moved to a Mathcad style, where you can type in equations just as you would write them by hand? I mean, does it have visual equation editing?

  15. Cmath
    August 17th, 2010 at 23:22

    I believe it’s been that way for a while.