Archive for the ‘OpenCL’ Category

November 13th, 2012

Intel have finally released the Xeon Phi – an accelerator card based on 60 or so customised Intel cores to give around a Teraflop of double precision performance.  That’s comparable to the latest cards from NVIDIA (1.3 Teraflops according to http://www.theregister.co.uk/2012/11/12/nvidia_tesla_k20_k20x_gpu_coprocessors/) but with one key difference—you don’t need to learn any new languages or technologies to take advantage of it (although you can do so if you wish)!

The Xeon Phi uses good, old-fashioned High Performance Computing technologies that we’ve been using for years, such as OpenMP and MPI.  There’s no need to completely recode your algorithms in CUDA or OpenCL to get a performance boost…just a sprinkling of OpenMP pragmas might be enough in many cases (see the sketch below).  Obviously it will take quite a bit of work to squeeze every last drop of performance out of the thing but this might just be the realisation of the ‘personal supercomputer’ we’ve all been waiting for.
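
To give a flavour of what that ‘sprinkling’ can look like, here’s a minimal sketch of my own (an illustrative toy, not from any real code base): a single pragma spreads the loop across all available cores, and the same source runs unchanged on an ordinary laptop or a Xeon Phi.

    /* Minimal sketch: one OpenMP pragma parallelises the loop across all
       available cores.  Compile with, e.g.:  gcc -fopenmp scale.c */
    void scale(double *x, double a, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            x[i] = a * x[i];
        }
    }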

Here are some links I’ve found so far — I’d love to see what everyone else has come up with.  I’ll update as I find more.

I also note that the Xeon Phi has AVX-style vector extensions but with a wider vector width of 512 bits, so if you’ve been taking advantage of vectorisation in your code (using one of these techniques perhaps) you’ll reap the benefits there too.

I, for one, am very excited and can’t wait to get my hands on one!  Thoughts, comments and links gratefully received!

August 24th, 2012

Updated 26th March 2015

I’ve been playing with AVX vectorisation on modern CPUs off and on for a while now and thought that I’d write up a little of what I’ve discovered.  The basic idea of vectorisation is that each processor core in a modern CPU can operate on multiple values (i.e. a vector) simultaneously per instruction cycle.

Modern processors have 256-bit wide vector units, which means that each core can operate on 4 double precision or 8 single precision values per instruction.  With two fused multiply-add (FMA) units per core, as on recent Intel chips, that works out at up to 16 double precision or 32 single precision floating point operations (FLOPs) per clock cycle.  So, on a quad core CPU that’s typically found in a decent laptop you have 4 vector units (one per core) and could perform up to 64 double precision FLOPs per cycle.  The Intel Xeon Phi accelerator has even wider vector units — 512-bit!

This all sounds great but how does a programmer actually make use of this neat hardware trick?  There are many routes:-

Intrinsics

At the ‘close to the metal’ level you code for these vector units using compiler-supported functions called AVX intrinsics, each of which maps more or less directly onto a vector instruction.  This is relatively difficult and leads to non-portable code if you are not careful.
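
For a flavour of what coding with intrinsics looks like, here’s a minimal sketch of my own (illustrative, not from any library) that adds two arrays of doubles four at a time; note that it omits the scalar clean-up loop a real version would need for array lengths that aren’t a multiple of 4.

    #include <immintrin.h>  /* AVX intrinsics */

    /* Add two double arrays 4 elements at a time using 256-bit AVX registers.
       Assumes n is a multiple of 4 for brevity. */
    void vec_add(const double *a, const double *b, double *c, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m256d va = _mm256_loadu_pd(&a[i]);            /* load 4 doubles */
            __m256d vb = _mm256_loadu_pd(&b[i]);
            _mm256_storeu_pd(&c[i], _mm256_add_pd(va, vb)); /* add and store  */
        }
    }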

Auto-vectorisation in compilers

Since working with intrinsics is such hard work, why not let the compiler take the strain? Many modern compilers, including gcc, PGI and Intel, can automatically vectorise your C, C++ or Fortran code. Sometimes all you need to do is add an extra switch at compile time and reap the speed benefits. In truth, vectorisation isn’t always automatic and the programmer needs to give the compiler some assistance (the sketch below shows one common example) but it is a lot easier than hand-coding intrinsics.
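
As a small sketch of the kind of ‘assistance’ I mean (my own toy example): the C99 restrict qualifiers below promise the compiler that the arrays don’t overlap, which is often exactly what stands between a loop and auto-vectorisation.

    /* Plain C: no intrinsics, no pragmas.  The 'restrict' qualifiers tell the
       compiler that x and y don't alias, allowing it to vectorise the loop.
       With gcc, try:  gcc -O3 -march=native -fopt-info-vec -c saxpy.c
       and it will report which loops were vectorised. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n)
    {
        for (int i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i];
        }
    }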

Intel SPMD Program Compiler (ispc)

There is a midway point between automagic vectorisation and having to use intrinsics. Intel have a free compiler called ispc (http://ispc.github.com/) that allows you to write compute kernels in a modified subset of C. These kernels are then compiled to make use of vectorised instruction sets. Programming using ispc feels a little like using OpenCL or CUDA (there’s a small sketch below). I figured out how to hook it up to MATLAB a few months ago and developed a version of the Square Root function that is almost twice as fast as MATLAB’s own version on Sandy Bridge i7 processors.
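
To give a feel for ispc’s dialect of C, here is a minimal square-root kernel of my own (a toy, not the MATLAB-linked version mentioned above); export, uniform and foreach are ispc keywords, and the foreach loop is spread across the vector lanes so that one instruction processes several elements at once.

    /* ispc kernel: iterations of the foreach loop are mapped onto the
       vector lanes of the CPU. */
    export void sqrt_ispc(uniform float vin[], uniform float vout[],
                          uniform int count)
    {
        foreach (i = 0 ... count) {
            vout[i] = sqrt(vin[i]);
        }
    }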

OpenMP

OpenMP is an API specification for parallel programming that’s been supported by several compilers for many years. OpenMP 4.0 was released in July 2013 and added support for vectorisation via the new simd directives (sketched below).
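
As a minimal sketch (my own toy loop): ‘omp simd’ asks the compiler to vectorise the loop across the vector lanes of a single core, while ‘parallel for simd’ would additionally spread iterations across cores.

    /* OpenMP 4 vectorisation hint; compile with -fopenmp (gcc 4.9+). */
    void square(double *x, int n)
    {
        #pragma omp simd
        for (int i = 0; i < n; i++) {
            x[i] = x[i] * x[i];
        }
    }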

Vectorised Libraries

Vendors of numerical libraries are steadily applying vectorisation techniques in order to maximise performance.  If the execution speed of your application depends upon these library functions, you may get a significant speed boost simply by updating to the latest version of the library and recompiling with the relevant compiler flags.

CUDA for x86

Another route to vectorised code is to make use of the PGI compiler’s support for CUDA on x86: you take CUDA kernels written for NVIDIA GPUs and use the PGI compiler to build them for x86 processors instead.  The resulting executables take advantage of vectorisation.  In essence, the vector units of the CPU are acting like CUDA cores–which they sort of are anyway!

The PGI compilers also have technology which they call PGI Unified Binary, which produces executables that use an NVIDIA GPU when one is present and fall back to multi-core x86 when one isn’t.

  • PGI CUDA-x86 – PGI’s main page for their CUDA on x86 technologies

OpenCL for x86 processors

Yet another route to vectorisation would be to use Intel’s OpenCL implementation, which takes OpenCL kernels and compiles them down to take advantage of vector units (http://software.intel.com/en-us/blogs/2011/09/26/autovectorization-in-intel-opencl-sdk-15/); there’s a small sketch of the idea below.  The AMD OpenCL implementation may also do this but I haven’t tried it and haven’t had the chance to research it yet.
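
The attraction is that the kernel author writes plain scalar code, one work-item per element, and the implicit vectorisation module packs several work-items into each AVX instruction.  A trivial sketch of my own in OpenCL C:

    /* OpenCL C: written as scalar code for a single work-item; the Intel
       implementation vectorises across work-items behind the scenes. */
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *c)
    {
        int i = get_global_id(0);
        c[i] = a[i] + b[i];
    }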

WalkingRandomly posts

I’ve written a couple of blog posts that made use of this technology.

Miscellaneous resources

There is other stuff out there but the above covers everything that I have used so far.  I’ll finish by saying that everyone interested in vectorisation should check out this website…It’s the bible!

Research Articles on SSE/AVX vectorisation

I found the following research articles useful/interesting.  I’ll add to this list over time as I dig out other articles.

April 26th, 2012

Intel have just released their OpenCL Software Development Kit (SDK) for Intel processors.  The good news is that this version targets the on-die GPU as well as the CPU, allowing truly heterogeneous programming.  The bad news is that the GPU goodness is for 3rd generation ‘Ivy Bridge’ processors only – we backward Sandy Bridge users have been left out in the cold :(

A quick scan through the release notes reveals the following:-

  • OpenCL access to the on-die GPU part is currently for Windows only. Linux users only have CPU support at the moment.
  • No access to the GPU part of Sandy Bridge Processors via this implementation.
  • The GPU part has single precision only (I guess we’ll see many more mixed-precision algorithms from now on).

I don’t have access to an Ivy Bridge processor and so can’t have a play but I’m looking forward to seeing how much performance OpenCL programmers can squeeze out of this new implementation.

Other WalkingRandomly posts on GPU computing

January 10th, 2012

From where I sit it seems that the majority of scientific GPU work is being done with NVIDIA’s proprietary CUDA platform.  All the signs point to the possibility of this changing, however, and I wonder if 2012 will be the year when OpenCL comes of age.  Let’s look at some recent and near-future events…

Hardware

  • AMD have recently released the AMD Radeon HD 7970, said to be the fastest single-GPU graphics card on the planet.  This new card supports both Microsoft’s DirectCompute and OpenCL, and is much faster than the previous generation of AMD card (see here for compute benchmarks) as well as being faster than NVIDIA’s current top-of-the-line GTX 580.
  • Intel will release their next generation of CPUs – Ivy Bridge – which will include an increased number of built-in GPU cores that should be OpenCL compatible.  Although the current Sandy Bridge processors also contain GPU cores, it is not currently possible to target them with Intel’s OpenCL implementation (version 1.5 is strictly for the CPU cores).  I would be very surprised if Intel didn’t update their OpenCL implementation to be able to target the GPUs in Ivy Bridge this year.
  • AMD’s latest Fusion processors also contain OpenCL compatible GPU cores directly integrated with the CPU, which programmers can exploit using AMD’s Accelerated Parallel Processing (APP) SDK.

The practical upshot of the above is that if a software vendor uses OpenCL to accelerate their product then it could potentially benefit more of their customers than if they used CUDA.  Furthermore, if you want your code to run on the fastest GPU around then OpenCL is the way to go right now.

Software

Having the latest, fastest hardware is pointless if the software you run can’t take advantage of it.  Over the last 12 months I have had the opportunity to speak to developers of various commercial scientific and mathematical software products which support GPU acceleration.  With the exception of Wolfram’s Mathematica, all of them only supported CUDA.  When I asked why they don’t support OpenCL, the response of most of these developers could be paraphrased as ‘The mathematical libraries and vendor support for CUDA are far more developed than those of OpenCL, so CUDA support is significantly easier to integrate into our product.’  Despite this, however, OpenCL support is definitely growing in the world of mathematical and scientific software.

OpenCL in Mathematics software and libraries

  • ViennaCL, a GPU-accelerated C++ open-source linear algebra library, was updated to version 1.2.0 on December 31st (just missing the deadline for December’s Month of Math Software).  Roughly speaking, ViennaCL is a mixture of Boost.uBLAS (high-level interface) and MAGMA (GPU support), yet based on OpenCL rather than CUDA.
  • AccelerEyes released a new major version of their GPU-accelerated MATLAB toolbox, Jacket, in late December 2011.  The big news as far as this article is concerned is that it includes support for OpenCL; something that is currently missing from The Mathworks’ Parallel Computing Toolbox.
  • Not content with bringing OpenCL support to MATLAB, AccelerEyes also released ArrayFire – a free (for the basic version at least) library for C, C++, Fortran, and Python that includes support for both CUDA and OpenCL.
  • Although it’s not new news, it’s worth bearing in mind that Mathematica has supported OpenCL for a while now – since the release of version 8 back in November 2010.

Finite Element Modelling with Abaqus

  • Back in May 2011, version 6.11 of the finite element modelling package Abaqus was released, and it included support for NVIDIA cards (see here for NVIDIA’s page on it).  In September, GPU support in Abaqus was broadened to include AMD hardware with an OpenCL-compliant release (see here).

Other projects

  • In late December 2011, the first alpha version of FortranCL, an OpenCL interface for Fortran 90, was released.

What do you think?  Will OpenCL start to take the lead in scientific and mathematical software applications this year or will CUDA continue to dominate?  Are there any new OpenCL projects that I’ve missed?

May 6th, 2011

Updated January 4th, 2012

It is becoming increasingly common for programmers to make use of GPUs (Graphical Processing Units) to speed up their programs substantially.  There are three major low-level programming libraries that allow you to do this in languages such as C; namely CUDA, OpenCL and Microsoft DirectCompute.  Of these three, CUDA is the most developed but it only works on Nvidia graphics cards.

I am often asked if the major commercial math packages support GPU computing and I find myself writing the same summary email over and over again.  So, here is a very brief breakdown of what is currently on offer.  I plan to expand the information contained in this page over time so if you have any information about GPU computing in these packages then let me know.

MATLAB

Core MATLAB contains no support for GPU computing but several organizations (including The Mathworks themselves) have produced add-on toolboxes that add such support:

  • Jacket – This is a product from a company called AccelerEyes and is possibly the most advanced and well developed GPU solution for MATLAB currently available.  As of version 2.0 it supports both OpenCL and CUDA frameworks.
  • The Mathworks’ Parallel Computing Toolbox (PCT) – If you want to do your MATLAB GPU computing the officially supported way then this is the product you need.  As a bonus, it also allows you to make better use of the multicore processor that almost certainly resides in your machine.  Like many of the offerings on this page, only the CUDA framework is supported so you are out of luck if you don’t have an NVIDIA graphics card.  Even if you do have an NVIDIA card you might still be out of luck, since the PCT only supports cards of compute capability 1.3 or above (i.e. those capable of double precision).
  • CULA is a set of GPU-accelerated linear algebra libraries utilizing the NVIDIA CUDA parallel computing architecture and it has a MATLAB interface.
  • GPUmat – This product is completely free but is less developed than the commercial offerings above.  Again, it is CUDA only.
  • OpenCL toolbox – The only OpenCL solution for MATLAB I could find.  It is free but development seems to have stalled.

Mathematica

Mathematica 8 has support for both CUDA and OpenCL built in so no need for any add-ons.  Furthermore, it supports both single and double precision GPUs so you can experiment with GPU computing on older, cheaper cards.

Maple

Maple has had some CUDA-only GPU support since version 14.  On the face of it, the CUDA package only appears to contain one accelerated function – matrix-matrix multiplication – but when you load this function it accelerates many functions that use matrix-matrix multiply internally.  I’ve never found a definitive list of such functions though.

Mathcad

Mathcad 15 and Mathcad Prime have no support for GPU enhanced computing.
