What is High Performance Computing (HPC)?

June 26th, 2015 | Categories: HPC

I hang out with a lot of people who work in the field of High Performance Computing (HPC) and learned long ago that a great conversation starter when in a room full of HPC experts is to ask the question ‘So…what IS High Performance Computing’ and then offer some opinion of my own. This apparently simple question can lead to some very heated debates!

I’m feeling in a curious mood and am wondering if this would work as a conversation starter for a blog post. So, here goes….

‘What IS High Performance Computing? I think it’s anything that requires more computational resources (memory, disk, CPU/GPU) than is available on a typical half-decent laptop or desktop’

What’s your view? Comments are open!

  1. June 26th, 2015 at 18:15

    I would agree. Today it seems commodity-based HPC is about solving problems through the collective use of average hardware solutions available on the market.

    Multiple cores, multiple GPUs, multiple disks, multiple systems, etc., all used to solve one large problem, many variations of a smaller problem, or both.

  2. June 26th, 2015 at 18:42

    I’m channeling my inner Miron here, but HTC, aka high-throughput computing, is clearly now the new normal. We benched our 60,000-core box recently; our community goes where the weather is good and uses what works for them. There are a couple of thousand researchers banging away at our kit. 90% of it is “pleasantly parallel” single-core stuff.

    We are seeing this nationally too, with the rise of computing for the 99% on epic machines like Comet: https://portal.xsede.org/sdsc-comet. Add to this the incredible growth in GPGPU, and some boxes whose only trick is having 1 or 2 TB of DRAM… you can certainly call that high performance computing.

    These days I pretty much tag everything I type into twitter with #hpc… I hear you can also run a whole lot of HPC on the #cloud (whoops), but that’s a story for another time :-)

  3. June 26th, 2015 at 18:52

    I think your definition is good. I’m inclined to up the threshold slightly — say, make the comparison a 2-socket commodity server rather than a desktop — on the basis that I don’t like calling a single AWS c4.xlarge instance “HPC”. :) But I don’t insist on it.

    Alternative definition, based on programming model rather than scale: “You’re doing HPC if you’re parallelizing work across multiple physical processors to improve performance.”
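
    To make that concrete, here is a minimal sketch of the kind of thing I mean, using MPI in C. It is purely illustrative: the partial-sum example and the compile/run commands in the comments are my own assumptions about a typical MPI setup, not anything specific to the systems discussed here.

    /* Minimal sketch: split a sum across many ranks, then combine the results.
     * Each rank works on its own share of 0..N-1; MPI_Reduce gathers the total.
     * Illustrative only -- compile with something like `mpicc sum.c` and run
     * with `mpirun -np 4 ./a.out` (exact commands depend on your MPI install). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* how many processes in total? */

        const long N = 100000000;               /* total amount of "work" (arbitrary) */
        double local = 0.0;
        for (long i = rank; i < N; i += nprocs) /* each rank takes every nprocs-th term */
            local += 1.0 / (i + 1.0);

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of 1/n for n = 1..%ld is %f (computed on %d processes)\n",
                   N, total, nprocs);

        MPI_Finalize();
        return 0;
    }

    Run it on one laptop core, then on a few hundred cluster cores, and you have a crude working version of the performance gap being argued about here.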

  4. Mike Croucher
    June 26th, 2015 at 18:59

    I remember when 2 sockets meant 2 single-core CPUs. *looks at his quad-core laptop* How things change :)
    Yesterday’s HPC is today’s laptop :)

    I like the focus on programming model. If we were having beers together, I’d wait until people started talking OpenMP and MPI in C or Fortran before asking ‘Does it count as HPC if you code in MATLAB, Python or R?’, then sit back while the feathers fly :)

  5. June 26th, 2015 at 19:07

    Yeah – maybe there is a thresholding aspect here. “It no longer fits on a single box” isn’t a bad metric. Mind you, even our laptops are starting to get out of hand (see below). The 2U “unit of issue” is a pretty good place to be: at that point you need either a scheduler, or you have to work out how to carve your stuff up into pieces, with or without MPI. Once you head down the path of “bigger than a single system image”, I could argue you are heading down a high-performance route, mainly because the single system image doesn’t have enough “performance”; otherwise you would either wait, or not bother carving your stuff up :-))

    jcair:~ jcuff$ system_profiler SPHardwareDataType
    Hardware:

    Hardware Overview:

    Model Name: MacBook Pro
    Model Identifier: MacBookPro11,1
    Processor Name: Intel Core i5
    Processor Speed: 2.4 GHz
    Number of Processors: 1
    Total Number of Cores: 2
    L2 Cache (per Core): 256 KB
    L3 Cache: 3 MB
    Memory: 16 GB

  6. Mike Croucher
    June 26th, 2015 at 19:16

    So I was getting serious HPC envy over Harvard’s 60,000-core box compared to Sheffield’s…um…smaller one!

    Then I realised I have a better laptop and so feel better ;)

    Mikes-MBP:~ walkingrandomly$ system_profiler SPHardwareDataType
    Hardware:

    Hardware Overview:

    Model Name: MacBook Pro
    Model Identifier: MacBookPro11,3
    Processor Name: Intel Core i7
    Processor Speed: 2.8 GHz
    Number of Processors: 1
    Total Number of Cores: 4
    L2 Cache (per Core): 256 KB
    L3 Cache: 6 MB
    Memory: 16 GB

  7. Glenn K. Lockwood
    June 26th, 2015 at 19:58

    Perhaps I’m old-fashioned, but I feel like high performance computing should be defined as computing where, in some context, high performance is the principal concern. That is to say, HPC is, well, high-performance…computing.

    Is Matlab considered HPC? Maybe, but if you’re using Matlab, aren’t you knowingly sacrificing performance for developmental convenience?

    Is running an application that eats up 1 TB of RAM considered HPC? If it strives to play nicely with the system’s underlying NUMA topology, performs effective cache reuse, and benefits from balanced memory bandwidth, then yes. If it’s just loading a ton of garbage into memory for programmer convenience, then it’s starting to smell like the Matlab case to me.

    Is the cloud HPC? If your application does a lot of IO (disk or network) but you aren’t sure if your instance’s IO is hurting your application performance, well, probably not. If you have a killer application that makes very good use of vectorization and you’re using AVX2-enabled instances for that reason, then it is absolutely HPC.

    What about high-throughput computing? Are you losing performance because each of your pipeline stages is naively book-ended by reading its input and writing its output to slow network storage? Or are you cleverly staging intermediate IO to node-local SSDs to maximize data locality, and only sending IO over the network when necessary?

    This is not to say that performance is the only parameter within the sphere of the HPC world. HPC exists to solve problems (of scientific, economic, or any other nature), and as such, there is a lot of overlap between the HPC sphere and other domains’ spheres. If you’re in HPC you’re likely also in another sphere, but I don’t think that changes what defines HPC itself.

  8. Jeff Pummill
    June 26th, 2015 at 21:22

    So, I expected that if I waited long enough, Glenn would make an eloquent and meaningful statement that echoes my sentiments as well… GOOD SHOW, Glenn ;-)

    I originally got into HPC due to the awe of the huge, powerful machines of the time, which were doing calculations unfathomable to researchers before: efficiently exploiting multitudes of cores, incredible I/O rates, etc. To this day, much of that still holds true. While I realize that the term “HPC” has become very diverse in use due to the many disciplines now highly dependent on “computational science” as a major component of their efforts, I still get a huge buzz when I hear of someone utilizing a highly optimized, Fortran-based MPI code on tens of thousands of cores to solve a new problem in science. For me, personally, such an effort epitomizes the term “HPC”.

    Of course, there WAS a time when we referred to all of this as “supercomputing”. Is this term still applicable? Across the spectrum? Or only for very large capability systems like Titan, Mira, etc?

  9. Mike Jenkins
    June 29th, 2015 at 17:04

    It means having to pay extra to use multiple cores of your workstation! :)

    ANSYS HPC Packs are a flexible way to license parallel processing capability. For single users who want the ability to run simulations on a workstation, a single ANSYS HPC Pack provides great value with increased throughput of up to 8 times. For users with access to larger HPC resources, multiple HPC Packs can be combined to enable parallel processing on hundreds or even thousands of processing cores. ANSYS HPC Packs offer virtually unlimited parallel processing for the biggest jobs, combined with the ability to run multiple jobs using everyday parallel capacity.

  10. July 3rd, 2015 at 16:01

    For me, “High-Performance-Computing” is synonymous with using a “Distributed Memory” system, be it a supercomputer / compute cluster, a CPU + (discrete) GPU Workstation or a distributed cloud-based (internet-connected) system. The major challenge in HPC is the design of algorithms that minimize communication between the (distributed) compute nodes.
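
    To illustrate that point with a standard pattern (a generic sketch, not anything from this discussion): a 1-D halo exchange, where each rank owns a slab of a larger array and only trades a single boundary cell with each neighbour per step, instead of shipping whole slabs around. The slab size, step count and update rule below are placeholders I made up for the example.

    /* Minimal sketch of a communication-minimizing pattern: 1-D halo exchange.
     * Each rank owns LOCAL_N cells plus two halo cells; per step it exchanges
     * only one boundary value with each neighbour. Illustrative only. */
    #include <stdio.h>
    #include <mpi.h>

    #define LOCAL_N 1000   /* cells owned by each rank (placeholder size) */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
        int right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

        double u[LOCAL_N + 2];                 /* u[0] and u[LOCAL_N+1] are halos */
        for (int i = 0; i < LOCAL_N + 2; i++)
            u[i] = rank;                       /* placeholder initial data */

        for (int step = 0; step < 100; step++) {
            /* Exchange one cell with each neighbour: tiny messages, nothing more. */
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Local update using the freshly received halo values. */
            double next[LOCAL_N + 2];
            for (int i = 1; i <= LOCAL_N; i++)
                next[i] = 0.5 * (u[i - 1] + u[i + 1]);
            for (int i = 1; i <= LOCAL_N; i++)
                u[i] = next[i];
        }

        if (rank == 0)
            printf("done: %d ranks, %d cells each, at most 2 halo cells sent per rank per step\n",
                   nprocs, LOCAL_N);

        MPI_Finalize();
        return 0;
    }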

  11. July 12th, 2015 at 22:20

    Great question, and one that has not changed in decades. The definition from about 20 years ago, similar to yours and the one I like most, is:
    “An order of magnitude more computing power than available on the desktop”

    I think this is a bit out of date, as it’s probably two orders of magnitude at least. It does not distinguish high-throughput from high-performance computing, as discussed. The UK government changed from HPC to High-End Computing a while back to make this distinction.

    In practical terms, the distinction between HTC and HPC/HEC seems to be tightly coupled, distributed-memory applications that require low-latency network interconnects. This also distinguishes HPC from ‘Big Data’ solutions, such as Hadoop, which are I/O-intensive rather than compute-intensive. They can perform better on HPC hardware, although typical large installations use Ethernet-based (commodity) networking to scale out massively. Note that cloud computing is not really different; for example, true HPC with InfiniBand is now available in the public cloud from us (Azure Big Compute), delivering sub-3-microsecond latency in MPI applications.
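
    For a feel of what those latency numbers mean in practice, here is a minimal ping-pong micro-benchmark sketch in MPI/C. It is illustrative only: the repetition count and one-byte payload are arbitrary choices of mine, and serious measurements would use an established suite such as the OSU micro-benchmarks, with warm-up iterations and a sweep over message sizes.

    /* Minimal ping-pong sketch: rank 0 sends a tiny message to rank 1, which
     * sends it straight back; half the average round-trip time approximates
     * the point-to-point latency. Run with at least two ranks, e.g.
     * `mpirun -np 2 ./a.out`. Illustrative only. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int reps = 10000;
        char byte = 0;                 /* 1-byte payload */
        double t0 = MPI_Wtime();

        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double half_rtt = (MPI_Wtime() - t0) / reps / 2.0;
            printf("approximate one-way latency: %.2f microseconds\n", half_rtt * 1e6);
        }

        MPI_Finalize();
        return 0;
    }

    On commodity Ethernet that figure is typically tens of microseconds; on a low-latency interconnect it drops to a few, which is exactly the gap that separates tightly coupled HPC from everything else.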

    Back to the original question, maybe the answer should be:
    “Orders of magnitude more computing power than available on the desktop, and using non-commodity networking”

  12. Steve Salt
    August 5th, 2015 at 11:35

    I totally agree with you: the purpose of HPC is to provide computing resources that are unavailable in a desktop/laptop package. Let’s face it, even a high-spec desktop will only have 16-24 cores max (across 2 CPUs). I use two machines: a MacBook Pro (10,2, 2.6 GHz for those interested), which has 2 cores with HT (4 virtual cores), and a Dell Precision M6800, which is my employer’s machine and seems restricted to 2 of the 4 cores in the i7 processor, with CUDA disabled as well. I use these for running small analyses in Matlab and OptiStruct and to trial code for the cluster before unleashing it.

    So, the challenge then is to ensure that HPC systems such as Iceberg always offer more power than the desktops/laptops available on the market. Unless I find ~£1000 and can go out and buy the PC of my dreams!

  13. Mayuresh
    August 11th, 2015 at 12:24

    Reminds me of debates on what “artificial intelligence” is…