Over 73% of users of our High Performance Computing (HPC) service only ever run single core jobs

January 10th, 2017 | Categories: HPC, parallel programming, RSE, Scientific Software, University of Sheffield

I work at The University of Sheffield where I am one of the leaders of the new Research Software Engineering function. One of the things that my group does is help people make use of Sheffield’s High Performance Computing cluster, Iceberg.

Iceberg is a heterogeneous system with around 3440 CPU cores and a sprinkling of GPUs. It’s been in use for several years and has been upgraded a few times over that period. It’s a very traditional HPC system that makes use of Linux and a variant of Sun Grid Engine as the scheduler, and it has served us well.


A while ago, the sysadmin pointed me to a goldmine of a resource — Iceberg’s accounting log. This 15 Gigabyte file contains information on every job submitted since July 2009. That’s more than 7 years of the HPC usage of 3249 users — over 46 million individual jobs.

The file format is very straightforward. There’s one line per job and each line consists of a set of colon-separated fields. The first few fields look something like this:

long.q:node54.iceberg.shef.ac.uk:el:abc07de:

The username is field 4 and the number of slots used by the job is field 35. On our system, slots correspond to CPU cores. If you want to run a 16 core job, you ask for 16 slots.
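If you just want to eyeball those two fields, a quick sketch (assuming the field positions above and the accounting file sitting in the current directory) is:

gawk -F: '{print $4, $35}' accounting | head

which prints the username and slot count for the first few jobs in the log.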

With one line of awk, we can determine the maximum number of slots ever requested by each user.

gawk -F: '$35>=slots[$4] {slots[$4]=$35};END{for(n in slots){print n, slots[n]}}' accounting > ./users_max_slots.csv
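Here’s the same program spread over a few lines with comments, purely to make the compressed one-liner easier to read; the logic is unchanged:

gawk -F: '
$35 >= slots[$4] { slots[$4] = $35 }    # remember the largest slot count seen for each user (field 4)
END {
    for (n in slots) print n, slots[n]  # one line per user: username and maximum slots requested
}
' accounting > ./users_max_slots.csv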

As a quick check, I grepped the output file for my username and saw that the maximum number of cores I’d ever requested was 20. I ran a 32 core MPI ‘Hello World’ job, reran the line of awk and confirmed that my new maximum was 32 cores.
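That check is a one-liner too; using the anonymised example username from the log line above (substitute your own):

grep abc07de users_max_slots.csv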

There are several ways I could have filtered this list of users but, since I was having awk lessons from David Jones, let’s use awk to create a new file containing the users who have only ever requested 1 slot.

gawk -F: '$35>=slots[$4] {slots[$4]=$35};END{for(n in slots){if(slots[n]==1){print n, slots[n]}}}' accounting > users_where_max_is_one_slot.csv

Running wc on these files tells us how many users are in each group:

wc users_max_slots.csv 

3250  6498 32706 users_max_slots.csv

One of those lines turned out to be blank, so 3249 distinct usernames have been used on Iceberg over the last 7 years.

wc users_where_max_is_one_slot.csv 
2393  4786 23837 users_where_max_is_one_slot.csv

That is, over the last 7 years, 2393 of our 3249 users (just over 73%) have only ever run 1 slot, and therefore 1 core, jobs.
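For what it’s worth, the whole calculation can be done in a single pass of awk with no intermediate files. A sketch, which also skips the blank username mentioned above:

gawk -F: '$4 != "" && $35 >= slots[$4] { slots[$4] = $35 }
END {
    for (n in slots) { total++; if (slots[n] == 1) single++ }
    printf "%d of %d users (%.1f%%) only ever requested 1 slot\n", single, total, 100 * single / total
}' accounting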

High Performance?

So 73% of all users have only ever submitted single core jobs. This does not necessarily mean that they have not been making use of parallelism. For example, they might have been running job arrays – hundreds or thousands of single core jobs performing parameter sweeps or Monte Carlo simulations.

Maybe they were running parallel codes but only asked the scheduler for one core. In the early days this would have led to oversubscribed nodes, with possibly up to 16 jobs each trying to use 16 cores. These days, our sysadmin does some voodoo to ensure that jobs can only use the number of cores they have requested, no matter how many threads their code spawns. Either way, making this mistake is not great for performance.

Whatever is going on, this figure of 73% is surprising to me!

Thanks to David Jones for the awk lessons, although if I’ve made a mistake, it’s all my fault!

Update (11th Jan 2017)

UCL’s Ian Kirker took a look at the usage of their general purpose cluster and found that 71.8% of their users have only ever run 1 core jobs. https://twitter.com/ikirker/status/819133966292807680


  1. Ian Sudbery
    January 10th, 2017 at 23:47

    I wonder how many of those usernames are ever used a substantial amount?

    We give all of the undergrads on our programming course HPC logins, but mostly just because it is the most convenient Linux system that can handle 40 users simultaneously, with fault tolerant, redundant servers. None of them will ever have done anything with the system other than schedule an interactive session. That’s 40 new user IDs every year. I bet many others do the same.

  2. Mike Croucher
    January 11th, 2017 at 08:20

    Hi Ian.

    Good point! Thank you. It would be useful to filter out those undergrads if it’s possible.
    Perhaps those undergrads would be better served by cloud services?

    Cheers,
    Mike

  3. Ian Cottam
    January 11th, 2017 at 12:27

    I’ve often wondered why such systems don’t run HTCondor (rather than an SGE variant), given that the vast majority of jobs are high throughput. HTCondor can also run multicore jobs, as you know.

    I think several of the big systems the CERN particle physicists run are HTCondor now. (But can’t swear to it.)
    cheers
    -Ian
    ps: you can do all the various queries you want in awk, no need for pipes into wc or grep :-)

  4. Ian
    January 11th, 2017 at 13:55

    @Mike Croucher
    People who have been running courses for years that “just work” are not particularly keen to change, especially when that involves trusting that, if they put several months of their time into changing the way a course is run, the outside partner isn’t going to change at any point in the next few years and require them to do it all again.

    Plus using cloud services could lead to costs when there is 0 budget.