RAC 2017 Overview and Stats


List of Resources for Research Groups (RRG) Awards (PDF)  (XLSX)
List of Research Platforms and Portals (RPP) Awards (PDF)  (XLSX)
CFI Challenge 1 Allocations (PDF)  (XLSX)

2017 Resource Allocations Competition Results
Computational Resources
  – CPU Allocations
  – GPU Allocations
  – Cloud Allocations
  – Storage Allocations
Acceptance Rate
Allocation Process
Scaling for Compute Requests
Monetary Value of the 2017 Allocations

Compute Canada reserves 80% of its resources for the Resource Allocation Competitions (RAC), leaving 20% for use via our Rapid Access Service. Unlike many other countries, Canada does not have a specialized provider serving very high-end needs separate from general-purpose needs.

We have done an analysis of our largest users compared to the bulk of our users (who do not have RAC awards) and have found the quality of their publications (by field-weighted citation impact) to be similar. In other words, both large and small Compute Canada users achieve high levels of scientific impact with the research performed with the assistance of Compute Canada resources. View our bibliometrics report here.

If you have questions about the terminology used on this page, please consult the Compute Canada Technical Glossary. For other questions or general inquiries, email rac@computecanada.ca or visit our Frequently Asked Questions page.

(Please note: all data contained within these pages is current as of April 29, 2017.)

Table 1: Applications submitted to the Resource Allocation Competitions
between 2011 and 2017

Year  Resources for Research Groups (RRG)  Research Platforms and Portals (RPP)  CFI Cyberinfrastructure Challenge 1  Total
2017  345  64  5  414
2016  324  42  –  366
2015  335  15  –  350
2014  291  –   –  291
2013  211  –   –  211
2012  159  –   –  159
2011  135  –   –  135

Computational Resources 

CPU Allocations

Based on the computing resources available for 2017, Compute Canada was able to allocate only 58% of the CPU core-years requested. However, as Table 2 shows, the CPU allocation success rate improved slightly in 2017 compared to 2016. Note that more than 50,000 of the cores available in 2017 are new and offer higher performance than those they replace.

Table 2: Historical CPU demand vs. supply (core years)

Year  Total CPU Capacity (core-years)  Total Requested (core-years)  Total Allocated (core-years)  Allocation Success Rate
2017 182,760 254,251 147,384 0.58
2016 155,952 237,862 128,463 0.54
2015 161,888 191,690 123,699 0.65
2014 190,466 172,989 133,508 0.77
2013 187,227 142,106 126,677 0.89
2012 189,024 103,845 87,312 0.84
2011 132,316 72,848 75,471 1.04
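The success rate column in Table 2 is simply core-years allocated divided by core-years requested; for 2017, 147,384 / 254,251 ≈ 0.58. A quick check (the function name is illustrative):

```python
def success_rate(allocated, requested):
    """Allocation success rate as used in Table 2: core-years allocated / core-years requested."""
    return round(allocated / requested, 2)

# 2017 row of Table 2: 147,384 core-years allocated of 254,251 requested.
rate_2017 = success_rate(147_384, 254_251)  # 0.58
```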

Table 3: 2017 CPU demand vs. supply by system

CPU System  2017 Allocatable Capacity (core-years)  2017 Total Request (core-years)  2017 Total Allocation (core-years)  Fraction of Capacity Allocated
briarée 7,000 8,596 5,107 0.73
bugaboo 4,220 5,658 3,507 0.83
CAC* 840 1,079 800 0.95
cedar-compute 24,000 36,935 21,076 0.88
glooscap 1,344 786 564 0.42
gpc_ib 30,912 45,081 28,560 0.92
graham-compute** 31,000 47,164 25,643 0.83
grex 3,712 4,579 2,464 0.66
guillimin 17,240 23,788 13,903 0.81
mp2 30,984 41,600 24,862 0.8
orca 7,680 9,502 4,687 0.61
orcinus 9,616 13,783 6,739 0.7
parallel 6,336 7,047 4,506 0.71
placentia 2,636 2,750 1,591 0.6
psi 900 515 334 0.37
sw_2 1,076 1,690 780 0.72
tcs 3,264 3,676 2,261 0.69
Total 182,760 254,251 147,384 0.81

* Total capacity of the CAC cluster is 2,600 core-years, but the allocatable capacity for 2017 is 840 core-years, so the allocation target in this case is 100% of 840 rather than 80%, as in the other systems.

** Allocation on Graham will increase to more than 85% in September 2017, when GPC is decommissioned and some of its current users are moved to the new system.


GPU Allocations

GPU resources were more constrained than CPU. As Table 4 shows, the demand for GPUs has increased 4.5x since 2015. Despite the increased demand, the allocation success rate rose to 38%, compared to 20% in 2016. GPUs in the newest systems have much greater performance than legacy GPU devices.

Table 4: Historical GPU demand vs. supply (GPU years)

Year  Total GPU Capacity  Total Requested  Total Allocated  Allocation Success Rate
2017 1,420 2,785 1,042 0.38
2016 373 1,357 269 0.2
2015 482 608 300 0.49


Table 5: 2017 GPU demand vs. supply by system

GPU System  2017 Allocatable Capacity (GPU-years)  2017 Total Request (GPU-years)  2017 Total Allocation (GPU-years)  Fraction of Capacity Allocated
cedar-gpu 584 1163 506.7 0.87
graham-gpu 320 843 253.9 0.79
guillimin-gpu 64 167 56.9 0.89
guillimin-phi 100 54 25.2 0.25
helios-gpu 72 230 61.0 0.85
parallel-gpu 180 326 138.4 0.77
monk-gpu* 100 0 0.0 0.00
Total 1,320 2,785 1,042.1 0.79

*Monk-GPU is out of warranty but available for opportunistic use at the user’s risk.

Cloud Allocations

The installation of Arbutus on the University of Victoria campus increased our cloud computing capacity from 104 nodes to 290 at UVic, plus 36 nodes in Cloud East at the Université de Sherbrooke. Storage at UVic was quadrupled, to over 2.2 petabytes. We received requests for 9,152 VCPUs against a capacity of 23,040 VCPUs. As awareness of and need for cloud computing resources grow, we anticipate greater demand in this area.

Table 6: 2017 VCPU demand vs. supply by system

Cloud system  2017 Allocatable Capacity (VCPU)  2017 Total Request (VCPU)  2017 Total Allocation (VCPU)  Fraction of Capacity Allocated
arbutus-compute-cloud 14,592 6,778 3,787 0.26
arbutus-persistent-cloud 7,296 2,374 1,990 0.27
East-cloud* 1,152 0 0 0.00
Total 23,040 9,152 5,776.6 0.25

*East-cloud is available for users needing cloud resources without an allocation via our Rapid Access Service.

Storage Allocations

The incorporation of the new systems Cedar (SFU), Graham (Waterloo), and Arbutus (Victoria) made it possible for Compute Canada to meet the storage demand in 2017, as Table 7 shows.

Table 7: 2017 Storage Supply vs. Demand by Storage Type (TB)

Storage Type  2017 Cluster Capacity (TB)  2017 Total Requested (TB)  2017 Total Allocated (TB)  Allocation Success Rate
Project 43,151 31,335 30,146 0.96
Nearline 83,333 16,640 16,892 1.02
Cloud 660 518.5 518.5 1.00
Total 127,144 48,493.5 47,556.5 0.98


Table 8: 2017 Project Storage Supply vs. Demand (TB)

Project Storage  2017 Allocatable Capacity (TB)  2017 Total Request (TB)  2017 Total Allocation (TB)  Fraction of Capacity Allocated
briarée 200 203 157 0.79
bugaboo 1,110 786 766 0.69
CAC 1,000 1,231 1,081 1.08
global_c 642 379 379 0.59
gpc_ib 3,000 2,104 1,664 0.55
guillimin-datastar 3,800 3,677 3,677 0.97
helios 0 2 0 –
mp2 800 926 693 0.87
NDC-SFU 14,900 9,300 9,143 0.61
NDC-Waterloo 15,000 10,821 10,845 0.72
NDC-UVic 2,443 1,800 1,643 0.67
orcinus 256 98 98 0.38
glooscap 0 8 0 0
Total 43,151 31,335 30,146 0.70


Table 9: 2017 Nearline Storage Supply vs. Demand (TB)

Nearline Storage  2017 Allocatable Capacity (TB)  2017 Total Request (TB)  2017 Total Allocation (TB)  Fraction of Capacity Allocated
guillimin-datastar 2,500 935 944 0.38
HPSS 12,500 5,483 5,886 0.47
mammouth-archive 8,333 90 30 0.00
NDC-SFU* 30,000 3,214 3,214 0.11
NDC-Waterloo 30,000 6,918 6,818 0.23
Total 83,333 16,640 16,892 0.20

* NDC = National Data Cyberinfrastructure

Table 10: 2017 Cloud Storage Supply vs. Demand (TB)

Cloud Storage (Ceph)  2017 Allocatable Capacity (TB)  2017 Total Request (TB)  2017 Total Allocation (TB)  Fraction of Capacity Allocated
arbutus-storage-cloud 560 518.5 518.5 0.93
East-cloud 100 0 0 0
Total 660 518.5 518.5 0.79


Acceptance Rate

Submissions are evaluated for both technical feasibility and scientific excellence. For the 2017 competitions, 414 applications were submitted and 390 allocations were awarded. Note that virtually all applicants are requesting resources to support research programs and highly qualified personnel (HQP) that are already funded through tri-council and other peer-reviewed sources.

This year’s resource allocations competition awarded 58% of the total compute requested and 98% of the total storage requested. Due to the competitiveness of the proposals and the limited amount of computing resources available, all projects, across all disciplines, received final allocations less than their original request.

Table 11: Requests vs. Allocations (broken down by resource)

RAC 2017  Number of Requests Received  Number of Requests Granted
Storage 282 271
CPU 351 314
GPU 42 34
Cloud (VCPU) 46 41 

Allocation Process

  • Compute Canada technical staff review each proposal;
  • A peer review panel evaluates each proposal:
    • Each proposal receives multiple independent reviews;
    • Scientific committees meet to discuss the applications;
    • The peer review panel may or may not recommend specific cuts for an application;
    • The peer review panel gives a final science score on a 5-point scale;
  • The committee of RAC chairs endorses a scaling function based on science score. That scaling function is applied to all compute requests.

Scaling for Compute Requests

As in previous years, in 2017 the available compute resources were not enough to satisfy the demand. This is because a considerable number of legacy systems are being removed from service at the same time that the new systems are coming online.

The scaling function applied to the 2017 competition (see chart below) was set so that only applications with a science score of 2.25 or higher received an allocation, up to a maximum of 87.5% of the request for those with a score of 5. Note that applicants who did not receive a compute allocation can still make opportunistic use of the systems via our Rapid Access Service.
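The published parameters pin down only two points on the curve: no award below a science score of 2.25, and 87.5% of the request at a score of 5. The sketch below assumes, purely for illustration, a linear ramp between those two points; the actual function endorsed by the RAC chairs is not reproduced here, and `scaling_factor` and `scaled_allocation` are hypothetical names:

```python
def scaling_factor(score, cutoff=2.25, top_score=5.0, max_fraction=0.875):
    """Fraction of the requested compute awarded for a given science score.

    The cutoff (2.25) and maximum (87.5% at a score of 5) come from the 2017
    competition; the linear ramp between them is an illustrative assumption,
    not the published curve.
    """
    if score < cutoff:
        return 0.0  # below the cutoff, no compute allocation is awarded
    # Linear interpolation between (cutoff, 0) and (top_score, max_fraction).
    return max_fraction * (score - cutoff) / (top_score - cutoff)

def scaled_allocation(core_years_requested, score):
    """Core-years awarded after the scaling function is applied."""
    return core_years_requested * scaling_factor(score)
```

Under this assumed ramp, a top-scored request for 10,000 core-years would be scaled to 8,750 core-years, while any request scoring below 2.25 would receive nothing and fall back to the Rapid Access Service.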

Monetary Value of the 2017 Allocations

These values represent an average across all Compute Canada facilities and include the total capital and operational costs incurred by Compute Canada to deliver the resources and associated services. These are not commercial or market values. For the 2017 competition, the value of the resources allocated was calculated on a per-year basis using the following rates:

  • $188.84 / core-year
  • $566.52 / GPU-year
  • $128.00 / TB-year
  • $40.50 / VCPU-year
  • $178.50 / cloud storage TB-year (Ceph)
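The notional value of an award follows from the rates above by straight multiplication; for example, 1,000 core-years is valued at 1,000 × $188.84 = $188,840. A minimal sketch (the `RATES` mapping and `allocation_value` helper are illustrative, not a Compute Canada API):

```python
# Published 2017 per-year rates in CAD, from the list above.
RATES = {
    "core_year": 188.84,        # $ / core-year
    "gpu_year": 566.52,         # $ / GPU-year
    "storage_tb_year": 128.00,  # $ / TB-year
    "vcpu_year": 40.50,         # $ / VCPU-year
    "ceph_tb_year": 178.50,     # $ / cloud storage TB-year (Ceph)
}

def allocation_value(**amounts):
    """Dollar value of an allocation; keyword names must match RATES keys."""
    return sum(RATES[key] * quantity for key, quantity in amounts.items())

# Example: 1,000 core-years + 10 GPU-years + 50 TB of project storage.
value = allocation_value(core_year=1000, gpu_year=10, storage_tb_year=50)
# 188,840.00 + 5,665.20 + 6,400.00 ≈ $200,905.20
```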

Please note that the valuation of each of these resources goes down each year as older, more expensive resources are retired and replaced with newer, more cost-effective ones.
