Rapid Access Service


Any Compute Canada user can access modest quantities of compute, storage and cloud resources as soon as they have an active Compute Canada account. This Rapid Access Service (RAS) allows users to experiment and start working right away. Many research groups can meet their needs using the Rapid Access Service alone. Users requiring larger quantities of resources can apply to one of our annual Resource Allocation Competitions (RAC).

Please consult the Compute Canada Technical Glossary if you have questions about any of the terminology used on this page. General questions can be emailed at any time to rac@computecanada.ca.

[Figure: RAC vs RAS decision chart]

Rapid Access to Batch Computational Resources

Compute Canada operates large shared computing facilities on which we schedule computational “jobs” via a priority-setting mechanism known as “fair share”. Each year we allocate roughly 80% of the available compute cycles via formal Resource Allocation Competitions and leave roughly 20% for the Rapid Access Service.

Unlike the Resource Allocation Competition process, the Rapid Access Service is not a guaranteed allocation of certain computational resources. It is a shared pool of unallocated resources, available for “opportunistic use” by anyone with an active Compute Canada account.
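The fair-share idea mentioned above can be sketched as follows: a group's scheduling priority decays as its recent usage grows relative to its allocated share. Below is a minimal Python sketch using the half-decay formulation found in schedulers such as Slurm; the exact formula and parameters used on Compute Canada systems are not specified here.

```python
def fair_share_factor(usage: float, share: float) -> float:
    """Priority factor in (0, 1]: 1.0 for no recent usage, 0.5 when
    usage equals the group's share, decaying toward 0 beyond it."""
    return 2.0 ** (-usage / share)

# A group that has used twice its share is deprioritized relative
# to a group that has used only half of its share.
heavy = fair_share_factor(usage=0.2, share=0.1)   # 0.25
light = fair_share_factor(usage=0.05, share=0.1)  # ~0.71
print(heavy < light)  # True
```

In this scheme, opportunistic (RAS) jobs still run whenever cycles are free; heavy recent usage simply pushes a group further back in the queue rather than blocking it outright.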

Rapid Access Service on National Systems

Anyone needing less than 50 core-years (total needs on all machines combined) of batch computational resources on a national compute system is expected to use the Rapid Access Service. If you require more than 50 core-years on a national compute system, you should apply to our Resource Allocation Competitions.

National Compute System   Location                  Type of System                            Availability
Cedar (GP2)               Simon Fraser University   Heterogeneous, general-purpose cluster:   Spring 2017
                                                      • Serial and small parallel jobs
                                                      • GPU and big memory nodes
                                                      • Small cloud partition
Graham (GP3)              University of Waterloo    Heterogeneous, general-purpose cluster:   Spring 2017
                                                      • Serial and small parallel jobs
                                                      • GPU and big memory nodes
                                                      • Small cloud partition
Rapid Access Service on Legacy Systems

The number of core-years available for opportunistic use on legacy systems is set by local system policies. More information on those systems can be found here.

Compute Burst

Starting in Spring 2017, we will pilot a new service called “Compute Burst”. It will be available on the new clusters commissioned as part of our technology deployment plan (GP2 and GP3 to start). Once implemented, Compute Burst will allow Rapid Access Service users to create modest short-term allocations at any time using a lightweight process. Stay tuned for announcements related to the implementation of the Compute Burst pilot.

GPU

Requests for up to 10 GPU-years are accepted via the Rapid Access Service. If you need more than that, you must submit an application to one of Compute Canada’s Resource Allocation Competitions.
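The compute thresholds stated above (50 core-years on national systems, 10 GPU-years) can be summarized as a simple decision rule. The helper below is illustrative only, not an official Compute Canada tool:

```python
# Illustrative decision helper encoding the thresholds stated above
# (50 core-years, 10 GPU-years). Not an official Compute Canada tool.
def recommended_service(core_years: float = 0.0, gpu_years: float = 0.0) -> str:
    """Return 'RAC' if either stated RAS threshold is exceeded, else 'RAS'."""
    if core_years > 50 or gpu_years > 10:
        return "RAC"
    return "RAS"

print(recommended_service(core_years=30))               # RAS
print(recommended_service(core_years=5, gpu_years=12))  # RAC
```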


Rapid Access to Storage Resources

Users can access a variety of storage types, with no requirement to apply to a Resource Allocation Competition:

  • /SCRATCH: This filesystem, available on compute nodes, is composed of high-performance storage used during computational jobs. Input data should be copied to scratch before a job runs, and results should be moved off scratch once job execution is complete. Scratch storage is usually subject to periodic “cleaning” (or purging) according to local system policies.
  • /HOME: The home directory is persistent, smaller than scratch and, on most systems, backed up regularly. It is visible to all nodes in a given cluster and commonly used to store a user’s personal files, executable programs, job execution scripts, and input data.
  • /PROJECT: The project filesystem is composed of medium-performance disk and is generally not available to compute nodes on a clustered system. It offers more storage than a home directory and, on most systems, is backed up regularly. It is generally used to store frequently used project data.
  • /NEARLINE: The nearline filesystem is made up of medium- to low-performance storage in very high capacity. It should be used for infrequently accessed data that needs to be kept for long periods of time. This is not true archival storage, in that the datasets are still considered “active”.
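The scratch/project workflow described above can be sketched in Python. Real mount points differ by cluster, so temporary directories stand in for /project and /scratch here; the computation step is a placeholder:

```python
# Illustrative sketch of the stage-in / compute / stage-out workflow.
# Temporary directories stand in for the real /project and /scratch mounts.
import shutil
import tempfile
from pathlib import Path

project = Path(tempfile.mkdtemp())  # stands in for /project/<group>
scratch = Path(tempfile.mkdtemp())  # stands in for /scratch/<user>

(project / "input.dat").write_text("input data\n")

# 1. Stage input onto high-performance scratch before the job runs.
shutil.copy(project / "input.dat", scratch)

# 2. Run the computation against scratch (placeholder step).
result = (scratch / "input.dat").read_text().upper()
(scratch / "output.dat").write_text(result)

# 3. Copy results back to /project for long-term storage, then clean
#    scratch rather than waiting for the periodic purge.
shutil.copy(scratch / "output.dat", project)
for f in list(scratch.iterdir()):
    f.unlink()
```

Working from scratch during the job keeps heavy I/O off the slower, backed-up filesystems, and cleaning up afterwards avoids hitting the per-user scratch quota or the purge policy.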

The National Data Cyberinfrastructure (NDC) is being deployed at four sites, associated with four new compute facilities, and will be available by April 2017. Through the Rapid Access Service, and without a Resource Allocation Competition allocation, you have access to the following storage resources on GP2, GP3 and LP:

Storage Type   Space Available, by Default       Maximum via RAS, by Request                # of Files Quota
/HOME          50GB per user                     NA                                         500K per user
/SCRATCH       20TB per user,                    100TB per user                             1M per user,
               100TB per group                   (maximum duration: 3 months)               10M per group
/PROJECT       NA                                10TB per group (GP2/GP3);                  500K per user,
                                                 1TB per group (LP)                         5M per group
/NEARLINE      NA                                5TB per group                              none

To request increased storage space, please contact support@computecanada.ca.

The table above is intended for groups that do not have a Resource Allocation Competition storage allocation. If you need more storage than the Rapid Access Service provides, you should apply for storage via our Resource Allocation Competitions. Note: /HOME and /SCRATCH are not generally allocated through RAC.
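The per-group RAS maxima in the table above amount to a simple lookup. The helper below encodes the GP2/GP3 figures for illustration; it is hypothetical, not an official Compute Canada API:

```python
# Hypothetical helper encoding the RAS storage maxima from the table
# above (GP2/GP3 figures). Not an official Compute Canada API.
RAS_MAX_TB = {"/SCRATCH": 100, "/PROJECT": 10, "/NEARLINE": 5}

def fits_ras(filesystem: str, request_tb: float) -> bool:
    """True if the request can be met through RAS alone (GP2/GP3 limits)."""
    limit = RAS_MAX_TB.get(filesystem)
    return limit is not None and request_tb <= limit

print(fits_ras("/PROJECT", 8))   # True: email support@computecanada.ca
print(fits_ras("/PROJECT", 25))  # False: apply through a RAC
```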

Storage on Legacy Systems

Compute Canada runs a number of legacy storage systems across the country. The amount of storage available without a Resource Allocation Competition allocation on each system is set by local system policies. These are documented on the regional websites linked from the Compute Canada wiki.


Rapid Access to Cloud Resources

There are three types of cloud resources available via the Rapid Access Service:

  • Testing: These instances have a limited life-time and are available for testing, debugging, etc. Users generally need only a few testing instances.
  • Compute: These are instances that have a limited life-time and typically have constant high-CPU requirements. They are sometimes referred to as ‘batch’ instances. Users may need a large number of compute instances for production activities.
  • Persistent: These instances are meant to run indefinitely and include web servers, database servers, etc. In general, they use less CPU power than compute instances (i.e., the nodes are “oversubscribed”).

If you need more resources than what is listed below, you must apply through our Resource Allocation Competitions.

Testing Cloud – available now

Access is immediate and requires no request; simply apply for a cloud account.

Testing Cloud – Max Allowed
  • VCPUs: 4
  • Instances: 2
  • Volumes: 2
  • Volume snapshots: 2
  • RAM: 15360 MB
  • Floating IPs: 1
  • Total size of volumes and snapshots: 40 GB
  • Default duration: 1 week
  • Maximum duration: 1 month
Compute Cloud – available Spring 2017

After April 2017, PIs will be able to request a Compute Cloud instance by sending an email to support@computecanada.ca. These Compute Cloud bursts are intended for groups that do not have a RAC award on the cloud.

Compute Cloud – Max Allowed
  • VCPUs: 80
  • Instances: 20
  • Volumes: 2
  • Volume snapshots: 2
  • RAM: 307200 MB
  • Floating IPs: 1
  • Total size of volumes and snapshots: 1000 GB
  • Default duration: 2 weeks
  • Maximum duration: 1 month
Persistent Cloud – available Spring 2017

After April 2017, PIs will be able to request a Persistent Cloud instance by sending an email to support@computecanada.ca. These persistent cloud instances are intended for groups that do not have a RAC award on the cloud.

Persistent Cloud – Max Allowed
  • VCPUs: 10
  • Instances: 5
  • Volumes: 5
  • Volume snapshots: 5
  • RAM: 45000 MB
  • Floating IPs: 1
  • Total size of volumes and snapshots: 1000 GB
  • Default duration: 1 year
  • Maximum duration: 1 year
