This document describes the business model of, and the access procedure to, the HPC infrastructure hosted at the DeiC National HPC Centre at SDU. Access to the HPC facility for academic users follows a schedule of four-month periods and is conditional on appropriate funding being available to academic users. The SDU Access Committee allocates time taking into consideration the funding available to each user/institution. Requests for access to the HPC facility from private/commercial users do not follow the same four-month schedule; they are handled directly by the Access Committee and are allocated depending on the available resources and within the limits of the service agreement with DeiC. The Access Committee is appointed by the SDU eScience steering committee, to which it reports its activities. The SDU eScience steering committee is responsible for the governance of the DeiC National Supercomputing Centre at SDU, in dialogue and collaboration with DeiC.
Access for Academic Users
All academic users have access to the national facility under the same conditions. Access to the facility is not free for academic users. The details of the pricing model for academic users are described in the following section. The Access Committee allocates time taking into consideration the funding available to academic users. Three major Danish universities (AAU, AU and SDU) have already committed to buy HPC resources every year, for a total amount of 21M DKK. To properly allocate resources on the facility, coordination among these institutions, which have already bought resources, is required. Currently, AAU has committed to buy 5M DKK of resources over the 4 years, and AU has committed to buy 0.25M DKK of resources every year for the next 4 years.
Pricing Model for Academic Users
The DeiC National HPC Centre, SDU, hosts state-of-the-art solutions for academic HPC suitable for a wide range of research and technological applications. The Abacus 2.0 supercomputer is currently the most power-efficient HPC installation in the Nordic countries and, thanks in part to the national funding from DeiC, offers one of the best pricing options for academic users in Europe.
The HPC centre is open to all Danish researchers, and all academic users can benefit from the available resources under the same conditions. In particular, the same price is offered to all academic users, regardless of their host institution. This allows all Danish researchers to take equal advantage of the DeiC contribution to the national facility at SDU.
The price for academic access to HPC resources is calculated based on the full cost of ownership of the facility. In particular, it includes:
• the cost of the HPC hardware and the cost of the common storage system;
• the cost of the infrastructure needed to host the HPC facility at SDU, including the server rooms, the cooling system, and the electrical components required to connect to the power grid (e.g. transformers);
• the cost of the electricity for the whole HPC facility (HPC hardware, storage system, cooling system).
The costs for IT-service support are not included in the estimate. Currently, this cost is low, corresponding to 1.5 FTE.
The final price for academic users is calculated to be:
• Slim nodes: 1,90 DKK / node-hour (~ 0,08 DKK / CPU core-hour)
• Fat nodes: 2,30 DKK / node-hour (~0,10 DKK / CPU core-hour)
• GPU nodes: 2,65 DKK / node-hour (~0,11 DKK / CPU core-hour)
The rates above are per node per hour in DKK, excluding VAT. All nodes have 24 CPU cores, so the rates per CPU core-hour are obtained by dividing the node-hour prices by 24.
The prices are estimated based on the assumption of an average utilization of 80% of the machine and a life span of 4 years for the HPC hardware.
The prices take into account the co-financing from DeiC, which reduces the overall price for all users by about 35%.
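As a sanity check on the figures above, the per-core rates and a total project cost follow directly from the node-hour prices. The helper below is purely illustrative; its function names are not part of any facility tooling.

```python
# Illustrative cost calculator based on the published academic rates above
# (DKK, excluding VAT) and the stated 24 CPU cores per node. The function
# names are illustrative only, not part of any facility tooling.
NODE_RATES_DKK = {"slim": 1.90, "fat": 2.30, "gpu": 2.65}
CORES_PER_NODE = 24

def core_hour_rate(node_type: str) -> float:
    """Approximate price per CPU core-hour for a node type."""
    return NODE_RATES_DKK[node_type] / CORES_PER_NODE

def project_cost(node_type: str, node_hours: float) -> float:
    """Total price in DKK for a given number of node-hours."""
    return NODE_RATES_DKK[node_type] * node_hours

print(round(core_hour_rate("slim"), 2))  # 0.08 DKK per core-hour
print(project_cost("slim", 10_000))      # 19000.0 DKK for 10,000 node-hours
```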
Note that under the pricing model at the national HPC facility at SDU, storage costs are included in the node-hour price, i.e. users are not required to pay separately for the use of the storage system. SDU applicants may receive a grant from the Rektors Aktivitets Pulje (RAP) to fund computational resources on the HPC facility.
SDU applicants are hereby referred to the specific conditions available in Appendix 3 of this document.
How academic users obtain an account
Before you can gain access to Abacus 2.0, an administrative procedure must be followed.
The allocation of the available resources at the HPC facility for academic users follows a regular schedule of four-month periods, called “allocation periods”. The three yearly allocation periods are: 1st of March to 30th of June, 1st of July to 31st of October, and 1st of November to the last day of February.
The HPC resources are allocated in terms of node-hours and storage space.
All academic users are required to submit a request to the facility. The forms are collected online through the website of the HPC facility (http://deic.sdu.dk) and forwarded to the Access Committee for evaluation.
Proposers for any kind of allocation schemes must hold at least the position of postdoc within their institution. PhD students cannot apply on their own, but must instead do so in consultation with, and under the auspices of, their supervisor.
Calls for access requests open at least 6 weeks before the start of the next period, and deadlines for the requests are set one month before the start of the next allocation period: 1st February, 1st June and 1st October.
Academic users can request computational resources for three different types of allocation: test projects, regular projects and long term projects. The different types of allocations are described in the next sections.
During the evaluation of access requests, the Access Committee may seek the advice of the HPC technical team at the SDU HPC facility regarding the overall technical feasibility of a project. Projects which cannot be accommodated by the HPC installation for technical reasons will be rejected.
As a consequence of the limited resources available, even high-quality projects might not be allocated resources for the current allocation period. The Access Committee can, under these circumstances, allocate resources for the next allocation period to these projects, as it deems appropriate, so as to ensure access without the need to resubmit an application.
As part of the allocated resources, access to the HPC facility storage system is provided. Applicants are asked to provide the storage requirements for their request in the form.
The DeiC National HPC centre at SDU will provide access to the infrastructure and the requested resources to the granted projects as well as the appropriate user support within the scope and the limits of the HPC centre policies and regulations.
Abacus 2.0 is not a free service. Users are billed according to how many node-hours they are allocated on Abacus 2.0, which means that node-hours not used by the applicant within the corresponding time frame will not be refunded.
However, up to 20% of the original quota for a period will automatically be carried over to the next period if the project has not used all of its available resources.
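The carry-over rule above can be sketched as follows; this is an illustrative reading of the 20% rule, and the function name is hypothetical.

```python
# Sketch of the carry-over rule described above: node-hours left unused at
# the end of a period roll over to the next period, capped at 20% of the
# original quota. The function name is hypothetical.
def carry_over(quota: float, used: float) -> float:
    """Node-hours carried into the next period (at most 20% of the quota)."""
    unused = max(quota - used, 0.0)
    return min(unused, 0.20 * quota)

print(carry_over(10_000.0, 9_500.0))  # 500.0  (all unused hours fit under the cap)
print(carry_over(10_000.0, 5_000.0))  # 2000.0 (5,000 unused, capped at 20%)
```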
Test Projects
Test projects are exploratory or preliminary in nature and require only a limited amount of resources. Test projects can request a maximum of 2000 node-hours on fat and GPU nodes and a maximum of 10000 node-hours on slim nodes; they are limited to one allocation period and one kind of node.
The Access Committee will favour test projects in the prioritization process as far as possible, to allow new users access to the HPC infrastructure.
The application form is available online at the HPC facility website and includes the following sections:
• A description of the test project that requires access to HPC resources and a plan for the proposed test activities (maximum 4000 ch);
• Requested resources.
Regular Projects
Regular projects provide access to HPC resources for one allocation period. The access period can be extended to consecutive allocation periods, up to a maximum of three, if the need for prolonged access is justified.
Access for regular projects, or simply projects, is intended for a specific activity. A description of this activity and a reasoned plan for the HPC resources needed is required as part of the application.
The application form is available online at the HPC facility website and includes the following sections:
• A general description of the research project, its goals and innovative aspects (maximum 4000 ch);
• A description of the specific activity that requires access to HPC resources and a justification of the requested resources. It must also contain a technical description of the software that will be used to carry out the project and its adequacy in relation to the requested HPC resources (maximum 4000 ch);
• A short CV of the project leader, a list of the project members and their previous experience in HPC environments (maximum 2000 ch).
Projects that require access for more than one allocation period can request access for up to three periods, provided adequate justification of the requested resources.
Long Term Projects
Research groups with experience in HPC computing which require continued access to HPC resources can apply for long term projects. This kind of allocation provides access for three allocation periods (one year) and is intended for wider research programs which make extensive use of HPC resources.
Applicants are required to have a track-record of research based on the use of HPC platforms.
The application form is available online at the HPC facility website and includes the following sections:
• A general description of the research program, its goals and innovative aspects and the need for prolonged access to HPC resources (maximum 4000 ch);
• A description of the planned activities that require access to HPC resources and a justification of the requested resources. It must also contain a technical description of the software that will be used to carry out the project and its adequacy in relation to the requested HPC resources (maximum 4000 ch);
• A short CV of the project leader and a list of the project members (maximum 2000 ch);
• A description of the team's experience in HPC environments and the research output from previous HPC allocations (maximum 3000 ch).
Long term projects provide access for three allocation periods (one year), provided adequate justification of the requested resources.
Applicants for long term projects must also specify how many resources are needed for each of the three allocation periods.
Pilot Projects
In addition to test, regular and long term projects, which always begin at the start of an allocation period, a fourth kind of allocation is provided: pilot projects. Pilot projects are limited in number, resources and duration, but can start at any time. Pilot projects are not subject to the evaluation procedure described above; instead, they are evaluated case by case by the Access Committee as soon as a request is received. Requests for pilot projects can be made by contacting the HPC centre via the following email address: firstname.lastname@example.org.
Additional support for DeiC pilot projects is available through DeiC, see https://deic.sdu.dk/get-access/deic-pilot.
Access for non-Academic Users
Non-academic users, such as private companies, can also request access to the DeiC National Supercomputing Centre at SDU by writing to email@example.com. The prices for non-academic users do not include the co-funding from DeiC and are aligned with current market prices.
Individual prices are agreed with non-academic users, based on their needs and level of support which they require. In addition, access for non-academic users is regulated by the service provider agreement with DeiC which, in particular, limits the amount of resources which can be allocated to non-academic users.
The DeiC National HPC Centre, SDU, requests acknowledgement in papers, presentations and other publications that feature work relying on Abacus 2.0. All users are required to comply with the HPC facility policies, available on its website.
• Appendix 1 Storage system policies
• Appendix 2 Queue policy
• Appendix 3 SDU applications and funding
Appendix 1 Storage system policies
In addition to the requested node-hours, projects running on Abacus 2.0 will be given access to the HPC facility storage system, within the limits of the disk space requested for the project and the HPC centre's own storage policy. Users must refer to the current official storage policies available on the SDU HPC facility website (http://deic.sdu.dk). These policies can change at the discretion of the HPC facility management to guarantee fair use of the available resources among all projects. This appendix summarises the current storage policies of the HPC centre.
Projects on the DeiC National HPC centre, SDU, have access to four major file systems, called ‘$HOME’, ‘$WORK’, ‘$SCRATCH’ and ‘$LOCALSCRATCH’. The policies for each of these are detailed below. The front-end nodes and all compute nodes can access all directories, except for $LOCALSCRATCH, which can only be accessed on each compute node.
Local node storage - $LOCALSCRATCH filesystem
• Can be used to store files and perform local I/O for the duration of a batch job.
• 330 GB (150 GB on GPU nodes) available on each compute node on a local disk.
• Files stored in the $LOCALSCRATCH directory on each node are removed immediately after the job terminates.
Home directory - $HOME filesystem
• This is your home directory. Store your own files, source code and build your executables here.
• At most 10 GB (150k files) per user, plus 10 GB for snapshots up to ten days back.
• Note: if you change and delete many files in $HOME during the 10 days, the snapshot space may not suffice, and you will only have snapshots for fewer days back.
• Snapshots are taken daily.
Work directory - $WORK filesystem
• Store large files here. Change to this directory in your batch scripts and run jobs in this file system.
• Space and number of files in this file system is setup for each project based on each project’s requirements. Quota is shared among all users within a project.
• The file system is not backed up.
• Purge Policy: the $WORK filesystem is purged 3 months after the end of the project, i.e., all information stored on this filesystem relative to the specific project will be automatically deleted with no possibility of recovery.
Scratch directory - $SCRATCH filesystem
• Temporary storage of larger files needed at intermediate steps in a longer run. Change to this directory in your batch scripts and run jobs in this file system.
• This file system is not backed up.
• Purge Policy: files with access times greater than 10 days are purged. NB: the system administrators of the HPC centre may delete files from $SCRATCH without previous notice in the event that the filesystem becomes full, even if the files have been accessed during the last 10 days. The use of programs or scripts to actively circumvent the file purge policy will not be tolerated.
Appendix 2 Queue policy
Users must refer to the current official queue policies available on the SDU HPC facility website (http://deic.sdu.dk). The job scheduling and queue system at the DeiC National HPC centre at SDU is based on SLURM (http://slurm.schedmd.com/).
These policies can change at the discretion of the HPC facility management with the aim to guarantee a quick and fair turnaround of both jobs and projects and to enable a more effective use of available HPC resources.
We summarise in this appendix the current queue policies of the HPC centre.
Job scheduling on the HPC system is done using Slurm (Simple Linux Utility for Resource Management).
The queue system is configured with three distinct partitions, corresponding to the three different types of nodes available: Slim, Fat, and GPU.
Queue priority is based on the waiting time of jobs combined with a fairshare factor derived from the allocated resources. This balances running jobs in the order they were added to the queue (FIFO) against ensuring that no user group can occupy an entire node partition for extended periods (fairshare).
Each month the fairshare is updated so that unused node-hours from previous months do not count in the new month; even if a group uses no node-hours for two months, in the third month its fairshare corresponds only to the fraction of its allocation for that month.
Jobs are backfilled: small jobs that can be fitted into gaps earlier in the schedule without delaying larger jobs are allowed to run ahead of their queue position.
To guarantee a fair turnaround of jobs and projects and to enable more effective use of HPC resources the wall-time of jobs is limited to at most 24 hours.
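The interplay between waiting time and fairshare described above can be sketched as a single score. The weights, normalisation and function name below are hypothetical; the actual Slurm multifactor configuration at the centre may differ.

```python
# Illustrative sketch of how waiting time (FIFO) and fairshare can be
# combined into a single priority score. The weights, normalisation and
# function name are hypothetical; the actual Slurm multifactor
# configuration at the centre may differ.
def job_priority(wait_hours: float, usage_fraction: float,
                 allocation_fraction: float) -> float:
    """Higher score = scheduled sooner.

    usage_fraction:      group's recent share of consumed node-hours
    allocation_fraction: group's share of the node-hours allocated this month
    """
    # The fairshare term is large when a group has consumed less than its
    # allocated share, boosting its jobs in the queue.
    fairshare = max(allocation_fraction - usage_fraction, 0.0)
    return 1.0 * wait_hours + 100.0 * fairshare

# A young job from an under-served group can outrank an older job from a
# group that has already consumed more than its share:
print(job_priority(2, 0.05, 0.30) > job_priority(10, 0.40, 0.30))  # True
```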
Appendix 3 SDU applications and funding
Evaluation of applications
Valid SDU applications are prioritized by the Access Committee based on evaluation criteria for each of the allocation types. Based on this priority list, the Access Committee grants the HPC resources available for the allocation period.
As a consequence of the limited resources available, even high-quality projects might not be granted resources for the current allocation period. The Access Committee can, under these circumstances, allocate resources for the next allocation period to these projects, as it deems appropriate, so as to ensure access without the need to resubmit an application.
The valid applications for test projects are prioritized based on the following criteria:
1. Innovation potential of the proposed HPC application (90%);
2. The technical adequacy in relation to the requested HPC resources (10%).
The valid applications for regular projects are prioritized based on the following criteria:
1. Excellence of the research project and the specific activity proposed (45%).
2. The adequateness of the requested resources and their effectiveness to complete the proposed research activity, including the real need for a supercomputer to perform the computation (35%).
3. The research credentials of the applicant research group (10%).
4. Experience and training in high performance computing (10%).
The valid applications for long term projects are prioritized based on the following criteria:
1. Excellence of the proposed research program (45%);
2. Previous experience with HPC platforms and research output from previous granted allocations (35%);
3. The research credentials of the applicant research group (10%);
4. The adequateness of the requested resources and their effectiveness to complete the proposed research activity, including the real need for a supercomputer to perform the computation (10%).
Cofinancing of SDU HPC Projects
The Rector of SDU has set aside 15M DKK from the Rektors Aktivitets Pulje (RAP) for the period 2015-2018 to (co)finance access to computational resources on the HPC facility by SDU researchers.
Funding for computational resources will be awarded to individual researchers or research groups via a Primary Investigator (PI).
Applications through the RAP are for compute and storage only. Applicants must make sure that any additional expenses related to their projects, such as staff time, are covered by other means.
RAP grant amount in 2015-2018
In 2015, all SDU applicants have been able to receive complete funding of the requested computational resources on the HPC facility. The aim was to give experienced and new HPC users the opportunity to use the HPC facility for their research and further encourage the use of HPC for new directions.
As a consequence of the limited amount of SDU funding available and an increasing number of applications, SDU researchers will, from 2016 onwards, no longer be able to receive a full grant for their requested use of the HPC facility. PIs will be requested to provide their own cofinancing to top up the funding available through the SDU RAP.
In 2016, test/pilot projects will continue to be fully financed via the RAP. However, from November 2016 (Period 3), each SDU faculty can finance test/pilot projects only up to 100.000 DKK per 4-month period. SDU applicants from all SDU faculties except NAT will continue to receive full funding for regular/long term projects via the RAP in 2016.
SDU applicants from NAT will need to provide their own cofinancing to top up resources from the RAP from July 2016 (Period 2) for regular/long term projects. The (co)financing provided by the RAP to researchers from NAT from 2016 onwards is as follows:
• 2016 Period 1: 100% RAP
• 2016 Period 2: 70% RAP
• 2016 Period 3: 50% RAP
• 2017 Period 1: 50% RAP
From March 2017 (Period 1), all regular and long term projects from SDU researchers from all faculties will receive a cofinancing of 50% from the RAP and will need to cover the remaining 50% from other resources.
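Under the 50% rule above, the split for a hypothetical project can be computed from the academic node-hour rates. The helper below is illustrative only; its name is not part of any application process.

```python
# Worked example of the 50/50 cofinancing rule above: from March 2017 the
# RAP covers half the cost of a regular or long term project and the PI
# covers the rest. Rates are the academic node-hour prices; the function
# name is illustrative.
def cofinancing_split(node_hours: float, rate_dkk: float,
                      rap_share: float = 0.50) -> tuple:
    """Return (RAP contribution, PI contribution) in DKK."""
    total = node_hours * rate_dkk
    return rap_share * total, (1.0 - rap_share) * total

rap, own = cofinancing_split(20_000, 1.90)  # 20,000 slim node-hours
print(rap, own)  # 19000.0 19000.0 -- each party covers half of 38,000 DKK
```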
Each SDU faculty can finance test/pilot projects only up to 100.000 DKK per 4-month period. Access for test/pilot projects is granted by the faculty representative on the eScience Steering Committee. Researchers should contact their faculty representative directly.
Additional support for NAT users from the Dean's office
SDU applicants from NAT for regular and long term projects can request a top-up to the RAP via the Dean's office. The Dean's office has set aside 3M DKK, to be divided equally over the years 2016-2018.
Applicants to the Dean's grant can request computational resources on the HPC facility only to top up the portion not covered by their SDU RAP grant.
To be eligible for a top-up grant, NAT applicants need to supply evidence that they have applied, or are going to apply, for other non-SDU grants.
Applicants apply for the Dean's grant separately, by e-mail to firstname.lastname@example.org, submitting an approved budget from their already submitted non-SDU grant application and/or a justification of where and when they intend to apply for non-SDU grants. Applications for the Dean's grant have the same deadline as the HPC applications.
If an applicant is successful in obtaining a non-SDU grant, that grant will replace the NAT grant as soon as possible. Please note that the Dean may recommend a reduced allocation in consideration of the resources available.
Only computational resources on the HPC facility can be applied for. This does not include staff time and other resources, which should be covered by other means.