User News

Stay up to date with the latest news from XSEDE and the XSEDE User Portal. Subscribe for email notifications.


Anvil scheduled maintenance - Thursday, June 2, 2022 from 8:00am - 6:00pm EDT.

Posted by Eric Adams on 05/13/2022 18:14 UTC

Anvil will be undergoing scheduled maintenance on Thursday, June 2, 2022 from 8:00am – 6:00pm EDT.

During this time, Anvil will be unavailable for use while we fine-tune the home directory filesystem and deploy updates to the scheduler.

Any Slurm jobs submitted to Anvil that request a walltime extending past Thursday, June 2, 2022 at 8:00am EDT will not start and will remain in the queue until the maintenance is completed.
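For example, a job submitted shortly before the maintenance window with a long walltime would be held (the job name and application below are placeholders):

```shell
#!/bin/bash
# Submitted the evening of Wednesday, June 1 with a 12-hour walltime,
# this job could not finish before the 8:00am EDT maintenance start,
# so Slurm will keep it pending until the maintenance completes.
#SBATCH --job-name=long-job      # placeholder job name
#SBATCH --nodes=1
#SBATCH --time=12:00:00          # walltime crosses the maintenance window
srun ./my_app                    # placeholder application
```

Jobs with a walltime short enough to finish before 8:00am EDT can still start normally.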

For real time updates on the maintenance, please see the Anvil Maintenance Post on the RCAC website.

Anvil will return to full production by Thursday, June 2nd, 2022 at 6:00pm EDT.

During the maintenance, two changes will be made to the current behavior of Slurm on Anvil.

1. The standard queue will be renamed to wholenode. This change seeks to alleviate some of the confusion regarding the default behavior of the standard partition and make the naming more descriptive. To resemble other XSEDE systems, the default behavior of this queue is to allocate all of the resources on the requested nodes to the user. Jobs submitted to this partition will consume all 128 cores on a node even if the user requests only one task, i.e. they will consume 128 SUs per node per hour. Note that this partition remains the scheduler’s default (i.e. jobs not requesting an explicit partition will be placed in wholenode). Users can use the shared partition to request partial nodes, which consume fewer SUs.
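The difference in SU accounting can be sketched with two submissions (job.sh is a placeholder batch script):

```shell
# Whole-node behavior (the default): even a single task is allocated
# all 128 cores of the node, billed at 128 SUs per node per hour.
sbatch --partition=wholenode --nodes=1 --ntasks=1 job.sh

# Partial-node request on the shared partition: only the requested
# cores are billed (here, 16 cores, i.e. 16 SUs per hour).
sbatch --partition=shared --ntasks=16 job.sh
```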

2. The --mem=0 option will be disabled. This option is used to explicitly request all of the memory on the nodes used by your job. However, Slurm does not count the cores allocated to such jobs properly, leading to incorrect SU calculations. Instead, users should explicitly specify the amount of memory they want to use, or use the --exclusive option to request an entire node, including all of its memory.
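In practice, a submission that previously relied on --mem=0 can be rewritten as follows (the memory amount shown is illustrative; job.sh is a placeholder):

```shell
# No longer supported after the maintenance:
#   sbatch --mem=0 job.sh

# Option 1: request memory explicitly.
sbatch --mem=128G job.sh

# Option 2: request the whole node exclusively, which includes
# all of its memory and bills for the full node.
sbatch --exclusive job.sh
```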

Please email us if you have any questions.