<p>28/06/2022<br />
Dear User, </p>
<p>This is to inform you that the maintenance of Galileo100 has been completed<br />
and the cluster is now back in production.</p>
<p> </p>
<p>We would also like to inform you that a new partition has been added<br />
to Galileo100: g100_usr_pmem.</p>
<p>This partition is part of the already existing g100_usr_prod and is<br />
characterised by a maximum memory per node of 375,300 MB, with<br />
additional features that will be described in more detail in future updates.<br />
Instructions and best practices for using these resources efficiently will<br />
follow soon. To request this partition, you need to specify the<br />
Slurm directive:</p>
<p>#SBATCH -p g100_usr_pmem</p>
<p>in your batch script.</p>
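<p>As an illustration, a minimal batch script requesting the new partition might look<br />
like the sketch below; the job name, resource requests, walltime, account name and<br />
executable are placeholders to adapt to your own project:</p>
<pre>
#!/bin/bash
#SBATCH --job-name=pmem_test       # job name (placeholder)
#SBATCH -p g100_usr_pmem           # request the new partition
#SBATCH --nodes=1                  # number of nodes (adjust as needed)
#SBATCH --ntasks-per-node=4        # tasks per node (adjust as needed)
#SBATCH --time=00:30:00            # walltime (adjust as needed)
#SBATCH -A your_account            # replace with your project account

srun ./my_application              # replace with your executable
</pre>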
<p> </p>
<p>The nodes with a maximum memory of up to 3 TB have been reduced in number<br />
and separated from the standard g100_usr_prod partition into the dedicated<br />
g100_usr_bmem partition, so they now have to be requested explicitly:</p>
<p>#SBATCH -p g100_usr_bmem</p>
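<p>Similarly, a minimal sketch for a job on the large-memory nodes could combine the<br />
partition directive with an explicit memory request; the --mem value, walltime,<br />
account name and executable below are placeholders:</p>
<pre>
#!/bin/bash
#SBATCH -p g100_usr_bmem           # large-memory partition (nodes up to 3 TB)
#SBATCH --nodes=1                  # large-memory nodes are limited in number
#SBATCH --mem=2000G                # explicit memory request (placeholder value)
#SBATCH --time=01:00:00            # walltime (adjust as needed)
#SBATCH -A your_account            # replace with your project account

srun ./my_large_memory_app         # replace with your executable
</pre>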
<p> </p>
<p>Best regards,</p>
<p>HPC User Support @ Cineca</p>