Running MACAULAY2 on Argo
According to the Macaulay2 website, "Macaulay2 is a
software system devoted to supporting research in algebraic geometry and
commutative algebra."
Loading Macaulay2
You can check which versions of Macaulay2 are available using the following command:
module avail Macaulay2
You can then load the version you want to use with the module load
command. For example:
module load Macaulay2/1.14
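Once the module is loaded, you can confirm which version is active. A quick check, assuming the module puts the M2 executable on your PATH:
module list
M2 --version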
Running Macaulay2 Interactively
Macaulay2 can be run interactively on the login nodes, but we ask that
you not run any lengthy experiments there since it is a shared resource.
To get interactive access to a compute node, you can use the salloc
Slurm command. Unfortunately, there does not seem to be a good way to
control the number of threads used when running Macaulay2 version 1.14
(the current version, as of this writing). When it runs, the program
will allocate a number of threads proportional to the number of cores on
the machine on which it is running. For this reason, we recommend that
you allocate as many cores on a given node as you can. You can do that
by combining the --constraint and --cpus-per-task Slurm options. For
example, if you wanted access to 16 cores you could use this command
(note that other valid constraint options are Proc20, Proc24,
Proc28, and Proc64):
salloc --constraint=Proc16 --cpus-per-task=16
If this command does not give you access to a machine, and instead
leaves you hanging with the following message: "salloc: job XXXXXX
queued and waiting for resources", then you can press Ctrl-C to cancel
the request and try again with a different constraint.
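Once the allocation is granted and you have a shell on the compute node, you can double-check the resources you received before starting Macaulay2. A quick sanity check (SLURM_CPUS_PER_TASK is set by Slurm when --cpus-per-task is used, and nproc reports the cores visible to your shell):
echo $SLURM_CPUS_PER_TASK
nproc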
Once you have logged in (and assuming the Macaulay2 module is loaded), you can get a list of options for the program with the following command:
M2 --help
You can start an interactive Macaulay2 session with the following command (the -q option skips loading your personal initialization file):
M2 -q
Note that if you need to have write access to a file system for any
reason, you should use your /scratch/<UserID> directory.
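For example, you might start your interactive session from your scratch directory so that any files you save land there. A minimal sketch, assuming your scratch directory follows the /scratch/<UserID> pattern used in the Slurm script below:
cd /scratch/$USER
M2 -q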
Creating a Macaulay2 Script
When performing operations that are more time consuming, it may be more useful to submit them to Slurm as a job so that they will run whenever the appropriate resources are available. Here is an example script that takes 30 minutes or more to run:
-- collatz.m2 -- Borrowed from the Beginning Macaulay2 tutorial
Collatz = n ->
while n != 1 list if n%2 == 0 then n=n//2 else n=3*n+1
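-- count how many starting values from 1 to 1000000 share each Collatz sequence length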
print(tally for n from 1 to 1000000 list length Collatz n)
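If you would like a quick test before submitting the full job, one option is to lower the upper bound in the script (for example, from 1000000 to 10000) and run it on an interactively allocated compute node using the same --script option that appears in the Slurm submit script below:
M2 --script collatz.m2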
Running the Example
In order to run the script on the cluster, we will use this Slurm submit script:
#!/bin/sh
# collatz.slurm
## Give your job a name to distinguish it from other jobs you run.
#SBATCH --job-name=Collatz
## General partitions: all-HiPri, bigmem-HiPri -- (12 hour limit)
## all-LoPri, bigmem-LoPri, gpuq (5 days limit)
## Restricted: CDS_q, CS_q, STATS_q, HH_q, GA_q, ES_q, COS_q (10 day limit)
#SBATCH --partition=all-HiPri
## Separate output and error messages into 2 files.
## NOTE: %u=userID, %x=jobName, %N=nodeID, %j=jobID, %A=arrayID, %a=arrayTaskID
#SBATCH --output=/scratch/%u/%x-%N-%j.out # Output file
#SBATCH --error=/scratch/%u/%x-%N-%j.err # Error file
## Slurm can send you updates via email
#SBATCH --mail-type=BEGIN,END,FAIL # ALL,NONE,BEGIN,END,FAIL,REQUEUE,..
#SBATCH --mail-user=<GMUnetID>@gmu.edu # Put your GMU email address here
## Specify how much memory your job needs. (2G is the default)
#SBATCH --mem=1G # Total memory needed per task (units: K,M,G,T)
## Request a 16 core node.
#SBATCH --constraint=Proc16
## Define how many cores we want to allocate for threads.
#SBATCH --cpus-per-task=16
## Load the relevant modules needed for the job
module load Macaulay2/1.14
## Run your program or script
M2 --script collatz.m2
Note: Be sure to replace <GMUnetID> in the --mail-user line with your own GMU NetID.
To launch the job, use the following command:
sbatch collatz.slurm
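After submitting, you can monitor the job and, once it finishes, view the output written to your scratch directory. These are standard Slurm and shell commands; the output file name follows the #SBATCH --output pattern above:
squeue -u $USER
cat /scratch/$USER/Collatz-*.out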