
# Running Gurobi on the ORC Clusters

> [!NOTE]
> To run Gurobi jobs or compile software using Gurobi libraries, load the Gurobi module:
>
> ```
> module load gurobi
> ```

To list all versions of Gurobi installed on the cluster, run:

```
module avail gurobi
```

A specific version of Gurobi can be loaded by giving the module version explicitly, e.g. `module load gurobi/11.0.3`.

## Running Gurobi

Once the Gurobi module is loaded, Gurobi can be used in a number of ways.

### GurobiPy - Gurobi with Python

For a full guide on using Gurobi with Python, see How to install Gurobi for Python. To create a Python environment on Hopper that can run Gurobi with Python, run the following commands:

```
module load gnu10 python gurobi/11.0.3
python -m venv gurobi_env
source gurobi_env/bin/activate
python -m pip install gurobipy==11.0.3
```

Take care to ensure that the Gurobi module version and the gurobipy version match. Once gurobipy has been installed, you can run Gurobi from Python scripts. To use the environment in future logins and Slurm scripts, load the module and activate the Python virtual environment:

```
module load gnu10 python gurobi/11.0.3
source gurobi_env/bin/activate
```
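
As a quick check that the environment is working, the sketch below builds and solves a tiny linear program with gurobipy. The model, the variable names, and the file name `my_model.py` are purely illustrative:

```python
import gurobipy as gp
from gurobipy import GRB

# Small LP: maximize x + 2y subject to x + y <= 4, with x, y >= 0 (the default lower bound)
m = gp.Model("example")
x = m.addVar(name="x")
y = m.addVar(name="y")
m.setObjective(x + 2 * y, GRB.MAXIMIZE)
m.addConstr(x + y <= 4, name="c0")
m.optimize()

# Report the solution if one was found
if m.Status == GRB.OPTIMAL:
    print(f"objective = {m.ObjVal}")
    for v in m.getVars():
        print(f"{v.VarName} = {v.X}")
```

Save it as, e.g., `my_model.py` and run it inside the activated environment with `python my_model.py`.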

### Gurobi CLI

To use Gurobi's command-line interface directly, run `gurobi_cl`:

```
gurobi_cl
```
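
`gurobi_cl` can also solve a model file directly. For example, something along the lines of the following reads a model and writes the solution to a file (`model.lp` and `model.sol` are placeholder names):

```
gurobi_cl ResultFile=model.sol model.lp
```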

## Running Gurobi Jobs

### Interactively on a CPU

Jobs should not be run directly on the head nodes. The preferred method, even if you are testing a small job, is to start an interactive session on a compute node and then test your script there or, for short jobs, run it directly from the node. To get more information on the available partitions, resources, and limits, use the `sinfo` command.
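
For example, you can list all partitions or filter to a single one (the partition name here matches the `interactive` partition used below):

```
sinfo
sinfo -p interactive
```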

To connect directly to a compute node, use the `salloc` command together with additional Slurm parameters:

```
salloc -p interactive -n 1 --cpus-per-task=4 --mem=15GB
```

However, if you want to run an interactive job that may require a time limit of more than 1 hour, use the command shown below:

```
salloc -p interactive -n 1 --cpus-per-task=4 --mem=15GB -t 0-02:00:00
```

This command will allocate a single node with 4 cores and 15GB of memory for 2 hours on the interactive partition. Once the resources become available, your prompt should show that you're on one of the Hopper nodes.

```
salloc: Granted job allocation
salloc: Waiting for resource configuration
salloc: Nodes hop065 are ready for job
[user@hop065 ~]$
```

Modules you loaded while on the head nodes are exported onto the compute node as well. If you had not already loaded any modules, you can load them now. To check the currently loaded modules on the node, use the command shown below:

```
[user@hop065 ~]$ module list

Currently Loaded Modules:
  1) use.own     3) prun/2.0       5) gnu10/10.3.0-ya   7) sqlite/3.37.1-6s   9) openmpi/4.1.2-4a
  2) autotools   4) hosts/hopper   6) zlib/1.2.11-2y    8) tcl/8.6.11-d4     10) gurobi/11.0.3

Inactive Modules:
  1) openmpi4
```
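
With the modules loaded, you can run your Gurobi job directly from the node. A minimal sketch, assuming the `gurobi_env` virtual environment created in the GurobiPy section and the placeholder file names `my_model.py` and `model.lp`:

```
# Run a gurobipy script inside the virtual environment
source gurobi_env/bin/activate
python my_model.py

# Or solve a model file directly with the command-line tool
gurobi_cl model.lp
```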