Jupyter Lab Setup Guide for the Sabine HPC Cluster
Version 1.0 – August 2025
Purpose
This document provides a standardized procedure for setting up and accessing Jupyter
Lab on the Sabine High-Performance Computing (HPC) cluster. The steps outlined herein
are generalized so they apply to any research group or project directory.
Prerequisites
Before proceeding, ensure the following requirements are met:
- University of Houston CougarNet credentials are available and active.
- An SSH-compatible terminal or client is installed (such as the native terminal on Linux or macOS, or PuTTY or MobaXterm on Windows).
- The user is connected to the University of Houston VPN if accessing the cluster from outside the campus network.
- A Conda environment has been created in the user's designated project directory. The home directory should not be used due to quota limitations.
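If such an environment does not yet exist, it can be created once from a login node. A minimal sketch, assuming the module name and project path used later in this guide (substitute your own project and username):

```shell
# One-time environment creation on a login node. The module name and the
# project path are assumptions from this guide; adjust them for your system.
module load Miniforge3/py3.10
conda create --prefix /project/project_name/username/jupyter_env -y python=3.10 jupyterlab
```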
Step-by-Step Setup Procedure
Step 1. Connect to the Sabine Cluster
Open a terminal and execute the following command:
ssh your_username@sabine.rcdc.uh.edu
Replace your_username with your actual CougarNet identifier.
Step 2. Request an Interactive Compute Node
Request a node using SLURM's salloc command:
salloc -t 02:00:00 -n 1 --mem=8GB
This command requests an interactive allocation of one task with 8 GB of memory for two hours; adjust these values to suit the workload.
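Once the allocation is granted, the shell prompt moves to the compute node. The allocation and the node's name (needed later for the SSH tunnel in Step 6) can be confirmed with standard SLURM and system commands:

```shell
# Confirm the allocation and record the compute node name for Step 6.
squeue -u $USER   # lists your running jobs and the node(s) assigned to them
hostname          # prints the name of the node the shell is currently on
```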
Step 3. Load Modules and Activate the Conda Environment
Load the Miniforge module and activate your Conda environment located in your project directory:
module load Miniforge3/py3.10
source activate /project/project_name/username/jupyter_env
Step 4. Set Jupyter-Related Environment Variables
To avoid exceeding disk quota limits in the home directory, redirect Jupyter’s data, runtime, and configuration directories to the project space:
export JUPYTER_DATA_DIR=/project/project_name/username/.jupyter_data
export JUPYTER_RUNTIME_DIR=/project/project_name/username/.jupyter_runtime
export JUPYTER_CONFIG_DIR=/project/project_name/username/.jupyter_config
export IPYTHONDIR=/project/project_name/username/.ipython
Create the above directories if they do not already exist:
mkdir -p $JUPYTER_DATA_DIR $JUPYTER_RUNTIME_DIR $JUPYTER_CONFIG_DIR $IPYTHONDIR
Step 5. Start the Jupyter Lab Server
Run the following command to start Jupyter Lab on the compute node:
jupyter lab --no-browser --ip=0.0.0.0 --port=8888
The command will print one or more URLs containing an access token. Keep this terminal open; the URL will be used to access the server from a browser in Step 7.
Step 6. Establish an SSH Tunnel from the Local Machine
On your local machine, open a new terminal and create an SSH tunnel through the login node to the compute node. First, identify the compute node's name by running the hostname command on the allocated node. Then execute the following:
ssh -N -L 8888:compute-node-name:8888 your_username@sabine.rcdc.uh.edu
Replace compute-node-name with the actual hostname and your_username with your CougarNet ID.
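As a convenience, the tunnel command can be assembled by a small shell function so that only the node name changes between sessions. The function name and default values below are illustrative, not part of the cluster's tooling:

```shell
# Hypothetical helper: builds the SSH tunnel command for a given compute
# node, port, and CougarNet username (defaults are placeholders).
make_tunnel_cmd() {
  local node="$1" port="${2:-8888}" user="${3:-your_username}"
  echo "ssh -N -L ${port}:${node}:${port} ${user}@sabine.rcdc.uh.edu"
}

# Example: print the tunnel command for node compute-0-12 as user alice.
make_tunnel_cmd compute-0-12 8888 alice
# → ssh -N -L 8888:compute-0-12:8888 alice@sabine.rcdc.uh.edu
```

Running the printed command (rather than echoing it) opens the tunnel; the -N flag keeps the session open without starting a remote shell.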
Step 7. Access Jupyter Lab in the Web Browser
Copy the full URL generated in Step 5 and replace the hostname with localhost. Open the URL in any modern web browser. For example:
http://localhost:8888/lab?token=your_token
The Jupyter Lab interface will appear once authenticated.
Step 8. Properly Shut Down the Session
To shut down Jupyter Lab and release allocated resources:
- Save all notebooks within the Jupyter interface.
- Return to the terminal running Jupyter Lab and press Ctrl+C twice to stop the server.
- Type exit to leave the compute node and release the SLURM allocation.
- Close the SSH tunnel terminal on the local machine.
Optional: Using a Batch Script for Background Sessions
For unattended Jupyter sessions, a batch job can be submitted using a template script.
Step 1. Copy the Template Script
cp /var/tmp/jupyter.sbatch ~/my_jupyter.sbatch
Step 2. Edit the Script
Use a text editor to modify memory, duration, and environment paths. Set the email address for job notifications.
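The contents of the provided template are not reproduced here. As a rough sketch, a Jupyter batch script typically combines SLURM directives with the commands from Steps 3 through 5; every value below (time, memory, email address, and paths) is a placeholder to be replaced with your own:

```shell
#!/bin/bash
#SBATCH -J jupyter               # job name
#SBATCH -t 02:00:00              # wall time
#SBATCH -n 1                     # one task
#SBATCH --mem=8GB                # memory
#SBATCH --mail-type=END,FAIL     # email notifications
#SBATCH --mail-user=you@uh.edu   # placeholder address

# Environment setup, mirroring Steps 3 and 4 of the interactive procedure.
module load Miniforge3/py3.10
source activate /project/project_name/username/jupyter_env
export JUPYTER_DATA_DIR=/project/project_name/username/.jupyter_data
export JUPYTER_RUNTIME_DIR=/project/project_name/username/.jupyter_runtime
export JUPYTER_CONFIG_DIR=/project/project_name/username/.jupyter_config
export IPYTHONDIR=/project/project_name/username/.ipython

# Start the server; its URL and token appear in the SLURM output log.
jupyter lab --no-browser --ip=0.0.0.0 --port=8888
```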
Step 3. Submit the Job
sbatch ~/my_jupyter.sbatch
Step 4. Retrieve Logs
cat slurm-jobID.out
Replace jobID with the job ID reported by sbatch at submission time. This log will contain the access URL and token.
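Since the log can grow long, the access URL can also be extracted directly. The pattern below assumes Jupyter prints an http:// URL containing the token, with jobID a placeholder as above:

```shell
# Extract the access URL(s) from the SLURM output file.
grep -Eo 'http://[^ ]+' slurm-jobID.out
```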