Jupyter via VSCode Guide#
This guide demonstrates how to run Jupyter notebooks on a compute node via Slurm while editing them locally in VSCode.
Who is this guide for?
- Users familiar with logging into the cluster and submitting jobs via Slurm
- Users who want to use VSCode as their development environment with remote Jupyter support
Prerequisites
- Cluster access (VPN + login node)
- VSCode installed locally with the Remote - SSH and Jupyter extensions
- Basic familiarity with Slurm and the Linux command line
Batch Job with Jupyter on Compute Node via VSCode#
Step 0: (optional) Install ipykernel for your environment#
If you work with multiple Python environments, installing an ipykernel in each one lets you switch between them in Jupyter without restarting the Jupyter server.
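A minimal sketch of the kernel installation, assuming a conda environment named `myenv` (substitute your own environment name and activation method):

```
# "myenv" is a placeholder -- use your own environment name.
conda activate myenv
pip install ipykernel
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
```

The `--display-name` value is what appears in VSCode's kernel picker in Step 6.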
Step 1: Connect to the login node with VSCode#
With the Remote - SSH extension installed, select the Remote-SSH: Connect to Host... option from the command palette and connect to [CNETID]@sscs-cronus2.ssd.uchicago.edu. When prompted, enter your CNETID password.
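Optionally, an entry in your local `~/.ssh/config` makes the host appear by name in the Remote - SSH host list (the alias `cronus2` is arbitrary; replace CNETID with your own):

```
Host cronus2
    HostName sscs-cronus2.ssd.uchicago.edu
    User CNETID
```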
Step 2: Install Jupyter extension for VSCode#
Make sure you are connected to the login node, then install the Jupyter extension. The connection info in the lower-left corner of the VSCode window shows where you are connected; verify the extension is being installed in the remote (login node) instance rather than the local one.
Step 3: Prepare a slurm batch script to launch Jupyter#
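A sketch of such a batch script, here saved as `jupyter.sbatch` (the filename, partition name, port, and resource requests are assumptions — adjust them for your cluster and workload):

```
#!/bin/bash
#SBATCH --job-name=jupyter
#SBATCH --partition=general      # placeholder partition; check `sinfo` for real names
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=08:00:00

# Create the per-job log directory referenced in Step 5.
mkdir -p "logs/${SLURM_JOB_ID}"

# Bind to all interfaces so the server is reachable from the login node.
# Port 8888 is an assumption; pick an unused port to avoid collisions.
jupyter server --no-browser --ip=0.0.0.0 --port=8888 \
    > "logs/${SLURM_JOB_ID}/jupyter.log" 2>&1
```

The explicit redirection (rather than `#SBATCH --output`) is what produces the `logs/${SLURM_JOB_ID}/jupyter.log` path that Step 5 reads; Slurm will not create nested directories for you, hence the `mkdir -p`.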
Step 4: Submit the batch job and check job status#
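Assuming the script from Step 3 was saved as `jupyter.sbatch` (a placeholder name), submission and status checks look like:

```
sbatch jupyter.sbatch    # prints "Submitted batch job <jobid>"
squeue -u "$USER"        # wait until the job's state column shows R (running)
```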
Step 5: Get connection URL for Jupyter#
Obtain the connection URL for the running Jupyter server on the compute node. The batch script above redirects the Jupyter server's logs to logs/${SLURM_JOB_ID}/jupyter.log. Copy the URL that DOES NOT include 127.0.0.1:
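One way to pull the URL out of the log with `grep`. The job ID 12345, hostname sn7, and token are placeholders, and the log excerpt below only mimics typical Jupyter server output (the exact format varies by version) — on the cluster, the file is created by your batch job, so skip the mock-up block:

```shell
# Mock log for illustration only -- the real file comes from the batch job.
mkdir -p logs/12345
cat > logs/12345/jupyter.log <<'EOF'
[I ServerApp] Jupyter Server is running at:
[I ServerApp] http://sn7:8888/tree?token=0123abcd
[I ServerApp]     http://127.0.0.1:8888/tree?token=0123abcd
EOF

# Extract the URL that does NOT use 127.0.0.1 -- that is the one VSCode needs.
grep -oE 'http://[^ ]+' logs/12345/jupyter.log | grep -v '127.0.0.1'
```

The 127.0.0.1 URL is only reachable from the compute node itself, which is why the hostname-based URL is the one to copy.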
Step 6: Change the notebook's python kernel#
In VSCode, click the Select Kernel button in the top right of the notebook window, choose the Existing Jupyter Server... option, and paste the URL of the Jupyter server from above. Give the connection a name and select the Python kernel you wish to use.
Step 7: Tests / confirmation#
Run some quick tests to make sure notebook execution occurs on the compute node and not the login node:
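For example, run the following from a notebook cell (prefix each command with `!` so Jupyter passes it to the shell; the expected values are illustrative):

```shell
hostname    # expect a compute node name such as sn7, not the login node
nproc       # expect the CPU count requested in the batch script
# nvidia-smi    # uncomment on GPU nodes to list the allocated GPUs
```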
Note
The hostname will reflect whichever compute node your job was allocated to —
this could be any general compute node (sn1–sn20) or a GPU node (gpu1–gpu3) if requested.
Step 8: Cancel job when finished#
When you are done working, cancel your batch job:
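A sketch of the cancellation (12345 is a placeholder for the job ID printed by `sbatch` in Step 4):

```
scancel 12345        # stop the Jupyter job and release the compute node
squeue -u "$USER"    # confirm the job no longer appears
```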