if not input('Load JupyterAI? [Y/n]').lower()=='n':
    %reload_ext jupyter_ai

The course’s programming environment is conveniently accessible via a JupyterHub server, allowing remote access without the need to install any special software, thanks to Project Jupyter. Each student will have their own Jupyter server with individual computing resources including CPU, memory, and even GPU for later materials on language models. This means you can easily write and run programs on your mobile devices using just a web browser.

In practice, programmers have to use and set up different programming environments depending on their tasks, and they have to find ways to work collaboratively with others in the same environment. This notebook introduces a collaborative server with GPU, and how you can set up the basic environment locally on your computer.

Remote Access

Group Server

Collaboration

Registered students can work collaboratively on the same Jupyter server using the group (Jupyter) server all-cs1302, which has higher resource limits than the individual user servers:

  • Storage: 100GB
  • Memory: 100GB
  • CPU: 32 cores for default servers without GPU.
  • GPU: 48GB for the GPU server option.

If the Jupyter server is not already running, you have the option to run it with a GPU as shown in Figure 1:

Server options

Figure 1: Server options for the group server.

To access and manage the server from the JupyterHub interface:

  1. Access the Hub Control Panel:

    • Within the JupyterLab interface, click File->Hub Control Panel.
  2. Select the Admin Panel:

    • In the top navigation bar of the Hub Control Panel, select the Admin Panel as shown in Figure 2.
  3. Locate the User:

    • Within the Admin Panel, look for the user named all-cs1302.
  4. Manage the Server:

    • If the server has not started:
      • Click the action button labeled Spawn Page to select the server options with higher resource limits.
      • If you click the action button labeled Start Server, the server will start with lower resource limits that apply to individual user servers.
    • If the group server is already running:
      • Click the action button labeled Access Server to access the currently running server.
      • If necessary, click the action button labeled Stop Server to terminate the existing server.
Admin panel

Figure 2: Admin panel for managing the group server.

Group servers run JupyterLab in collaborative mode, which provides real-time collaboration features as shown in Figure 3. This allows multiple users to chat with each other and live-edit the same notebook simultaneously. For more details, see the documentation of the underlying packages.

Collaborative mode

Figure 3: Collaborative mode in JupyterLab.

  • To view another user’s name while they’re editing the same notebook, hover over their cursor.
  • To chat with others, you can either write directly in the notebook or invite them to a chat via the chat panel.[1]

Learning with GPU

With the GPU resources, we can run a generative model locally using Ollama. To do so, start the Ollama service as follows.

  1. In JupyterLab, navigate to the File menu.
  2. Select New from the drop-down menu and choose Terminal.
  3. The terminal window will appear. You can use this terminal to run shell commands. Enter the following command at the terminal prompt and press Enter.
    ollama serve
  4. To terminate Ollama, press Ctrl + C, the same way you would terminate any shell command with a keyboard interrupt.
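Before running the chat cells below, you can check from Python whether the Ollama service is up. A minimal sketch, assuming Ollama listens on its default port 11434:

```python
from urllib.request import urlopen
from urllib.error import URLError


def ollama_running(url="http://localhost:11434"):
    """Return True if an Ollama server responds at the given URL."""
    try:
        # The root endpoint replies with "Ollama is running" when the service is up.
        with urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


print(ollama_running())
```

If this prints `False`, make sure `ollama serve` is still running in its terminal.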

After running ollama serve, you can execute the following cells to chat with different models.

%%ai ollama:qwen3
Why LLM runs much faster on GPU than on CPU?

The first run will take a while as the model is loaded into memory.

%%ai ollama:gpt-oss
What is the difference between GPU and NPU?

Switching models also requires additional time to load the new model into memory.

To use Ollama with Jupyternaut:

  1. Click the chat icon 💬 on the left menu bar. A chat panel will open.
  2. Click the gear icon ⚙️ on the chat panel to set up the provider.
  3. Select the Completion model as DIVE Ollama :: ... with ... replaced by your desired model such as phi3. You may also use Ollama :: * to enter the model ID.
  4. Click the Save Changes button.
  5. Click the back arrow at the top to go back to the chat panel.
  6. Enter some messages to see a response from the chat model.

Different models have different sizes and may be good at different things. The following executes a shell command to list other models that can be served by Ollama.

if not input('Execute? [Y/n]').lower()=='n':
    !ollama list

The models reside in the directory specified by the environment variable OLLAMA_MODELS:[2]

%env OLLAMA_MODELS
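The same variable can also be read from Python with `os.environ`; note that it may be unset outside the course server:

```python
import os

# Returns the model store path, or None when the variable is unset.
print(os.environ.get("OLLAMA_MODELS"))
```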

In addition to running Ollama, you can learn to use PyTorch and other machine learning packages pre-installed in the server.

import torch

torch.cuda.is_available()
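A common pattern built on this check is to pick the GPU when one is present and fall back to the CPU otherwise. A minimal sketch, hedged to also handle machines where PyTorch is not installed:

```python
def pick_device():
    """Return "cuda" if a usable GPU is found, "cpu" otherwise, or None without torch."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch is not installed on this machine
    return "cuda" if torch.cuda.is_available() else "cpu"


print(pick_device())
```

Passing the resulting device string to `torch.tensor(..., device=...)` keeps the same code working on both GPU and CPU servers.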

Custom Packages

Installing Packages

To list existing packages installed in the Jupyter server:

if not input('Execute? [Y/n]').lower()=='n':
    !conda list
if not input('Execute? [Y/n]').lower()=='n':
    !pip list

You can install additional packages using the package managers pip and conda.

The following is an example using pip to install a package cowsay:

if not input('Execute? [Y/n]').lower()=='n': 
    !pip install cowsay
import cowsay

cowsay.cow("I am a pip installed package ((((((...ip)ip)ip)ip)ip)ip)!")
%%ai
What does pip stand for?

As another example, you can use conda to install uv, a super-fast package manager written in Rust:

if not input('Execute? [Y/n]').lower()=='n':
    !conda install uv --yes

Let’s take a glimpse of the future of Python by installing the latest version with uv:

if not input('Execute? [Y/n]').lower()=='n':
    !uv python install 3.14

Now, run the following Hello-World program which uses the newly introduced t-string:

!uvx python@3.14 -c 'name="World"; msg=t"Hello, {name}!"; print(msg)'
%%ai
What are the pros and cons of conda install, pip install, and uv install?

Conda Environment

Packages installed this way do not persist across server restarts. This could be viewed as a feature rather than a bug, because you can try out a package and reset your environment simply by restarting the server. But what if you want the installation to stick?

You can create a conda environment[3] using a YAML file as follows:

%%writefile myenv.yaml
name: myenv
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.12
  - ipykernel
  - pip:
    - cowsay
%%ai
What does YAML stand for?

To create the environment, run

conda env create --file myenv.yaml

To update an existing environment, run

conda env update --file myenv.yaml --prune

The environment will persist because we have reconfigured conda to save environments to the home directory by default:

!cat /opt/conda/.condarc
%%ai
What does rc stand for in .condarc?

You can create a Jupyter kernel using the environment as shown in Figure 4 by running the command:[4]

conda activate myenv
python -m ipykernel install \
    --user \
    --name "myenv" --display-name "myenv"
Custom Jupyter kernel

Figure 4: Custom Jupyter kernel from a conda environment.

conda activate is the command to activate an environment to use its installed packages.

To deactivate the conda environment in a terminal, run

conda deactivate

To delete the kernel, run the command

rm -rf ~/.local/share/jupyter/kernels/myenv
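Since user kernelspecs are just folders containing a `kernel.json` under `~/.local/share/jupyter/kernels`, you can inspect them from Python before deleting one. A minimal sketch, assuming the default user data directory:

```python
import json
from pathlib import Path


def list_user_kernels(base=None):
    """Map kernel folder names to display names for user-installed kernelspecs."""
    base = Path(base) if base else Path.home() / ".local/share/jupyter/kernels"
    kernels = {}
    if base.is_dir():
        # Each kernelspec folder holds a kernel.json with its display name.
        for spec in base.glob("*/kernel.json"):
            kernels[spec.parent.name] = json.loads(spec.read_text()).get("display_name", "")
    return kernels


print(list_user_kernels())
```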

To delete the conda environment, run

conda deactivate
conda env remove -n myenv
%%ai
How is micromamba compared to mamba and conda?

Desktop Applications

In addition to using a web browser, you can connect to the JupyterHub server using desktop applications such as JupyterLab Desktop or VSCode Editor. Follow the links below to install the applications and the required extensions on your computer:

Connecting via JupyterLab Desktop

  1. Launch the JupyterLab Desktop app.
  2. Click Connect...
  3. Enter the server URL:
    https://dive.cs.cityu.edu.hk/cs1302_25a/
  4. Login with your EID and password.

Connecting via Visual Studio Code

Before connection, make sure your Jupyter server is running since idle Jupyter servers are culled automatically to release the computing resources.

  1. Open JupyterHub, go to the Token page (File → Hub Control Panel → Token) and generate a token with your desired settings.
  2. In your local VS Code app:
    • Open the command palette (View → Command Palette...) and run:
      JupyterHub: Connect to JupyterHub
    • Create a new connection with the following details:
      • Name: Any name of your choice such as DIVE
      • URL:
        https://dive.cs.cityu.edu.hk/cs1302_25a/
      • Token: Use the one you generated.
  3. Once connected, you’ll see the JupyterHub Explorer panel in the bottom-left corner.

    • Open any notebook file.
    • In the top-right corner, click Select Kernel.
    • Choose Existing JupyterHub Server... and enter the following details:
      • Server URL:
        https://dive.cs.cityu.edu.hk/cs1302_25a/
      • Username: your EID
      • Token: the one you generated
      • Server Name: any name you prefer
      • Kernel: select Python 3 (ipykernel)
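The token generated above also works with the JupyterHub REST API, which is handy for checking your server status from a script. A minimal sketch; the token value is a placeholder, and the request is only built, not sent, until you call `fetch_user`:

```python
import json
from urllib.request import Request, urlopen

HUB_URL = "https://dive.cs.cityu.edu.hk/cs1302_25a"
TOKEN = "paste-your-generated-token-here"  # placeholder: generate one on the Token page


def hub_user_request(username):
    """Build an authenticated request for a user's record on the JupyterHub REST API."""
    return Request(f"{HUB_URL}/hub/api/users/{username}",
                   headers={"Authorization": f"token {TOKEN}"})


def fetch_user(username):
    """Send the request and decode the JSON reply (requires a valid token)."""
    with urlopen(hub_user_request(username)) as resp:
        return json.loads(resp.read())
```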

You’re now equipped to work with JupyterHub from your desktop. Just keep an eye out for quirks and bugs, and you’ll be fine!

Running locally

Local Installation

You can install a minimal Jupyter environment to run the course notebooks locally on your computer.

First, ensure you have the following prerequisites installed:

  1. Conda: There are several distributions to choose from depending on your platform. Miniforge is recommended as we mainly use packages from conda-forge.
  2. Git and Make: These can be installed in a terminal by the following command after activating the base Conda environment:
    conda install git make

To install the jupyter environment for the course:

  1. Git clone the cs1302nb repository:
git clone https://github.com/dive4dec/cs1302nb.git
cd cs1302nb
  2. Run the make command to create/update the conda environment cs1302nb:
make install

To use the environment, activate it with:

conda activate cs1302nb

Once activated, you can start JupyterLab under a working directory of your choice:

jupyter lab

If the installation was successful, the following message will appear but with a different token:

...
    To access the server, open this file in a browser:
        file:///home/jovyan/.local/share/jupyter/runtime/jpserver-7-open.html
    Or copy and paste one of these URLs:
        http://2d94aee27406:8888/lab?token=afe3d84a4cadff3fe397640f651de4805471e7b19d1d6f1e
        http://127.0.0.1:8888/lab?token=afe3d84a4cadff3fe397640f651de4805471e7b19d1d6f1e

Copy and paste the last URL into your web browser to access JupyterLab.

To access the course notebooks, open a new terminal within the JupyterLab interface and run the following command:

gitpuller https://github.com/dive4dec/cs1302_25a main cs1302_25a

This command will:

  • Pull the main branch of the GitHub repository dive4dec/cs1302_25a.
  • Clone it into the subfolder cs1302_25a under your working directory.

Docker container

A Docker container is like a lightweight, standalone, and executable package that includes everything needed to run a piece of software, including the Jupyter server used for our course. If you know about virtual machines, you may think of a Docker container as a separate virtual machine running on your host machine, but with much less overhead.

You can also build and run the Jupyter server locally on your computer by installing Docker as follows.

  1. To install Docker:
    • For macOS, install Orbstack, which is faster than the alternative below.
    • For Windows/Linux, install Docker Desktop.
    • Once installation is complete, open Orbstack or Docker Desktop and ensure it is running.
  2. For Windows WSL2 and Linux on amd64, open a terminal and run the following command to pull the Docker image:

    docker pull chungc/cs1302nb
  3. To run the Docker container:
    • Navigate to your working directory (e.g., cs1302_home) where you want to map to the home directory of the docker container:

      cd /path/to/cs1302_home
    • Run the Docker container with the following command:

      docker run -it --rm \
        -p 8888:8888 \
        -v $(pwd):/home/jovyan/ \
        chungc/cs1302nb \
        start-notebook.sh \
        --IdentityProvider.token=''
    • If successful, you can follow similar steps in Local Installation to fetch the course notebooks using gitpuller.

Footnotes
  1. The AI chat panel button looks identical to that for collaboration, as both use the jupyterlab-chat interface.

  2. To download or run a new Ollama model, you need to set the directory to ~/.ollama or any directory you have write access to. To use the model in JupyterNaut, select the Completion model as Ollama :: * and specify the model ID. Avoid downloading very big models with over 30B parameters, as those cannot run on the current GPU without quantization.

  3. See the documentation for more details on managing conda environment.

  4. See the documentation for more details on creating kernels for conda environments.