if not input('Load JupyterAI? [Y/n]').lower()=='n':
%reload_ext jupyter_ai
The course’s programming environment is conveniently accessible via a JupyterHub server, allowing remote access without the need to install any special software, thanks to Project Jupyter. Each student will have their own Jupyter server with individual computing resources including CPU, memory, and even a GPU for the later materials on language models. This means you can easily write and run programs even on your mobile devices, using just a web browser.
In practice, programmers have to use and set up different programming environments based on their tasks, and they have to find ways to work collaboratively with others in the same environment. This notebook will introduce a collaborative server with GPU, and how you may set up the basic environment locally on your computer.
Remote Access¶
Group Server¶
Collaboration¶
Registered students can work collaboratively on the same Jupyter server by using the group (Jupyter) server all-cs1302, which has higher resource limits than the individual user servers:
- Storage: 100GB
- Memory: 100GB
- CPU: 32 cores for default servers without GPU.
- GPU: 48GB of GPU memory for the GPU server option.
To open Lab0 in the collaborative server, you can use a gitpull link in which .../hub/user-redirect/... is replaced by .../user/all-cs1302/....
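The substitution is just a string replacement on the original gitpull link, as in the following sketch (the URL below is a placeholder, not an actual Lab0 link):
# A minimal sketch with a placeholder gitpull link; substitute your actual link.
personal_link = "https://dive.cs.cityu.edu.hk/cs1302_25a/hub/user-redirect/git-pull?..."
group_link = personal_link.replace("/hub/user-redirect/", "/user/all-cs1302/")
print(group_link)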
If the Jupyter server is not already running, you have the option to start it with a GPU, as shown in Figure 1:
Figure 1: Server options for the group server.
To access and manage the server from the JupyterHub interface:
- Access the Hub Control Panel: within the JupyterLab interface, click File->Hub Control Panel.
- Select the Admin Panel: in the top navigation bar of the Hub Control Panel, select the Admin panel as shown in Figure 2.
- Locate the User: within the Admin Panel, look for the user named all-cs1302.
- Manage the Server:
  - If the server has not started:
    - Click the action button labeled Spawn Page to select the server options with higher resource limits.
    - If you click the action button labeled Start Server instead, the server will start with the lower resource limits that apply to individual user servers.
  - If the group server is already running:
    - Click the action button labeled Access Server to access the currently running server.
    - If necessary, click the action button labeled Stop Server to terminate the existing server.
Figure 2: Admin panel for managing the group server.
Group servers run JupyterLab in collaborative mode, which provides real-time collaboration features as shown in Figure 3. This allows multiple users to chat with each other and live-edit the same notebook simultaneously. For more details, see the documentation of the relevant collaboration packages.
Figure 3: Collaborative mode in JupyterLab.
- To view another user’s name while they’re editing the same notebook, hover over their cursor.
- To chat with others, you can either write directly in the notebook or invite them to a chat via the chat panel.[1]
Learning with GPU¶
With the GPU resources, we can run a generative model locally using Ollama. To do so, start the Ollama service as follows.
- In JupyterLab, navigate to the File menu.
- Select New from the drop-down menu and choose Terminal.
- The terminal window will appear. You can use this terminal to run a shell command. Enter the following command at the terminal prompt and hit enter:
  ollama serve
- To terminate Ollama, simply type Ctrl + C, the same way you would terminate any shell command with a keyboard interrupt.
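Before chatting, you can optionally check that the Ollama service is reachable. The following cell is a minimal sketch that assumes ollama serve is running on its default local port 11434:
if not input('Check Ollama? [Y/n]').lower()=='n':
    from urllib.request import urlopen
    # Query the root endpoint of the Ollama service; it replies with a short status message.
    print(urlopen('http://localhost:11434/').read().decode())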
After running ollama serve, you can execute the following cells to chat with different models.
%%ai ollama:qwen3
Why does an LLM run much faster on a GPU than on a CPU?
The first run will take a while as the model gets loaded into memory.
%%ai ollama:gpt-oss
What is the difference between GPU and NPU?
Switching models also requires additional time to load the new model into memory.
To use Ollama with Jupyternaut:
- Click the chat icon 💬 on the left menu bar. A chat panel will open.
- Click the gear icon ⚙️ on the chat panel to set up the provider.
- Select the Completion model as DIVE Ollama :: ... with ... replaced by your desired model such as phi3. You may also use Ollama :: * to enter the model ID.
- Click the Save Changes button.
- Click the back arrow at the top to go back to the chat panel.
- Enter some messages to see a response from the chat model.
Different models have different sizes and may be good at different things. The following executes a shell command to list other models that can be served by Ollama.
if not input('Execute? [Y/n]').lower()=='n':
!ollama list
The models reside in the directory specified by the environment variable OLLAMA_MODELS:[2]
%env OLLAMA_MODELS
In addition to running Ollama, you can learn to use PyTorch and other machine learning packages pre-installed on the server.
import torch
torch.cuda.is_available()
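For example, the following cell is a minimal sketch of offloading a computation to the GPU; it assumes a CUDA device is available and falls back to the CPU otherwise:
import torch

# Pick the GPU if available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Multiply two random matrices on the chosen device.
x = torch.rand(1000, 1000, device=device)
y = x @ x
print(y.device, y.shape)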
Custom Packages¶
Installing Packages¶
To list existing packages installed in the Jupyter server:
if not input('Execute? [Y/n]').lower()=='n':
!conda list
if not input('Execute? [Y/n]').lower()=='n':
!pip list
You can install additional packages using the commands conda install for packages on Anaconda, or pip install for packages on PyPI.
The following is an example using pip to install the package cowsay:
if not input('Execute? [Y/n]').lower()=='n':
!pip install cowsay
import cowsay
cowsay.cow("I am a pip installed package ((((((...ip)ip)ip)ip)ip)ip)!")
%%ai
What does pip stand for?
As another example, you can use conda to install uv, a new super-fast package manager written in Rust:
if not input('Execute? [Y/n]').lower()=='n':
!conda install uv --yes
Let’s take a glimpse of the future of Python by installing the latest version with uv:
if not input('Execute? [Y/n]').lower()=='n':
!uv python install 3.14
Now, run the following Hello-World program, which uses the newly introduced t-string:
!uvx python@3.14 -c 'name="World"; msg=t"Hello, {name}!"; print(msg)'
%%ai
What are the pros and cons of conda install, pip install, and uv install?
Conda Environment¶
Due to how your server is spawned using docker containers, the above installations do not persist across restarts of the Jupyter server, because the packages are saved to ephemeral rather than persistent storage.
Although this could be viewed as a feature rather than a bug, since you can test a package installation and reset your environment simply by restarting the server, what if you want the installation to stick? One solution is to create your own conda environment, specified in a YAML file such as the one written by the following cell:
%%writefile myenv.yaml
name: myenv
channels:
- conda-forge
- defaults
dependencies:
- python=3.12
- ipykernel
- pip:
- cowsay
%%ai
What does YAML stand for?
To create the environment, run
conda env create --file myenv.yaml
To update an existing environment, run
conda env update --file myenv.yaml --prune
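Alternatively, you may run the command from a notebook cell, as in the following sketch, which assumes myenv.yaml has been written to the current directory by the cell above:
if not input('Create myenv? [Y/n]').lower()=='n':
    !conda env create --file myenv.yaml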
The environment will persist because we have reconfigured conda to save environments to the home directory by default:
!cat /opt/conda/.condarc
%%ai
What does rc stand for in .condarc?
You can create a Jupyter kernel for the environment, as shown in Figure 4, by running the following commands:[4]
conda activate myenv
python -m ipykernel install \
--user \
--name "myenv" --display-name "myenv"
Reload the browser window for the new kernel to take effect.
Figure 4: Custom Jupyter kernel from a conda environment.
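To verify that the new kernel has been registered, you can list the installed kernel specs:
if not input('Execute? [Y/n]').lower()=='n':
    !jupyter kernelspec list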
conda activate is the command to activate an environment so that its installed packages can be used.
To deactivate the conda environment in a terminal, run
conda deactivate
To delete the kernel, run the command
rm -rf ~/.local/share/jupyter/kernels/myenv
To delete the conda environment, run
conda deactivate
conda env remove -n myenv
We are actually using micromamba instead of conda, so some features such as --clone base will not work.
%%ai
How does micromamba compare to mamba and conda?
Desktop Applications¶
In addition to using a web browser, you can connect to the JupyterHub server using desktop applications such as JupyterLab Desktop or VSCode Editor. Follow the links below to install the applications and the required extensions on your computer:
- JupyterLab Desktop
- VSCode Editor with the following extensions:
- JupyterLab Desktop is not actively maintained. Use it with caution, especially in environments where security is a concern.
- Some VS Code extensions such as the JupyterHub Extension are currently in preview mode. You might encounter bugs or instability.
- Revoke your API token when it is no longer needed to prevent unauthorized access to your JupyterHub account.
Connecting via JupyterLab Desktop¶
- Launch the JupyterLab Desktop app.
- Click Connect...
- Enter the server URL:
https://dive.cs.cityu.edu.hk/cs1302_25a/
- Login with your EID and password.
Troubleshooting
If you encounter connection errors or the interface becomes unresponsive, try reconnecting or restarting the app. You can also open the developer tools (Ctrl+Alt+I or Cmd+Opt+I) and then reload the page (Ctrl/Cmd+R).
Connecting via Visual Studio Code¶
Before connecting, make sure your Jupyter server is running, since idle Jupyter servers are culled automatically to release computing resources.
- Open JupyterHub, go to the Token page (File → Hub Control Panel → Token) and generate a token with your desired settings.
  Caution: Do NOT close or move away from the page, as you will need to copy and paste the token twice later.
- In your local VS Code app:
  - Open the command palette (View → Command Palette...) and run: JupyterHub: Connect to JupyterHub
  - Create a new connection with the following details:
    - Name: any name of your choice such as DIVE
    - URL: https://dive.cs.cityu.edu.hk/cs1302_25a/
    - Token: use the one you generated.
Once connected, you’ll see the JupyterHub Explorer panel in the bottom-left corner.
- Open any notebook file.
- In the top-right corner, click Select Kernel.
- Choose Existing JupyterHub Server... and enter the following details:
  - Server URL: https://dive.cs.cityu.edu.hk/cs1302_25a/
  - Username: your EID
  - Token: the one you generated
  - Server Name: any name you prefer
  - Kernel: select Python 3 (ipykernel)
You’re now equipped to work with JupyterHub from your desktop. Just keep an eye out for quirks and bugs, and you’ll be fine!
Troubleshooting
If VS Code behaves unexpectedly or fails to connect, try reloading the window:
Go to the command palette and run: Developer: Reload Window
Running locally¶
Local Installation¶
You can install a minimal Jupyter environment to run the course notebooks locally on your computer.
First, ensure you have the following prerequisites installed:
- Conda: You have some options on which distribution to install depending on your platform. Miniforge is recommended as we mainly use packages from conda-forge.
- Git and Make: These can be installed in a terminal by the following command after activating the base Conda environment:
  conda install git make
To install the Jupyter environment for the course:
- Git clone the cs1302nb repository:
  git clone https://github.com/dive4dec/cs1302nb.git
  cd cs1302nb
- Run the make command to create/update the conda environment cs1302nb:
  make install
To use the environment, activate it with:
conda activate cs1302nb
Once activated, you can start JupyterLab under a working directory of your choice:
jupyter lab
If the installation was successful, the following message will appear but with a different token:
...
    To access the server, open this file in a browser:
        file:///home/jovyan/.local/share/jupyter/runtime/jpserver-7-open.html
    Or copy and paste one of these URLs:
        http://2d94aee27406:8888/lab?token=afe3d84a4cadff3fe397640f651de4805471e7b19d1d6f1e
        http://127.0.0.1:8888/lab?token=afe3d84a4cadff3fe397640f651de4805471e7b19d1d6f1e
Copy and paste the last URL into your web browser to access the JupyterLab.
To access the course notebooks, open a new terminal within the JupyterLab interface and run the following command:
gitpuller https://github.com/dive4dec/cs1302_25a main cs1302_25a
This command will:
- Pull the main branch of the GitHub repository dive4dec/cs1302_25a.
- Clone it into the subfolder cs1302_25a under your working directory.
Alternatively, you may use a gitpull link to open the current notebook.
Troubleshooting
To ensure the link works correctly, you may need to adjust the port number. JupyterLab typically uses port 8888 by default, but if that port is already in use—possibly by another running instance of JupyterLab—it may automatically switch to a different port such as 8889. You may also need to enter the token printed in the URL:
...token=afe3d84a4cadff3fe397640f651de4805471e7b19d1d6f1e
Docker container¶
A docker container is like a lightweight, standalone, and executable package that includes everything needed to run a piece of software, including the Jupyter server used for our course. If you know about virtual machines, you may think of a docker container as a separate virtual machine running on your host machine, but with much less overhead.
For computers using arm64, Local Installation is a better option than docker since docker has limited support for the architecture. Nevertheless, you are encouraged to play with docker and try to get things to work!
Indeed, you can also build and run the Jupyter server locally on your computer by installing Docker as follows.
- To install Docker:
  - For macOS, install Orbstack, which is faster than the alternative below.
  - For Windows/Linux:
    - Install Docker Desktop from the official Docker website.
    - For Windows, see the additional setup to use Docker with WSL2 on Windows.
- Once installation is complete, open Orbstack or Docker Desktop and ensure it is running.
For Windows WSL2 and Linux on amd64, open a terminal and run the following command to pull the Docker image:
docker pull chungc/cs1302nb
A Docker image is the blueprint for creating Docker containers. When you run a Docker container, it is instantiated from an image. For example, chungc/cs1302nb is a Docker image created from a text file Dockerfile.min, which specifies how and what packages should be installed. The image was built using some Make commands in the repository, and the resulting image was published to the public registry DockerHub. You can also git clone the repository to your computer and modify the dockerfiles to build your desired image locally. The Dockerfile for the image running on the JupyterHub server requires >40GB of storage to build.
- To run the Docker container:
  - Navigate to your working directory (e.g., cs1302_home), which you want to map to the home directory of the docker container:
    cd /path/to/cs1302_home
  - Run the Docker container with the following command:
    docker run -it --rm \
      -p 8888:8888 \
      -v $(pwd):/home/jovyan/ \
      chungc/cs1302nb \
      start-notebook.sh \
      --IdentityProvider.token=''
If successful, you can follow similar steps in Local Installation to fetch the course notebooks using gitpuller.
What do the options mean?
The docker run command
- runs the container interactively (-it);
- removes the container after it stops (--rm);
- maps port 8888 on your host to port 8888 on the container (-p 8888:8888);
- mounts the current working directory to the home directory /home/jovyan/ inside the container (-v $(pwd):/home/jovyan/); and
- sets the token to an empty string ('') so that you don't need to provide a token when logging in (--IdentityProvider.token='').
See the documentation for other options.
Troubleshooting
If the host port 8888 is occupied, you can change it to another port such as 1302 by running the container with -p 1302:8888. The port used to access JupyterLab should then be the host port, not the container port printed in the terminal, i.e., the URL to access JupyterLab should be http://127.0.0.1:1302.
To download or run a new Ollama model, you need to set OLLAMA_MODELS to ~/.ollama or any other directory you have write access to. To use the model in Jupyternaut, select the Completion model as Ollama :: * and specify the model ID. Avoid downloading very big models with over 30b parameters, as those cannot run on the current GPU without quantization.
See the documentation for more details on managing conda environments.
See the documentation for more details on creating kernels for conda environments.