A simple Docker Compose file to start an Ollama Docker container and a model along with it.
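The compose file itself ships in the repository as `ollama-compose-gpu.yaml`. As a rough sketch of the shape of such a file (the service names, image tags, and port mappings below are illustrative assumptions, not the repository's actual contents), it pairs an Ollama service with a web UI and reserves a GPU:

```yaml
# Illustrative sketch only -- see ollama-compose-gpu.yaml in the repo for the real file.
services:
  ollama:
    image: ollama/ollama              # official Ollama image
    ports:
      - "11434:11434"                 # Ollama REST API
    volumes:
      - ollama:/root/.ollama          # persist downloaded models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia          # requires nvidia-container-toolkit (installed below)
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                   # browser UI on localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```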
To install Docker, follow the [official guide](https://docs.docker.com/engine/install/ubuntu/) or the steps below.

### Remove unofficial packages
```bash
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
```

### Configure the Docker repository

```bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```

### Install Docker packages

```bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

### Allow Docker to run without sudo

```bash
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
```

### Test Docker

```bash
docker run hello-world
```

### Configure the nvidia-container-toolkit repository

```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```

### Install the container toolkit

```bash
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```

### Configure the container runtime

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
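Before starting the containers, it is worth confirming that Docker can now see the GPU. A quick sanity check (a plain `ubuntu` image is enough, since the toolkit injects `nvidia-smi` into the container):

```bash
# Should print the nvidia-smi table for the host's GPU(s)
docker run --rm --gpus all ubuntu nvidia-smi
```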
Then clone this repository and start the containers:

```bash
git clone https://github.com/CollaborativeRoboticsLab/ollama-docker.git
cd ollama-docker
docker compose -f ollama-compose-gpu.yaml pull
docker compose -f ollama-compose-gpu.yaml up
```

Open a browser window and go to http://localhost:3000. From any other computer on the network, replace localhost with the server's IP address.
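To confirm that the Ollama API itself is reachable (assuming the default port mapping of 11434), query its version endpoint:

```bash
# Returns a small JSON payload such as {"version":"..."}
curl http://localhost:11434/api/version
```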
To download a model, run the following command in a terminal once the containers are running. From any other computer on the network, replace localhost with the server's IP address.

```bash
curl -X POST http://localhost:11434/api/pull -d '{"model": "llama3.1:8b"}'
```
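Once the pull finishes, the model can be exercised through the same API; the prompt below is just an illustration:

```bash
# Ask the freshly pulled model a question via Ollama's generate endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```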