Docker Update 2
Dockerfile (31 changed lines)

```dockerfile
@@ -41,22 +41,32 @@ RUN apt-get update && apt-get install -y \
    python3-pip \
    python3-venv \
    python3-dev \
    # Graphics / X11 / Vulkan
    libgl1 \
    libglx-mesa0 \
    libgl1-mesa-dri \
    mesa-utils \
    x11-apps \
    libxcb-xinerama0 \
    libxkbcommon-x11-0 \
    libxcb-cursor0 \
    libxcb-icccm4 \
    libxcb-keysyms1 \
    libxcb-shape0 \
    libvulkan1 \
    # Video recording & codecs
    ffmpeg \
    xdotool \
    wmctrl \
    libx264-dev \
    libx265-dev \
    # Networking
    netcat-openbsd \
    # GStreamer (for ArduPilot Gazebo plugin video streaming)
    libgstreamer1.0-dev \
    libgstreamer-plugins-base1.0-dev \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-good \
    gstreamer1.0-libav \
    gstreamer1.0-gl \
    # OpenCV
```
```dockerfile
@@ -106,11 +116,18 @@ ARG USERNAME=pilot
ARG USER_UID=1000
ARG USER_GID=1000

# Create user - handle case where GID/UID 1000 may already exist
RUN apt-get update \
    && apt-get install -y sudo \
    # Create group if it doesn't exist, or use existing
    && (getent group $USER_GID || groupadd --gid $USER_GID $USERNAME) \
    # Create user if it doesn't exist
    && (id -u $USER_UID >/dev/null 2>&1 || useradd --uid $USER_UID --gid $USER_GID -m $USERNAME) \
    # Ensure home directory exists and has correct ownership
    && mkdir -p /home/$USERNAME \
    && chown $USER_UID:$USER_GID /home/$USERNAME \
    # Add sudo permissions
    && echo "$USERNAME ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/$USERNAME \
    && chmod 0440 /etc/sudoers.d/$USERNAME \
    && rm -rf /var/lib/apt/lists/*
```
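Because `USER_UID`/`USER_GID` are build arguments, the image can be built to match your host account so that bind-mounted files keep the right ownership. A minimal sketch, assuming you build from the repository root:

```bash
# Build the image with your host UID/GID instead of the default 1000
docker build \
  --build-arg USER_UID="$(id -u)" \
  --build-arg USER_GID="$(id -g)" \
  -t rdc-simulation .
```

The `getent`/`id` guards in the RUN step above make this safe even when the base image already has a user or group with that ID.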
docs/docker.md (317 changed lines)

# Docker Setup for RDC Simulation

This guide explains how to run the RDC Simulation environment inside a Docker container with NVIDIA GPU support and X11 display forwarding. **This is the recommended approach** for Ubuntu 25.x+ or any system where native installation has dependency issues.

## Benefits of Docker

| Aspect | Native Install | Docker Container |
|--------|---------------|------------------|
| **Ubuntu 25.x support** | ❌ Dependency conflicts | ✅ Works perfectly |
| **ROS 2 Jazzy** | ❌ No packages for 25.x | ✅ Full support |
| **Gazebo dev packages** | ❌ Library conflicts | ✅ All packages work |
| **Host system** | Modified (packages, bashrc) | ✅ Untouched |
| **Reproducibility** | Varies by system | ✅ Identical everywhere |

## Prerequisites

### 1. Docker Engine

```bash
# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker  # or log out and back in
```
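Before moving on, it's worth confirming the group change took effect and the daemon is reachable without `sudo`:

```bash
# Confirm your user is in the docker group (after newgrp or re-login)
groups | grep -q docker && echo "docker group OK"

# End-to-end smoke test
docker run --rm hello-world
```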

### 2. NVIDIA GPU & Drivers (Required for Gazebo)

```bash
# Check your GPU is detected
nvidia-smi
```

### 3. NVIDIA Container Toolkit

```bash
# Install NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Verify GPU is accessible in Docker:
```bash
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```


---

## Quick Start

### 1. Build the Image

```bash
cd ~/RDC_Simulation
docker compose build
```

> **Note:** First build takes 20-40 minutes (downloads ~5GB, compiles ArduPilot + plugins).

### 2. Allow X11 Display Access

To allow the container to display the Gazebo GUI on your host screen:

```bash
xhost +local:docker
```

> Add this to your `~/.bashrc` to make it permanent.

### 3. Run the Container

```bash
docker compose run --rm simulation
```

This drops you into a bash shell inside the container as user `pilot`.

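A quick way to confirm the session is set up as described (the `pilot` user with passwordless sudo, per the Dockerfile):

```bash
whoami                                       # pilot
sudo -n true && echo "passwordless sudo OK"
```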
---

## Running the Simulation

Once inside the container, the environment is pre-configured. You'll see a welcome message with available commands.

### Option 1: All-in-One Script

```bash
./docker-entrypoint.sh
```

```bash
# Run everything (Gazebo + SITL + Controller)
./scripts/run_ardupilot_controller.sh
```

### Option 2: Multi-Terminal (Recommended for Development)

**Terminal 1 - Start Gazebo:**
```bash
./scripts/run_ardupilot_sim.sh runway
```

**Terminal 2 - Open new shell in same container:**
```bash
# On host, find container and exec into it
docker exec -it rdc-sim bash
# Then start SITL
sim_vehicle.py -v ArduCopter -f gazebo-iris --model JSON --console
```

**Terminal 3 - Open another shell:**
```bash
docker exec -it rdc-sim bash
# Run controller
python scripts/run_ardupilot.py --pattern square
```

---

## Docker Commands Reference

### Build Commands

```bash
# Build image (standard)
docker compose build

# Build with no cache (clean rebuild)
docker compose build --no-cache

# Build from the Dockerfile directly
docker build -t rdc-simulation .
```

### Run Commands

```bash
# Interactive shell
docker compose run --rm simulation

# Run specific command
docker compose run --rm simulation ./scripts/run_ardupilot_sim.sh runway

# Run headless (no display, for CI)
docker compose run --rm simulation-headless

# Start container in background
docker compose up -d simulation
docker exec -it rdc-sim bash
```

### Cleanup Commands

```bash
# Stop and remove the project's containers
docker compose down

# Also remove containers left over from removed services
docker compose down --remove-orphans

# Remove built image (to rebuild from scratch)
docker rmi rdc-simulation:latest

# Remove all project containers and images
docker compose down --rmi all

# Full cleanup (removes image + build cache)
docker compose down --rmi all
docker builder prune -f
```

### Complete Uninstall & Rebuild

To completely remove everything and start fresh:

```bash
# 1. Stop any running containers
docker compose down

# 2. Remove the image
docker rmi rdc-simulation:latest

# 3. Clear Docker build cache (optional, saves disk space)
docker builder prune -f

# 4. Clear unused Docker data (optional, aggressive cleanup)
docker system prune -f

# 5. Rebuild from scratch
docker compose build --no-cache
```

---

## Video Recording

The Docker image includes `ffmpeg` and tools to record the simulation.

### Record Flight
Run a flight pattern and save the video automatically:

```bash
python scripts/record_flight.py --pattern square --quality high --output my_flight
```

Videos are saved to `recordings/` inside the container.

### Record Manually

```bash
./scripts/record_simulation.sh --duration 30
```

### Copy Recordings to Host

```bash
# From host machine
docker cp rdc-sim:/home/pilot/RDC_Simulation/recordings .
```

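If you'd rather have videos land on the host directly, you can bind-mount the recordings directory when starting the container instead of copying afterwards. A sketch, using the same in-container path as the `docker cp` command above:

```bash
# Mount ./recordings from the host over the container's recordings dir
docker compose run --rm \
  -v "$(pwd)/recordings:/home/pilot/RDC_Simulation/recordings" \
  simulation
```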
---

## Headless Mode (CI/Server)

For running on servers without a display:

```bash
docker compose run --rm simulation-headless
```

Gazebo can still render offscreen for video recording using software rendering.

---

## Configuration Reference

### docker-compose.yml Services

| Service | Use Case |
|---------|----------|
| `simulation` | Full GUI with X11 display |
| `simulation-headless` | No display, for CI/testing |

### Environment Variables

| Variable | Purpose |
|----------|---------|
| `DISPLAY` | X11 display (auto-detected) |
| `NVIDIA_VISIBLE_DEVICES` | GPU visibility |
| `HEADLESS` | Set to `1` for software rendering |

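Any of these can also be overridden for a single run with `docker compose run -e`; for example, to restrict the container to the first GPU (assuming the `simulation` service defined above):

```bash
# Pass an override into the container for this run only
docker compose run --rm -e NVIDIA_VISIBLE_DEVICES=0 simulation
```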
### Volumes

| Mount | Purpose |
|-------|---------|
| `/tmp/.X11-unix` | X11 socket for display |
| `~/.Xauthority` | X11 authentication |

---

## Troubleshooting

### "permission denied" on docker commands

```bash
sudo usermod -aG docker $USER
newgrp docker  # or log out and back in
```

### "Could not select device driver 'nvidia'"

NVIDIA Container Toolkit not installed or configured:
```bash
# Reinstall toolkit
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

### "All CUDA-capable devices are busy"

```bash
# Check GPU on host
nvidia-smi

# Make sure no other process is using all GPU memory
```

### "Could not connect to display :0"

```bash
# On host machine, allow X11 connections
xhost +local:docker

# If using SSH, ensure X11 forwarding is enabled
ssh -X user@host
```

### "Error response from daemon: No such image"

Build the image first:
```bash
docker compose build
```

### Build fails at ArduPilot step

The ArduPilot build requires significant memory. If it fails:
```bash
# Increase Docker memory limit (Docker Desktop)
# Or reduce parallel jobs in Dockerfile:
# Change: make -j$(nproc)
# To:     make -j2
```

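Before retrying, you can check what the build host actually has to work with:

```bash
# Available RAM and CPU count on the build host
free -h
nproc
```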
### Container runs but Gazebo crashes

```bash
# Check OpenGL inside container
docker compose run --rm simulation glxinfo | grep "OpenGL renderer"

# Should show your NVIDIA GPU, not "llvmpipe" (software)
```

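If the renderer does come back as `llvmpipe`, GPU pass-through isn't working; as a stopgap you can force Mesa software rendering for a run (slow, but keeps Gazebo usable):

```bash
# Force software OpenGL inside the container for this run
docker compose run --rm -e LIBGL_ALWAYS_SOFTWARE=1 simulation
```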
---

## Development Workflow

### Edit Code on Host

Your code changes in `~/RDC_Simulation` are **not** automatically reflected in the container (unless you mount the volume). To test changes:

```bash
# Option 1: Rebuild image (slow)
docker compose build
```

```yaml
# Option 2: Mount your code (edit docker-compose.yml to add this volume)
volumes:
  - .:/home/pilot/RDC_Simulation
```

### Build Custom Image Tag

```bash
docker build -t rdc-simulation:dev .
docker run --gpus all -it rdc-simulation:dev
```

---

## What's Installed in the Container

| Component | Version |
|-----------|---------|
| Ubuntu | 24.04 LTS |
| ROS 2 | Jazzy |
| Gazebo | Harmonic |
| Python | 3.12 |
| ArduPilot SITL | Latest (git) |
| ArduPilot Gazebo Plugin | Latest (git) |
| MAVProxy | Latest (pip) |

All Gazebo development packages are included for plugin compilation.

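Versions can be spot-checked from a shell inside the container (command names assume the standard ROS 2 and Gazebo installs):

```bash
# Inside the container: spot-check the toolchain
python3 --version        # expect 3.12.x
gz sim --version         # Gazebo Harmonic
printenv ROS_DISTRO      # prints "jazzy" once the ROS environment is sourced
```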
@@ -7,11 +7,23 @@
| **OS** | Ubuntu 22.04 | Ubuntu 24.04 |
| **RAM** | 8 GB | 16 GB |
| **Disk** | 10 GB | 20 GB |
| **GPU** | OpenGL 3.3 | NVIDIA dedicated |

> **Ubuntu 25.x+ users**: Native installation has dependency issues. Use [Docker](docker.md) instead (recommended).
> **Windows users**: Use WSL2 with Ubuntu 24.04 (see below) or Docker.

## Installation Options

| Method | Best For |
|--------|----------|
| **[Docker](docker.md)** (Recommended) | Ubuntu 25+, consistent environments, no host modifications |
| **Native Install** | Ubuntu 22.04/24.04, full control |
| **WSL2** | Windows users |

---

## Quick Install (Ubuntu 22.04/24.04)

```bash
# Clone repository
```
```bash
@@ -62,6 +62,10 @@ for arg in "$@"; do
        echo " - ROS 2 (use: sudo apt remove 'ros-*')"
        echo " - Gazebo (use: sudo apt remove 'gz-*')"
        echo " - System packages"
        echo ""
        echo "Docker cleanup (if using Docker):"
        echo "  docker compose down --rmi all   # Remove containers and images"
        echo "  docker builder prune -f         # Clear build cache"
        exit 0
        ;;
esac
```