A hands-on workshop for learning Retrieval-Augmented Generation (RAG) through implementing chunking and retrieval strategies. Build a production-ready RAG system in 4 hours.
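To illustrate the two ideas the workshop revolves around, here is a minimal sketch of fixed-size chunking plus naive keyword-overlap retrieval. All names are illustrative — the workshop's actual strategies (and embedding-based retrieval) live in the repo, not here.

```python
# Minimal sketch: fixed-size chunking + naive keyword-overlap retrieval.
# Function names are illustrative, not the workshop's actual API.

def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping fixed-size character chunks."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query and return the top k."""
    q = set(query.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q & set(c.lower().split())),
        reverse=True,
    )[:k]

chunks = chunk_text("Retrieval-Augmented Generation grounds answers in retrieved context.")
print(retrieve("retrieved context", chunks, k=1))
```

Real systems replace the word-overlap scoring with embedding similarity, but the chunk-then-rank shape stays the same.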
```shell
# Verify Python 3.12
python3 --version  # Should show Python 3.12.x

# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip3 install -r requirements.txt
pip3 install -e .

# Verify setup
python3 scripts/verify_setup.py

# Run the workshop application
python3 -m nicegui_app.main
```

Inside WSL, make sure you are within the home directory:
```shell
cd ~

# Update package lists and install Python 3.12
sudo apt update
sudo apt install -y python3.12-full

# Verify Python 3.12
python3 --version  # Should show Python 3.12.x

# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip3 install -r requirements.txt
pip3 install -e .

# Verify setup
python3 scripts/verify_setup.py

# Run the workshop application
python3 -m nicegui_app.main
```

TBD
```shell
# Verify Python 3.12
python --version  # Should show Python 3.12.x

# Create and activate virtual environment
python -m venv .venv
.venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Verify setup
python scripts\verify_setup.py

# Run the workshop application
python -m nicegui_app.main
```

If not using UV, skip this section.
```shell
# Install uv; use Homebrew, curl, or whatever works on Windows
uv sync
uv run python scripts/verify_setup.py
uv run python -m nicegui_app.main
```

Set these in a `.env` file or export them:
```shell
export GOOGLE_API_KEY="your-key-here"        # For Google AI Studio
export GOOGLE_CLOUD_PROJECT="your-project"   # For Vertex AI
export GOOGLE_CLOUD_LOCATION="us-central1"   # For Vertex AI
export OPENAI_API_KEY="your-key-here"        # For OpenAI
export ANTHROPIC_API_KEY="your-key-here"     # For Anthropic

# LiteLLM Proxy Configuration (if using a proxy)
export OPENAI_API_BASE="https://your-litellm-proxy:port"  # LiteLLM proxy endpoint
export SSL_VERIFY="false"                    # Set to "false" to disable SSL verification
```

The workshop now supports LiteLLM as a proxy for accessing multiple LLM providers through a unified OpenAI-compatible API. This is useful for:
- Corporate environments with centralized LLM access
- Cost tracking and rate limiting across teams
- Environments requiring SSL inspection bypass
Models are configured via Hydra configs in `configs/models/`. To use LiteLLM:
- Set `OPENAI_API_BASE` to your LiteLLM proxy endpoint
- Configure models using the `litellm:` prefix (e.g., `litellm:global.anthropic.claude-sonnet-4-5-20250929-v1:0`)
- If using self-signed certificates, set the `SSL_VERIFY="false"` environment variable
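As a sketch of how these settings fit together, the snippet below resolves a configured model name into client settings, routing `litellm:`-prefixed names through the proxy. The function name and the returned dictionary keys are assumptions for illustration — they are not the repo's actual code.

```python
import os

def resolve_model(model_name: str) -> dict:
    """Illustrative (not the repo's code): map a configured model name
    to client settings.

    A `litellm:` prefix routes the request through the OpenAI-compatible
    proxy at OPENAI_API_BASE; SSL_VERIFY="false" disables cert checks.
    """
    settings = {
        "model": model_name,
        # Verify SSL unless SSL_VERIFY is explicitly "false"
        "verify_ssl": os.getenv("SSL_VERIFY", "true").lower() != "false",
    }
    if model_name.startswith("litellm:"):
        settings["model"] = model_name.removeprefix("litellm:")
        settings["base_url"] = os.environ["OPENAI_API_BASE"]
        settings["api_key"] = os.environ["OPENAI_API_KEY"]
    return settings
```

Non-prefixed names fall through unchanged and hit the provider directly with that provider's own API key.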
Follow the export instructions for your device.
Place the exported text file at `chats/default_chat.txt`, or use the included `chats/example_chat.txt`.
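The fallback behavior described above could be implemented along these lines — a hypothetical helper, not the app's actual loading code:

```python
# Illustrative helper (not the app's actual code): prefer the user's
# exported chat, fall back to the bundled example.
from pathlib import Path

def pick_chat_file(chats_dir: str = "chats") -> Path:
    """Return default_chat.txt if it exists, else example_chat.txt."""
    default = Path(chats_dir) / "default_chat.txt"
    return default if default.exists() else Path(chats_dir) / "example_chat.txt"
```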
See the Participant Guide for the full hands-on walkthrough with step-by-step instructions, exercises, and troubleshooting.
MIT