Deploy TT-Bot in a Docker container using the official image built with Python 3.13 and uv for fast dependency management.

Prerequisites

  • Docker installed and running
  • A Telegram bot token from @BotFather

Quick Start

Step 1: Build the Docker image

Build the image using the included Dockerfile:
docker build -t tt-bot .
The image uses ghcr.io/astral-sh/uv:python3.13-trixie-slim as the base and leverages Docker layer caching for fast rebuilds.
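Once the build finishes, you can sanity-check the result before running it (the `--format` template below simply prints the image size from `docker image inspect`):

```shell
# Confirm the tt-bot image exists and check its size.
docker image inspect tt-bot --format 'size: {{.Size}} bytes'
```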
Step 2: Configure environment variables

Create a .env file based on .env.example:
cp .env.example .env
Set the required variables:

Required:
  • BOT_TOKEN - Your Telegram bot token

API Server Configuration: choose one of two options:
# Option 1: Telegram's public API (simpler, but rate limited)
TG_SERVER=https://api.telegram.org

# Option 2: a local Bot API server (requires TELEGRAM_API_ID and TELEGRAM_API_HASH)
TG_SERVER=http://telegram-bot-api:8081
Step 3: Run the container

Start the bot with your environment file:
docker run --rm --env-file .env tt-bot
When running a single container, ensure TG_SERVER is reachable from inside the container. If you set TG_SERVER=http://telegram-bot-api:8081, you must also run a telegram-bot-api container and place both on the same Docker network.
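For anything beyond a quick test, a detached run with a fixed container name makes log inspection easier. This is a sketch using standard docker flags; the container name `tt-bot` is arbitrary:

```shell
# Run in the background with a stable name, then follow the logs.
docker run -d --name tt-bot --env-file .env tt-bot
docker logs -f tt-bot
```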

Single Container vs Docker Compose

Single Container

Pros:
  • Simple setup for testing
  • Minimal resource usage
  • Quick to start and stop
Cons:
  • Must use public Telegram API (rate limited)
  • No database persistence (unless configured separately)
  • Manual network configuration needed for local Bot API
Use when: You want to quickly test the bot or deploy to a managed container service.

Docker Compose

Pros:
  • Local Telegram Bot API server (faster, avoids rate limits)
  • PostgreSQL database included (persistent statistics)
  • All services networked automatically
  • Production-ready setup
Cons:
  • Requires more system resources
  • More configuration options to understand
Use when: You’re deploying for production or want the full feature set with statistics.
For production deployments, use Docker Compose to get the complete stack with local Bot API server and database.
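As an illustration only — the project ships its own compose file, and the service names, images, and volume names below are assumptions — a minimal stack along these lines would wire the three services onto one network:

```shell
# Hypothetical minimal compose file; the project's real docker-compose.yml
# is authoritative. The telegram-bot-api service name matches the
# TG_SERVER example used elsewhere in this guide.
cat > docker-compose.sketch.yml <<'EOF'
services:
  telegram-bot-api:
    image: aiogram/telegram-bot-api:latest
    env_file: .env
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  tt-bot:
    build: .
    env_file: .env
    depends_on:
      - telegram-bot-api
      - db
volumes:
  pgdata:
EOF
docker compose -f docker-compose.sketch.yml up -d
```

Compose creates a default network for the project, so `tt-bot` can reach `telegram-bot-api` by service name without any manual `docker network` setup.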

Dockerfile Details

The Dockerfile is a single stage, ordered so that the dependency layers cache well:
FROM ghcr.io/astral-sh/uv:python3.13-trixie-slim

WORKDIR /app

# Copy dependencies first (cached layer)
COPY pyproject.toml uv.lock ./
RUN uv sync --locked --no-install-project --extra main

# Copy source and install project
COPY . .
RUN uv sync --locked --extra main

ENV PATH="/app/.venv/bin:$PATH" \
    PYTHONUNBUFFERED=1

CMD ["uv", "run", "main.py"]
Key features:
  • Uses uv for fast dependency resolution
  • Separates dependency and source layers for better caching
  • Installs main extra dependencies (includes yt-dlp, Pillow, curl_cffi)
  • Unbuffered Python output for real-time logs
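Because dependencies are copied before the source, editing application code invalidates only the layers after `COPY . .`; on the next build, the expensive `uv sync` dependency layer is served from cache:

```shell
# Touch a source file and rebuild: only the final layers re-run,
# while the dependency-install layer comes from the build cache.
touch main.py
docker build -t tt-bot .
```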

Environment Variables

Variable          | Required     | Description                         | Default
------------------|--------------|-------------------------------------|-------------------------
BOT_TOKEN         | Yes          | Telegram bot token from @BotFather  | -
TG_SERVER         | No           | Telegram API endpoint               | https://api.telegram.org
TELEGRAM_API_ID   | Conditional* | API ID from my.telegram.org         | -
TELEGRAM_API_HASH | Conditional* | API hash from my.telegram.org       | -
DB_URL            | No           | PostgreSQL connection string        | SQLite (file-based)
*Required only when using local Bot API server (TG_SERVER=http://telegram-bot-api:8081)

Troubleshooting

Container exits immediately

Check the logs for error messages:
docker logs <container_id>
Common issues:
  • Invalid BOT_TOKEN
  • TG_SERVER pointing to unreachable host
  • Missing TELEGRAM_API_ID/TELEGRAM_API_HASH when using local Bot API
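A quick way to rule out an invalid token is to call the Bot API's getMe method directly. This assumes the public API is reachable from your host and that .env uses plain KEY=value lines that the shell can source:

```shell
# A valid token returns {"ok":true,...}; an invalid one returns a 401 error.
source .env
curl -s "https://api.telegram.org/bot${BOT_TOKEN}/getMe"
```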

Cannot reach telegram-bot-api service

When using TG_SERVER=http://telegram-bot-api:8081 in single container mode, you must create a Docker network and run both containers on it, or use Docker Compose instead.
Example with manual networking:
# Create network
docker network create tt-bot-network

# Run telegram-bot-api
docker run -d \
  --name telegram-bot-api \
  --network tt-bot-network \
  --env-file .env \
  aiogram/telegram-bot-api:latest

# Run tt-bot
docker run --rm \
  --network tt-bot-network \
  --env-file .env \
  tt-bot
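To confirm the two containers can actually see each other, probe the Bot API service from a throwaway container on the same network (busybox is used here only as a convenient probe image; it is not part of the project):

```shell
# Any HTTP response (even an error page) proves the hostname resolves and
# the port is open; a timeout or DNS failure points at the network setup.
docker run --rm --network tt-bot-network busybox \
  wget -qO- http://telegram-bot-api:8081
```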

Next Steps

Docker Compose

Deploy the full stack with all services

Local Development

Run the bot locally without Docker