Deploying via Docker Compose

Deploying via Docker Compose

Deploying a Moose application with all of its dependencies can be challenging and time-consuming. You need to configure multiple services correctly, make sure they can communicate with each other, and manage their lifecycles.

Docker Compose solves this by letting you deploy the entire stack with a single command.

This guide shows you how to use Docker Compose to set up a production-ready Moose environment on a single server, with appropriate security, monitoring, and maintenance measures.

Warning:

This guide covers a single-server deployment. A high-availability (HA) deployment requires additional infrastructure beyond what is described here.

We also offer Boreal, a hosted HA deployment option for Moose.

Prerequisites

Before you begin, you will need an Ubuntu server you can SSH into and, optionally, an existing Moose application to deploy (the optional section below shows how to create one).

The Moose stack includes:

    • ClickHouse (required): the analytics database
    • Redis (required): caching and pub/sub
    • Redpanda (optional): event streaming
    • Temporal with PostgreSQL (optional): workflow orchestration
    • Your Moose application container

Setting Up the Production Server

Install the Required Software

First, install Docker on your Ubuntu server:

# Update the apt package index
sudo apt-get update
# Install packages to allow apt to use a repository over HTTPS
sudo apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up the stable repository
echo \
 "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update apt package index again
sudo apt-get update
# Install Docker Engine
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

Next, install Node.js or Python, depending on your Moose application:

# For Node.js applications
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
# OR for Python applications
sudo apt-get install -y python3.12 python3-pip
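
You can confirm that the runtime installed correctly by checking its version:

# Verify the installed runtime (use whichever applies to your application)
node --version
# OR
python3 --version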

Configure Docker Log Size Limits

To prevent Docker logs from filling up your disk space, configure log rotation:

sudo mkdir -p /etc/docker
sudo vim /etc/docker/daemon.json

Add the following configuration:

{
 "log-driver": "json-file",
 "log-opts": {
  "max-size": "100m",
  "max-file": "3"
 }
}

Restart Docker to apply the change:

sudo systemctl restart docker

Enable Non-Root Docker Access

To run Docker commands without sudo:

# Add your user to the docker group
sudo usermod -aG docker $USER
# Apply the changes (log out and back in, or run this)
newgrp docker
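
To confirm that non-root access works, run a Docker command without sudo:

# Should succeed without sudo once the group change takes effect
docker ps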

Set Up a GitHub Actions Runner (Optional)

If you want to set up CI/CD automation, you can install a GitHub Actions runner:

  1. Navigate to your GitHub repository
  2. Go to Settings > Actions > Runners
  3. Click "New self-hosted runner"
  4. Select Linux and follow the instructions shown

To configure the runner as a service (so it runs automatically):

cd actions-runner
sudo ./svc.sh install
sudo ./svc.sh start
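
You can check that the runner service is running:

# Show the runner service status
sudo ./svc.sh status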

Set Up a Moose Application (Optional)

If you already have a Moose application, you can skip this section. Copy your Moose project to your server, build the application with the --docker flag, and get the built image onto the server.
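
If you build the image on a separate machine instead, one way to get it onto the server is to export and load it; a minimal sketch, assuming the image name used later in this guide and a placeholder SSH target:

# On the build machine: export the image produced by moose build --docker
docker save moose-df-deployment-x86_64-unknown-linux-gnu:latest | gzip > moose-app.tar.gz
# Copy the archive to the server (replace user@your-server with your own host)
scp moose-app.tar.gz user@your-server:~/
# On the server: load the image into Docker
gunzip -c ~/moose-app.tar.gz | docker load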

Install the Moose CLI

bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose

Create a New Moose Application

Follow the initialization instructions for your language.

moose init test-ts typescript
cd test-ts
npm install

Or

moose init test-py python
cd test-py
pip install -r requirements.txt

Build the application for AMD64

moose build --docker --amd64

Build the application for ARM64

moose build --docker --arm64

Confirm that the image was built

docker images

Prepare for Deployment

Create the Environment Configuration

First, create a file named .env in your project directory to pin the component versions:

# Create and open the .env file
vim .env

Add the following to the .env file:

# Version configuration for components
POSTGRESQL_VERSION=14.0
TEMPORAL_VERSION=1.22.0
TEMPORAL_UI_VERSION=2.20.0
REDIS_VERSION=7
CLICKHOUSE_VERSION=25.4
REDPANDA_VERSION=v24.3.13
REDPANDA_CONSOLE_VERSION=v3.1.0

Also create a .env.prod file for your Moose application's secrets and configuration:

# Create and open the .env.prod file
vim .env.prod

Add your application-specific environment variables:

# Application-specific environment variables
APP_SECRET=your_app_secret
# Add other application variables here

Deploy with Docker Compose

Create a file named docker-compose.yml in the same directory:

# Create and open the docker-compose.yml file
vim docker-compose.yml

Add the following content to the file:

name: moose-stack
volumes:
 # Required volumes
 clickhouse-0-data: null
 clickhouse-0-logs: null
 redis-0: null
 # Optional volumes
 redpanda-0: null
 postgresql-data: null
configs:
 temporal-config:
  # Using the "content" property to inline the config
  content: |
   limit.maxIDLength:
    - value: 255
     constraints: {}
   system.forceSearchAttributesCacheRefreshOnRead:
    - value: true # Dev setup only. Please don't turn this on in production.
     constraints: {}
services:
 # REQUIRED SERVICES
 # Clickhouse - Required analytics database
 clickhouse-0:
  container_name: clickhouse-0
  restart: always
  image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
  volumes:
   - clickhouse-0-data:/var/lib/clickhouse/
   - clickhouse-0-logs:/var/log/clickhouse-server/
  environment:
   # Allow ClickHouse introspection functions
   CLICKHOUSE_ALLOW_INTROSPECTION_FUNCTIONS: 1
   # Default admin credentials
   CLICKHOUSE_USER: admin
   CLICKHOUSE_PASSWORD: adminpassword
   # Enable SQL-driven access control and user management (needed to create additional users later)
   CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1
   # Database setup
   CLICKHOUSE_DB: moose
  # Uncomment this if you want to access clickhouse from outside the docker network
  # ports:
  #  - 8123:8123
  #  - 9000:9000
  healthcheck:
   test: wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1
   interval: 30s
   timeout: 5s
   retries: 3
   start_period: 30s
  ulimits:
   nofile:
    soft: 262144
    hard: 262144
  networks:
   - moose-network
 # Redis - Required for caching and pub/sub
 redis-0:
  restart: always
  image: redis:${REDIS_VERSION}
  volumes:
   - redis-0:/data
  command: redis-server --save 20 1 --loglevel warning
  healthcheck:
   test: ["CMD", "redis-cli", "ping"]
   interval: 10s
   timeout: 5s
   retries: 5
  networks:
   - moose-network
 # OPTIONAL SERVICES
 # --- BEGIN REDPANDA SERVICES (OPTIONAL) ---
 # Remove this section if you don't need event streaming
 redpanda-0:
  restart: always
  command:
   - redpanda
   - start
   - --kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:19092
   # Address the broker advertises to clients that connect to the Kafka API.
   # Use the internal addresses to connect to the Redpanda brokers'
   # from inside the same Docker network.
   # Use the external addresses to connect to the Redpanda brokers'
   - --advertise-kafka-addr internal://redpanda-0:9092,external://localhost:19092
   - --pandaproxy-addr internal://0.0.0.0:8082,external://0.0.0.0:18082
   # Address the broker advertises to clients that connect to the HTTP Proxy.
   - --advertise-pandaproxy-addr internal://redpanda-0:8082,external://localhost:18082
   - --schema-registry-addr internal://0.0.0.0:8081,external://0.0.0.0:18081
   # Redpanda brokers use the RPC API to communicate with each other internally.
   - --rpc-addr redpanda-0:33145
   - --advertise-rpc-addr redpanda-0:33145
   # Mode dev-container uses well-known configuration properties for development in containers.
   - --mode dev-container
   # Tells Seastar (the framework Redpanda uses under the hood) to use 1 core on the system.
   - --smp 1
   - --default-log-level=info
  image: docker.redpanda.com/redpandadata/redpanda:${REDPANDA_VERSION}
  container_name: redpanda-0
  volumes:
   - redpanda-0:/var/lib/redpanda/data
  networks:
   - moose-network
  healthcheck:
   test: ["CMD-SHELL", "rpk cluster health | grep -q 'Healthy:.*true'"]
   interval: 30s
   timeout: 10s
   retries: 3
   start_period: 30s
 # Optional Redpanda Console for visualizing the cluster
 redpanda-console:
  restart: always
  container_name: redpanda-console
  image: docker.redpanda.com/redpandadata/console:${REDPANDA_CONSOLE_VERSION}
  entrypoint: /bin/sh
  command: -c 'echo "$$CONSOLE_CONFIG_FILE" > /tmp/config.yml; /app/console'
  environment:
   CONFIG_FILEPATH: /tmp/config.yml
   CONSOLE_CONFIG_FILE: |
    kafka:
     brokers: ["redpanda-0:9092"]
    # Schema registry config moved outside of kafka section
    schemaRegistry:
     enabled: true
     urls: ["http://redpanda-0:8081"]
    redpanda:
     adminApi:
      enabled: true
      urls: ["http://redpanda-0:9644"]
  ports:
   - 8080:8080
  depends_on:
   - redpanda-0
  healthcheck:
   test: ["CMD", "wget", "--spider", "--quiet", "http://localhost:8080/admin/health"]
   interval: 30s
   timeout: 5s
   retries: 3
   start_period: 10s
  networks:
   - moose-network
 # --- END REDPANDA SERVICES ---
 # --- BEGIN TEMPORAL SERVICES (OPTIONAL) ---
 # Remove this section if you don't need workflow orchestration
 # Temporal PostgreSQL database
 postgresql:
  container_name: temporal-postgresql
  environment:
   POSTGRES_PASSWORD: temporal
   POSTGRES_USER: temporal
  image: postgres:${POSTGRESQL_VERSION}
  restart: always
  volumes:
   - postgresql-data:/var/lib/postgresql/data
  healthcheck:
   test: ["CMD-SHELL", "pg_isready -U temporal"]
   interval: 10s
   timeout: 5s
   retries: 3
  networks:
   - moose-network
 # Temporal server
 # For initial setup, use temporalio/auto-setup image
 # For production, switch to temporalio/server after first run
 temporal:
  container_name: temporal
  depends_on:
   postgresql:
    condition: service_healthy
  environment:
   # Database configuration
   - DB=postgres12
   - DB_PORT=5432
   - POSTGRES_USER=temporal
   - POSTGRES_PWD=temporal
   - POSTGRES_SEEDS=postgresql
   # Namespace configuration
   - DEFAULT_NAMESPACE=moose-workflows
   - DEFAULT_NAMESPACE_RETENTION=72h
   # Auto-setup options - set to false after initial setup
   - AUTO_SETUP=true
   - SKIP_SCHEMA_SETUP=false
   # Service configuration - all services by default
   # For high-scale deployments, run these as separate containers
   # - SERVICES=history,matching,frontend,worker
   # Logging and metrics
   - LOG_LEVEL=info
   # Addresses
   - TEMPORAL_ADDRESS=temporal:7233
   - DYNAMIC_CONFIG_FILE_PATH=/etc/temporal/config/dynamicconfig/development-sql.yaml
  # For initial deployment, use the auto-setup image
  image: temporalio/auto-setup:${TEMPORAL_VERSION}
  # For production, after initial setup, switch to server image:
  # image: temporalio/server:${TEMPORAL_VERSION}
  restart: always
  ports:
   - 7233:7233
  # Volume for dynamic configuration - essential for production
  configs:
   - source: temporal-config
    target: /etc/temporal/config/dynamicconfig/development-sql.yaml
    mode: 0444
  networks:
   - moose-network
  healthcheck:
   test: ["CMD", "tctl", "--ad", "temporal:7233", "cluster", "health", "|", "grep", "-q", "SERVING"]
   interval: 30s
   timeout: 5s
   retries: 3
   start_period: 30s
 # Temporal Admin Tools - useful for maintenance and debugging
 temporal-admin-tools:
  container_name: temporal-admin-tools
  depends_on:
   - temporal
  environment:
   - TEMPORAL_ADDRESS=temporal:7233
   - TEMPORAL_CLI_ADDRESS=temporal:7233
  image: temporalio/admin-tools:${TEMPORAL_VERSION}
  restart: "no"
  networks:
   - moose-network
  stdin_open: true
  tty: true
 # Temporal Web UI
 temporal-ui:
  container_name: temporal-ui
  depends_on:
   - temporal
  environment:
   - TEMPORAL_ADDRESS=temporal:7233
   - TEMPORAL_CORS_ORIGINS=http://localhost:3000
  image: temporalio/ui:${TEMPORAL_UI_VERSION}
  restart: always
  ports:
   - 8081:8080
  networks:
   - moose-network
  healthcheck:
   test: ["CMD", "wget", "--spider", "--quiet", "http://localhost:8080/health"]
   interval: 30s
   timeout: 5s
   retries: 3
   start_period: 10s
 # --- END TEMPORAL SERVICES ---
 # Your Moose application
 moose:
  image: moose-df-deployment-x86_64-unknown-linux-gnu:latest # Update with your image name
  depends_on:
   # Required dependencies
   - clickhouse-0
   - redis-0
   # Optional dependencies - remove if not using
   - redpanda-0
   - temporal
  restart: always
  environment:
   # Logging and debugging
   RUST_BACKTRACE: "1"
   MOOSE_LOGGER__LEVEL: "Info"
   MOOSE_LOGGER__STDOUT: "true"
   # Required services configuration
   # Clickhouse configuration
   MOOSE_CLICKHOUSE_CONFIG__DB_NAME: "moose"
   MOOSE_CLICKHOUSE_CONFIG__USER: "moose"
   MOOSE_CLICKHOUSE_CONFIG__PASSWORD: "your_moose_password"
   MOOSE_CLICKHOUSE_CONFIG__HOST: "clickhouse-0"
   MOOSE_CLICKHOUSE_CONFIG__HOST_PORT: "8123"
   # Redis configuration
   MOOSE_REDIS_CONFIG__URL: "redis://redis-0:6379"
   MOOSE_REDIS_CONFIG__KEY_PREFIX: "moose"
   # Optional services configuration
   # Redpanda configuration (remove if not using Redpanda)
   MOOSE_REDPANDA_CONFIG__BROKER: "redpanda-0:9092"
   MOOSE_REDPANDA_CONFIG__MESSAGE_TIMEOUT_MS: "1000"
   MOOSE_REDPANDA_CONFIG__RETENTION_MS: "30000"
   MOOSE_REDPANDA_CONFIG__NAMESPACE: "moose"
   # Temporal configuration (remove if not using Temporal)
   MOOSE_TEMPORAL_CONFIG__TEMPORAL_HOST: "temporal:7233"
   MOOSE_TEMPORAL_CONFIG__NAMESPACE: "moose-workflows"
   # HTTP Server configuration
   MOOSE_HTTP_SERVER_CONFIG__HOST: 0.0.0.0
  ports:
   - 4000:4000
  env_file:
   - path: ./.env.prod
    required: true
  networks:
   - moose-network
  healthcheck:
   test: ["CMD-SHELL", "curl -s http://localhost:4000/health | grep -q '\"unhealthy\": \\[\\]' && echo 'Healthy'"]
   interval: 30s
   timeout: 5s
   retries: 10
   start_period: 60s
# Define the network for all services
networks:
 moose-network:
  driver: bridge

Do not start the services yet. First, configure the individual services for production use as described in the following sections.
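
Before moving on, you can optionally check that the Compose file and .env values parse correctly without starting anything:

# Validate the Compose file; exits non-zero and prints errors if it is invalid
docker compose config --quiet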

Configure Services for Production

Configure ClickHouse Securely (Required)

For a production ClickHouse deployment, we will use environment variables to configure users and access control (as recommended in the official Docker image documentation):

  1. First, start just the ClickHouse container:
# Start just the Clickhouse container
docker compose up -d clickhouse-0
  2. Once ClickHouse has started, connect to it to create additional users:
# Connect to Clickhouse with the admin user
docker exec -it clickhouse-0 clickhouse-client --user admin --password adminpassword
-- Create the moose application user
CREATE USER moose IDENTIFIED BY 'your_moose_password';
GRANT ALL ON moose.* TO moose;
-- Create a read-only user for BI tools (optional)
CREATE USER power_bi IDENTIFIED BY 'your_powerbi_password' SETTINGS PROFILE 'readonly';
GRANT SHOW TABLES, SELECT ON moose.* TO power_bi;
  3. To exit the ClickHouse client, type \q and press Enter
  4. Update your Moose environment variables to use the new moose user:
vim docker-compose.yml
MOOSE_CLICKHOUSE_CONFIG__USER: "moose"
MOOSE_CLICKHOUSE_CONFIG__PASSWORD: "your_moose_password"
  5. Remove the following environment variables (the admin credentials) from the moose service in the docker-compose.yml file, if they are still present:
MOOSE_CLICKHOUSE_CONFIG__USER: "admin"
MOOSE_CLICKHOUSE_CONFIG__PASSWORD: "adminpassword"
  6. For stronger security in production, consider using Docker secrets to protect passwords (a minimal sketch follows after this section).
  7. Restart the ClickHouse container to apply the changes:
docker compose restart clickhouse-0
  8. Verify that the new configuration works by connecting with the newly created user:
# Connect with the new moose user
docker exec -it clickhouse-0 clickhouse-client --user moose --password your_moose_password
-- Test access by listing tables
SHOW TABLES FROM moose;
-- Exit the clickhouse client
\q

If you can connect and run commands with the new user, the ClickHouse configuration is working correctly.
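
As a follow-up to step 6, here is a minimal sketch of file-based Docker Compose secrets. It uses the Temporal PostgreSQL service as the example because the official postgres image can read the password from a file via POSTGRES_PASSWORD_FILE; the ./secrets path is an assumption:

# docker-compose.yml (excerpt): declare a file-based secret and reference it from a service
secrets:
 postgres-password:
  file: ./secrets/postgres_password.txt
services:
 postgresql:
  image: postgres:${POSTGRESQL_VERSION}
  environment:
   POSTGRES_USER: temporal
   # Compose mounts the secret at /run/secrets/<name>; postgres reads the password from that file
   POSTGRES_PASSWORD_FILE: /run/secrets/postgres-password
  secrets:
   - postgres-password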

Secure Redpanda (Optional)

For production environments, it is recommended to restrict external access to Redpanda:

  1. Modify your Docker Compose file to remove external access:

    • Use only internal network access in production
    • If external access is needed, use a reverse proxy with authentication
  2. For this simple deployment, we keep Redpanda closed to the outside world with no authentication, since it is only reachable from inside the Docker network.

Configure Temporal (Optional)

If your Moose application uses Temporal for workflow orchestration, the configuration above includes all the required services, based on the official Temporal Docker Compose examples.

If you are not using Temporal, simply remove the Temporal-related services (postgresql, temporal, temporal-admin-tools, and temporal-ui) and environment variables from the docker-compose.yml file.

Temporal Deployment Process: From Setup to Production

Deploying Temporal is a two-phase process: initial setup, followed by production operation. Here are step-by-step instructions for each phase:

Phase 1: Initial Setup
  1. Start the PostgreSQL database
docker compose up -d postgresql
  2. Wait for PostgreSQL to become healthy (check its status):
docker compose ps postgresql

Look for healthy in the output before proceeding.

  3. Start Temporal with auto-setup

docker compose up -d temporal

At this stage, Temporal's auto-setup creates the required databases and applies the Temporal schema in PostgreSQL.

  4. Verify that the Temporal server is running
docker compose ps temporal
  5. Start the admin tools and the UI
docker compose up -d temporal-admin-tools temporal-ui
  6. Create the namespace manually
# Register the moose-workflows namespace with a 3-day retention period
docker compose exec temporal-admin-tools tctl namespace register --retention 72h moose-workflows

Verify that the namespace was created:

# List all namespaces
docker compose exec temporal-admin-tools tctl namespace list
# Describe your namespace
docker compose exec temporal-admin-tools tctl namespace describe moose-workflows

You should see details about the namespace, including its retention policy.

Phase 2: Transition to Production

After a successful initialization, modify your configuration for production use:

  1. Stop the Temporal services
docker compose stop temporal temporal-ui temporal-admin-tools
  2. Edit your docker-compose.yml file to:

    • Change the image from temporalio/auto-setup to temporalio/server
    • Set SKIP_SCHEMA_SETUP=true

Example changes:

# From:
image: temporalio/auto-setup:${TEMPORAL_VERSION}
# To:
image: temporalio/server:${TEMPORAL_VERSION}
# And change:
- AUTO_SETUP=true
- SKIP_SCHEMA_SETUP=false
# To:
- AUTO_SETUP=false
- SKIP_SCHEMA_SETUP=true
  3. Restart the services with the production settings
docker compose up -d temporal temporal-ui temporal-admin-tools
  4. Verify that the services are running with the new configuration
docker compose ps

Starting and Managing Services

Start the Services

Start all services with Docker Compose:

docker compose up -d
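
Once the stack is up, you can check service health and follow your application's logs:

# List the services and their health status
docker compose ps
# Tail the Moose application logs (service name as defined in the Compose file above)
docker compose logs -f moose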

Set Up a Systemd Service for Docker Compose

For production environments, create a systemd service so the Docker Compose stack starts automatically on boot:

  1. Create a systemd service file:
sudo vim /etc/systemd/system/moose-stack.service
  2. Add the following configuration (adjust the paths as needed):
[Unit]
Description=Moose Stack
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/path/to/your/compose/directory
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
  3. Enable and start the service:
sudo systemctl enable moose-stack.service
sudo systemctl start moose-stack.service
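
You can confirm the unit is active and review its output with:

# Check the unit's status
sudo systemctl status moose-stack.service
# Review its logs
sudo journalctl -u moose-stack.service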

Deployment Workflow

You have the following options for a smooth deployment process:

Automated Deployment with CI/CD

  1. Set up a CI/CD pipeline with GitHub Actions (if you have configured a runner)

  2. When code is pushed to your repository:

    • The GitHub Actions runner builds your Moose application
    • Updates the Docker image
    • Deploys with Docker Compose (a sketch of such a deploy script follows below)
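
A minimal sketch of the deploy step such a pipeline could run on the self-hosted runner; the compose directory path and the architecture flag are assumptions based on this guide:

#!/usr/bin/env bash
set -euo pipefail
# Rebuild the Moose application image (use --arm64 on ARM servers)
moose build --docker --amd64
# Recreate the stack so the moose service picks up the rebuilt image
cd /path/to/your/compose/directory
docker compose up -d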

Manual Deployment

Alternatively, for a manual deployment:

  1. Copy the latest version of the code to the server
  2. Run moose build --docker (with the appropriate architecture flag)
  3. Update the Docker image tag in your docker-compose.yml
  4. Restart the stack with docker compose up -d

Monitoring and Maintenance

To avoid unexpected outages and performance problems, set up proper monitoring: