
Installation

This guide covers installing rbee on your system. rbee consists of several components that work together to create your AI colony.

System Requirements

Minimum Requirements

  • Operating system: Linux (Ubuntu 22.04+, Debian 12+), macOS 13+, or Windows 11 with WSL2
  • RAM: 8GB minimum, 16GB recommended
  • Storage: 20GB for base system, plus space for models (typically 4-50GB per model)
  • Network: SSH access between machines (for multi-machine setups)
  • Rust: 1.75+ (if building from source)
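Before installing, a short script can confirm the build toolchain is present. This is an illustrative sketch, not an official rbee script; it only checks for the tools named above:

```shell
#!/bin/sh
# Check prerequisite tools for building rbee from source.
# Illustrative sketch only, not part of rbee itself.
for tool in git cc rustc cargo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool -> $(command -v "$tool")"
  else
    echo "missing: $tool"
  fi
done

# Rust 1.75+ is required; print the installed version for comparison.
command -v rustc >/dev/null 2>&1 && rustc --version
```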

GPU Requirements

rbee supports heterogeneous GPU configurations:

  • NVIDIA GPUs: CUDA 11.8+ (RTX 20/30/40 series, A-series, H-series)
  • Apple Silicon: M1/M2/M3 series with Metal acceleration
  • CPU-only: Supported but significantly slower
  • AMD GPUs (ROCm): Planned for future release

Installation Methods

Planned for future release:

# NOT AVAILABLE YET
curl -sSL https://install.rbee.dev | sh

Will install:

  • rbee-keeper - The CLI tool for managing rbee infrastructure
  • queen-rbee - The orchestrator daemon (port 7833)
  • rbee-hive - The worker host daemon (port 7835)

Manual Installation (Pre-Built Binaries)

Planned for future release:

  1. Download the latest release:

    Visit github.com/veighnsche/llama-orch/releases and download the appropriate binaries for your platform.

  2. Extract and install:

    tar -xzf rbee-*.tar.gz
    sudo mv rbee-* /usr/local/bin/

  3. Verify installation:

    rbee-keeper --build-info
    queen-rbee --build-info
    rbee-hive --build-info

Building from Source (Current Method)

This is currently the ONLY way to install rbee.

Requirements:

  • Rust 1.75+ (rustup recommended)
  • Git
  • C compiler (gcc/clang)
  • OpenSSL development headers
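
On Debian/Ubuntu systems, the prerequisites above can typically be installed as follows. The apt package names are assumptions for apt-based distributions (adjust for your distro); the rustup installer is the official one from rustup.rs:

```shell
# Install C toolchain, OpenSSL headers, and git (Debian/Ubuntu package names).
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev pkg-config git

# Install Rust via rustup, then confirm the version is 1.75+.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. "$HOME/.cargo/env"
rustc --version
```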

Steps:

# 1. Clone the repository
git clone https://github.com/veighnsche/llama-orch.git
cd llama-orch

# 2. Build all binaries (requires Rust 1.75+)
cargo build --release

# 3. Binaries will be in target/release/
ls -lh target/release/

# 4. Install to system (optional)
sudo cp target/release/rbee-keeper /usr/local/bin/
sudo cp target/release/queen-rbee /usr/local/bin/
sudo cp target/release/rbee-hive /usr/local/bin/
sudo cp target/release/llm-worker-rbee /usr/local/bin/

Build time: 5-15 minutes depending on your machine.

Initial Configuration

Single Machine (Localhost)

No configuration needed! Just run:

# Queen auto-starts on first command
rbee-keeper infer -m llama-3-8b -p "Hello world"

Multi-Machine Setup

For remote hives, create SSH config:

# Create SSH config for hives
mkdir -p ~/.ssh
cat >> ~/.ssh/config << 'EOF'
Host gaming-pc
    HostName 192.168.1.100
    User vince
    Port 22

Host mac-studio
    HostName 192.168.1.101
    User vince
    Port 22
EOF

# Install hive on remote machine
rbee-keeper hive install gaming-pc

See: Remote Hives Setup for detailed multi-machine configuration.
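
Before installing hives remotely, it helps to confirm each host in the SSH config is reachable non-interactively. A loop like this is one way; the host aliases `gaming-pc` and `mac-studio` are taken from the example config above and should match your own entries:

```shell
# Verify SSH connectivity to each configured hive host before installing.
# Host aliases must match the Host entries in ~/.ssh/config.
for host in gaming-pc mac-studio; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true; then
    echo "reachable: $host"
  else
    echo "unreachable: $host"
  fi
done
```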

Verify Installation

Check that all components are installed correctly:

# Check keeper (CLI)
rbee-keeper --build-info
# Output: debug or release

# Check queen (orchestrator)
queen-rbee --build-info
# Output: debug or release

# Check hive (worker host)
rbee-hive --build-info
# Output: debug or release

# Check worker binary
llm-worker-rbee --build-info
# Output: debug or release

Note: Use --build-info instead of --version. The version is always 0.0.0 (early development).

What Gets Installed

Core Binaries

Binary | Type | Required | Description
rbee-keeper | CLI | Required | CLI tool for managing rbee infrastructure. Manages queen lifecycle, SSH-based hive installation, and worker/model/inference commands.
queen-rbee | Daemon | Required | The orchestrator daemon (port 7833). Makes all intelligent decisions; job-based architecture; routes operations to hives.
rbee-hive | Daemon | Required | Worker host daemon (port 7835). Runs on each GPU machine and manages workers on that machine only. One hive per GPU machine.
llm-worker-rbee | Worker | Required | LLM inference worker daemon. llama.cpp-based inference; spawned by rbee-hive.
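
Since queen-rbee and rbee-hive bind fixed ports (7833 and 7835, per the table above), checking that those ports are free before first start can avoid confusing failures. This sketch assumes `ss` is available (iproute2, standard on modern Linux):

```shell
# Check whether the rbee daemon ports are already in use (Linux, iproute2).
for port in 7833 7835; do
  if ss -ltn | grep -q ":$port "; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```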

Directory Structure

~/.local/bin/       # Binaries installed here
~/.cache/rbee/      # Model cache and catalogs
    models/         # Downloaded models (JSON metadata)
    workers/        # Worker binaries (JSON metadata)
~/.ssh/config       # SSH config for remote hives
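
After a first model download, the cache layout can be inspected like so (paths assumed from the structure shown above):

```shell
# Inspect the rbee cache layout (paths assumed from the structure above).
if [ -d "$HOME/.cache/rbee" ]; then
  find "$HOME/.cache/rbee" -maxdepth 2 -type d
  find "$HOME/.cache/rbee" -maxdepth 3 -name '*.json' | head
else
  echo "cache not yet created (no models downloaded)"
fi
```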

Troubleshooting

Permission errors

If you encounter permission errors, ensure the binaries are executable:

chmod +x /usr/local/bin/rbee-*

GPU not detected

Verify your GPU drivers are installed:

# NVIDIA
nvidia-smi

# AMD
rocm-smi

# Apple Silicon (should show GPU info)
system_profiler SPDisplaysDataType

Network connectivity issues

For multi-machine setups, ensure SSH is configured and accessible:

ssh user@remote-machine "echo 'SSH works'"

Completed by: TEAM-427
Based on:

  • /README.md - Project overview and architecture
  • /Cargo.toml - Workspace structure and binaries
  • bin/10_queen_rbee/src/main.rs - Queen implementation
  • bin/20_rbee_hive/src/main.rs - Hive implementation

Status: Documentation reflects current build process and available features (v0.1.0)
