Installation
This guide covers installing rbee on your system. rbee consists of several components that work together to create your AI colony.
Version: 0.1.0 (M0 - Core Orchestration)
Completion: 68% (42/62 BDD scenarios passing)
License: GPL-3.0-or-later (free and open source, copyleft)
System Requirements
Minimum Requirements
- Operating system: Linux (Ubuntu 22.04+, Debian 12+), macOS 13+, or Windows 11 with WSL2
- RAM: 8GB minimum, 16GB recommended
- Storage: 20GB for base system, plus space for models (typically 4-50GB per model)
- Network: SSH access between machines (for multi-machine setups)
- Rust: 1.75+ (if building from source)
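The Rust minimum above can be checked up front before attempting a build. A minimal sketch using a POSIX-shell version comparison (`ver_ge` is a helper defined here for illustration, not part of rbee):

```shell
# Check that the installed Rust toolchain meets the 1.75 minimum.
# ver_ge A B succeeds when version A >= version B (via sort -V).
ver_ge() { [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

rust_ver=$(rustc --version 2>/dev/null | awk '{print $2}')
if [ -n "$rust_ver" ] && ver_ge "$rust_ver" "1.75.0"; then
  echo "Rust $rust_ver OK"
else
  echo "Rust 1.75+ required (found: ${rust_ver:-none}); install via rustup" >&2
fi
```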
GPU Requirements
rbee supports heterogeneous GPU configurations:
- NVIDIA GPUs: CUDA 11.8+ (RTX 20/30/40 series, A-series, H-series)
- Apple Silicon: M1/M2/M3 series with Metal acceleration
- CPU-only: Supported but significantly slower
- AMD GPUs (ROCm): Planned for future release
Installation Methods
Quick Install (Recommended)
The quick install script is not yet available. Build from source instead (see below).
Planned for future release:
# NOT AVAILABLE YET
# NOT AVAILABLE YET
curl -sSL https://install.rbee.dev | sh

This will install:
- rbee-keeper - The CLI tool for managing rbee infrastructure
- queen-rbee - The orchestrator daemon (port 7833)
- rbee-hive - The worker host daemon (port 7835)
Manual Installation (Pre-Built Binaries)
Pre-built binaries are not yet available. Use “Building from Source” method below.
Planned for future release:
1. Download the latest release:
   Visit github.com/veighnsche/llama-orch/releases and download the appropriate binaries for your platform.
2. Extract and install:
   tar -xzf rbee-*.tar.gz
   sudo mv rbee-* /usr/local/bin/
3. Verify installation:
   rbee-keeper --build-info
   queen-rbee --build-info
   rbee-hive --build-info
Building from Source (Current Method)
This is currently the ONLY way to install rbee.
Requirements:
- Rust 1.75+ (rustup recommended)
- Git
- C compiler (gcc/clang)
- OpenSSL development headers
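The requirements above can be checked in one pass before running cargo. This sketch assumes the OpenSSL headers register with pkg-config, which is typical on Linux (libssl-dev / openssl-devel packages) but not guaranteed on every platform:

```shell
# Preflight for the from-source build: confirm each tool is on PATH
# before invoking cargo.
missing=""
for tool in git cc pkg-config; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
# OpenSSL development headers (assumption: discoverable via pkg-config).
if command -v pkg-config >/dev/null 2>&1; then
  pkg-config --exists openssl || missing="$missing openssl-headers"
fi
if [ -n "$missing" ]; then
  echo "Missing build dependencies:$missing" >&2
else
  echo "All build dependencies found"
fi
```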
Steps:
# 1. Clone the repository
git clone https://github.com/veighnsche/llama-orch.git
cd llama-orch
# 2. Build all binaries (requires Rust 1.75+)
cargo build --release
# 3. Binaries will be in target/release/
ls -lh target/release/
# 4. Install to system (optional)
sudo cp target/release/rbee-keeper /usr/local/bin/
sudo cp target/release/queen-rbee /usr/local/bin/
sudo cp target/release/rbee-hive /usr/local/bin/
sudo cp target/release/llm-worker-rbee /usr/local/bin/

Build time: 5-15 minutes depending on your machine.
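Step 4 above uses sudo to install into /usr/local/bin. Since the "Directory Structure" section lists ~/.local/bin, a root-free variant is also possible; this is an illustrative sketch, not an official installer:

```shell
# Root-free alternative to the sudo cp step: install into ~/.local/bin.
mkdir -p "$HOME/.local/bin"
for bin in rbee-keeper queen-rbee rbee-hive llm-worker-rbee; do
  if [ -f "target/release/$bin" ]; then
    cp "target/release/$bin" "$HOME/.local/bin/"
  else
    echo "warning: target/release/$bin not found; run cargo build first" >&2
  fi
done
# Remind the user if ~/.local/bin is not already on PATH.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "~/.local/bin is on PATH" ;;
  *) echo 'Add to your shell profile: export PATH="$HOME/.local/bin:$PATH"' ;;
esac
```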
rbee uses a Cargo workspace with 100+ crates. The main binaries are:
- bin/00_rbee_keeper - CLI tool
- bin/10_queen_rbee - Orchestrator daemon
- bin/20_rbee_hive - Worker host daemon
- bin/30_llm_worker_rbee - LLM inference worker
Initial Configuration
rbee is designed to work with zero configuration for single-machine setups. Just run rbee-keeper and it auto-starts the queen.
Single Machine (Localhost)
No configuration needed! Just run:
# Queen auto-starts on first command
rbee-keeper infer -m llama-3-8b -p "Hello world"

Multi-Machine Setup
For remote hives, create SSH config:
# Create SSH config for hives
mkdir -p ~/.ssh
cat >> ~/.ssh/config << 'EOF'
Host gaming-pc
HostName 192.168.1.100
User vince
Port 22
Host mac-studio
HostName 192.168.1.101
User vince
Port 22
EOF
# Install hive on remote machine
rbee-keeper hive install gaming-pc

See: Remote Hives Setup for detailed multi-machine configuration.
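Before running `rbee-keeper hive install`, it is worth confirming each host from the SSH config is reachable non-interactively. The host names here match the example config above; substitute your own:

```shell
# Confirm each hive host is reachable without a password prompt
# (BatchMode makes broken keys fail fast instead of prompting).
for host in gaming-pc mac-studio; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
    echo "$host: reachable"
  else
    echo "$host: SSH failed (check ~/.ssh/config and keys)" >&2
  fi
done
```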
Verify Installation
Check that all components are installed correctly:
# Check keeper (CLI)
rbee-keeper --build-info
# Output: debug or release
# Check queen (orchestrator)
queen-rbee --build-info
# Output: debug or release
# Check hive (worker host)
rbee-hive --build-info
# Output: debug or release
# Check worker binary
llm-worker-rbee --build-info
# Output: debug or release

Note: Use --build-info instead of --version. The version is always 0.0.0 (early development).
What Gets Installed
Core Binaries
| Binary | Type | Required | Description |
|---|---|---|---|
| rbee-keeper | CLI | Yes | CLI tool for managing rbee infrastructure. Manages queen lifecycle, SSH-based hive installation, worker/model/inference commands. |
| queen-rbee | Daemon | Yes | The orchestrator daemon (port 7833). Makes ALL intelligent decisions. Job-based architecture; routes operations to hives. |
| rbee-hive | Daemon | Yes | Worker host daemon (port 7835). Runs ON GPU machines. Manages workers on THAT machine only. One hive per GPU machine. |
| llm-worker-rbee | Worker | Yes | LLM inference worker daemon. llama.cpp-based inference. Spawned by rbee-hive. |
Directory Structure
~/.local/bin/ # Binaries installed here
~/.cache/rbee/ # Model cache and catalogs
models/ # Downloaded models (JSON metadata)
workers/ # Worker binaries (JSON metadata)
~/.ssh/config # SSH config for remote hives

Next Steps
Single Machine Setup
Run rbee on one computer
Homelab Setup
Connect multiple machines
Worker Types
Available worker binaries
Troubleshooting
Permission errors
If you encounter permission errors, ensure the binaries are executable:
chmod +x /usr/local/bin/rbee-*

GPU not detected
Verify your GPU drivers are installed:
# NVIDIA
nvidia-smi
# AMD
rocm-smi
# Apple Silicon (should show GPU info)
system_profiler SPDisplaysDataType

Network connectivity issues
For multi-machine setups, ensure SSH is configured and accessible:
ssh user@remote-machine "echo 'SSH works'"

Completed by: TEAM-427
Based on:
- /README.md - Project overview and architecture
- /Cargo.toml - Workspace structure and binaries
- bin/10_queen_rbee/src/main.rs - Queen implementation
- bin/20_rbee_hive/src/main.rs - Hive implementation
Status: Documentation reflects current build process and available features (v0.1.0)