- Linux (Pop!_OS). Node 20+, PNPM 9+, Python 3.11+, Docker or Podman.
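Before installing anything, a quick sanity check that the required toolchain is on PATH may save time (a minimal sketch; substitute podman for docker if that is what you run):

```bash
# Verify the versions called out above; fix anything missing or too old first.
node --version     # expect v20+
pnpm --version     # expect 9+
python3 --version  # expect 3.11+
docker --version   # or: podman --version
```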
- Build deps:
sudo apt install -y build-essential python3-dev libsqlite3-dev
chmod +x scripts/*.sh
./scripts/install.sh
./scripts/dev.sh
./scripts/build-linux.sh
- Build and run the faissd container:
cd tools/faissd
docker build -t voide/faissd:latest .
docker run -d --name faissd \
-v $HOME/.voide/faiss:/data \
-p 50051:50051 voide/faissd:latest
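At this point it is worth confirming the container actually started and the published port is reachable. A minimal check, reusing the container name and port from the docker run command above (the nc step assumes netcat is installed):

```bash
# The container should be listed as "Up"; if it exited, the logs say why.
docker ps --filter name=faissd
docker logs faissd

# Check that something is listening on the published port.
nc -z localhost 50051 && echo "faissd reachable on port 50051"
```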
- Models: edit models/models.json. For llama.cpp, set the LLAMA_BIN environment variable to the path of your compiled llama-cli binary.
- Open flows/sample-self-debate.flow.json in the app. Choose the mock adapter to run offline, or llama.cpp / gpt4all if installed.
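If you choose the llama.cpp adapter, the app finds the binary through LLAMA_BIN, so export it in the same environment you launch the app from. A minimal sketch; the path below is hypothetical and should point at wherever your llama.cpp build placed llama-cli:

```bash
# Hypothetical build location: adjust to your own llama.cpp checkout.
export LLAMA_BIN="$HOME/src/llama.cpp/build/bin/llama-cli"

# Confirm the binary executes before starting a flow with it.
"$LLAMA_BIN" --version
```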