Prerequisites
- Docker Engine + Docker Compose v2.
- Ollama on the host (http://127.0.0.1:11434) with the `nomic-embed-text`
  model pulled. Both `semantic-search` and `consciousness-server` reach
  Ollama via the host. If Ollama isn't running, `/api/search` returns a
  precise `503 ollama_unreachable` instead of pretending to work.
  On Ubuntu / Debian: `curl -fsSL https://ollama.com/install.sh | sh`,
  followed by `ollama pull nomic-embed-text`. On macOS:
  install the Ollama desktop app, then pull the same model. (The snippet
  just below checks both.)
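Both prerequisites can be verified in one line; `/api/tags` is Ollama's stock model-listing endpoint, and the port is its default:

```bash
# Confirm Ollama answers locally and the embedding model is already pulled.
curl -sf http://127.0.0.1:11434/api/tags | grep -q nomic-embed-text \
  && echo "ollama ready" \
  || echo "missing: install Ollama, then run 'ollama pull nomic-embed-text'"
```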
1. Clone the ecosystem
The whole stack lives in one repository. The compose file at
`deploy/docker-compose.yml` wires up six services
plus Redis from this single tree.
```bash
git clone https://github.com/build-on-ai/consciousness-server.git
cd consciousness-server
```

2. Verify your host
A one-shot script verifies Docker, Compose, Ollama, and the embedding model are present. It's optional, but it tells you exactly what's missing instead of letting Docker fail mid-pull.
```bash
bin/preflight
# Exits 0 when the host is ready.
# If something is missing it prints what to install and aborts —
# common case is "ollama pull nomic-embed-text".
```

3. Boot the stack
The default profile brings up six blocks with
`AUTH_MODE=off`, so a solo user gets a working
ecosystem without generating any keys. The key-server is
opt-in via `--profile full`; you don't need it to
go through this guide.
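When you do want the key-server later, it's the same compose file with the profile flag added:

```bash
# From deploy/: start everything, including the opt-in key-server.
docker compose --profile full up -d
```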
```bash
cd deploy
docker compose up -d
```

What you've just started, on the ports you can already reach from your LAN:
| Port | Service | Role |
|---|---|---|
| 3032 | consciousness-server | tasks, notes, chat, memory, agents, skills, embedded WebSocket |
| 3037 | semantic-search | Flask + ChromaDB; embeddings via Ollama |
| 3038 | machines-server | infrastructure awareness + telemetry |
| 3040 | key-server | ed25519 auth — opt-in via `--profile full` |
| 3041 | test-runner | async pytest / jest / npm execution |
| 3042 | git-workflow | post-commit hook receiver |
The ecosystem reserves the range 3030–3049 for
current and future BuildOnAI blocks. The active subset lives in
`ports.yaml` at the repo root — every layer (native
servers, compose, preflight, tooling) reads the same file.
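For orientation, a sketch of what `ports.yaml` plausibly contains; the key names and nesting are assumptions inferred from the port table above and the sed patterns in the next step, not a copy of the shipped file:

```yaml
# Illustrative ports.yaml (names and layout assumed, not authoritative).
ports:
  consciousness-server: 3032
  semantic-search: 3037
  machines-server: 3038
  key-server: 3040
  test-runner: 3041
  git-workflow: 3042
  redis: 6379
```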
3.5 Ports already in use on your host?
If a default port is taken on your machine — another service
listens on 3032, or you already run Redis on
6379 — the fix is a one-file edit. Don't touch
`docker-compose.yml`; `ports.yaml` is
the single source of truth, and every layer reads from it.
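To find out which defaults actually collide before editing anything, a Linux one-liner (ss ships with iproute2; on macOS, `lsof -iTCP -sTCP:LISTEN` gives the same picture):

```bash
# List listeners already occupying the BuildOnAI palette or the Redis default.
ss -ltn | grep -E ':(30[3-4][0-9]|6379) ' || echo "all default ports free"
```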
```bash
# Bump every port in ports.yaml by 10000 (or any free offset):
sed -i -E 's/^( [a-z-]+: )3([0-9]{3})$/\113\2/' ports.yaml
sed -i -E 's/^( redis: )6379$/\116379/' ports.yaml
# Regenerate deploy/.env, re-run preflight, bring the stack up:
bin/sync-ports
bin/preflight
cd deploy && docker compose up -d
```
What just happened: `ports.yaml` moved the whole
palette by 10000 (so the consciousness-server lives on 13032, semantic
search on 13037, redis on 16379, and
so on). `bin/sync-ports` regenerated
`deploy/.env`; compose interpolated the new values;
`bin/preflight` re-ran without warnings.
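The regenerated `deploy/.env` presumably holds one variable per host port. A sketch; the variable names are assumptions modeled on the `PORT_KEY_SERVER` override shown further down:

```bash
# Illustrative deploy/.env after the +10000 bump (variable names assumed).
PORT_CONSCIOUSNESS_SERVER=13032
PORT_SEMANTIC_SEARCH=13037
PORT_REDIS=16379
```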
Container-internal ports (the `PORT` env vars
inside services, `REDIS_PORT` against the redis
container, inter-service URLs via docker DNS) stay
hard-coded by design — they live inside the docker network
and never collide with the host. Only the host-side ports
move.
You can also override one port at a time without editing
`ports.yaml`:
`PORT_KEY_SERVER=13040 docker compose up -d`.
Useful for one-off experiments.
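That override works because compose interpolates only the host side of each mapping. A hypothetical fragment of `deploy/docker-compose.yml` to illustrate (the real file may differ):

```yaml
# Illustrative fragment, not the shipped compose file.
services:
  key-server:
    ports:
      - "${PORT_KEY_SERVER:-3040}:3040"  # host side moves; container side stays 3040
```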
4. Verify everything is healthy
Each service exposes `/health`. A single loop
confirms the stack came up cleanly:
```bash
for p in 3032 3037 3038 3041 3042; do
  curl -sf "http://127.0.0.1:$p/health" >/dev/null && echo "port $p OK"
done
# Expected:
# port 3032 OK
# port 3037 OK
# port 3038 OK
# port 3041 OK
# port 3042 OK
```
If any port is missing,
`docker compose logs <service>` from the
`deploy/` directory shows what failed. The most
common case is Ollama not running — fix that, restart with
`docker compose restart semantic-search`, and
re-check.
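If you'd rather survey the whole stack before picking a log, compose can list every container's state and then tail the usual suspect:

```bash
cd deploy
docker compose ps                               # state of every service at a glance
docker compose logs --tail=50 semantic-search   # last lines from the usual suspect
```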
5. Store your first memory
The memory layer is ready out of the box. No extra setup beyond the boot. Three calls — record a conversation, store a training record, search across everything:
```bash
# 1. Start a conversation record
curl -X POST http://127.0.0.1:3032/api/memory/conversations \
  -H 'Content-Type: application/json' \
  -d '{
    "agent": "first-agent",
    "session_id": "s1",
    "messages": [{"role": "user", "content": "hello world"}]
  }'

# 2. Store a training record (the "type" field is required —
#    one of: troubleshooting | exploration | implementation |
#    explanation | architecture | ui_mapping)
curl -X POST http://127.0.0.1:3032/api/memory/training \
  -H 'Content-Type: application/json' \
  -d '{
    "agent": "first-agent",
    "type": "exploration",
    "goal": "first run",
    "instruction": "verify that storage works",
    "input": "quickstart guide",
    "output": "ecosystem boots"
  }'

# 3. Search semantically across everything embedded
curl -X POST http://127.0.0.1:3037/api/search \
  -H 'Content-Type: application/json' \
  -d '{"query": "first run quickstart"}'
```
The third call hits port 3037 directly because
semantic search runs as its own service with
`network_mode: host`. Embeddings come from Ollama;
the index lives in ChromaDB; the query is a single POST.
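To watch the round trip end to end, search for the exact text stored in step 2; the response shape isn't documented in this guide, so pipe it through jq (if installed) to inspect whatever comes back:

```bash
curl -s -X POST http://127.0.0.1:3037/api/search \
  -H 'Content-Type: application/json' \
  -d '{"query": "verify that storage works"}' | jq .
```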
Next steps
- Cortex → — local AI agent powered by Ollama. Hooks into the same memory layer.
- Document Processor → — desktop app for PDF / DOCX / TXT that pushes parsed content into shared memory.
- Key Server → — switch on ed25519 auth when you're ready to leave `AUTH_MODE=off`.
- Machines → — register additional machines so agents can route by GPU, VRAM, available models.
Cleaning up
The `deploy/volumes/*` tree (ChromaDB store,
Redis dump, per-service logs) is written by containers
running as root, so removing it from the host needs sudo.
Three escalating levels:

```bash
cd deploy
docker compose down      # stops containers, volumes stay
docker compose down -v   # stops + removes named volumes
sudo rm -rf volumes      # full reset to a pristine state
```