Prerequisites

Run through the Quickstart first — you need consciousness-server on :3032 and semantic-search on :3037. AUTH_MODE=off (the default) is fine for this guide; once you switch to enforce, every request below grows three more headers — see Secure deployment.

The example uses Python with requests. Any language with HTTP and JSON works the same way.
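Before running anything below, it's worth confirming that both services answer HTTP at all. Neither server's health endpoint is documented in this guide, so this stdlib-only sketch simply checks that something is listening at each base URL:

```python
import os
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def reachable(base, timeout=2.0):
    """True if anything answers HTTP at `base`, whatever the status code."""
    try:
        urlopen(base, timeout=timeout)
        return True
    except HTTPError:
        return True   # the server responded, just not with 200
    except URLError:
        return False  # connection refused / host unreachable

for name, base in [
    ("consciousness-server", os.environ.get("CS_URL", "http://127.0.0.1:3032")),
    ("semantic-search", os.environ.get("SEARCH_URL", "http://127.0.0.1:3037")),
]:
    print(f"{name}: {'up' if reachable(base) else 'DOWN'}")
```

If either line prints DOWN, go back to the Quickstart before continuing.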
The five calls every agent makes
Save this as scribe.py and run with python scribe.py. It registers an agent named scribe, opens a conversation, drops a note, and searches across everything currently indexed. Every endpoint it hits is documented in ARCHITECTURE.md.
import os, uuid, requests

CS = os.environ.get("CS_URL", "http://127.0.0.1:3032")
SS = os.environ.get("SEARCH_URL", "http://127.0.0.1:3037")
AGENT = "scribe"
SESSION = str(uuid.uuid4())

# 1. Register the agent. Idempotent — subsequent calls update.
requests.post(f"{CS}/api/agents", json={
    "name": AGENT,
    "role": "writer",
    "machine": "localhost",
}).raise_for_status()

# 2. Open a conversation record.
conv = requests.post(f"{CS}/api/memory/conversations", json={
    "agent": AGENT,
    "session_id": SESSION,
    "messages": [{"role": "user", "content": "draft a release note"}],
}).json()
conv_id = conv["id"]

# 3. Append the model's response (or anything you want logged).
requests.patch(f"{CS}/api/memory/conversations/{conv_id}", json={
    "messages": [{"role": "assistant", "content": "Done — see notes."}],
}).raise_for_status()

# 4. Drop a structured note that supervisors will read.
requests.post(f"{CS}/api/notes", json={
    "agent": AGENT,
    "type": "observation",
    "title": "Release notes drafted",
    "content": "Used the recent commits as input. See conversation " + conv_id,
}).raise_for_status()

# 5. Search across everything embedded so far.
hits = requests.post(f"{SS}/api/search", json={
    "query": "release note",
    "limit": 3,
}).json()
print(f"top match: {hits['results'][0]['snippet'][:80]}")

That's the whole loop. Replace the hard-coded responses with a call to your favourite LLM and you've got an agent that remembers what it did.
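The natural swap-in point is step 3: generate the reply, then PATCH it onto the conversation. A minimal sketch, with generate() as a placeholder for whichever model client you use:

```python
def generate(prompt):
    # Placeholder: call your model client here (OpenAI, a local model, ...).
    return f"[draft based on: {prompt}]"

def assistant_message(prompt):
    """Build the PATCH body from step 3 around a generated reply."""
    return {"messages": [{"role": "assistant", "content": generate(prompt)}]}

# Usage, mirroring step 3:
#   requests.patch(f"{CS}/api/memory/conversations/{conv_id}",
#                  json=assistant_message("draft a release note")).raise_for_status()
```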
Coordinating with other agents
Agents do not call each other directly. They coordinate through shared state in CS — chat for messages, tasks for work units, notes for observations a supervisor will read.
# Talk to other agents in the same ecosystem.
requests.post(f"{CS}/api/chat", json={
    "from": AGENT,
    "to": "@all",  # or "@<agent-name>"
    "content": "Release notes ready. @reviewer please pass.",
}).raise_for_status()

# Read your own incoming messages.
chat = requests.get(f"{CS}/api/chat", params={"to": AGENT}).json()
for msg in chat["messages"]:
    print(f"from {msg['from']}: {msg['content']}")
Mentions like @reviewer are routed by CS to the named agent's inbox. @all broadcasts. Receivers poll /api/chat?to=<name> on whatever interval suits them, or subscribe to the embedded WebSocket for push.
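The polling side can be a small loop. The interval is whatever suits your agent, and fetch is passed in as a callable so the helper works with any HTTP client; this is a sketch, not part of the CS API:

```python
import time

def poll_inbox(fetch, handle, interval=5.0, max_polls=None):
    """Call `fetch()` repeatedly and hand each message to `handle`.

    `fetch` should return a list of message dicts, e.g. the `messages`
    field of GET /api/chat?to=<name>. `max_polls` bounds the loop so it
    can be run (and tested) without looping forever.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        for msg in fetch():
            handle(msg)
        polls += 1
        time.sleep(interval)
```

For example, `poll_inbox(lambda: requests.get(f"{CS}/api/chat", params={"to": AGENT}).json()["messages"], print)` prints every incoming message as it arrives.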
Picking up tasks dropped by peers
When a supervisor (a human, an observer agent, a scheduled job) creates a task, any agent listed in assigned_to can claim it. There's no central dispatcher — each agent polls for its own work.
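The snippet that follows marks work done in a single step; if several agents share an assignment, you may want an explicit claim first so two agents don't do the same work. The in_progress status and claimed_by field in this sketch are assumptions, not values documented in this guide; check your CS schema before relying on them.

```python
def claim_then_complete(patch, task_id, agent, do_work):
    """Two-step task lifecycle: claim first, then report the result.

    `patch(task_id, body)` performs the HTTP PATCH; it is a parameter so
    the flow can be exercised without a live server. The "in_progress"
    status and "claimed_by" field are assumed, not documented; adjust to
    your deployment's schema.
    """
    patch(task_id, {"status": "in_progress", "claimed_by": agent})
    result = do_work()
    patch(task_id, {"status": "done", "result": result})
    return result
```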
# Pick up a task somebody else dropped.
tasks = requests.get(f"{CS}/api/tasks", params={
    "status": "open",
    "assigned_to": AGENT,
}).json()
for t in tasks["tasks"]:
    # ... do the work ...
    requests.patch(f"{CS}/api/tasks/{t['id']}", json={
        "status": "done",
        "result": "completed by " + AGENT,
    }).raise_for_status()

Going further
- Don't want to write a custom client? Use Cortex — it ships a CS integration, model-routing, and a Policy Engine, so you only write the agent's character profile.
- Many agents on many hosts? See Multi-machine fleet →
- Ready to leave AUTH_MODE=off? See Secure deployment →
- Bringing in PDFs and DOCX as memory? See Document pipeline →