Base URL: https://localhost (TLS, self-signed cert)
Time-of-day greeting with personal context.

```json
{"greeting": "Good morning, Marguerite!", "period": "morning", "should_initiate": true}
```

Safety alerts for caregivers.

```json
{"alerts": ["Pain mentioned 4 times"], "alert_count": 1, "personal_summary": "...", "recent_moods": [...]}
```

All personal facts + conversation history.

```json
{"name": "Marguerite", "facts": [...], "context": "...", "recent_conversation": [...]}
```

Personality-driven conversation with emotion detection and personal memory.

```json
// Request
{"message": "I miss my husband Jean"}

// Response
{
  "response": "I'm here with you. When you say that, I think of Sad music...",
  "facts_extracted": ["Jean is-husband-of user"],
  "grounding": {"semantic_matches": [...], "personal": "Name: Marguerite...", "knowledge": [...]},
  "grounded": true
}
```

LLM-based dialogue (requires Ollama or a local LLM).
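A minimal Python client sketch for the conversation endpoint shown above. The route `/api/chat` is an assumption (the actual paths are not listed here), and certificate verification is disabled only because the server uses a self-signed cert.

```python
import json
import ssl
import urllib.request

BASE_URL = "https://localhost"  # self-signed TLS cert (see Base URL above)

def build_chat_request(message: str) -> urllib.request.Request:
    """Build a JSON POST for the conversation endpoint.
    NOTE: the path /api/chat is a placeholder -- substitute the real route."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        BASE_URL + "/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send(req: urllib.request.Request) -> dict:
    # Accept the self-signed certificate; never do this against untrusted hosts.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

# To actually talk to the server (requires it to be running):
#   reply = send(build_chat_request("I miss my husband Jean"))
#   print(reply["response"], reply["grounded"])
```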
Process a webcam image → visual understanding.

```json
{"image_b64": "<base64 JPEG>", "top_k": 10}
→ {"summary": {"i_see": [...], "i_expect_to_hear": [...], "confidence": 0.95}}
```

Process audio → sound understanding.

```json
{"audio_b64": "<base64 PCM float32 16kHz>", "top_k": 10}
→ {"summary": {"i_hear": [...], "audio_duration": 2.0, "confidence": 0.8}}
```

World model prediction: what audio does this evoke?

```json
{"query": "thunder storm", "top_k": 5}
→ {"predicted_audio": [{"label": "thunder", "similarity": 0.41}, ...]}
```

Spreading-activation reasoning through the knowledge graph.

```json
{"query": "what follows thunder?", "n_hops": 3, "top_k": 10}
→ {"start_concepts": ["thunder"], "chains": [...]}
```

Concept arithmetic: add/subtract concepts.

```json
{"add": ["dog", "music"], "subtract": ["cat"], "top_k": 5}
→ {"results": [{"concept": "dog growling", "similarity": 0.40}]}
```

Decompose a phrase into concept components.

```json
{"query": "thunderstorm at night", "k": 5}
→ {"components": [{"concept": "night", "weight": 0.41}, ...]}
```

Generate an imagination chain.

```json
{"seed": "piano", "steps": 5}
→ {"seed": "playing piano", "steps": [...], "avg_surprise": 0.35, "learning_pairs_generated": 5}
```

Multi-step grounded reasoning.

```json
{"question": "what is a dog?"}
→ {"chain_of_thought": [{"step": "activate", ...}, {"step": "predict", ...}]}
```

Current 7-slot attention buffer.
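The image and audio endpoints above take base64-encoded payloads (`image_b64` as raw JPEG bytes, `audio_b64` as mono float32 PCM at 16 kHz). A sketch of building both bodies with the stdlib; the helper names are mine, and little-endian byte order for the PCM floats is an assumption the docs don't state.

```python
import base64
import json
import struct

def image_payload(jpeg_bytes: bytes, top_k: int = 10) -> str:
    """JSON body for the image endpoint: base64 over the raw JPEG bytes."""
    return json.dumps({
        "image_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
        "top_k": top_k,
    })

def audio_payload(samples: list[float], top_k: int = 10) -> str:
    """JSON body for the audio endpoint: mono float32 samples at 16 kHz,
    packed little-endian (byte order is my assumption) then base64-encoded."""
    raw = struct.pack(f"<{len(samples)}f", *samples)
    return json.dumps({
        "audio_b64": base64.b64encode(raw).decode("ascii"),
        "top_k": top_k,
    })

# Example: half a second of silence at 16 kHz
body = audio_payload([0.0] * 8000)
```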
Hopfield associative memory status.
Episodic memory timeline.
Learned concept prototypes.
Knowledge graph statistics by relation type.
BFS traversal through knowledge graph.
Overall brain state assessment.
Learning progress: pairs learned, dreams, autonomy cycles.
Confidence score for a query.
Novelty detection: is this concept new to the brain?
Buffer a (visual, audio) embedding pair for training.
Train MLP on buffered pairs (gradient InfoNCE).
Training buffer size and total learned count.
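The training endpoint above fits the visual→audio mapping with an InfoNCE objective. A minimal stdlib sketch of that loss on a batch of paired embeddings, to show what "gradient InfoNCE" optimizes; the temperature value and cosine scoring are illustrative choices, not the project's actual hyperparameters.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(visual: list[list[float]], audio: list[list[float]],
             temperature: float = 0.07) -> float:
    """Mean InfoNCE loss over a batch of (visual, audio) pairs: each visual
    embedding should score highest against its own paired audio embedding
    (the diagonal of the similarity matrix)."""
    losses = []
    for i, v in enumerate(visual):
        logits = [cosine(v, a) / temperature for a in audio]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        losses.append(log_denom - logits[i])  # -log softmax at the positive
    return sum(losses) / len(losses)
```

Training drives this loss down, which pulls paired embeddings together and pushes mismatched pairs apart.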
Bulk ingest pre-computed embeddings from any dataset directory.

```json
{"dataset": "balanced", "batch_size": 22160}
// Also accepts: "eval", "unbalanced", or any directory name under outputs/cortex/
```

Start the self-directed learning loop (5-minute cycles).
Stop the autonomy loop.
Autonomy loop status: cycles, videos, pairs learned.
300 concepts projected to 2D hexagonal grid.
Waypoints between two concepts in grid space.
Grid distance + cosine similarity between concepts.
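The grid endpoints above project concepts onto a 2D hexagonal grid and report distances in grid space. The project's coordinate convention isn't shown here; assuming axial coordinates, hex distance is the standard cube-coordinate formula:

```python
def hex_distance(a: tuple[int, int], b: tuple[int, int]) -> int:
    """Distance in steps between two hex cells in axial coordinates (q, r).
    Equivalent to converting to cube coordinates and halving the L1 norm."""
    dq = a[0] - b[0]
    dr = a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2
```

Combining this with cosine similarity of the underlying embeddings (as the endpoint does) shows how well the 2D projection preserves semantic neighborhoods.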
Text-to-speech via espeak-ng. Returns WAV audio.
Speak the brain's current thought.
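Since the TTS endpoint wraps espeak-ng, the same WAV output can be produced locally with its CLI (`-v` selects a voice, `-w` writes a WAV file). A thin wrapper sketch; the voice name is illustrative, as the server's actual voice setting isn't documented here.

```python
import subprocess

def espeak_args(text: str, wav_path: str, voice: str = "en") -> list[str]:
    """Argument vector for espeak-ng: -v picks the voice, -w writes WAV output."""
    return ["espeak-ng", "-v", voice, "-w", wav_path, text]

# To synthesize (requires espeak-ng installed):
#   subprocess.run(espeak_args("Good morning, Marguerite!", "greeting.wav"), check=True)
```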
| URL | Page |
|---|---|
| `/` | Dashboard |
| `/goals` | Goals & Progress |
| `/cognition` | Memory / Learning / Autonomy / Knowledge |
| `/imagine` | Imagination + Concept Arithmetic |
| `/explore` | Clips / Embeddings / Grid Map |
| `/training` | Experiments & Evolution |
| `/face` | Animated Face with mic/camera/chat |