Archotec
LIVE COGNITIVE DEMO

This Is Not A Chatbot.

90 seconds to understand what makes Aura different from every other AI system.

Not an LLM. Not Reinforcement Learning. This is Active Inference.

The brain doesn't optimize rewards. It minimizes surprise. So does Aura.

Running 100% locally -- No cloud connection

What You're About To See

  • Real-time probabilistic cognition -- not keyword matching
  • Context-indexed learning -- each sensor learns independently
  • Active information seeking -- the system knows what it doesn't know
  • Belief-driven policy -- actions from uncertainty, not thresholds

Why This Matters

vs LLMs: No hallucination. No token limits. Runs on $50 hardware.
vs RL: No reward hacking. No catastrophic forgetting. No training data.
vs Rules: No hardcoding. No false positives. Adapts to your home.

Perception

Sensor data encoded via semantic similarity -- not keyword matching

Beliefs

Context-indexed weights -- each sensor learns independently

Policy

Actions generated from beliefs, not raw sensor data

Active Inference

System identifies what it doesn't know and requests specific data

Sensor Channels -- interactive panel; click a bar to spike a sensor value.

Semantic Distress Scoring -- live scores for Acute Danger, Health Risk, Env. Hazard, Intrusion, and Inactivity Alert (all at 0.0% until a sensor spikes).

Active Queries -- lists high-uncertainty sensors; empty when the system is confident.

Policy Actions -- lists generated actions; empty while all distress levels are nominal.

How It Actually Works

Semantic Perception

Instead of 'if temperature > 35, danger', we compute semantic similarity between sensor contexts and distress prototypes. The system understands what 'gas_concentration: 800' MEANS relative to concepts like 'acute_danger'.
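
As a rough illustration in Python (a toy sketch: the hand-made vectors, the embedding of the gas reading, and the prototype set below are stand-ins for the learned semantic embeddings the production system uses):

    import math

    # Toy stand-ins for learned embeddings of distress prototypes (illustrative values).
    PROTOTYPES = {
        "acute_danger": [0.9, 0.1, 0.7],
        "health_risk":  [0.2, 0.9, 0.3],
        "env_hazard":   [0.7, 0.2, 0.8],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def distress_scores(context_embedding):
        # Graded similarity to each prototype -- not a hard threshold on a raw value.
        return {name: cosine(context_embedding, proto) for name, proto in PROTOTYPES.items()}

    # An encoded 'gas_concentration: 800' context (embedding invented for the example):
    print(distress_scores([0.8, 0.15, 0.75]))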

Context-Indexed Learning

Each sensor context has its own weight set. When gas data arrives, only gas weights update -- not temperature, not heart rate. This prevents catastrophic forgetting.
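
A minimal sketch of the same idea, assuming one weight vector per sensor context and a simple delta-rule update (names and numbers are illustrative, not the production PyTorch code):

    # One weight vector per sensor context; an update touches only the active context.
    weights = {
        "gas":         [0.0, 0.0, 0.0],
        "temperature": [0.0, 0.0, 0.0],
        "heart_rate":  [0.0, 0.0, 0.0],
    }

    LEARNING_RATE = 0.08  # the demo's default Learning Rate

    def update(context, prediction_error, features):
        w = weights[context]
        for i, x in enumerate(features):
            w[i] += LEARNING_RATE * prediction_error * x
        # Every other context's weights stay untouched -- no catastrophic forgetting.

    update("gas", prediction_error=0.6, features=[0.8, 0.15, 0.75])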

Active Information Seeking

The model doesn't passively wait for data. It identifies beliefs with high uncertainty and generates specific perception requests. It knows what it doesn't know.
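
Sketched in Python under the assumption that each sensor's belief carries a posterior variance (the threshold matches the demo's default; the request format is invented):

    UNCERTAINTY_THRESHOLD = 0.60  # demo default

    def perception_requests(belief_variance):
        # Rank sensors by uncertainty and ask for fresh data where the model is least sure.
        uncertain = sorted(belief_variance.items(), key=lambda kv: kv[1], reverse=True)
        return [f"request reading: {sensor}" for sensor, var in uncertain
                if var > UNCERTAINTY_THRESHOLD]

    print(perception_requests({"gas": 0.82, "temperature": 0.31, "motion": 0.74}))
    # -> ['request reading: gas', 'request reading: motion']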

Belief-Driven Policy

Actions come from probabilistic beliefs, not sensor thresholds. Soft sigmoid activations -- everything is continuous, no discrete mode switching.
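
A toy version of that mapping (the gain and midpoint are invented parameters; the point is the continuous sigmoid, not the specific numbers):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def action_intensity(belief_distress, gain=6.0, midpoint=0.5):
        # Continuous activation driven by a probabilistic belief -- no if/else mode switch.
        return sigmoid(gain * (belief_distress - midpoint))

    # A belief of 0.55 yields a partial response, not a binary alarm.
    print(round(action_intensity(0.55), 3))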

Simplified visualization of the Aura Home cognitive architecture built on Archotec. Production system uses PyTorch, semantic embeddings, and gradient-based online learning.

Controls -- simulation speed (Slow to Fast).

Parameters -- Learning Rate (default 0.08), Attention Gain (0.60), Prior Strength (0.30), Uncertainty Threshold (0.60).

Global Metrics -- Free Energy, Total Uncertainty, Learning Progress, Distress Level, and step count, with live charts of Free Energy, Total Uncertainty, and Distress Level.

This is not rule-based automation.

This is local probabilistic cognition.

Explore Aura Home

Join the Waitlist for Early Access

Be among the first to try Aura — sovereign on-device AI agents

Technical Deep Dive

Archotec Architecture

Explore the complete cognitive architecture behind Archotec AI — 19 slides covering everything from the problem we solve to the theoretical foundations.

01

ARCHOTEC AI

Autonomous Cognitive Architecture

Not a chatbot. Not a wrapper. Not a prompt chain. A self-modifying cognitive agent that uses an LLM as a tool.

02

The Problem

The typical 'AI agent' today is just an LLM with extra steps

No Real State

No real state between conversations

No Learning

No learning from experience

No Self-Modification

No self-modification, no goals, no drives

Behavior = Prompt Engineering

"Memory" = context window (disappears when you close the tab)

Decisions = Text Generation

"Decisions" = text generation with temperature sampling

This is not an agent. It's a function call.

03

Our Solution

Cognition Outside The LLM

Archotec separates cognition from language generation:

Perception -> Belief -> Goals -> Policy -> Planning -> Execution -> Feedback -> Memory -> Learning

18 latent states -- Bayesian belief
Expected Free Energy -- policy optimization
Persistent state -- across sessions
42+ subsystems

LLM is used in 3 places out of 15 phases—the rest is pure architecture.

04

Architecture

4 Layers

Layer | Purpose | Evolvable?
Kernel | Cycle orchestration, safety, protocols | NO (locked)
Core | Cognition: perception, belief, policy, causal, evolution | YES
Modules | Phase implementations (hot-swappable at runtime) | YES
Infrastructure | LLM engines, adapters, web, monitoring | YES

Safety Guarantee: The system can evolve any component above the Kernel. The Kernel cannot be modified by the system itself.

Hot-Swappable: Modules can be replaced at runtime without stopping the cognitive cycle.
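
A minimal sketch of what hot-swapping could look like (the ModuleRegistry class and the phase names here are hypothetical, not the actual Archotec API):

    from typing import Callable, Dict

    class ModuleRegistry:
        """Phase implementations can be swapped between cycles without stopping the loop."""

        def __init__(self) -> None:
            self._modules: Dict[str, Callable[[dict], dict]] = {}

        def register(self, phase: str, impl: Callable[[dict], dict]) -> None:
            self._modules[phase] = impl  # atomically replaces any previous implementation

        def run(self, phase: str, state: dict) -> dict:
            return self._modules[phase](state)

    registry = ModuleRegistry()
    registry.register("perception", lambda s: {**s, "encoded": True})
    registry.register("perception", lambda s: {**s, "encoded": True, "version": 2})  # hot swap
    print(registry.run("perception", {"observation": "hello"}))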

05

Agent vs Chatbot

Direct Comparison

Property | Chatbot | Archotec
State | Stateless | Continuous belief distribution (18 latent dims)
Memory | Context window | 4 memory systems: episodic, semantic, working, vector
Goals | None | Over 9 autonomous goals with learnable priorities
Drives | None | 5 drives: curiosity, consolidation, adaptation, exploration, self-assessment
Decisions | LLM generates text | Policy network minimizes Expected Free Energy
Learning | Frozen | Online meta-learning every cycle
Self-repair | Crash = crash | 5-step cascade, no human needed
New tools | Hardcoded | Discovers, installs, wraps, tests, registers autonomously
Causality | None | Causal graph + do-calculus + counterfactual simulation

06

The Cognitive Cycle

15 Phases In Detail

01

Adversarial Screen

Block jailbreaks before any LLM call

02

Memory Retrieval

Pull relevant episodes + concepts as Bayesian prior

03

Perception [LLM]

Encode observation: uncertainty, intent, emotion

04

Belief Update

Bayesian posterior over 18 latent states

05

Capability Check

"Can I do this?" (skipped at low uncertainty)

06

Regulation

Allostatic health: am I overloaded? distressed?

07

Drive Injection

5 autonomous drives compute urgency from belief trends

08

Goal Update

Over 9 goals reprioritized from belief + drives

09

Policy Selection

EFE minimization -> stochastic action sampling

10

Planning

Counterfactual simulation (activates when uncertain)

11

Execution

Dispatch action via adapter/capability system

12

Reasoning Loop

Multi-step for complex tool actions

13

Feedback

Intrinsic reward computed from belief change

14

Memory Storage

Store episode, update semantic index

15

Learning

Update meta-parameters, check evolution triggers

All 15 phases have independent health monitoring. A broken phase degrades gracefully rather than crashing the cycle.
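
One way to picture per-phase health monitoring with graceful degradation (an illustrative loop with an abridged phase list, not the real kernel code):

    import time

    PHASES = ["adversarial_screen", "memory_retrieval", "perception", "belief_update",
              "policy_selection", "execution", "feedback", "learning"]  # abridged

    health = {p: {"errors": 0, "last_latency": 0.0} for p in PHASES}

    def run_cycle(impls, state):
        for phase in PHASES:
            start = time.monotonic()
            try:
                state = impls[phase](state)
            except Exception:
                health[phase]["errors"] += 1  # record the failure, keep the cycle running
            finally:
                health[phase]["last_latency"] = time.monotonic() - start
        return state

    # Usage: run_cycle({p: (lambda s: s) for p in PHASES}, {"cycle": 0})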

07

Autonomous Drives

The Agent Thinks Without Input

No external stimulus needed. The system generates its own cognitive activity:

Curiosity

Epistemic uncertainty trending up

Consolidation

Too many unsorted episodes

Adaptation

Reward declining

Exploration

Stuck in local optimum (low variance)

Self-Assessment

Haven't checked health recently

All parameters are learnable. Selection: soft inclusion gate (sigmoid) -> softmax sampling among survivors. Not argmax. Not if/else. Pure stochastic policy.
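
A sketch of that selection scheme, assuming each drive reports a scalar urgency (the gate threshold and temperature values are illustrative):

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def select_drive(urgencies, gate_threshold=0.6, temperature=1.0):
        # Soft inclusion gate: each drive survives with probability sigmoid(urgency - threshold).
        survivors = {d: u for d, u in urgencies.items()
                     if random.random() < sigmoid(u - gate_threshold)}
        if not survivors:
            return None
        # Softmax sampling among the survivors -- stochastic, never argmax.
        exps = {d: math.exp(u / temperature) for d, u in survivors.items()}
        total = sum(exps.values())
        r, acc = random.random(), 0.0
        for drive, e in exps.items():
            acc += e / total
            if r <= acc:
                return drive
        return drive

    print(select_drive({"curiosity": 0.9, "consolidation": 0.4, "adaptation": 0.7}))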

08

Expected Free Energy

How Decisions Are Made

Every action is scored by Expected Free Energy (EFE):

Pragmatic Value

Does it achieve my goals?

Weight: 2.0

Epistemic Value

Does it reduce my uncertainty?

Weight: 1.5

Safety Bonus

Is it safe?

Weight: 2.0

Causal Prior

Do I know this action's effects?

Weight: 0.2

Action selection is NEVER deterministic: EFE scores -> softmax with temperature -> sample from distribution. Temperature is adaptive—cools as confidence grows.
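
A sketch of the scoring and sampling step, using the weights from this slide (the component values, the confidence-to-temperature rule, and the candidate actions are invented for illustration):

    import math
    import random

    WEIGHTS = {"pragmatic": 2.0, "epistemic": 1.5, "safety": 2.0, "causal_prior": 0.2}

    def action_value(components):
        # Weighted sum of the four terms; higher value = lower Expected Free Energy.
        return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

    def sample_action(candidates, confidence):
        temperature = max(0.1, 1.0 - confidence)  # adaptive: cools as confidence grows
        exps = {a: math.exp(action_value(c) / temperature) for a, c in candidates.items()}
        total = sum(exps.values())
        r, acc = random.random(), 0.0
        for action, e in exps.items():
            acc += e / total
            if r <= acc:
                return action
        return action

    candidates = {
        "query_sensor": {"pragmatic": 0.2, "epistemic": 0.9, "safety": 1.0, "causal_prior": 0.5},
        "raise_alert":  {"pragmatic": 0.8, "epistemic": 0.1, "safety": 0.7, "causal_prior": 0.9},
    }
    print(sample_action(candidates, confidence=0.4))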

09

Memory

4 Persistent Systems

The agent has real memory, not a context window:

System | Capacity | Purpose | Mechanism
Episodic | 1000 episodes | "What happened" | Importance = 0.4*access + 0.3*recency + 0.3*emotion
Semantic | 500 concepts | "What I know" | Graph: concepts + typed relations + strengths
Working | 7 items | "What I'm thinking now" | Miller's law, FIFO replacement
Vector | Unlimited | "Find similar" | 384-dim embeddings, cosine similarity search

Consolidation runs autonomously: Summarize -> Degrade -> Prune. Memory persists to disk.
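
For the episodic store, the importance formula from the table above could drive pruning roughly like this (the episode fields and the prune helper are illustrative, not the actual storage code):

    EPISODIC_CAPACITY = 1000

    def importance(access, recency, emotion):
        # Importance = 0.4*access + 0.3*recency + 0.3*emotion (inputs assumed normalized to [0, 1]).
        return 0.4 * access + 0.3 * recency + 0.3 * emotion

    def prune(episodes):
        # Keep the most important episodes once the store exceeds capacity.
        ranked = sorted(episodes,
                        key=lambda e: importance(e["access"], e["recency"], e["emotion"]),
                        reverse=True)
        return ranked[:EPISODIC_CAPACITY]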

10

Capability Acquisition

The Agent Learns New Tools

The system discovers and integrates new capabilities autonomously:

Discover -> Install -> Introspect -> Generate -> Validate

Capability | Adapter | Actions
Trading | Binance (testnet) | connect, get_balance, get_price, place_order
Web Search | DuckDuckGo | search, search_images, search_news
Computer | System | create_dir, list_files, read_file, list_processes, system_info
File Ops | Local FS | read, write, list
Weather | API | get_weather

The agent decides WHEN to use each capability through belief + policy—not routing rules.

11

World Model

Causal Understanding

The agent doesn't just react—it models reality and reasons about causes:

Latent State Inference

  • Over 18 base states + dynamic discovery of new modes
  • Bayesian posterior updated every cycle
  • Temporal trends tracked over 50-cycle windows

Causal Reasoning System

  • CausalGraph—DAG: nodes + directed edges with strength + confidence
  • Do-Calculus—Graph surgery: do(X=x) severs incoming edges
  • CausalLearner—Discovers edges from observation data
  • CausalPolicySynth—Wires learned causality into EFE calculator
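
A toy rendering of that graph surgery (the class name, edge encoding, and example edges are illustrative only):

    class CausalGraph:
        """Edges stored as {(cause, effect): strength}."""

        def __init__(self, edges):
            self.edges = dict(edges)

        def do(self, variable):
            # do(X=x): sever every edge coming INTO the intervened variable.
            return CausalGraph({(c, e): s for (c, e), s in self.edges.items() if e != variable})

    g = CausalGraph({("fire", "smoke"): 0.95, ("smoke", "alarm"): 0.90})
    print(g.do("smoke").edges)  # fire -> smoke is cut; smoke -> alarm remains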

Counterfactual Simulator

  • "What if I had taken action B instead?"
  • Simulates 3 steps ahead, branching factor 2
  • Activates stochastically when policy is uncertain
12

Social Cognition

Theory of Mind

Theory of Mind

  • Maintains MentalState per observed agent: beliefs, intentions, emotional_state, confidence
  • Intention types: COOPERATIVE, COMPETITIVE, NEUTRAL, UNKNOWN
  • Updates via Bayesian inference from observations

Empathic Resonance

  • Mirrors emotional states of interaction partners
  • Computes empathy signals that modulate response generation
  • Threshold-based activation (0.3)—empathy engages at sufficient emotional intensity

Social Norms

  • Encodes context-dependent behavioral rules
  • Guides action selection toward socially appropriate responses
  • Norms are learnable—they adapt from interaction outcomes
13

Subagent Delegation

Multi-Agent Architecture

Complex tasks get delegated to specialized subagents:

Estimate Complexity -> Decide Delegation -> Spawn Subagents -> Coordinate & Collect

Delegation success rate: prior 0.7
Direct success rate: prior 0.6
NO hardcoded rules. Delegation is an EFE-based decision. The parent agent learns when to delegate from outcomes—not from human-defined thresholds.

14

Evolution

The System Rewrites Itself

22 evolution modules across 4 levels:

L1
Gradient signal

"Did reward go up or down?"

L2
Per-parameter causal tracking

"Which parameter CAUSED this change?"

L3
Parameter discovery

"Are there learnable dimensions I missed?"

L4
Self-referential monitor

"Is my learning system itself broken?"

ComponentVault — Archives known-good module versions
DiversityEngine — Prevents premature convergence
PlateauPrevention — Detects when learning stalls
EvolutionRateLimiter — Prevents runaway evolution

All parameters have clip bounds and auto-rollback—harmful changes revert automatically.

15

Safety

11 Layers of Defense

The system cannot disable its own safety:

Layer | Mechanism | Prevents
Adversarial Screen | KernelRegulation (non-swappable) | Jailbreaks, prompt injection
Phase Constraints | Fixed cycle order | Catastrophic phase reordering
Allostatic Regulation | Anxiety/load thresholds | Distress spirals, cognitive overload
Parameter Bounds | Clip + auto-rollback | Harmful parameter changes
Sandbox Testing | Isolated execution | Broken evolved components
Evolution Cooldown | Per-module rate limiter | Evolution oscillation
Adaptation Monitor | L4 health tracking | Meta-learning degradation
Component Vault | Archived good versions | Failed evolution rollback
Cycle Health | Per-phase latency/error | Silent phase degradation
Emergency Stop | Dual-trigger kill switch | Catastrophic failure
Belief Checkpoint | Atomic state snapshots | Full state recovery

Everything above the Kernel can evolve. The Kernel cannot.

16

Active Inference

The Theoretical Foundation

Archotec implements Active Inference (Friston, 2017):

Free Energy = Complexity (how far the model's beliefs diverge from its prior) − Accuracy (how well the model predicts observations)
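
In standard variational notation this is the textbook decomposition, with q(s) the approximate posterior over latent states and p the generative model (written out here for reference, not quoted from the Archotec code):

    F = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s)\right]}_{\text{complexity}}
        - \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}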

Perception

Update beliefs to better predict observations (reduce surprise)

Action

Change the world to match expectations (reduce prediction error)

Learning

Improve the model itself (reduce long-term free energy)

BeliefUpdate = Bayesian posterior (variational inference)
GoalSystem = preferred observations (prior preferences)
PolicyNetwork = EFE minimization (action selection)
FeedbackLoop = prediction error computation
MetaLearner = hyperparameter optimization of the generative model

Not a metaphor. The code implements the equations.

17

Real System Output

What Autonomous Thinking Looks Like

System starts, no human input. This is what happens:

[20:29:04] ActiveInferenceAgent initialized (42 components)
[20:29:04] AutonomousDrive initialized: threshold=0.6, drive_lr=0.02
[20:29:04] Starting main cognitive loop...
[20:29:25] Goals updated: MAINTAIN_IDENTITY=1.00, LEARN_AND_ADAPT=1.00, EXPAND_CAPABILITIES=0.93, REDUCE_UNCERTAINTY=0.27
[20:29:25] Capability-driven execution: trading (target=binance, method=connect)
[20:29:25] Adapter execution completed via execute_action fallback
[20:29:34] Ollama response received in 8.6s "Your brain is feeling quite relaxed..."
[20:29:34] Learned from outcome: trading:binance success=True, p=0.984, u=0.090
[20:32:36] Checkpoint saved: cp_100 (cycle 100)

Goals, decisions, executions, learning—all happening without human input.

18

What's Next

Roadmap

Working Now

  • Full 15-phase cognitive cycle with autonomous drives
  • Belief-driven policy (EFE + ActorCritic + causal synthesis)
  • 4 memory systems with autonomous consolidation
  • Self-repair cascade (5 steps)
  • Capability acquisition pipeline (discover -> integrate -> use)
  • World model with causal graph + do-calculus + counterfactuals
  • Social cognition (Theory of Mind, empathy, social norms)
  • Subagent delegation framework
  • 11-layer safety architecture

Next

  • Predictive evaluation — simulate parameter changes in world model before applying
  • Multi-agent deployment — multiple Archotec instances collaborating
  • Real-time trading execution via Binance adapter
  • Stronger local LLM for code evolution (DeepSeek-Coder, Qwen-Coder)
  • Long-running autonomous experiments (24h+)
19

Why This Matters

The Bottom Line

LLMs are tools, not agents.

Making them bigger doesn't give them goals, memory, drives, or self-modification.

Real autonomy requires real cognition.

Not prompt chains. Not RAG. Not tool-calling frameworks. Beliefs, drives, causal reasoning, actions under uncertainty, learning from outcomes—every cycle, forever, without human input.

Safety cannot be prompt-engineered.

It requires non-evolvable architectural layers that the system physically cannot modify, disable, or evolve away.

Archotec is not a smarter chatbot. It is the cognitive architecture that turns LLMs into autonomous agents.

Explore the source code and contribute

View on GitHub

Explore More

Aura Home -- Smart home cognitive companion with 100% local AI
Compare -- See how Archotec differs from traditional AI
Compatibility -- Supported devices, sensors and protocols