Unified Cognitive Architecture (UCA)
Brain-Inspired Framework for Large Language Models

Abstract
Large Language Models lack integrated emotional reasoning, long-term episodic memory, temporal awareness, and genuine self-monitoring. UCA unifies six core cognitive functions — sensory processing, emotional valuation, episodic memory, semantic knowledge with temporal sharding, executive control with quantum-like state superposition, and metacognitive self-modelling — within a single recurrent neural system. A Global Workspace enables information from all specialised layers to become globally available, mimicking conscious access.
6 Cognitive Layers
Sensory → Limbic → Hippocampal → Association → Executive → Metacognitive
42.3 Perplexity
Within 0.5 perplexity of a parameter-matched baseline transformer
Episodic Memory
5,000-capacity external memory with emotional salience gating
Hallucination Detection
Metacognitive layer flags high-risk tokens with 2% intervention rate
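The salience-gated episodic store above can be sketched as follows. This is a minimal illustration, not the project's implementation: the write/recall API, the salience threshold of 0.5, and the oldest-first eviction policy are all assumptions; only the 5,000-item capacity and the idea of gating writes by emotional salience come from the description.

```python
from collections import deque

class EpisodicMemory:
    """Sketch of a fixed-capacity episodic store with emotional salience
    gating. Threshold and eviction policy are illustrative assumptions."""

    def __init__(self, capacity=5000, salience_threshold=0.5):
        self.salience_threshold = salience_threshold
        self.store = deque(maxlen=capacity)  # oldest episodes evicted first

    def write(self, embedding, salience):
        # Gate: only sufficiently salient episodes are persisted.
        if salience >= self.salience_threshold:
            self.store.append((salience, embedding))
            return True
        return False

    def recall(self, k=5):
        # Return the k most salient stored episodes.
        return sorted(self.store, key=lambda e: e[0], reverse=True)[:k]

mem = EpisodicMemory(capacity=5000, salience_threshold=0.5)
mem.write([0.1, 0.2], salience=0.9)   # stored
mem.write([0.3, 0.4], salience=0.2)   # gated out
```

A deque with `maxlen` gives constant-time eviction of the oldest episode once capacity is reached; a real system would likely also score recall by embedding similarity rather than salience alone.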
Architecture
UCA is a single neural network with six vertically integrated layers communicating via Recurrent Processing (R=3 steps per forward pass) and a Global Workspace broadcast hub. Each layer maps to a distinct cognitive function, from raw token embedding in L1 to confidence estimation and error classification in L6.
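The control flow described above (R=3 recurrent steps, each followed by a Global Workspace broadcast to all layers) can be sketched as below. The layer functions and the mean-pool-then-mix broadcast are assumptions for illustration; the source specifies only the six-layer stack, the R=3 recurrence, and the workspace broadcast.

```python
def forward(token_vec, layers, R=3):
    """Sketch: R recurrent steps over a stack of layers; after each step a
    Global Workspace state (here, a mean over all layer states - an assumed
    mechanism) is broadcast back and mixed into every layer."""
    states = [token_vec[:] for _ in layers]
    for _ in range(R):
        states = [layer(s) for layer, s in zip(layers, states)]
        # Global Workspace: pool all layer states, broadcast back to each.
        workspace = [sum(dims) / len(states) for dims in zip(*states)]
        states = [[(s + w) / 2 for s, w in zip(state, workspace)]
                  for state in states]
    return states[-1]  # top (metacognitive) layer's state feeds decoding

# Toy stand-ins for the six cognitive layers: each adds a small bias.
layers = [lambda v, b=b: [x + b for x in v] for b in (0.1,) * 6]
out = forward([0.0, 0.0], layers, R=3)
```

The key property the sketch preserves is that every layer both reads from and writes to the shared workspace on every recurrent step, rather than information flowing strictly bottom-up.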
Key Innovation
Quantum-inspired superposition in L5 (Prefrontal Executive) maintains n=3 parallel interpretation states per token. High uncertainty preserves superposition; low uncertainty collapses to a single interpretation — avoiding premature commitment to ambiguous inputs.
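The uncertainty-gated collapse can be sketched as below. Using normalized entropy over the interpretation weights as the uncertainty measure, and the 0.7 threshold, are assumptions; the source specifies only n=3 parallel states and collapse under low uncertainty.

```python
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def maybe_collapse(states, weights, threshold=0.7):
    """Sketch of uncertainty-gated collapse over n parallel interpretation
    states. Entropy measure and threshold are illustrative assumptions."""
    h_norm = entropy(weights) / math.log(len(weights))  # in [0, 1]
    if h_norm < threshold:
        # Low uncertainty: collapse to the single most likely interpretation.
        best = max(range(len(weights)), key=weights.__getitem__)
        return [states[best]], [1.0]
    # High uncertainty: preserve the full superposition.
    return states, weights

states = ["bank=river", "bank=finance", "bank=verb"]
# Peaked distribution -> low entropy -> collapses to the argmax state.
s, w = maybe_collapse(states, [0.9, 0.05, 0.05])
```

With a near-uniform distribution (e.g. `[1/3, 1/3, 1/3]`) the normalized entropy is 1.0, so all three interpretations survive to the next step, deferring commitment on ambiguous tokens.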
Results
A small-scale implementation (256 hidden units, 6 layers, 100K steps on WikiText-2) reached 42.3 validation perplexity (0.5 above a parameter-matched baseline transformer) while demonstrating emergent cognitive behaviours: memory-guided disambiguation, temporal sharding of time-sensitive facts, and metacognitive intervention on hallucination-risk tokens.