Sheryar Shah


    AI Research & Independent Papers

    Independent research at the intersection of neuroscience and large language models. Exploring cognitive architectures that go beyond pattern matching toward genuinely trustworthy AI.

    2025 · Independent Research

    Unified Cognitive Architecture (UCA)

    Brain-Inspired Framework for Large Language Models


    Abstract

    Large Language Models lack integrated emotional reasoning, long-term episodic memory, temporal awareness, and genuine self-monitoring. UCA unifies six core cognitive functions — sensory processing, emotional valuation, episodic memory, semantic knowledge with temporal sharding, executive control with quantum-like state superposition, and metacognitive self-modelling — within a single recurrent neural system. A Global Workspace enables information from all specialised layers to become globally available, mimicking conscious access.
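"Semantic knowledge with temporal sharding" is the least familiar idea in the abstract, so a toy illustration may help: facts are bucketed by their validity period, and a query resolves against the most recent shard at or before the query date. This is a minimal Python sketch under my own assumptions; the class name, year-level granularity, and routing rule are all illustrative, since the paper specifies only the concept.

```python
from datetime import date
from typing import Dict, Optional

class TemporallyShardedKnowledge:
    """Facts bucketed by validity year, so time-sensitive knowledge
    ("current champion", "latest version") doesn't collide across eras.
    Hypothetical sketch; not the paper's implementation."""

    def __init__(self):
        self.shards: Dict[int, Dict[str, str]] = {}  # year -> {fact: value}

    def store(self, fact: str, value: str, valid_from: date) -> None:
        self.shards.setdefault(valid_from.year, {})[fact] = value

    def query(self, fact: str, as_of: date) -> Optional[str]:
        # The most recent shard at or before the query date wins.
        for year in sorted(self.shards, reverse=True):
            if year <= as_of.year and fact in self.shards[year]:
                return self.shards[year][fact]
        return None

# The same fact name holds different values in different shards.
kb = TemporallyShardedKnowledge()
kb.store("league_champion", "Team Alpha", date(2020, 5, 1))
kb.store("league_champion", "Team Beta", date(2023, 5, 1))
assert kb.query("league_champion", date(2021, 1, 1)) == "Team Alpha"
assert kb.query("league_champion", date(2024, 1, 1)) == "Team Beta"
```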

    6 Cognitive Layers

    Sensory → Limbic → Hippocampal → Association → Executive → Metacognitive

    42.3 Perplexity

    Competitive with baseline transformer at similar parameter count

    Episodic Memory

    5,000-capacity external memory with emotional salience gating

    Hallucination Detection

    Metacognitive layer flags high-risk tokens with a 2% intervention rate (see the sketch below)
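The 2% intervention figure suggests a calibrated threshold on the metacognitive layer's per-token confidence scores. A minimal sketch, assuming a quantile-based calibration; the calibration scheme is my assumption, not necessarily the paper's:

```python
import torch

def flag_hallucination_risk(confidence: torch.Tensor,
                            intervention_rate: float = 0.02) -> torch.Tensor:
    """Flag the least-confident tokens for intervention.

    `confidence` holds hypothetical per-token confidence estimates from the
    metacognitive layer, shape [batch, seq]. Setting the threshold at the
    batch's 2nd percentile keeps the intervention rate near the 2% target.
    Returns a boolean mask of tokens to intervene on (e.g. re-rank or abstain).
    """
    threshold = torch.quantile(confidence.flatten(), intervention_rate)
    return confidence <= threshold
```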

    Architecture

    UCA is a single neural network with six vertically integrated layers communicating via Recurrent Processing (R=3 steps per forward pass) and a Global Workspace broadcast hub. Each layer maps to a distinct cognitive function, from raw token embedding in L1 to confidence estimation and error classification in L6.
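As a concrete illustration of that layout, here is a minimal PyTorch sketch: six layers applied in sequence, R=3 recurrent steps per forward pass, and an attention-based Global Workspace that fuses all layer states and re-broadcasts the result to every layer. Reducing the workspace to cross-attention over layer states, and all module and class names, are assumptions for illustration; the paper's actual wiring may differ.

```python
import torch
import torch.nn as nn

class GlobalWorkspace(nn.Module):
    """Broadcast hub: the six layer states attend to one another and a fused
    summary is re-broadcast to every layer (cross-attention is assumed)."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, layer_states):                  # list of [B, T, D]
        stacked = torch.stack(layer_states, dim=2)    # [B, T, L, D]
        B, T, L, D = stacked.shape
        flat = stacked.reshape(B * T, L, D)
        fused, _ = self.attn(flat, flat, flat)        # layers attend to layers
        return fused.mean(dim=1).reshape(B, T, D)     # one broadcast per token

class UCASketch(nn.Module):
    """Six vertically integrated layers, run for R recurrent steps per
    forward pass, each conditioned on the global broadcast."""
    LAYER_NAMES = ["sensory", "limbic", "hippocampal",
                   "association", "executive", "metacognitive"]

    def __init__(self, vocab_size: int, d_model: int = 256, r_steps: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * d_model, d_model), nn.GELU())
            for _ in self.LAYER_NAMES
        ])
        self.workspace = GlobalWorkspace(d_model)
        self.head = nn.Linear(d_model, vocab_size)
        self.r_steps = r_steps

    def forward(self, tokens):                        # tokens: [B, T]
        h = self.embed(tokens)                        # L1 input: token embeddings
        broadcast = torch.zeros_like(h)
        for _ in range(self.r_steps):                 # R = 3 recurrent steps
            states = []
            x = h
            for layer in self.layers:                 # sensory -> ... -> metacognitive
                x = layer(torch.cat([x, broadcast], dim=-1))
                states.append(x)
            broadcast = self.workspace(states)        # global availability
        return self.head(x)                           # next-token logits
```

Calling `UCASketch(vocab_size=50_000)` on a `[batch, seq]` tensor of token ids returns next-token logits; here the recurrence lives entirely in the repeated broadcast, which is one plausible reading of "R=3 steps per forward pass".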

    Key Innovation

    Quantum-inspired superposition in L5 (Prefrontal Executive) maintains n=3 parallel interpretation states per token. High uncertainty preserves superposition; low uncertainty collapses to a single interpretation — avoiding premature commitment to ambiguous inputs.
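A minimal sketch of how such a gate might look: each token is branched into n=3 candidate states, a learned score distributes probability over them, and the normalised entropy of that distribution decides whether to keep the weighted superposition or collapse to the single winner. The entropy gate, threshold value, and branching projection are all assumptions for illustration.

```python
import math
import torch
import torch.nn as nn

class SuperpositionHead(nn.Module):
    """Keeps n parallel interpretation states per token and collapses to one
    only when uncertainty is low. Hypothetical reduction of the paper's
    L5 mechanism; the entropy gate is assumed."""
    def __init__(self, d_model: int, n_states: int = 3, threshold: float = 0.9):
        super().__init__()
        self.branch = nn.Linear(d_model, n_states * d_model)  # n candidate states
        self.score = nn.Linear(d_model, 1)                    # per-state plausibility
        self.n_states, self.threshold = n_states, threshold

    def forward(self, h):                                     # h: [B, T, D]
        B, T, D = h.shape
        states = self.branch(h).view(B, T, self.n_states, D)  # [B, T, n, D]
        weights = self.score(states).squeeze(-1).softmax(-1)  # [B, T, n]
        # Normalised entropy in [0, 1]; 1 = maximally uncertain.
        entropy = -(weights * weights.clamp_min(1e-9).log()).sum(-1)
        entropy = entropy / math.log(self.n_states)
        keep = (entropy > self.threshold).unsqueeze(-1)       # [B, T, 1]
        mixed = (weights.unsqueeze(-1) * states).sum(dim=2)   # soft superposition
        idx = weights.argmax(-1)                              # [B, T]
        top = states[torch.arange(B)[:, None],
                     torch.arange(T)[None, :], idx]           # collapsed winner
        # High uncertainty preserves the superposition; low uncertainty
        # collapses to the single most plausible interpretation.
        return torch.where(keep, mixed, top)
```

Note the gate is hard: gradients flow through whichever branch is selected, but the keep/collapse decision itself is non-differentiable, so a soft blend weighted by entropy would be the obvious trainable alternative.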

    Results

    A small-scale implementation (256-dimensional hidden state, 6 layers, 100K training steps on WikiText-2) achieved 42.3 validation perplexity, 0.5 above a parameter-matched transformer baseline, while demonstrating emergent cognitive behaviours: memory-guided disambiguation, temporal sharding of time-sensitive facts, and metacognitive intervention on hallucination-risk tokens.
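The memory-guided disambiguation behaviour rests on the 5,000-slot, salience-gated episodic store highlighted above. A minimal sketch, assuming cosine-similarity retrieval and FIFO eviction; both the retrieval and eviction policies are my assumptions, and the salience scorer itself is left external:

```python
import torch
import torch.nn.functional as F

class EpisodicMemory:
    """Fixed-capacity external memory (5,000 slots in the paper) where writes
    are gated by an emotional-salience score and reads are nearest-neighbour
    retrieval. Hypothetical sketch, not the paper's implementation."""
    def __init__(self, capacity: int = 5000, d_model: int = 256,
                 salience_threshold: float = 0.5):
        self.keys = torch.empty(0, d_model)
        self.values = torch.empty(0, d_model)
        self.capacity = capacity
        self.threshold = salience_threshold

    def write(self, key, value, salience: float):
        if salience < self.threshold:          # salience gate: drop the bland
            return
        self.keys = torch.cat([self.keys, key.view(1, -1)])
        self.values = torch.cat([self.values, value.view(1, -1)])
        if len(self.keys) > self.capacity:     # FIFO eviction once full
            self.keys, self.values = self.keys[1:], self.values[1:]

    def read(self, query, k: int = 5):
        if len(self.keys) == 0:
            return torch.zeros_like(query)
        sims = F.cosine_similarity(query.view(1, -1), self.keys)  # [N]
        top = sims.topk(min(k, len(self.keys)))
        weights = top.values.softmax(dim=0)                       # [k]
        return (weights.unsqueeze(-1) * self.values[top.indices]).sum(0)
```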

    PyTorch · Transformer Architecture · Global Workspace Theory · WikiText-2 · NVIDIA A100

    Interested in collaborating?

    Open to research partnerships, AI consulting, and speaking.

    Get in touch