This article is a preprint and has not been peer-reviewed.
Results reported in preprints should not be presented in the media as verified information.
A Roadmap to Falsification of Principia Cognitia. Draft Tier-0 Falsification Protocols for the MLC–ELM Duality: Empirical Tests of Cognitive Language Decoupling in Artificial Systems
A central challenge in contemporary cognitive science is to explain how structured, symbol-like processes emerge from the stochastic dynamics of neural collectives. The Principia Cognitia (PC) framework offers a substrate-independent formalism, positing a duality between an internal Metalanguage of Cognition (MLC)—a high-dimensional vector space of semions, operations, and relations (⟨S,O,R⟩)—and an External Language of Meaning (ELM) used for communication. This duality is formalized in the Theorem of Decoupling of Languages (TH-LANG-04), which predicts that MLC alignment is a necessary precondition for effective communication.
This paper presents a detailed methodological roadmap for the rigorous falsification of this theorem, designed to bridge the gap between abstract theory and empirical validation. We provide a complete Tier-0 experimental program comprising three coordinated protocols: MPE-1 (probing spatial MLC misalignment), SCIT-1 (testing cognitive inertia), and CRS-1 (examining compositional understanding). The protocols are specified in sufficient detail for full reproducibility on consumer-grade hardware, including agent architectures, training corpora, and quantitative falsification criteria. By offering this actionable blueprint, this work serves as an open invitation to the research community to replicate, challenge, and extend the empirical testing of the Principia Cognitia framework.
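To make the MLC-alignment precondition concrete, the following is a minimal, hypothetical sketch of how one might quantify misalignment between two agents' internal concept spaces. All names here (`mlc_alignment`, the toy two-dimensional semion vectors) are illustrative assumptions, not definitions from the protocols themselves; a real MPE-1 probe would operate on learned high-dimensional representations.

```python
# Toy probe of MLC alignment: mean cosine similarity over semions that two
# agents both represent. Names and data are illustrative, not from the paper.
import math
from typing import Dict, List


def cosine(u: List[float], v: List[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def mlc_alignment(agent_a: Dict[str, List[float]],
                  agent_b: Dict[str, List[float]]) -> float:
    """Average cosine similarity over the semions shared by both agents."""
    shared = agent_a.keys() & agent_b.keys()
    if not shared:
        return 0.0
    return sum(cosine(agent_a[s], agent_b[s]) for s in shared) / len(shared)


# Two agents that use identical ELM labels ("north", "east") but whose
# internal MLC vectors for those labels are orthogonal.
a = {"north": [1.0, 0.0], "east": [0.0, 1.0]}
b = {"north": [0.0, 1.0], "east": [1.0, 0.0]}
print(mlc_alignment(a, b))  # near-zero alignment despite shared vocabulary
```

The point of the sketch is the failure mode TH-LANG-04 predicts: two agents can share an external vocabulary while their internal representations are maximally misaligned, in which case communication should degrade regardless of surface agreement.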