My name is Monica Anderson. I have had a decades-long career in both 20th Century GOFAI (mostly NLP) and in 21st Century AI (Deep Neural Networks). I started working on my Deep Neural Networks of the Third Kind (Organic Learning) on January 1, 2001. At that point, fewer than a dozen people were working in this domain, including Geoff Hinton, Yann LeCun, Yoshua Bengio, Jürgen Schmidhuber, and some of their students. Most people did not learn about Deep Learning until 2012, which means I had an 11-year head start.
I focused from the very start on Deep Discrete Neuron Networks, where learning starts from an empty machine and builds a structure of pseudo-neurons and pseudo-synapses in main memory. This heavily interlinked graph constitutes the entire Smallish Language Model. Constructing this Model while learning requires no GPUs, no Linear Algebra, and not even Floating Point Arithmetic, which makes it radically different from, and much more efficient than, anything based on Deep Learning. Transformers can still be constructed on top of these radically cheaper-to-create data structures.
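To make the idea concrete, here is a minimal sketch of what such a discrete, graph-based learner might look like. This is my illustration, not Organic Learning itself: the class name, the co-occurrence rule, and the window size are all hypothetical. The point it demonstrates is the one above: the machine starts empty, grows a graph of pseudo-neurons (tokens) and pseudo-synapses (integer-weighted links) in main memory as text streams through, and uses only integer arithmetic throughout.

```python
from collections import defaultdict

class DiscreteNeuronGraph:
    """Hypothetical sketch of a discrete, graph-based learner.

    Pseudo-neurons are the tokens themselves; pseudo-synapses are
    integer co-occurrence counts between nearby tokens. No GPUs,
    no linear algebra, no floating point anywhere.
    """

    def __init__(self):
        # The machine starts empty; structure appears only via learning.
        self.synapses = defaultdict(int)  # (src, dst) -> integer strength

    def learn(self, tokens, window=2):
        # Link each token to its recent predecessors, strengthening
        # pseudo-synapses with pure integer increments.
        for i, tok in enumerate(tokens):
            for prev in tokens[max(0, i - window):i]:
                self.synapses[(prev, tok)] += 1

    def strongest_successor(self, token):
        # Follow the strongest outgoing pseudo-synapse, if any exists.
        candidates = [(strength, dst)
                      for (src, dst), strength in self.synapses.items()
                      if src == token]
        return max(candidates)[1] if candidates else None

# The graph grows from nothing as a tiny corpus streams through.
g = DiscreteNeuronGraph()
g.learn("the cat sat on the mat".split())
```

After this single pass, `g.strongest_successor("cat")` follows the learned link to `"sat"`. A real system would of course need far richer structure than bigram-style counts, but the contrast with gradient-based Deep Learning is visible even here: construction is incremental, in-memory, and entirely integer-valued.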
What I have learned from studying the domain and conducting 20,000+ experiments over the decades forms the basis of my educational outreach, of which Substack is an important component. My main publishing site is called Experimental Epistemology. More of my work is accessible from my Corporate Website, and I also post quite a lot on Facebook.
Consider the following Epistemology domain statements:
Omniscience is unavailable.
All corpora are incomplete.
(therefore) All intelligences are fallible.
These are rather hard to argue with, but people worried about AI as an existential risk ignore these facts and posit vastly superhuman intelligences. Future posts will discuss these issues.
My initial focus is to show that Machine Learning is not Scientific, because it gathers correlations and then jumps to conclusions on scant evidence – operations that are not allowed in a Scientific context. I will discuss the repercussions of this fact for Artificial Intelligences as demonstrated by Large Language Models (LLMs) such as ChatGPT.
I will also speculate on the likely impact on future AI systems from my (quite Holistic) point of view. I have many opinions and research results in Experimental AI Epistemology to share. I plan to discuss the impact on LLMs and other AI implementation strategies, Holistic AI, Deep Neural Networks, Organic Learning, Understanding Machine One, The Red Pill of Machine Learning, Natural Language Understanding, AI Ethics, and other topics in the AI domain.
I see AI as providing many opportunities for improvements in quality of life at all levels of competence and resources. Specifically, in the short term, I can see Dialog-based AI as a phone app rapidly providing useful answers to simple questions from people who do not fully understand how the world works. Some posts will be speculative fiction about plausible models for a future AI-enriched society and various AI-based spot solutions to common problems.
We will shortly find that, for all practical purposes, our AIs have stopped lying. Posts will explain why ChatGPT and its ilk are lying today (Spring 2023), and how we will fix that. Indeed, many of my posts will assume AIs have stopped lying, because that is what matters in the medium run.