The Great Flip – AIs and LLMs Go Beyond Science


In all of my workshops and most of my posts I end up discussing the differences between Reductionism and Holism. This has been my main message since 2005. Suddenly, but not unexpectedly, this has become critical knowledge.

Because we are about to transition.

Most people do not realize there is an inherent conflict between the two. They believe that LLMs must be scientific since they use Linear Algebra running on deterministic computer hardware and were designed by academic mathematicians and programmers.

And then they find out that these scientists can explain how it works but not why (note that word) it works. And that LLMs learn “facts” and strange correlations from social media posts. And that LLMs lie to you. And the vendor says, “It’s not lying, it’s just jumping to conclusions on scant evidence”.

Doesn’t sound very scientific to me. Or to anyone who tried the early GPTs.

Yet our AIs cracked Protein Folding, which Scientific Methods could not do.

An LLM learning English does it on hardware that is completely deterministic using algorithms based on very solid math.

But English is not deterministic. Neither is the world. Or fiction. Or social media. Or even biology – A cell is a bag of soup.

As we can see, something must be wrong in most people’s views of the roles of Science and AI. And for very good reasons. Science has been the winning strategy since around 1600. For 400 years. It got us to The Moon. I’m not arguing with this success. I still claim

Reductionism is the greatest invention our species has ever made

It is the right thing to do for every species at this stage because it is necessary for the transition. To enable The Great Flip. We want to be able to exploit advanced Holistic Methods, and we needed to bootstrap to the point where we could build LLMs. And we did. We are ready.

Even looking at just the history of AI, it is pretty clear we did pretty much the right things all along the way. Starting in 1955, Minsky, McCarthy, Solomonoff, and others directed us toward behavior-based, logical, and Model-based (Reductionist) attempts at AI. And it was the correct approach at the time, because it was the only thing our computers could handle. Today’s LLMs operate totally differently and require enormous resources.

When training an LLM, we are not creating anything scientific.
We are creating a scientist.

Science is the purest example we have of Reductionism. Our LLMs are Holistic Problem Solvers. This is the conflict. You can find this core message in all my other writing and videos. So let me try a caricature of my message:

Humans are not general intelligences. We are general learners. Language, walking, and causality can be observed around us and we learn everything that might come in handy later. When we learn to become Scientists or Engineers we are learning physics and other scientific disciplines at two levels: We are learning at a subconscious, intuition-based level using a Holistic brain architecture and certain learning “algorithms”, the same way we learn to snowboard. We learn very effectively from direct experience. We increase the voltage and suddenly we see smoke from our device.

And on top of that experience and other Understanding we add, by learning from the experiences of others through books and schools, a layer of Models: equations, theories, hypotheses, and non-AI computer programs.

We learn Ohm’s Law, which could have predicted the smoke. You cannot use Ohm’s Law unless you Understand why (there is that word again) it works and how it applies to your current situation.
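As a toy illustration (the component values here are invented), Ohm’s Law combined with the power law would have predicted the smoke before we turned up the voltage:

```python
# Toy illustration with invented component values. Ohm's Law (I = V / R)
# combined with the power law (P = V * I) predicts the dissipation that
# showed up as smoke.
def power_dissipated(voltage, resistance):
    current = voltage / resistance   # I = V / R  (Ohm's Law)
    return voltage * current         # P = V * I, in watts

# A 100-ohm resistor rated for 0.25 W:
print(power_dissipated(5.0, 100.0))   # 0.25 W -- at the rated limit
print(power_dissipated(10.0, 100.0))  # 1.0 W -- four times the rating: smoke
```

The equations are trivial; the Understanding of which component, which rating, and which equation applies is the part the equations themselves cannot supply.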

So an Engineer or Scientist looks at a complex situation in the real world. They use their Understanding of the world to discard everything irrelevant and to split what remains into smaller and smaller pieces… until they find a piece at the bottom that fits one of the equations they learned in STEM.

They output that as the Model that fits the current problem. They measure the numbers that go into the equation, compute the result, and then use that to solve their real world problem. This is a super effective way of solving problems that fit this category. Many problem situations in the real world can be handled by a Reductionist (a Scientist or Engineer) using their experience and deep Understanding to simplify the complex world to a computable Model.

This is the sleight of hand of a STEM education. They teach you explicitly that you shall

  1. Select the Model to use

  2. Measure the relevant values

  3. Use those numbers as parameters of your Model

  4. Run the Model to compute a result

but they neglect to tell you that you also will

-3. Observe the world since you are born
-2. Build a Holistic Understanding of the world, and of causality
-1. Get a Reductionist Education at MIT on top.
0. Understand The Problem well enough to perform Epistemic Reduction

Humans writing text on the internet have solved steps -3 through -1. We can use that text to bootstrap our machines to perform steps -2 through 2. This is a very neat trick, mostly because the humans are already done after filling the internet with text. The machines now do the reading.

AI allows the delegation of steps -2 to 2 to a machine. Humans still have to perform the last step:

5. Understand how to apply the result to the problem in the real world.
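The explicit steps can be caricatured in code. This is a hypothetical sketch with an invented model catalog; the hidden steps (-3 through 0, the Understanding that selects the right Model) happen outside the function, in a human or an AI:

```python
# Hypothetical sketch of the explicit Reductionist steps 1-4.
# The model catalog and its names are invented for illustration.
MODELS = {
    "ohms_law": lambda voltage, resistance: voltage / resistance,  # I = V / R
}

def solve_reduced_problem(model_name, measurements):
    model = MODELS[model_name]     # 1. Select the Model
    params = measurements          # 2-3. Measure values, use as parameters
    return model(*params)          # 4. Run the Model to compute a result

# Step 5 -- applying the result back to the real world -- stays with a human.
current = solve_reduced_problem("ohms_law", (12.0, 4.0))
print(current)  # 3.0 amps
```

Everything interesting is hidden in how `model_name` got chosen, which is exactly the point: the Epistemic Reduction precedes, and is not contained in, the computation.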

We can make some rough estimates of the effort that goes into these stages. In a novel problem solving situation we want to be able to tell whether involving an AI might help, or not. If your work requires you to choose between problem solving strategies, then Epistemology might come in handy.

There are hard limits to Reductionist Science. Reductionism has gotten a tainted reputation because people have applied it to problems outside of what Reductionism can comfortably do. The crux is that the world is Complex and Science cannot handle Complexity. The first step in a Reduction is to discard the irrelevant. What happens when everything is relevant and discarding anything at all would affect the solution in unknown ways?

This is the normal situation in this reality. After solving all the problems that Science can solve, we find that the remaining problems have one thing in common, which is totally unsurprising to an Epistemologist. We now realize Complexity is the main enemy of progress for any species.

Species level problems require Holistic solutions

I and a few others call this set of problem domains Bizarre Systems, to indicate that Scientific Methods cannot even get started: the global economy and stock market, cellular biology, drug interactions in a body, signaling in the human brain, generating language, folding proteins, allocating global resources, fighting a war, or governing a country.

We can find solutions in these domains only if we pragmatically loosen the standard requirements of Reductionist Science such as optimality, repeatability, and explainability. Or by radically restricting the problem domain to what our known domain Models can handle, which is extremely error prone in complex situations and often leads to unexpected consequences. Consider our models of the Economy. Some of the simpler ones treat the unemployment rate as an input and the population as rational.

We can understand our languages and fold our proteins only by using Neural Networks, which are not repeatable or necessarily explainable.

When building an AI, we are not trying to solve the problem of lung cancer. We are trying to build a machine that can learn to Understand anything, and then we feed it all the information we have about lung cancer. An LLM is a Model of Learning. That is as scientific as it gets, and even that is a stretch.

Science has no algorithms or even concepts for Understanding, Intuition, Reductionism, Holism, or even Abstraction. These can only be discussed in Epistemology. But insights from Epistemology can be used to guide AI algorithm development, and that is what I have been using them for since 2001.

We can imagine different kinds of Models of Learning, but these MetaModels are not yet well enough understood to guide progress. Instead, progress is made through computer-algorithm-based experiments that can be measured for effectiveness in a Reductionist fashion. But this is amazingly counterintuitive greenfield research with few active projects. The current winners are of course DNNs and their offspring, the Transformers, but they are not the only known strategy, and they are expensive enough to require GPUs.

Everything an intelligence learns is correlations. Understanding of the world is a web of correlations in the mind of a Scientist and in the weights of an LLM. They are comparable at the learning-algorithm level, which means LLMs Understand everything they know in the same way humans and other animals Understand.

A correlation in a brain is a neuron-to-neuron synaptic connection. This is fundamental. If your LLM algorithm cannot be implemented in neurons and synapses – simulated or not – then it is not bioplausible and is unlikely to be as efficient as biological brain architectures. It may work, if your algorithm somehow simulates synapses.

Does this impact me personally? Will The Great Flip impact my business? Can I use this knowledge to my advantage? What does it mean to adopt a Holistic Stance? Where can I find more information? Is anyone tracking the “Little Flips” that will, in the end, be components of The Great one? Well, I will track them, and I expect to be able to predict them.

[ There is so much to say about these things that I am making this the first introductory post in a series about The Great Flip. I will speculate about the near future of AI and its adoption in the world as seen through the eyes of an Epistemologist. The rest of this series will require a subscription, but other posts will continue to be free. ]

We all know by now that AI will completely transform the world. The Great Flip has started, and cannot be stopped. We can directly observe the advantages we get from using LLMs for tasks they do well, and which Science does poorly. And this is now happening. We are seeing a new wave of Scientific progress based on LLMs Understanding larger chunks of reality than humans can hold in their heads. And some of that progress is indeed resulting in better Models of the world. Better Science! Science is by no means dead. Just as we cannot predict whether the number of artists or programmers will increase or decrease because of AI, we cannot predict Science’s trajectory: we might see AI dominate Science to the point where we need fewer scientists, or we may see millions of amateur scientists in their garages, using common computers with coworker-level AI to advance Science breadth-first. Paid for by UBI, and not colleges or corporations.

Any Model of the world is incomplete and outdated the moment it is published. It’s the law. And when faced with the full complexity of our Mundane Reality, we must switch to Holistic methods because that’s what remains, when Science falls flat.

We can use AI. We can personally adopt the Holistic Stance and use our gut feel more often. Or we can build non-LLM specialized machines using a catalog of Holistic Methods which (to an Epistemologist) are the primitives that LLMs are made of.

In general, it means we will delegate more and more of our Understanding to more and more effective Understanders. To our AIs.

Should we invest in AI today? Would it help our business? Do our problems even require AI? Do our customers want it? Whom can we even ask?

Does your company have a Corporate Epistemologist?