This story is published on Outthere in partnership with Ulyces.

Karl Friston is the most cited neuroscientist in the world; the impact of his publications may be twice that of even Albert Einstein's. The key to his success? His free energy principle, considered by some to be the most revolutionary multi-disciplinary theory since Charles Darwin's theory of evolution. From mathematics to the treatment of mental illness to artificial intelligence (AI), his principle of free energy minimization could benefit a wide array of fields.

According to the British scientist, our brain is a prediction-making machine, partly to maintain itself, but even more to adapt to a constantly changing environment. That idea could inspire AIs that develop a genuine understanding of the world around them. Within the next 10 years, Friston expects to see conscious, curious artificial agents capable of learning on their own. We met the neuroscientist whose theories could give rise to the robots of tomorrow.

Professor Karl Friston. Credits: British Council

What is the free energy principle?

Free energy is the general theory that, in one way or another, we need to minimize surprise as we adapt to change. One measure of this surprise is prediction error: the difference between what we observe and what we expected to observe. That leads to particular schemes for minimizing free energy, such as predictive coding, which is one of the most popular ways of understanding the free energy principle in terms of predictive processing.

To put it succinctly: the brain is the product of an optimization process, whose objective is to ensure the proper integration of the subject into its environment.
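In predictive-coding terms, that optimization can be sketched in a few lines. The toy model below is an illustration of the idea rather than anything from Friston's own work: a belief about a hidden cause is updated by gradient descent on free energy, here just a sum of precision-weighted prediction errors; all names and numbers are invented for the example.

```python
# A toy predictive-coding loop (illustrative only; the model, numbers and
# names are invented for this example, not taken from Friston's work).
prior_mean, prior_var = 0.0, 1.0   # prior belief p(s) over a hidden cause s
obs_var = 0.5                      # variance of the sensory noise
def g(s): return 2.0 * s           # assumed mapping: hidden cause -> observation

def free_energy(mu, o):
    """Free energy of a point belief mu, up to additive constants:
    precision-weighted sensory prediction error plus deviation from the prior."""
    return 0.5 * ((o - g(mu)) ** 2 / obs_var
                  + (mu - prior_mean) ** 2 / prior_var)

def perceive(o, lr=0.05, steps=200):
    """Perception as gradient descent on free energy: adjust the belief mu
    until the observation o is as unsurprising as possible."""
    mu = prior_mean
    for _ in range(steps):
        grad = (-(o - g(mu)) * 2.0 / obs_var      # 2.0 = dg/ds, chain rule
                + (mu - prior_mean) / prior_var)
        mu -= lr * grad
    return mu

o = 3.0                            # a surprising observation
mu = perceive(o)
print(f"belief mu = {mu:.3f}, residual free energy = {free_energy(mu, o):.3f}")
```

The belief settles where the pull of the data and the pull of the prior balance: the least surprising explanation of the sensation.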

Are we using the brain’s full potential?

Yes, unlike what we see in some sci-fi works, we're using 100% of it, and there's a mathematical reason for that. A system navigating successfully through the real world tries to minimize its free energy or, more simply, it's looking for evidence of its own existence. Mathematically, that evidence breaks down into two components: accuracy and complexity. Good adaptive fit is accuracy minus complexity. That shows that in everything we do, we're trying to provide accurate explanations for our sensory exchanges with the world, with the least complexity possible.

We're always trying to predict the world as efficiently as possible. The cost of complexity, mathematically, is the degree to which I have to change my beliefs between before and after being confronted with new evidence, new data. If I do that with maximum efficiency, I displace my beliefs as little as possible.
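In standard variational notation (the interview itself uses none), this trade-off is the usual evidence decomposition: the bound being maximized splits into an accuracy term and a complexity term, the latter being exactly the divergence between beliefs after and before the data.

```latex
% Evidence lower bound: accuracy minus complexity (standard notation, assumed here).
\underbrace{\mathbb{E}_{q(s)}\big[\ln p(o \mid s)\big]}_{\text{accuracy}}
\;-\;
\underbrace{D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s)\,\big]}_{\text{complexity}}
\;\le\; \ln p(o)
```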

The brain does everything it can to provide an accurate explanation of its sensory exchanges with the world, using the fewest neural elements, the least energy and the fewest connections possible. One could say that it uses all available resources as efficiently as possible. For a computer, that would mean operating on as little electricity as possible. This idea of minimizing complexity, of finding the simplest explanation for things, also has a very practical implication for artificial intelligence design.

How is this principle applied in developing artificial intelligence?

At the moment, we're too focused on optimizing machines to do the things we consider valuable, without asking what those qualities really are. But if you assume that these qualities boil down to resolving uncertainty, you realize that all a robot needs to do is understand you. When you apply the free energy principle to what an AI ideally should be, you realize that it needs to be incredibly curious about you and me. It should do whatever it takes to understand us as intimately as possible, and in that way it will come to understand itself.
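One way to make that "curious intelligence" concrete, though this is a gloss rather than anything Friston specifies, is to score candidate actions by their expected information gain: how much, on average, the answer would reduce the agent's uncertainty about the person. All the moods, questions and probabilities below are invented for illustration.

```python
# Curiosity as expected information gain (hypothetical sketch; all numbers
# and labels are made up for illustration).
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# Current belief about the human's mood: happy / neutral / upset.
belief = [0.4, 0.4, 0.2]

# p(answer | mood) for two candidate questions: rows = moods, cols = answers.
questions = {
    "How was your day?": [[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]],
    "What time is it?":  [[0.34, 0.33, 0.33]] * 3,  # answers barely depend on mood
}

def expected_info_gain(likelihood, prior):
    """H(prior) minus the expected posterior entropy after hearing the answer."""
    gain = entropy(prior)
    for a in range(len(likelihood[0])):
        joint = [likelihood[m][a] * prior[m] for m in range(len(prior))]
        p_a = sum(joint)                       # marginal probability of answer a
        posterior = [j / p_a for j in joint]   # updated belief given answer a
        gain -= p_a * entropy(posterior)
    return gain

for q, lik in questions.items():
    print(f"{q!r}: expected information gain = {expected_info_gain(lik, belief):.3f}")
```

A curious agent, in this sense, asks the first question and not the second: the first is expected to resolve uncertainty about the person, the second resolves none.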

I don't see any fundamental obstacle preventing us from developing an artificial consciousness over the next 10 years. Pretty soon we'll see the next wave of advances in autonomous learning, robotics and artificial intelligence. Silicon creatures will be created and domesticated to serve as pets. We'll be able to project ourselves through them, and we'll come to appreciate our interactions with them. They'll seem to have exactly the same mental states as us, the same intentions.

When did you first get into artificial intelligence?

When I was 15 years old, I was one of the first teenagers to use a computer that was supposed to be capable of giving career advice. We had to answer a bunch of questions about our skills and the kinds of things we liked to do. All this data was fed into a computer in Liverpool and analyzed over several hours, which finally revealed the career chosen for us.

It turned out I was supposed to be a TV antenna rigger, and I believe that was based on the fact that I’d said I liked electrical mechanics and the outdoors. So the computer thought I had to be a TV antenna maintenance person!

Credits: British Council

Later on, I got different career advice. More than anything, I wanted to become a mathematical psychologist, but at the time the terminology for what I wanted to do didn't really exist yet. My human career advisor didn't really understand what I meant; he thought I meant psychiatrist. So I went into medicine when I should have studied psychology. It was a happy accident, which gave me eight years of study. Much later I came across "mathematical psychology," and I realized it was the discipline that combined everything that kept me active, happy and eager to improve myself.

Where does your interest in mathematics come from?

My father was a civil engineer. He was mathematically minded and always coming up with interesting ways to use his brain, which included complicated and stimulating physics books… which he then read to me. Among them was a book on general relativity, Arthur Eddington's Space, Time and Gravitation, which he read to me when I was a pre-teen. I later found out he'd first read it to my poor mother while she was preparing for her nursing exams, before he'd even confessed his love for her. She must have been around 16 or 18 years old.

My mother, for her part, was passionate about psychology and the processes that made it possible to figure out where people’s problems came from. So my father had highly technical mathematics books, and my mother had mainstream and academic psychology books. I grew up surrounded by math from my father and nosing through books on my mother’s shelf, which gave me a perfect mix of math and psychology. For me, math was always a game with very strict rules.

That explains why you work in your free time.

More or less. On Sunday afternoons, when I'm not with my family, I work on an artificial agent, a synthetic silicon subject I can converse with. I'm doing that to try to understand the characteristics of the fundamental architecture, the generative model (a set of interrelated variables), necessary for there to be an exchange. That leads us to wonder how these models will develop in the future, particularly from a language and communication point of view, and how far those generative models can be pushed.
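For a flavor of what such a generative model might look like, here is a deliberately tiny, hypothetical one, not Friston's actual agent: hidden states stand for the interlocutor's intent, observations for the words heard, and the agent updates its beliefs by Bayesian filtering as the conversation unfolds.

```python
# A toy generative model of a conversation partner (hypothetical sketch):
# hidden states = the speaker's intent, observations = the words heard.
import numpy as np

A = np.array([[0.8, 0.1, 0.1],   # p(word | intent): rows = words, cols = intents
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
B = np.array([[0.7, 0.2, 0.1],   # p(next intent | intent): how intents drift
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
D = np.array([0.5, 0.3, 0.2])    # prior over the initial intent

def infer(words):
    """Update beliefs about the hidden intent as each word arrives."""
    belief = D.copy()
    for w in words:
        belief = A[w] * belief   # weight each intent by how well it predicts w
        belief /= belief.sum()   # renormalize to a probability distribution
        belief = B @ belief      # roll beliefs forward to the next exchange
    return belief

print(infer([0, 1, 1]))          # posterior over intents after three words
```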

If we cannot minimize free energy, we disappear.

That also poses the question of the physical interface: how do you get voice recognition and voice prediction to interface with these deep generative models, these deep conceptions of the world? There can be no language without comprehension, so there has to be a shared narrative. A scene, a universe, has to be present in this agent's brain, but also in yours, for the exchange to work. The notion of empathy, and all the constructions that animate the creatures we are, are linked to the fact that 99% of our brain is dedicated to understanding others. Artificial intelligences will have to be made the same way.

Do we have to use the human being as the model?

For an artificial intelligence to survive in this world, it’ll have to be very similar to us. If the imperative of existence is simply to minimize uncertainties by learning everything we can about the nature of our world, we need to model other beings. The same thing applies to artificial devices: there will be no eloquent communication between an artificial agent and a human agent unless they’re sufficiently similar.

That means the model for this artificial agent can also serve as a model of myself, and vice versa. Otherwise, there's no communication, no empathy, no shared history. Without a common narrative, AIs will disappear and die out. Nearly all of my model is dedicated to resolving uncertainty about others and what they're going to do. That ranges from walking down the street and guessing which way pedestrians and cyclists are heading, to anticipating the emotional responses of someone I love. We're constantly trying to gather evidence to resolve our uncertainty about what's happening outside.

Japanese robots, or commercial technologies like Siri, that you can't really relate to will end up dying out, because they don't really resemble us. They lack real coherence, compassion and human qualities. For AIs to survive, they'll need to be similar to us in shape, beliefs, attitudes and structure.

Credits: Apple/Ulyces

If we can't manage to minimize free energy, we'll literally disappear, because the boundaries between what defines us and what defines the world we evolve in will disintegrate. For example, maintaining a healthy internal body temperature of 37°C is a prediction we constantly act to fulfil. If we can't do it, we die of hypothermia or hyperthermia. It's the same for artificial intelligences: they'll die out if their models don't adapt to the world around them.
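The temperature example translates almost directly into code. The sketch below is a caricature with invented numbers: the prior "I am at 37°C" is held fixed, and action (shivering, sweating) changes the body rather than the belief until the prediction error is cancelled.

```python
# Acting to fulfil a prediction (caricature with invented numbers): the prior
# "I am at 37 °C" stays fixed; action changes the body, not the belief.
set_point = 37.0      # the fixed prior: expected body temperature
temperature = 34.0    # actual temperature: a hypothermic starting state
gain = 0.3            # how strongly each action corrects the error

steps = 0
while abs(set_point - temperature) > 0.05:
    error = set_point - temperature    # prediction error (surprise)
    temperature += gain * error        # act (shiver, vasoconstrict) to cancel it
    steps += 1

print(f"settled at {temperature:.2f} °C after {steps} actions")
```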