[INTERVIEW] Since its founding, OpenAI has tried to push the limits of artificial intelligence while leaving the fruits of its research open to the public. We met with its technical director.

This story was published in partnership with Ulyces.co.

Greg Brockman doesn’t wear a coat or a suit. On a November day in Lisbon, his t-shirt matches his stripped-down style of speech. So many stories are published these days about artificial intelligence (AI) that the specialist in smart machines was invited to the Web Summit to address a potential “hype” around the subject. It’s a curious thesis for the technical director of OpenAI, a company founded in 2015 by Elon Musk and Sam Altman to prevent the technology from benefiting only a small group of experts. That danger has been underestimated, he says in an interview on the sidelines of the conference.

Brockman keeps his head shaved and his speech nearly as stark. Each sentence is delivered with the urgency he believes we need in order to avoid the risks of AI. At rare moments he hesitates, gestures, then resumes. Progress is so rapid now that time is running short: the time for sharing the benefits of robots and algorithms is now. For that matter, if AI were really all hype, would one of its foremost experts prefer the t-shirt to the suit?

When it comes to AI, progress is really fast. Three years after OpenAI’s creation, are you still trying to democratize access to it?

Yes. That said, we’re less interested in the kind of AI being developed today than in what we call highly autonomous systems, which do work that creates a great deal of added value. OpenAI builds these kinds of systems in order to ensure that they benefit the greatest number of people. This technology, which will be the most decisive ever created, is appearing at a moment when inequality is growing. We need to make sure everyone benefits from it, not just one company or one country.

When did you first get into AI?

It was sort of by chance. In high school, I spent a lot of time on math and chemistry competitions. I had the idea of writing a book combining the two disciplines. After high school, I started sketching the outline while I was taking a semester of classes in Russia, a kind of sabbatical year. After I’d written about 100 pages, I sent the manuscript to a friend. He told me, “You don’t have a degree. Nobody’s going to want to publish this.” Since finding the money to publish it myself was difficult, I created a website instead.

Greg Brockman. Credits: OpenAI

Programming languages fascinated me. In mathematics, you try to solve a problem with the available elements, and the answer interests five people. The solutions you come up with in computing, on the other hand, can help a lot of people: what existed in your head suddenly exists in real life. Hooked on development, I came across an article from the 1950s on the Turing test. According to its author, machines couldn’t yet think, in the sense of passing for a human being in conversation, but it might become possible within the following 50 years. That was the first time I’d heard of artificial intelligence.

After a year and a half at Harvard, I transferred to the Massachusetts Institute of Technology (MIT). I knew how to solve the problems I was given, but I needed the right people to help me progress in development. When I did meet them, I only wanted to work with them, and I quit MIT.

In a 2016 article, you said that “deep learning” is hard to define. Have you found the right definition to explain what it’s about?

It’s still a complicated question. On technology sites, you see a bunch of articles on deep learning: it’s supposed to fix such and such a problem, but by the end you never know what it is exactly. At its heart, deep learning lets a machine learn a series of rules in order to solve a problem. Learning happens in steps, in a circular way. That might seem simple, but you have to configure the system perfectly for those steps to be coherent.
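A minimal sketch of that circular learning process, in Python (an illustration of the general idea, not OpenAI’s code; the toy “rule” y = 3x + 1 is invented for the example):

```python
# Toy illustration of learning in steps: a model starts out knowing
# nothing and repeatedly nudges its parameters to shrink its error.
# Real deep learning stacks many layers of parameters, but the loop
# has the same circular shape.

data = [(x, 3 * x + 1) for x in range(10)]  # hidden rule to learn: y = 3x + 1

w, b = 0.0, 0.0        # parameters: start knowing nothing
learning_rate = 0.01   # how big each corrective step is

for step in range(1000):                # learning happens in steps...
    for x, y in data:                   # ...cycling over the examples
        error = (w * x + b) - y         # how wrong is the current guess?
        w -= learning_rate * error * x  # nudge parameters to reduce the error
        b -= learning_rate * error

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # about y = 3.00x + 1.00
```

Configuring the system “perfectly,” as he puts it, is hidden in choices like the learning rate: too large and the steps never become coherent, too small and learning stalls.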

For a lot of people, AI is still a black box. How do you prevent the few who really understand it from being the only ones profiting from it?

In the fields using AI, databases are crucial: you need a lot of information to use the technology as efficiently as possible. In other words, big companies, especially those operating on the internet, are well placed. From there, the possible combinations are endless: they can cross-reference consumer attributes from online profiles, or link them with data gathered elsewhere. AI is a supplement to resources more than a resource in itself.


Again, if you have the data and the expertise, you can build an efficient product. The sad truth is, I don’t see the same opportunities for startups with this technology as there were when the internet was first emerging. In reality, the huge amount of data on the web mostly benefits large companies.

The Cambridge Analytica scandal showed what could happen. Is there a chance that could get worse?

Autonomous systems are going to get more creative. They’re going to become better than human beings at launching companies and at generating ideas for films and music. You can also imagine networks of robot doctors sharing their expertise on the cases they’ve studied, or even autonomous robot scientists that have absorbed all the literature in their field.

These applications are interesting, but they imply a very different economic model from the one we use today. There are a lot of questions behind them, such as: what’s the value of an AI system, and of the people who manage it? Where will the big profits go? Will they be reserved for a small number of technicians, or will they be spread out more or less equitably? The organizations developing these kinds of technologies are going to have to answer these questions.

What will become more important: the data or the technical aspect?

Data is becoming less and less decisive, because we’re starting to understand how to use other things besides personal information. We recently built an AI model that learned by reading dozens of books, and it showed it was capable of understanding how language functions. That’s something we’ve never seen before, something we’ve dreamed about. It’s kind of like a baby learning how the world works by starting from concepts. We’re not completely there yet, but the initial results seem promising.
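A toy sketch of that idea in Python (my illustration, not the model he describes, which relies on neural networks at a far larger scale): a program that “reads” raw text and, from the text alone, learns to predict which word comes next.

```python
from collections import Counter, defaultdict

# Toy sketch: learn something about language purely by reading.
# We count which word tends to follow which (a bigram model); the
# training signal is the same one used at scale: predict the next
# word from the words that came before.

text = "the cat sat on the mat . the dog sat on the mat ."
words = text.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):  # "read" the text once
    follows[prev][nxt] += 1              # remember what followed what

def predict(prev):
    """Most likely next word after `prev`, learned from reading alone."""
    return follows[prev].most_common(1)[0][0]

print(predict("sat"))  # -> "on"
print(predict("the"))  # -> "mat" (it followed "the" most often)
```

The point is that no one labels anything: the text itself supplies both the questions and the answers, which is why ordinary books can substitute for personal data.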

So the use of data is evolving, which is a good thing in terms of accessibility. But there’s a caveat: the computing power needed to drive these models is rapidly increasing, and it’s going to keep growing exponentially over the next five years. That’s too fast for economies of scale to bring down the cost of research. Only those who can take on building these massive systems will be getting state-of-the-art results.

Science-fiction works are stocked with robots taking over the world. Is that a fantasy, or do we need to seriously think about that?

It’s perfectly healthy to think about the risks. Unfortunately, we don’t see a lot of deep reflection in the mainstream media on the models to come. For example, what’s going to happen if you build a high-powered AI system but it only works in one place? Is that the world we want to live in? There are huge consequences that don’t necessarily appear in science-fiction films but that we should still study closely. The spectrum of risks is wide.


During the Manhattan Project, which resulted in the first atomic bomb, the nuclear physicist Edward Teller wondered what would happen if a bomb ignited the atmosphere. According to his calculations, it was possible. Fortunately it didn’t happen, but I’m happy to see the question was at least considered. The same applies to artificial intelligence. If we build these extremely powerful systems, we should also use our technical progress to educate the world about what might be coming. When you develop AI like we do, systems that can outperform the best humans months after being completely incompetent, using cunning strategies no one taught them, you have to expect the same sneaky behavior from other forms of AI.

Neural networks will solve whatever problem you give them. You just have to program them the right way, and that’s tough to do, but once it’s done, competence improves really quickly. We have to understand these risks before we deploy the technology, but that discussion hasn’t happened yet in the mainstream media.

If God exists, is it an AI?

That question goes beyond my field of expertise.