Thunderous applause erupts across the California sky on August 27, 2017. In Hawthorne, at the headquarters of SpaceX – American billionaire Elon Musk’s aerospace company – a pod speeds through a Hyperloop test tube. The crowd’s cheers boom out from a giant screen placed before the tube. Beneath the images on the screen, a gauge reads 324 km/h. The pod’s designers at the Technical University of Munich have just set a new speed record in a competition to design the best futuristic train.

While sharing the event on Twitter, Tesla’s boss was also at work on another, quieter innovation. Without turning heads, his neurotechnology startup Neuralink, created in July 2016, raised $27 million from ten anonymous investors. The sum, which should grow to $100 million, will be used to “develop brain-machine interfaces to connect humans and computers,” the Neuralink site reads. Facebook and Kernel have set out to achieve the same goal.

Elon Musk, who is chiefly known for his exploits with self-driving cars and near-supersonic trains, doesn’t want to stick to transport: he wants to work on the human brain. He thinks humanity is already partway there. “They may not have realized it, but people are already cyborgs,” he says. Computers, phones and apps that provide quick answers to complicated questions have started the job. All that remains is to connect them to the cerebral cortex. Musk and a few others are hard at work.

Hands and feet

Next to the giant globe radiating Brazil’s colors in São Paulo’s Corinthians Arena, the official World Cup football looks tiny. On June 12, 2014, its arrival at the game went unnoticed between the opening ceremony’s performances and decorations. Many television cameras missed the symbolic kickoff, made from the edge of the pitch.

Before the players had even arrived, 29-year-old Juliano Pinto, unknown to the world of football, executed the Cup’s least impressive – but perhaps most promising – kick. He had been chosen to open the tournament. Using the strength of his brain alone, this paraplegic Brazilian set his exoskeleton in motion and moved his right leg. The ball rolled a few centimeters in front of him before a referee seized it and carried it towards the giant, spinning sphere.

Juliano Pinto wore a suit designed by the American neuroscientist Miguel Nicolelis. This “man-machine interface,” as he calls it, was connected to his brain by a helmet fitted with electrodes. As soon as the young man thought about moving, a command was sent in the form of an electrical signal, similar to those that allow nerve cells to communicate with each other. “We are able to read these signals and send them to devices,” says the Duke University professor.

Kick-off of the 2014 World Cup

Back in 2012, while football fans were tuned in to the Euro in Poland and Ukraine, American researchers at Cyberkinetics were showing how two people who had suffered strokes could control a robotic arm with their thoughts. It was symbolic of the union between man and machine.

The Cyberkinetics demonstration filled Elon Musk with hope. He didn’t see this development as a threat to jobs, as many businessmen feared, but as an expansion of his field of expertise. To the limbic system and cortex housed in our skulls, he wants to add a “tertiary digital layer.” In a sense, he says, “we already have one when Google gives an instant answer. It’s just the interface that will change.”

While much of our brain activity remains unknown, the motor cortex that controls our movements “is well mapped,” notes Tim Urban, an American blogger who attended Neuralink meetings. It is thus possible to identify the zone that controls any region of the body. Devices that do this “observe products derived from brain activity, and a computer interprets these signals,” explains Nicolas Roussel, research director at the French National Institute for Research in Computer Science and Automation (Inria). They can therefore be mistaken about our true intentions.

“If understanding the brain were a prerequisite to interacting with it, we would have some problems,” says Philip Sabes, a researcher at the University of California, San Francisco (UCSF) involved in the project. “But it’s possible to decode a lot of things without really understanding the dynamics at work. We do not need to solve all scientific problems to progress. Like human beings, science advances without revealing every detail behind its actions.”

Humanity 2.0

The very foundations of computing are shrouded in mystery, which even the recent discovery of 150 letters attributed to Alan Turing is not enough to dispel. Regarded as the father of the discipline, the British mathematician was the first to explore questions of artificial intelligence, in a 1950 article in the journal Mind titled “Computing Machinery and Intelligence.” The computer, he argued, may behave in ways more complex than its assembly suggests. The Turing test, named after him, challenges us to distinguish a human’s responses from those of a machine answering the same series of questions. Turing predicted that within 50 years, we wouldn’t be able to tell the two apart.

Other mathematicians besides Turing bet on exponential progress in computing. The American Stanislaw Ulam, for example, borrowed the concept of singularity, a point at which a mathematical function ceases to be defined, to think about the narrowing gap between human and machine intelligence. “The constant acceleration of technological progress and changes in the human way of life,” he wrote in 1958, “seems to bring us closer to a fundamental singularity in the evolutionary history of the species, beyond which human activity, as we know it, could not continue.” Our days could be numbered.

These rather pessimistic predictions found support from Gordon Moore, co-founder of Intel. In 1965, before the American microprocessor company had even been born, the engineer came up with an observation that would soon be treated as an iron law. He noted that the number of components that could be put on a chip at minimum cost had doubled every year since 1959, and correctly guessed it would continue to grow at the same pace for years to come. That’s how Moore’s Law was born.

Four years after the law’s publication, University of Washington researcher Eberhard Fetz managed to connect a neuron in a monkey’s brain to a dial. By learning to activate it when it wanted food, the animal demonstrated that neural activity can be used to direct an apparatus outside the body. In 2013, Fetz described in Frontiers in Neural Circuits how a primate partially paralyzed by a spinal cord injury controlled its arm through an external link.

With these developments, the hypothesis that machines would eventually replace humans entered the realm of possibility. The American engineer Ray Kurzweil published The Age of Intelligent Machines in 1990, in which he claims that humanity is about to give way to a new, smarter kind of being. At the same time, the computer scientist Douglas Lenat was attempting to equip his Cyc system with enough information about its environment that it would one day communicate naturally with humans. But he only succeeded in building a huge database that could not learn from experience.

In order to create a closer relationship between humans and artificial intelligence, some advocate going through the brain. In his 2005 book The Singularity Is Near, published in French as Humanité 2.0, Kurzweil states that “an emulation of the human brain, powered by an electronic system, would work much faster than our biological brains.” In 2011, the Russian billionaire Dmitry Itskov began designing a robot with a direct neural interface, which would serve as a receptacle for a human brain. It was a crazy idea inspired by the experiments of the British researcher Kevin Warwick, who in 2008 claimed to have created a robot controlled by cultured rat neurons.


Kevin “Captain Cyborg” Warwick

The gate to the brain

The British cybernetics professor Kevin Warwick is among the most famous prophets hailing the advent of the machine-man. In his 2002 book, I, Cyborg, he imagines himself merging with technology and looks for ways to accomplish his dream. He says there will be “two distinct species” in the future, and he chooses to belong to the more evolved kind. “Those who want to stay human and refuse to improve will have a serious handicap,” he writes. “They will be a subspecies and will be like the chimpanzees of the future.” Determined to rise above this second-class humanity, Warwick had an array of electrodes implanted in his own arm in 2002, allowing him to activate a remote robotic hand.

Called BrainGate, the technology Warwick used was further developed by the biotech company Cyberkinetics from 2003, in collaboration with the Department of Neuroscience at Brown University. It allowed a paralyzed man to play a video game using his brain alone, and in 2015 let a paralyzed woman control a flight simulator with her thoughts. The latter feat relied on tools provided by the Defense Advanced Research Projects Agency (DARPA), an agency of the US Department of Defense.

A year earlier, Warwick had made some noise by announcing that a chatbot named Eugene Goostman had passed the Turing test in an experiment he supervised. But the scientific community was quick to challenge his method. The conversations with the program had lasted only five minutes each, while a real experiment should run for “hours or even days,” says Murray Shanahan, a professor of cognitive robotics at Imperial College London.

The hundreds of “biohackers” who have implanted chips in Warwick’s wake often depend on obsolete tools, Nicolas Roussel says. For example, if an implant meant to unlock a door malfunctions, the person wearing it will be stuck. “Without some kind of physical action, how can I be sure that it has unlocked?” Roussel asks. Even with voice recognition or movement-detecting consoles, machines struggle to make sense of our actions. They don’t understand our involuntary gestures, either: if you were to accidentally pick up your phone in the middle of a video game with Kinect, you’d likely lose your game.

As for the brain-computer interfaces devised by Elon Musk, Facebook or Kernel, the pitfalls are even greater. The Neuralink team is counting on recording a million neurons “to create an interface that could really change the world,” says Tim Urban. But right now, the best systems can only record about 500 at a time. “We’re either far away from our goal, or very close, depending on the type of growth we see,” he adds. Musk and his scientists are rather optimistic: Moore’s Law allows them to hope for rapid progress.
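The gap Urban describes is easy to quantify. Here is a minimal back-of-the-envelope sketch in Python, assuming simple exponential growth; the two doubling periods used are illustrative assumptions, not figures from the article:

```python
import math

current = 500          # neurons recorded simultaneously today (per the article)
target = 1_000_000     # neurons Neuralink says a world-changing interface needs

# Number of doublings required to go from 500 to 1,000,000 neurons.
doublings = math.ceil(math.log2(target / current))
print(doublings)  # 11

# Time to get there under two hypothetical doubling periods: a
# Moore's-Law-like 2 years, and a much slower 7.4 years (both
# assumptions for illustration).
for years_per_doubling in (2.0, 7.4):
    print(f"{years_per_doubling} yr/doubling -> ~{doublings * years_per_doubling:.0f} years")
```

Eleven doublings mean roughly two decades under the fast assumption and most of a century under the slow one, which is exactly the uncertainty behind Urban’s “far away or very close.”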

But progress slowed down in early 2016, when major chip manufacturers announced they would no longer keep up the same pace of innovation as in the past. Neuralink, however, needs that progress to continue if it wants wireless, biocompatible hardware. “The least invasive technique would be a kind of hard stent passed through a femoral artery and deployed in the vascular system to interact with neurons,” Elon Musk explains.

Will this apparatus understand what drives us? “Current brain-computer interfaces do not detect your thoughts or mood,” warns Nicolas Roussel. They just pick up signals. How could they possibly understand a thought a person might not even be able to put into words? For the French researcher, we’ll just have to accept many errors. “We can’t stop thinking about something” even when we really want to, he says. Elon Musk, for his part, knows what he wants.