By: John H. Richardson
IN AN ORDINARY hospital room in Los Angeles, a young woman named Lauren Dickerson waits for her chance to make history.
She’s 25 years old, a teacher’s assistant in a middle school, with warm eyes and computer cables emerging like futuristic dreadlocks from the bandages wrapped around her head. Three days earlier, a neurosurgeon drilled 11 holes through her skull, slid 11 wires the width of spaghetti strands into her brain, and connected the wires to a bank of computers. Now she’s caged in by bed rails, with plastic tubes snaking up her arm and medical monitors tracking her vital signs. She tries not to move.
The room is packed. As a film crew prepares to document the day’s events, two separate teams of specialists get ready to work—medical experts from an elite neuroscience center at the University of Southern California and scientists from a technology company called Kernel. The medical team is looking for a way to treat Dickerson’s seizures, which an elaborate regimen of epilepsy drugs controlled well enough until last year, when their effects began to dull. They’re going to use the wires to search Dickerson’s brain for the source of her seizures. The scientists from Kernel are there for a different reason: They work for Bryan Johnson, a 40-year-old tech entrepreneur who sold his business for $800 million and decided to pursue an insanely ambitious dream—he wants to take control of evolution and create a better human. He intends to do this by building a “neuroprosthesis,” a device that will allow us to learn faster, remember more, “coevolve” with artificial intelligence, unlock the secrets of telepathy, and maybe even connect into group minds. He’d also like to find a way to download skills such as martial arts, Matrix-style. And he wants to sell this invention at mass-market prices so it’s not an elite product for the rich.
Right now all he has is an algorithm on a hard drive. When he describes the neuroprosthesis to reporters and conference audiences, he often uses the media-friendly expression “a chip in the brain,” but he knows he’ll never sell a mass-market product that depends on drilling holes in people’s skulls. Instead, the algorithm will eventually connect to the brain through some variation of noninvasive interfaces being developed by scientists around the world, from tiny sensors that could be injected into the brain to genetically engineered neurons that can exchange data wirelessly with a hatlike receiver. All of these proposed interfaces are either pipe dreams or years in the future, so in the meantime he’s using the wires attached to Dickerson’s hippocampus to focus on an even bigger challenge: what you say to the brain once you’re connected to it.
That’s what the algorithm does. The wires embedded in Dickerson’s head will record the electrical signals that Dickerson’s neurons send to one another during a series of simple memory tests. The signals will then be uploaded onto a hard drive, where the algorithm will translate them into a digital code that can be analyzed and enhanced—or rewritten—with the goal of improving her memory. The algorithm will then translate the code back into electrical signals to be sent up into the brain. If it helps her spark a few images from the memories she was having when the data was gathered, the researchers will know the algorithm is working. Then they’ll try to do the same thing with memories that take place over a period of time, something nobody’s ever done before. If those two tests work, they’ll be on their way to deciphering the patterns and processes that create memories.
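The loop described above — record, decode, enhance, re-encode, stimulate — can be sketched in a few lines of Python. Everything here is a stand-in: the function names, the linear model, and the channel counts are illustrative assumptions, not Kernel’s actual algorithm, which has not been published.

```python
import numpy as np

def decode(signals, weights):
    """Translate raw electrical signals into a digital memory code.
    (Hypothetical linear decoder; the real translation step is the hard part.)"""
    return weights @ signals

def enhance(code, gain=1.5):
    """'Rewrite' the memory code -- here, crudely, by amplifying it."""
    return gain * code

def encode(code, weights):
    """Translate the digital code back into stimulation signals,
    using the pseudo-inverse of the decoding weights."""
    return np.linalg.pinv(weights) @ code

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 32))   # hypothetical decoder: 32 electrode channels -> 8-dim code
recorded = rng.standard_normal(32)       # one sample of recorded neural activity

# Full round trip: down from the brain, through the hard drive, back up the wires.
stimulus = encode(enhance(decode(recorded, weights)), weights)
print(stimulus.shape)  # (32,) -- one signal per electrode channel
```

The sketch makes the experiment’s logic concrete: the entire scientific question is whether any `decode`/`encode` pair exists that preserves enough of the memory’s structure for the re-injected signal to mean something to the brain.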
Although other scientists are using similar techniques on simpler problems, Johnson is the only person trying to build a commercial neurological product that would enhance memory. In a few minutes, he’s going to conduct the first human test of such a device. “It’s a historic day,” Johnson says. “I’m insanely excited about it.”
For the record, just in case this improbable experiment actually works, the date is January 30, 2017.
“Here’s the problem with artificial intelligence today,” says David Cox. Yes, it has gotten astonishingly good, from near-perfect facial recognition to driverless cars and world-champion Go-playing machines. And it’s true that some AI applications don’t even have to be programmed anymore: they’re based on architectures that allow them to learn from experience.
Yet there is still something clumsy and brute-force about it, says Cox, a neuroscientist at Harvard. “To build a dog detector, you need to show the program thousands of things that are dogs and thousands that aren’t dogs,” he says. “My daughter only had to see one dog”—and has happily pointed out puppies ever since. And the knowledge that today’s AI does manage to extract from all that data can be oddly fragile. Add some artful static to an image—noise that a human wouldn’t even notice—and the computer might just mistake a dog for a dumpster. That’s not good if people are using facial recognition for, say, security on smartphones (see “Is AI Riding a One-Trick Pony?”).
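The fragility Cox describes can be reproduced with a toy model. Below, a made-up linear “dog detector” is fooled by a per-pixel perturbation far smaller than the image’s own variation, because the noise is deliberately aligned against the model’s weights — the idea behind “fast gradient sign” attacks. The detector and data are invented for illustration; no real image model is this simple.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(1000)   # weights of a toy linear "dog detector"

# An "image" the detector confidently scores as a dog.
x = 0.1 * np.sign(w) + 0.01 * rng.standard_normal(1000)

def is_dog(image):
    return w @ image > 0

eps = 0.2                        # tiny per-pixel change -- "artful static"
noise = -eps * np.sign(w)        # aligned against the weights, pixel by pixel

print(is_dog(x))          # True
print(is_dog(x + noise))  # False: same picture to a human, a dumpster to the model
```

Each pixel moves by only 0.2, yet because every one of those thousand tiny nudges pushes the score in the same direction, the verdict flips — which is why adversarial noise a human can’t see is enough to break a classifier.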
A good rocket launch site has a few important characteristics. An unpopulated patch of land near an ocean is preferable, so no one gets showered with wayward bits of flaming metal. It’s also nice if it’s on the equator—like all spheres rotating on an axis, the Earth spins fastest in the middle, which provides rocket boosters with extra oomph. In other words, the best sites tend to be in remote, tropical locations. That such places are also often among the world’s poorest gives many launches a counterintuitive feel: billions of dollars in futuristic machinery rising up over rainforests and shantytowns.
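That “extra oomph” is easy to quantify: the ground itself moves eastward at roughly 2πR/T, about 465 meters per second at the equator, falling off with the cosine of latitude. A back-of-envelope sketch, using standard values for Earth’s radius and rotation period:

```python
import math

EARTH_RADIUS_M = 6_378_137   # equatorial radius
SIDEREAL_DAY_S = 86_164      # one rotation relative to the stars

def surface_speed(latitude_deg):
    """Eastward speed of the ground, in m/s, at a given latitude."""
    circumference = 2 * math.pi * EARTH_RADIUS_M
    return (circumference / SIDEREAL_DAY_S) * math.cos(math.radians(latitude_deg))

print(round(surface_speed(0)))      # ~465 m/s of free boost at the equator
print(round(surface_speed(13.7)))   # ~452 m/s at Sriharikota's latitude
print(round(surface_speed(45.6)))   # ~325 m/s at Baikonur, in Kazakhstan
```

A rocket launched eastward gets that surface speed for free, so every degree of latitude closer to the equator shaves propellant off the climb to orbital velocity.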
So it was this February in Sriharikota, an island off India’s southeast coast, a couple of hours north of Chennai. To reach Sriharikota, which on maps looks like a 17-mile-long snake feasting on a 5-mile-wide goat, you cruise along a chaotic highway where semis vie for right of way with women carrying water buckets on their heads. Eventually you reach a causeway that, during the dry season, is flanked by marshlands, salt ponds, and mud. At the end of this road is the Satish Dhawan Space Centre.
The facility, which opened in 1971 and was named for an Indian rocket scientist, looks more like a defunct disco than a gateway to tomorrow. At the check-in area, splotches of concrete peek through yellow-painted walls where photos of rockets and renowned engineers hang haphazardly. Beneath bulbs dangling from exposed wires, a team of friendly barefoot officials takes your information, then sends you outside to a mango-tree-shaded security gate. The police officers in olive-green uniforms and dark blue berets take no notice of the occasional white cow lumbering through the gate.
From there you reach a central compound of pastel-colored offices and living quarters, surrounded by a jungle of casuarina, eucalyptus, and palm trees. A ways away, at the water’s edge, is the launch pad. More cows collect outside the entry gate, while monkeys chatter in the trees.
At 9:28 a.m. on Feb. 15, these animals watched anxiously as an Indian rocket lifted off, roaring through the hot, sticky air. Its payload consisted of 104 satellites, dwarfing the previous world record of 37 set by Russia in 2014. The largest of them weighed 1,500 pounds and was designed to map India’s infrastructure and monitor urban and rural development. Nestled alongside were around a dozen smaller satellites from universities, startups, and research groups. What made the launch a record were the 88 shoebox-size “Dove” satellites built by Planet Labs Inc., a startup in San Francisco.
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
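The architecture described above — raw sensor data in, driving commands out, with no hand-written rules in between — can be caricatured in a few lines. The tiny network below is randomly initialized (Nvidia’s was trained on hours of human driving, and was vastly larger), but it makes the article’s point concrete: the “reason” for any output is smeared across tens of thousands of opaque weights.

```python
import numpy as np

rng = np.random.default_rng(2)

# A miniature stand-in for the learned driving network: pixels -> hidden layer -> controls.
W1 = rng.standard_normal((64, 1024)) * 0.05   # 32x32 camera frame, flattened
W2 = rng.standard_normal((3, 64)) * 0.05      # three outputs: steering, throttle, brake

def drive(frame):
    hidden = np.maximum(0, W1 @ frame.ravel())   # ReLU activations
    steering, throttle, brake = W2 @ hidden
    return steering, throttle, brake

frame = rng.random((32, 32))                     # one camera frame
steering, throttle, brake = drive(frame)
# Why these particular numbers? The answer is distributed across 65,728 weights.
# There is no single rule to point to -- which is the interpretability problem.
```

Nothing in `drive` corresponds to a concept like “tree” or “green light”; asking it to explain a decision means asking for a human-readable summary of a matrix multiplication, and no general way to produce one is known.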
The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.