Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
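To make "taught itself by watching a human" concrete, here is a minimal sketch of behavioral cloning, the general family of techniques the article is describing. It is written in PyTorch with invented shapes and names, and it is an illustration of the idea, not Nvidia's actual system:

```python
# A minimal behavioral-cloning sketch (illustrative only, not Nvidia's
# system): a convolutional network maps raw camera frames directly to a
# steering command, trained on frames paired with a human's steering.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutions extract road features from a 3x66x200 camera frame.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Regression head: flattened features (48 x 6 x 23) -> one steering angle.
        self.head = nn.Sequential(
            nn.Linear(48 * 6 * 23, 100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in batch: 8 RGB frames plus the human driver's recorded steering.
frames = torch.randn(8, 3, 66, 200)
human_steering = torch.randn(8, 1)

# One training step: imitate the human. No hand-written driving rules
# appear anywhere; the "policy" is whatever the weights converge to.
optimizer.zero_grad()
loss = loss_fn(model(frames), human_steering)
loss.backward()
optimizer.step()
```

Notice that nothing in this pipeline produces an explanation. After training, the driving behavior lives in millions of learned weights rather than in rules anyone wrote down, which is why even the engineers who built such a system can struggle to say why it turned when it did.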
The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.
A thought-provoking article (remarkably similar in spirit to our article, "Self-Driving Cars - Second Level Changes") from Ben Evans of Andreessen Horowitz.
There are two foundational technology changes rolling through the car industry at the moment: electric and autonomy. Electric is happening right now, largely as a consequence of falling battery prices, while autonomy, or at least full autonomy, is a bit further off - perhaps 5-10 years, depending on how fast some pretty hard computer science problems get solved. Both of these will cycle into essentially the entire global stock of (today) around 1.1bn cars over a period of decades, subject to all sorts of variables, and both will completely remake the car industry and its suppliers, as well as parts of the tech industry.
Both electric and autonomy have profound consequences beyond the car industry itself. Half of global oil production today goes to gasoline, and removing that demand will have geopolitical as well as industrial consequences. Over a million people are killed in car accidents every year around the world, mostly due to human error, and in a fully autonomous world all of those (and many more injuries) will also go away.
However, it's also useful, and perhaps more challenging, to think about second- and third-order consequences. Moving to electric means much more than replacing the gas tank with a battery, and moving to autonomy means much more than ending accidents. Quite what those consequences would be is much harder to predict: as the saying goes, it was easy to predict mass car ownership but hard to predict Wal-mart, and the broader consequences of the move to electric and autonomy will come in some very widely spread industries, in complex interlocked ways. Still, we can at least point to where some of the changes might come. I can't tell you what will happen to car repairs, commercial real estate or buses - I'm not an expert on any of those, and neither can anyone who is - but I can suggest that something will happen, and probably something big. Hence, this post is not a description of what will happen, but of where it might, and why, with some links to further reading.
On a velvety March evening in Mandeville Canyon, high above the rest of Los Angeles, Norman Lear’s living room was jammed with powerful people eager to learn the secrets of longevity. When the symposium’s first speaker asked how many people there wanted to live to two hundred, if they could remain healthy, almost every hand went up. Understandably, then, the Moroccan phyllo chicken puffs weren’t going fast. The venture capitalists were keeping slim to maintain their imposing vitality, the scientists were keeping slim because they’d read—and in some cases done—the research on caloric restriction, and the Hollywood stars were keeping slim because of course.
When Liz Blackburn, who won a Nobel Prize for her work in genetics, took questions, Goldie Hawn, regal on a comfy sofa, purred, “I have a question about the mitochondria. I’ve been told about a molecule called glutathione that helps the health of the cell?” Glutathione is a powerful antioxidant that protects cells and their mitochondria, which provide energy; some in Hollywood call it “the God molecule.” But taken in excess it can muffle a number of bodily repair mechanisms, leading to liver and kidney problems or even the rapid and potentially fatal sloughing of your skin. Blackburn gently suggested that a varied, healthy diet was best, and that no single molecule was the answer to the puzzle of aging.
Yet the premise of the evening was that answers, and maybe even an encompassing solution, were just around the corner. The party was the kickoff event for the National Academy of Medicine’s Grand Challenge in Healthy Longevity, which will award at least twenty-five million dollars for breakthroughs in the field. Victor Dzau, the academy’s president, stood to acknowledge several of the scientists in the room. He praised their work with enzymes that help regulate aging; with teasing out genes that control life span in various dog breeds; and with a technique by which an old mouse is surgically connected to a young mouse, shares its blood, and within weeks becomes younger.
A groundbreaking study published in the journal Physical Review Letters (arXiv.org version) offers what its authors call ‘the first observational evidence that the Universe could be a complex hologram.’ The study, led by University of Waterloo Professor Niayesh Afshordi, could change our understanding of the Big Bang and of quantum gravity.
Prof. Afshordi and his colleagues from the UK, Canada and Italy, investigating irregularities in the Cosmic Microwave Background (CMB), the ‘afterglow’ of the Big Bang, found substantial evidence supporting a holographic explanation of the Universe.
“We are proposing using this holographic Universe, which is a very different model of the Big Bang than the popularly accepted one that relies on gravity and inflation,” Prof. Afshordi said.
“Each of these models makes distinct predictions that we can test as we refine our data and improve our theoretical understanding — all within the next five years.”
A holographic Universe, an idea first suggested in the 1990s, is one in which all the information that makes up our 3D ‘reality’ (plus time) is contained on a 2D surface at its boundaries.
“Imagine that everything you see, feel and hear in three dimensions — and your perception of time — in fact emanates from a flat two-dimensional field,” explained co-author Prof. Kostas Skenderis, from the University of Southampton, UK.
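Behind the quotes is a precise claim. The holographic principle rests on the observation that the maximum entropy (and hence information content) of a region of space scales with the area of its boundary, not its volume. The standard statement, which predates this study, is the Bekenstein-Hawking bound (here l_P is the Planck length):

```latex
% Holographic (Bekenstein--Hawking) bound: the maximum entropy of a
% region is set by its boundary area A, not its enclosed volume.
S_{\max} \;=\; \frac{k_B\, c^{3} A}{4 G \hbar} \;=\; \frac{k_B\, A}{4\, l_P^{2}}
```

Since volume grows faster than area, an area law means a two-dimensional surface can, in principle, encode everything that happens in the three-dimensional interior; that is the sense in which the Universe could be a ‘hologram.’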
Human enhancement is at least as old as human civilization. People have been trying to enhance their physical and mental capabilities for thousands of years, sometimes successfully – and sometimes with inconclusive, comic and even tragic results.
Up to this point in history, however, most biomedical interventions, whether successful or not, have attempted to restore something perceived to be deficient, such as vision, hearing or mobility. Even when these interventions have tried to improve on nature – say with anabolic steroids to stimulate muscle growth or drugs such as Ritalin to sharpen focus – the results have tended to be relatively modest and incremental.
But thanks to recent scientific developments in areas such as biotechnology, information technology and nanotechnology, humanity may be on the cusp of an enhancement revolution. In the next two or three decades, people may have the option to change themselves and their children in ways that, up to now, have existed largely in the minds of science fiction writers and creators of comic book superheroes.
Both advocates for and opponents of human enhancement spin a number of possible scenarios. Some talk about what might be called “humanity plus” – people who are still recognizably human, but much smarter, stronger and healthier. Others speak of “post-humanity,” and predict that dramatic advances in genetic engineering and machine technology may ultimately allow people to become conscious machines – not recognizably human, at least on the outside.
This enhancement revolution, if and when it comes, may well be prompted by ongoing efforts to aid people with disabilities and heal the sick. Indeed, science is already making rapid progress in new restorative and therapeutic technologies that could, in theory, have implications for human enhancement.
In her 30-year battle with breast cancer, Carmen Teixidor thought she had experienced every treatment doctors could hurl at the disease. She had endured multiple bouts of radiation and multiple courses of hormone therapy. She tried chemotherapy once, about 25 years ago, but it diminished the quality of her life so much that she’s tried to avoid it ever since. She had multiple surgeries, too, and she developed a dread of the moment when she came out of anesthesia and into consciousness, almost inevitably to hear bad news. That is how she first learned, in the summer of 1985, that after doctors had found a large tumor in her left breast they had felt compelled to perform a mastectomy.
“Absolute terror,” she recalls, staring down at the floor of her New York apartment. There’s never a good time for a cancer diagnosis, but for Teixidor, it came just as her career as an artist had begun to take off—two of her life-size sculptures had been acquired for the grounds of Rockefeller University, and she had recently completed a mural at Harlem Hospital. A slender woman now in her 70s, graying hair gathered in a youthful ponytail, she has dealt with one recurrence after another, submitting to medical tools from the scalpel to, most recently and perhaps most improbably, the molecule.
Teixidor barely noticed when, in the fall of 2013, her doctors at Memorial Sloan Kettering Cancer Center in New York analyzed a small snippet of her tumor and sequenced the DNA in her cancer cells. They did this, as an increasing number of academic cancer centers are doing, to look for telltale mutations that might drive malignant growth. Certain of these mutations are the targets of a new generation of specially designed drugs.
I acknowledge the Fermi Paradox has been discussed ad nauseam, which means to the point that continued discussion of a topic makes people want to vomit.
Knowing this, I naturally decided it would be a great idea to write about it.
As with most articles on OTA, there should be a rolling disclaimer: I'm not an astrophysicist or particularly qualified to write on this subject. But this is the internet. A place where the unqualified are the world's foremost authorities on the most complicated matters imaginable.
So trust me, I'm an expert.
In case you haven't heard of the Fermi Paradox, let me briefly bring you up to speed.
The guy who came up with the Fermi Paradox was Enrico Fermi. And let me tell you something, Enrico Fermi was basically a genius. Ok, I'll just say it.
He was a genius.
Born in Rome, Italy in 1901, Fermi was one of the world's most prolific physicists. He created the world's first nuclear reactor, won the 1938 Nobel Prize in physics, worked on the Manhattan Project, had the synthetic element "fermium" named after him, and was one of the few physicists in history that excelled in both theoretical and experimental physics. Fermi died in Chicago, Illinois, in 1954.
It only makes sense, then, that Fermi coined the Fermi Paradox over lunch in 1950, posing a question that has been studied and discussed countless times since. It was at this lunch that Fermi asked several colleagues, "Where is everybody?" referring to intelligent extraterrestrial life.
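Fermi never wrote the question down formally, but the expectation behind it was later made quantitative by the Drake equation (1961), the standard companion to the paradox. A compact statement, with the usual reading of each factor:

```latex
% Drake equation (Frank Drake, 1961): N is the expected number of
% civilizations in our galaxy whose signals we could detect.
%   R_*  rate of star formation        f_p  fraction of stars with planets
%   n_e  habitable planets per star    f_l  fraction where life arises
%   f_i  fraction that develops intelligence
%   f_c  fraction that emits detectable signals
%   L    lifetime of the detectable phase
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Plug in even fairly modest guesses for the factors and N tends to come out well above zero, which is exactly why Fermi's lunchtime question has teeth.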