The Replicant Dilemma

What are androids for? I confess it's a strange question. AI researchers chase after human-like intelligence. A long article in Wired describes a Japanese researcher's multi-decade career of building human-like androids with ever higher degrees of verisimilitude (link below). But why should we try to build something human-like? After all, we have a great abundance of humans on earth. If you require something with human-like intelligence, use a human. If you require something with human-like empathy and kindness, use a human.

We are told that humans will be more likely to accept robots if they resemble humans, but that argument doesn't stand up to even cursory scrutiny. Do we accept vacuum cleaners because they resemble humans? Or smartphones? Or coffee makers? Our only requirement for the shape of technology seems to be that it is functional.

So why are we trying to bridge the uncanny valley? Why try to pass the Turing test? Perhaps the difficulty of recreating our own humanity drives us on. Or perhaps the (mostly male) researchers are attempting to compensate for their own modest role in the creation of human beings. But why do universities and technology companies and governments continue to pour money and time and effort into recreating humanity? Humanity is our most abundant natural resource. There is no need for an alternative.

Why does this matter?

Technology should have a purpose. Building things merely because they are hard to build is a lousy reason. For artificial intelligence research, recreating human intelligence is purposeless. We have human intelligence at our disposal. It makes much more sense to figure out what human intelligence is bad at and create tools that compensate for those weaknesses.

Creating androids is also purposeless. Someone planted the false idea in our heads that we require technology to be frictionless, so we try to think of ways to introduce technology without requiring any changes in the way we live and work. Where did this bizarre idea come from? The smartphone is not frictionless. Nor is the personal computer. Nor the typewriter, the printing press, or indoor plumbing. The point is to offer technology that is so useful at solving a problem that we accommodate ourselves to it. Friction, androids, and Turing tests are all variants on a fundamental error. Stop hiding the innovation. Technology should be useful and obvious. Androids are neither.

In a nutshell: "Human-like" is a dead-end goal for technology.

Read More

Different (better?) Than Human

I cannot beat my sister at chess. I have been trying for years. I come up with elaborate ways to deprive her of her best pieces and maneuver her king into checkmate. It never works. You see, my sister knows how I play. She sits and waits to see what kind of elaborate system I will create. Once she sees it take shape, she intentionally does something unexpected and illogical. When that happens, my system collapses and she mops the floor with me.

Admittedly, I am not a good chess player. But my failure reflects a weakness in how people play games. We are blinded by ego and paralyzed by failures of strategy. We try to win the game, rather than win the next move.

DeepMind's first Go-playing AI was trained on records of human games. One part of the system learned to predict a human opponent's most probable next move, and another was trained to predict the winner of the game after each move. Both were built on human play, and the combined system eventually beat the top Go player in the world.

But DeepMind went back to the drawing board. They created a new system, AlphaGo Zero, that required no human training at all. Two copies of the program were taught only the rules of Go, and then the two versions played each other: hundreds of games, then thousands. Each move was initially random, but from the results of all those millions of moves the system learned what led to success and what led to failure. The new system is so much improved that it beat the old (human-beating) system 100 games out of 100.

The interesting thing is how the new system plays Go. Its initial moves look like the moves Go masters typically make at the outset of a game. The endgame also resembles that of human players. But in the middle of the game, there is nothing like a strategy. The system seems to focus on merely edging ahead of its opponent, even if it loses a bit on a given move. No ego and no strategy. Just tiny incremental gains.
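To make the self-play idea concrete, here is a minimal, hypothetical sketch in Python. It is not DeepMind's implementation or anything close to it: it swaps Go for tic-tac-toe, replaces the neural networks with a simple table of position values, and nudges those values toward plain win/loss outcomes, purely to illustrate how initially random play against oneself can converge on useful moves. All names here (choose_move, self_play_game, train) are invented for this example.

```python
# Toy self-play sketch (illustrative only, NOT DeepMind's method):
# an agent plays tic-tac-toe against a copy of itself, starting from
# random moves, and learns a table of position values from game results.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# values[(position, player)] = learned estimate of how good it was for
# `player` to have just created `position`.
values = defaultdict(float)

def choose_move(board, player, epsilon):
    """Mostly greedy with respect to learned values; random with prob epsilon."""
    moves = legal_moves(board)
    if random.random() < epsilon:          # exploration: initially pure chance
        return random.choice(moves)
    def score(m):
        nxt = board[:m] + player + board[m + 1:]
        return values.get((nxt, player), 0.0)
    return max(moves, key=score)

def self_play_game(epsilon):
    """Both sides are driven by the same value table: the agent plays itself."""
    board, player = "." * 9, "X"
    history = []                            # positions visited, and by whom
    while True:
        move = choose_move(board, player, epsilon)
        board = board[:move] + player + board[move + 1:]
        history.append((board, player))
        win = winner(board)
        if win or not legal_moves(board):
            return history, win
        player = "O" if player == "X" else "X"

def train(games=20000, lr=0.1):
    for g in range(games):
        # anneal from fully random play toward mostly greedy play
        epsilon = max(0.05, 1.0 - g / games)
        history, win = self_play_game(epsilon)
        for pos, player in history:
            # +1 if this player went on to win, -1 if they lost, 0 for a draw;
            # nudge the stored value a small step toward that outcome
            target = 0.0 if win is None else (1.0 if player == win else -1.0)
            values[(pos, player)] += lr * (target - values[(pos, player)])

if __name__ == "__main__":
    train()
    print(f"learned values for {len(values)} positions")
```

Even this toy version shows the pattern described above: no opening theory and no grand plan, just a large number of small adjustments toward whatever actually led to a win.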
Why does this matter?

People are a little too in love with their own intelligence. Once a person comes up with an idea, they will frequently stick to it long after repeated failure should have led them to abandon it. Bad strategies. Bad relationships. Gambling. All depend on an egotistical desire to "stay the course."

Machine learning is (fortunately) without ego. Unlike me in my repeated failures at chess, it doesn't go for the big win. DeepMind's AlphaGo Zero is not more intelligent than human beings. It simply isn't blinded by ego, and it doesn't need patience to grind out a slow, incremental win.

In a nutshell: A weakness of human intelligence is ego. This is where machine learning can add value.

Read More

Ugly Sunglasses Unpopular

Longtime readers may be familiar with my position on Google Glass and Snap Inc.'s Spectacles. When they launched, I predicted both products would fail because people do not enjoy putting ugly things on their faces. That doesn't make me Nostradamus.

News came out this week that Snap was unprepared for the failure of its goofy, picture-taking sunglasses. Vast warehouses contain hundreds of thousands of pairs of Spectacles that now look like they will never be sold. After an initial spike in interest, sales fell off a cliff. Now it looks like Snap's promise of an expanding hardware division will be a casualty of Spectacles' failure.

Why does this matter?

Ugly sunglasses are unpopular with consumers. The fact that this is apparently news to executives at Snap points to a real problem in technology. One didn't need a deep understanding of technology or millennials to predict that Spectacles would fail. It was enough to just look at them. I have no idea what kind of bubble has formed around Silicon Valley that it seems impervious to common sense. So, let me say this one last time: do not create products that make your customers' faces look deformed or bizarre. Just don't. Full stop.

In a nutshell: Ridiculous-looking products that serve no compelling need are not popular with consumers. Also, bears shit in the woods.

Read More