Artificial Intelligence Is Already Weirdly Inhuman

Nineteen stories up in a Brooklyn office tower, the view from Manuela Veloso’s office—azure skies, New York Harbor, the Statue of Liberty—is exhilarating. But right now we only have eyes for the nondescript windows below us in the tower across the street.

In their panes, we can see chairs, desks, lamps, and papers. They don’t look quite right, though, because they aren’t really there. The genuine objects are in a building on our side of the street—likely the one where we’re standing. A bright afternoon sun has lit them up, briefly turning the facing windows into mirrors. We see office bric-a-brac that looks ghostly and luminous, floating free of gravity.

Veloso, a professor of computer science and robotics at Carnegie Mellon University, and I have been talking about what machines perceive and how they “think”—a subject not nearly as straightforward as I had expected. “How would a robot figure that out?” she says about the illusion in the windows. “That is the kind of thing that is hard for them.”

Artificial intelligence has been conquering hard problems at a relentless pace lately. In the past few years, an especially effective kind of artificial intelligence known as a neural network has equaled or even surpassed human beings at tasks like discovering new drugs, finding the best candidates for a job, and even driving a car. Neural nets, whose architecture is loosely modeled on that of the human brain, can now—usually—tell good writing from bad, and—usually—tell you with great precision what objects are in a photograph. Such nets are used more and more with each passing month in ubiquitous jobs like Google searches, Amazon recommendations, Facebook news feeds, and spam filtering—and in critical missions like military security, finance, scientific research, and those cars that drive themselves better than a person could.

Not knowing why a machine did something strange leaves us unable to make sure it doesn’t happen again.

Neural nets sometimes make mistakes that people can understand. (Yes, those desks look quite real; it’s hard for me, too, to see they are a reflection.) But some hard problems make neural nets respond in ways that aren’t understandable. Neural nets execute algorithms—a set of instructions for completing a task. Algorithms, of course, are written by human beings.
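To make that idea concrete, here is a minimal sketch in Python with NumPy of what "executing an algorithm" means for a neural net. The two-layer shape, the weights, and the input are invented for illustration; in a real system the weights would be learned from data.

```python
import numpy as np

def sigmoid(x):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Toy two-layer network. These weights are arbitrary placeholder
# values standing in for what training would normally produce.
W1 = np.array([[0.5, -1.2],
               [0.8,  0.3]])        # input -> hidden weights
b1 = np.array([0.1, -0.1])          # hidden-layer biases
W2 = np.array([1.5, -0.7])          # hidden -> output weights
b2 = 0.2                            # output bias

def predict(x):
    # A forward pass: the net's entire "thinking" is just two
    # matrix operations and two squashing steps, executed in order.
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

# An example input, say two pixel intensities.
x = np.array([0.9, 0.2])
print(predict(x))  # a confidence score between 0 and 1
```

Every step here is an ordinary, human-written instruction; in real networks the same steps are repeated across millions of learned weights, which is where the difficulty of explaining a strange answer comes from, not from any single step.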
