As computers become better at accomplishing many things once thought impossible for them, it's becoming harder and harder to keep humanity one step ahead. That said, there are some intriguing new ways to help outwit our silicon friends.
This article gives one example. Here is how it opens:
Computers, like people, understand what they see in the world based on what they've seen before.
And computer brains have become really, really good at being able to identify all kinds of things. Machines can recognize faces, read handwriting, interpret EKGs, even describe (http://news.stanford.edu/news/2014/november/computer-vision-algorithm-111814.html) what's happening in a photograph. But that doesn't mean that computers see all those things the same way that people do.
This might sound like a throwaway distinction. If everybody—computers and humans alike—can see an image of a lion and call it a lion, what does it matter how that lion looks to the person or computer processing it? And it's true that ending up at the same place can be more useful than tracing how you got there. But to a hacker hoping to exploit an automated system, understanding an artificial brain's way of seeing could be a way in.
Scientists at the University of Wyoming and Cornell have figured out how to produce images that fool sophisticated computers (deep neural networks used for artificial intelligence) into confidently seeing objects, even though the images look like nonsense to humans. Here is an example of the images:
You may not see anything recognizable in them, but these neural networks are completely confident they see the objects named below each picture!
One of the lead scientists explains the effect:
"To some extent these are optical illusions for artificial intelligence," co-author Jeff Clune told me via gchat. "Just as optical illusions exploit the particular way humans see... these images reveal aspects of how the DNNs see that [make] them vulnerable to being fooled, too. But DNN optical illusions don't fool us because our vision system is different."
Clune and his team used an algorithm to generate random images that appeared unrecognizable to humans. At first, Clune explains, the computer might be unsure about what it was seeing: "It then says, 'That doesn't look like much of anything, but if you forced me to guess, the best I see there is a lion. But it only 1 percent looks like a lion.'"
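The article doesn't include code, but the basic idea Clune describes, starting from random noise and keeping whatever changes raise the network's confidence in a target label, can be sketched roughly as follows. This is only an illustrative hill-climbing loop, not the team's actual method, and the classifier here (`toy_classifier`) is a throwaway stand-in rather than a real deep network; the function and parameter names are my own.

```python
import numpy as np

# Sketch: greedily mutate a noise image so a classifier grows confident
# in a chosen target class. A real attack would use a trained deep net;
# here a fixed random linear model stands in so the loop actually runs.

rng = np.random.default_rng(0)
NUM_CLASSES, IMG_SIZE = 10, (32, 32)
W = rng.normal(size=(NUM_CLASSES, IMG_SIZE[0] * IMG_SIZE[1]))  # placeholder "network"

def toy_classifier(image):
    """Return softmax 'confidences' over NUM_CLASSES for a flattened image."""
    logits = W @ image.ravel()
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def evolve_fooling_image(target_class, steps=2000, mutations=50):
    """Hill-climb random noise until the classifier favors target_class."""
    image = rng.random(IMG_SIZE)                     # start: unrecognizable noise
    best_conf = toy_classifier(image)[target_class]  # e.g. "only 1 percent a lion"
    for _ in range(steps):
        candidate = image.copy()
        idx = rng.integers(0, IMG_SIZE[0], size=(mutations, 2))
        candidate[idx[:, 0], idx[:, 1]] = rng.random(mutations)  # tweak a few pixels
        conf = toy_classifier(candidate)[target_class]
        if conf > best_conf:                         # keep only changes that help
            image, best_conf = candidate, conf
    return image, best_conf

fooling_image, confidence = evolve_fooling_image(target_class=3)
print(f"classifier confidence in target class: {confidence:.2%}")
```

After enough iterations the image can score very high for the target class while still looking like static to a person, which is the gap between human and machine vision the researchers are pointing at.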
This might seem like a harmless parlor trick, but it has significant implications:
To people in countries where governments restrict speech and publishing, citizens could theoretically communicate secretly by leveraging the opacity within deep neural networks. Clune: "People could embed messages discussing freedom of the press and get them past communist AI-censoring filters by making the image look like the communist party flag!"
Even when computers can be retrained so that they no longer mistake a particular nonsense image for the thing they thought they saw, it's easy to generate new images that fool them all over again, which, for now, leaves such networks vulnerable to hackers. Understanding such opportunities for exploitation will be critical as artificial intelligence becomes increasingly pervasive.
Here is a link to the full article: How to Fool a Computer With Optical Illusions - The Atlantic