Thursday, September 15, 2016

Machines make mistakes, too

This article in Slate on advances in medicine includes a fascinating discussion of machine learning.

Machine learning is making real progress on a variety of fronts:

Enter machine learning, that big shiny promise to solve all of our complicated problems. The field holds a lot of potential when it comes to handling questions where there are many possible right answers. Scientists often take inspiration from nature—evolution, ant swarms, even our own brains—to teach machines the rules for making predictions and producing outcomes without explicitly giving them step-by-step programming. Given the right inputs and guidelines, machines can be as good or even better than we are at recognizing and acting on patterns and can do so even faster and on a larger scale than humans alone are capable of pulling off.
In earlier days, scanners could only recognize letters of specific fonts. Today, after feeding computers tens of thousands of examples of handwritten digits to detect and extrapolate patterns from, ATMs are now reading handwriting on checks. In nanotechnology, the research is similar: Just like many slightly different shapes can mean the same letter, many slightly different molecules can mean the same effect. Setting up a computer to learn how different nanoparticles might interact with the complex human body can assist with what were previously impossibly complex computations to predict billions of possible outcomes.
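The handwriting example is easy to try for yourself. Below is a minimal sketch in Python, using scikit-learn's small bundled digits dataset (roughly 1,800 images rather than the tens of thousands the article mentions) and an off-the-shelf logistic-regression classifier as a stand-in for whatever the check-reading systems actually use. The point is only to show the pattern the article describes: feed the machine labeled examples, hold some back, and see how well it generalizes.

# A toy version of "learning digits from examples": train on labeled
# images, then measure accuracy on images the model has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()   # 8x8 grayscale digit images, flattened to 64 features each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000)   # a simple stand-in classifier
clf.fit(X_train, y_train)                 # learn patterns from the labeled examples

print("accuracy on unseen digits:", clf.score(X_test, y_test))

Scores on a tiny, clean dataset like this tend to look impressive, which is exactly the kind of number the passage below warns us to treat with some skepticism until the model has been tested on genuinely new data.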

That said, there are times when we confuse impressive initial results with lasting progress:
One hundred percent accuracy sounds like a great accomplishment. But machine learning experts get skeptical when they see performance that high. The problem is that machine-learning algorithms sometimes do well on the training set but then fail when applied to the new inputs of their first test. Or, even worse, they do well on the first tests for the wrong reasons.
One famous example of this happened a few years ago when Google Flu Trends made waves for accurately “nowcasting” how many people had the flu. The initiative based its estimates on the patterns its algorithms had found in how search trends lined up with Centers for Disease Control and Prevention data on flu prevalence during a particular window of time. Although its real-time flu forecasts seemed to square with the numbers of CDC-tracked cases released with a two-week delay, its potentially life-saving predictive success didn’t last long. In fact, in subsequent years it failed rather dramatically.
It turned out that the algorithm was simply recognizing terms that people search a lot in winter, like “high school basketball.” It was just a coincidence that the number of people searching about basketball and the number of people getting the flu matched up so well the first year, and unsurprising it didn’t work in the long term. A human would never make that mistake.
It’s not to say that big data can’t be valuable, and advances in machine learning will absolutely lead to huge breakthroughs for nanomedicine. But it is reason to not turn the algorithms loose and let them do all the thinking, especially when the stakes are as high as letting new creations loose in our amazingly complex, fine-tuned human bodies. As the lower-stakes Google Flu Trends failure taught us, machines will need a lot more data before we can trust them to make predictions and, in some cases, a human hand to help transform the variables the machine starts with into more useful ones.
Here it’s useful to think of the example of the spam filter, one of machine learning’s greatest successes. When programmers just threw the text of a spam email into a prediction function, it only learned how to stop messages with wording very similar to the training examples. Tell the function to take into account a wider range of variables, such as the number of dollar signs, or percentage of all-caps words, and they got a much higher success rate.
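That spam-filter story is also easy to sketch. The toy example below is my own illustration, not the system the article describes: the hand-chosen features (dollar-sign count, share of all-caps words, a crude keyword count) follow the article's description, the handful of training and test messages are made up, and scikit-learn's logistic regression stands in for the prediction function. It also shows the discipline the earlier passage asks for, since the model is judged only on held-out messages it never saw during training.

# A toy spam filter: a human picks the variables, the machine fits the weights.
from sklearn.linear_model import LogisticRegression

def features(text):
    words = text.split()
    n_caps = sum(1 for w in words if w.isupper() and len(w) > 1)
    return [
        text.count("$"),                 # number of dollar signs
        n_caps / max(len(words), 1),     # share of all-caps words
        text.lower().count("free"),      # a crude keyword count
    ]

# Made-up training messages: 1 = spam, 0 = not spam.
train = [
    ("WIN $$$ NOW, claim your FREE prize", 1),
    ("FREE offer!!! Send $10 to unlock $$$ rewards", 1),
    ("Meeting moved to 3pm, agenda attached", 0),
    ("Can you review the draft before Friday?", 0),
]
# Held-out messages the model never sees during training.
test = [
    ("Congratulations, you won $5000, reply FREE today", 1),
    ("Lunch tomorrow? The usual place works for me", 0),
]

clf = LogisticRegression()
clf.fit([features(t) for t, _ in train], [label for _, label in train])

print("held-out accuracy:", clf.score(
    [features(t) for t, _ in test], [label for _, label in test]))

On real mail you would want far more data and far more features, but the shape of the idea is the same: a human decides which variables are worth handing to the machine.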
The article goes on to point out that nature is quirky and irregular. I think that is true. We'll have to be patient as we search for the best ways to use machine learning in the future.
