
Monday, January 25, 2016

Have you talked to your computer lately?

Original post:  Sep 3, 2015

Many new smartphones come equipped with a digital assistant. My wife is constantly talking to Siri. I don't have an Android phone, but I believe Google Now is similar, and Windows users have Cortana. I am sure there are others on the horizon. Each new improvement makes it easier for us to locate information that once required laborious searches through labyrinths of card catalogs and dusty shelves.

This article in Wired talks about how computers might actually be changing the way we describe the world around us.

The example they give is language translation. The first attempts at programming computers to perform this complex task involved simple substitution: they would replace nouns with nouns, verbs with verbs, and so on. In the 1980s, a team at IBM (the company later famous for Watson, the computer that defeated humans at Jeopardy!) came up with a new approach. They decided to use a statistical model:

They did this in a clever way. They got hold of a copy of the transcripts of the Canadian parliament from a collection known as Hansard. By Canadian law, Hansard is available in both English and French. They then used a computer to compare corresponding English and French text and spot relationships.
For instance, the computer might notice that sentences containing the French word bonjour tend to contain the English word hello in about the same position in the sentence. The computer didn’t know anything about either word—it started without a conventional grammar or dictionary. But it didn’t need those. Instead, it could use pure brute force to spot the correspondence between bonjour and hello.
By making such comparisons, the program built up a statistical model of how French and English sentences correspond. That model matched words and phrases in French to words and phrases in English. More precisely, the computer used Hansard to estimate the probability that an English word or phrase will be in a sentence, given that a particular French word or phrase is in the corresponding translation. It also used Hansard to estimate probabilities for the way words and phrases are shuffled around within translated sentences.
Using this statistical model, the computer could take a new French sentence—one it had never seen before—and figure out the most likely corresponding English sentence. And that would be the program’s translation.
Their model proved successful. Google Translate uses this very method to perform its magic!
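
To make the brute-force counting idea concrete, here is a minimal Python sketch of my own (a toy illustration, not IBM's actual model) that estimates the probability of an English word given a French word from a tiny made-up parallel corpus:

from collections import Counter, defaultdict

# Toy "parallel corpus": (French sentence, English sentence) pairs.
# A real system would use millions of sentence pairs from Hansard.
corpus = [
    ("bonjour monsieur", "hello sir"),
    ("bonjour madame", "hello madam"),
    ("merci monsieur", "thank you sir"),
    ("au revoir madame", "goodbye madam"),
]

# Count how often each French word co-occurs with each English word
# in corresponding sentences; no grammar or dictionary involved.
pair_counts = defaultdict(Counter)
french_totals = Counter()
for fr_sent, en_sent in corpus:
    en_words = en_sent.split()
    for fr in fr_sent.split():
        french_totals[fr] += len(en_words)
        for en in en_words:
            pair_counts[fr][en] += 1

def p_english_given_french(en, fr):
    # Estimate P(English word | French word) from raw co-occurrence.
    return pair_counts[fr][en] / french_totals[fr]

print(p_english_given_french("hello", "bonjour"))    # 0.5: a strong signal
print(p_english_given_french("goodbye", "bonjour"))  # 0.0: they never co-occur

With enough data, these estimates sharpen dramatically, which is exactly the brute-force correspondence-spotting the quoted passage describes.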

Language is not the only area where computers are pioneering:

Related stories are playing out across science, not just in linguistics. In mathematics, for example, it is becoming more and more common for problems to be settled using computer-generated proofs. An early example occurred in 1976, when Kenneth Appel and Wolfgang Haken proved the four-color theorem, the conjecture that every map can be colored using four colors in such a way that no two adjacent regions have the same color. Their computer proof was greeted with controversy. It was too long for a human being to check, much less understand in detail. Some mathematicians objected that the theorem couldn’t be considered truly proved until there was a proof that human beings could understand.
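
To see what the theorem claims (though not remotely how Appel and Haken proved it), here is a toy Python sketch of my own that greedily colors a small imaginary map, giving each region a color not used by any of its already-colored neighbors:

# Adjacency of regions in a small imaginary map.
neighbors = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

colors = ["red", "green", "blue", "yellow"]
assignment = {}
for region in neighbors:
    used = {assignment[n] for n in neighbors[region] if n in assignment}
    # The theorem guarantees some four-coloring exists for any planar
    # map; a greedy pass like this one does not prove that in general.
    assignment[region] = next(c for c in colors if c not in used)

print(assignment)  # {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}

Appel and Haken's actual proof reduced the problem to a machine check of an enormous number of configurations, which is precisely what made it too long for any human to verify by hand.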

As computers grow more and more powerful, it will be fascinating to watch how they begin to change the way we experience the world around us. I am sure that healthcare will be no different from the world at large!

Wednesday, July 1, 2015

Would you trust a robot?

Original post:  Jan 27, 2015

[Image: a robot sommelier]
Would you trust a robot to pick out a wine for you?
If you did and it picked out a bottle that you didn't like, would you give it a second chance?

As technology is inserted into new areas of our lives, we will have to entrust more and more decision-making power to fancy algorithms. Are we ready for the loss of control?

A recent study showed that it isn't quite as simple as placing blind faith in our computer overlords:

In a paper called “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” forthcoming in the Journal of Experimental Psychology: General, the University of Pennsylvania researchers Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey asked subjects to consider the challenge of making a difficult forecast: predicting either MBA students’ academic success or states’ airline traffic. They could choose to tie what they earned to either the prediction accuracy of a human (usually themselves) or that of a statistical model. Before making their decision, they first saw the model’s performance on several trial runs, or saw the human’s performance, or both, or neither.
When they had not seen the statistical model perform on the trial runs, the majority of subjects bet on the model being more accurate in the money-earning round—they chose to tie their earnings to its performance rather than the human’s. But if they had seen it perform, the majority bet on the human. That’s despite the fact that the model outperformed the humans in every comparison, by margins ranging from 13 to 97 percent. Even when people saw the performance of both the model and the human in the trial runs, and saw that the model did better, they still tended to tie their winnings in the earnings round to the human over the model. They were more accepting of the human’s mistakes.
These findings surprised the researchers. They had expected people to shy from algorithms at first and then change their minds after seeing their superior performance, Dietvorst says. Instead, he says, they “found completely the opposite.”
One of the study authors speculates that people are less likely to trust computers when the decision touches on their health or their ego. We may feel that human judgment can provide superior advice by weighing variables no computer could possibly comprehend. That belief may not actually be supported by the data:

There can be a real cost to this aversion. A 2000 meta-analysis summarized 136 studies comparing predictions made by experts with those made by equations, in areas including medical prognosis, academic performance, business success, and criminal behavior. Mechanical predictions beat clinical predictions about half the time, while humans outperformed equations in only 6 percent of cases. Those are judgments with significant implications for our lives, and it’s a genuine loss to ignore a system that can give us much more accurate answers.

A possible solution might be trying to incorporate some user feedback to help return a level of control to the user:

The researchers at Penn are now exploring why people abandon algorithms so quickly. There may be a hint in the fact that subjects judged the statistical model as worse than humans at learning from mistakes and getting better with practice. Perhaps people could learn to trust algorithms more if they were told that computers can learn. Balfoort says that once you inform customers that WineStein gets better with feedback, “you get a satisfied nod.”
In current work, Dietvorst is finding that giving people the chance to alter an algorithm’s forecast even a trivial amount increases their adoption of it over a human counterpart, from less than 50 percent to more than 70 percent. People “don’t want to completely surrender control,” he says. Ironically, the user’s adjustment usually makes the computer forecast slightly worse, but at least it gets them to use it.
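
Dietvorst's bounded-adjustment idea is easy to picture in code. Here is a minimal Python sketch of my own (an illustration of the concept, not the researchers' implementation) in which the user can nudge the model's forecast, but only within a fixed band:

def blended_forecast(model_forecast, user_adjustment, max_tweak=5.0):
    # Cap the user's tweak at +/- max_tweak so the final number
    # stays close to the model while preserving a sense of control.
    tweak = max(-max_tweak, min(max_tweak, user_adjustment))
    return model_forecast + tweak

# The model predicts a student's percentile rank as 72; the user
# wants to pull it down by 20 points, but only 5 are allowed.
print(blended_forecast(72.0, -20.0))  # 67.0

As the study found, the capped tweak usually makes the forecast slightly worse, but it more than pays for itself by getting people to use the algorithm at all.
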
What do you think?

Sunday, June 14, 2015

But do you really know me?

Original post:  Mar 20, 2014

I found a fascinating article by Derek Thompson in the Atlantic that discusses the new problems created by the rise of computer technology. Now that we have easy access to nearly limitless amounts of information, how do we zero in on the things we want the most?

Two leading companies, Facebook and Amazon, use special algorithms to help their customers navigate their vast treasure troves of data.

An algorithm is just a piece of code that solves a problem. Facebook's problem, with the News Feed, is that each day, there are 1,500 pieces of content—news articles, baby photos, engagement updates—and much of it is boring, dumb, or both. Amazon's problem is that it wants you to keep shopping after you buy what you came for, even though you don't need the vast majority of what Amazon's got to sell.

Both organizations narrow the aperture of discovery by using their best, fastest, most scalable formulas to bring to the fore the few things they think you'll want, all with the understanding that, online, you are always half a second away from closing the tab.

Facebook uses your "likes" and combines them with paid placements from its advertisers to customize your News Feed. Amazon found that no existing solution fit its needs, so it created its own. It churns through millions of items and returns your search results--usually in less than a second! Amazon came up with an algorithm that was both fast and scalable. The article includes a simplified diagram from the patent application.
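
Amazon has long described its engine as item-to-item collaborative filtering: instead of matching you to similar customers, it precomputes which items tend to be bought together. Here is a minimal Python sketch of that idea (the catalog and purchases are made up for illustration):

import math

# Toy purchase history: which customers bought which items.
purchases = {
    "alice": {"wine guide", "corkscrew", "decanter"},
    "bob":   {"wine guide", "corkscrew"},
    "carol": {"corkscrew", "decanter"},
    "dave":  {"sci-fi novel"},
}

# Invert to item -> set of buyers. With millions of items, Amazon
# precomputes item-to-item similarities offline to keep lookups fast.
buyers = {}
for customer, items in purchases.items():
    for item in items:
        buyers.setdefault(item, set()).add(customer)

def similarity(item_a, item_b):
    # Cosine similarity between the two items' buyer sets.
    a, b = buyers[item_a], buyers[item_b]
    return len(a & b) / math.sqrt(len(a) * len(b))

def recommend(item, top_n=2):
    # Rank other items by how often the same customers bought both.
    others = (i for i in buyers if i != item)
    return sorted(others, key=lambda i: similarity(item, i), reverse=True)[:top_n]

print(recommend("wine guide"))  # ['corkscrew', 'decanter']

Notice that nothing in the sketch looks at who you are; the recommendations fall out of purchase patterns alone, which foreshadows the comparison below.
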
These are two successful approaches, yet wildly divergent.

The strengths and weaknesses of each algorithm are clear. Facebook knows more about its users; Amazon knows more about its inventory. Each could stand to learn a bit from the other. Facebook is desperately trying to better identify its higher quality inventory, while it's often obvious that Amazon doesn't know its users. Amazon knows what's good, because it knows (a) what's been bought and (b) what's been highly rated. Facebook has likes, which are similar to ratings, but people might not be reading most of the content that they like, as Chartbeat CEO Tony Haile suggested in Time. In short, Amazon and Facebook are solving the problem of abundance with similar, but conceptually opposite, formulas.

The article goes on to complain about a mediocre book purchase on Amazon. The next visit brought 19 new recommendations for books by the same author!

Here is the closing argument:

Maybe we like it that way. The equivalent knock on Facebook has often been that it knows us too personally and that its insinuation into our lives is creepy. But that's just the thing. For the age of algorithms to succeed on its own terms, we have to embrace a new version of intimacy that felt natural with the local newspaper and corner shop clerk who knew our name. The machines have to know us.