There are some very smart people who just seem to have all the wrong instincts.
Throughout the presidential campaign season, Ted Cruz has consistently come up just a little short of his goals. He seems to have calculated through most of the solution and then left out the last few critical steps. I believe he's done it again.
At the most recent RNC convention, he was given a prime speaking slot and used it to encourage his fellow Republicans to "vote their conscience". It seemed like a brave move at the time. He could very well justify it based on the vicious personal attacks he had endured from Donald Trump during the long and contentious primary season. He also pointedly refused to endorse the nominee. Again, this seemed like a brave position, made all the more conspicuous by the absence of similar stands from his peers.
In recent days, he has waffled and wavered. Finally, last Friday, he endorsed his bitter rival.
Some people have the talent for making all the wrong choices. I believe he is one of them.
I predict that we will look back on this as the high water mark for the Trump campaign.
I am also desperately hoping to be proven right!
Monday, September 26, 2016
Thursday, September 22, 2016
Wednesday, September 21, 2016
It's magic
Step by step instructions on a card trick:
http://www.vulture.com/2016/09/step-by-step-card-trick.html
Friday, September 16, 2016
Rain as a metaphor
In the movies, rain is often used to indicate a somber mood. I think it fits. Dreary locales drenched in precipitation just have a depressing feel.
Sometimes that can be sadly appropriate.
Thursday, September 15, 2016
Machines make mistakes, too
In this article in Slate on advances in medicine, there is a fascinating discussion on machine learning.
Machine learning is making real progress on a variety of fronts:
Enter machine learning, that big shiny promise to solve all of our complicated problems. The field holds a lot of potential when it comes to handling questions where there are many possible right answers. Scientists often take inspiration from nature—evolution, ant swarms, even our own brains—to teach machines the rules for making predictions and producing outcomes without explicitly giving them step-by-step programming. Given the right inputs and guidelines, machines can be as good or even better than we are at recognizing and acting on patterns and can do so even faster and on a larger scale than humans alone are capable of pulling off.
In earlier days, scanners could only recognize letters of specific fonts. Today, after feeding computers tens of thousands of examples of handwritten digits to detect and extrapolate patterns from, ATMs are now reading handwriting on checks. In nanotechnology, the research is similar: Just like many slightly different shapes can mean the same letter, many slightly different molecules can mean the same effect. Setting up a computer to learn how different nanoparticles might interact with the complex human body can assist with what were previously impossibly complex computations to predict billions of possible outcomes.
That said, there are some times when we confuse amazing initial results with lasting progress:
One hundred percent accuracy sounds like a great accomplishment. But machine learning experts get skeptical when they see performance that high. The problem is that machine-learning algorithms sometimes do well on the training set but then fail when applied to the new inputs of their first test. Or, even worse, they do well on the first tests for the wrong reasons.
One famous example of this happened a few years ago when Google Flu Trends made waves for accurately “nowcasting” how many people had the flu. The initiative based its estimates on the patterns its algorithms had found in how search trends lined up with Centers for Disease Control and Prevention data on flu prevalence during a particular window of time. Although its real-time flu forecasts seemed to square with the numbers of CDC-tracked cases released with a two-week delay, its potentially life-saving predictive success didn’t last long. In fact, in subsequent years it failed rather dramatically.
It turned out that the algorithm was simply recognizing terms that people search a lot in winter, like “high school basketball.” It was just a coincidence that the number of people searching about basketball and the number of people getting the flu matched up so well the first year, and unsurprising it didn’t work in the long term. A human would never make that mistake.
It’s not to say that big data can’t be valuable, and advances in machine learning will absolutely lead to huge breakthroughs for nanomedicine. But it is reason to not turn the algorithms loose and let them do all the thinking, especially when the stakes are as high as letting new creations loose in our amazingly complex, fine-tuned human bodies. As the lower-stakes Google Flu Trends failure taught us, machines will need a lot more data before we can trust them to make predictions and, in some cases, a human hand to help transform the variables the machine starts with into more useful ones.
Here it’s useful to think of the example of the spam filter, one of machine learning’s greatest successes. When programmers just threw the text of a spam email into a prediction function, it only learned how to stop messages with wording very similar to the training examples. Tell the function to take into account a wider range of variables, such as the number of dollar signs, or percentage of all-caps words, and they got a much higher success rate.
The article goes on to point out that nature is quirky and irregular. I think that is true. We'll have to be patient as we search for the best ways to use machine learning in the future.
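That spam-filter example lends itself to a quick sketch. The two features below (dollar-sign count and share of all-caps words) come straight from the article's description, but the function names, weights, and sample messages are my own invented illustration, not the actual filter:

```python
def extract_features(text):
    """Turn a raw email into the kind of engineered features the article
    describes: dollar-sign count and share of ALL-CAPS words.
    (Hypothetical feature set, for illustration only.)"""
    words = text.split()
    caps_words = sum(1 for w in words if w.isalpha() and w.isupper() and len(w) > 1)
    return {
        "dollar_signs": text.count("$"),
        "caps_ratio": caps_words / len(words) if words else 0.0,
    }

def spam_score(features):
    """Toy linear scorer; a real filter would learn these weights from data."""
    return 2.0 * features["dollar_signs"] + 5.0 * features["caps_ratio"]

ham = extract_features("Lunch at noon tomorrow?")
spam = extract_features("WIN $$$ NOW!!! FREE MONEY $$$")
```

With made-up weights, the spammy message scores far higher than the innocent one — the point being that the engineered variables generalize where raw memorized wording would not.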
Wednesday, September 14, 2016
It's going to be harder to stay anonymous
Machine learning is now making it possible to see through standard obscuring techniques like pixelation and blurring.
PIXELATION HAS LONG been a familiar fig leaf to cover our visual media’s most private parts. Blurred chunks of text or obscured faces and license plates show up on the news, in redacted documents, and online. The technique is nothing fancy, but it has worked well enough, because people can’t see or read through the distortion. The problem, however, is that humans aren’t the only image recognition masters around anymore. As computer vision becomes increasingly robust, it’s starting to see things we can’t.
Researchers at the University of Texas at Austin and Cornell Tech say that they’ve trained a piece of software that can undermine the privacy benefits of standard content-masking techniques like blurring and pixelation by learning to read or see what’s meant to be hidden in images—anything from a blurred house number to a pixelated human face in the background of a photo. And they didn’t even need to painstakingly develop extensive new image uncloaking methodologies to do it. Instead, the team found that mainstream machine learning methods—the process of “training” a computer with a set of example data rather than programming it—lend themselves readily to this type of attack.
This new technique doesn't even have to be fully accurate to be concerning!
Even if the group’s machine learning method couldn’t always penetrate the effects of redaction on an image, it still represents a serious blow to pixelation and blurring as a privacy tool, says Lawrence Saul, a machine learning researcher at University of California, San Diego. “For the purposes of defeating privacy, you don’t really need to show that 99.9 percent of the time you can reconstruct” an image or string of text, says Saul. “If 40 or 50 percent of the time you can guess the face or figure out what the text is then that’s enough to render that privacy method as something that should be obsolete.”
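The core of the attack is easy to sketch: pixelation is just block averaging, which destroys the detail a human needs but leaves enough signal to match against a set of candidate images. The tiny "images", block size, and nearest-neighbor matcher below are my own toy stand-ins for the researchers' trained models:

```python
def pixelate(img, block=2):
    """Block-average an image (a list of rows of pixel values) — the same
    mosaic effect used to redact faces or text."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [(y, x) for y in range(by, min(by + block, h))
                            for x in range(bx, min(bx + block, w))]
            avg = sum(img[y][x] for y, x in cells) / len(cells)
            for y, x in cells:
                out[y][x] = avg
    return out

def match(target_pixelated, candidates):
    """Guess which candidate was pixelated by pixelating each candidate and
    picking the closest match. A real attack trains a model instead, but the
    principle — the redaction still leaks identity — is the same."""
    def dist(a, b):
        return sum((pa - pb) ** 2 for ra, rb in zip(a, b)
                                  for pa, pb in zip(ra, rb))
    return min(range(len(candidates)),
               key=lambda i: dist(pixelate(candidates[i]), target_pixelated))

# Two hypothetical 4x4 "faces". Pixelating face_a flattens it into a uniform
# gray square a human could never identify — yet matching still recovers it.
face_a = [[0, 9, 0, 9], [9, 0, 9, 0], [0, 9, 0, 9], [9, 0, 9, 0]]
face_b = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]]
secret = pixelate(face_a)
```

This is exactly Saul's point above: the matcher doesn't need to reconstruct the image, only to guess correctly often enough to defeat the redaction.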
Scary comic from xkcd
This comic describes the pace of global warming and how sharply the current rate diverges from historical norms.
xkcd: Earth Temperature Timeline
Something to try with the boys
There are ways to speed up the process of becoming an expert.
Barking up the wrong tree: How to be an expert
Tuesday, September 13, 2016
Swift Playgrounds
I see that there is now a new coding app for kids.
OB has expressed his interest in learning how to code. This app will use the Swift programming language. It is the main language currently used by Apple for its own apps. I'd be interested to see what he comes up with.
Friday, September 9, 2016
MIT ILP
I recently attended the MIT Industrial Liaison Program (ILP) Digital Health Conference in Cambridge. It was a truly amazing conference, with presentations by so many brilliant minds. You could practically feel your brain aching as it struggled to keep up with all of the topics.
Here is an example of what we discussed (just on the first day!):
I found one slide in particular summed up my personal opinion on much of what we discussed. While there are many things that these technologies can do, it will be more important for us to focus that activity on improving a specific process to achieve a desirable outcome!
I also learned that I should be thankful I don't have to commute in to Boston every day. Even though it isn't nearly as crowded as NYC, it's still too reminiscent of an ant farm!
The MIT Media Lab also has a wonderful view of the Charles and the city of Boston from the sixth floor deck.
I anxiously await the next version of this meeting!
Tuesday, September 6, 2016
World's largest game of catch
On Sunday, September 4, 2016, we went to the Pawtucket Red Sox game. While the PawSox won, it was a relatively dull 1-0 game. The only run scored in the first inning, when a failed pickoff attempt sailed into center field and allowed the runner to race home from second base! Justin Haley starred for the PawSox. He struck out seven and walked only one. He carried a no-hitter into the eighth inning, but lost it when a ball hit his leg and rolled too far behind the mound for him to throw out the runner at first.
After the game, we attempted to set a world record for the world's largest game of catch. The previous record had been 1,058 set in Cincinnati. We ended up with 1,133 people!
Over the Monster article
ABC news link Providence
Here are some photos and quick video from the event:
Saturday, September 3, 2016
Something to try someday?
JetBlue is trying to deliver a business-class experience at half the cost of its competition.
Here are some of the ways that they are doing that:
Before going any further, let’s explain why airplane food has had the same lousy reputation since the 1980s. One part is stinginess, understandable in an industry with tiny profit margins. Another is that most food comes from the same few industrial catering kitchens. Airlines often partner with fancy chefs to plan menus, but catering chefs do the day-to-day grunt work of churning out meals for a bunch of carriers.
Then there’s the combination of cabin pressure, altitude, and dry air that knocks out roughly 30 percent of a person’s sense of taste, Farmerie says. Salt can fill that gap, but a heavy pour can create the over-processed texture people associate with plane food.
....
The chef relies on vinegar and earthy root spices to cut through airplane-induced malaise without going too salty. That’s why he serves his ribeye with a balsamic-ginger reduction. The citrus tang of grapefruit and Thai chili makes for poached salmon that’s bright, “not just this big, salty explosion which makes you feel dehydrated and horrible,” he says.
JetBlue goes to great lengths to keep Mint customers satisfied:
The illusion of constant luxury, JetBlue’s Perry says, relies on timing. After test runs in mock Mint cabins, Perry and his team realized passengers were most antsy while waiting for someone to take their order. Now, flight attendants ask them what they want to eat right away, and hold the orders until meal time. At least one crew member floats between Mint and economy, helping wherever they’re needed. “It reminds me of an old jazz quote about how drummers are the ones you don’t notice,” says Perry. “It’s only when they make a mistake that you notice them.”
Mint is working well for JetBlue. Those routes are some of the most profitable at the airline. Profits are up 20 percent, and the airline is actively looking to expand the service.
Here is the link to the full article: