I chose the following article because I thought it would be a good conversation piece given what we discussed in our lecture this past Saturday. Basically, the article claims that “big data” isn’t all it was promised to be (yet) because companies still don’t know how to read and apply it correctly. The article reveals that most firms use big data the same way they used small data in the past: reports and business intelligence. The author goes on to say it isn’t any one person’s or organization’s fault, either; since the human brain can only interpret small data sets, it’s natural for this process to occur.
The article also discusses predictive algorithms and how they can assist in drilling for oil by predicting which drills will need maintenance. Applying this concept to cybersecurity, i.e. predictive analysis to help detect and stop fraud and cyberattacks, is a bit misleading, since the term “predictive” isn’t quite accurate: the data can’t predict attacks at this time, only identify and help prevent them. The article also points out how risky relying on these algorithms can be, referencing the Google Flu Trends project, which failed to predict the number of flu cases in a given area because it was built on the wrong data (attempting to predict the number of people affected rather than simply the time and place of an outbreak). That failure leads to the author’s most interesting point, that “knowing how to rig the game so the computer easily wins is the most important trade secret in applied prediction.” I read that to mean data analysis is only as good as the insight a human attempts to draw from it, which means the data is only as valuable as the person looking at it. Good news for all of us, I guess.