“It is a very sad thing that nowadays there is so little useless information.” – Oscar Wilde, A Few Maxims for the Instruction of the Over-Educated
By reading this blog post you have created a marketable product.
If you use Facebook, Twitter, LinkedIn, Flickr, Google+, Tumblr, YouTube, Instagram, Foursquare or even a debit or credit card, you are creating a product these companies use and sell. This probably shouldn’t come as much of a surprise to anyone, given the controversy surrounding the recent use of social media information. But what should come as a surprise is just how this information is used.
Everything from what you “like” on Facebook, to the purchases you make in a store, to the places you go with your friends, is information that can be used to build a picture of you. Facebook currently controls a great deal of this information because of how many different elements of your life you feed into it; the ‘Timeline’ feature is a fairly recent innovation meant to encourage you to feed in even more. There are many different channels (apps, Google searches, phone calls and so on) for learning minute details about your daily life, and these details can be brought together to form a picture of your likes, dislikes and tendencies. In theory, and increasingly in practice, that picture can be used to predict what you will need, what you will want and what you will become.
Data aggregators, such as Dstillery, purchase your information from sources like Facebook and use it to create marketing models to target advertising back at you. Marketing models may not seem particularly sinister, and really they aren’t. However, you may not be aware of just how powerful statistics can actually be and, as a result, how valuable your personal information is to companies.
Consider the following ongoing project at the University of Toronto:
The project, run by the university’s neonatal care department, studies how continuously streamed data can improve medical care for premature babies. By collecting multiple data points on any given newborn and streaming that information into a computer every second, doctors were able to predict illnesses in newborns a full 24 hours before symptoms appeared. An amazing achievement in itself! But what is even more interesting is the indicator they discovered: rather than sudden erratic behaviour in the vital signs, it was a stabilization of vital signs – a sort of calm before the storm.
Why is this impressive? Well, there’s actually no medical explanation for it. Doctors can’t explain why the vital signs stabilized but, lo and behold, they did. Doctors could then adjust treatment based on statistical probability alone – no medicine, strictly speaking, was required to reach the diagnosis. All that was needed was statistics and probabilities drawn from data collected over an extended period of time.
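To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of signal described above: watching a stream of vital signs and flagging when their variability suddenly drops below a baseline. Everything in it – the window size, the `ALERT_RATIO` threshold, the toy data – is an assumption for illustration; the actual project used far more sophisticated streaming models.

```python
# Illustrative sketch only: flag a suspicious *drop* in heart-rate variability,
# the "calm before the storm" described above. Thresholds, window sizes and
# the toy signal are all invented for this example.
import numpy as np

WINDOW = 60          # samples per rolling window (hypothetical: one sample per second)
ALERT_RATIO = 0.4    # alert if variability falls below 40% of its baseline (invented cut-off)

def variability_alerts(heart_rates, window=WINDOW, alert_ratio=ALERT_RATIO):
    """Yield indices where rolling variability drops sharply below the baseline."""
    rates = np.asarray(heart_rates, dtype=float)
    baseline = np.std(rates[:window])            # variability in the first window
    for end in range(2 * window, len(rates) + 1):
        recent = rates[end - window:end]
        if baseline > 0 and np.std(recent) < alert_ratio * baseline:
            yield end - 1                        # sample index that triggered the alert

# Toy usage: a noisy signal that suddenly becomes unnaturally stable.
rng = np.random.default_rng(0)
signal = np.concatenate([140 + 8.0 * rng.standard_normal(300),    # normal, noisy vitals
                         140 + 0.5 * rng.standard_normal(120)])   # suspiciously calm stretch
print(list(variability_alerts(signal))[:3])      # first few alert positions
```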
The same is true for the information you plug into social media. All of that information can be used in a variety of ways, not only related to health problems but to unconscious lifestyle changes – possibly indicating something as huge as a pregnancy. In 2012, Charles Duhigg of the New York Times reported a case in which Target predicted a woman’s pregnancy before she told her father about it, based largely on her shampoo purchases. Apparently, pregnant women experience a heightened sense of smell and so are less likely to buy scented shampoos. Target regularly records the purchases its members make and builds a picture of their buying habits. Target’s statisticians interpreted this woman’s errant purchasing pattern and determined she might be pregnant, which turned out to be true.
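In the spirit of Duhigg’s account, here is a toy sketch of how purchase patterns can be turned into a prediction score. The product categories, weights and threshold are all invented for illustration; the model Duhigg described reportedly combined a couple of dozen products into a score, not these particular numbers.

```python
# Purely illustrative "pregnancy score": invented categories, weights and threshold,
# meant only to show the shape of purchase-pattern scoring described above.
PURCHASE_WEIGHTS = {
    "unscented_shampoo": 0.9,     # hypothetical weight for the shampoo signal above
    "unscented_lotion": 1.2,
    "prenatal_vitamins": 2.5,
    "cotton_balls_bulk": 0.7,
}
SCORE_THRESHOLD = 3.0             # invented cut-off for flagging a shopper

def pregnancy_score(purchases):
    """Sum the hypothetical weights of the categories a shopper has bought."""
    return sum(PURCHASE_WEIGHTS.get(item, 0.0) for item in purchases)

shopper = ["unscented_shampoo", "prenatal_vitamins", "cotton_balls_bulk"]
score = pregnancy_score(shopper)
print(score, score >= SCORE_THRESHOLD)   # 4.1 True -> shopper gets baby-related coupons
```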
One fairly common reaction is to call for privacy laws. People argue that this is personal information you own in some way and that companies should ask permission before using it. The fact that Target predicted a woman’s pregnancy is often seen as something it had no right to be looking into in the first place.
The most common reaction is a sense of violation. Many argue that it is part of being human to regard things which are your own as, by definition, private. Others would say that part of being alive is learning to share more and more of who we are with the world; we often consider our most profound relationships to consist in the ability to share these secrets. In Target’s case, there is no doubt that someone like her father had earned the right to that information before Target did. But the question remains whether she has a right to information about herself that she doesn’t even know about.
In a medical context, we would say she does. A doctor might understand far more about what is happening in a person’s body, at least on a medical level, than that person does, but that doesn’t give the doctor the right to share the information with others without permission. In Canada, information collected for statistics to inform the health profession is gathered by means of biennial surveys conducted by Statistics Canada, rather than taken directly from doctors’ treatment records. Even in a context where we might argue that accurate information is a matter of public health, the individual’s right to information about themselves is protected by law.
The difference with social media is, of course, that we do give consent to the use of our information – we ‘sign’ an agreement with Facebook giving it the rights. This is hardly news to anyone using Facebook, since the medium has even provided the context for a dialogue on the subject. (Remember that viral post meant to ensure the protection of someone’s Facebook data?) What that particular episode showed many users, I hope, was that Facebook didn’t actually have a way of protecting their information from being sold to data aggregators. Yet it also showed us that the perceived benefits of Facebook outweighed any sense of loss felt on the part of users. There was no mass exodus from Facebook (“Let my profiles go!”).
People want to participate in social media even if, to be fair, they are only casually aware of the great power that ownership of this vast store of information confers on companies like Facebook. But the power these companies actually wield in owning this information goes much deeper than simply making a dollar. Jaron Lanier, author of Who Owns the Future?, argues that the application of this information has a limiting effect on the public imagination. In an interview for a documentary produced by the Netherlands Public Broadcasting Organization, Lanier argues that companies such as Google are able to gradually influence the kinds of decisions people can make by presenting them with predetermined options. The underlying idea is that the information provided by users is a two-way street: you make a decision based on both your own desires and the options available to you, but your future decisions will be influenced by past choices which, in turn, can be selected by search engines such as Google.
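The shape of that feedback loop can be sketched in a few lines. This is entirely hypothetical – it is not how Google or anyone else ranks results – but it shows how options filtered through past choices narrow what a person is likely to choose next.

```python
# Minimal, hypothetical sketch of the feedback loop described above: the menu a system
# presents depends on what you picked before, so early choices narrow later ones.
from collections import Counter
import random

CATALOGUE = ["politics", "sports", "science", "music", "cooking"]   # invented topics

def present_options(history, k=3):
    """Rank topics by how often the user already chose them, then show only the top k."""
    counts = Counter(history)
    ranked = sorted(CATALOGUE, key=lambda topic: counts[topic], reverse=True)
    return ranked[:k]

random.seed(1)
history = ["sports"]                        # one early choice...
for _ in range(10):
    options = present_options(history)      # ...biases every menu that follows
    history.append(random.choice(options))  # the user can only pick what is shown
print(Counter(history))                     # choices cluster around the early pick
```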
Privacy then becomes an extremely important question. It is not simply about marketing departments pandering to your needs, but about their increasing ability to shape those needs.
The definition of privacy is an important question I don’t pretend to be able to answer in a short blog post, but both Lanier and Datacoup CEO Matt Hogan argue for understanding privacy in the context of this idea of a marketable product. Datacoup is already set up to accept new users who wish to log on and sell their data by connecting their existing accounts, such as Google+, Instagram or Facebook, to the Datacoup platform. As we’ve seen, data is power and it is a marketable product but, as things currently work, you are making a product for someone else for free. It’s kind of like being a carpenter and giving Ikea free tables. It doesn’t make much sense, really. But Hogan and Lanier think we, as users of social media, can and should take control of this product by monetizing it and charging money for the use of our information.
By extension, Hogan and Lanier think the protection of information can consist in the willingness or unwillingness to sell one’s own information. By refusing to sell, you wouldn’t be able to participate in the various digital media available, but you would be protecting your privacy. Lanier even argues that if the government were forced to pay for data, its power over the people would be checked by its purse strings – the people would have some influence over it, much as they do when they vote.
Ultimately, we have already seen how much digital technology has changed the world, and we are so beautifully, but also so terrifyingly, on the edge of something completely different for human civilization. Yet if we fail to take control of all the data we create about ourselves, then we tacitly assent to being mere data. There is a reason why, as Lanier says, there is still science and philosophy and not only statistics: there is a solid, real world at which statistics merely glances sideways.
This is often called the information age but we are moving into a data age. The choice becomes whether one is in control and informed or whether one is powerless and merely a piece of information.