George Bandy

The saying goes, if you’re not paying for the product, then you are the product.
There is a general trend now towards being more conscious of how our personal data is used, and especially of the impact it can have once you enter the realm of big data, with hundreds of millions of data points. One of the most worrisome examples was the scandal surrounding the use of targeted political advertising during the 2016 US election campaign. The advertising scheme had been orchestrated by a British-American company specialising in big data analysis. The company gained attention as the success of the campaign became apparent; it was incredibly effective. The company, Cambridge Analytica, now quite a familiar name, was not shy about boasting of its big data capabilities, at one point claiming to hold approximately five thousand data points on every US voter. With that much data you can psycho-analyse a whole nation.

The scandal broke when it was revealed how the company had amassed this data behemoth. The primary source was a personality test app on Facebook which scraped the profiles of 87 million individuals. The process was not wholly consensual, and even the individuals who did press “I agree” would not likely have had a clear idea of how their data would be used. This use of data sits in a morally grey area, and many questions are now being raised about its legitimacy. Brainwashing, indoctrination and propaganda would not be unfitting words for these targeted efforts to proselytise a nation, all driven by big data. Can it be just to influence the decisions of whole populations? Should my data be a commercial commodity for this purpose? At what point is our data being misused?
There’s a new term for this sort of business model: surveillance capitalism. The term was coined by Harvard professor Shoshana Zuboff, who describes the model’s fundamental characteristics as ‘aggregating vast amounts of data on people, using it to infer incredibly detailed profiles on their lives and behaviour, and monetising it by selling these predictions to others such as advertisers.’ Commenting on Cambridge Analytica, she noted that ‘[they] simply deployed the same basic model to target voters rather than consumers.’ Given the effectiveness of this form of targeting, it is no surprise that the world’s most valuable resource is now said to be not oil, but data.
Following the scandal, Cambridge Analytica lost its clients and filed for bankruptcy in May 2018 (…though it does live on in an afterlife under the auspices of another company, Emerdata, which acquired it after the bankruptcy). Facebook, which had allowed this practice to slip under its radar, was fined $5 billion by the Federal Trade Commission for its negligence towards privacy.
It was the lack of consent from the individuals involved that made this data mining illicit and allowed the authority to issue a fine. However, there are many instances where consent is given for data usage. We’ve all clicked “agree” after only the slightest glance at the terms and conditions at some stage. It’s not just that they take too long to read (the iTunes terms, for example, would take a little under four hours), it’s that the sites are often so dominant that it’s hard not to use them to some degree, so we click yes and try not to think about it too much. This is particularly difficult to avoid when the sites in question are social media platforms for which there are no real alternatives. Here the issue is not whether consent has been given, but whether the company should be able to ask for it.
This last question strikes a particular chord. Big data, like many other aspects of the new digital markets, has developed at a pace the law has not kept up with. Slowly, different areas of law are closing in on the new data practices, but the big firms are not letting up easily.
In February this year the German Competition Authority (Bundeskartellamt) brought a case against Facebook. Its claim was that Facebook had abused its dominant position on the market through the way it collected and shared data among the different organs of its social media empire: Facebook, Instagram and WhatsApp, among others. In its terms and conditions, Facebook requires users to allow the collection of personal data outside of Facebook, from other websites and apps including other Facebook-owned services, and to allow this data to be combined with the data from their Facebook account to create a single data set on each user. In short, the apparently fragmented trail you leave across the web can to some extent be followed, and that trail can be led back to your social media presence. The Bundeskartellamt considered this exploitative of users; in other words, the “price” was too high. The ruling required Facebook to remove this as a condition of using the platform, but added that the company could continue the practice if it sought additional consent from users, though it did not mention why users might give this extra consent.
The case was the first of its kind to connect data collection practices with the concept of exploitative abuse by a dominant undertaking. It did not stand for long, though. Facebook successfully appealed the case before the Higher Regional Court in Düsseldorf in August. Where the Competition Authority had fallen short was that it had not fully developed a theory of harm, which in competition terms means showing how Facebook’s behaviour causes actual or potential harm to consumers and competitors. Rather, the Court noted that the services are not indispensable and that users sign up voluntarily. Additionally, it said the Bundeskartellamt had not considered the counterfactual, i.e. what benefits would arise in the alternative situation of a paid-for service without the same data collection, and whether consumers would choose it. Finally, the Court stated that exploitative practices concern the company’s behaviour, not the mindset of the users. Quoting the judgment:
“there is no evidence that Facebook obtains the consent of users through coercion, pressure, exploitation of lack of willpower or otherwise unfair means […] whether the users act out of indifference or because they do not want to spend the necessary time and effort [to read the conditions] […] does not matter [as] their decision is ultimately free, uninfluenced and autonomous”.
Beyond the consideration that competition policy is perhaps not the most appropriate field for these cases, this leaves us with the question of whether personal data should be given stronger protection in legislation, or left to users to “spend” how they please. In light of big data and the scandals that have emerged so far, perhaps legislators should be more wary of what can happen when companies can accumulate so much, so easily.