The world is rapidly digitalizing, and the deployment of data offers many opportunities for economic development, sustainability and a better quality of life. There are, however, considerable concerns about the misuse of (personal) data and the undesirable outcomes of unbridled data use. These concerns are legitimate, but we also run the risk of becoming too defensive about data, missing out on big opportunities; more importantly, our selective opposition to data sharing may itself have undesirable effects.

Our observations

  • The implementation of the General Data Protection Regulation (GDPR) has, according to an evaluation by the European Commission, worked well when it comes to “empowering” consumers and giving them more insight into and control over the use of their personal data. At the same time, the regulation is expressly targeted at minimizing risks, and as such may foster an overly defensive attitude among governments and citizens that could, for instance, stand in the way of innovation.
  • The so-called privacy paradox plays a large role in this: we consider privacy highly important, yet time and again show willingness to exchange data for access to information or services. This applies most when the reward is immediate and benefits us as individuals. It’s therefore likely that a more defensive attitude towards data sharing will lower our willingness to share data for collective purposes (e.g. those relating to public health).
  • The recently launched Dutch coronavirus app was long-awaited, partly because of a painstaking approach to the privacy risks. The chosen solution, as developed by Apple and Google, minimizes the storing of privacy-sensitive data, but also limits the possibilities for researchers and policymakers to ascertain matters such as where infections took place, since location data is lacking (a simplified sketch of this protocol follows below this list). Ironically, some governments therefore actually asked for less protection of privacy than the tech parties were willing to offer.
  • When only or mostly contextual data is used, the risk of bias increases, along with the risk of undesirable consequences such as discrimination and the reinforcement of socio-economic inequality. This happens, for example, when predictive policing leads to higher deployment of police services in neighborhoods with above-average crime rates, which then almost unavoidably leads to higher rates of reported crime (a toy simulation of this feedback loop also follows below). Another example is that theft insurance costs more in neighborhoods or cities with a bad (statistical) reputation, even when the individual takes all the necessary precautions to secure their belongings.
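
To make the trade-off in the coronavirus app concrete, here is a minimal sketch of how a decentralized exposure notification scheme of the Apple/Google kind works. It is deliberately simplified (the real protocol derives rotating identifiers from daily keys and handles timing, encryption and upload authorization), and all names are illustrative:

```python
# Simplified sketch of decentralized exposure notification. This is NOT the
# actual Apple/Google implementation; identifiers and key handling are toy
# versions of the real, cryptographically derived ones.
import secrets

class Phone:
    """Each phone broadcasts short-lived random identifiers over Bluetooth
    and remembers the identifiers it hears nearby. No location is recorded
    and no central party learns who met whom."""

    def __init__(self):
        self.own_ids = []        # identifiers this phone has broadcast
        self.heard_ids = set()   # identifiers received from nearby phones

    def broadcast_id(self):
        # The real protocol derives this from a daily key and rotates it
        # roughly every 15 minutes; here it is simply a fresh random token.
        rpi = secrets.token_bytes(16)
        self.own_ids.append(rpi)
        return rpi

    def hear(self, rpi):
        self.heard_ids.add(rpi)

def check_exposure(phone, published_ids):
    """Matching happens on the device: the health authority only publishes
    the identifiers of diagnosed users, and each phone checks locally
    whether it has heard any of them."""
    return bool(phone.heard_ids & set(published_ids))

# Two phones meet; later, one user tests positive and uploads their identifiers.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_id())
print(check_exposure(bob, alice.own_ids))  # True: bob gets a warning
```

Note what the design leaves out: neither the server nor the diagnosed user learns where or with whom a contact took place. That is precisely the data researchers and policymakers would have wanted, which explains the irony noted above.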
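The feedback loop in the predictive policing example can likewise be made tangible with a toy simulation. The numbers below are purely illustrative; the point is the mechanism:

```python
# Toy model of the predictive-policing feedback loop. Two neighborhoods have
# identical true crime rates, but A starts with a worse statistical reputation.
true_incidents = {"A": 100, "B": 100}  # actual incidents per period, identical
recorded = {"A": 60.0, "B": 40.0}      # historically recorded incidents differ

for period in range(5):
    total = sum(recorded.values())
    for hood in recorded:
        # Patrols are allocated in proportion to last period's recorded crime,
        # and recorded crime rises with patrol presence: police record what
        # they are present to see.
        patrol_share = recorded[hood] / total
        recorded[hood] = true_incidents[hood] * patrol_share
    print(f"period {period}:", {h: round(v) for h, v in recorded.items()})

# Every period prints {'A': 60, 'B': 40}: the initial bias reproduces itself
# indefinitely, even though the underlying crime rates are identical.
```

The data never corrects the bias, because the data gathering itself is shaped by the bias.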

Connecting the dots

Like the great technologies of our past, digital technology enables us to increase our wealth and, more importantly, to actually improve our well-being. On the one hand, technology can have direct financial benefits, such as cheaper services or more efficient use of energy and resources. On the other hand, and perhaps more crucially, technology enables us to improve our quality of life by facilitating matters such as better healthcare or a cleaner living environment. Opportunities are arising in our own daily lives as citizens and consumers, as well as in the public space, where we can organize matters more intelligently, fairly and cleanly. Data is the most vital resource in this, as data and the knowledge and insights it yields can help us make existing processes more efficient or otherwise smarter and better.

Along with all these promising prospects the datafied society offers, the other side of the coin is that there are great concerns over the use of (personal) data and the possible violation of our right to privacy and, worse, our civil rights. The societal and political knee-jerk reaction is to limit data sharing as much as possible, in hopes of eliminating as many risks as possible. It’s questionable, however, whether this is the right and most productive approach.

First, this defensiveness is causing us to miss out on great opportunities, for individuals and society as a whole. To be clear, that can never be a valid argument for releasing all possible data to solve any problem that needs fixing. We have to be more discerning about this issue and ask ourselves for what purposes we’re willing to allow the use of our data. At the moment, there seems to be an imbalance: we are willing to offer up our data to various (relatively anonymous) tech companies without asking any questions or setting conditions. Though this yields clear “rewards”, these rewards are often unrelated to the data we release or generate. In fact, we often don’t even know what these companies (can) do with our data, beyond personalizing the ads we see. We’re much more cautious with parties closer to us (such as the government or health insurers) and with applications in which the purpose of using our data is clear, visible and more concrete (such as the coronavirus app). In other words, the clearer and more concrete the value of our data is, the more reluctant we are to release it. That may make sense, because it’s easier for us to imagine this data being misused (e.g. resulting in higher health insurance premiums), but it should then also be made clear how this most valuable data could work to our own or the collective advantage.

Second, we’re running the risk that, in the absence of reliable and/or individual data, inaccurate, incomplete or merely contextual data will be used, potentially resulting in decisions that disadvantage us. The role of data will certainly expand, both because of the promise it holds and because of the ubiquitous tendency to ascribe importance to anything that’s measurable. Conversely, we also have the tendency to reduce “problems” to what is easily solved by means of (digital) technology (which Evgeny Morozov calls solutionism). This implies that it’s clearly in our best interest to make sure that data about ourselves is in fact complete and accurate. If it’s not, we will be judged and treated on the basis of non-specific data that happens to be publicly accessible (e.g. features of the neighborhood we live in).

As mentioned, the promise of the datafied society is currently at odds with concerns over the use of personal data. The only way to reconcile the two is to develop systems that enable citizens to explicitly release data to parties that will use it for something of value, without relinquishing all control over their data. It’s also imperative that it becomes much clearer what exactly these parties use the data for and how this benefits the citizen or society as a whole. Many initiatives have already attempted to develop this kind of system and fix the internet, but there hasn’t been any real breakthrough as yet. Hopefully, our (selectively) defensive attitude towards data sharing will eventually make way for a more wholehearted embrace of systems that enable us to get the best out of our data.
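
What might the core of such a system look like? Below is a hypothetical sketch of a purpose-bound consent grant: the citizen states who may receive which attributes, for what purpose and until when, and the data holder must check the grant before releasing anything. All names and fields are illustrative assumptions, not a reference to any existing standard or product:

```python
# Hypothetical sketch of purpose-bound data release; names and fields are
# illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentGrant:
    subject: str       # whose data this concerns
    recipient: str     # who may use it
    fields: frozenset  # which attributes may be released
    purpose: str       # what it may be used for, stated explicitly
    expires: datetime  # consent is time-limited (and revocable by deletion)

def release(record: dict, grant: ConsentGrant, recipient: str, purpose: str) -> dict:
    """Release only the granted fields, and only if recipient, purpose and
    time all match the grant. Change or revoke the grant and the flow stops."""
    if recipient != grant.recipient:
        raise PermissionError("recipient not covered by this grant")
    if purpose != grant.purpose:
        raise PermissionError("purpose differs from what was consented to")
    if datetime.now(timezone.utc) > grant.expires:
        raise PermissionError("grant has expired")
    return {k: v for k, v in record.items() if k in grant.fields}

# A citizen allows a public-health agency to see symptoms, but not identity:
grant = ConsentGrant(
    subject="citizen-42",
    recipient="public-health-agency",
    fields=frozenset({"symptoms", "age_band"}),
    purpose="epidemic monitoring",
    expires=datetime(2030, 1, 1, tzinfo=timezone.utc),
)
record = {"name": "...", "address": "...", "symptoms": ["cough"], "age_band": "30-39"}
print(release(record, grant, "public-health-agency", "epidemic monitoring"))
# -> {'symptoms': ['cough'], 'age_band': '30-39'}
```

The design choice that matters here is that consent is bound to a stated purpose and remains revocable: the citizen shares data without relinquishing control, and the receiving party’s use of it is explicit by construction.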

Implications

  • There is a growing need for data management systems with which citizens can govern the use of their personal data and the data they produce through their everyday practices. Governing should not necessarily imply a strong focus on privacy or on not sharing data; individuals and society as a whole have a lot to gain from sharing data with others and allowing third parties to cooperate on the basis of such (possibly anonymized or aggregated) data.
  • Developing and managing such a system is not necessarily a task for private companies or governments; there are good reasons not to trust either of them fully. Both may be involved to maintain a balance between interests, but solutions fully owned by users (e.g. built on a decentralized infrastructure) may also emerge.

Article by Freedomlab
