Technology and Society
Negotiating a New Social Contract for Digital Data
July 30, 2015
Tracy Mitrano asks “Is the 4th Amendment dead in cyberspace?” She argues that metadata and the content it is attached to can no longer be treated as legally separate in the U.S. I concur, and agree with her that the FISA court, created after the COINTELPRO hearings to give intelligence services some judicial oversight, needs to be thoroughly reformed if not abolished. Secret opinions made using secret legal rationales in a secret courtroom are not judicial oversight as I understand it.
But my concerns don’t stop there. The Bill of Rights primarily speaks to limiting the power of the state. We also need to think about regulating uses of our data by non-state actors. The capacity to generate, collect, and mine data has grown quickly, well beyond our capacity to negotiate a new social contract to govern these new realities. Arriving at a new social contract is complicated for a number of reasons, including:
- the internet is international. Our concepts of where privacy belongs in the law are culturally situated, highly political, and geographically bounded. As an example, Google has just told a French regulatory authority that it will not apply the E.U. rule governing the “right to be forgotten” on its U.S. site. Google argues that if it had to apply local laws globally, it would have to remove massive numbers of links to content that offends one government or another. I see their point, but in an interconnected world, it makes things messy.
- the internet is porous. Security is extremely difficult to maintain, and the more data flows across the internet, the more opportunities there are to intercept it. This situation isn’t helped by the deliberate actions taken by states (including our own government) to undermine security in order to exploit backdoors and vulnerabilities.
- data is not neutral. How we gather and use data depends on human beings. Likewise, algorithms are not sui generis. They are created by people with agendas, biases, and blind spots. Further, the effect big data has on individuals and communities is also not evenly distributed.
- the ways data is collected and used are very often trade secrets in a black box, and they change without our knowledge or consent.
- the companies that gather data don’t always have tight control over what happens to it. They may go out of business and have the data they collected sold as an asset to pay creditors. They may be hacked. They may have rogue API users who don’t abide by the terms of service, as the consumer genetic service 23andMe learned recently when a coder, using its data, created a program that could screen people seeking to use a web service for specific qualities, such as being of “pure” enough European ancestry.
- when anonymized data from multiple sources is combined, it stops being anonymous. Earlier this year, researchers found that they could identify individuals from as few as four pieces of anonymized shopping information 90 percent of the time.
- data that is inaccurate can be hard to correct, and those errors have consequences.
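The re-identification point above is easy to see in miniature. Here is a minimal sketch, using made-up purchase records and a hypothetical `uniqueness` helper (not the researchers’ actual method or data): even with names stripped, combining a few innocuous attributes can make every record unique, and a unique record can be linked back to a person by anyone holding a second dataset with the same attributes.

```python
# Illustrative sketch with fabricated data: why "anonymized" records stop
# being anonymous once a few quasi-identifiers are combined.
from collections import Counter

# An "anonymized" purchase log: names removed, but each record still
# carries a handful of innocuous attributes (quasi-identifiers).
records = [
    {"zip": "14850", "store": "grocery", "day": "Mon", "amount_band": "$20-40"},
    {"zip": "14850", "store": "grocery", "day": "Mon", "amount_band": "$40-60"},
    {"zip": "14850", "store": "cafe",    "day": "Tue", "amount_band": "$0-20"},
    {"zip": "14853", "store": "cafe",    "day": "Tue", "amount_band": "$0-20"},
]

def uniqueness(records, keys):
    """Fraction of records whose combination of the given attributes is unique."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(records)

# One attribute alone leaves most records ambiguous...
print(uniqueness(records, ["zip"]))                               # 0.25
# ...but combining all four pins down every single record.
print(uniqueness(records, ["zip", "store", "day", "amount_band"]))  # 1.0
```

The toy numbers are nothing like the real study’s scale, but the mechanism is the same: each added attribute multiplies the number of possible combinations, so very few attributes are needed before almost everyone’s combination is one of a kind.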
There is a lot of good that can come from using large data sets, but we need to figure out who gets to decide which uses are beneficial.
Human geographer Rob Kitchin of Maynooth University recently wrote a thought-provoking article about data and how “smart cities” could use it. It’s heady stuff, but he acknowledges the work that needs to be done on the social contract side. I think his caveat applies to all uses of big data:
[T]he transformations taking place are fast-paced and often too little debated or contested in the mainstream media and legislature, with disruptive technical and social innovations taking root and expanding rapidly before we have time to digest the implications or consider the need for oversight. Such thinking though is needed if we are to reap the benefits of big data and smart cities, rather than the negative consequences. How to gain the former and avoid the latter has to be worth pondering every time we interact with a digital device or traverse a city leaving a trail of data in our wake. The alternative is that smart cities are created that represent the interests of a select group of corporations, technocrats, and certain groups within society (particularly political elites and the wealthy), rather than producing ones that are in the best interests of all citizens.
How are we going to make those decisions? Here are some suggestions from Bruce Schneier:
- less secrecy, more transparency, and better oversight of government uses of data.
- regulate data collection and use by corporations and give people rights to their own data.
- educate ourselves, advocate for change, and don’t give up.
Admittedly, these fixes won’t be easy, but there’s a lot at stake.