Towards a Concept of Digital Citizenship: AI and the Universal Declaration of Human Rights

--

We’re presenting part one of our inaugural white paper below. You can download the full white paper in PDF here.

1. Introduction: Citizenship in the Age of Artificial Intelligence

The Universal Declaration of Human Rights (1948) and the subsequent International Covenant on Civil and Political Rights and International Covenant on Economic, Social and Cultural Rights (both in force since 1976) take the individual's right to self-determination in the political, economic, and cultural spheres as a foundational principle. When these instruments were originally drafted, the rise of artificial intelligence and big data was unforeseeable. Nonetheless, United Nations High Commissioner for Human Rights Michelle Bachelet has recently argued that existing law adequately encompasses and addresses the rights of the individual in AI-mediated cyberspace, noting that cyberspace is not “a human rights black hole” and that “the same rights exist online and offline” (Office of the High Commissioner for Human Rights, 2019). This political stance is further advanced by other UN directorates’ embrace of artificial intelligence as a driver of economic and political development in support of the Sustainable Development Goals agenda. However, since the initial adoption of the UDHR and ratification of the ICCPR, two underlying assumptions have radically changed:

  • the first, that human rights law was developed to protect the individual from infringement of rights by a nation-state;
  • and the second, that individuals exercise their rights to self-determination in the physical world.

The UN Special Rapporteur on Extreme Poverty has also brought to light the oft-neglected concept of economic rights in the context of the UDHR by calling into question the adverse impact AI has had on individual self-determination in the economic sphere, such as lack of access to skilled work and increased economic inequality (Alston, 2019).

Indeed, the deployment of AI worldwide has proceeded with almost teleological determinism in the public and private sectors, including the international community, and as the Special Rapporteur noted, “the era of digital governance is upon us” (Alston, 2019). Whilst there has been a flurry of white papers by NGOs and human rights groups since 2016, they have largely addressed the problem of human rights from a general standpoint in the context of data privacy. Current academic and policy discussions rely on the legal assumption that the rights of an individual are distinct from an individual’s data, the latter being deemed a separate question of property rights (Amnesty International, 2019; Ahktar, 2019; Latonero, Big Data Analytics and Human Rights, 2018). However, given the extent to which people’s lives now exist online by way of necessity, the two are becoming increasingly inseparable.

It is our position that international legal frameworks must adapt to this new reality by establishing a universally applicable definition of digital citizenship (the individual existing in both the physical world and cyberspace concurrently) and of the role of corporate actors (not just nation-states, which have remained weak actors in the digital space). Using the works of Hin-Yan Liu (Liu, 2019) and of Carsten Momsen and Caecilia Rennert (Momsen & Rennert, 2020), we provide a critical analysis of specific human rights affected by the rise of artificial intelligence, both in decision-making and in an individual’s exercise of freedom of expression, self-determination, and political and economic rights.

To illustrate, we will address two key focal areas in this white paper with accompanying case studies:

  1. political rights in the free exercise of citizenship (as enshrined in UDHR articles 3, 6, 13 and 15)
  2. political rights to be free of torture, cruel and inhumane treatment, or arbitrary arrest, detention or exile (UDHR articles 5 and 9)

In two subsequent white papers we will explore two additional focal areas: the impact of AI-mediated distributed content delivery on the rights of freedom of expression and free and fair elections (UDHR articles 19, 20, and 21), and economic rights to social security and work (UDHR articles 22 and 23).

Human rights are based on the premise that every single human being, regardless of background or domain (including the online sphere), is entitled to freedoms of personal liberty and assembly, along with protections against oppression and inhumane treatment. In light of these developments, it is clear that the rapid modernization and commercialization of the online sphere must be met with a sound digital human rights framework to ensure that such rights and liberties remain guaranteed for all inhabitants of UN member states. By utilizing a case study approach, we hope to bring a level of specificity to the discussion, both in terms of specific UDHR articles and in terms of the shared role of corporate and governmental actors in preserving and advancing economic and political human rights in all regions of the world. We will conclude with recommendations for further study, as well as recommendations for policymakers to address the impacts of AI more concretely and specifically from a whole-of-society approach with appropriate stakeholder engagement.

2. Can We Extend the Existing Human Rights Framework into Cyberspace?

In her remarks cited above, Michelle Bachelet highlighted that the existing human rights framework under the Universal Declaration and its corollary conventions and treaties serves as a solid “legal foundation on which States and firms can build their responses in the digital age,” including guidance on acceptable behavior (Office of the High Commissioner for Human Rights, 2019). While it is sensible to leverage and expand upon existing processes where necessary, we question whether the existing frameworks and instruments can be fully extended to the digital age, especially given that human rights law is predominantly State-centric. The Toronto Declaration comes closest to achieving Bachelet’s model of leveraging existing international human rights law, from the standpoint of the rights to equality and non-discrimination, by tasking States with serving as the fundamental guarantors of human rights obligations, while corporations exercise due diligence in building equity into the artificial intelligence and machine learning technologies deployed across multiple industry domains (Access Now and Amnesty International, 2018). These models, however, do not fully address the tectonic shifts involved in extending existing international human rights law into cyberspace. The ubiquity of digital platforms in democratic life is not disputed by most within the international community. However, Rikke Frank Jorgensen has noted:

“[what] is less debated is the fact that facilitating this democratic potential critically relies on private actors […] Despite the increasing role that these private actors play in facilitating democratic experience online, the governance of this social infrastructure has largely been left to companies to address through corporate social responsibility frameworks, terms of service, and industry initiatives such as the Global Network Initiative.” (Jorgensen, 2018)

This extended social-corporate relationship that permeates everyday life in the digital age relies not on an independent judiciary or a civil or criminal justice system bound to protect an individual’s rights under the UDHR, but rather on corporate oversight mechanisms that provide neither due process nor transparency and that rest upon norms of corporate social responsibility rather than human rights law. Prima facie evidence suggests that these technology companies are inherently incapable of self-regulating when asked to do so. Such was the case with Google, which recently gutted its own Ethical AI team without sound justification (Fried, 2018). Jorgensen goes on to note that the case law of the European Court of Human Rights (ECtHR) “confirms that States have an obligation to protect individuals against violations by business enterprises,” thus outlining a potential conflict of interest within human rights law wherein governments and state actors are also customers of the same corporations through procurement contracts (Jorgensen, 2018). Momsen and Rennert have determined that utilizing digital platforms to create predictive policing systems for American and German police forces presents several conflicts of interest wherein the citizen is no longer confronted with the state (as victim or as a perpetrator facing justice), but rather with a “non-transparent mixture of state authority and private factual or contractual power” (Momsen & Rennert, 2020).

Hin-Yan Liu points to another disruptive aspect of digital technologies, namely that “from a legally doctrinal point of view, there must be a direct causal connection between the actions or omissions of a State or its agents and the concrete enumerated right possessed by an individual for the human rights system itself to become engaged” (Liu, 2019). Drawing on ECtHR case law, Liu also notes that the concept of a “victim” requires causation and agency to access the justice system. However, the digital age is played out on the internet, which does not have the same boundaries of jurisdiction, sovereignty, or even place as the physical world. The internet, in Liu’s alternative perspective, occupies three loci simultaneously: the system, the network, and the distributor and dissipator (Liu, 2019). Consequently, existing frameworks of causality and agency do not function analogously. Particularly problematic from a human rights standpoint is a dynamic system composed of artificial intelligence and algorithms rendering systemic, rather than man-made, failures. This becomes even more problematic when a system’s underlying infrastructure “prejudices towards infringements against human rights or makes their occurrence more probable” (Liu, 2019). Given artificial intelligence’s distributed architecture across multiple data centers, databases, big data stores, and the cloud, the machine learning algorithms that make sense of this data and provide predictive models do so at multiple instances and nodes, each with the potential to introduce biased datasets or biased judgments into the algorithmic logic.

Similarly, the network effects of the internet can result in an accumulation of small wrongs (such as racially biased data used in facial recognition systems deployed by police forces), amplified via artificial intelligence, even though each individual infraction may not in and of itself rise to the threshold of severity required by courts or tribunals (Liu, 2019). This ‘death by a thousand cuts’ phenomenon is further compounded by the complexity of code, servers, algorithms, and databases, which can manifest an entire system of additional bias, decision-making, and independent action. That these are decentralized and subject to change by corporate architects and designers underscores the internet’s role as distributor and dissipator, which further blurs causation and exacerbates power differentials between the individual and the alleged human rights violator (Liu, 2019). While the UN Human Rights Council appears to address this on some level in its Guiding Principles on Business and Human Rights, that instrument nonetheless rests on a dualistic framework: on the one hand, States are obligated to protect and fulfill human rights, while on the other, business enterprises are required to act as specialized organs of society that respect human rights under the guidance of States, particularly in conflict zones (Office of the High Commissioner for Human Rights, 2011). In this model, the State is the principal guarantor of its citizens’ human rights, while businesses are primarily responsible for monitoring, due diligence, risk mitigation, compliance, and reporting to States, in whose hands remediation ultimately rests.

What is problematic about this vertical integration of State and corporate agency and remediation is that not all human rights violations caused by digital businesses will fit the causality and agency model required by human rights law; the very nature of digital business as system, network, and distributor and dissipator constitutes a horizontal integration of decision-making and amplification of potential harm that does not fit state-based grievance mechanisms. Expecting corporations and the internet to self-govern and self-regulate against potential human rights abuses is, as Jorgensen noted above, a conflict of interest, and one that does not adequately address the “death by a thousand cuts” that multinational distributed networks can inflict across multiple victims. As it stands, the full impact of human rights infringements and violations resulting from AI systems is still unknown (Latonero, Governing Artificial Intelligence: Upholding Human Rights & Dignity, 2018), especially as discrimination (from an American civil rights perspective) can be an “artifact of the data mining process itself, rather than a result of programmers assigning certain factors inappropriate weight,” thus disadvantaging protected classes in ways that are harder both to enumerate and to remediate (Barocas & Selbst, 2016). This is particularly problematic in the context of predictive policing (not only in the West, but also in China, as we shall see below), since it strips potential defendants of the presumption of innocence enshrined in UDHR article 11, and even undermines remedies, which have been based largely on intent, something nigh impossible to determine through networked algorithms (Momsen & Rennert, 2020).

We agree with the assertions of Momsen and Rennert and others that a fundamental re-envisioning of the existing human rights framework is needed to address a concept of digital citizenship. COVID-19 public health measures across the world have put additional pressure on democratic rights in an era of rapid digitization, as entire lives are now lived online due to quarantine orders. In the context of education and youth participation in civil and political life during the COVID-19 pandemic, Buchholz et al. (2020) have noted that this has resulted in a shift from “digital literacy” to a more fundamental and expansive re-envisioning of “digital citizenship” from an experiential point of view, one that will extend post-COVID-19. Being a digital citizen “requires individuals to confront complex ideas about the enactment of identities online as citizens who collectively work for equity and change” in a democratic society (Buchholz, DeHart, & Moorman, 2020). Digital citizenship also differs fundamentally from traditional notions of citizenship in that it is performative and defined through actions, “rather than by their formal status of belonging to a nation-state and the rights and responsibilities that come with it” (Hintz, Dencik, & Wahl-Jorgensen, 2017). In the next section, we will examine two case studies of how “digital citizenship” is performed on the global stage, as examples of how to re-conceptualize this concept against the backdrop of traditional human rights law.

Thank you for reading part one of our inaugural white paper. To continue reading the full white paper and references, please download the PDF here.

--

Center for International Human Rights

A research center at John Jay College focused on a critical examination of long-standing and emerging issues on the human rights agenda.