Role of AI in Mass Surveillance of Uyghurs
Joseph Shiovitz, Graduate Research Assistant, MPA Candidate in Public Policy, John Jay College
Artificial intelligence can serve the world in many ways. Like all technologies, it serves the purposes set for it by its users, however opaque those users may be. As the functional uses of AI multiply, what constitutes a legitimate use of the technology remains contested, especially when it serves government purposes. At the same time, the internet has already reshaped society so extensively that our personal data is obtained and stored across large databases, often without our knowledge or permission, and without our understanding of who can access it. This increasing datafication of individuals, combined with the growing utilization of artificial intelligence, is laying the groundwork for new forms of surveillance.
Since 2017, human rights groups and media outlets have published a series of reports on the mass surveillance and mass detention of Uyghur Muslims and other minority populations in China’s northwestern territory of Xinjiang, known formally as the Xinjiang Uyghur Autonomous Region (XUAR). The region shares a border with eight countries, including Russia, India, and Pakistan; as its formal name suggests, Uyghurs have historically comprised a significant portion of the population. XUAR’s first recorded census, in 1953, indicated that more than 75% of the population was Uyghur, while only 7% belonged to China’s dominant ethnic group (and the world’s largest), the Han Chinese. Today, the population is more balanced, consisting of 45% Uyghurs and 42% Han Chinese.
There has been widespread international pressure for the Office of the High Commissioner for Human Rights (OHCHR) at the United Nations (UN) to investigate the claims and release an official report. The Covid-19 pandemic, however, delayed the High Commissioner’s visit to China. The visit, the first by the OHCHR to China since 2005, finally took place in May 2022.
On August 31, 2022, the OHCHR published its report: “Assessment of human rights concerns in the Xinjiang Uyghur Autonomous Region.” The 46-page assessment states (section 148), “The extent of arbitrary and discriminatory detention of members of Uyghur and other predominantly Muslim groups, pursuant to law and policy, in context of restrictions and deprivation more generally of fundamental rights enjoyed individually and collectively, may constitute international crimes, in particular crimes against humanity.”
How artificial intelligence enabled mass surveillance by the state
The mass surveillance conducted by Chinese authorities throughout Xinjiang drew international scrutiny for its enormous scale and complexity, enhanced by an advanced integration of artificial intelligence that set it apart from other cases of state surveillance around the world. The OHCHR report also states (section 98) that its findings “suggest key elements of a consistent pattern of invasive electronic surveillance that can be, and are, directed at the Uyghur and other predominantly Muslim populations, whereby certain behaviours, such as downloading of Islamic religious materials or communicating with people abroad, can be automatically monitored and flagged to law enforcement as possible signs of “extremism” requiring police follow-up, including potential referral to a VETC facility or other detention facilities.”
Some groups, including PBS, the Electronic Frontier Foundation, and the Center for Strategic and International Studies, have suggested China is using Xinjiang as a testing ground for its facial recognition software and alert systems. This software, which can be covertly positioned in traffic lights and elsewhere, collects biometric data in real time as people walk through the streets and go about their day. It tracks people by facial features, voice samples, iris scans, and other types of data, then links to a comprehensive database to quickly identify a person by name, address, and other state records. This surveillance has been used to restrict the free movement of people, particularly Muslims, by confining them to their homes or to specific parts of an area, such as a neighborhood. Under the Universal Declaration of Human Rights, such restrictions violate the rights to liberty (Article 3), freedom of movement (Article 13), and work (Article 23), among others.
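The identification step described above, matching a live biometric sample against a database of stored records, can be sketched as a nearest-neighbor search over embedding vectors. The following is a minimal illustrative sketch only: the embeddings, record IDs, and similarity threshold are all fabricated assumptions, not details of any real system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical database mapping record IDs to stored face embeddings.
# A deployed system would hold millions of records plus identity metadata
# (name, address, state records) keyed to each embedding.
DATABASE = {
    "record_1": [1.0, 0.0, 0.0],
    "record_2": [0.0, 1.0, 0.0],
}

def identify(probe, db, threshold=0.95):
    """Return the ID of the closest stored embedding, or None if no
    record is similar enough to count as a match."""
    best_id, best_score = None, -1.0
    for record_id, embedding in db.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id if best_score >= threshold else None

# A probe embedding close to record_1's stored vector matches it.
print(identify([0.98, 0.05, 0.0], DATABASE))  # record_1
```

Once a probe matches a stored record, the associated identity metadata can be retrieved instantly, which is what makes real-time street-level identification feasible at scale.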
The use of AI has not been limited to external forms of surveillance like security cameras. Uyghurs and other groups have also been forced to download apps to their phones that help law enforcement monitor online behavior. For example, the apps search text messages and internet searches for mentions of Quran verses or donations to a mosque; even staying off social media can raise suspicion. Such acts can result in indefinite detention at a detention center.
AI alerts trigger law enforcement investigations
Human Rights Watch (HRW) was able to discover and “reverse engineer” one of the apps that police in Xinjiang use for surveillance, called IJOP, or the Integrated Joint Operations Platform. The app’s functions include geolocation and mapping, information searches using personal data, facial recognition, and wifi detection. HRW’s findings indicate that a wide collection of data, ranging from DNA samples to the color of a vehicle, has been used to set up alerts for people found to exhibit behaviors such as spending time abroad, having certain types of content on their phones, using large amounts of electricity, or being foreign nationals.
Once the app’s algorithm identifies such behavior, it triggers an alert to law enforcement prompting an investigation. HRW published mock examples of such prompts found in the app’s source code. Here is one example provided by HRW:
Report text: Suspicious person Maimaiti Muhemuti, who originally lives in Xinjiang’s Urumqi, ID number 653222198502043215, phone number 13803021458.
Report time: 2017–09–25 14:01:53
[Mission] text: Please carefully investigate whether he still lives in Urumqi and investigate his family situation.
These types of investigations can once again result in the indefinite holding of a person at a detention center or “political education camp.”
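Functionally, the alert mechanism HRW describes resembles a simple rule engine: each collected attribute is checked against a list of trigger conditions, and any match generates a report for police follow-up. Below is a minimal sketch in that spirit; every field name, rule, and threshold is a hypothetical illustration, not taken from IJOP’s actual source code.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    # Hypothetical attributes, loosely mirroring the behavior categories
    # HRW reported: foreign travel, phone content, electricity use,
    # and foreign nationality.
    traveled_abroad: bool = False
    flagged_phone_content: bool = False
    monthly_kwh: float = 0.0
    foreign_national: bool = False

# Each rule pairs a predicate over the record with an alert label.
RULES = [
    (lambda p: p.traveled_abroad, "spent time abroad"),
    (lambda p: p.flagged_phone_content, "flagged content on phone"),
    (lambda p: p.monthly_kwh > 500, "unusually high electricity use"),
    (lambda p: p.foreign_national, "foreign national"),
]

def generate_alerts(person: PersonRecord) -> list[str]:
    """Return the label of every rule the record matches; in the system
    HRW describes, each match would prompt a police investigation."""
    return [label for predicate, label in RULES if predicate(person)]

record = PersonRecord(traveled_abroad=True, monthly_kwh=620.0)
print(generate_alerts(record))
# ['spent time abroad', 'unusually high electricity use']
```

The design point worth noting is that such rules encode ordinary, lawful behavior as suspicious by fiat: the “algorithm” carries no judgment beyond the trigger list its operators choose.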
Wider implications of using AI for mass surveillance
As surveillance capabilities increase, so do the purposes they can be made to serve, as the following references indicate:
· The New York Times reported “It is the first known example of a government intentionally using artificial intelligence for racial profiling…”
· Human Rights Watch warned, “The goal is apparently to identify patterns of, and predict, the everyday life and resistance of its population, and, ultimately, to engineer and control reality.”
· According to The Brookings Institution, “China’s campaign against its Uighur minority gives an indication of how it may use AI surveillance technologies in other systems. For example, consider how a similar system might be built to identify American servicemembers or U.S. government officials.”
· The policy think tank Research ICT Africa stated, “The use of facial recognition technology is proliferating at a rapid rate, faster than measures ensuring that people’s rights are protected in its deployment,” and that “it is evident that AI surveillance is an emerging issue in the region.”
· The UN High Commissioner for Human Rights said, “The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”
These expanding purposes further blur the distinction between individual rights and individual data, and raise concerns about accountability for the corporate actors supplying governments with artificial intelligence technology.
To learn more about digital citizenship and the connection between AI and human rights, read the Center for International Human Rights’ white paper here.