Guardians of digital civil rights
Author
Barbara Werdmuller
Published
30 October 2023
Reading time
10 minutes
Bits of Freedom, Amnesty International and the Racism and Technology Center are committed to our digital civil rights. In the Masterclass Netpolitics, these organizations talked about their work. On the basis of three recent cases, I explain their approach and the successes they have achieved. Read how they fight against unchecked surveillance by the secret services, against xenophobic machines at the Tax and Customs Administration, and against biased facial recognition software in education.
This is part 4 of a series of articles on the Masterclass Netpolitics 2022.
Power and counterpower in the digital domain - Bits of Freedom
Bits of Freedom has been committed to an open and just information society since 1999. The civil rights organization influences policy, laws and regulations through advocacy, campaigns and legal action. In a free society, power and counterpower must be in balance. Intelligence agencies have far-reaching powers that can have a major impact on digital civil rights. Bits of Freedom therefore keeps a close eye on the work of the secret services and their supervisors.
Surveillance
Secret services operate behind the scenes, so oversight rests almost entirely on the shoulders of their supervisors. That oversight is under pressure. Due to a lack of binding powers, the Review Committee on the Intelligence and Security Services (CTIVD) currently cannot intervene itself when necessary, e.g. when the services illegally store or use data. In addition, the services are being given more leeway to combat cyber attacks. The Intelligence and Security Services Act (Wiv), which entered into force in 2018, stipulated that tapping must take place under close supervision: in advance, during and afterwards. The Review Committee on the Use of Powers (TIB) currently checks in advance whether a deployment is necessary, proportionate to the goal and as targeted as possible. In the proposed Cyber Act, in the name of speed, this prior review is exchanged for 'dynamic supervision' during and after deployment. The services would then no longer have to request approval for data collection in large-scale tapping operations. Digital human rights such as the right to privacy, freedom of communication and freedom of expression are at stake.
Use of Bits of Freedom
Bits of Freedom uses various means to create awareness about the growing power of the secret services and to stimulate countervailing power:
- Campaigns. In 2017, Bits of Freedom organised a campaign around the consultative referendum on the Wiv, also known as the 'Sleepwet' (dragnet act). A majority of voters rejected the law: 49.66% voted against and 46.53% in favour. This result forced the government to reconsider the law.
- Influencing policy. Bits of Freedom regularly talks to policymakers about what can or should be improved. For example, the organisation examined the bill for the Cyber Act in detail and shared critical notes and advice in a letter to the ministers involved and by participating in a round table in the House of Representatives.
- Participation in public debate. Employees give interviews and write opinion pieces about the risks of the Cyber Act and the importance of strict supervision, such as 'Intelligence services should be under strict supervision'.
- Submitting complaints. Because the regulator cannot intervene itself, Bits of Freedom filed a class action complaint on behalf of all citizens in 2022 against the illegal storage and use of bulk datasets by the secret services. The complaint was upheld: the AIVD and MIVD must delete, as soon as possible, the data of the millions of citizens that was collected in bulk as bycatch.
Embracing regulators
In the masterclass, Bits of Freedom explained that its working method rests on three pillars: enforcement through technology, facilitation of citizens and embrace of regulators. In practice, the power of regulators is often limited and they cannot exercise their role effectively. For example, in the four years after the introduction of the GDPR, the Dutch Data Protection Authority struggled with a budget shortage. As a result, it had insufficient capacity to enforce, and data protection officers (DPOs) in the municipalities could not function properly. Bits of Freedom's criticism of the erosion of oversight often receives support from experts in the field. According to a former supervisor who resigned, the bill for the Cyber Act is 'almost irresponsible' and unnecessary, because 'almost everything that people say they want to achieve with this temporary law can also be achieved under the current law, but with adequate supervision'. Another former secret service supervisor also warns that fundamental rights are at stake.
Human rights and technology - Amnesty International
One of Amnesty International's themes is the protection of human rights in the digital age. The Dutch branch of the organization takes a critical look at the use of technology by national and local government. The thorny issue is that organizations such as the Tax and Customs Administration, municipalities and the police use risk profiles, algorithms and big data to assess risks and detect fraud, which can result in discrimination and privacy violations.
Xenophobic machines
In the report 'Xenophobic machines', Amnesty International explains how unregulated use of algorithms led to abuses in the childcare benefits scandal. In 2013, the Tax and Customs Administration introduced an algorithmic decision-making system for detecting incorrect applications for childcare allowance. One of the risk indicators used was nationality: Dutch nationality yes/no. Applicants with a non-Dutch nationality received a higher risk score, which led to differential treatment on the basis of ethnicity and to discrimination. The algorithmic system was also a so-called black box: its input and operation were not visible to users or to other parties such as citizens. Moreover, a self-learning algorithm allowed the system to change the way it worked on its own, without intervention by a programmer. This stood in the way of transparency, so the Tax and Customs Administration could not adequately account for its policies and decisions, even though the government is legally obliged to do so.
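To make this mechanism concrete, here is a minimal, hypothetical Python sketch of a weighted-sum risk score in which nationality is one indicator. It is emphatically not the Tax and Customs Administration's actual system, which was never disclosed; the fields and weights below are invented for illustration.

```python
# Hypothetical, simplified risk model -- NOT the real (undisclosed) system.
# It only illustrates how a single indicator can skew outcomes.
from dataclasses import dataclass

@dataclass
class Application:
    income_deviation: float   # 0..1: mismatch with registered income
    incomplete_forms: float   # 0..1: share of missing paperwork
    dutch_nationality: bool   # the contested indicator

# Invented weights; the real weights were never made public.
WEIGHTS = {"income_deviation": 0.5, "incomplete_forms": 0.3, "non_dutch": 0.2}

def risk_score(app: Application) -> float:
    score = (WEIGHTS["income_deviation"] * app.income_deviation
             + WEIGHTS["incomplete_forms"] * app.incomplete_forms)
    if not app.dutch_nationality:
        # Two otherwise identical files diverge on this single line.
        score += WEIGHTS["non_dutch"]
    return score

# Two applications that differ only in nationality:
a = Application(0.1, 0.1, dutch_nationality=True)
b = Application(0.1, 0.1, dutch_nationality=False)
print(f"{risk_score(a):.2f}")  # 0.08
print(f"{risk_score(b):.2f}")  # 0.28 -> flagged for fraud review sooner
```

Because the real system was closed and self-learning, even a one-line source of unequal treatment like this could not be inspected from the outside.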
Amnesty International's commitment
For a number of years, Amnesty International has been pressing for the following measures on the use of big data and algorithms, through lobbying, legal advice and lawsuits:
- Prohibition of the use of ethnicity and nationality as an indicator in risk profiles used for law enforcement. The government has not yet taken this measure, but an important step in this direction is the ban on ethnic profiling by the Royal Netherlands Marechaussee.
- Prohibition of the use of autonomous and self-learning algorithms in the performance of government tasks that have a major impact on people and society. A report by the Digital Affairs Committee shows that there is still a lot of uncertainty about the application of these types of risky algorithms.
- Transparency about the use of algorithms, by combating black boxes and by developing a public register. According to a study by the Netherlands Court of Audit, there are no black boxes in central government: most algorithms are simple and their operation can always be traced. The government is nevertheless advised to lay down quality requirements and agreements on the use of algorithms and to monitor them continuously. There is now a public algorithm register, but according to some experts it cannot deliver the intended transparency because participation is too noncommittal.
- Establishment of an independent algorithm supervisor. Algorithm supervision has been entrusted to the Dutch Data Protection Authority as of 1 January 2023.
- Introduction of a binding human rights test before the use of algorithms and automated decision-making. The Ministry of the Interior and Kingdom Relations has developed an Impact Assessment for Human Rights and Algorithms (IAMA); the House of Representatives passed a motion in 2022 that makes the application of this test mandatory.
Necessary evil
In the masterclass there was a lot of discussion about the use of algorithms. A critical view of the phenomenon is important, but the fact remains that the government can no longer do without them. According to the Netherlands Court of Audit, algorithms allow the government to carry out millions of actions per month, solving problems or making predictions. Manual handling or review is not only far less efficient, but also increases the chance of human error and inconsistent policy. Furthermore, Amnesty International's presentation shows that responsible deployment strongly depends on the context. A factor such as nationality or ethnicity is not necessarily out of the question: it can actually add value in screening for hereditary diseases. Conversely, a factor such as postcode area may seem neutral, but in practice it can lead to overrepresentation of one population group and unequal treatment; the sketch after this paragraph illustrates that mechanism. In short, a complex issue that requires a lot of knowledge and insight.
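The postcode effect can be made tangible with a small, entirely hypothetical simulation: a selection rule that never mentions ethnicity can still single out one group, because postcode and group membership are correlated. The city, the percentages and the rule below are all invented.

```python
# Hypothetical simulation of a proxy variable: an invented city where group
# membership is unevenly spread over two postcode areas.
import random

random.seed(42)

def sample_resident():
    # In area "A", 80% of residents belong to a minority group; in "B", 10%.
    postcode = random.choice(["A", "B"])
    minority = random.random() < (0.8 if postcode == "A" else 0.1)
    return postcode, minority

population = [sample_resident() for _ in range(10_000)]

# A rule that never mentions ethnicity: select everyone in postcode area "A".
selected = [minority for postcode, minority in population if postcode == "A"]

print(f"Minority share among selected residents: "
      f"{sum(selected) / len(selected):.0%}")
print(f"Minority share in the whole city:        "
      f"{sum(m for _, m in population) / len(population):.0%}")
# ~80% vs ~45%: the 'neutral' postcode rule hits one group far more often.
```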
Technology as a mirror - Racism and Technology Center
The Racism and Technology Center wants to make visible how racism in Dutch society manifests itself in technology. The aim is to 'dismantle systems of oppression and injustice'. The knowledge centre is active on various issues that are often high on Amnesty International's agenda: data-driven law enforcement, algorithmic bias, Big Tech. A hot topic that has been in the news a lot recently is the impact of facial recognition software.
Software bias
When you think of abuses of facial recognition software, you might immediately think of China's infamous surveillance system, but there are also cases closer to home.
During the corona pandemic, a Dutch student at VU Amsterdam had to install proctoring software (anti-cheating software) in order to take exams remotely. The software uses face detection and did not detect her unless she continuously shone a bright lamp on her face. The student suspected that this was due to her dark skin color: fellow students with a light skin color did not have to do this.
It has long been known that facial recognition software performs worse for people of color. A study by Joy Buolamwini shows that the darker the skin, the worse the software performs. An algorithm in facial recognition software produces output based on input and is trained with examples of faces. If that training set contains few photos of people of color, the software will recognize those faces less well. If you want to know more about the subject, the documentary Coded Bias is a must.
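The mechanism is easy to demonstrate. The toy experiment below is far simpler than real face detection and all its numbers are invented: it 'trains' a detector on a photo set that is 99% light-skinned and then measures how often faces of each group are detected.

```python
# Toy illustration of training-data bias in face detection; all numbers are
# invented and real detectors are far more complex. The detector learns the
# 'typical' brightness range of a face from its training photos.
import random

random.seed(1)

def face_brightness(skin_tone):
    # Synthetic image brightness (0 = dark, 1 = light) per skin tone.
    mean = {"light": 0.75, "dark": 0.35}[skin_tone]
    return min(1.0, max(0.0, random.gauss(mean, 0.08)))

# Imbalanced training set: 99% light-skinned, 1% dark-skinned examples.
train = sorted([face_brightness("light") for _ in range(990)] +
               [face_brightness("dark") for _ in range(10)])

# The detector accepts anything inside the middle 95% of training brightness.
lo, hi = train[int(0.025 * len(train))], train[int(0.975 * len(train))]

for tone in ("light", "dark"):
    tests = [face_brightness(tone) for _ in range(1000)]
    rate = sum(lo <= b <= hi for b in tests) / len(tests)
    print(f"{tone}: detected {rate:.0%} of faces")
# Typical output: light faces are almost always detected, dark faces are
# mostly missed -- unless extra light (the student's lamp) pushes their
# brightness into the learned range.
```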
Commitment of Racism and Technology Center
The Racism and Technology Center draws attention to racist technology in several ways:
- Knowledge sharing through a newsletter, presentations/lectures and a knowledge base. In the broadcast 'How do we tame the big 5?', an employee talks about her research into online exclusion (based on e.g. race or gender) by large digital platforms. The public online knowledge base contains international articles on technology and racism, clustered into themes such as AI, algorithmic bias, and facial recognition and biometrics. You can search by topics (digital rights, ethics, etc.), sectors (education, healthcare, etc.) and companies involved (Google, Microsoft, etc.).
- Influencing public opinion in the form of interviews and opinion pieces. In a column in Het Parool, employees of the centre discuss the use of racist surveillance software by educational institutions such as the VU. This led to parliamentary questions and a response from Minister of Education Ingrid van Engelshoven that large-scale use of online proctoring should be avoided.
- Supporting individuals and activist organisations in their fight against racism. Following the VU Amsterdam case described above, the centre and the student concerned filed a complaint in 2022 with the Netherlands Institute for Human Rights (College voor de Rechten van de Mens). In an interim ruling, the institute concluded that algorithmic discrimination probably occurred. A unique ruling, because never before had anyone been able to make this form of discrimination plausible. The ball is now in VU Amsterdam's court to prove its defence that the system makes no distinction.
A good mirror but not a solution
In the masterclass, speakers from the Racism and Technology Center emphasized that there are a few widespread misunderstandings about technology and racism. First, the assumption that technology is neutral. In practice, people of flesh and blood determine the input and functioning of technology: the choice of factors in an algorithm, or how facial recognition software works. You can therefore think of a design as 'applied ethics'. Second, the idea that technology can only be racist if it is designed with bad intent. Unfortunately, the use of technology can also lead to discrimination inadvertently, e.g. through a blind spot in the designers' perspective or through unrepresentative training data. Finally, the misconception that improving the technology will solve the problems. Racism is fundamentally a social problem that cannot be combated with (only) technological interventions.
Reflection
What does all this have to do with our work as designers? All three cases are examples of (proposals for) policy or supporting tooling in which public values such as the right to equal treatment or privacy are insufficiently safeguarded. It is commendable that these organisations keep their finger firmly on the pulse and take action in the event of abuses. But prevention is better than cure. The recently developed human rights impact assessment is a practical tool. The government could also benefit greatly from applying design principles in the development and deployment of software and policy. Think of principles that we have been using for a long time, such as value-driven design, early and sustained involvement of user groups, investment in a diverse and inclusive design team, and validation of solutions or policy proposals among various user groups. The government has to work within legal frameworks and often has to make difficult trade-offs between conflicting values such as public safety and privacy. But at the end of the day, the government is of and for the citizens. Design thinking can help the government, amid its many challenges, to develop methodical policies and tools that put the interests of citizens at the centre.
About the author

Barbara Werdmuller
Content designer