17th February 2026
Artificial intelligence (AI) is increasingly used to make decisions that directly affect people’s lives – from recruitment and credit approval to social benefits and the administration of public services. While these technologies promise efficiency and objectivity, international examples reveal another side: automated decision-making can create discrimination that is sometimes difficult to detect.
The Dutch childcare benefits scandal is considered one of the most painful examples of discriminatory algorithmic decision-making in Europe. In an effort to combat fraud more effectively, the Dutch tax authority used risk-scoring algorithms that automatically assigned certain claimants to a higher-risk category. In practice, this meant that thousands of families were unexpectedly accused of fraud, forced to repay large sums of money, and pushed into debt. It later emerged that the algorithm disproportionately affected people with dual citizenship and migrant families.
Other cases show that the problem is not limited to public authorities. In the United States, Amazon developed a recruitment algorithm that, trained on historical data, “learned” that men were more suitable candidates. The system automatically downgraded CVs that contained words associated with women.
Invisible discrimination
The Equal Opportunities Ombudsperson of Lithuania, Birutė Sabatauskaitė, stresses that these different cases share one particularly dangerous feature – the invisibility of discrimination.
“This is one of the most shocking aspects of these cases. People not only did not understand why they were being treated this way, but often did not even know that decisions had been made automatically and that some aspect of their identity – gender, origin, social status, or other characteristics – had influenced the outcome,” she says.
People suddenly faced extremely serious consequences – demands to repay large sums of money and years of debt. According to the Ombudsperson, this fundamentally changes a person’s relationship with the state or an institution.
Why is algorithmic discrimination often more dangerous than human bias?
Although algorithms are created by humans, their decisions are often perceived as more neutral and objective. However, B. Sabatauskaitė points out that algorithms are still designed by people and therefore cannot be fully separated from human assumptions and biases.
Once a system starts operating, the direct human relationship often disappears.
“In a live job interview, you can see the employer’s reaction, ask questions, or notice unfair behavior. But if you receive a rejection before the interview even takes place, you may not know that your CV was filtered out by an algorithm,” she explains.
Among the greatest threats identified by the Equal Opportunities Ombudsperson are scale and blind trust.
“One system can affect thousands. A person sees only the outcome of the decision, not the process. Moreover, people tend to place too much trust in AI decisions and do not consider that the system may be wrong,” she emphasizes.
Who is affected the most?
Algorithmic systems often hit hardest those who are already in vulnerable situations.
For example, people who struggle to complete official documents – due to language barriers or unfamiliar administrative systems – often find themselves in a weaker position. This is especially relevant for migrants, refugees, and students residing in the country temporarily.
“International examples show that even minor errors, such as imprecise answers or grammatical mistakes, can be interpreted by algorithmic systems as risk indicators. This creates a vicious circle that reinforces existing social inequality,” explains B. Sabatauskaitė.
Who is responsible for harm when an algorithm makes the decision?
In Lithuania, AI is already being applied in the public sector, often without a clear strategy or sufficient expertise. An audit by the National Audit Office revealed that automated decision-making systems are used in policing, public transport planning, and healthcare, while also highlighting a lack of knowledge and skills.
Nevertheless, both existing and newly developed AI systems must comply with the principle of equality, remain impartial, and avoid discrimination. Ultimately, humans are still responsible for final decisions, and those decisions can be challenged.
Human rights expert Eitvydas Zurba emphasizes that algorithms do not change the fundamental principle of responsibility.
“In most cases, the institution or company that made the decision will be held accountable, regardless of the tools used in the decision-making process,” he explains.
According to the expert, the phrase “the system decided” has no legal meaning. “It is merely a convenient excuse and, in some cases, may even be interpreted as an admission of fault.”
E. Zurba notes that individuals today have important rights – everyone has the right to request that a real person review the decision, to receive an explanation of the logic behind it, and to have the possibility to appeal.
The European Union’s Artificial Intelligence Act (AI Act), adopted in 2024 and with most of its provisions set to apply from this summer, establishes rules on the use of artificial intelligence and strengthens human rights protections in this area.
“The AI Act provides that a person must be informed when they are interacting with an AI system. This helps ensure awareness and clarity. Secondly, EU law grants everyone the right to request human intervention – which is perhaps the most important safeguard,” he says.
The AI Act: more safeguards, but not a miracle cure
Although the EU AI Act sets high requirements for so-called high-risk systems, E. Zurba warns that not all dangers will disappear.
“Privacy remains the greatest challenge – not just the data you provide, but what an algorithm can infer. From your social media activity, location history, and shopping basket, AI can very accurately predict your political views, sexual orientation, health status (for example, depression or pregnancy), or financial situation,” he says.
Moreover, E. Zurba points out that the Act does not apply in the field of national security. This means that states may attempt to justify certain AI systems – for example, those used at borders to manage migration flows – under this exemption and thereby avoid some human rights safeguards.
“It is difficult to imagine the consequences and what the application of such technologies would mean in a real armed conflict without greater human oversight,” he reflects.
Both experts agree: the key lesson for Lithuania is not to rush into purchasing systems whose decision-making processes cannot be explained. It is essential to ensure that systems are developed and operate fairly. Furthermore, without critically thinking users and properly trained staff, even the best technologies can become a risk.

This article is part of the project “EquiTech – improving response to risks of discrimination, bias and intolerance in automated decision-making systems to promote equality”, funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. The European Union cannot be held responsible for them.