Photo: Pexels / LGKT collage
25th February 2026
Living in a Sci-Fi Scenario: The Real Risks of Social Scoring

Imagine a world in which algorithms decide — based on your behaviour, financial debts, or even your lifestyle — whether you are allowed to travel, access public services, or exercise certain rights. It may sound like a science fiction scenario, yet similar practices have already emerged in China, and Europe has chosen to ban such systems before they can become a reality.

A human rights expert explains why social scoring is considered one of the most dangerous practices in a democratic society — and what could happen if there were no safeguards against it.

When discussing social scoring, the example of China is often mentioned. While there is no single universal score determining every aspect of a person’s life, certain practices in China allow data about individuals — most commonly related to debts, court decisions, or other administrative violations — to result in restrictions in various areas of the public sector.

It is this example that Rūta Juodelytė, an expert at the Office of the Equal Opportunities Ombudsperson of Lithuania, draws on when speaking not about the actual situation in Lithuania, but about a hypothetical scenario: what would happen if social scoring were applied in a democratic state?

“The problem arises when such mechanisms are applied broadly and automatically to all citizens, without regard to individual life context,” says Juodelytė.

What Is Social Scoring?

Social scoring is the practice of collecting and processing various types of data about a person in order to evaluate them and, on that basis, grant or restrict access to services, opportunities, or rights.

According to Juodelytė, the core problem lies in reducing a human being to a line in a database:

“In essence, social scoring can be described as an inhumane approach to people. It reduces a person to data and fails to take into account their individual life context.”

China as a Warning Example

Referring to real-world examples, Juodelytė mentions China — not as a model to replicate, but as a warning of what can happen when the state begins to systematically evaluate its citizens.

“I have heard of cases in China where individuals with outstanding debts were restricted from using intercity transport or other services,” she explains.

However, Juodelytė stresses that social scoring should not be equated with a bank’s credit rating.

“A credit score assesses a specific risk — whether a person will be able to repay a loan. Social scoring goes beyond these limits: it can evaluate how a person lives or whether they comply with certain social norms,” the human rights expert explains.

In her view, this would amount to classifying lifestyles as ‘appropriate’ or ‘inappropriate’. It is precisely here, she argues, that a dangerous line emerges between legitimate regulation and social control.

What If Such a Scenario Were Applied in Lithuania?

In Lithuania, as in other democratic countries, social scoring is not practised. However, if it were introduced, Juodelytė believes that most people might not even notice at first.

“The state already holds a vast amount of data about individuals — their movements, payments, and service usage. The problem begins when a decision is made to combine this data and use it against the person,” she says.

According to the expert, systems based on aggregated data can easily become discriminatory, even if they appear formally neutral.

“Social scoring could entrench systemic discrimination — especially against groups that may already face barriers, such as ethnic minorities. Historical injustices would not be reduced but further deepened,” Juodelytė explains.

It is important to stress that such practices do not exist in Lithuania. However, this is precisely why the discussion is necessary — to understand why social scoring is considered unacceptable and why the European Union banned it in its Artificial Intelligence Act.

Amnesty International: Algorithms Can Reinforce Inequality

The risks of using artificial intelligence in the public sector have also been highlighted by international human rights organisations. Amnesty International has repeatedly warned that algorithmic systems used to make decisions about people can reinforce discrimination and violate fundamental rights.

The organisation emphasises that automated decision-making in areas such as social protection, security, or public services is often opaque, and individuals have no real opportunity to understand why certain restrictions were imposed on them. It also stresses that such systems frequently have a disproportionate impact on socially vulnerable groups.

These insights align with Juodelytė’s concerns about the dangers of social scoring in democratic societies.

How Is Europe Responding?

The European Union’s Artificial Intelligence Act, adopted in 2024, explicitly designates social scoring as a prohibited practice due to its unpredictable and potentially irreversible impact; the ban on such practices has applied since February 2025.

“These practices were banned to draw a clear line — they are not acceptable in democratic societies and contradict the values of the European Union,” Juodelytė says.

She emphasises that humanity is inseparable from the ability to make mistakes and to be understood. Social scoring denies this premise by turning life into constant behavioural monitoring and evaluation.

“It would be like a permanent punishment — small or large, but constant. Every action would be measured, compared, and assessed, and you would no longer be able to make even minor mistakes,” the expert explains.

According to her, it is precisely this logic of systemic, continuous punishment that poses the greatest threat. Social scoring may be efficient for the state, but for individuals it would mean dehumanisation — living permanently under a magnifying glass of evaluation, without a second chance.

For this reason, social scoring was clearly identified as a prohibited practice in the European Union’s Artificial Intelligence Act.

In a democracy, Juodelytė concludes, individuals must always have the opportunity to change — and systems must leave room for human error. “We do not have the right to act as if we were gods,” she stresses.

This article is part of the project “EquiTech – improving response to risks of discrimination, bias and intolerance in automated decision-making systems to promote equality”, funded by the European Union. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union. The European Union cannot be held responsible for them.