
Technology and Human Rights: New UN Resolution

Publication date: 08.04.2025 13:28

On April 4, 2025, the UN Human Rights Council adopted a resolution to protect users and human rights defenders from digital threats. The initiative, led by Norway and supported by over 50 countries, reflects a consensus: the digital environment requires clear rules. The resolution covers risks ranging from AI and mass biometric data collection to internet shutdowns and spyware, placing responsibility on both states and companies.

The UN Council demands legality, justification, and proportionality in technological intrusions into privacy. Special attention is given to opaque algorithms and AI used for facial recognition, prediction, and image generation.

Mass biometric data collection without oversight is deemed particularly dangerous. The resolution requires states and companies to ensure transparency in technology development and regular risk assessments.

For the first time at this level, the issue of internet shutdowns is raised, especially in the context of elections, protests, and emergencies. Such measures violate freedom of expression and obstruct human rights and journalistic work. The Council urges countries to refrain from shutdowns.

The necessity of civil society participation in digital regulation is emphasized. As technology permeates daily life, inclusivity and human rights become key to sustainable digital development.

The resolution mandates the Office of the UN High Commissioner for Human Rights to conduct seminars and prepare a report on digital persecution. The document serves not only as a political declaration but also as a practical guide.

The organization Article 19 called the resolution timely and significant. According to human rights defenders, communication shutdowns and digital surveillance using AI and biometrics have become a global problem. The UN, they say, has clearly stated: technology should protect freedoms, not destroy them.

Article 19 also noted that protecting rights is an international obligation, not a voluntary gesture.

Meanwhile, debates continue in the EU over the draft Code of Practice for AI model developers. Although the AI Act has come into force, the Code's third draft has faced sharp criticism. Human rights defenders, including Article 19, believe that moving risk assessment into a non-binding annex contradicts the spirit of the law.

In their view, rights protection cannot be optional. Attempting to make it a business decision element reduces developer responsibility and undermines transparency principles. AI risks are already tangible—from algorithmic bias to intimate content leaks.

Critics insist that developers of foundation AI models must assess threats. Excluding this obligation creates loopholes for evading responsibility. They demand a return to clear, binding provisions that reflect the international approach to digital rights.

Thus, the UN resolution and the draft EU Code of Practice both underscore the importance of a firm stance on digital rights. Amid global technological transformation, compromises that contradict human rights are unacceptable.


(the text translation was done automatically)