NIST proposes measures to combat deepfakes and increase transparency of digital content

Publication date: 04.12.2024 14:29
The US National Institute of Standards and Technology (NIST) has published a report, “Mitigating the Risks of Synthetic Content,” which presents measures to combat deepfakes and other threats from AI-generated content.

The document covers three key areas: tracking the origin of content, labeling AI-generated data, and combating the creation of child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII).

A central element of the work is transparency about how such content is created: recording where content originated and how it has been modified over time. This provenance data can take the form of metadata or watermarks confirming the source of the material.
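The provenance-and-history idea can be sketched as a hash-chained edit log, where each record commits to the one before it so later tampering is detectable. This is a minimal illustration only; the field names and chaining scheme below are hypothetical and not a NIST or C2PA schema.

```python
import hashlib
import json

def record_edit(history, action, editor):
    """Append an edit record whose hash chains to the previous entry.

    Illustrative provenance metadata: each entry stores the previous
    entry's hash, so rewriting history invalidates later hashes.
    """
    prev_hash = history[-1]["hash"] if history else "0" * 64
    entry = {"action": action, "editor": editor, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    history.append(entry)
    return history

def verify(history):
    """Check that every entry's hash and back-link are intact."""
    prev = "0" * 64
    for entry in history:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

history = []
record_edit(history, "created", "camera-app")
record_edit(history, "cropped", "photo-editor")
print(verify(history))  # True: the recorded history is intact
history[0]["editor"] = "unknown"
print(verify(history))  # False: tampering breaks the hash chain
```

Real provenance systems additionally sign each record, so the chain proves not only integrity but also who made each change.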

Labels and tools for identifying synthetic content are seen as a key way to help society distinguish between authentic and AI-generated content.

NIST focuses on threats ranging from harm to individuals (for example, through deepfakes) to society as a whole, including the spread of disinformation.

Experts emphasize the importance of international standards and of coordination among governments to counter threats from synthetic content.

It is noted that technical methods alone are not enough to solve the problem. Digital literacy of users and educational initiatives are important.

Among the proposed solutions are digital watermarks, which can be covert or overt, and tracking of a content item's change history.
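To make the covert-watermark idea concrete, here is a minimal least-significant-bit (LSB) sketch over toy grayscale pixel values: each watermark bit replaces a pixel's lowest bit, changing its value by at most 1. This is an assumption-laden illustration of the general technique, not a scheme from the report; production watermarks must survive compression and editing, which plain LSB embedding does not.

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel.

    Each pixel changes by at most 1, so the mark is visually covert.
    """
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the bit
    return bytes(out)

def extract_watermark(pixels, n):
    """Read back the first n embedded bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]

image = bytes([200, 201, 202, 203, 204, 205, 206, 207])  # toy pixels
mark = [1, 0, 1, 1, 0, 1, 0, 0]
stamped = embed_watermark(image, mark)
print(extract_watermark(stamped, 8))  # [1, 0, 1, 1, 0, 1, 0, 0]
```

An overt watermark, by contrast, would be a visible overlay; the report treats both kinds as complementary labeling tools.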

The report emphasizes the importance of transparency in digital content. However, implementing the technologies NIST proposes will take time: many are still at the research stage, and mass adoption is unlikely for several years.


(the text translation was done automatically)