CIVIL LIABILITY FOR THE USE OF DEEPFAKES IN THE MANIPULATION OF THE IMAGE AND VOICE OF PUBLIC PERSONS: PROTECTION OF HONOR AND DIGNITY UNDER BRAZILIAN CIVIL LAW
DOI: https://doi.org/10.51891/rease.v11i12.22918

Keywords: Deepfakes. Civil Liability. Public Figures.

Abstract
This scientific article analyzes civil liability for the use of deepfakes in the manipulation of the image and voice of public figures within the scope of Brazilian Civil Law. Deepfakes, defined as advanced artificial intelligence tools that manipulate images and voices in an extremely convincing manner, represent the "new technological stage of disinformation" and an intricate challenge for the legal system. The study demonstrates that the malicious dissemination of deepfakes constitutes a direct attack on personality rights, harming honor, image, and dignity. The conduct qualifies as an unlawful act (art. 186, Civil Code) and may be intentional or negligent. Given the complexity and viral nature of the content, the establishment of the causal link is facilitated by the theory of adequate causation, leading to joint and several liability among the agents involved (art. 942, Civil Code). In terms of reparation, deepfake offenses generate presumed moral damages (in re ipsa), under STJ Precedent 403, which may be awarded cumulatively with patrimonial damages, such as lost profits, a point especially relevant for public figures. The work identifies significant regulatory gaps that demand the creation of specific legal frameworks. It suggests the application of strict liability (risk theory) to developers of commercial deepfake systems, the strengthening of injunctive relief, and the consolidation of case law through specific STJ/STF precedents. It concludes that the protection of human dignity in the digital age requires an integrated legal approach that reconciles innovation with the safeguarding of fundamental rights.