OLG ruling: Platforms must now detect deepfake videos themselves!


The Higher Regional Court (OLG) Frankfurt am Main issued a landmark decision on March 4, 2025, concerning how social media platforms must deal with infringing content such as deepfake videos. Specifically, the court ruled that platform operators like Meta must not only react to notices of infringing content, but also actively search for content with identical meaning. This primarily concerns content that is identical in statement and meaning despite a different presentation. The decision goes back to a case in which Dr. Eckart von Hirschhausen became the victim of manipulated videos that falsely presented him as endorsing a weight-loss product. The OLG found that it is not enough to delete only the reported post; the platforms must also act on their own initiative to remove similar but not directly reported content in order to counter the spread of false information.

The proceedings began in July 2024, when Dr. von Hirschhausen, with legal support, applied for the deletion of a specific video. After analyzing the situation, the court found that Meta should also have acted against another, almost identical video containing the same statements. In November 2024, Dr. von Hirschhausen extended his application, and the Higher Regional Court partially ruled in his favor by prohibiting the distribution of the identical video. This trend-setting judgment makes clear that host providers are obliged to take measures to actively combat the distribution of false information and to better protect those affected.

Obligation to actively search for equivalent content

The legal requirements formulated by the OLG in its decision clearly state that, once they have received a specific notice of infringing content, platform operators must search for content with identical meaning. Such content can consist of identical text or images that differ only in presentation (for example, a different resolution or added filters). A crucial point here is that a separate notice is not required for every similar video as long as it is almost identical. This represents progress in the fight against deepfakes, especially given that fakes can influence how people think and act in public and thus pose a significant danger to the formation of public opinion.

The decision also has consequences for the legal responsibility of the platforms. They must now ensure that they have the technical means to detect identical content and content with identical meaning. These legal developments take place against the background that the use of deepfakes has become increasingly significant in various areas, from political influence to pornography. Yet the legal protection against such content is still inadequate, which poses a challenge for platforms and those affected alike.
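The court does not prescribe any particular technique for this. One common building block for finding near-duplicates that survive re-encoding, resizing, or light filters is perceptual hashing. The following Python sketch is a hypothetical, minimal illustration of a difference hash (dHash) on video frames; the file names are placeholders, and real platform fingerprinting systems are considerably more robust.

    # Minimal sketch of near-duplicate detection with a difference hash (dHash).
    # Hypothetical illustration only; file names are placeholders.
    from PIL import Image

    def dhash(path, hash_size=8):
        """Downscale a frame to grayscale and encode left/right brightness gradients as bits."""
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        width = hash_size + 1
        return [
            1 if pixels[row * width + col] < pixels[row * width + col + 1] else 0
            for row in range(hash_size)
            for col in range(hash_size)
        ]

    def hamming(a, b):
        """Count the bits in which two hashes differ."""
        return sum(x != y for x, y in zip(a, b))

    # Frames that differ only in resolution, compression, or mild filters tend to
    # stay within a small Hamming distance of the reported original.
    if hamming(dhash("reported_frame.jpg"), dhash("candidate_frame.jpg")) <= 10:
        print("Near-duplicate frame: flag for review")

The threshold of 10 bits is an assumed example value; in practice it would have to be tuned to balance missed copies against false matches.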

Effects on those affected

The consequences of this decision are far-reaching: in the future, those affected will no longer have to actively search for infringing content themselves. The platforms now carry greater responsibility and can no longer rely solely on explicit reports. Legal support can be crucial to enforcing these rights effectively. In the case of Dr. von Hirschhausen, the reported video was removed after a notice, but the almost identical video was only taken down after a second notice. This kind of legal dispute shows how important it is that platforms live up to their obligations and react more quickly to notices in order to remove such false information.

Overall, the judgment of the OLG Frankfurt am Main shows the need for further legal clarification in dealing with deepfakes and other forms of disinformation. The European Commission has already presented a draft regulation on artificial intelligence and deepfakes, which shows that the legal framework for this technology is not yet fully established. The future will show how platforms implement these new requirements and protect the rights of those affected.
