Legal peculiarities of using artificial intelligence in the media


The use of ChatGPT, an AI-powered chatbot, in the media has already caused several scandals, such as false accusations of sexual harassment and a fake interview with Michael Schumacher. Let us consider what legal claims the use of texts generated by artificial intelligence may give rise to.

First of all, we should note that the chatbot itself is not a subject of law, and a computer program cannot be sued. Liability rests with whoever distributes the text generated by artificial intelligence (and, in theory, with whoever created the chatbot and made it publicly available). In particular, the disseminator of information is the entity that has used ChatGPT and is now distributing the text the chatbot generated at its prompt. If such a generated text is somehow distributed by another person, that person also risks being held liable for its content, because, for example, under the Civil Code of Ukraine, “an individual who disseminates information is obliged to make sure that it is accurate” (part 2 of Article 302).

The possibility, type, and scope of liability for disseminating certain information depend on the jurisdiction, i.e., on the country of residence of the disseminator and of the person whose interests the information affects.

Since the results of ChatGPT’s work are not limited to attributing works to writers who did not write them, and there are already precedents of false accusations of offenses, the defamation aspect of disputes over the chatbot’s output is potentially significant. In general, to win a case seeking to have certain information recognized as untrue, the plaintiff must prove that the information concerns them, defames them, and is inaccurate. If the plaintiff wins, the disseminator of the information may be obliged to remove it and compensate for non-pecuniary damage.

The defendant’s defense strategy in such a case may include proving that the information is accurate, that it was not disseminated by them, or that it does not concern the plaintiff (these defenses have been typical in defamation cases before, so we will not dwell on them). It may also include proving that the information about the plaintiff is not defamatory (even if it is inaccurate) or that it cannot by definition be perceived as trustworthy (we do not consider the fantasies of a five-year-old child or of a person with a mental disability to be reliable, so why should imperfect AI be treated differently?). However, these two strategies offer no protection against a different kind of legal claim: interference with privacy.

If a certain person is mentioned in the text, either directly or indirectly, this affects their interests, even if the mention does not dishonor them (as in the Schumacher case). The right to privacy, in a broad sense, is often formulated as the right to be left alone. This applies, in particular, to texts that do not claim to be authentic. Thus, for example, the Civil Code of Ukraine states: “the use of the name of an individual in literary and other works, except for works of a documentary nature, as a character (actor) is allowed only with his/her consent…” (part 2 of Article 296). In other words, a person’s name may not be used in non-documentary works without their consent. It is very likely that this can also be applied to text generated by artificial intelligence. Why only “very likely”? In the context of Ukrainian law, we say this because the analogy of law would have to be applied (regulating legal relations by the rules governing similar cases where no rule is prescribed for the case at hand, part 1 of Article 8 of the Civil Code), since a text generated by AI is not literally a work (more on this below). In other words, just as in a defamation case, if a text from ChatGPT that mentions a person is disseminated without their consent and without a factual basis for mentioning them in that context, you may lose a court case and be obliged to compensate for moral damages and remove the disseminated information.

In conclusion, let us consider the conceptual aspect of copyright in the context of AI-generated texts. The Law of Ukraine “On Copyright and Related Rights” defines the concept of a work, in particular, through the phrase “original intellectual creation of the author”. With the textual product of AI, there are questions about its originality, its intellectual character (do not be misled by the name “artificial intelligence”), and whether it has an author at all. After all, Ukrainian law always treats an individual (a human being) as the author, a category to which AI does not belong (a person who used AI to create a text is not an author either, because such a text, among other things, is not the result of their intellectual effort and is not a work). In view of this, we refrain from calling AI-generated texts “works,” although the rules of the law on works may apply to them by analogy, as noted above.

There are also widespread fears that AI-generated texts will cause an explosion of “plagiarism,” but here we use that copyright term in quotation marks. The point is not that AI will steal other people’s intellectual property, but that AI-generated texts will be used in essays, research papers, and the like, with the persons submitting those works presenting the generated text as their own. In our opinion, the problem is real, but such abuses should be called not “plagiarism” but manifestations of academic dishonesty or something similar.

Therefore, when distributing artificially generated texts, you should make sure that their content is reliable (or immediately add a clear note that it is fiction). Even if you have complied with this rule, if the text mentions a particular person, you also need to make sure that the information about them is accurate and documented. If it is not, ask the person for permission to distribute the text and do not distribute it without such permission (or remove the identifying details from the text).

Source: Roman Golovenko, Institute of Mass Information
