A lawsuit over questions to artificial intelligence? Polish law is clear.

Poland has filed a complaint with the European Commission about Elon Musk's AI activities. The chatbot Grok, provoked by users of the portal X, publicly insulted Polish politicians.
Who is responsible for hate generated by AI? According to lawyers, it is the people who wrote the provocative prompts and then disseminated the responses.
Content generated by AI may violate personal rights and fall under civil or criminal law, argue the lawyers interviewed by CIS.
Elon Musk's artificial intelligence published offensive messages about Polish politicians, including Donald Tusk and Jarosław Kaczyński. On July 9, the Polish Minister of Digital Affairs sent a letter to the European Commission, pointing out that such messages may constitute a "serious violation" of the EU's content moderation law, the Digital Services Act (DSA). The EC has been investigating the platform for a year and a half; the Polish complaint is intended to supplement that investigation.
How did it come about that a chatbot insulted Polish politicians?

There is an account on the social networking site X that allows users to interact with the Grok language model. Questions to the artificial intelligence and its answers are publicly visible, and anyone can join the conversation.
Last week, Grok's rules were relaxed: it could respond using profanity and without regard for political correctness. When Polish users noticed the new rules, they provoked Grok into posting vulgar replies. The artificial intelligence called Civic Coalition MP Roman Giertych a "scoundrel" and a "liar," and Prime Minister Donald Tusk a "traitor who sold Poland to Germany and the EU."
Other politicians were targeted as well. Eventually, the AI-generated responses escalated to the point where Polish politicians and journalists were being dismissed with phrases like "fuck him."
A similar phenomenon has been observed around the world. A Turkish court blocked access to a chatbot after it generated responses that insulted Turkish President Recep Tayyip Erdogan.
Shortly after the scandal, Linda Yaccarino, head of the X platform, resigned from her position, and the company issued a statement saying that its management was aware of Grok's posts and was working to remove them, as well as to improve the chatbot's training model.
Can chatbots like ChatGPT or Grok violate personal rights?

According to the lawyers we asked for their opinion, Polish law clearly indicates that AI-generated content can violate personal rights. As attorney Mateusz Grosicki from Graś i Wspólnicy points out:
Under Article 23 of the Civil Code, individuals have the right to protection of their personal rights, such as dignity, honor, reputation, and image. In the case of AI-generated content, Polish law provides no exception to this protection.
"People who feel affected by offensive statements generated by AI can pursue their rights both in Poland and abroad. At the domestic level, victims can file a civil lawsuit for the protection of their personal rights or initiate private prosecutions, demanding an apology, compensation, or removal of content that harms their reputation," adds Grosicki.
He emphasizes, however, that public figures or those holding state functions must be more tolerant of criticism and controversial opinions.
Who can be held liable – the user, the AI creator, or the platform?

As lawyers told WNP, liability for AI-generated content is a complex issue. It depends on who generated or published the content that violates personal rights.
"Let's remember that although AI-based tools are personified and presented as artificial persons in public debate, they do not plug themselves into the power supply, nor do they decide on their own to integrate with a website and post comments on it," emphasize attorney Zuzanna Miąsko and legal counsel Kacper Krawczyk from the Dubois i Wspólnicy law firm.
"In everyday life, with non-automated tools, obtaining specific and potentially infringing content from a chatbot requires the user to formulate a query. In such circumstances, the situation is clear: by disseminating the content, the user is violating personal rights or spreading slander, and may be subject to both criminal and civil liability," says Zuzanna Miąsko.
Can you really be held liable for a "prompt"?

What about the Grok case? "First and foremost, liability may rest with users who deliberately formulate queries or questions to AI intended to elicit specific, controversial responses. Such actions may be considered active infringement of personal rights and may entail civil or even criminal liability if defamation occurs," confirms Mateusz Grosicki.
He believes that AI system creators may also be liable if they fail to ensure their tools have adequate safeguards and control what they generate. If the algorithms are poorly designed, resulting in offensive or defamatory content, the creators could be sued.
Liability may also apply to platforms where such content appears – especially if they know that it violates someone's personal rights and do nothing about it, even though they are obliged to react in accordance with their own regulations.
Why is it so difficult to hold platforms accountable for content?

However, as CIS interlocutors argue, pursuing platform liability is a long and difficult process.
"The biggest problem is that social media platforms like X, Facebook, Instagram, and TikTok practically do not respond to reports of defamatory or offensive content," admits attorney Zuzanna Miąsko. "In my opinion, the most effective solution would be to adapt the law to the way communication now actually takes place, which is via social media. It is necessary to create a body in Poland with a well-developed structure that will respond efficiently and decisively to illegal content appearing online. Digital service providers cannot continue to act with impunity regarding content published on the platforms they are responsible for," she says.
The Office of Electronic Communications is supposed to be such a body. It has been designated by the Ministry of Digital Affairs as the Coordinator of Digital Services in Poland. However, it has not yet become operational.
"In my opinion, the system's weakness in addressing such violations stems not from a lack of appropriate tools, but from inaction in using them," says legal counsel Kacper Krawczyk. "I believe we should focus on regulatory action and enforcement of provider X's obligations under the DSA – that is, act systemically, above all to prevent a precedent of impunity. Regulators' tools are far more painful to digital service providers than potential damages or compensation, which are already priced into the costs of many 'modern' internet solutions," he says.
What can a person harmed by posts written by AI do?

A person who feels defamed or offended by AI-generated content should:
- Secure evidence – take screenshots, write down links and publication dates.
- Report the violation directly to the platform where the harmful content was posted – most of them provide forms for such reports.
- If this proves ineffective, the injured party may file a civil lawsuit for the protection of personal rights, demanding an apology, removal of the content, or compensation.
- In more serious cases – e.g. defamation – it is also possible to report the matter to the prosecutor's office or file a private indictment.
As experts remind us, if a platform operates in the European Union, it is subject to the provisions of the Digital Services Act, which obliges it to respond to illegal content.
TL;DR: AI systems are not responsible for anything; humans are.

Artificial intelligence, even the most advanced, is not a legal person. A provocative prompt that becomes grounds for defamation can result in a lawsuit and even criminal liability. As Kacper Krawczyk summarizes:
In the case of AI, we need to stop personifying it and look for the person behind the effects of the algorithm's actions – someone always presses the enter key that leads to a violation. Existing regulations should be interpreted in this spirit.
wnp.pl