As computerization has spread in recent years, we encounter chatbots more and more often: when we browse the Internet, handle a matter in online banking or contact a customer service center. We often don’t realize that these and many other popular solutions are based on artificial intelligence (AI for short). They can significantly improve how an organization functions, but they are not free of imperfections.
ChatGPT will tell you how to commit a criminal act
Recently, there has been a lot of buzz about a popular chatbot (to be precise, ChatGPT, a solution that uses the GPT-3.5 language model and advanced deep machine learning techniques in its conversations with users) which, in response to a suitably formulated question, told a user how to commit a crime. The incident has revived an already heated debate about the ethics of AI-based solutions and about the liability (in this case, primarily criminal liability) associated with their operation. Can the mere use of chatbots to plan a criminal act, or, as in the case of ChatGPT, the provision to users of information that makes such an act possible, be considered a crime?
The apparent “impunity” of artificial intelligence
As of today, there are no legal regulations in Poland that relate strictly to artificial intelligence. This does not mean, however, that artificial intelligence systems operate in a legal vacuum: such systems – as well as the liability associated with them – remain subject to the general rules. Although this is not the only possible scenario, under the current legal order, criminal liability connected with the functioning of chatbots should be assessed primarily through the so-called “stage forms” of a crime. Under the Polish Criminal Code, criminal liability attaches not only to the commission or attempted commission of a crime but also, for certain crimes, to preparation for committing one, which may consist, among other things, in gathering information to be used in committing the crime.

Consequently, using a chatbot to obtain information intended to be used later to commit a criminal act may, under certain conditions, itself constitute a crime. Attributing liability for preparation requires that the person acted with the aim of committing the crime, that is, actually wanted to commit it. By contrast, it is not punishable to obtain information useful for committing a crime for other purposes, such as research on crime, crime prevention or simply satisfying the user’s personal curiosity.
A lesson in ethics – who is to be held responsible for the commission of a crime?
So, if collecting information used to commit a crime with the help of AI – as in the case of ChatGPT – can itself constitute a crime, a logical question arises: can providing users with chatbots that enable such conduct also give rise to criminal liability?
Here, too, the answer is not clear-cut. The Criminal Code does, after all, allow liability to be assigned for so-called “aiding and abetting” a crime, understood as facilitating the commission of a criminal act by, for example, providing the perpetrator with advice or information. Liability for aiding and abetting is independent of the liability of the perpetrator, who answers for his own crime. A prerequisite for imputing liability for aiding and abetting, however, is that the helper wanted his actions to contribute to the commission of the crime, or at least accepted that eventuality. Conversely, there is no aiding and abetting where the information obtained was used to commit a crime but this was not encompassed by the intent of the person who provided the assistance. Criminal liability therefore cannot be attributed to a chatbot provider where, for example, the provider made reasonable efforts to prevent the chatbot from being used to commit a crime.
Thus, while even in the absence of specific regulations the provision or use of AI-based solutions (including chatbots) is not legally irrelevant, such activity often seems to escape the existing legal framework. This problem particularly concerns liability related to the operation of AI, which is often difficult to determine because of the complexity of artificial intelligence systems.
A European solution
In view of the many challenges posed by the development of this technology, the European Union is currently working on a draft regulation laying down harmonized rules on artificial intelligence (the Artificial Intelligence Act), together with complementary legislation. The aim of the regulation is to create a comprehensive legal framework for AI and to make the EU a global leader in the development of safe, reliable and ethical artificial intelligence. The broad scope of this legislation will also cover chatbots. Although the planned rules do not regulate criminal liability related to the operation of AI (which remains the domain of national legal orders), they do provide, among other things, for severe fines for providers of AI systems whose solutions fail to meet the required standards.