24.10.2025

Weronika Szachniewicz for Rzeczpospolita: “AI in business: who is responsible for the mistakes of artificial intelligence?”

AI algorithms are increasingly making decisions that affect our health, safety, and careers. Artificial intelligence, although undoubtedly useful, is not a perfect creation and, like humans, makes mistakes. In the absence of clear regulations covering such situations, the question of who should be held responsible naturally arises. Associate Weronika Szachniewicz writes on this topic in Rzeczpospolita, analyzing cases that may help find an answer to this question.

The dynamic development of AI and its legal consequences

Artificial intelligence and algorithms responsible for automation are revolutionizing various sectors of the economy — from healthcare, where the AI solutions market grew by around 45% between 2022 and 2023 (reaching USD 22.4 billion), to the automotive industry, where autonomous taxis are becoming everyday life in the US and China.

Learning systems support imaging diagnostics, tailor treatments to individual patient characteristics, and also make recruitment decisions. However, with growing algorithmic autonomy, a fundamental legal challenge arises — the question of responsibility for AI system errors.

Landmark court cases define the limits of liability

Judicial practice around the world is gradually establishing the framework for legal responsibility for AI actions.

The Pieces Technologies case in Texas demonstrated that healthcare AI tool providers can be held responsible for misleading marketing claims about algorithm accuracy. Meanwhile, a Miami court ruling ordering Tesla to pay USD 243 million in damages for a fatal autonomous vehicle accident showed that manufacturers bear responsibility for the decisions made by their systems.

Equally significant is the case Mobley v. Workday, where a California court recognized an AI-based recruitment tool provider as an “agent” of the employer, potentially liable for candidate discrimination. In the case of a fatal Uber autonomous car accident in Arizona, criminal liability was imposed on the human vehicle operator who failed to monitor the road. These rulings create a set of precedents that, in the absence of coherent regulations, shape the practical limits of AI responsibility.

Due diligence as the key to risk minimization

To limit the legal and financial risks associated with using algorithms, organizations implementing AI should treat responsibility as an integral part of the product and process. Broadly understood due diligence is key: a systematic approach that includes transparent documentation of system limitations, comprehensive testing and validation before deployment, continuous monitoring of algorithm performance, and mechanisms for rapid error correction. Equally important are user training on AI limitations and contractual safeguards that clearly define the conditions of liability.

Who bears legal responsibility when an AI algorithm makes an incorrect decision affecting people’s health or lives? Which landmark court cases are currently shaping the framework of responsibility for AI systems? How can companies minimize legal risk when implementing AI-based solutions? Weronika Szachniewicz answers these and many other questions in her article published in Rzeczpospolita.

Read the article in Polish!
