Artificial intelligence and consumer rights: legal responsibility for algorithmic decisions in the Polish and EU regulatory context
Anna Maria Wierzchowska-Dziawgo
Warsaw School of Economics, 162 Niepodległości Ave., 02-554 Warsaw, Poland
E-mail: am.wierzchowska@gmail.com
ORCID: 0000-0002-9281-4479
DOI: 10.2478/minib-2025-0006
Abstract:
This article examines whether Polish and European Union legal frameworks, supported by institutional oversight, provide consumers with sufficient protection against the adverse consequences of decisions made by artificial intelligence (AI) systems, and whether legal gaps persist in this area. The study aims to identify and assess these gaps and to formulate recommendations for strengthening consumer safeguards in the age of algorithmic decision-making. A qualitative descriptive analysis was applied to selected Polish and international legal acts and scholarly literature, including the Act on Competition and Consumer Protection of 16 February 2007 and Regulation (EU) 2024/1689 of the European Parliament and of the Council (the AI Act), along with expert opinions from Polish legal scholars. The findings indicate that while existing Polish and EU provisions, reinforced by institutional supervision, afford consumers a degree of protection, this coverage does not extend to all potential risks associated with AI use in consumer markets. Significant legal gaps remain, and the development of new laws that keep pace with the rapid evolution of AI poses a substantial legislative challenge. As a result, fully eliminating these gaps in the near future may prove difficult, if not impossible.
MINIB, 2025, Vol. 56, Issue 2, P. 1-24
1. Introduction
The dynamic deployment of solutions based on Artificial Intelligence (AI) across global digital markets has opened up a new stage in the evolution of trade and consumer behavior (UNCTAD, 2024, p. 3). Contemporary purchasing decisions are increasingly being shaped by digital tools, often promoted as employing “Artificial Intelligence” to optimize the decision-making process from the consumer’s perspective (Paterson, 2022, p. 558).
Following Warszycki (2019, p. 115), AI may be understood as “a field of science encompassing disciplines, methods, tools, and techniques aimed at creating and developing a complete computer program that accurately reflects the model of human functioning and the human mind.” It has become an integral part of the modern consumer market, applied in both front-office processes (interfacing with consumers, clients, and supervisory bodies) and back-office processes (supporting the internal functioning of companies and institutions) (Keller et al., 2024, p. 417).
In consumer-facing applications, AI systems recommend products inferred from users’ preferences and histories, perform automated credit assessments, and provide customer support via virtual assistants (chatbots), among other functions (Myszakowska-Kaczała, 2024). On the operational side, companies are increasingly using AI-based analytics to understand consumer behavior, optimize pricing strategies, and improve supply chain management (GlobeNewswire, 2025).
Although the use of AI in customer service is often considered a hallmark of modern technological implementation, Artificial Intelligence itself is not a twenty-first-century innovation. Most technology historians trace the origins of the concept to the work of the British mathematician and cryptanalyst Alan Turing, who formulated its theoretical foundations in 1950 (Accenture, 2024, p. 8). Nevertheless, the dynamic development of AI was not widely recognized until 2011, when global technology companies such as Google, Facebook, Microsoft, and IBM began using it for business purposes (Ness et al., 2024, p. 1064).
From the perspective of the Polish AI landscape, 2023 marked a turning point, with 88% of respondents declaring familiarity with the term sztuczna inteligencja (“artificial intelligence”) – with this figure rising to 96% among individuals aged 18 to 24 (Digital Poland, 2023, p. 57). It is also notable that the jury of the Polish Language Council declared this term the Polish “Word of the Year” in 2023 (Kruszyńska, 2024).
This coincided with the rapid rise of ChatGPT, an AI-based application that achieved unprecedented global recognition. Between late 2022 and early 2023, the platform attracted approximately 100 million users (mp/dap, TVN24.pl, 2023). The scale and pace of its user growth may position ChatGPT as the fastest-growing consumer-facing web application to date (The Guardian, 2023). Its widespread adoption spurred the creation of numerous derivative solutions tailored to the needs of specific industries, including the banking sector (Capgemini, 2024, p. 44).
In 2025, the global AI market was valued at USD 757.58 billion, with forecasts projecting growth to approximately USD 3,680.4 billion by 2034 (Precedence Research, 2025). Within the global banking sector alone, AI is estimated to generate up to USD 1 trillion in additional value annually (Biswas et al., 2020, pp. 2–3).
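These forecasts imply a compound annual growth rate that can be verified directly. The following minimal sketch assumes the 2025 and 2034 figures cited above and nine compounding years between them:

```python
# Sanity check of the growth rate implied by the market forecast above.
# Assumes nine compounding years between the 2025 valuation and the 2034 projection.
start_usd_bn = 757.58   # 2025 market size (Precedence Research)
end_usd_bn = 3680.4     # 2034 forecast
years = 9

cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~19.2% per year
```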
The expanding use of AI in consumer services brings not only financial gains but also a range of other benefits – from mitigating risks associated with human error and improving service accessibility, to process automation that enhances efficiency and speeds up customer service. However, the adoption of AI-based tools by market entities also introduces new risks for consumers. The decision-making processes of AI algorithms may be opaque or difficult for the average client to comprehend (Ahn et al., 2024), which can hinder their ability to assess whether a system is operating correctly.
The opacity of AI systems, combined with their capacity to exploit biases and generate unintended side effects, has intensified debates on the need for responsible governance of AI technologies (Cheong, 2024, p. 2). A key challenge, therefore, lies in guaranteeing the effective protection of consumer rights when decisions affecting individuals are made by algorithms, as well as in determining which parties bear responsibility in cases of algorithmic error or misuse, whether unintentional or deliberate.
This article seeks to address the following research question: Do Polish and EU legal acts, together with institutional oversight, provide consumers with adequate protection against the negative consequences of decisions made by AI systems, and are there legal gaps in this area? The approach taken is descriptive and analytical, based on selected legal acts (including the Act on Competition and Consumer Protection and the AI Act), relevant academic literature, and selected legal opinions. These sources form the basis for further, more detailed research on the topic.
The choice of a qualitative descriptive analysis stems from its suitability for examining phenomena within their real-world context – in this case, the institutional and regulatory environment. Its purpose is to capture ongoing processes, identify the actors involved, and situate them within their operational conditions. While serving as a starting point for more advanced analyses, this approach itself constitutes a valuable and independent methodological framework (Sandelowski, 2000, p. 339). It involves the following stages (Villamin et al., 2024, pp. 51–91):
- defining the research objective (application-oriented),
- determining the research method (descriptive analysis),
- establishing the theoretical framework (accountability for algorithmic decisions in the context of legal frameworks and institutional oversight),
- selecting the research sample (domestic and international literature, legal provisions, and opinions of Polish legal scholars),
- collecting data (reviewing available sources),
- analyzing data (evaluating sources in light of the research objective), and
- presenting the research findings.
The outcomes of this analysis are threefold: (i) a presentation of the current regulatory framework governing responsibility for AI-mediated decisions affecting consumers; (ii) the identification of potential gaps within the existing system of consumer protection; and (iii) the formulation of recommendations aimed at addressing these gaps in the Polish legal system, alongside proposals for new regulatory measures to strengthen consumer safeguards against the adverse consequences of AI-driven decision-making.
2. The use of AI – benefits and risks
Artificial Intelligence is now being applied across nearly all areas of human activity. It is already assisting the work of both teachers and students, including in schools and even in early childhood education (Iron Mountain, 2025). AI can automatically perform tasks such as grading tests and homework assignments or generating reports on student progress (Stecyk, 2025). Higher education institutions are also increasingly utilizing AI algorithms to enhance the efficiency of administrative and academic work. One example is the use of autonomous AI agents that assist in creating professional academic presentations based on an outline (Stecyk, 2025). AI can likewise improve communication processes within universities – for example, through the implementation of “intelligent” dean’s offices or automated student admissions systems. A student wishing to access publicly available university knowledge and documentation in real time needs only one condition to be met: access to the Internet (KALASOFT, n.d.).
It should be emphasized that in the context of higher education, where the student may be regarded as a client or consumer of educational services (Sojkin et al., 2012, pp. 565, 567), the use of Artificial Intelligence entails risks analogous to those observed in other sectors of digital services, particularly regarding data protection, algorithmic transparency, and the right to reliable information. Theoretically, information generated by software based on AI algorithms should be factually accurate. In practice, however, AI systems may rely on unreliable or outdated sources, creating a risk that users receive incorrect or misleading information.
Another risk associated with the use of Artificial Intelligence in higher education concerns the protection of student data collected by institutions employing AI tools, as well as the potential dehumanization of the educational process – where human interaction is diminished and the lecturer’s role shifts away from that of a mentor, becoming instead a mere supervisor of AI-driven systems (Kornaś, 2024).
An argument in favor of limiting the use of Artificial Intelligence in education is that decisions made without human intervention may result in the absence of a clearly identifiable responsible entity, as well as a lack of transparency regarding how such decisions are made (PARP Grupa PFR, 2023, p. 29). Insufficient oversight of these processes may, in turn, result in different types of misuse or abuse, potentially harming the interests of those affected (Iron Mountain, 2025). Table 1 presents examples of AI applications in the consumer market, along with their associated potential benefits and risks.
Table 1. Examples of AI applications in the consumer market, with their associated potential benefits and risks.
The examples of Artificial Intelligence applications presented in Table 1 illustrate the dual nature of AI’s impact on the consumer market. On the one hand, algorithms can enhance convenience, accessibility, and service efficiency, reduce operating costs, and minimize human error. On the other, AI-related risks include a lack of transparency in decision-making processes, potential discrimination, and incorrect decisions that may result in harm to the consumer. AI-powered tools may not only pose a threat to customer privacy but also increase the risk of consumers falling victim to deceptive or unfair market practices or even financial exclusion in cases where an insurer, on the basis of an AI-generated analysis, determines that a given consumer represents too great a risk of potential payout (BEUC, 2021, p. 35).
An example of potential gender-based discrimination by an AI algorithm was the 2019 case in the United States involving the credit limit determination process for the Apple Card, issued jointly by Apple and Goldman Sachs. Customers observed that the algorithm responsible for assigning credit limits granted significantly higher limits to men than to women in comparable financial situations. One customer reported that his credit limit was 20 times higher than his wife’s, even though they shared joint marital property and, in his view, her credit history was even better than his. Following the publication of this report, other couples came forward with similar examples, suggesting that the algorithm favored men. The case attracted the attention of the New York Department of Financial Services, which launched an investigation to determine whether anti-discrimination laws had been violated (The Guardian, 2019); the investigation ultimately concluded that customers had not been discriminated against on the basis of gender (Campbell, 2021).
The Apple Card case demonstrated, however, that a lack of algorithmic transparency can lead to public controversy. Customers did not receive a clear explanation of why the decisions varied so significantly between genders. Unable to understand the automated decision-making process, some users perceived the differences in credit limits as gender discrimination, even though closer scrutiny showed that no such discrimination had actually occurred. A positive takeaway from this example is that regulators are prepared to intervene, treating the use of AI like any other credit procedure subject to the law.
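The kind of scrutiny applied in the Apple Card case can be illustrated with a simple group-disparity check on anonymized decision logs. The sketch below is not the regulator’s actual methodology: the field names, sample data, and the 0.8 threshold (borrowed from the US “four-fifths” rule of thumb) are assumptions for illustration only.

```python
# Illustrative group-disparity check an auditor might run on anonymized
# credit-limit decisions. Sample data and threshold are assumptions.
from statistics import median

decisions = [
    {"gender": "F", "credit_limit": 4_000},
    {"gender": "M", "credit_limit": 20_000},
    {"gender": "F", "credit_limit": 6_000},
    {"gender": "M", "credit_limit": 18_000},
]

def median_limit(group: str) -> float:
    return median(d["credit_limit"] for d in decisions if d["gender"] == group)

# A ratio far from 1.0 does not prove discrimination (applicants may differ
# on lawful factors such as income or credit history), but it flags cases
# for the kind of human review the New York regulator performed.
ratio = median_limit("F") / median_limit("M")
print(f"F/M median credit-limit ratio: {ratio:.2f}")
if ratio < 0.8:  # echoes the "four-fifths" rule of thumb; an assumption here
    print("Disparity flagged for manual investigation")
```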
It should be noted, however, that despite incidents raising concerns about the impartiality of AI-based solutions, there is also evidence suggesting that consumers perceive such systems as more objective than human-driven processes. The rationale in this context is the perceived absence of bias and emotions in AI decision-making (Nogueira et al., 2025, p. 2).
Another type of potential incident involving AI tools is a chatbot incorrectly dismissing a consumer complaint. This might occur, for example, if the chatbot misinterprets an image submitted by the customer and wrongly concludes that a product defect was caused by user error. Another possible example, negative from the customer’s perspective, would involve the misclassification of a complaint into the wrong category. In both cases, one of the possible consequences is the expiration of the statutory 14-day period for responding to a consumer complaint, which, under Polish consumer law, results in the complaint being deemed accepted by default (Polish Consumer Rights Act, Article 7a). A consumer’s lack of awareness of, or failure to invoke, this provision could leave them without appropriate support in such a case, due to the algorithm’s improper functioning.
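To make the Article 7a mechanism concrete, the sketch below models the 14-day clock. The class and function names are illustrative assumptions; only the default-acceptance rule itself follows the provision cited above.

```python
# Minimal sketch of the Article 7a default-acceptance rule: if a business
# (or its chatbot) fails to respond to a consumer complaint within 14 days,
# the complaint is deemed accepted. Names are illustrative, not statutory.
from dataclasses import dataclass
from datetime import date, timedelta

STATUTORY_RESPONSE_PERIOD = timedelta(days=14)  # Polish Consumer Rights Act, Art. 7a

@dataclass
class Complaint:
    submitted_on: date
    responded_on: date | None = None  # None = no response sent yet

def deemed_accepted(complaint: Complaint, today: date) -> bool:
    """True if the complaint must be treated as accepted by default."""
    deadline = complaint.submitted_on + STATUTORY_RESPONSE_PERIOD
    no_timely_response = complaint.responded_on is None or complaint.responded_on > deadline
    return today > deadline and no_timely_response

# A chatbot that silently misclassifies a complaint never stops this clock:
c = Complaint(submitted_on=date(2025, 3, 1))
print(deemed_accepted(c, today=date(2025, 3, 20)))  # True - accepted by default
```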
The next section of this article will examine the extent to which current Polish regulations address the challenges outlined above and what changes may be necessary to ensure that consumer rights are effectively protected in the era of widespread algorithmic use in the consumer market. This is a highly important issue, as the number of incidents involving AI systems is increasing alongside the growing adoption of Artificial Intelligence. Between 2022 and 2023 alone, the number of such incidents rose by approximately 1278% (OECD, n.d.).
3. Legal regulations and institutional oversight as pillars of accountability for algorithmic decisions impacting consumers
3.1. The current legal framework for consumer protection in relation to AI
Given the risks associated with the practical use of Artificial Intelligence, it is often perceived as a source of threats to individual rights (Contissa et al., 2018, p. 11). The noticeable rise in technological sophistication and the emergence of new risks have led regulatory bodies to recognize the necessity of legislative action in this domain (Lagioia et al., 2022, p. 482). Artificial Intelligence poses new and complex challenges to both consumers and the system of consumer law – challenges that existing regulatory mechanisms are not always capable of addressing effectively (Terryn & Martos Marquez, 2025, p. 210).
An analysis of the current legal framework indicates that there is no comprehensive legal act specifically addressing the use of Artificial Intelligence in the consumer context. Nevertheless, existing legal provisions offer consumers a certain degree of protection against the negative consequences of decisions made by algorithms. These include data protection regulations and consumer protection laws (Table 2).
Table 2. Legal acts offering consumers protection against the negative consequences of algorithmic decisions.
Moreover, successive parts of the relatively new EU Artificial Intelligence Regulation (AI Act) are now gradually entering into force. The aim of the regulation is “to improve the functioning of the internal market by laying down a uniform legal framework, in particular for the development, placing on the market, putting into service and use of Artificial Intelligence systems (…) to promote the uptake of human-centric and trustworthy Artificial Intelligence (…) and to support innovation” (Regulation (EU) 2024/1689 of the European Parliament and of the Council – The Artificial Intelligence Act). Although the AI Act will fully apply as of 2 August 2026, the provisions of Chapters I and II are already binding and should be applied now (AI Act, Article 113).
Despite the fact that the AI Act includes several significant provisions from a consumer protection standpoint, such as the prohibition of social scoring and the right to file a complaint with a market surveillance authority if an AI system is believed to violate the regulation, European consumer advocacy groups have raised concerns about legal gaps that fail to fully address the risks consumers are exposed to in the context of AI deployment. According to these organizations, the AI Act is not capable of fully eliminating the risks associated with the use of AI tools in consumer interactions. In their view, the regulation focuses primarily on high-risk systems, while many widespread applications of AI, such as the use of chatbots, fall outside its scope (BEUC, 2023).
Such a situation may lead to the emergence of national legislative solutions addressing selected risks associated with the use of AI, which in turn could result in the fragmentation of legal provisions and hinder the assurance of a uniform level of protection for European Union citizens with respect to the same technological products and services (Bertolini, 2025, pp. 9–10).
Referring back to the earlier example of potential gender discrimination in the Apple Card credit approval process, it is worth noting that, under EU law, a consumer in a similar situation could rely on Article 22 of the General Data Protection Regulation (GDPR). This provision entitles the data subject, whether a potential or actual client, to request clarification regarding the logic behind the algorithmic decision on their credit limit, and to demand a reassessment of the outcome by a human decision-maker.
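How a lender might operationalize these Article 22 entitlements can be sketched as follows. The GDPR prescribes the rights, not any particular interface, so every name in this sketch is an assumption made for illustration.

```python
# Hedged sketch: an automated credit decision that carries a human-readable
# summary of its logic and a path to request human reassessment (GDPR Art. 22).
# All class, field, and method names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    applicant_id: str
    credit_limit: float
    main_factors: list[str]           # disclosed logic, e.g. income, credit history
    human_review_requested: bool = False

    def explanation(self) -> str:
        # Art. 15(1)(h): meaningful information about the logic involved.
        return "Limit set automatically based on: " + ", ".join(self.main_factors)

    def request_human_review(self) -> None:
        # Art. 22(3): the data subject may obtain human intervention,
        # express their point of view, and contest the decision.
        self.human_review_requested = True

decision = AutomatedDecision("A-123", 5_000.0, ["declared income", "credit history"])
print(decision.explanation())
decision.request_human_review()
print(decision.human_review_requested)  # True - routed to a human decision-maker
```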
Additionally, the European Union has in place anti-discrimination regulations – such as Directive 2004/113/EC of 13 December 2004, implementing the principle of equal treatment between men and women in the access to and supply of goods and services. Moreover, if such an incident were to occur in Poland, an entity that actually employed a discriminatory algorithm could face sanctions from the Office of Competition and Consumer Protection (UOKiK), as its actions may constitute a violation of collective consumer interests (Polish Act on Competition and Consumer Protection, Article 24). The activities of this Office will be discussed in more detail in the following sections of this article.
Similarly, in scenarios involving potentially incorrect decisions issued by an AI-driven complaint resolution system, or where a university student receives inaccurate information from an “intelligent” dean’s office, current legal frameworks would regard such instances as the equivalent of human error. Ultimately, responsibility for the functioning and consequences of AI systems rests with the individual or institution that has introduced and operates them (Paprocki, 2025).
The consumer submitting a complaint would retain the right to exercise their entitlement (e.g., to repair or replacement of the product) (Polish Consumer Rights Act, Article 43d). The consumer could also notify the UOKiK, which would assess whether the company had violated the collective interests of consumers (Polish Competition and Consumer Protection Act, Article 24). In cases where complaint processing is delegated to a malfunctioning algorithm, the UOKiK has begun examining such situations and emphasizes that the use of AI does not relieve businesses of their responsibility to review consumer complaints in a fair and timely manner (Infor.pl, 2023).
However, for a student who received incorrect information via an AI system, pursuing legal remedies in response to the negative consequences of such inadequate support may prove to be a significant challenge. Legal provisions do not always recognize a student as a consumer eligible for protection under all the legal acts listed in Table 2. However, if a student were to enter into an agreement based on incorrect information provided by a chatbot, the issue of determining liability for being misled by AI could have a valid legal basis (Warchoł-Lewucka, 2024). In the case of an incorrect response provided by a “smart” dean’s office – regarding, for instance, the current class schedule – the consequences of a student’s absence from mandatory classes held on a date not indicated by the chatbot would likely be borne solely by the student.
3.2. Regulatory and supervisory institutions and their role
Since the broad application of AI in areas such as the consumer market is a relatively new phenomenon, the institutional structure aimed at protecting consumers from AI-related risks is still evolving. Additionally, the complexity of AI use cases necessitates coordination and cooperation among the various regulatory and supervisory authorities.
In the Polish legal system, the Office of Competition and Consumer Protection (UOKiK), established in 1990, serves as the main institution responsible for safeguarding consumer rights (UOKiK, n.d.). Although no existing legal act explicitly names the UOKiK as the principal supervisory authority overseeing the impact of Artificial Intelligence on the consumer market, the Office actively monitors and engages with developments concerning the application of algorithms in consumer-facing processes. Its current activities include assessments of chatbot functionality in the telecommunications market and in e-commerce services – most notably in food delivery apps and online marketplaces (Infor.pl, 2023).
The UOKiK is also striving to harness AI to enhance consumer protection on the Polish market. An example of this effort is the implementation of the project entitled “Detection and elimination of dark patterns using Artificial Intelligence,” which aims to develop an AI-based tool capable of identifying unfair uses of so-called dark patterns on commercial websites (UOKiK, 2024). These are user-interface designs intentionally created to mislead consumers, hinder the expression of genuine preferences, or manipulate users into taking predetermined actions. Such practices are intended to pressure consumers into making purchases they do not truly desire, or to manipulate them into revealing personal information they would not voluntarily provide in a more transparent context (Luguri & Strahilevitz, 2021, p. 43).
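UOKiK’s tool is AI-based and its internals are not public. The sketch below therefore uses a deliberately simple keyword heuristic, only to illustrate the category of signal such a detector targets; the phrases and category labels are assumptions for the example.

```python
# Toy heuristic illustrating the kind of signal a dark-pattern detector
# looks for on commercial web pages. Not UOKiK's actual (AI-based) tool;
# phrases and categories are illustrative assumptions.
DARK_PATTERN_SIGNALS = {
    "false urgency": ["only 2 left", "offer ends in", "selling fast"],
    "confirmshaming": ["no thanks, i hate saving money"],
    "hidden costs": ["fee added at checkout"],
}

def flag_dark_patterns(page_text: str) -> list[str]:
    """Return the categories of suspicious phrasing found on a page."""
    text = page_text.lower()
    return [category
            for category, phrases in DARK_PATTERN_SIGNALS.items()
            if any(phrase in text for phrase in phrases)]

sample = "Hurry! Only 2 left in stock - offer ends in 10 minutes!"
print(flag_dark_patterns(sample))  # ['false urgency']
```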
It can be assumed that in the near future, the scope of UOKiK’s activities and responsibilities related to the use of Artificial Intelligence in the consumer market will continue to expand. It is likely that the authority will gradually acquire additional statutory powers aimed at enhancing the effectiveness of its supervisory activities in this area.
An additional authority involved in addressing the use of Artificial Intelligence with respect to personal data protection in Poland is the Personal Data Protection Office (UODO). Its counterpart at the EU level is the European Data Protection Board (EDPB), which coordinates data protection policies across member states.
The President of the UODO is the “competent authority for personal data protection” (Polish Personal Data Protection Act, Article 34(1)), with tasks including monitoring and enforcing the provisions of the GDPR, as well as promoting public awareness and understanding of the risks, rules, safeguards, and rights related to data processing (GDPR, Article 57(1)(a) and (b)).
In the context of Artificial Intelligence, the Personal Data Protection Office (UODO) examines the impact of AI on individuals’ privacy and the protection of their personal data (UODO, n.d.). The UODO is authorized, among other things, to impose administrative fines for violations of the GDPR, including the aforementioned Article 22 (e.g., failure to provide human verification of automated data processing in cases where the decision produces legal effects for the consumer).
Among the responsibilities of the European Data Protection Board (EDPB, or EROD) is providing guidance to the European Commission on issues concerning data protection – particularly with regard to proposed amendments to the GDPR and broader legislative initiatives within the EU (EDPB, n.d.). Notably, at its inaugural plenary meeting in 2018, the EDPB adopted guidelines addressing automated decision-making and profiling (EDPB, 2018).
At the EU level, the European Artificial Intelligence Board was established to oversee the proper implementation of the AI Act (European Commission, n.d.). Moreover, the European Data Protection Supervisor (EDPS) plays a key role in ensuring that all EU institutions and bodies respect citizens’ privacy rights during personal data processing. The EDPS is also responsible for tracking the development of emerging technologies that may impact data protection and for carrying out investigations into relevant matters falling within its jurisdiction (European Union, n.d.). Accordingly, it may be concluded that the enforcement of legal standards regarding the protection of Polish consumers’ personal data and the appropriate use of AI-assisted tools involves multiple institutions operating at both the national and European levels.
Determining which body is responsible in a specific case should depend exclusively on the type of suspected violation (Table 3).
Table 3. Comparison of the scope of responsibilities of Polish institutions overseeing the consumer market.

However, due to the fast-paced development of Artificial Intelligence in an ever-growing range of consumer-facing applications, it is highly probable that not all risks stemming from AI usage are adequately addressed in existing legal frameworks, and that responsibility for such risks may not fall solely within the remit of a single regulatory body. A relevant example would be a chatbot’s improper handling of a consumer complaint, accompanied by a breach of personal data protection regulations – particularly involving sensitive data. In such circumstances, the case would require joint consideration by at least two competent authorities, such as the UOKiK and UODO.
Thus, it is crucial to ensure not only the constant oversight of emerging AI-related risks and the ongoing adjustment of relevant legislation and institutional responsibilities, but also effective interdisciplinary collaboration between the entities tasked with safeguarding consumer rights.
4. Responsibility for algorithmic decision-making
When analyzing the risks associated with the use of Artificial Intelligence in consumer services, it is essential to consider the issue of responsibility for erroneous decisions made by algorithms. AI itself does not possess legal personality and therefore cannot be held directly accountable (Bączyk-Rozwadowska, 2022, p. 9). Responsibility may lie solely with a natural or legal person who exercises control over the operation and deployment of AI-driven systems (Kulicki, 2025). As one analyst has put it, “In principle, liability for errors stemming from the system’s architecture or software should rest with the manufacturer, whereas responsibility for misuse of the system lies with the end user” (Trzaska, 2024). However, given that there is currently no specific legal act that directly assigns responsibility for damages caused by Artificial Intelligence, it remains challenging to clearly designate a natural or legal person as directly liable for errors resulting from AI operations (Trzaska, 2024).
The existing academic literature offers a range of proposals concerning the entity that could be considered “responsible” for decisions made by AI systems, ranging from the software developer who implemented faulty algorithms (programistajava.pl, 2025), through the system operator or controller (Kaniewski & Kowacz, 2023), to the end user (Infinity Insurance Brokers, n.d.), who may be responsible for the proper use of artificial intelligence systems (Buiten, 2024, pp. 256–257).
Certain authors suggest a model in which responsibility is distributed among various groups of stakeholders (programistajava.pl, 2025). Meanwhile, other sources highlight the possibility that, given the considerable complexity of the AI value chain, it may not always be possible to clearly identify the entity responsible for a specific error (Jelińska-Sabatowska, 2025). In many AI-driven processes involved in the provision of products and services, multiple entities participate (Buiten et al., 2023, p. 11). Legal counsels also point to a new type of risk associated with the use of AI – namely, the risk of a “liability gap” (Nogacki, 2024).
The challenge of assigning liability for the outcomes of Artificial Intelligence stems from factors including the following (Nogacki, 2025):
• autonomy – AI systems make decisions without human oversight,
• opacity – the AI decision-making process may be difficult to understand,
• data dependency – flawed data can lead AI to make erroneous decisions,
• value chain complexity – the development and implementation of AI involves multiple entities.
Nevertheless, the most frequently cited example of a party considered responsible for decisions made by AI is the entrepreneur who implements an AI-based process within their organization. As such, they must take into account the possibility of incurring contractual liability in the event that damage is caused by Artificial Intelligence – such as when an error results in the failure to fulfill a contract concluded with a business partner (Tak Prawnik, 2025). They may also face tort liability, for example in the case of an accident caused by an autonomous vehicle (Kaniewski et al., 2023). However, some sources argue that the previously mentioned “opacity” of AI decision-making undermines the application of standard principles of tort liability (Nogacki, 2025).
Apart from the legal challenge of clearly identifying the entity liable for damage caused by Artificial Intelligence, another significant obstacle is the difficulty in proving the “fault” of the AI system itself. To do so, the consumer – or their legal representative – must gain access to and understand how the AI tool functions, which may require insight into complex and often non-transparent decision-making processes. In practice, however, this may prove difficult or even impossible. Among other factors, this is due to the so-called “black box problem” (Taveira da Fonseca et al., 2024, p. 300) – that is, the system’s recommendations may not be explainable within the framework of traditional linear cause-and-effect logic (Kroplewski, 2023, p. 112).
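The “black box problem” can be illustrated with a toy sensitivity probe: an external analyst can only perturb inputs and observe outputs. The model and factor names below are invented for the example, and the probe itself merely stands in for real explainability tooling (such as surrogate models or SHAP-style attribution), whose outputs are likewise approximations rather than the system’s actual reasoning.

```python
# Toy illustration of the "black box problem": the analyst cannot inspect the
# model, only probe it. A crude one-factor-at-a-time sensitivity test stands
# in for real explainability tooling; all names and weights are invented.
def black_box_credit_score(income: float, history: float, age: float) -> float:
    # Imagine an opaque vendor model; the analyst never sees this formula.
    return 0.4 * income + 0.3 * history + 0.3 * (income * history) / 100

baseline = {"income": 50.0, "history": 70.0, "age": 35.0}
base_score = black_box_credit_score(**baseline)

for factor in baseline:
    probed = dict(baseline, **{factor: baseline[factor] * 1.10})  # +10% perturbation
    delta = black_box_credit_score(**probed) - base_score
    print(f"{factor:>8}: +10% input -> score change {delta:+.2f}")
# 'age' shows zero effect, while the income and history effects depend on each
# other through the interaction term - so no single linear cause-and-effect
# story fully explains an individual decision.
```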
An additional risk for banking customers related to the use of Artificial Intelligence is the potential overdependence on AI systems in decision-making, predictive analytics, and recommendation processes. Even if a human remains the final decision-maker, they may defer too strongly to the suggestions provided by AI – perceiving them as inherently correct or derived from deep and reliable analysis (Szostek et al., 2022, p. 55). In practice, however, there may be uncertainty as to whether the data used by automated models is of adequate quality, which may result, for example, in an inaccurate assessment of a customer’s creditworthiness (Szostek et al., 2022, p. 26). In such circumstances, the harmed consumer may face significant challenges in demonstrating that the unfair treatment resulted from the actions of both the AI system and the bank’s staff.
In the context of seeking redress against an erroneous AI-generated decision, the consumer must first be aware that such an irregularity has occurred. The literature on accountability for AI-driven decisions highlights the so-called “information gap,” whereby an individual may not realize that their adverse situation results from the actions of Artificial Intelligence (Ziosi et al., 2023, p. 9). What is crucial, therefore, is not only the existence of legal provisions designed to prevent the effects of erroneous AI decisions, but also the consumer’s own awareness of the protections available under the relevant legal framework.
5. Conclusions and recommendations
While the application of Artificial Intelligence in the consumer market brings various advantages – such as personalized product offerings – it also entails significant risks. These include the reliance of algorithms on outdated or biased data, which may result in the unequal treatment of certain customer groups.
Additionally, consumers’ inability to understand how algorithms operate may lead them to misinterpret AI-generated decisions, as exemplified by the case concerning the determination of credit limits in the Apple Card program, in which the allocation of limits raised concerns about fairness and transparency.
Although existing legislation ensures a certain level of protection for consumers against the risks posed by Artificial Intelligence – such as the right to human oversight and the prohibition of discriminatory practices – notable legal gaps remain. In particular, the opacity of AI decision-making processes creates challenges in proving errors and seeking redress.
One of the legal gaps identified in the article concerns the question of who should be held accountable for decisions made by Artificial Intelligence. Since AI does not have legal personality, it cannot itself bear responsibility for erroneous algorithmic decisions, and no existing provision in either Polish or EU legislation explicitly designates the entity liable for the malfunction of AI systems. In the scholarly literature, the entity most frequently identified as “responsible” is the provider making the AI-based solution available to consumers. However, responsibility for the harms caused by Artificial Intelligence is also sometimes attributed to the software developers whose algorithms prove faulty, as well as to end users.
In summary, the answer to the research question posed in this article is as follows: Polish and EU legal acts, together with institutional oversight, provide consumers with protection against the negative consequences of decisions made by AI systems. However, this protection does not extend to the full spectrum of potential risks arising from the use of Artificial Intelligence in consumer markets. Legal gaps remain in this area, and the introduction of new legislation that keeps pace with the ongoing development of AI capabilities represents a major regulatory challenge, making the complete elimination of such gaps difficult – if not impossible – in the foreseeable future.
As the use of Artificial Intelligence becomes more widespread, the frequency of incidents involving AI systems continues to rise. Regulatory bodies at both the European and national levels, along with consumer protection authorities, are still building the expertise and acquiring the instruments required to monitor and control AI effectively. This transitional phase contributes to the persistence of certain regulatory blind spots and legal uncertainties. To enhance consumer protection in a market environment where an ever-growing number of processes are supported by Artificial Intelligence – systems that may still be prone to error – it is crucial to implement reforms across legislative, institutional, and educational spheres.
With regard to recommendations, priority should be given to measures designed to address the identified shortcomings in the Polish legal system and to strengthen safeguards for consumers affected by AI-driven decision-making, such as the following:
• Clarifying legal liability for individual entities involved in the development, provision, and use of AI – for example, by introducing a provision into the Polish Competition and Consumer Protection Act stating that liability for errors made by Artificial Intelligence rests with the entity that makes the AI-based tool available to consumers, or with another entity explicitly designated by that provider in the applicable terms and conditions.
• Introducing a legal provision that facilitates the burden of proof for consumers in disputes concerning the malfunctioning of Artificial Intelligence – given that proving an AI-related error is often difficult or even impossible for the average consumer, a reasonable solution would be to shift the burden of proof to the entity providing the AI-based tool to consumers (or to another entity explicitly designated in the relevant terms and conditions). In the event of a dispute, this entity would be required to demonstrate that the AI system did not make an error; otherwise, the case would be resolved in favor of the consumer.
• Requiring algorithmic transparency – consumers should have the right to understand the logic behind decisions made by Artificial Intelligence that affect them personally; for example, by being granted access to terms and conditions that include information about the characteristics or factors the AI takes into account when making specific decisions.
• Establishing a statutory definition of the competences of supervisory authorities – for example, a dedicated department could be established within Poland’s Office of Competition and Consumer Protection (UOKiK), staffed with experts in artificial intelligence systems, tasked with analyzing cases in the consumer market suspected of involving faulty operation of AI-based systems.
• Promoting consumer education on AI – through initiatives aimed at increasing consumer awareness of the risks associated with artificial intelligence, as well as of the rights they have with regard to protection against such risks.
References
Accenture. (2024). Banking on AI: Banking top 10 trends for 2024. https://www.accenture.com/content/dam/accenture/final/industry/banking/document/Accenture-Banking-Top-10-Trends-2024.pdf
Ahn, D., Almaatouq, A., Gulabani, M., & Hosanagar, K. (2021). Will we trust what we don’t understand? Impact of model interpretability and outcome feedback on trust in AI. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3964332
Bączyk-Rozwadowska, K. (2022). Odpowiedzialność cywilna za szkody wyrządzone w związku z zastosowaniem sztucznej inteligencji w medycynie [Civil liability for damages arising from the use of artificial intelligence in medicine]. Przegląd Prawa Medycznego, 3(3–4). https://przegladprawamedycznego.pl/index.php/ppm/article/view/142
Bertolini, A. (2025). Artificial intelligence and civil liability: A European perspective. Policy Department for Citizens’ Rights and Constitutional Affairs, Directorate-General for Internal Policies
BEUC. (2021, October 7). Regulating AI to protect the consumer. Brussels: BEUC. https://www.beuc.eu/sites/default/files/publications/beuc-x-2021-088_regulating_ai_to_protect_the_consumer.pdf
BEUC. (2023, June 11). EU rules on AI lack punch to sufficiently protect consumers. https://www.beuc.eu/press-releases/eu-rules-ai-lack-punch-sufficiently-protect-consumers
Biswas, S., Carson, B., Chung, V., Singh, S., & Thomas, R. (2020). AI-bank of the future: Can banks meet the AI challenge? McKinsey & Company. https://www.mckinsey.com/industries/financial-services/our-insights/ai-bank-of-the-future-can-banks-meet-the-ai-challenge
Bondos, I. (2016). Reakcje na dynamicznie ustalane ceny – czy konsumenci mają podwójne standardy oceny uczciwości cen online? [Reactions to dynamic pricing: Do consumers apply double standards when assessing online price fairness?]. Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu, (460), 173–188. https://www.dbc.wroc.pl/publication/40250
Buiten, M. C. (2024). Product liability for defective AI. European Journal of Law and Economics, 57, 239–273. https://doi.org/10.1007/s10657-024-09794-z
Buiten, M., de Streel, A., & Peitz, M. (2023). The law and economics of AI liability. Computer Law & Security Review, 48, Article 105794. https://doi.org/10.1016/j.clsr.2023.105794
Campbell, I. C. (2021, March 23). The Apple Card doesn’t actually discriminate against women, investigators say. The Verge. https://www.theverge.com/2021/3/23/22347127/goldman-sachs-apple-card-no-gender-discrimination
Capgemini. (2024). World retail banking report 2024. Capgemini Research Institute.
Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, Article 1421273. https://doi.org/10.3389/fhumd.2024.1421273
Contissa, G., Docter, K., Lagioia, F., Lippi, M., Micklitz, H. W., Palka, P., Sartor, G., & Torroni, P. (2018). CLAUDETTE meets GDPR: Automating the evaluation of privacy policies using artificial intelligence. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3208596
Digital Poland. (2023). Technologia w służbie społeczeństwu: Czy Polacy zostaną społeczeństwem 5.0? [Technology in the service of society: Will Poland become a Society 5.0?]. Warsaw.
Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU as regards the better enforcement and modernisation of Union consumer protection rules. Official Journal of the European Union, L 328, 7–28 (18 December 2019).
European Commission. (n.d.). AI Board (European Artificial Intelligence Board). Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/ai-board
European Data Protection Board (EDPB). (2018). Zautomatyzowane podejmowanie decyzji i profilowanie [Automated decision-making and profiling]. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/automated-decision-making-and-profiling_enpl
European Data Protection Board (EDPB). (n.d.). Rola EROD [Role of the EDPB]. https://www.edpb.europa.eu/role-edpb_enpl
European Union. (n.d.). European Data Protection Supervisor (EDPS) [Europejski Inspektor Ochrony Danych]. https://european-union.europa.eu/institutions-law-budget/institutions-and-bodies/search-all-eu-institutions-and-bodies/european-data-protection-supervisor-edps_enpl
EY (Ernst & Young). (2024, September 18). Badanie EY: Rosną obawy konsumentów o bezpieczeństwo ich danych [EY study: Consumer concerns about data security are growing]. https://www.ey.com/pl_pl/newsroom/2024/09/rosna-obawy-konsumentow-o-bezpieczenstwo-ich-danych
GlobeNewswire / Precedence Research. (2025, February 11). Artificial Intelligence skyrocketing, shaking the market with $3,680.47 Bn by 2034. https://www.globenewswire.com/news-release/2025/02/11/3024340/0/en/Artificial-Intelligence-Skyrocketing-Shaking-the-Market-with-3-680-47-Bn-by-2034.html
Infinity Insurance Brokers. (n.d.). Odpowiedzialność za szkody wywołane przez AI [Liability for damages caused by AI]. https://ibu.pl/blog/odpowiedzialnosc-za-szkody-wywolane-przez-ai
Infor.pl. (2023, April 13). Chatboty, algorytmy, sztuczna inteligencja a prawa konsumenta – stanowisko UOKiK [Chatbots, algorithms, AI and consumer rights – UOKiK’s position]. https://ksiegowosc.infor.pl/wiadomosci/5722900,chatboty-algorytmy-sztuczna-inteligencja-a-prawa-konsumenta-stanowisko-uokik.html
Iron Mountain. (2025, July 15). Sztuczna inteligencja w edukacji – szansa czy zagrożenie? [Artificial intelligence in education – Opportunity or threat?]. https://www.ironmountain.com/pl-pl/resources/blogs-and-articles/a/artificial-intelligence-in-education-opportunity-or-threat
Jelińska-Sabatowska, A. (2025, May 28). Prawa konsumentów w erze AI: Jak sztuczna inteligencja zmienia relacje w sferze B2C [Consumer rights in the AI era]. Legalis. C.H.Beck. https://www.legalis.pl/prawa-konsumentow-werze-ai-jak-sztuczna-inteligencja-zmienia-relacje-w-sferze-b2c/
Jurczak, T. (2023). UOKiK otrzymuje skargi na boty [UOKiK receives complaints about bots]. Gazeta Prawna. https://serwisy.gazetaprawna.pl/poradnik-konsumenta/artykuly/8658758,chatboty-voiceboty-uokik-boty-prawa-konsumenta.html
KALASOFT. (n.d.). Inteligentny dziekanat, inteligentna rekrutacja: Jak sztuczna inteligencja zmieniła komunikację uczelni ze studentami [Smart dean’s office, smart admissions]. https://www.kalasoft.pl/sztuczna-inteligencja/
Kaniewski, P., & Kowacz, K. (2023, October 3). Co jeśli AI zawiedzie, czyli odpowiedzialność cywilna za sztuczną inteligencję [What if AI fails? Civil liability for AI]. ITwiz. https://itwiz.pl/co-jesli-ai-zawiedzie-czyli-odpowiedzialnosc-cywilna-za-sztuczna-inteligencje/
Keller, A., Martins Pereira, C., & Lucas Pires, M. (2024). The European Union’s approach to artificial intelligence and the challenge of systemic risk. In H. Sousa Antunes, P. M. Freitas, A. L. Oliveira, C. Martins Pereira, E. Vaz de Sequeira, & L. Barreto Xavier (Eds.), Multidisciplinary perspectives on artificial intelligence and the law (pp. 415–439). Springer. https://doi.org/10.1007/978-3-031-41264-6_22
Kornaś, W. (2024, August 22). Sztuczna inteligencja w szkolnictwie wyższym [Artificial intelligence in higher education]. Wyższa Szkoła Bezpieczeństwa (WSB) Blog. https://www.wsb.net.pl/technologia/sztuczna-inteligencja-w-szkolnictwie-wyzszym/
Kroplewski, R. (2023). Odporność AI dla odpornej wspólnoty [AI resilience for a resilient community]. In A. Szczęsna & M. Stachoń (Eds.), Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie (pp. 111–122). CyberPOLICY NASK – Państwowy Instytut Badawczy. https://cyberpolicy.nask.pl/wp-content/uploads/2023/09/Cyberbezpieczenstwo-AI.-AI-w-cyberbezpieczenstwie.pdf
Kruszyńska, A. (2024, January 4). Wyłoniono Słowo Roku 2023. Kapituła podała wyniki [The Word of the Year 2023 has been announced]. Polska Agencja Prasowa (PAP). https://www.pap.pl/aktualnosci/wyloniono-slowo-roku-2023-kapitula-podala-wyniki
Kulawik, T. (2024, September 17). Zrozumieć decyzje algorytmów – wyjaśnialność sztucznej inteligencji [Understanding algorithmic decisions – Explainability of AI]. ING Tech Blog. https://techblog.ing.pl/blog/zrozumiec-decyzje-algorytmow-wyjasnialnosc-sztucznej-inteligencji
Kulicki, Ł. (2025). Szkody wyrządzone przez sztuczną inteligencję – Kto ponosi odpowiedzialność? [Damages caused by AI – Who is liable?]. After Legal Kancelaria. https://umowywit.pl/szkody-wyrzadzone-przez-ai-kto-odpowiada/
Lagioia, F., Jabłonowska, A., Liepiņa, R., & Drazewski, K. (2022). AI in search of unfairness in consumer contracts: The terms of service landscape. Journal of Consumer Policy, 45(3), 481–536. https://doi.org/10.1007/s10603-022-09520-9
Luguri, J., & Strahilevitz, L. J. (2021). Shining a light on dark patterns. Journal of Legal Analysis, 13(1), 43–109. https://doi.org/10.1093/jla/laaa006
mp/dap. (2023, December 30). Taki był rok 2023 w gospodarce. Dziesięć najważniejszych wydarzeń [2023 in review: Ten key economic events]. TVN24.pl. https://tvn24.pl/biznes/z-kraju/rok-2023-w-gospodarce-dziesiec-najwazniejszych-wydarzen-st7537170
Myszakowska-Kaczała, D. (2024). AI – Jak sztuczna inteligencja zmienia życie konsumentów? [AI – How AI is changing consumers’ lives]. LexCultura. https://lexcultura.pl/ai-jak-sztuczna-inteligencja-zmienia-zycie-konsumentow/
Ness, S., Volkivskyi, M., Muhammad, T., & Balzhyk, K. (2024). Banking 4.0: The impact of artificial intelligence on the banking sector and its transformation of modern banks. International Journal of Innovative Science and Research Technology, 9(2), 1064–1072. https://ijisrt.com/banking-40-the-impact-of-artificial-intelligence-on-the-banking-sector-and-its-transformation-of-modern-banks
Nogacki, R. (2024, April 5). Prawne problemy ze sztuczną inteligencją: Czy prawo powstrzyma „bunt maszyn”? [Legal issues with artificial intelligence: Will the law stop the “machine rebellion”?]. Gazeta Prawna / Kancelaria Prawna Skarbiec. https://www.gazetaprawna.pl/firma-i-prawo/artykuly/9422540,prawne-problemy-ze-sztuczna-inteligencja-czy-prawo-powstrzyma-bunt-m.html
Nogacki, R. (2025, February 10). Odpowiedzialność prawna za decyzje systemów AI: Kto odpowiada, gdy algorytm się myli? [Legal responsibility for AI system decisions: Who is responsible when the algorithm makes a mistake?]. Business Centre Club / Kancelaria Prawna Skarbiec. https://www.bcc.org.pl/odpowiedzialnosc-prawna-za-decyzje-systemow-ai-kto-odpowiada-gdy-algorytm-sie-myli/
Nogueira, E., Lopes, J. M., & Gomes, S. (2025). The new era of artificial intelligence in consumption: Theoretical framing, review and research agenda. Management Review Quarterly, 75(3), 965–1000. https://doi.org/10.1007/s11301-024-00390-1
Nowakowski, M. (2021, August 3). Czy zbyt samodzielne bankowe algorytmy AI mogą dyskryminować klientów ubiegających się o kredyty? [Can overly autonomous AI algorithms in banking discriminate against loan applicants?]. Bank.pl. https://bank.pl/czy-zbyt-samodzielne-bankowe-algorytmy-ai-moga-dyskryminowac-klientow-ubiegajacych-sie-o-kredyty/
Organisation for Economic Co-operation and Development (OECD). (n.d.). Artificial intelligence. https://www.oecd.org/en/topics/policy-issues/artificial-intelligence.html
Paprocki, T. (2025, June 4). Prawne wymogi automatyzacji: Co może zrobić AI, a co nadal wymaga pracy człowieka? AI Act od 2026 roku – co nowe przepisy zmienią w biznesie? [Legal requirements for automation: What can AI do, and what still requires human work? The AI Act from 2026 – what will change for business?]. Infor.pl / Kancelaria Paprocki, Wojciechowski & Partnerzy. https://kadry.infor.pl/zatrudnienie/umowa-o-prace/6960354,prawne-wymogi-automatyzacji-co-moze-zrobic-ai-a-co-nadal-wymaga-pracy-czlowieka-ai-act-od-2026-roku-co-nowe-przepisy-zmienia-w-biznesie.html
PARP Grupa PFR. (2023). Rynek pracy, edukacja, kompetencje: Aktualne trendy i wyniki badań [Labour market, education, skills: Current trends and research findings]. Wydanie specjalne. Polska Agencja Rozwoju Przedsiębiorczości.
Paterson, J. M. (2022). Misleading AI: Regulatory strategies for algorithmic transparency in technologies augmenting consumer decision-making. Loyola Consumer Law Review, 34(3), 558–589. https://doi.org/10.2139/ssrn.4164809
Polish Act on Counteracting Unfair Market Practices (2007). Ustawa z dnia 23 sierpnia 2007 r. o przeciwdziałaniu nieuczciwym praktykom rynkowym [Act of 23 August 2007 on Counteracting Unfair Market Practices], Journal of Laws 2007, No. 171, item 1206, as amended.
Polish Civil Code (1964). Ustawa z dnia 23 kwietnia 1964 r. – Kodeks cywilny [Act of 23 April 1964 – Civil Code], Journal of Laws 1964, No. 16, item 93, as amended.
Polish Competition and Consumer Protection Act (2007). Ustawa z dnia 16 lutego 2007 r. o ochronie konkurencji i konsumentów [Act of 16 February 2007 on Competition and Consumer Protection], Journal of Laws 2007, No. 50, item 331, as amended.
Polish Consumer Rights Act (2014). Ustawa z dnia 30 maja 2014 r. o prawach konsumenta [Act of 30 May 2014 on Consumer Rights], Journal of Laws 2014, item 827, as amended.
Polish Personal Data Protection Act (2018). Ustawa z dnia 10 maja 2018 r. o ochronie danych osobowych [Act of 10 May 2018 on the Protection of Personal Data], Journal of Laws 2018, item 1000, as amended.
Precedence Research. (2025, February 11). Artificial intelligence (AI) market size, share, and trends 2025 to 2034. https://www.precedenceresearch.com/artificial-intelligence-market
ProgramistaJava.pl. (2025, April 9). Prawo a AI – czy maszyna może mieć odpowiedzialność? [Law and AI: Can a machine bear responsibility?]. https://programistajava.pl/2025/04/09/prawo-a-ai-czy-maszyna-moze-miec-odpowiedzialnosc/
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88 (4 May 2016).
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689 (12 July 2024).
Sandelowski, M. (2000). Whatever happened to qualitative description? Research in Nursing & Health, 23(4), 334–340. https://doi.org/10.1002/1098-240X(200008)23:4<334::AID-NUR9>3.0.CO;2-G
Sojkin, B., Bartkowiak, P., & Skuza, A. (2012). Determinants of higher education choices and student satisfaction: The case of Poland. Higher Education, 63(5), 565–581. https://doi.org/10.1007/s10734-011-9459-2
Stecyk, A. (2025, June 9). Manus AI: Rewolucja w tworzeniu prezentacji akademickich i zmiana paradygmatu oceniania w środowisku edukacyjnym [Manus AI: A revolution in academic presentation creation and a paradigm shift in educational assessment]. Uniwersytet Szczeciński – AI Blog. https://ai.usz.edu.pl/2025/06/09/manus-ai-rewolucja-w-tworzeniu-prezentacji-akademickich-i-zmiana-paradygmatu-oceniania-w-srodowisku-edukacyjnym/
Stecyk, A. (2025, May 6). Sztuczna inteligencja w edukacji – szansa i wyzwanie [Artificial intelligence in education – opportunity and challenge]. Uniwersytet Szczeciński – AI Blog. https://ai.usz.edu.pl/2025/05/06/sztuczna-inteligencja-w-edukacji-szansa-i-wyzwanie/
Szostek, D., Bar, G., Prabucki, R. T., & Nowakowski, M. (2022). Zastosowanie sztucznej inteligencji w bankowości – szanse oraz zagrożenia [The use of artificial intelligence in banking – opportunities and risks]. Program Analityczno-Badawczy Fundacji Warszawski Instytut Bankowości. Warszawa.
Tak Prawnik. (2025, April 30). Sztuczna inteligencja a przedsiębiorcy: Kto ponosi odpowiedzialność? [Artificial intelligence and entrepreneurs: Who bears responsibility?]. Poradnik Przedsiębiorcy. https://poradnikprzedsiebiorcy.pl/-sztuczna-inteligencja-a-przedsiebiorcy-kto-ponosi-odpowiedzialnosc
Taveira da Fonseca, A., Vaz de Sequeira, E., & Barreto Xavier, L. (2024). Liability for AI-driven systems. In H. Sousa Antunes, P. M. Freitas, A. L. Oliveira, C. Martins Pereira, E. Vaz de Sequeira, & L. Barreto Xavier (Eds.), Multidisciplinary perspectives on artificial intelligence and the law (pp. 395–414). Springer. https://doi.org/10.1007/978-3-031-41264-6_21
Terryn, E., & Martos Marquez, S. (2025). AI and consumer protection. In N. A. Smuha (Ed.), The Cambridge handbook of the law, ethics and policy of artificial intelligence (pp. 401–418). Cambridge University Press. https://doi.org/10.1017/9781009264844.029
The Guardian. (2019, November 10). Apple Card issuer investigated after claims of sexist credit checks. https://www.theguardian.com/technology/2019/nov/10/apple-card-issuer-investigated-after-claims-of-sexist-credit-checks
The Guardian. (2023, February 2). ChatGPT reaches 100 million users two months after launch. https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app
Trzaska, K. (2024, June 12). Wciąż nie wiadomo, kto ponosi odpowiedzialność za szkodę wyrządzoną przez AI [It is still unclear who bears responsibility for damages caused by artificial intelligence]. Prawo.pl / Kancelaria Prawna Maciej Panfil i Partnerzy. https://www.prawo.pl/biznes/szkoda-wyrzadzona-przez-al-kto-ponosi-odpowiedzialnosc,528456.html
United Nations Conference on Trade and Development (UNCTAD). (2024). Artificial intelligence and consumer protection. Geneva: United Nations.
Urząd Ochrony Danych Osobowych (UODO). (n.d.). Sztuczna inteligencja [Artificial intelligence]. https://uodo.gov.pl/pl/p/sztuczna-inteligencja
Urząd Ochrony Konkurencji i Konsumentów (UOKiK). (2024, March 14). Wielkie „wymiatanie” złych praktyk w e-commerce [The great “cleanup” of unfair practices in e-commerce]. https://uokik.gov.pl/wielkie-wymiatanie-zlych-praktyk-w-e-commerce
Urząd Ochrony Konkurencji i Konsumentów (UOKiK). (n.d.). O UOKiK [About UOKiK]. https://uokik.gov.pl/o-uokik
Villamin, P., Lopez, V., Thapa, D. K., & Cleary, M. (2024). A worked example of qualitative descriptive design: A step-by-step guide for novice and early career researchers. Journal of Advanced Nursing, 82(8), 1729–1745. https://doi.org/10.1111/jan.15756
Warchoł-Lewucka, R. (2024, July 29). Kto ponosi odpowiedzialność, gdy chatbot udzieli błędnej odpowiedzi? [Who bears responsibility if a chatbot provides misleading or inaccurate information?]. GSW Gorazda, Świstuń, Wątroba i Partnerzy – Adwokaci i Radcowie Prawni. https://gsw.com.pl/publikacje/prawo-it/kto-ponosi-odpowiedzialnosc-gdy-chatbot-udzieli-blednej-odpowiedzi/
Warszycki, M. (2019). Wykorzystanie sztucznej inteligencji do predykcji emocji konsumentów [The use of artificial intelligence for predicting consumer emotions]. Studia i Prace Kolegium Zarządzania i Finansów, 173, 115–129. Warszawa: Oficyna Wydawnicza SGH.

