<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>systematic literature review &#8211; Marketing Instytucji Naukowych i Badawczych &#8211; Kwartalnik Naukowy Instytutu Lotnictwa</title>
	<atom:link href="https://minib.pl/tag/systematic-literature-review-pl/feed/" rel="self" type="application/rss+xml" />
	<link>https://minib.pl</link>
	<description></description>
	<lastBuildDate>Tue, 17 Feb 2026 13:31:21 +0000</lastBuildDate>
	<language>pl-PL</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.4</generator>

<image>
	<url>https://minib.pl/wp-content/uploads/2020/04/cropped-favicon-32x32.png</url>
	<title>systematic literature review &#8211; Marketing Instytucji Naukowych i Badawczych &#8211; Kwartalnik Naukowy Instytutu Lotnictwa</title>
	<link>https://minib.pl</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Artificial intelligence and consumer rights: legal responsibility for algorithmic decisions in the Polish and EU regulatory context</title>
		<link>https://minib.pl/numer/2-2025/artificial-intelligence-and-consumer-rights-legal-responsibility-for-algorithmic-decisions-in-the-polish-and-eu-regulatory-context/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Thu, 19 Jun 2025 17:21:33 +0000</pubDate>
				<category><![CDATA[academic writing]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[management]]></category>
		<category><![CDATA[process management]]></category>
		<category><![CDATA[systematic literature review]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=8536</guid>

					<description><![CDATA[1. Introduction The dynamic deployment of solutions based on Artificial Intelligence (AI) across global digital markets has opened up a new stage in the evolution of trade and consumer behavior (UNCTAD, 2024, p. 3). Contemporary purchasing decisions are increasingly being shaped by digital tools, often promoted as employing “Artificial Intelligence” to optimize the decision-making process from...]]></description>
										<content:encoded><![CDATA[<h2>1. Introduction</h2>
<p>The dynamic deployment of solutions based on Artificial Intelligence (AI) across global digital markets has opened up a new stage in the evolution of trade and consumer behavior (UNCTAD, 2024, p. 3). Contemporary purchasing decisions are increasingly being shaped by digital tools, often promoted as employing “Artificial Intelligence” to optimize the decision-making process from the consumer’s perspective (Paterson, 2022, p. 558).</p>
<p>Following Warszycki (2019, p. 115), AI may be understood as “a field of science encompassing disciplines, methods, tools, and techniques aimed at creating and developing a complete computer program that accurately reflects the model of human functioning and the human mind.” It has become an integral part of the modern consumer market, applied in both front-office processes (interfacing with consumers, clients, and supervisory bodies) and back-office processes (supporting the internal functioning of companies and institutions) (Keller et al., 2024, p. 417).</p>
<p>In consumer-facing applications, AI systems recommend products inferred from users’ preferences and histories, perform automated credit assessments, and provide customer support via virtual assistants (chatbots), among other functions (Myszakowska-Kaczała, 2024). On the operational side, companies are increasingly using AI-based analytics to understand consumer behavior, optimize pricing strategies, and improve supply chain management (GlobeNewswire, 2025).</p>
<p>Although the use of AI in customer service is often considered a hallmark of modern technological implementation, Artificial Intelligence itself is not a twenty-first-century innovation. Most technology historians trace the origins of the concept to the work of the British mathematician and cryptanalyst Alan Turing, who formulated its theoretical foundations in 1950 (Accenture, 2024, p. 8). Nevertheless, the dynamic development of AI was not widely recognized until 2011, when global technology companies such as Google, Facebook, Microsoft, and IBM began using it for business purposes (Ness et al., 2024, p. 1064).</p>
<p>From the perspective of the Polish AI landscape, 2023 marked a turning point, with 88% of respondents declaring familiarity with the term sztuczna inteligencja (“artificial intelligence”) – with this figure rising to 96% among individuals aged 18 to 24 (Digital Poland, 2023, p. 57). It is also notable that the jury of the Polish Language Council declared this term the Polish “Word of the Year” in 2023 (Kruszyńska, 2024).<br />
This coincided with the rapid rise of ChatGPT, an AI–based application that achieved unprecedented global recognition. Between late 2022 and early 2023, the platform attracted approximately 100 million users (mp/dap, TVN24.pl, 2023). The scale and pace of its user growth may position ChatGPT as the fastest-growing consumer-facing web application to date (The Guardian, 2023). Its widespread adoption spurred the creation of numerous derivative solutions tailored to the needs of specific industries, including the banking sector (Capgemini, 2024, p. 44).</p>
<p>In 2025, the global AI market was valued at USD 757.58 billion, with forecasts projecting growth to approximately USD 3,680.4 billion by 2034 (Precedence Research, 2025). Within the global banking sector alone, AI is estimated to generate up to USD 1 trillion in additional value annually (Biswas et al., 2020, pp. 2–3).<br />
The expanding use of AI in consumer services brings not only financial gains but also a range of other benefits – from mitigating risks associated with human error and improving service accessibility, to process automation that enhances efficiency and speeds up customer service. However, the adoption of AI-based tools by market entities also introduces new risks for consumers. The decision-making processes of AI algorithms may be opaque or difficult for the average client to comprehend (Ahn et al., 2024), which can hinder their ability to assess whether a system is operating correctly.</p>
<p>The opacity of AI systems, combined with their capacity to exploit biases and generate unintended side effects, has intensified debates on the need for responsible governance of AI technologies (Cheong, 2024, p. 2). A key challenge, therefore, lies in guaranteeing the effective protection of consumer rights when decisions affecting individuals are being made by algorithms, as well as in determining which parties bear responsibility in cases of algorithmic error or misuse, whether unintentional or deliberate.</p>
<p>This article seeks to address the following research question: Do Polish and EU legal acts, together with institutional oversight, provide consumers with adequate protection against the negative consequences of decisions made by AI systems, and are there legal gaps in this area? The approach taken is descriptive and analytical, based on selected legal acts (including the Act on Competition and Consumer Protection and the AI Act), relevant academic literature, and selected legal opinions. These sources form the basis for further, more detailed research on the topic.</p>
<p>The choice of a qualitative descriptive analysis stems from its suitability for examining phenomena within their real-world context – in this case, the institutional and regulatory environment. Its purpose is to capture ongoing processes, identify the actors involved, and situate them within their operational conditions. While serving as a starting point for more advanced analyses, this approach itself constitutes a valuable and independent methodological framework (Sandelowski, 2000, p. 339). It involves the following stages (Villamin et al., 2024, pp. 51–91):</p>
<ul>
<li>defining the research objective (application-oriented),</li>
<li>determining the research method (descriptive analysis),</li>
<li>establishing the theoretical framework (accountability for algorithmic decisions in the context of legal frameworks and institutional oversight),</li>
<li><span class="fontstyle0">selecting the research sample (domestic and international literature, legal provisions, and opinions of Polish legal scholars),</span></li>
<li>collecting data (reviewing available sources),</li>
<li>analyzing data (evaluating sources in light of the research objective), and</li>
<li>presenting the research findings.</li>
</ul>
<p style="text-align: left;">The outcomes of this analysis are threefold: (i) a presentation of the current regulatory framework governing responsibility for AI-mediated decisions affecting consumers; (ii) the identification of potential gaps within the existing system of consumer protection; and (iii) the formulation of recommendations aimed at addressing these gaps in the Polish legal system, alongside proposals for new regulatory measures to strengthen consumer safeguards against the adverse consequences of AI-driven decision-making.</p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">2. The use of AI – benefits and risks</span></strong></p>
<p><span class="fontstyle0">Artificial Intelligence is now being applied across nearly all areas of human activity. It is already assisting the work of both teachers and students, including in schools and even in early childhood education (Iron Mountain, 2025). AI can automatically perform tasks such as grading tests and homework assignments or generating reports on student progress (Stecyk, 2025). Higher education institutions are also increasingly utilizing AI algorithms to enhance the efficiency of administrative and academic work. One example is the use of autonomous AI agents that assist in creating professional academic presentations based on an outline (Stecyk, 2025). AI can likewise improve communication processes within universities – for example, through the implementation of “intelligent” dean’s offices or automated student admissions systems. A student wishing to access publicly available university knowledge and documentation in real time needs only an Internet connection (KALASOFT, n.d.).</span></p>
<p><span class="fontstyle0">It should be emphasized that in the context of higher education, where the student may be regarded as a client or consumer of educational services (Sojkin et al., 2012, pp. 565, 567), the use of Artificial Intelligence entails risks analogous to those observed in other sectors of digital services, particularly regarding data protection, algorithmic transparency, and the right to reliable information. Theoretically, information generated by software based on AI algorithms should be factually accurate. In practice, however, AI systems may rely on unreliable or outdated sources, creating a risk that users receive incorrect or misleading information.</span></p>
<p><span class="fontstyle0">Another risk associated with the use of Artificial Intelligence in higher education concerns the protection of student data collected by institutions employing AI tools, as well as the potential dehumanization of the educational process – where human interaction is diminished and the lecturer’s role shifts away from that of a mentor, becoming instead a mere supervisor of AI-driven systems (Kornaś, 2024).</span></p>
<p><span class="fontstyle0">An argument in favor of limiting the use of Artificial Intelligence in education is that decisions made without human intervention may result in the absence of a clearly identifiable responsible entity, as well as a lack of transparency regarding how such decisions are made (PARP Grupa PFR, 2023, p. 29). Insufficient oversight of these processes may, in turn, result in different types of misuse or abuse, potentially harming the interests of those affected (Iron Mountain, 2025). Table 1 presents examples of AI applications in the consumer market, along with their associated potential benefits and risks. </span></p>
<p><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-8520" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1.jpg" alt="" width="1744" height="2464" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1.jpg 1744w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-212x300.jpg 212w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-725x1024.jpg 725w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-768x1085.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-1087x1536.jpg 1087w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-1450x2048.jpg 1450w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-1320x1865.jpg 1320w" sizes="(max-width: 1744px) 100vw, 1744px" /></p>
<p><span class="fontstyle0">The examples of Artificial Intelligence applications presented in Table 1 illustrate the dual nature of AI’s impact on the consumer market. On the one hand, algorithms can enhance convenience, accessibility, and service efficiency, reduce operating costs, and minimize human error. On the other, AI-related risks include a lack of transparency in decision-making processes, potential discrimination, and incorrect decisions that may result in harm to the consumer. AI-powered tools may not only pose a threat to customer privacy but also increase the risk of consumers falling victim to deceptive or unfair market practices or even financial exclusion in cases where an insurer, on the basis of an AI-generated analysis, determines that a given consumer represents too great a risk of potential payout (BEUC, 2021, p. 35).</span></p>
<p><span class="fontstyle0">An example of potential gender-based discrimination by an AI algorithm was the 2019 case in the United States involving the credit limit determination process for the Apple Card, issued jointly by Apple and Goldman Sachs. Customers observed that the algorithm responsible for assigning credit limits granted significantly higher limits to men than to women with comparable financial situations. One applicant reported that his credit limit was 20 times higher than that of his wife, even though they shared joint marital property and, in his view, her credit history was even better than his. Following the publication of this report, other couples also began to confirm such disparities, sharing examples suggesting that the algorithm favored men. The case attracted the attention of the New York Department of Financial Services, which launched an investigation to determine whether anti-discrimination laws had been violated in this instance (The Guardian, 2019), but it ultimately concluded that there was no discrimination against customers based on gender (Campbell, 2021).</span></p>
<p><span class="fontstyle0">The Apple Card case demonstrated, however, that a lack of algorithmic transparency can lead to public controversy. Customers did not receive a clear explanation as to why the decisions varied so significantly between genders. Being unable to understand the automated decision-making process led some users to perceive the differences in credit limits as gender discrimination, even though closer scrutiny showed that no such discrimination had actually occurred. A positive takeaway from this example is that regulators are prepared to intervene, treating the use of AI like any other credit procedure subject to the law.</span></p>
<p><span class="fontstyle0">It should be noted, however, that despite incidents raising concerns about the impartiality of AI-based solutions, there is also evidence suggesting that consumers perceive such systems as more objective than human-driven processes. The rationale in this context is the perceived absence of bias and emotions in AI decision-making (Nogueira et al., 2025, p. 2).</span></p>
<p><span class="fontstyle0">Another type of potential incident involving the use of AI tools and consumers is a chatbot incorrectly dismissing a complaint. This might occur, for example, if the chatbot misinterprets an image submitted by the customer and wrongly concludes that a product defect was caused by user error. Another possible example, negative from the customer’s perspective, would involve the misclassification of a complaint into the wrong category. In both cases, one of the possible consequences is the expiration of the statutory 14-day period for responding to a consumer complaint, which, under Polish consumer law, results in the complaint being deemed accepted by default (Polish Consumer Rights Act, Article 7a). A consumer’s lack of awareness of, or failure to invoke, the above legal provision could result in their not receiving appropriate support in such a case, due to the algorithm’s improper functioning.</span></p>
<p><span class="fontstyle0">The next section of this article will examine the extent to which current Polish regulations address the challenges outlined above and what changes may be necessary to ensure that consumer rights are effectively protected in the era of widespread algorithmic use in the consumer market. This is a highly important issue, as the number of incidents involving AI systems is increasing alongside the growing adoption of Artificial Intelligence. Between 2022 and 2023 alone, the number of such incidents rose by approximately 1278% (OECD, n.d.).</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0">3. Legal regulations and institutional oversight as pillars of accountability for algorithmic decisions impacting consumers</span></strong></span></p>
<p>&nbsp;</p>
<p><span class="fontstyle0"><strong>3.1. The current legal framework for consumer protection in relation to AI</strong></span></p>
<p><span class="fontstyle2">Given the risks associated with the practical use of Artificial Intelligence, it is often perceived as a source of threats to individual rights (Contissa et al., 2018, p. 11). The noticeable rise in technological sophistication and the emergence of new risks have led regulatory bodies to recognize the necessity of legislative action in this domain (Lagioia et al., 2022, p. 482). Artificial Intelligence poses new and complex challenges to both consumers and the system of consumer law – challenges that existing regulatory mechanisms are not always capable of addressing effectively (Terryn &amp; Martos Marquez, 2025, p. 210).</span></p>
<p><span class="fontstyle2">Based on the analysis of the current legal framework, it can be indicated that there is no comprehensive legal act that specifically addresses the use of Artificial Intelligence in the consumer context. Nevertheless, existing legal provisions offer a certain degree of protection to consumers against the negative consequences of decisions made by algorithms. These include data protection regulations and consumer protection laws (Table 2).</span></p>
<p><img decoding="async" class="aligncenter size-full wp-image-8521" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-scaled.jpg" alt="" width="1714" height="2560" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-scaled.jpg 1714w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-201x300.jpg 201w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-686x1024.jpg 686w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-768x1147.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-1028x1536.jpg 1028w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-1371x2048.jpg 1371w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-1320x1971.jpg 1320w" sizes="(max-width: 1714px) 100vw, 1714px" /></p>
<p><span class="fontstyle0">Moreover, successive parts of the relatively new EU Artificial Intelligence Regulation (AI Act) are now gradually entering into force. The aim of the regulation is “to improve the functioning of the internal market by laying down a uniform legal framework, in particular for the development, placing on the market, putting into service and use of Artificial Intelligence systems (…) to promote the uptake of human-centric and trustworthy Artificial Intelligence (…) and to support innovation” (Regulation (EU) 2024/1689 of the European Parliament and of the Council – The Artificial Intelligence Act). Although the AI Act will fully apply as of 2 August 2026, the provisions of Chapters I and II are already binding and should be applied now (AI Act, art. 113).</span></p>
<p><span class="fontstyle0">Despite the fact that the AI Act includes several significant provisions from a consumer protection standpoint, such as the prohibition of social scoring and the right to file a complaint with a market surveillance authority if an AI system is believed to violate the regulation, European consumer advocacy groups have raised concerns about legal gaps that fail to fully address the risks consumers are exposed to in the context of AI deployment. According to these organizations, the AI Act is not capable of fully eliminating the risks associated with the use of AI tools in consumer interactions. In their view, the regulation focuses primarily on high-risk systems, while many widespread applications of AI, such as the use of chatbots, fall outside its scope (BEUC, 2023).</span></p>
<p><span class="fontstyle0">Such a situation may lead to the emergence of national legislative solutions addressing selected risks associated with the use of AI, which in turn could result in the fragmentation of legal provisions and hinder the assurance of a uniform level of protection for European Union citizens with respect to the same technological products and services (Bertolini, 2025, pp. 9–10).</span></p>
<p><span class="fontstyle0">Referring back to the earlier example of potential gender discrimination in the Apple Card credit approval process, it is worth noting that, under EU law, a consumer in </span><span class="fontstyle0">a similar situation could rely on Article 22 of the General Data Protection Regulation (GDPR). This provision entitles the data subject, whether a potential or actual client, to request clarification regarding the logic behind the algorithmic decision on their credit limit, and to demand a reassessment of the outcome by a human decision-maker.</span></p>
<p><span class="fontstyle0">Additionally, the European Union has in place anti-discrimination regulations – such as Directive 2004/113/EC of 13 December 2004, implementing the principle of equal treatment between men and women in the access to and supply of goods and services. Moreover, if such an incident were to occur in Poland, an entity that actually employed a discriminatory algorithm could face sanctions from the Office of Competition and Consumer Protection (UOKiK), as its actions may constitute a violation of collective consumer interests (Polish Act on Competition and Consumer Protection, Article 24). The activities of this Office will be discussed in more detail in the following sections of this article.</span></p>
<p><span class="fontstyle0">Similarly, in scenarios involving potentially incorrect decisions issued by an AI-driven complaint resolution system, or where a university student receives inaccurate information from an “intelligent” dean’s office, current legal frameworks would regard such instances as the equivalent of human error. Ultimately, responsibility for the functioning and consequences of AI systems rests with the individual or institution that has introduced and operates them (Paprocki, 2025).</span></p>
<p><span class="fontstyle0">The consumer submitting a complaint would retain the right to exercise their entitlement (e.g., to repair or replacement of the product) (Polish Consumer Rights Act, Article 43d). The consumer could also notify the UOKiK, which would assess whether the company had violated the collective interests of consumers (Polish Competition and Consumer Protection Act, Article 24). In cases where complaint processing is delegated to a malfunctioning algorithm, the UOKiK has begun examining such situations and emphasizes that the use of AI does not relieve businesses of their responsibility to review consumer complaints in a fair and timely manner (Infor.pl, 2023).</span></p>
<p><span class="fontstyle0">However, for a student who received incorrect information via an AI system, pursuing legal remedies in response to the negative consequences of such inadequate support may prove to be a significant challenge. Legal provisions do not always recognize a student as a consumer eligible for protection under all the legal acts listed in Table 2. Nevertheless, if a student were to enter into an agreement based on incorrect information provided by a chatbot, the issue of determining liability for being misled by AI could have a valid legal basis (Warchoł-Lewucka, 2024). In the case of an incorrect response provided by a “smart” dean’s office – regarding, for instance, the current class schedule – the consequences of a student’s absence from mandatory classes held on a date not indicated by the chatbot would likely be borne solely by the student.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0"><span class="fontstyle2">3.2. Regulatory and supervisory institutions and their role</span></span></strong></p>
<p><span class="fontstyle0">Since the broad application of AI in areas such as the consumer market is a relatively new phenomenon, the institutional structure aimed at protecting consumers from AI-related risks is still evolving. Additionally, the complexity of AI use cases necessitates coordination and cooperation among the various regulatory and supervisory authorities.</span></p>
<p><span class="fontstyle0">In the Polish legal system, the Office of Competition and Consumer Protection (UOKiK), established in 1990, serves as the main institution responsible for safeguarding consumer rights (UOKiK, n.d.). Although no existing legal act explicitly names the UOKiK as the principal supervisory authority overseeing the impact of Artificial Intelligence on the consumer market, the Office actively monitors and engages with developments concerning the application of algorithms in consumer-facing processes. Its current activities include assessments of chatbot functionality in the telecommunications market and in e-commerce services – most notably in food delivery apps and online marketplaces (Infor.pl, 2023).</span></p>
<p><span class="fontstyle0">The UOKiK is also striving to harness AI to enhance consumer protection on the Polish market. An example of this effort is the implementation of the project entitled “Detection and elimination of dark patterns using Artificial Intelligence,” which aims to develop an AI-based tool capable of identifying unfair uses of so-called dark patterns on commercial websites (UOKiK, 2024). These are user-interface designs intentionally created to mislead </span><span class="fontstyle0">consumers, hinder the expression of genuine preferences, or manipulate users into taking predetermined actions. Such practices are intended to pressure consumers into making purchases they do not truly desire, or to manipulate them into revealing personal information they would not voluntarily provide in a more transparent context (Luguri &amp; Strahilevitz, 2021, p. 43).</span></p>
<p><span class="fontstyle0">It can be assumed that in the near future, the scope of UOKiK’s activities and responsibilities related to the use of Artificial Intelligence in the consumer market will continue to expand. It is likely that the authority will gradually acquire additional statutory powers aimed at enhancing the effectiveness of its supervisory activities in this area.</span></p>
<p><span class="fontstyle0">An additional authority involved in addressing the use of Artificial Intelligence with respect to personal data protection in Poland is the Personal Data Protection Office (UODO). Its counterpart at the EU level is the European Data Protection Board (EDPB), which coordinates data protection policies across member states.</span></p>
<p><span class="fontstyle0">The President of the UODO is the “competent authority for personal data protection” (Polish Personal Data Protection Act, Article 34(1)), with tasks including monitoring and enforcing the provisions of the GDPR, as well as promoting public awareness and understanding of the risks, rules, safeguards, and rights related to data processing (GDPR, Article 57(1)(a) and (b)).</span></p>
<p><span class="fontstyle0">In the context of Artificial Intelligence, the Personal Data Protection Office (UODO) examines the impact of AI on individuals’ privacy and the protection of their personal data (UODO, n.d.). The UODO is authorized, among other things, to impose administrative fines for violations of the GDPR, including the aforementioned Article 22 (e.g., failure to provide human verification of automated data processing in cases where the decision produces legal effects for the consumer).</span></p>
<p><span class="fontstyle0">Among the responsibilities of the European Data Protection Board (EDPB, or EROD) is providing guidance to the European Commission on issues concerning data protection – particularly with regard to proposed amendments to the GDPR and broader legislative initiatives within the EU (EDPB, n.d.). Notably, at its inaugural plenary meeting in 2018, the EDPB adopted guidelines addressing automated decision-making and profiling (EDPB, 2018).</span></p>
<p><span class="fontstyle0">At the EU level, the European Artificial Intelligence Board was established to oversee </span><span class="fontstyle0">the proper implementation of the AI Act (European Commission). Moreover, the European Data Protection Supervisor (EDPS) plays a key role in ensuring that all EU institutions and bodies respect citizens’ privacy rights during personal data processing. The EDPS is also responsible for tracking the development of emerging technologies that may impact data protection and for carrying out investigations into relevant matters falling within its jurisdiction (european-union.europa.eu). Accordingly, it may be concluded that the enforcement of legal standards regarding the protection of Polish consumers’ personal data and the appropriate use of AI-assisted </span><span class="fontstyle0">tools involves multiple institutions operating at both the national and European levels. </span></p>
<p><span class="fontstyle0">Determining which body is responsible in a specific case should depend exclusively on the type of suspected violation (Table 3).</span></p>
<p><strong><span class="fontstyle2">Table 3. </span><span class="fontstyle3">Comparison of the scope of responsibilities of Polish institutions overseeing the consumer market.</span></strong></p>
<p><img decoding="async" class="aligncenter size-full wp-image-8522" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-scaled.jpg" alt="" width="1019" height="2560" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-scaled.jpg 1019w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-119x300.jpg 119w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-408x1024.jpg 408w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-768x1929.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-611x1536.jpg 611w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-815x2048.jpg 815w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-1320x3316.jpg 1320w" sizes="(max-width: 1019px) 100vw, 1019px" /></p>
<p><span class="fontstyle0">However, due to the fast-paced development of Artificial Intelligence in an ever-growing range of consumer-facing applications, it is highly probable that not all risks stemming from AI usage are adequately addressed in existing legal frameworks, and that responsibility for such risks may not fall solely within the remit of a single regulatory body. A relevant example would be a chatbot’s improper handling of a consumer complaint, accompanied by a breach of personal data protection regulations – particularly involving sensitive data. In such circumstances, the case would require joint consideration by at least two competent authorities, such as the UOKiK and UODO.</span></p>
<p><span class="fontstyle0">Thus, it is crucial to ensure not only the constant oversight of emerging AI-related risks and the ongoing adjustment of relevant legislation and institutional responsibilities, but also effective interdisciplinary collaboration between the entities tasked with safeguarding consumer rights.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle2">4. Responsibility for algorithmic decision-making</span></strong></span></p>
<p><span class="fontstyle0">When analyzing the risks associated with the use of Artificial Intelligence in consumer services, it is essential to consider the issue of responsibility for erroneous decisions made by algorithms. AI itself does not possess legal personality and therefore cannot be held directly accountable (Bączyk-Rozwadowska, 2022, p. 9). Responsibility may lie solely with a natural or legal person who exercises control over the operation and deployment of AI-driven systems (Kulicki, 2025). As one analyst has put it, “In principle, liability for errors stemming from the system’s architecture or software should rest with the manufacturer, whereas responsibility for misuse of the system lies with the end user” (Trzaska, 2024). However, given that there is currently no specific legal act that directly assigns responsibility for damages caused by Artificial Intelligence, it remains challenging to clearly designate a natural or legal person as directly liable for errors resulting from AI operations (Trzaska, 2024).</span></p>
<p><span class="fontstyle0">The existing academic literature offers a range of proposals concerning the entity that could be considered “responsible” for decisions made by AI systems, ranging from the software developer who implemented faulty algorithms (programistajava.pl, 2025), through the system operator or controller (Kaniewski &amp; Kowacz, 2023), to the end user (Infinity Insurance Brokers, n.d.), who may be responsible for the proper use of artificial intelligence systems (Buiten, 2024, pp. 256–257).</span></p>
<p><span class="fontstyle0">Certain authors suggest a model in which responsibility is distributed among various groups of stakeholders (programistajava.pl, 2025). Meanwhile, other sources highlight the possibility that, given the considerable complexity of the AI value chain, it may not always be possible to clearly identify the entity responsible for a specific error (Jelińska-Sabatowska, 2025). In many AI-driven processes involved in the provision of products and services, multiple entities participate (Buiten et al., 2023, p. 11). Legal counsels also point to a new type of risk associated with the use of AI – namely, the risk of a “liability gap” (Nogacki, 2024).</span></p>
<p><span class="fontstyle0">The challenge of assigning liability for the outcomes of Artificial Intelligence stems from factors including the following (Nogacki, 2025):</span></p>
<p><span class="fontstyle0">• autonomy – AI systems make decisions without human oversight,</span></p>
<p><span class="fontstyle0">• opacity – the AI decision-making process may be difficult to understand,</span></p>
<p><span class="fontstyle0">• data dependency – flawed data can lead AI to make erroneous decisions,</span></p>
<p><span class="fontstyle0">• value chain complexity – the development and implementation of AI involves multiple entities.</span></p>
<p>&nbsp;</p>
<p><span class="fontstyle0">Nevertheless, the most frequently cited example of a party considered responsible for decisions made by AI is the entrepreneur who implements an AI-based process within their organization. As such, they must take into account the possibility of incurring contractual liability in the event that damage is caused by Artificial Intelligence – such as when an error results in the failure to fulfill a contract concluded with a business partner (Tak Prawnik, 2025). They may also face tort liability, for example in the case of an accident caused by an autonomous vehicle (Kaniewski et al., 2023). However, some sources argue that the previously mentioned “opacity” of AI decision-making undermines the application of standard principles of tort liability (Nogacki, 2025).</span></p>
<p><span class="fontstyle0">Apart from the legal challenge of clearly identifying the entity liable for damage caused by Artificial Intelligence, another significant obstacle is the difficulty in proving the “fault” of the AI system itself. To do so, the consumer – or their legal representative – must gain access to and understand how the AI tool functions, which may require insight into complex and often non-transparent decision-making processes. In practice, however, this may prove difficult or even impossible. Among other factors, this is due to the so-called “black box problem” (Taveira da Fonseca et al., 2024, p. 300) – that is, the system’s recommendations may not be explainable within the framework of traditional linear cause-and-effect logic (Kroplewski, 2023, p. 112).</span></p>
<p><span class="fontstyle0">An additional risk for banking customers related to the use of Artificial Intelligence is the potential overdependence on AI systems in decision-making, predictive analytics, and recommendation processes. Even if a human remains the final decision-maker, they may defer too strongly to the suggestions provided by AI – perceiving them as inherently correct </span><span class="fontstyle0">or derived from deep and reliable analysis (Szostek et al., 2022, p. 55). In practice, however, there may be uncertainty as to whether the data used by automated models is of adequate quality, which may result, for example, in an inaccurate assessment of a customer’s creditworthiness (Szostek et al., 2022, p. 26). In such circumstances, the harmed consumer may face significant challenges in demonstrating that the unfair treatment resulted from the actions of both the AI system and the bank’s staff.</span></p>
<p><span class="fontstyle0">In the context of seeking redress against an erroneous AI-generated decision, the consumer must first be aware that such an irregularity has occurred. The literature on accountability for AI-driven decisions highlights the so-called “information gap,” whereby an individual may not realize that their adverse situation results from the actions of Artificial Intelligence (Ziosi et al., 2023, p. 9). What is crucial, therefore, is not only the existence of legal provisions designed to prevent the effects of erroneous AI decisions, but also the consumer’s own awareness of the protections available under the relevant legal framework.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0"><span class="fontstyle2">5. Conclusions and recommendations</span></span></strong></span></p>
<p><span class="fontstyle0">While the application of Artificial Intelligence in the consumer market brings various advantages – such as personalized product offerings – it also entails significant risks. These include the reliance of algorithms on outdated or biased data, which may result in the unequal treatment of certain customer groups.</span></p>
<p><span class="fontstyle0">Additionally, because consumers are often unable to trace the logic behind an algorithm’s operation, they may misinterpret AI-generated decisions, as exemplified by the case concerning the determination of credit limits in the Apple Card program.</span></p>
<p><span class="fontstyle0">Although existing legislation ensures a certain level of protection for consumers against the risks posed by Artificial Intelligence – such as the right to human oversight and the prohibition of discriminatory practices – there are still notable legal gaps. In particular, the opacity of AI decision-making processes creates challenges in proving errors and seeking redress.</span></p>
<p><span class="fontstyle0">One of the legal gaps identified in the article concerns the question of who should be held accountable for decisions made by Artificial Intelligence. Since AI does not have legal personality, it cannot itself bear responsibility for erroneous algorithmic decisions, and no existing provision in either Polish or EU legislation explicitly designates the entity liable </span><span class="fontstyle0">for the malfunction of AI systems. In the scholarly literature, the entity most frequently identified as “responsible” is the provider making the AI-based solution available to consumers. However, responsibility for the harms caused by Artificial Intelligence is also sometimes attributed to the software developers whose algorithms prove faulty, as well as to end users. </span></p>
<p><span class="fontstyle0">In summary, the answer to the research question posed in this article is as follows: Polish and EU legal acts, together with institutional oversight, provide consumers with protection against the negative consequences of decisions made by AI systems. However, this protection does not extend to the full spectrum of potential risks arising from the use of Artificial Intelligence in consumer markets. Legal gaps remain in this area, and the introduction of new legislation that keeps pace with the ongoing development of AI capabilities represents a major regulatory challenge, making the complete elimination of such gaps difficult – if not impossible – in the foreseeable future.</span></p>
<p><span class="fontstyle0">As the use of Artificial Intelligence becomes more widespread, the frequency of incidents involving AI systems continues to rise. Regulatory bodies at both the European and national levels, along with consumer protection authorities, are still building the expertise and acquiring the instruments required to monitor and control AI effectively. This transitional phase contributes to the persistence of certain regulatory blind spots and legal uncertainties. To enhance consumer protection in a market environment where an ever-growing number of processes are supported by Artificial Intelligence – systems that may still be prone to error – it is crucial to implement reforms across legislative, institutional, and educational spheres.</span></p>
<p><span class="fontstyle0">With regard to recommendations, priority should be given to measures designed to address the identified shortcomings in the Polish legal system and to strengthen safeguards for consumers affected by AI-driven decision-making, such as the following:</span></p>
<p><span class="fontstyle0">• Clarifying legal liability for individual entities involved in the development, provision, and use of AI – for example, by introducing a provision into the Polish Competition and Consumer Protection Act stating that liability for errors made by Artificial Intelligence rests with the entity that makes the AI-based tool available to consumers, or with another entity explicitly designated by that provider in the applicable terms and conditions.</span></p>
<p><span class="fontstyle0">• Introducing a legal provision that facilitates the burden of proof for consumers in disputes concerning the malfunctioning of Artificial Intelligence – given that proving an AI-related error is often difficult or even impossible for the average consumer, a reasonable solution would be to shift the burden of proof to the entity providing the AI-based tool to consumers (or to another entity explicitly designated in the relevant terms and conditions). In the event of a dispute, this entity would be required to demonstrate that the AI system did not make an error; otherwise, the case would be resolved in favor of the consumer.</span></p>
<p><span class="fontstyle0">• Requiring algorithmic transparency – consumers should have the right to understand the logic behind decisions made by Artificial Intelligence that affect them personally; for example, by being granted access to terms and conditions that include information about the characteristics or factors the AI takes into account when making specific decisions.</span></p>
<p><span class="fontstyle0">• Establishing a statutory definition of the competences of supervisory authorities – for example, a dedicated department could be established within Poland’s Office of Competition and Consumer Protection (UOKiK), staffed with experts in artificial intelligence systems, tasked with analyzing cases in the consumer market suspected of involving faulty operation of AI-based systems.</span></p>
<p><span class="fontstyle0">• Promoting consumer education on AI – through initiatives aimed at increasing consumer awareness of the risks associated with artificial intelligence, as well as of the rights they have with regard to protection against such risks.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0" style="font-size: 18pt;"><span class="fontstyle2">References</span></span></strong></p>
<p><span class="fontstyle0">Accenture. (2024). <span class="fontstyle3">Banking on AI: Banking top 10 trends for 2024. https://www.accenture.com/content/dam/accenture/final/industry/banking/document/Accenture-Banking-Top-10-Trends-2024.pdf</span></span></p>
<p><span class="fontstyle0">Ahn, D., Almaatouq, A., Gulabani, M., &amp; Hosanagar, K. (2021). Will we trust what we don’t understand? Impact of model interpretability and outcome feedback on trust in AI. <span class="fontstyle3">SSRN Electronic Journal. </span>https://doi.org/10.2139/ssrn.3964332</span></p>
<p><span class="fontstyle0">Bączyk-Rozwadowska, K. (2022). Odpowiedzialność cywilna za szkody wyrządzone w związku z zastosowaniem sztucznej inteligencji w medycynie [Civil liability for damages arising from the use of artificial intelligence in medicine]. <span class="fontstyle3">Przegląd Prawa Medycznego, 3</span>(3–4). https://przegladprawamedycznego.pl/index.php/ppm/article/view/142</span></p>
<p><span class="fontstyle0">Bertolini, A. (2025). <span class="fontstyle3">Artificial intelligence and civil liability: A European perspective. </span>Policy Department for Citizens’ Rights and Constitutional Affairs, Directorate-General for Internal Policies.</span></p>
<p><span class="fontstyle0">BEUC. (2021, October 7). <span class="fontstyle3">Regulating AI to protect the consumer. </span>Brussels: BEUC. https://www.beuc.eu/sites/default/files/publications/beuc-x-2021-088_regulating_ai_to_protect_the_consumer.pdf</span></p>
<p><span class="fontstyle0">BEUC. (2023, June 11). <span class="fontstyle3">EU rules on AI lack punch to sufficiently protect consumers. </span>https://www.beuc.eu/press-releases/eu-rules-ai-lack-punch-sufficiently-protect-consumers</span></p>
<p><span class="fontstyle0">Biswas, S., Carson, B., Chung, V., Singh, S., &amp; Thomas, R. (2020). <span class="fontstyle3">AI-bank of the future: Can banks meet the AI challenge? </span>McKinsey &amp; Company. https://www.mckinsey.com/industries/financial-services/our-insights/ai-bank-of-the-future-can-banks-meet-the-ai-challenge</span></p>
<p><span class="fontstyle0">Bondos, I. (2016). Reakcje na dynamicznie ustalane ceny – czy konsumenci mają podwójne standardy oceny uczciwości cen online? [Reactions to dynamic pricing: Do consumers apply double standards when assessing online price fairness?]. <span class="fontstyle3">Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu, (460), 173–188. </span>https://www.dbc.wroc.pl/publication/40250</span></p>
<p><span class="fontstyle0">Buiten, M. C. (2024). Product liability for defective AI. <span class="fontstyle3">European Journal of Law and Economics, 57</span>, 239–273. https://doi.org/10.1007/s10657-024-09794-z</span></p>
<p><span class="fontstyle0">Buiten, M., de Streel, A., &amp; Peitz, M. (2023). The law and economics of AI liability. <span class="fontstyle3">Computer Law &amp; Security Review, 48</span>, Article 105794. https://doi.org/10.1016/j.clsr.2023.105794 </span></p>
<p><span class="fontstyle0">Campbell, I. C. (2021, March 23). The Apple Card doesn’t actually discriminate against women, investigators say. </span><span class="fontstyle2">The Verge. </span><span class="fontstyle0">https://www.theverge.com/2021/3/23/22347127/goldman-sachs-apple-card-no-gender-discrimination</span></p>
<p><span class="fontstyle0">Capgemini. (2024). </span><span class="fontstyle2">World retail banking report 2024. </span><span class="fontstyle0">Capgemini Research Institute.</span></p>
<p><span class="fontstyle0">Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. </span><span class="fontstyle2">Frontiers in Human Dynamics, 6</span><span class="fontstyle0">, Article 1421273. https://doi.org/10.3389/fhumd.2024.1421273</span></p>
<p><span class="fontstyle0">Contissa, G., Docter, K., Lagioia, F., Lippi, M., Micklitz, H. W., Palka, P., Sartor, G., &amp; Torroni, P. (2018). CLAUDETTE meets GDPR: Automating the evaluation of privacy policies using artificial intelligence. </span><span class="fontstyle2">SSRN Electronic Journal. </span><span class="fontstyle0">https://doi.org/10.2139/ssrn.3208596</span></p>
<p><span class="fontstyle0">Digital Poland. (2023). </span><span class="fontstyle2">Technologia w służbie społeczeństwu: Czy Polacy zostaną społeczeństwem 5.0? </span><span class="fontstyle0">[Technology in the service of society: Will Poland become a Society 5.0?]. Warsaw.</span></p>
<p><span class="fontstyle0">Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU as regards the better enforcement and modernisation of Union consumer protection rules. </span><span class="fontstyle2">Official Journal of the European Union, L 328, </span><span class="fontstyle0">7–28 (18 December 2019).</span></p>
<p><span class="fontstyle0">European Commission. (n.d.). </span><span class="fontstyle2">AI Board (European Artificial Intelligence Board)</span><span class="fontstyle0">. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/ai-board</span></p>
<p><span class="fontstyle0">European Data Protection Board (EDPB). (2018). </span><span class="fontstyle2">Zautomatyzowane podejmowanie decyzji i profilowanie </span><span class="fontstyle0">[Automated decision-making and profiling]. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/automated-decision-making-and-profiling_enpl</span></p>
<p><span class="fontstyle0">European Data Protection Board (EDPB). (n.d.). </span><span class="fontstyle2">Rola EROD </span><span class="fontstyle0">[Role of the EDPB]</span><span class="fontstyle2">. </span><span class="fontstyle0">https://www.edpb.europa.eu/role-edpb_enpl</span></p>
<p><span class="fontstyle0">European Union. (n.d.). </span><span class="fontstyle2">European Data Protection Supervisor (EDPS) </span><span class="fontstyle0">[Europejski Inspektor Ochrony Danych]. https://european-union.europa.eu/institutions-law-budget/institutions-and-bodies/search-all-eu-institutions-and-bodies/european-data-protection-supervisor-edps_enpl</span></p>
<p><span class="fontstyle0">EY (Ernst &amp; Young). (2024, September 18). </span><span class="fontstyle2">Badanie EY: Rosną obawy konsumentów o bezpieczeństwo ich danych </span><span class="fontstyle0">[EY study: Consumer concerns about data security are growing]</span><span class="fontstyle2">. </span><span class="fontstyle0">https://www.ey.com/pl_pl/newsroom/2024/09/rosna-obawy-konsumentow-o-bezpieczenstwo-ich-danych</span></p>
<p><span class="fontstyle0">GlobeNewswire / Precedence Research. (2025, February 11). </span><span class="fontstyle2">Artificial Intelligence skyrocketing, shaking the market with $3,680.47 Bn by 2034</span><span class="fontstyle0">. https://www.globenewswire.com/news-release/2025/02/11/3024340/0/en/Artificial-Intelligence-Skyrocketing-Shaking-the-Market-with-3-680-47-Bn-by-2034.html</span></p>
<p><span class="fontstyle0">Infinity Insurance Brokers. (n.d.). </span><span class="fontstyle2">Odpowiedzialność za szkody wywołane przez AI </span><span class="fontstyle0">[Liability for damages caused by AI]. https://ibu.pl/blog/odpowiedzialnosc-za-szkody-wywolane-przez-ai</span></p>
<p><span class="fontstyle0">Infor.pl. (2023, April 13). </span><span class="fontstyle2">Chatboty, algorytmy, sztuczna inteligencja a prawa konsumenta – stanowisko UOKiK </span><span class="fontstyle0">[Chatbots, algorithms, AI and consumer rights – UOKiK’s position]. https://ksiegowosc.infor.pl/wiadomosci/5722900,chatboty-algorytmy-sztuczna-inteligencja-a-prawa-konsumenta-stanowisko-uokik.html</span></p>
<p><span class="fontstyle0">Iron Mountain. (2025, July 15). </span><span class="fontstyle2">Sztuczna inteligencja w edukacji – szansa czy zagrożenie? </span><span class="fontstyle0">[Artificial intelligence in education – Opportunity or threat?]. https://www.ironmountain.com/pl-pl/resources/blogs-and-articles/a/artificial-intelligence-in-education-opportunity-or-threat</span></p>
<p><span class="fontstyle0">Jelińska-Sabatowska, A. (2025, May 28). </span><span class="fontstyle2">Prawa konsumentów w erze AI: Jak sztuczna inteligencja zmienia relacje w sferze B2C </span><span class="fontstyle0">[Consumer rights in the AI era]. Legalis. C.H.Beck. https://www.legalis.pl/prawa-konsumentow-w-erze-ai-jak-sztuczna-inteligencja-zmienia-relacje-w-sferze-b2c/</span></p>
<p><span class="fontstyle0">Jurczak, T. (2023, December 30). </span><span class="fontstyle2">UOKiK otrzymuje skargi na boty </span><span class="fontstyle0">[UOKiK receives complaints about bots]. </span><span class="fontstyle2">Gazeta Prawna</span><span class="fontstyle0">. https://serwisy.gazetaprawna.pl/poradnik-konsumenta/artykuly/8658758,chatboty-voiceboty-uokik-boty-prawa-konsumenta.html</span></p>
<p><span class="fontstyle0">KALASOFT. (n.d.). </span><span class="fontstyle2">Inteligentny dziekanat, inteligentna rekrutacja: Jak sztuczna inteligencja zmieniła komunikację uczelni ze studentami </span><span class="fontstyle0">[Smart dean’s office, smart admissions]. https://www.kalasoft.pl/sztucznainteligencja/</span></p>
<p><span class="fontstyle0">Kaniewski, P., &amp; Kowacz, K. (2023, October 3). </span><span class="fontstyle2">Co jeśli AI zawiedzie, czyli odpowiedzialność cywilna za sztuczną inteligencję </span><span class="fontstyle0">[What if AI fails? Civil liability for AI]. </span><span class="fontstyle2">ITwiz</span><span class="fontstyle0">. https://itwiz.pl/co-jesli-ai-zawiedzie-czyli-odpowiedzialnosc-cywilna-za-sztuczna-inteligencje/</span></p>
<p><span class="fontstyle0">Keller, A., Martins Pereira, C., &amp; Lucas Pires, M. (2024). The European Union’s approach to artificial intelligence and the challenge of systemic risk. In H. Sousa Antunes, P. M. Freitas, A. L. Oliveira, C. Martins Pereira, E. Vaz de Sequeira, &amp; L. Barreto Xavier (Eds.), </span><span class="fontstyle2">Multidisciplinary perspectives on artificial intelligence and the law </span><span class="fontstyle0">(pp. 415–439). Springer. https://doi.org/10.1007/978-3-031-41264-6_22</span></p>
<p><span class="fontstyle0">Kornaś, W. (2024, August 22). </span><span class="fontstyle2">Sztuczna inteligencja w szkolnictwie wyższym </span><span class="fontstyle0">[Artificial intelligence in higher education]. </span><span class="fontstyle2">Wyższa Szkoła Bezpieczeństwa </span><span class="fontstyle0">(WSB) Blog. https://www.wsb.net.pl/technologia/sztuczna-inteligencja-w-szkolnictwie-wyzszym/</span></p>
<p><span class="fontstyle0">Kroplewski, R. (2023). Odporność AI dla odpornej wspólnoty [AI resilience for a resilient community]. In A. Szczęsna &amp; M. Stachoń (Eds.), </span><span class="fontstyle2">Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie </span><span class="fontstyle0">(pp. 111–122). CyberPOLICY NASK – Państwowy Instytut Badawczy. https://cyberpolicy.nask.pl/wp-content/uploads/2023/09/Cyberbezpieczenstwo-AI.-AI-w-cyberbezpieczenstwie.pdf</span></p>
<p><span class="fontstyle0">Kruszyńska, A. (2024, January 4). </span><span class="fontstyle2">Wyłoniono Słowo Roku 2023. Kapituła podała wyniki </span><span class="fontstyle0">[The Word of the Year 2023 has been announced]. </span><span class="fontstyle2">Polska Agencja Prasowa (PAP). </span><span class="fontstyle0">https://www.pap.pl/aktualnosci/wyloniono-slowo-roku-2023-kapitula-podala-wyniki</span></p>
<p><span class="fontstyle0">Kulawik, T. (2024, September 17). </span><span class="fontstyle2">Zrozumieć decyzje algorytmów – wyjaśnialność sztucznej inteligencji </span><span class="fontstyle0">[Understanding algorithmic decisions – Explainability of AI]. </span><span class="fontstyle2">ING Tech Blog. </span><span class="fontstyle0">https://techblog.ing.pl/blog/zrozumiec-decyzje-algorytmow-wyjasnialnosc-sztucznej-inteligencji</span></p>
<p><span class="fontstyle0">Kulicki, Ł. (2025). </span><span class="fontstyle2">Szkody wyrządzone przez sztuczną inteligencję – Kto ponosi odpowiedzialność? </span><span class="fontstyle0">[Damages caused by AI – Who is liable?]. After Legal Kancelaria. https://umowywit.pl/szkody-wyrzadzone-przez-ai-kto-odpowiada/</span></p>
<p><span class="fontstyle0">Lagioia, F., Jabłonowska, A., Liepiņa, R., &amp; Drazewski, K. (2022). AI in search of unfairness in consumer contracts: The terms of service landscape. </span><span class="fontstyle2">Journal of Consumer Policy, 45</span><span class="fontstyle0">(3), 481–536. https://doi.org/10.1007/s10603-022-09520-9</span></p>
<p><span class="fontstyle0">Luguri, J., &amp; Strahilevitz, L. J. (2021). Shining a light on dark patterns. </span><span class="fontstyle2">The Journal of Legal Analysis, 13</span><span class="fontstyle0">(1), 43–109. https://doi.org/10.1093/jla/laaa006</span></p>
<p><span class="fontstyle0">mp/dap. (2023, December 30). </span><span class="fontstyle2">Taki był rok 2023 w gospodarce. Dziesięć najważniejszych wydarzeń </span><span class="fontstyle0">[2023 in review: Ten key economic events]. </span><span class="fontstyle2">TVN24.pl</span><span class="fontstyle0">. https://tvn24.pl/biznes/z-kraju/rok-2023-w-gospodarce-dziesiec-najwazniejszych-wydarzen-st7537170</span></p>
<p><span class="fontstyle0">Myszakowska-Kaczała, D. (2024). </span><span class="fontstyle2">AI – Jak sztuczna inteligencja zmienia życie konsumentów? </span><span class="fontstyle0">[AI – How AI is changing consumers’ lives]. LexCultura. https://lexcultura.pl/ai-jak-sztuczna-inteligencja-zmienia-zycie-konsumentow/</span></p>
<p><span class="fontstyle0">Ness, S., Volkivskyi, M., Muhammad, T., &amp; Balzhyk, K. (2024). Banking 4.0: The impact of artificial intelligence on the banking sector and its transformation of modern banks. </span><span class="fontstyle2">International Journal of Innovative Science and Research Technology, 9</span><span class="fontstyle0">(2), 1064–1072. https://ijisrt.com/banking-40-the-impact-of-artificial-intelligence-on-the-banking-sector-and-its-transformation-of-modern-banks</span></p>
<p><span class="fontstyle0">Nogacki, R. (2024, April 5). </span><span class="fontstyle2">Prawne problemy ze sztuczną inteligencją: Czy prawo powstrzyma „bunt maszyn”? </span><span class="fontstyle0">[Legal issues with artificial intelligence: Will the law stop the “machine rebellion”?]. </span><span class="fontstyle2">Gazeta Prawna / Kancelaria Prawna Skarbiec. </span><span class="fontstyle0">https://www.gazetaprawna.pl/firma-i-prawo/artykuly/9422540,prawne-problemy-ze-sztuczna-inteligencja-czy-prawo-powstrzyma-bunt-m.html</span></p>
<p><span class="fontstyle0">Nogacki, R. (2025, February 10). </span><span class="fontstyle2">Odpowiedzialność prawna za decyzje systemów AI: Kto odpowiada, gdy algorytm się myli? </span><span class="fontstyle0">[Legal responsibility for AI system decisions: Who is responsible when the algorithm makes a mistake?]. </span><span class="fontstyle2">Business Centre Club / Kancelaria Prawna Skarbiec. </span><span class="fontstyle0">https://www.bcc.org.pl/odpowiedzialnosc-prawna-za-decyzje-systemow-ai-kto-odpowiada-gdy-algorytm-sie-myli/</span></p>
<p><span class="fontstyle0">Nogueira, E., Lopes, J. M., &amp; Gomes, S. (2025). The new era of artificial intelligence in consumption: Theoretical framing, review and research agenda. </span><span class="fontstyle2">Management Review Quarterly, 75</span><span class="fontstyle0">(3), 965–1000. https://doi.org/10.1007/s11301-024-00390-1</span></p>
<p><span class="fontstyle0">Nowakowski, M. (2021, August 3). </span><span class="fontstyle2">Czy zbyt samodzielne bankowe algorytmy AI mogą dyskryminować klientów ubiegających się o kredyty? </span><span class="fontstyle0">[Can overly autonomous AI algorithms in banking discriminate against loan applicants?]. </span><span class="fontstyle2">Bank.pl. </span><span class="fontstyle0">https://bank.pl/czy-zbyt-samodzielne-bankowe-algorytmy-ai-moga-dyskryminowac-klientow-ubiegajacych-sie-o-kredyty/</span></p>
<p><span class="fontstyle0">Organisation for Economic Co-operation and Development (OECD). (n.d.). </span><span class="fontstyle2">Artificial intelligence. </span><span class="fontstyle0">https://www.oecd.org/en/topics/policy-issues/artificial-intelligence.html</span></p>
<p><span class="fontstyle0">Paprocki, T. (2025, June 4). </span><span class="fontstyle2">Prawne wymogi automatyzacji: Co może zrobić AI, a co nadal wymaga pracy człowieka? AI Act od 2026 roku – co nowe przepisy zmienią w biznesie? </span><span class="fontstyle0">[Legal requirements for automation: What can AI do, and what still requires human work? The AI Act from 2026 – what will change for business?]. Infor.pl / Kancelaria Paprocki, Wojciechowski &amp; Partnerzy. https://kadry.infor.pl/zatrudnienie/umowa-o-prace/6960354,prawne-wymogi-automatyzacji-co-moze-zrobic-ai-a-co-nadal-wymaga-pracy-czlowieka-ai-act-od-2026-roku-co-nowe-przepisy-zmienia-w-biznesie.html</span></p>
<p><span class="fontstyle0">PARP Grupa PFR. (2023). </span><span class="fontstyle2">Rynek pracy, edukacja, kompetencje: Aktualne trendy i wyniki badań </span><span class="fontstyle0">[Labour market, education, skills: Current trends and research findings]. Wydanie specjalne. Polska Agencja Rozwoju Przedsiębiorczości.</span></p>
<p><span class="fontstyle0">Paterson, J. M. (2022). Misleading AI: Regulatory strategies for algorithmic transparency in technologies augmenting consumer decision-making. </span><span class="fontstyle2">Loyola Consumer Law Review, 34</span><span class="fontstyle0">(3), 558–589. https://doi.org/10.2139/ssrn.4164809</span></p>
<p><span class="fontstyle0">Polish Act on Counteracting Unfair Market Practices (2007). </span><span class="fontstyle2">Ustawa z dnia 23 sierpnia 2007 r. o przeciwdziałaniu nieuczciwym praktykom rynkowym </span><span class="fontstyle0">[Act of 23 August 2007 on Counteracting Unfair Market Practices], </span><span class="fontstyle2">Journal of Laws 2007, No. 171, item 1206, as amended.</span></p>
<p><span class="fontstyle0">Polish Civil Code (1964). </span><span class="fontstyle2">Ustawa z dnia 23 kwietnia 1964 r. – Kodeks cywilny </span><span class="fontstyle0">[Act of 23 April 1964 – Civil Code], </span><span class="fontstyle2">Journal of Laws 1964, No. 16, item 93, as amended.</span></p>
<p><span class="fontstyle0">Polish Competition and Consumer Protection Act (2007). </span><span class="fontstyle2">Ustawa z dnia 16 lutego 2007 r. o ochronie konkurencji i konsumentów </span><span class="fontstyle0">[Act of 16 February 2007 on Competition and Consumer Protection], </span><span class="fontstyle2">Journal of Laws 2007, No. 50, item 331, as amended.</span></p>
<p><span class="fontstyle0">Polish Consumer Rights Act (2014). </span><span class="fontstyle2">Ustawa z dnia 30 maja 2014 r. o prawach konsumenta </span><span class="fontstyle0">[Act of 30 May 2014 on Consumer Rights], </span><span class="fontstyle2">Journal of Laws 2014, item 827, as amended.</span></p>
<p><span class="fontstyle0">Polish Personal Data Protection Act (2018). </span><span class="fontstyle2">Ustawa z dnia 10 maja 2018 r. o ochronie danych osobowych </span><span class="fontstyle0">[Act of 10 May 2018 on the Protection of Personal Data], </span><span class="fontstyle2">Journal of Laws 2018, item 1000, as amended.</span></p>
<p><span class="fontstyle0">Precedence Research. (2025, February 11). </span><span class="fontstyle2">Artificial intelligence (AI) market size, share, and trends 2025 to 2034. </span><span class="fontstyle0">https://www.precedenceresearch.com/artificial-intelligence-market</span></p>
<p><span class="fontstyle0">ProgramistaJava.pl. (2025, April 9). </span><span class="fontstyle2">Prawo a AI – czy maszyna może mieć odpowiedzialność? </span><span class="fontstyle0">[Law and AI: Can a machine bear responsibility?]. https://programistajava.pl/2025/04/09/prawo-a-ai-czy-maszyna-moze-miec-odpowiedzialnosc/</span></p>
<p><span class="fontstyle0">Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). </span><span class="fontstyle2">Official Journal of the European Union, L 119, </span><span class="fontstyle0">1–88 (4 May 2016).</span></p>
<p><span class="fontstyle0">Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). </span><span class="fontstyle2">Official Journal of the European Union, L, 2024 </span><span class="fontstyle0">(18 July 2024).</span></p>
<p><span class="fontstyle0">Sandelowski, M. (2000). Whatever happened to qualitative description? </span><span class="fontstyle2">Research in Nursing &amp; Health, 23</span><span class="fontstyle0">(4), 334–340. https://doi.org/10.1002/1098-240X(200008)23:4&lt;334::AID-NUR9&gt;3.0.CO;2-G</span></p>
<p><span class="fontstyle0">Sojkin, B., Bartkowiak, P., &amp; Skuza, A. (2012). Determinants of higher education choices and student satisfaction: The case of Poland. </span><span class="fontstyle2">Higher Education, 63</span><span class="fontstyle0">(5), 565–581. https://doi.org/10.1007/s10734-011-9459-2</span></p>
<p><span class="fontstyle0">Stecyk, A. (2025, June 9). </span><span class="fontstyle2">Manus AI: Rewolucja w tworzeniu prezentacji akademickich i zmiana paradygmatu oceniania w środowisku edukacyjnym </span><span class="fontstyle0">[Manus AI: A revolution in academic presentation creation and a paradigm shift in educational assessment]. </span><span class="fontstyle2">Uniwersytet Szczeciński – AI Blog. </span><span class="fontstyle0">https://ai.usz.edu.pl/2025/06/09/manus-ai-rewolucja-w-tworzeniu-prezentacji-akademickich-i-zmiana-paradygmatu-oceniania-w-srodowisku-edukacyjnym/</span></p>
<p><span class="fontstyle0">Stecyk, A. (2025, May 6). </span><span class="fontstyle2">Sztuczna inteligencja w edukacji – szansa i wyzwanie </span><span class="fontstyle0">[Artificial intelligence in education – opportunity and challenge]. </span><span class="fontstyle2">Uniwersytet Szczeciński – AI Blog. </span><span class="fontstyle0">https://ai.usz.edu.pl/2025/05/06/sztuczna-inteligencja-w-edukacji-szansa-i-wyzwanie/</span></p>
<p><span class="fontstyle0">Szostek, D., Bar, G., Prabucki, R. T., &amp; Nowakowski, M. (2022). </span><span class="fontstyle2">Zastosowanie sztucznej inteligencji w bankowości – szanse oraz zagrożenia </span><span class="fontstyle0">[The use of artificial intelligence in banking – opportunities and risks]. Program Analityczno-Badawczy Fundacji Warszawski Instytut Bankowości. Warszawa.</span></p>
<p><span class="fontstyle0">Tak Prawnik. (2025, April 30). </span><span class="fontstyle2">Sztuczna inteligencja a przedsiębiorcy: Kto ponosi odpowiedzialność? </span><span class="fontstyle0">[Artificial intelligence and entrepreneurs: Who bears responsibility?]. </span><span class="fontstyle2">Poradnik Przedsiębiorcy. </span><span class="fontstyle0">https://poradnikprzedsiebiorcy.pl/-sztuczna-inteligencja-a-przedsiebiorcy-kto-ponosi-odpowiedzialnosc</span></p>
<p><span class="fontstyle0">Taveira da Fonseca, A., Vaz de Sequeira, E., &amp; Barreto Xavier, L. (2024). Liability for AI-driven systems. In H. Sousa Antunes, P. M. Freitas, A. L. Oliveira, C. Martins Pereira, E. Vaz de Sequeira, &amp; L. Barreto Xavier (Eds.), </span><span class="fontstyle2">Multidisciplinary perspectives on artificial intelligence and the law </span><span class="fontstyle0">(pp. 395–414). Springer. https://doi.org/10.1007/978-3-031-41264-6_21</span></p>
<p><span class="fontstyle0">Terryn, E., &amp; Martos Marquez, S. (2025). AI and consumer protection. In N. A. Smuha (Ed.), </span><span class="fontstyle2">The Cambridge handbook of the law, ethics and policy of artificial intelligence </span><span class="fontstyle0">(pp. 401–418). Cambridge University Press. https://doi.org/10.1017/9781009264844.029</span></p>
<p><span class="fontstyle0">The Guardian. (2019, November 10). </span><span class="fontstyle2">Apple Card issuer investigated after claims of sexist credit checks. </span><span class="fontstyle0">https://www.theguardian.com/technology/2019/nov/10/apple-card-issuer-investigated-after-claims-of-sexist-credit-checks</span></p>
<p><span class="fontstyle0">The Guardian. (2023, February 2). </span><span class="fontstyle2">ChatGPT reaches 100 million users two months after launch. </span><span class="fontstyle0">https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app</span></p>
<p><span class="fontstyle0">Trzaska, K. (2024, June 12). </span><span class="fontstyle2">Wciąż nie wiadomo, kto ponosi odpowiedzialność za szkodę wyrządzoną przez AI </span><span class="fontstyle0">[It is still unclear who bears responsibility for damages caused by artificial intelligence]. </span><span class="fontstyle2">Prawo.pl / Kancelaria Prawna Maciej Panfil i Partnerzy. </span><span class="fontstyle0">https://www.prawo.pl/biznes/szkoda-wyrzadzona-przez-al-kto-ponosi-odpowiedzialnosc-,528456.html</span></p>
<p><span class="fontstyle0">United Nations Conference on Trade and Development (UNCTAD). (2024). </span><span class="fontstyle2">Artificial intelligence and consumer protection. </span><span class="fontstyle0">Geneva: United Nations.</span></p>
<p><span class="fontstyle0">Urząd Ochrony Danych Osobowych (UODO). (n.d.). </span><span class="fontstyle2">Sztuczna inteligencja </span><span class="fontstyle0">[Artificial intelligence]. https://uodo.gov.pl/pl/p/sztuczna-inteligencja</span></p>
<p><span class="fontstyle0">Urząd Ochrony Konkurencji i Konsumentów (UOKiK). (2024, March 14). </span><span class="fontstyle2">Wielkie „wymiatanie” złych praktyk w e-commerce </span><span class="fontstyle0">[The great “cleanup” of unfair practices in e-commerce]. https://uokik.gov.pl/wielkie-wymiatanie-zlych-praktyk-w-e-commerce</span></p>
<p><span class="fontstyle0">Urząd Ochrony Konkurencji i Konsumentów (UOKiK). (n.d.). </span><span class="fontstyle2">O UOKiK </span><span class="fontstyle0">[About UOKiK]. https://uokik.gov.pl/o-uokik</span></p>
<p><span class="fontstyle0">Villamin, P., Lopez, V., Thapa, D. K., &amp; Cleary, M. (2024). A worked example of qualitative descriptive design: A step-by-step guide for novice and early career researchers. </span><span class="fontstyle2">Journal of Advanced Nursing, 82</span><span class="fontstyle0">(8), 1729–1745. https://doi.org/10.1111/jan.15756</span></p>
<p><span class="fontstyle0">Warchoł-Lewucka, R. (2024, July 29). </span><span class="fontstyle2">Kto ponosi odpowiedzialność, gdy chatbot udzieli błędnej odpowiedzi? </span><span class="fontstyle0">[Who bears responsibility if a chatbot provides misleading or inaccurate information?]. </span><span class="fontstyle2">GSW Gorazda, Świstuń, Wątroba i Partnerzy – Adwokaci i Radcowie Prawni. </span><span class="fontstyle0">https://gsw.com.pl/publikacje/prawo-it/kto-ponosi-odpowiedzialnosc-gdy-chatbot-udzieli-blednej-odpowiedzi/</span></p>
<p><span class="fontstyle0">Warszycki, M. (2019). </span><span class="fontstyle2">Wykorzystanie sztucznej inteligencji do predykcji emocji konsumentów </span><span class="fontstyle0">[The use of artificial intelligence for predicting consumer emotions]. </span><span class="fontstyle2">Studia i Prace Kolegium Zarządzania i Finansów, 173, </span><span class="fontstyle0">115–129. Warszawa: Oficyna Wydawnicza SGH.</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Automating the systematic literature review process in management science using artificial intelligence</title>
		<link>https://minib.pl/numer/2-2025/automating-the-systematic-literature-review-process-in-management-science-using-artificial-intelligence/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Thu, 19 Jun 2025 14:22:33 +0000</pubDate>
				<category><![CDATA[academic writing]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[management]]></category>
		<category><![CDATA[process management]]></category>
		<category><![CDATA[systematic literature review]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=8537</guid>

					<description><![CDATA[1. Introduction Systematic literature reviews (SLR) shape scholarship in many disciplines, functioning as a rigorous method for synthesizing existing primary research. They are particularly important in fields such as the health sciences and management, where the proliferation of publications entails a need for more effective and dependable methods to condense vast bodies of information into...]]></description>
										<content:encoded><![CDATA[<p><strong><span class="fontstyle0" style="font-size: 18pt;">1. Introduction</span></strong></p>
<p><span class="fontstyle2">Systematic literature reviews (SLR) shape scholarship in many disciplines, functioning as a rigorous method for synthesizing existing primary research. They are particularly important in fields such as the health sciences and management, where the proliferation of publications entails a need for more effective and dependable methods to condense vast bodies of information into practical insights (Tantawy et al., 2023; Tsafnat et al., 2013, 2014; Tranfield et al., 2003). The introduction of artificial intelligence (AI) into the SLR process promises to transform and greatly enhance its efficiency and accuracy through automation – especially in repetitive and time-consuming tasks, such as data extraction and synthesis (Clark et al., 2020; Lau, 2019).</span></p>
<p><span class="fontstyle2">The use of AI in SLRs represents more than just a technological advancement; it signifies a shift in the researcher’s role from a traditional examiner of literature to a manager of research processes. In process management, the manager plans, organizes, coordinates, and controls the work (Sommerville et al., 2010), whereas the employees execute the assigned tasks. Transferring this logic to the process of creating a systematic literature review, the researcher, acting as manager, can plan that process, organize the work, coordinate the use of AI applications, and monitor their effects on the outcomes. The AI algorithms carry out the instructions provided by the manager. The whole process remains grounded in the established methodological logic of systematic literature reviews (see Denyer &amp; Tranfield, 2009; Vrontis &amp; Christofi, 2021).</span></p>
<p><span class="fontstyle2">This shift brings both new opportunities and challenges that are redefining the academic research landscape (Vrontis &amp; Christofi, 2021; Wagner et al., 2022). AI tools can quickly become collaborative partners, enabling complex analyses that extend beyond simple automation, even supporting the generation of novel research questions and hypotheses (Saeidnia et al., 2024).</span></p>
<p><span class="fontstyle2">In this paper, we consider the role of AI in the SLR process. AI functions as a collaborator, with the potential to redefine the researcher’s role. Based on a systematic review of the relevant literature, this study explores how AI is currently utilized in SLRs and proposes a framework for future collaboration between humans and AI in academic writing and research. These practical and philosophical considerations highlight the evolving relationship between human researchers and AI technologies.</span></p>
<p><span class="fontstyle2">With the advancement of AI technologies, traditional ideas of authorship and the researcher&#8217;s role in knowledge creation are increasingly being challenged. AI can not only support the research process but also autonomously carry out certain tasks, raising questions about maintaining integrity and accountability in scientific output (Howard, 2024; Masukume, 2024).</span></p>
<p><span class="fontstyle0">This article also discusses the variability and difficulties associated with incorporating AI into management-focused systematic reviews, where the nuanced and contextual aspects of research may pose challenges for automation. The goal is to present a balanced perspective that acknowledges both the potential of AI to improve research methods and the need for researchers to ensure that AI applications align with academic standards and ethical considerations.</span></p>
<p><span class="fontstyle0">Building on this foundation, we formulated the following research question: How can AI support the SLR process in management? This question itself was then addressed through a systematic literature review.</span></p>
<p><span class="fontstyle0">This study adopts a transdisciplinary approach to research methodology, integrating perspectives from management, information science, and technology studies. By exploring how artificial intelligence can be meaningfully embedded in the process of conducting systematic literature reviews, the article addresses not only academic concerns but also the practical needs of external stakeholders – including research institutions, consulting firms, and organizations seeking evidence-based insights. The proposed human–AI collaboration framework encourages more inclusive and participatory models of knowledge creation, potentially involving non-academic actors in the innovation process by enabling faster and more accessible synthesis of research findings. In doing so, the paper aligns with broader efforts to make academic inquiry more responsive, collaborative, and relevant to real-world challenges in business and society.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">2. Research design</span></strong></p>
<p><strong><span class="fontstyle3">Data collection</span></strong></p>
<p><span class="fontstyle0">We conducted searches using commonly accepted search algorithms in the Scopus and Web of Science databases, which contain the largest collections of peer-reviewed academic publications (Glińska &amp; Siemieniako, 2018; Paul &amp; Criado, 2020). We formulated two search queries (one for each database) corresponding to the most common keywords of our basic research concepts, and we followed the database protocols regarding the use of Boolean operators AND, OR, and appropriate truncations (*).</span></p>
<p><span class="fontstyle0">1. (“Automation” OR “Automating” OR “Automated” OR “Automatic” </span><span class="fontstyle0">OR “Automates” OR “Mining”)</span></p>
<p><span class="fontstyle0">2. (“Systematic review*” OR “Systematic Literature Review*”)</span></p>
<p><span class="fontstyle0">3. (“Artificial intelligence” OR “AI”)</span></p>
<p><span class="fontstyle0">This yielded the following query for Scopus, which returned 1,297 studies:</span></p>
<p><span class="fontstyle0">TITLE-ABS-KEY((“Automation” OR “Automating” OR “Automated” OR “Automatic” </span><span class="fontstyle0">OR “Automates” OR “Mining”) AND (“Systematic review*” OR “Systematic Literature Review*”) AND (“Artificial intelligence” OR “AI”)).</span></p>
<p><span class="fontstyle0">On Web of Science, we used the following query, which returned 785 studies:</span></p>
<p><span class="fontstyle0">TS = ((“Automation” OR “Automating” OR “Automated” OR “Automatic” </span><span class="fontstyle0">OR “Automates” OR “Mining”) AND (“Systematic review*” OR “Systematic Literature Review*”) AND (“Artificial intelligence” OR “AI”)).</span></p>
<p><span class="fontstyle0">Together, both queries produced an initial sample of 2,082 studies.</span></p>
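<p>The two queries above can be assembled programmatically from the three keyword groups. The sketch below is our own illustration of this composition step (the helper name or_group is ours, not part of either database's syntax):</p>

```python
# Sketch: assembling the Scopus and Web of Science queries from the three
# keyword groups listed above. Helper names are illustrative only.

automation_terms = ["Automation", "Automating", "Automated",
                    "Automatic", "Automates", "Mining"]
slr_terms = ["Systematic review*", "Systematic Literature Review*"]
ai_terms = ["Artificial intelligence", "AI"]

def or_group(terms):
    """Join quoted terms with the Boolean OR operator."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# The shared Boolean core: three OR-groups joined by AND.
boolean_core = " AND ".join(
    or_group(g) for g in (automation_terms, slr_terms, ai_terms))

scopus_query = f"TITLE-ABS-KEY({boolean_core})"  # Scopus: title/abstract/keywords
wos_query = f"TS = ({boolean_core})"             # Web of Science: topic search

print(scopus_query)
print(wos_query)
```

<p>Generating both strings from one Boolean core guarantees that the Scopus and Web of Science variants stay identical except for the field prefix.</p>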
<p>&nbsp;</p>
<p><strong><span class="fontstyle2">Data selection</span></strong></p>
<p><span class="fontstyle0">Three inclusion criteria were applied to select which articles to review. Papers had to be scientific in nature, published in peer-reviewed scientific journals, and written in English. This reduced the initial sample to 1,649 papers. Next, two exclusion criteria were introduced during title and abstract screening. We excluded papers that merely mentioned AI automation in SLRs without describing its application, as well as studies focusing solely on specific phases of the SLR process rather than on automation or AI in general. These were mainly technical articles that lacked broader context or conceptual framing. We also eliminated duplicates across the two databases.</span></p>
<p><span class="fontstyle0">Following this exclusion process, 34 publications remained. We then added four articles found through AI search engines (Elicit and SciSpace). A backward citation search on these 38 articles yielded 17 additional papers (55 in total). Finally, we performed a one-layer forward citation search, which produced 38 additional items: articles, conference proceedings, preprints, and one doctoral thesis. The final sample consisted of 93 publications, collected as of April 8, 2024.</span></p>
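<p>The selection flow reported above can be tallied as a simple sanity check (the figures are taken from the text; the stage labels are our own):</p>

```python
# Sanity check of the sample counts reported in the selection process.
initial = 1297 + 785                      # Scopus + Web of Science hits

after_inclusion = 1649                    # peer-reviewed, scientific, English
after_screening = 34                      # title/abstract screening + de-duplication
after_ai_engines = after_screening + 4    # Elicit and SciSpace additions
after_backward = after_ai_engines + 17    # backward citation search
final_sample = after_backward + 38        # forward citation search

assert initial == 2082
print(final_sample)  # 93
```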
<p><span class="fontstyle0">We chose not to conduct a formal quality assessment due to the emerging nature of the topic. At this nascent stage of the research field, we deemed it more valuable to analyse all available sources to ensure comprehensive coverage.</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8523" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1.jpg" alt="" width="1769" height="1750" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-300x297.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-1024x1013.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-768x760.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-1536x1520.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-1320x1306.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">The final set of 93 publications was analysed using thematic analysis. We coded the material to identify recurring themes related to the integration of AI into the SLR process and grouped the themes according to the stages of the review. The analysis revealed that AI tools are used in scoping, research question formulation, literature identification and selection, data extraction, synthesis, and reporting. These findings form the basis for the human researcher–AI collaboration framework we propose.</span></p>
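<p>The grouping step of the thematic analysis can be sketched as follows; the stage and theme labels here are illustrative examples only, not our actual codebook:</p>

```python
from collections import defaultdict

# Illustrative coded excerpts as (SLR stage, theme) pairs.
# The labels are our own examples, not the article's codebook.
codes = [
    ("literature identification", "AI-assisted database search"),
    ("data extraction", "automated field extraction"),
    ("literature selection", "screening classifiers"),
    ("data extraction", "LLM-based summarization"),
]

# Group coded themes by the review stage they belong to.
themes_by_stage = defaultdict(list)
for stage, theme in codes:
    themes_by_stage[stage].append(theme)

print(dict(themes_by_stage))
```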
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0">3. Results</span></strong></span></p>
<p><strong><span class="fontstyle2">Systematic literature review as a form of scientific writing in management</span></strong></p>
<p><span class="fontstyle3">A </span><span class="fontstyle4">systematic literature review </span><span class="fontstyle3">(SLR) is a rigorous method for identifying, selecting, evaluating, analysing, and synthesizing existing research findings on a specific topic. It follows a precisely defined and replicable procedure for systematically gathering knowledge, and its results are transparent and can be verified by other researchers (van Dinter et al., 2021). In contrast to the traditional literature reviews used in empirical articles, SLRs employ detailed criteria for selecting source articles, evaluating their quality, and assessing the transferability of results to other contexts. They are used to identify research gaps, develop new ideas, and generate comprehensive reviews of the state of the art in specific research fields (Denyer &amp; Tranfield, 2009).</span></p>
<p><span class="fontstyle3">Automation of the SLR process has so far been most widely implemented in the health sciences (Laynor, 2022; Tsafnat et al., 2013, 2014). This trend is reflected in our findings, as more than 70% of the articles in our sample are from that domain. Systematic literature reviews in the health sciences are a comprehensive and scientifically rigorous approach to summarizing existing evidence on a specific topic. As the volume of research publications continues to increase, SLRs help researchers, healthcare providers, and medical practitioners stay informed about the latest evidence and practices (Laynor, 2022).</span></p>
<p><span class="fontstyle3">SLRs in the management sciences, although no less important than in the health sciences, are considerably less developed. There thus remains an unmet need for rigorous synthesis of research findings in the field (Siemieniako et al., 2022) – for comprehensive and relatively unbiased analyses of the existing literature on particular topics in management. SLRs help identify research gaps, inform directions for future research, and reduce the time spent synthesizing existing sources (Denyer &amp; Tranfield, 2009). Scholars have advocated the use of systematic review methods in management and organizational studies to advance evidence-based management practices (Tranfield et al., 2003). While traditional systematic review methodologies may require certain adjustments to accommodate the unique characteristics of the management field, the benefits of systematic literature reviews are widely recognized (Tranfield et al., 2003).</span></p>
<p><span class="fontstyle3">Given the significant progress achieved in automating SLRs within the health sciences and their growing importance in management, it is worth exploring how similar automation could be implemented in this context. To address our research question, the following section presents the various phases of SLRs in management sciences and examines the current possibilities of their automation, based on practices in the health sciences.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0">Systematic literature review phases in management</span></strong></p>
<p><span class="fontstyle2">As outlined by Tranfield et al. (2003) and Denyer and Tranfield (2009), the general phases of a systematic literature review in management typically include:</span></p>
<p><span class="fontstyle3">Planning the review</span><span class="fontstyle2">: The researcher plans the review and defines the scope, protocol, and process for conducting the literature review. In this step, the researcher considers which databases and tools to use, what skills are needed, how to allocate time, and how to search for high-quality resources.</span></p>
<p><span class="fontstyle3">Conducting the search</span><span class="fontstyle2">: Next, the researcher collects and selects primary studies that are relevant to the review topic. The researcher performs database searches, screens the citations, assesses the quality of the studies, extracts data, and monitors the activities.</span></p>
<p><span class="fontstyle3">Analyzing &amp; synthesizing the literature</span><span class="fontstyle2">: In the next phase, the researcher correlates the evidence from multiple sources, synthesizes results, and then arranges the data in order to address the research questions.</span></p>
<p><span class="fontstyle3">Reporting the findings</span><span class="fontstyle2">: This final stage involves preparing and disseminating the review results. This includes formatting the main report, reviewing the report, summarizing the findings, discussing limitations, formulating recommendations for policy and practice, and identifying future research areas.</span></p>
<p><span class="fontstyle2">For this study, we adopted the concise and clear procedure developed by Vrontis and Christofi (2021), which also corresponds to the process outlined by Denyer and Tranfield (2009). This procedure consists of the following steps.</span></p>
<p><span class="fontstyle3">Conducting a scoping review</span><span class="fontstyle2">: Scoping analysis defines the boundaries and focus of a research study, systematically determining which studies to include according to established criteria and the timeframe to be covered (Vrontis &amp; Christofi, 2021). The main aim is to develop a comprehensive, structured review of relevant literature. This analysis facilitates mapping the field; identifying the main trends, gaps, and opportunities for theoretical development; and providing solid and reliable evidence for further research. A scoping analysis, therefore, allows researchers to efficiently and effectively assemble, assess, and collate the available literature to inform study objectives and methodologies (Vrontis &amp; Christofi, 2021).</span></p>
<p><span class="fontstyle3">Identifying the research purpose and research question</span><span class="fontstyle2">: In the next step, the researcher identifies the research purpose and research question by defining the scope and the focus of the study. This process follows a comprehensive scoping review, which enhances awareness of gaps, trends, and what is already known on the subject of interest (Pereira et al., 2023). Finally, research questions are formulated based on this preliminary study to meet the review’s overall research objectives.</span></p>
<p><span class="fontstyle2">One effective way to formulate a research question is through the interplay between the researchers and feedback from experts in academia and from the relevant industries <span class="fontstyle0">(Vrontis &amp; Christofi, 2021). Such an iterative process can sharpen the research question so that it better captures the intended scope. The research question should be grounded in an understanding of the interface between different variables or concepts under study (Billore et al., 2023).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">At this stage, it is also important to consider the inclusion criteria, regarding what the study will seek to address and what kinds of sources to include (Vrontis &amp; Christofi, 2021). Well-honed inclusion criteria ensure that a research question remains focused and relevant to the set research objectives. Generally, by following a structured methodology, researchers can formulate well-defined research questions in line with the overall research aim.</span></span></p>
<p><span class="fontstyle2">Identifying the research context<span class="fontstyle0">: The research context is the particular setting, condition, or background in which the study takes place. It incorporates the industry under study, participants’ cultural traits, geographical locations, time periods, and all those elements which may have an effect on the research topic or its findings (Vrontis et al., 2020).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">Understanding the research context is therefore crucial for interpreting and generalizing findings, since different contexts may lead researchers to varying outcomes with different implications (Christofi et al., 2017). Researchers usually design their studies with contextual factors in mind to ensure that their findings are relevant and applicable in particular situations (Baima et al., 2020). By examining different research contexts, scholars can gain new insights, refine theories, and enhance their understanding of a particular area of study (Vrontis et al., 2022).</span></span></p>
<p><span class="fontstyle2">Identifying the literature<span class="fontstyle0">: Literature identification is a systematic process of searching for, selecting, and analysing relevant publications and research studies with respect to a given topic or issue. This typically includes assessing the relevance and quality of the literature found and synthesizing key findings into insights about the current state of knowledge on the subject under investigation. Identifying the literature allows researchers to better grasp the theoretical approaches taken and the extant research gaps, trends, and challenges in the respective field of study. In other words, this step enables scholars to map out the current state of the subject and, consequently, to identify gaps and trends in order to support the development of scientific projects (Jain et al., 2022).</span></span></p>
<p><span class="fontstyle2">Selecting the literature<span class="fontstyle0">: In the fifth step, the relevant sources of information – such as research articles, books, and other publications – are selected for inclusion in the study or review. This process requires the setting of clear selection criteria, such as the studies’ research questions, objectives, and the quality of the sources. These criteria help to identify and screen potential sources and, finally, select relevant and high-quality literature to be further studied (Christofi et al., 2017). The systematic methodologies used in conducting literature reviews help researchers ensure a rigorous and comprehensive selection process for this step of the review (Battisti et al., 2023). Through careful selection,</span> <span class="fontstyle0">researchers build a solid foundation of existing knowledge and findings relevant to their own study.</span></span></p>
<p><span class="fontstyle2">Extracting and synthesizing data<span class="fontstyle0">: Data extraction involves the systematic collection of relevant data from the selected articles or research papers, according to predefined criteria. This includes identifying and recording specific information such as publication details, author details, article type, methods used, key findings, and other relevant data points (Christofi et al., 2021). Data synthesis, by contrast, involves analysing the extracted material to identify patterns, relationships, or common themes in the literature. This stage aims at synthesizing the data from the different sources of information into a coherent framework or model that will then guide further research or provide practical implications (Christofi et al., 2021). This is then followed by thematic analysis to integrate the results into an overall framework, further enabling in-depth understanding of interrelating concepts (Battisti et al., 2023). In general, data synthesis facilitates the generation of meaningful inferences from the literature review and provides directions for future research.</span></span></p>
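<p>A predefined data-extraction form of the kind described above can be represented as a simple record; the field names below are illustrative, not a prescribed template:</p>

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One row of a data-extraction form, following the kinds of
    information listed above (publication details, methods, findings).
    Field names are illustrative examples."""
    authors: list[str]
    year: int
    journal: str
    article_type: str          # e.g. empirical, conceptual, review
    methods: str
    key_findings: list[str] = field(default_factory=list)

# Hypothetical entry for one reviewed paper.
record = ExtractionRecord(
    authors=["Doe, J."], year=2023, journal="Example Journal",
    article_type="empirical", methods="survey",
)
record.key_findings.append("AI reduced screening time")
print(record)
```

<p>Recording every paper in the same structure is what later makes cross-study synthesis (pattern and theme identification) mechanical rather than ad hoc.</p>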
<p><span class="fontstyle2">Reporting and making recommendations<span class="fontstyle0">: This final stage involves preparing the report and recommendations, which requires summarizing and synthesizing the results of the reviewed studies in a structured and transparent manner. Principal results, themes, and lessons learned from the literature are organized and presented comprehensively. The authors of the review identify gaps in the literature, propose future directions, and offer recommendations for both academics and practitioners based on their analysis of the reviewed studies. The ultimate aim is to contribute valuable insights to the existing knowledge base of the research area and guide further research efforts (Christofi et al., 2017; Pereira et al., 2023).</span></span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2"><span class="fontstyle3">Automation of SLRs in management</span></span></strong></p>
<p><span class="fontstyle2"><span class="fontstyle0">In this section, we illustrate how artificial intelligence tools can be used to automate specific stages of the systematic literature review process. The examples come mainly from health sciences literature, but the same SLR procedures are increasingly being used in management (Denyer &amp; Tranfield, 2009).</span></span></p>
<p><span class="fontstyle2">Scientific automation <span class="fontstyle0">refers to the application of technological instruments and procedures to mechanize and enhance a number of scientific processes related to data collection, analysis, and reporting. Within the context of systematic reviews, it entails the use of software and algorithms to accelerate the review process and to efficiently and accurately synthesize evidence (Lau, 2019). The tasks that can be automated for systematic reviews include literature screening, data extraction, and meta-analysis (Tóth et al., 2023). More generally, science automation aims to improve efficiency, transparency, and</span> <span class="fontstyle0">reproducibility, while reducing costs by taking advantage of better technology and artificial intelligence (Laynor, 2022).</span></span></p>
<p><em><span class="fontstyle2">Scoping analysis</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI algorithms can help automate several tasks within scoping analysis, facilitating the extraction of key information from large bodies of scientific literature – such as author names, affiliations, keywords, citation counts, or topics (Saeidnia et al., 2024). By analysing citation networks, AI systems can identify highly cited and influential papers and reveal the dynamics of scientific knowledge diffusion. They may also predict the potential impact of scientific research based on a variety of factors. Moreover, they may detect and visualize research collaborations through co-authorship networks and publication histories. Applying natural language processing (NLP) techniques can make it easier for researchers to identify emerging trends and topics during the scoping analysis (Saeidnia et al., 2024).</span></span></p>
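The citation-network and co-authorship analyses mentioned above can be sketched with plain Python counters. The paper labels and author names below are invented; production systems operate on full bibliographic databases and use more sophisticated influence measures than raw in-degree.

```python
from collections import Counter
from itertools import combinations

# Toy citation data: paper -> list of papers it cites (labels invented).
citations = {
    "P1": ["P3"], "P2": ["P3", "P4"], "P4": ["P3"], "P3": [],
}
# In-degree in the citation network is the simplest influence signal:
# how often each paper is cited by the others in the corpus.
in_degree = Counter(cited for refs in citations.values() for cited in refs)
most_influential = in_degree.most_common(1)[0]

# Co-authorship network: every pair of co-authors on a paper gets an
# edge; repeated collaboration strengthens the edge weight.
author_lists = [["Ann", "Ben"], ["Ann", "Ben", "Cem"], ["Cem", "Dia"]]
edges = Counter()
for authors in author_lists:
    for pair in combinations(sorted(authors), 2):
        edges[pair] += 1
```

On this toy corpus, "P3" emerges as the most-cited paper and the Ann–Ben pair as the strongest collaboration edge.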
<p><em><span class="fontstyle2">Identifying the research purpose and research questions</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI can assist researchers in posing research questions by providing data-driven insights and optimized methodologies. It can identify gaps in the available literature, generate hypotheses, and even predict probable correlations or causal relationships. AI tools can, therefore, enhance the brainstorming process with insights drawn from existing trends, historical data, and cross-disciplinary studies that may ultimately set researchers onto new investigative paths (Wagner et al., 2022). Moreover, given AI’s advanced capacity to analyse data faster and more accurately than is humanly possible, it can reveal hidden patterns, correlations, and emerging research trends that enable the researcher to find new directions to pursue (Saeidnia et al., 2024; Tomczyk et al., 2024).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">However, while AI can significantly increase the efficiency of the research process, human judgment and critical thinking remain indispensable for determining which research gaps merit exploration and how they should be addressed (Spillias et al., 2023). While AI can open up ways to fast-track the process of identifying relevant literature and proposing hypotheses, human judgment is necessary for generating meaningful questions through problematization (Wagner et al., 2022).</span></span></p>
<p><em><span class="fontstyle2">Identifying the research context </span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI can also contribute to defining research contexts by generating ideas, reviewing the literature, analysing data, and mapping out collaboration networks (Saeidnia et al., 2024). AI algorithms are able to process large amounts of data to pinpoint underexplored areas within a field (Khalifa &amp; Albadawy, 2024). In this respect, using natural language processing techniques, AI can extract keywords, topics, and trends from scientific</span> <span class="fontstyle0">publications that may be helpful for the research community to find new directions and emerging areas of focus in the respective domains (Saeidnia et al., 2024). Moreover, AI can contribute to the generation of ideas and hypotheses and to the development of robust designs by proposing relevant research problems as well as methodologies (Khalifa &amp; Albadawy, 2024). It can be applied to predict emerging research trends, identify potential collaborators and influential research networks, and measure the impact and visibility of scientific papers, authors, and journals (Saeidnia et al., 2024).</span></span></p>
<p><em><span class="fontstyle2">Identifying literature</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI techniques can identify relevant literature in various ways. Algorithms can distinguish between authors with similar names by considering variables such as institutional affiliations and publication histories (Saeidnia et al., 2024). This guarantees that scholarly work is attributed correctly and also enhances the reliability of bibliometric analysis.</span></span></p>
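A toy version of the name-disambiguation heuristic described above: two records under the same name are merged only if their affiliations or co-author sets overlap. The record fields and names are invented; real disambiguation systems combine many more signals (venues, topics, citation patterns).

```python
def same_author(rec_a: dict, rec_b: dict) -> bool:
    """Heuristic disambiguation: records with the same name are treated
    as one author only if their affiliations or co-authors overlap."""
    if rec_a["name"] != rec_b["name"]:
        return False
    shared_affiliation = bool(set(rec_a["affiliations"]) & set(rec_b["affiliations"]))
    shared_coauthors = bool(set(rec_a["coauthors"]) & set(rec_b["coauthors"]))
    return shared_affiliation or shared_coauthors

r1 = {"name": "J. Smith", "affiliations": {"Univ A"}, "coauthors": {"K. Lee"}}
r2 = {"name": "J. Smith", "affiliations": {"Univ A"}, "coauthors": {"M. Chan"}}
r3 = {"name": "J. Smith", "affiliations": {"Univ B"}, "coauthors": {"P. Novak"}}
```

Here r1 and r2 would be merged (same affiliation), while r3 would remain a distinct author profile.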
<p><span class="fontstyle2"><span class="fontstyle0">Researchers are increasingly applying AI techniques such as machine learning (ML) and data mining in bibliometrics in order to predict future publication trends, emerging research areas, and research impact (Saeidnia et al., 2024). AI algorithms can recognize patterns and relationships in large bibliographic datasets, and then deliver critical insights regarding what the scientific enterprise of research may look like in the years to come. Such studies may significantly enhance researchers’ capacity to recognize and remain abreast of key trends and research collaborations.</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">As Saeidnia et al. (2024) observed, AI algorithms can automatically collect bibliographic data from a variety of sources, such as online databases, academic libraries, and digital repositories, and this may save a lot of time and effort for researchers engaged in data collection. AI analysis of citation networks also helps locate influential papers, authors, and journals, highlighting the impact and visibility of research outputs and spotting key trends.</span></span></p>
<p><em><span class="fontstyle2">Selecting literature</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI can facilitate the literature-selection stage through advanced methods for knowledge representation and inference, text manipulation, and learning from large amounts of data. These techniques are particularly useful for tasks that are laborious or repetitive for humans, such as the critical analysis of scientific literature (de la Torre-López </span></span><span class="fontstyle2"><span class="fontstyle0">et al., 2023). AI tools support the clear specification of problem domains and literature-selection criteria, thus enabling researchers to apply search and selection criteria, save time, and ensure transparency and quality in the literature review (Ngwenyama &amp; Rowe, 2024). AI-based tools can potentially deal with fuzzy, weakly structured, and unstructured</span> <span class="fontstyle0">data, providing abstraction and semantic meaning-based analysis that can support searching and screening tasks for literature selection (Wagner et al., 2022). Advanced supervised machine learning methods, such as deep learning (DL), are used to automate decisions on the relevance of papers. This relieves researchers of the tedious task of rule-codification and also makes the literature-selection processes more efficient (Wagner et al., 2022). Essentially, AI tools offer capabilities that can be harnessed to advance the effectiveness, efficiency, and accuracy of the literature-selection processes, thus proving instrumental for researchers navigating the vast body of literature available in many domains.</span></span></p>
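A minimal illustration of supervised relevance screening, using a hand-rolled multinomial Naive Bayes classifier rather than the deep-learning models the literature describes. The training abstracts and labels are invented; in practice, screening tools are trained on hundreds of labelled records.

```python
import math
from collections import Counter

def tokenize(text: str) -> list:
    return text.lower().split()

def train(labeled: list) -> dict:
    """Multinomial Naive Bayes trained on (abstract, label) pairs,
    where label is 'relevant' or 'irrelevant'."""
    word_counts = {"relevant": Counter(), "irrelevant": Counter()}
    doc_counts = Counter()
    for text, label in labeled:
        word_counts[label].update(tokenize(text))
        doc_counts[label] += 1
    vocab = set(word_counts["relevant"]) | set(word_counts["irrelevant"])
    return {"words": word_counts, "docs": doc_counts, "vocab": vocab}

def classify(model: dict, text: str) -> str:
    """Score both labels with add-one smoothing; return the likelier one."""
    total_docs = sum(model["docs"].values())
    scores = {}
    for label in ("relevant", "irrelevant"):
        score = math.log(model["docs"][label] / total_docs)
        total_words = sum(model["words"][label].values())
        for w in tokenize(text):
            count = model["words"][label][w] + 1  # add-one smoothing
            score += math.log(count / (total_words + len(model["vocab"])))
        scores[label] = score
    return max(scores, key=scores.get)

model = train([
    ("machine learning for literature screening", "relevant"),
    ("automated review screening with classifiers", "relevant"),
    ("marketing strategy in retail banking", "irrelevant"),
    ("consumer behaviour in retail stores", "irrelevant"),
])
decision = classify(model, "screening literature with machine learning")
```

The classifier labels the new abstract "relevant" because its vocabulary overlaps the relevant training set; human reviewers would still audit a sample of such decisions.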
<p><em><span class="fontstyle2">Extracting and synthesizing data</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">In the data-extraction phase, AI tools can automatically extract information from articles, whether structured through elements of the PICO framework or specific data points, using ML/DL/NLP methods (Santos et al., 2023). AI tools can assist in summarizing and interpreting the extracted information in formats that will enable graphic and statistical synthesis, including the generation of tables, diagrams, and graphs examining between-study heterogeneity, and in updating meta-analyses and related forest plots (Amezcua-Prieto et al., 2020). These capabilities of AI thus support faster data-extraction and synthesis processes in literature reviews, improving efficiency and quality in synthesizing evidence in scholarly research.</span></span></p>
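A deliberately crude rule-based sketch of PICO-style extraction. The regular expressions and example abstract are invented; the ML/DL/NLP systems cited above are far more robust, but the sketch shows the shape of the task: mapping free text onto the framework's predefined elements.

```python
import re

# Hand-written patterns, one per PICO element (patterns are invented
# illustrations, not a validated extraction grammar).
PICO_PATTERNS = {
    "population": re.compile(r"in (\d+ [a-z]+ patients)"),
    "intervention": re.compile(r"receiving ([a-z ]+?) versus"),
    "comparison": re.compile(r"versus ([a-z ]+?) were"),
    "outcome": re.compile(r"measured by ([a-z ]+)\."),
}

def extract_pico(abstract: str) -> dict:
    """Match each PICO element against the abstract; unmatched
    elements are recorded as None so gaps remain visible."""
    text = abstract.lower()
    result = {}
    for element, pattern in PICO_PATTERNS.items():
        m = pattern.search(text)
        result[element] = m.group(1).strip() if m else None
    return result

abstract = ("Outcomes in 120 adult patients receiving drug a versus placebo "
            "were measured by symptom scores.")
pico = extract_pico(abstract)
```

Structured records like this one feed directly into the synthesis stage, e.g. as rows of an extraction table.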
<p><em><span class="fontstyle2">Reporting and preparing recommendations</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI-driven tools can contribute significantly to improved manuscript preparation, assisting in such stages as grammar correction, text rewriting, and recommendation generation – often tailored to the users’ individual preferences and writing style (Chemaya &amp; Martin, 2023). AI systems can also automatically identify missing data, synthesize evidence from source studies, and identify topics through automated text clustering (Santos et al., 2023). Moreover, AI algorithms can digest large numbers of scientific publications to retrieve information about author names, affiliations, keywords, or </span></span><span class="fontstyle2"><span class="fontstyle0">citations, all of which may help researchers gain a better grasp of the publication patterns, underlying research networks, and collaborations in a scientific area (Saeidnia et al., 2024). </span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI-powered recommender systems can be used to recommend relevant scientific websites, online resources, and research collaborations based on user preferences, reading behaviour, and web data (Saeidnia et al., 2024). Natural language processing and machine learning techniques may play a central role in these systems, supporting the analysis of web-based documents, extraction of key information, understanding of research outputs, </span></span><span class="fontstyle2"><span class="fontstyle0">and assessment of impact and visibility of online scientific research (Saeidnia et al., 2024).</span> </span></p>
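A content-based recommender of the kind described can be sketched with simple keyword-set overlap (Jaccard similarity). The user profile and resource labels are invented; real systems build profiles from reading behaviour and use learned embeddings rather than raw keyword sets.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two keyword sets (0 = disjoint, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical user profile (keywords of previously read papers)
# and a small candidate pool of online resources.
profile = {"systematic", "review", "automation", "ai"}
candidates = {
    "R1": {"ai", "automation", "screening"},
    "R2": {"consumer", "marketing"},
    "R3": {"systematic", "review", "ai"},
}
# Rank candidates by similarity to the profile, best match first.
ranked = sorted(candidates, key=lambda r: jaccard(profile, candidates[r]),
                reverse=True)
```

Here "R3" ranks first because all of its keywords already appear in the user's profile.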
<p><span class="fontstyle2"> <span class="fontstyle0">The reviewed literature shows that AI capabilities in data extraction, analysis, and recommendation generation are transforming the process of reporting, explaining, and communicating research findings – bringing a revolution in how academic and research outputs are reported and shared. Table 1 presents a summary of this analysis.</span> </span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8533" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1.jpg" alt="" width="1769" height="2372" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-224x300.jpg 224w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-764x1024.jpg 764w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-768x1030.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-1146x1536.jpg 1146w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-1527x2048.jpg 1527w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-1320x1770.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">In summary, this section has demonstrated how artificial intelligence can support the automation of the various phases of systematic literature reviews, which therefore answers our core research question. More specifically, we investigated how AI applications implemented in the SLR procedures for health sciences can be applied to management sciences. The SLR procedure adopted here follows the framework proposed by Vrontis and Christofi (2021), who extended that of Denyer and Tranfield (2009).</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0" style="font-size: 18pt;"> 4. Conclusions, Limitations, and Future Research</span></strong></p>
<p>The integration of artificial intelligence into systematic literature reviews represents not merely an evolution, but a revolution – one that challenges the very foundation of academic research. The traditional painstaking process of identifying, analysing, and synthesizing literature is being rapidly overtaken by AI-driven automation, fundamentally shifting the researcher’s role from that of an intellectual labourer to that of a process manager.</p>
<p>To further illustrate the balance between human oversight and machine capability, this transformation can be productively examined in terms of the data–prediction–judgment–action model (Agrawal, Gans, &amp; Goldfarb, 2018). According to this model, AI improves the prediction stage by processing large amounts of information, whereas the stages of judgment and action remain the responsibility of humans. Applied to SLRs, this implies that researchers are not passive supervisors. Rather, they must critically evaluate AI outputs, interpret them, and decide how to integrate them into existing theory. AI can automate the identification of literature and point to potential gaps, yet it cannot replace human judgment in assessing relevance or drawing conclusions. A useful analogy can be found in the military domain, where AI improves predictive capacities but decision-making authority ultimately remains with humans (Agrawal, Gans, &amp; Goldfarb, 2018). This framework thus reinforces our view that AI does not eliminate the researcher’s role. Instead, it redefines it. Researchers remain managers of the process, with their judgment and action ensuring rigor and depth.</p>
<p>However, this transformation is not universally welcomed. While it may lead to improved efficiency, scalability, and precision, one must ask: at what cost? Increased <span class="fontstyle0">reliance on AI threatens to erode the depth of critical engagement with literature, potentially reducing researchers to mere supervisors of algorithms rather than active participants in knowledge creation. Yet AI systems are not neutral; they inherit the biases of their training data, the priorities of their programmers, and the constraints of their algorithms. If left unchecked, these embedded biases could reshape academic discourse in ways we are only beginning to understand.</span></p>
<p><span class="fontstyle0">The present study has a number of limitations, which reflect broader concerns about AI’s role in research. The fact that most extant SLR automation techniques stem from health sciences raises a crucial question: Is management research even compatible with such mechanization? The field of management thrives on context, interpretation, and theoretical nuance – elements that AI, for all its computational power, struggles to grapple with. Applying automation techniques designed for medical trials to a discipline that values qualitative insight may, at best, be an oversimplification and, at worst, an intellectual misstep. Moreover, our reliance on peer-reviewed studies from established databases inadvertently sidelines alternative perspectives and cutting-edge discussions happening outside traditional academic publishing. If AI is trained only on what is deemed “acceptable” by established gatekeepers, are we not reinforcing the very same academic silos that researchers have long criticized? The omission of formal quality assessment further highlights the immaturity of this research area. We have embraced AI before rigorously questioning whether it genuinely improves the research process – or simply accelerates flawed methodologies.</span></p>
<p><span class="fontstyle0">As far as further limitations are concerned, the number of references included in this study could possibly have been larger, but it was the direct outcome of our systematic selection procedure. The final set of publications was determined through predefined keywords and strict inclusion and exclusion criteria, ensuring objectivity and transparency. As a result, the number of sources may have been smaller than expected, but it accurately reflects the available and relevant research within the scope of this emerging field.</span></p>
<p><span class="fontstyle0">The fact that our own study was itself conducted through the systematic literature </span><span class="fontstyle0">review method also invites some brief reflection on this process. We relied on established databases (Scopus and Web of Science) and complemented them with AI-based tools such </span><span class="fontstyle0">as Elicit and SciSpace to identify additional sources. While this approach provided a broad coverage of relevant studies, it also revealed challenges that are characteristic of AI-assisted reviews. For example, integrating results from traditional databases and AI tools required additional effort to ensure consistency and avoid duplication. Furthermore, while AI engines accelerated the retrieval of relevant articles, they sometimes produced results lacking sufficient context or theoretical framing, which required careful human judgment. These experiences confirm our broader argument: AI can support the prediction and data</span> <span class="fontstyle0">retrieval stages, but the stages of judgment and action remain dependent on researchers. By reflecting on our own process, we emphasise the importance of methodological transparency and show that the opportunities and limitations of AI-assisted SLRs are not only conceptual but also practical realities encountered during research.</span></p>
<p><span class="fontstyle0">Looking ahead, future research must confront these uncomfortable realities rather than blindly celebrate AI’s capabilities. Instead of merely asking how AI can make SLRs more efficient, we should ask whether AI-assisted reviews do actually produce better knowledge at all. If AI is allowed to dictate research agendas by prioritizing what is most frequently cited, we risk creating an academic echo chamber where innovation is stifled in favour of algorithmic consensus.</span></p>
<p><span class="fontstyle0">The ethical implications are equally alarming. Who takes responsibility when AI-generated literature reviews misrepresent findings or reinforce biases? The obsession with automation must be tempered with a serious conversation about accountability and intellectual integrity. Scholars must resist the temptation to let AI do their thinking for them. The most pressing challenge is not improving AI but ensuring that human researchers remain the architects of inquiry rather than its passive facilitators. The future of AI-driven research is not inevitable – it is a choice. Whether that choice leads to a new era of intellectual empowerment or a hollowing out of academic rigor depends entirely on how critically we engage with this technology now.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">References</span></strong></p>
<p><span class="fontstyle0">Amezcua-Prieto, C., Fernández-Luna, J. M., Huete-Guadix, J. F., Bueno-Cavanillas, A., &amp; Khan, K. S. (2020). Artificial intelligence and automation of systematic reviews in women’s health. </span><span class="fontstyle3">Current Opinion in Obstetrics &amp; Gynecology</span><span class="fontstyle0">, </span><span class="fontstyle3">32</span><span class="fontstyle0">(5), 335–341. https://doi.org/10.1097/GCO.0000000000000643</span></p>
<p><span class="fontstyle0">Battisti, E., Graziano, E. A., Pereira, V., Vrontis, D., &amp; Giovanis, A. (2023). Talent management and firm performance in emerging markets: A systematic literature review and framework. </span><span class="fontstyle3">Management Decision</span><span class="fontstyle0">, </span><span class="fontstyle3">61</span><span class="fontstyle0">(9), 2757–2783. https://doi.org/10.1108/MD-10-2021-1327</span></p>
<p><span class="fontstyle0">Billore, S., Anisimova, T., &amp; Vrontis, D. (2023). Self-regulation and goal-directed behavior: A systematic literature review, public policy recommendations, and research agenda. </span><span class="fontstyle3">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle3">156</span><span class="fontstyle0">. https://doi.org/10.1016/j.jbusres.2022.113435</span></p>
<p><span class="fontstyle0">Chemaya, N., &amp; Martin, D. (2023). Perceptions and detection of AI use in manuscript preparation for academic journals. Preprint. http://arxiv.org/abs/2311.14720</span></p>
<p><span class="fontstyle0">Christofi, M., Leonidou, E., &amp; Vrontis, D. (2017). Marketing research on mergers and acquisitions: A systematic review and future directions. </span><span class="fontstyle3">International Marketing Review</span><span class="fontstyle0">, </span><span class="fontstyle3">34</span><span class="fontstyle0">(5), 629–651. https://doi.org/10.1108/IMR-03-2015-0100</span></p>
<p><span class="fontstyle0">Clark, J., Glasziou, P., Del Mar, C., Bannach-Brown, A., Stehlik, P., &amp; Scott, A. M. (2020). A full systematic review was completed in 2 weeks using automation tools: A case study. Journal of Clinical Epidemiology, </span><span class="fontstyle3">121</span><span class="fontstyle0">, 81–90. https://doi.org/10.1016/j.jclinepi.2020.01.008</span></p>
<p><span class="fontstyle0">de la Torre-López, J., Ramírez, A., &amp; Romero, J. R. (2023). Artificial intelligence to automate the systematic review of scientific literature. </span><span class="fontstyle3">Computing</span><span class="fontstyle0">, </span><span class="fontstyle3">105</span><span class="fontstyle0">(10), 2171–2194. https://doi.org/10.1007/s00607-023-01181-x</span></p>
<p><span class="fontstyle0">Denyer, D., &amp; Tranfield, D. (2009). Producing a systematic review. In D. A. Buchanan &amp; A. Bryman (Eds.), </span><span class="fontstyle2">Sage Handbook of Organizational Research Methods </span><span class="fontstyle0">(pp. 671–689). Sage Publications.</span></p>
<p><span class="fontstyle0">Glińska, E., &amp; Siemieniako, D. (2018). Binge drinking in relation to services – Bibliometric analysis of scientific research directions. </span><span class="fontstyle2">Engineering Management in Production and Services</span><span class="fontstyle0">, </span><span class="fontstyle2">10</span><span class="fontstyle0">(1), 45–54.</span></p>
<p><span class="fontstyle0">Howard, F. M., Li, A., Riffon, M. F., Garrett-Mayer, E., &amp; Pearson, A. T. (2024). Characterizing the increase in artificial intelligence content detection in oncology scientific abstracts from 2021 to 2023. </span><span class="fontstyle2">JCO Clinical Cancer Informatics</span><span class="fontstyle0">, </span><span class="fontstyle2">8</span><span class="fontstyle0">, e2400077. https://doi.org/10.1200/CCI.24.00077</span></p>
<p><span class="fontstyle0">Jain, R., Jain, K., Behl, A., Pereira, V., Del Giudice, M., &amp; Vrontis, D. (2022). Mainstreaming fashion rental consumption: A systematic and thematic review of literature. </span><span class="fontstyle2">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle2">139</span><span class="fontstyle0">, 1525–1539. https://doi.org/10.1016/j.jbusres.2021.10.071</span></p>
<p><span class="fontstyle0">Khalifa, M., &amp; Albadawy, M. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. </span><span class="fontstyle2">Computer Methods and Programs in Biomedicine Update</span><span class="fontstyle0">, </span><span class="fontstyle2">5. </span><span class="fontstyle0">https://doi.org/10.1016/j.cmpbup.2024.100145</span></p>
<p><span class="fontstyle0">Lau, J. (2019). Editorial: Systematic review automation thematic series. </span><span class="fontstyle2">Systematic Reviews</span><span class="fontstyle0">, 8(1). https://doi.org/10.1186/s13643-019-0974-z</span></p>
<p><span class="fontstyle0">Laynor, G. (2022). Can systematic reviews be automated? </span><span class="fontstyle2">Journal of Electronic Resources in Medical Libraries</span><span class="fontstyle0">, </span><span class="fontstyle2">19</span><span class="fontstyle0">(3), 101–106.</span></p>
<p><span class="fontstyle0">Masukume, G. (2024). The impact of AI on scientific literature: A surge in AI-associated words in academic and biomedical writing. medRxiv, June 1, 2024. https://doi.org/10.1101/2024.05.31.24308296</span></p>
<p><span class="fontstyle0">Ngwenyama, O., &amp; Rowe, F. (2024). Should we collaborate with AI to conduct literature reviews? Changing epistemic values in a flattening world. </span><span class="fontstyle2">Journal of the Association for Information Systems</span><span class="fontstyle0">, </span><span class="fontstyle2">25</span><span class="fontstyle0">(1), 122–136. https://doi.org/10.17705/1jais.00869</span></p>
<p><span class="fontstyle0">Paul, J., &amp; Criado, A. R. (2020). The art of writing literature review: What do we know and what do we need to know?. International Business Review, </span><span class="fontstyle2">29</span><span class="fontstyle0">(4), 101717.</span></p>
<p><span class="fontstyle0">Pereira, V., Hadjielias, E., Christofi, M., &amp; Vrontis, D. (2023). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. </span><span class="fontstyle2">Human Resource Management Review</span><span class="fontstyle0">, </span><span class="fontstyle2">33</span><span class="fontstyle0">(1). https://doi.org/10.1016/j.hrmr.2021.100857</span></p>
<p><span class="fontstyle0">Saeidnia, H. R., Hosseini, E., Abdoli, S., &amp; Ausloos, M. (2024). Unleashing the power of AI: A systematic review of cutting-edge techniques in AI-enhanced scientometrics, webometrics and bibliometrics. </span><span class="fontstyle2">Library Hi Tech</span><span class="fontstyle0">. https://doi.org/10.1108/LHT-10-2023-0514</span></p>
<p><span class="fontstyle0">Santos, Á. O. dos, da Silva, E. S., Couto, L. M., Reis, G. V. L., &amp; Belo, V. S. (2023). The use of artificial intelligence for automating or semi-automating biomedical literature analyses: A scoping review. </span><span class="fontstyle2">Journal of Biomedical Informatics</span><span class="fontstyle0">, </span><span class="fontstyle2">142</span><span class="fontstyle0">. https://doi.org/10.1016/j.jbi.2023.104389</span></p>
<p><span class="fontstyle0">Siemieniako, D., Mitrêga, M., &amp; Kubacki, K. (2022). The antecedents to social impact in inter-organizational relationships – A systematic review and future research agenda. </span><span class="fontstyle2">Industrial Marketing Management</span><span class="fontstyle0">, </span><span class="fontstyle2">101</span><span class="fontstyle0">(March 2021), 191–207. https://doi.org/10.1016/j.indmarman.2021.12.014</span></p>
<p><span class="fontstyle0">Sommerville, J., Craig, N., &amp; Hendry, J. (2010). The role of the project manager: All things to all people? </span><span class="fontstyle2">Structural Survey</span><span class="fontstyle0">, </span><span class="fontstyle2">28</span><span class="fontstyle0">(2), 132–141.</span></p>
<p><span class="fontstyle0">Spillias, S., Andreotta, M., Annand-Jones, R., Boschetti, F., Cvitanovic, C., Duggan, J., Fulton, E., Karcher, D., Paris, C., Shellock, R., &amp; Trebilco, R. (2023). Human-AI collaboration to identify literature for evidence synthesis. Preprint. https://doi.org/10.21203/rs.3.rs-3099291/v1</span></p>
<p><span class="fontstyle0">Tantawy, A., Amankwah-Amoah, J., &amp; Puthusserry, P. (2023). Political ties in emerging markets: A systematic review and research agenda. International Marketing Review, </span><span class="fontstyle2">40</span><span class="fontstyle0">(6), 1344–1378. https://doi.org/10.1108/imr-09-2022-0197</span></p>
<p><span class="fontstyle0">Tomczyk, P., Brüggemann, P., Mergner, N., &amp; Petrescu, M. (2024). Exploring AI’s role in literature searching: Traditional methods versus AI-based tools in analyzing topical e-commerce themes. In Francisco J. Martínez-López, Luis F. Martinez, Philipp Brüggemann (Eds.), </span><span class="fontstyle2">Advances in Digital Marketing &amp; eCommerce – 5</span><span class="fontstyle2">th </span><span class="fontstyle2">Annual Conference, 2024 </span><span class="fontstyle0">(pp. 141–148). Springer, Cham. https://doi.org/10.1007/978-3-031-62135-2_15</span></p>
<p><span class="fontstyle0">Tranfield, D., Denyer, D., &amp; Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. </span><span class="fontstyle2">British Journal of Management</span><span class="fontstyle0">, </span><span class="fontstyle2">14</span><span class="fontstyle0">(3), 207–222. https://doi.org/10.1111/1467-8551.00375</span></p>
<p><span class="fontstyle0">Tsafnat, G., Dunn, A., Glasziou, P., &amp; Coiera, E. (2013). The automation of systematic reviews. </span><span class="fontstyle2">BMJ </span><span class="fontstyle0">(Online), </span><span class="fontstyle2">345</span><span class="fontstyle0">(7891). https://doi.org/10.1136/bmj.f139</span></p>
<p><span class="fontstyle0">Tsafnat, G., Glasziou, P., Keen Choong, M., Dunn, A., Galgani, F., &amp; Coiera, E. (2014). Systematic review automation technologies. </span><span class="fontstyle2">Systematic Reviews</span><span class="fontstyle0">, </span><span class="fontstyle2">3</span><span class="fontstyle0">, 1–15. http://www.systematicreviewsjournal.com/content/3/1/74</span></p>
<p><span class="fontstyle0">van Dinter, R., Tekinerdogan, B., &amp; Catal, C. (2021). Automation of systematic literature reviews: A systematic literature review. </span><span class="fontstyle2">Information and Software Technology</span><span class="fontstyle0">, </span><span class="fontstyle2">136</span><span class="fontstyle0">. https://doi.org/10.1016/j.infsof.2021.106589</span></p>
<p><span class="fontstyle0">Vrontis, D., &amp; Christofi, M. (2021). R&amp;D internationalization and innovation: A systematic review, integrative framework and future research directions. </span><span class="fontstyle2">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle2">128</span><span class="fontstyle0">, 812–823. https://doi.org/10.1016/j.jbusres.2019.03.031</span></p>
<p><span class="fontstyle0">Vrontis, D., Hulland, J., Shaw, J. D., Gaur, A., Czinkota, M. R., &amp; Christofi, M. (2022). Guest editorial: Systematic literature reviews in international marketing: From the past to the future and beyond. </span><span class="fontstyle2">International Marketing Review</span><span class="fontstyle0">, </span><span class="fontstyle2">39</span><span class="fontstyle0">(5), 1025–1028. https://doi.org/10.1108/IMR-09-2022-390</span></p>
<p><span class="fontstyle0">Vrontis, D., Leonidou, E., Christofi, M., Kaufmann Hans, R., &amp; Kitchen, P. J. (2020). Intercultural service encounters: A systematic review and a conceptual framework on trust development. </span><span class="fontstyle2">EuroMed Journal of Business</span><span class="fontstyle0">, </span><span class="fontstyle2">16</span><span class="fontstyle0">(3), 306–323. https://doi.org/10.1108/EMJB-03-2019-0044</span></p>
<p><span class="fontstyle0">Wagner, G., Lukyanenko, R., &amp; Paré, G. (2022). Artificial intelligence and the conduct of literature reviews. </span><span class="fontstyle2">Journal of Information Technology</span><span class="fontstyle0">, </span><span class="fontstyle2">37</span><span class="fontstyle0">(2), 209–226. https://doi.org/10.1177/02683962211048201</span></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
