<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>artificial intelligence &#8211; Marketing of Scientific and Research Organizations &#8211; The scientific journal by the Institute of Aviation</title>
	<atom:link href="https://minib.pl/en/tag/artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://minib.pl</link>
	<description></description>
	<lastBuildDate>Tue, 17 Feb 2026 13:35:01 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.4</generator>

<image>
	<url>https://minib.pl/wp-content/uploads/2020/04/cropped-favicon-32x32.png</url>
	<title>artificial intelligence &#8211; Marketing of Scientific and Research Organizations &#8211; The scientific journal by the Institute of Aviation</title>
	<link>https://minib.pl</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Artificial intelligence and consumer rights: legal responsibility for algorithmic decisions in the Polish and EU regulatory context</title>
		<link>https://minib.pl/en/numer/no-2-2025/artificial-intelligence-and-consumer-rights-legal-responsibility-for-algorithmic-decisions-in-the-polish-and-eu-regulatory-context/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Thu, 19 Jun 2025 17:21:33 +0000</pubDate>
				<category><![CDATA[academic writing]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[management]]></category>
		<category><![CDATA[process management]]></category>
		<category><![CDATA[systematic literature review]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=8516</guid>

					<description><![CDATA[1. Introduction The dynamic deployment of solutions based on Artificial Intelligence (AI) across global digital markets has opened up a new stage in the evolution of trade and consumer behavior (UNCTAD, 2024, p. 3). Contemporary purchasing decisions are increasingly being shaped by digital tools, often promoted as employing “Artificial Intelligence” to optimize the decision-making process from...]]></description>
										<content:encoded><![CDATA[<h2>1. Introduction</h2>
<p>The dynamic deployment of solutions based on Artificial Intelligence (AI) across global digital markets has opened up a new stage in the evolution of trade and consumer behavior (UNCTAD, 2024, p. 3). Contemporary purchasing decisions are increasingly being shaped by digital tools, often promoted as employing “Artificial Intelligence” to optimize the decision-making process from the consumer’s perspective (Paterson, 2022, p. 558).</p>
<p>Following Warszycki (2019, p. 115), AI may be understood as “a field of science encompassing disciplines, methods, tools, and techniques aimed at creating and developing a complete computer program that accurately reflects the model of human functioning and the human mind.” It has become an integral part of the modern consumer market, applied in both front-office processes (interfacing with consumers, clients, and supervisory bodies) and back-office processes (supporting the internal functioning of companies and institutions) (Keller et al., 2024, p. 417).</p>
<p>In consumer-facing applications, AI systems recommend products inferred from users’ preferences and histories, perform automated credit assessments, and provide customer support via virtual assistants (chatbots), among other functions (Myszakowska-Kaczała, 2024). On the operational side, companies are increasingly using AI-based analytics to understand consumer behavior, optimize pricing strategies, and improve supply chain management (GlobeNewswire, 2025).</p>
<p>Although the use of AI in customer service is often considered a hallmark of modern technological implementation, Artificial Intelligence itself is not a twenty-first-century innovation. Most technology historians trace the origins of the concept to the work of the British mathematician and cryptanalyst Alan Turing, who formulated its theoretical foundations in 1950 (Accenture, 2024, p. 8). Nevertheless, the dynamic development of AI was not widely recognized until 2011, when global technology companies such as Google, Facebook, Microsoft, and IBM began using it for business purposes (Ness et al., 2024, p. 1064).</p>
<p>From the perspective of the Polish AI landscape, 2023 marked a turning point, with 88% of respondents declaring familiarity with the term sztuczna inteligencja (“artificial intelligence”) – with this figure rising to 96% among individuals aged 18 to 24 (Digital Poland, 2023, p. 57). It is also notable that the jury of the Polish Language Council declared this term the Polish “Word of the Year” in 2023 (Kruszyńska, 2024).<br />
This coincided with the rapid rise of ChatGPT, an AI-based application that achieved unprecedented global recognition. Between late 2022 and early 2023, the platform attracted approximately 100 million users (mp/dap, TVN24.pl, 2023). The scale and pace of its user growth may position ChatGPT as the fastest-growing consumer-facing web application to date (The Guardian, 2023). Its widespread adoption spurred the creation of numerous derivative solutions tailored to the needs of specific industries, including the banking sector (Capgemini, 2024, p. 44).</p>
<p>In 2025, the global AI market was valued at USD 757.58 billion, with forecasts projecting growth to approximately USD 3,680.4 billion by 2034 (Precedence Research, 2025). Within the global banking sector alone, AI is estimated to generate up to USD 1 trillion in additional value annually (Biswas et al., 2020, pp. 2–3).<br />
The expanding use of AI in consumer services brings not only financial gains but also a range of other benefits – from mitigating risks associated with human error and improving service accessibility, to process automation that enhances efficiency and speeds up customer service. However, the adoption of AI-based tools by market entities also introduces new risks for consumers. The decision-making processes of AI algorithms may be opaque or difficult for the average client to comprehend (Ahn et al., 2024), which can hinder their ability to assess whether a system is operating correctly.</p>
<p>The opacity of AI systems, combined with their capacity to exploit biases and generate unintended side effects, has intensified debates on the need for responsible governance of AI technologies (Cheong, 2024, p. 2). A key challenge, therefore, lies in guaranteeing the effective protection of consumer rights when decisions affecting individuals are made by algorithms, as well as in determining which parties bear responsibility in cases of algorithmic error or misuse, whether unintentional or deliberate.</p>
<p>This article seeks to address the following research question: Do Polish and EU legal acts, together with institutional oversight, provide consumers with adequate protection against the negative consequences of decisions made by AI systems, and are there legal gaps in this area? The approach taken is descriptive and analytical, based on selected legal acts (including the Act on Competition and Consumer Protection and the AI Act), relevant academic literature, and selected legal opinions. These sources form the basis for further, more detailed research on the topic.</p>
<p>The choice of a qualitative descriptive analysis stems from its suitability for examining phenomena within their real-world context – in this case, the institutional and regulatory environment. Its purpose is to capture ongoing processes, identify the actors involved, and situate them within their operational conditions. While serving as a starting point for more advanced analyses, this approach itself constitutes a valuable and independent methodological framework (Sandelowski, 2000, p. 339). It involves the following stages (Villamin et al., 2024, pp. 51–91):</p>
<ul>
<li>defining the research objective (application-oriented),</li>
<li>determining the research method (descriptive analysis),</li>
<li>establishing the theoretical framework (accountability for algorithmic decisions in the context of legal frameworks and institutional oversight),</li>
<li><span class="fontstyle0">selecting the research sample (domestic and international literature, legal provisions, and opinions of Polish legal scholars),</span></li>
<li>collecting data (reviewing available sources),</li>
<li>analyzing data (evaluating sources in light of the research objective), and</li>
<li>presenting the research findings.</li>
</ul>
<p style="text-align: left;">The outcomes of this analysis are threefold: (i) a presentation of the current regulatory framework governing responsibility for AI-mediated decisions affecting consumers; (ii) the identification of potential gaps within the existing system of consumer protection; and (iii) the formulation of recommendations aimed at addressing these gaps in the Polish legal system, alongside proposals for new regulatory measures to strengthen consumer safeguards against the adverse consequences of AI-driven decision-making.</p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">2. The use of AI – benefits and risks</span></strong></p>
<p><span class="fontstyle0">Artificial Intelligence is now being applied across nearly all areas of human activity. It is already assisting the work of both teachers and students, including in schools and even in early childhood education (Iron Mountain, 2025). AI can automatically perform tasks such as grading tests and homework assignments or generating reports on student progress (Stecyk, 2025). Higher education institutions are also increasingly utilizing AI algorithms to enhance the efficiency of administrative and academic work. One example is the use of autonomous AI agents that assist in creating professional academic presentations based on an outline (Stecyk, 2025). AI can likewise improve communication processes within universities – for example, through the implementation of “intelligent” dean’s offices or automated student admissions systems. A student wishing to access publicly available university knowledge and documentation in real time needs only one condition to be met: access to the Internet (KALASOFT, n.d.).</span></p>
<p><span class="fontstyle0">It should be emphasized that in the context of higher education, where the student may be regarded as a client or consumer of educational services (Sojkin et al., 2012, pp. 565, 567), the use of Artificial Intelligence entails risks analogous to those observed in other sectors of digital services, particularly regarding data protection, algorithmic transparency, and the right to reliable information. Theoretically, information generated by software based on AI algorithms should be factually accurate. In practice, however, AI systems may rely on unreliable or outdated sources, creating a risk that users receive incorrect or misleading information.</span></p>
<p><span class="fontstyle0">Another risk associated with the use of Artificial Intelligence in higher education concerns the protection of student data collected by institutions employing AI tools, as well as the potential dehumanization of the educational process – where human interaction is diminished and the lecturer’s role shifts away from that of a mentor, becoming instead a mere supervisor of AI-driven systems (Kornaś, 2024).</span></p>
<p><span class="fontstyle0">An argument in favor of limiting the use of Artificial Intelligence in education is that decisions made without human intervention may result in the absence of a clearly identifiable responsible entity, as well as a lack of transparency regarding how such decisions are made (PARP Grupa PFR, 2023, p. 29). Insufficient oversight of these processes may, in turn, result in different types of misuse or abuse, potentially harming the interests of those affected (Iron Mountain, 2025). Table 1 presents examples of AI applications in the consumer market, along with their associated potential benefits and risks. </span></p>
<p><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-8520" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1.jpg" alt="" width="1744" height="2464" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1.jpg 1744w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-212x300.jpg 212w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-725x1024.jpg 725w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-768x1085.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-1087x1536.jpg 1087w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-1450x2048.jpg 1450w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-1-1320x1865.jpg 1320w" sizes="(max-width: 1744px) 100vw, 1744px" /></p>
<p><span class="fontstyle0">The examples of Artificial Intelligence applications presented in Table 1 illustrate the dual nature of AI’s impact on the consumer market. On the one hand, algorithms can enhance convenience, accessibility, and service efficiency, reduce operating costs, and minimize human error. On the other, AI-related risks include a lack of transparency in decision-making processes, potential discrimination, and incorrect decisions that may result in harm to the consumer. AI-powered tools may not only pose a threat to customer privacy but also increase the risk of consumers falling victim to deceptive or unfair market practices or even financial exclusion in cases where an insurer, on the basis of an AI-generated analysis, determines that a given consumer represents too great a risk of potential payout (BEUC, 2021, p. 35).</span></p>
<p><span class="fontstyle0">An example of potential gender-based discrimination by an AI algorithm was the 2019 case in the United States involving the credit limit determination process for the Apple Card, issued jointly by Apple and Goldman Sachs. Customers observed that the algorithm responsible for assigning credit limits granted significantly higher limits to men than to women with comparable financial situations. One applicant reported that his credit limit was 20 times higher than that of his wife, even though they shared joint marital property and, in his view, her credit history was even better than his. Following the publication of this report, other couples began to confirm such disparities, sharing examples suggesting that the algorithm favored men. The case attracted the attention of the New York Department of Financial Services, which launched an investigation to determine whether anti-discrimination laws had been violated in this instance (The Guardian, 2019), but it ultimately concluded that there was no discrimination against customers based on gender (Campbell, 2021).</span></p>
<p><span class="fontstyle0">The Apple Card case demonstrated, however, that a lack of algorithmic transparency can lead to public controversy. Customers did not receive a clear explanation as to why the decisions varied so significantly between genders. Being unable to understand the automated decision-making process led some users to perceive the differences in credit limits as negative gender discrimination, even though closer scrutiny showed that no such discrimination had actually occurred. A positive takeaway from this example is that regulators are prepared to intervene, treating the use of AI like any other credit procedure subject to the law.</span></p>
<p><span class="fontstyle0">It should be noted, however, that despite incidents raising concerns about the impartiality of AI-based solutions, there is also evidence suggesting that consumers perceive such systems as more objective than human-driven processes. The rationale in this context is the perceived absence of bias and emotions in AI decision-making (Nogueira et al., 2025, p. 2).</span></p>
<p><span class="fontstyle0">Another type of potential incident involving AI tools is a chatbot incorrectly dismissing a consumer complaint. This might occur, for example, if the chatbot misinterprets an image submitted by the customer and wrongly concludes that a product defect was caused by user error. Another possible example, negative from the customer’s perspective, would be the classification of a complaint into the wrong category. In both cases, one of the possible consequences is the expiration of the statutory 14-day period for responding to a consumer complaint, which, under Polish consumer law, results in the complaint being deemed accepted by default (Polish Consumer Rights Act, Article 7a). A consumer’s lack of awareness of, or failure to invoke, this legal provision could leave them without appropriate support in such a case, due to the algorithm’s improper functioning.</span></p>
<p><span class="fontstyle0">The next section of this article will examine the extent to which current Polish regulations address the challenges outlined above and what changes may be necessary to ensure that consumer rights are effectively protected in the era of widespread algorithmic use in the consumer market. This is a highly important issue, as the number of incidents involving AI systems is increasing alongside the growing adoption of Artificial Intelligence. Between 2022 and 2023 alone, the number of such incidents rose by approximately 1278% (OECD, n.d.).</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0">3. Legal regulations and institutional oversight as pillars of accountability for algorithmic decisions impacting consumers</span></strong></span></p>
<p>&nbsp;</p>
<p><span class="fontstyle0"><strong>3.1. The current legal framework for consumer protection in relation to AI</strong></span></p>
<p><span class="fontstyle2">Given the risks associated with the practical use of Artificial Intelligence, it is often perceived as a source of threats to individual rights (Contissa et al., 2018, p. 11). The noticeable rise in technological sophistication and the emergence of new risks have led regulatory bodies to recognize the necessity of legislative action in this domain (Lagioia et al., 2022, p. 482). Artificial Intelligence poses new and complex challenges to both consumers and the system of consumer law – challenges that existing regulatory mechanisms are not always capable of addressing effectively (Terryn &amp; Martos Marquez, 2025, p. 210).</span></p>
<p><span class="fontstyle2">Based on the analysis of the current legal framework, it can be indicated that there is no comprehensive legal act that specifically addresses the use of Artificial Intelligence in the consumer context. Nevertheless, existing legal provisions offer a certain degree of protection to consumers against the negative consequences of decisions made by algorithms. These include data protection regulations and consumer protection laws (Table 2).</span></p>
<p><img decoding="async" class="aligncenter size-full wp-image-8521" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-scaled.jpg" alt="" width="1714" height="2560" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-scaled.jpg 1714w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-201x300.jpg 201w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-686x1024.jpg 686w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-768x1147.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-1028x1536.jpg 1028w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-1371x2048.jpg 1371w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-2-1320x1971.jpg 1320w" sizes="(max-width: 1714px) 100vw, 1714px" /></p>
<p><span class="fontstyle0">Moreover, successive parts of the relatively new EU Artificial Intelligence Regulation (AI Act) are now gradually entering into force. The aim of the regulation is “to improve the functioning of the internal market by laying down a uniform legal framework, in particular for the development, placing on the market, putting into service and use of Artificial Intelligence systems (…) to promote the uptake of human-centric and trustworthy Artificial Intelligence (…) and to support innovation” (Regulation (EU) 2024/1689 of the European Parliament and of the Council – The Artificial Intelligence Act). Although the AI Act will fully apply as of 2 August 2026, the provisions of Chapters I and II are already binding and should be applied now (AI Act, art. 113).</span></p>
<p><span class="fontstyle0">Despite the fact that the AI Act includes several significant provisions from a consumer protection standpoint, such as the prohibition of social scoring and the right to file a complaint with a market surveillance authority if an AI system is believed to violate the regulation, European consumer advocacy groups have raised concerns about legal gaps that fail to fully address the risks consumers are exposed to in the context of AI deployment. According to these organizations, the AI Act is not capable of fully eliminating the risks associated with the use of AI tools in consumer interactions. In their view, the regulation focuses primarily on high-risk systems, while many widespread applications of AI, such as the use of chatbots, fall outside its scope (BEUC, 2023).</span></p>
<p><span class="fontstyle0">Such a situation may lead to the emergence of national legislative solutions addressing selected risks associated with the use of AI, which in turn could result in the fragmentation of legal provisions and hinder the assurance of a uniform level of protection for European Union citizens with respect to the same technological products and services (Bertolini, 2025, pp. 9–10).</span></p>
<p><span class="fontstyle0">Referring back to the earlier example of potential gender discrimination in the Apple Card credit approval process, it is worth noting that, under EU law, a consumer in </span><span class="fontstyle0">a similar situation could rely on Article 22 of the General Data Protection Regulation (GDPR). This provision entitles the data subject, whether a potential or actual client, to request clarification regarding the logic behind the algorithmic decision on their credit limit, and to demand a reassessment of the outcome by a human decision-maker.</span></p>
<p><span class="fontstyle0">Additionally, the European Union has in place anti-discrimination regulations – such as Directive 2004/113/EC of 13 December 2004, implementing the principle of equal treatment between men and women in the access to and supply of goods and services. Moreover, if such an incident were to occur in Poland, an entity that actually employed a discriminatory algorithm could face sanctions from the Office of Competition and Consumer Protection (UOKiK), as its actions may constitute a violation of collective consumer interests (Polish Act on Competition and Consumer Protection, Article 24). The activities of this Office will be discussed in more detail in the following sections of this article.</span></p>
<p><span class="fontstyle0">Similarly, in scenarios involving potentially incorrect decisions issued by an AI-driven complaint resolution system, or where a university student receives inaccurate information from an “intelligent” dean’s office, current legal frameworks would regard such instances as the equivalent of human error. Ultimately, responsibility for the functioning and consequences of AI systems rests with the individual or institution that has introduced and operates them (Paprocki, 2025).</span></p>
<p><span class="fontstyle0">The consumer submitting a complaint would retain the right to exercise their entitlement (e.g., to repair or replacement of the product) (Polish Consumer Rights Act, Article 43d). The consumer could also notify the UOKiK, which would assess whether the company had violated the collective interests of consumers (Polish Competition and Consumer Protection Act, Article 24). In cases where complaint processing is delegated to a malfunctioning algorithm, the UOKiK has begun examining such situations and emphasizes that the use of AI does not relieve businesses of their responsibility to review consumer complaints in a fair and timely manner (Infor.pl, 2023).</span></p>
<p><span class="fontstyle0">However, for a student who received incorrect information via an AI system, pursuing legal remedies in response to the negative consequences of such inadequate support may prove to be a significant challenge. Legal provisions do not always recognize a student as a consumer eligible for protection under all the legal acts listed in Table 2. However, if a student were to enter into an agreement based on incorrect information provided by a chatbot, the issue of determining liability for being misled by AI could have a valid legal basis (Warchoł-Lewucka, 2024). In the case of an incorrect response provided by a “smart” dean’s office – regarding, for instance, the current class schedule – the consequences of a student’s absence from mandatory classes held on a date not indicated by the chatbot would likely be borne solely by the student.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0"><span class="fontstyle2">3.2. Regulatory and supervisory institutions and their role</span></span></strong></p>
<p><span class="fontstyle0">Since the broad application of AI in areas such as the consumer market is a relatively new phenomenon, the institutional structure aimed at protecting consumers from AI-related risks is still evolving. Additionally, the complexity of AI use cases necessitates coordination and cooperation among the various regulatory and supervisory authorities.</span></p>
<p><span class="fontstyle0">In the Polish legal system, the Office of Competition and Consumer Protection (UOKiK), established in 1990, serves as the main institution responsible for safeguarding consumer rights (UOKiK, n.d.). Although no existing legal act explicitly names the UOKiK as the principal supervisory authority overseeing the impact of Artificial Intelligence on the consumer market, the Office actively monitors and addresses the use of algorithms in consumer-facing processes. Its current activities include assessments of chatbot functionality in the telecommunications market and in e-commerce services – most notably in food delivery apps and online marketplaces (Infor.pl, 2023).</span></p>
<p><span class="fontstyle0">The UOKiK is also striving to harness AI to enhance consumer protection on the Polish market. An example of this effort is the implementation of the project entitled “Detection and elimination of dark patterns using Artificial Intelligence,” which aims to develop an AI-based tool capable of identifying unfair uses of so-called dark patterns on commercial websites (UOKiK, 2024). These are user-interface designs intentionally created to mislead </span><span class="fontstyle0">consumers, hinder the expression of genuine preferences, or manipulate users into taking predetermined actions. Such practices are intended to pressure consumers into making purchases they do not truly desire, or to manipulate them into revealing personal information they would not voluntarily provide in a more transparent context (Luguri &amp; Strahilevitz, 2021, p. 43).</span></p>
<p><span class="fontstyle0">It can be assumed that in the near future, the scope of UOKiK’s activities and responsibilities related to the use of Artificial Intelligence in the consumer market will continue to expand. It is likely that the authority will gradually acquire additional statutory powers aimed at enhancing the effectiveness of its supervisory activities in this area.</span></p>
<p><span class="fontstyle0">An additional authority involved in addressing the use of Artificial Intelligence with respect to personal data protection in Poland is the Personal Data Protection Office (UODO). Its counterpart at the EU level is the European Data Protection Board (EDPB), which coordinates data protection policies across member states.</span></p>
<p><span class="fontstyle0">The President of the UODO is the “competent authority for personal data protection” (Polish Personal Data Protection Act, Article 34(1)), with tasks including monitoring and enforcing the provisions of the GDPR, as well as promoting public awareness and understanding of the risks, rules, safeguards, and rights related to data processing (GDPR, Article 57(1)(a) and (b)).</span></p>
<p><span class="fontstyle0">In the context of Artificial Intelligence, the Personal Data Protection Office (UODO) examines the impact of AI on individuals’ privacy and the protection of their personal data (UODO, n.d.). The UODO is authorized, among other things, to impose administrative fines for violations of the GDPR, including the aforementioned Article 22 (e.g., failure to provide human verification of automated data processing in cases where the decision produces legal effects for the consumer).</span></p>
<p><span class="fontstyle0">Among the responsibilities of the European Data Protection Board (EDPB, or EROD) is providing guidance to the European Commission on issues concerning data protection – particularly with regard to proposed amendments to the GDPR and broader legislative initiatives within the EU (EDPB, n.d.). Notably, at its inaugural plenary meeting in 2018, the EDPB adopted guidelines addressing automated decision-making and profiling (EDPB, 2018).</span></p>
<p><span class="fontstyle0">At the EU level, the European Artificial Intelligence Board was established to oversee </span><span class="fontstyle0">the proper implementation of the AI Act (European Commission). Moreover, the European Data Protection Supervisor (EDPS) plays a key role in ensuring that all EU institutions and bodies respect citizens’ privacy rights during personal data processing. The EDPS is also responsible for tracking the development of emerging technologies that may impact data protection and for carrying out investigations into relevant matters falling within its jurisdiction (european-union.europa.eu). Accordingly, it may be concluded that the enforcement of legal standards regarding the protection of Polish consumers’ personal data and the appropriate use of AI-assisted </span><span class="fontstyle0">tools involves multiple institutions operating at both the national and European levels. </span></p>
<p><span class="fontstyle0">Determining which body is responsible in a specific case should depend exclusively on the type of suspected violation (Table 3).</span></p>
<p><strong><span class="fontstyle2">Table 3. </span><span class="fontstyle3">Comparison of the scope of responsibilities of Polish institutions overseeing the consumer market.</span></strong></p>
<p><img decoding="async" class="aligncenter size-full wp-image-8522" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-scaled.jpg" alt="" width="1019" height="2560" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-scaled.jpg 1019w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-119x300.jpg 119w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-408x1024.jpg 408w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-768x1929.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-611x1536.jpg 611w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-815x2048.jpg 815w, https://minib.pl/wp-content/uploads/2025/06/2-2025-06-table-3-1320x3316.jpg 1320w" sizes="(max-width: 1019px) 100vw, 1019px" /></p>
<p><span class="fontstyle0">However, due to the fast-paced development of Artificial Intelligence in an ever-growing range of consumer-facing applications, it is highly probable that not all risks stemming from AI usage are adequately addressed in existing legal frameworks, and that responsibility for such risks may not fall solely within the remit of a single regulatory body. A relevant example would be a chatbot’s improper handling of a consumer complaint, accompanied by a breach of personal data protection regulations – particularly involving sensitive data. In such circumstances, the case would require joint consideration by at least two competent authorities, such as the UOKiK and UODO.</span></p>
<p><span class="fontstyle0">Thus, it is crucial to ensure not only the constant oversight of emerging AI-related risks and the ongoing adjustment of relevant legislation and institutional responsibilities, but also effective interdisciplinary collaboration between the entities tasked with safeguarding consumer rights.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle2">4. Responsibility for algorithmic decision-making</span></strong></span></p>
<p><span class="fontstyle0">When analyzing the risks associated with the use of Artificial Intelligence in consumer services, it is essential to consider the issue of responsibility for erroneous decisions made by algorithms. AI itself does not possess legal personality and therefore cannot be held directly accountable (Bączyk-Rozwadowska, 2022, p. 9). Responsibility may lie solely with a natural or legal person who exercises control over the operation and deployment of AI-driven systems (Kulicki, 2025). As one analyst has put it, “In principle, liability for errors stemming from the system’s architecture or software should rest with the manufacturer, whereas responsibility for misuse of the system lies with the end user” (Trzaska, 2024). However, given that there is currently no specific legal act that directly assigns responsibility for damages caused by Artificial Intelligence, it remains challenging to clearly designate a natural or legal person as directly liable for errors resulting from AI operations (Trzaska, 2024).</span></p>
<p><span class="fontstyle0">The existing academic literature offers a range of proposals concerning the entity that could be considered “responsible” for decisions made by AI systems: from the software developer who implemented faulty algorithms (programistajava.pl, 2025), through the system operator or controller (Kaniewski &amp; Kowacz, 2023), to the end user (Infinity Insurance Brokers, n.d.), who may be responsible for the proper use of artificial intelligence systems (Buiten, 2024, pp. 256–257).</span></p>
<p><span class="fontstyle0">Certain authors suggest a model in which responsibility is distributed among various groups of stakeholders (programistajava.pl, 2025). Meanwhile, other sources highlight the possibility that, given the considerable complexity of the AI value chain, it may not always be possible to clearly identify the entity responsible for a specific error (Jelińska-Sabatowska, 2025). In many AI-driven processes involved in the provision of products and services, multiple entities participate (Buiten et al., 2023, p. 11). Legal counsels also point to a new type of risk associated with the use of AI – namely, the risk of a “liability gap” (Nogacki, 2024).</span></p>
<p><span class="fontstyle0">The challenge of assigning liability for the outcomes of Artificial Intelligence stems from factors including the following (Nogacki, 2025):</span></p>
<p><span class="fontstyle0">• autonomy – AI systems make decisions without human oversight,</span></p>
<p><span class="fontstyle0">• opacity – the AI decision-making process may be difficult to understand,</span></p>
<p><span class="fontstyle0">• data dependency – flawed data can lead AI to make erroneous decisions,</span></p>
<p><span class="fontstyle0">• value chain complexity – the development and implementation of AI involves multiple entities.</span></p>
<p>&nbsp;</p>
<p><span class="fontstyle0">Nevertheless, the most frequently cited example of a party considered responsible for decisions made by AI is the entrepreneur who implements an AI-based process within their organization. As such, they must take into account the possibility of incurring contractual liability in the event that damage is caused by Artificial Intelligence – such as when an error results in the failure to fulfill a contract concluded with a business partner (Tak Prawnik, 2025). They may also face tort liability, for example in the case of an accident caused by an autonomous vehicle (Kaniewski et al., 2023). However, some sources argue that the previously mentioned “opacity” of AI decision-making undermines the application of standard principles of tort liability (Nogacki, 2025).</span></p>
<p><span class="fontstyle0">Apart from the legal challenge of clearly identifying the entity liable for damage caused by Artificial Intelligence, another significant obstacle is the difficulty in proving the “fault” of the AI system itself. To do so, the consumer – or their legal representative – must gain access to and understand how the AI tool functions, which may require insight into complex and often non-transparent decision-making processes. In practice, however, this may prove difficult or even impossible. Among other factors, this is due to the so-called “black box problem” (Taveira da Fonseca et al., 2024, p. 300) – that is, the system’s recommendations may not be explainable within the framework of traditional linear cause-and-effect logic (Kroplewski, 2023, p. 112).</span></p>
<p><span class="fontstyle0">An additional risk for banking customers related to the use of Artificial Intelligence is the potential overdependence on AI systems in decision-making, predictive analytics, and recommendation processes. Even if a human remains the final decision-maker, they may defer too strongly to the suggestions provided by AI – perceiving them as inherently correct </span><span class="fontstyle0">or derived from deep and reliable analysis (Szostek et al., 2022, p. 55). In practice, however, there may be uncertainty as to whether the data used by automated models is of adequate quality, which may result, for example, in an inaccurate assessment of a customer’s creditworthiness (Szostek et al., 2022, p. 26). In such circumstances, the harmed consumer may face significant challenges in demonstrating that the unfair treatment resulted from the actions of both the AI system and the bank’s staff.</span></p>
<p><span class="fontstyle0">In the context of seeking redress against an erroneous AI-generated decision, the consumer must first be aware that such an irregularity has occurred. The literature on accountability for AI-driven decisions highlights the so-called “information gap,” whereby an individual may not realize that their adverse situation results from the actions of Artificial Intelligence (Ziosi et al., 2023, p. 9). What is crucial, therefore, is not only the existence of legal provisions designed to prevent the effects of erroneous AI decisions, but also the consumer’s own awareness of the protections available under the relevant legal framework.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0"><span class="fontstyle2">5. Conclusions and recommendations</span></span></strong></span></p>
<p><span class="fontstyle0">While the application of Artificial Intelligence in the consumer market brings various advantages – such as personalized product offerings – it also entails significant risks. These include the reliance of algorithms on outdated or biased data, which may result in the unequal treatment of certain customer groups.</span></p>
<p><span class="fontstyle0">Additionally, consumers’ inability to logically explain how algorithms operate may also lead to their misinterpretation of AI-generated decisions, as exemplified by the case concerning the determination of credit limits in the Apple Card program.</span></p>
<p><span class="fontstyle0">Although existing legislation ensures a certain level of protection for consumers against the risks posed by Artificial Intelligence – such as the right to human oversight and the prohibition of discriminatory practices – there are still notable legal gaps. In particular, the opacity of AI decision-making processes creates challenges in proving errors and seeking redress.</span></p>
<p><span class="fontstyle0">One of the legal gaps identified in the article concerns the question of who should be held accountable for decisions made by Artificial Intelligence. Since AI does not have legal personality, it cannot itself bear responsibility for erroneous algorithmic decisions, and no existing provision in either Polish or EU legislation explicitly designates the entity liable </span><span class="fontstyle0">for the malfunction of AI systems. In the scholarly literature, the entity most frequently identified as “responsible” is the provider making the AI-based solution available to consumers. However, responsibility for the harms caused by Artificial Intelligence is also sometimes attributed to the software developers whose algorithms prove faulty, as well as to end users. </span></p>
<p><span class="fontstyle0">In summary, the answer to the research question posed in this article is as follows: Polish and EU legal acts, together with institutional oversight, provide consumers with protection against the negative consequences of decisions made by AI systems. However, this protection does not extend to the full spectrum of potential risks arising from the use of Artificial Intelligence in consumer markets. Legal gaps remain in this area, and the introduction of new legislation that keeps pace with the ongoing development of AI capabilities represents a major regulatory challenge, making the complete elimination of such gaps difficult – if not impossible – in the foreseeable future.</span></p>
<p><span class="fontstyle0">As the use of Artificial Intelligence becomes more widespread, the frequency of incidents involving AI systems continues to rise. Regulatory bodies at both the European and national levels, along with consumer protection authorities, are still building the expertise and acquiring the instruments required to monitor and control AI effectively. This transitional phase contributes to the persistence of certain regulatory blind spots and legal uncertainties. To enhance consumer protection in a market environment where an ever-growing number of processes are supported by Artificial Intelligence – systems that may still be prone to error – it is crucial to implement reforms across legislative, institutional, and educational spheres.</span></p>
<p><span class="fontstyle0">With regard to recommendations, priority should be given to measures designed to address the identified shortcomings in the Polish legal system and to strengthen safeguards for consumers affected by AI-driven decision-making, such as the following:</span></p>
<p><span class="fontstyle0">• Clarifying legal liability for individual entities involved in the development, provision, and use of AI – for example, by introducing a provision into the Polish Competition and Consumer Protection Act stating that liability for errors made by Artificial Intelligence rests with the entity that makes the AI-based tool available to consumers, or with another entity explicitly designated by that provider in the applicable terms and conditions.</span></p>
<p><span class="fontstyle0">• Introducing a legal provision that facilitates the burden of proof for consumers in disputes concerning the malfunctioning of Artificial Intelligence – given that proving an AI-related error is often difficult or even impossible for the average consumer, a reasonable solution would be to shift the burden of proof to the entity providing the AI-based tool to consumers (or to another entity explicitly designated in the relevant terms and conditions). In the event of a dispute, this entity would be required to demonstrate that the AI system did not make an error; otherwise, the case would be resolved in favor of the consumer.</span></p>
<p><span class="fontstyle0">• Requiring algorithmic transparency – consumers should have the right to understand the logic behind decisions made by Artificial Intelligence that affect them personally; for example, by being granted access to terms and conditions that include information about the characteristics or factors the AI takes into account when making specific decisions.</span></p>
<p><span class="fontstyle0">• Establishing a statutory definition of the competences of supervisory authorities – for example, a dedicated department could be established within Poland’s Office of Competition and Consumer Protection (UOKiK), staffed with experts in artificial intelligence systems, tasked with analyzing cases in the consumer market suspected of involving faulty operation of AI-based systems.</span></p>
<p><span class="fontstyle0">• Promoting consumer education on AI – through initiatives aimed at increasing consumer awareness of the risks associated with artificial intelligence, as well as of the rights they have with regard to protection against such risks.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0" style="font-size: 18pt;"><span class="fontstyle2">References</span></span></strong></p>
<p><span class="fontstyle0">Accenture. (2024). <span class="fontstyle3">Banking on AI: Banking top 10 trends for 2024. https://www.accenture.com/content/dam/accenture/final/industry/banking/document/Accenture-Banking-Top-10-Trends-2024.pdf</span></span></p>
<p><span class="fontstyle0">Ahn, D., Almaatouq, A., Gulabani, M., &amp; Hosanagar, K. (2021). Will we trust what we don’t understand? Impact of model interpretability and outcome feedback on trust in AI. <span class="fontstyle3">SSRN Electronic Journal. </span>https://doi.org/10.2139/ssrn.3964332</span></p>
<p><span class="fontstyle0">Bączyk-Rozwadowska, K. (2022). Odpowiedzialność cywilna za szkody wyrządzone w związku z zastosowaniem sztucznej inteligencji w medycynie [Civil liability for damages arising from the use of artificial intelligence in medicine]. <span class="fontstyle3">Przegląd Prawa Medycznego, 3</span>(3–4). https://przegladprawamedycznego.pl/index.php/ppm/article/view/142</span></p>
<p><span class="fontstyle0">Bertolini, A. (2025). <span class="fontstyle3">Artificial intelligence and civil liability: A European perspective. </span>Policy Department for Citizens’ Rights and Constitutional Affairs, Directorate-General for Internal Policies.</span></p>
<p><span class="fontstyle0">BEUC. (2021, October 7). <span class="fontstyle3">Regulating AI to protect the consumer. </span>Brussels: BEUC. https://www.beuc.eu/sites/default/files/publications/beuc-x-2021-088_regulating_ai_to_protect_the_consumer.pdf</span></p>
<p><span class="fontstyle0">BEUC. (2023, June 11). <span class="fontstyle3">EU rules on AI lack punch to sufficiently protect consumers. </span>https://www.beuc.eu/pressreleases/eu-rules-ai-lack-punch-sufficiently-protect-consumers</span></p>
<p><span class="fontstyle0">Biswas, S., Carson, B., Chung, V., Singh, S., &amp; Thomas, R. (2020). <span class="fontstyle3">AI-bank of the future: Can banks meet the AI challenge? </span>McKinsey &amp; Company. https://www.mckinsey.com/industries/financial-services/our-insights/ai-bank-of-the-future-can-banks-meet-the-ai-challenge</span></p>
<p><span class="fontstyle0">Bondos, I. (2016). Reakcje na dynamicznie ustalane ceny – czy konsumenci mają podwójne standardy oceny uczciwości cen online? [Reactions to dynamic pricing: Do consumers apply double standards when assessing online price fairness?]. <span class="fontstyle3">Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu, (460), 173–188. </span>https://www.dbc.wroc.pl/publication/40250</span></p>
<p><span class="fontstyle0">Buiten, M. C. (2024). Product liability for defective AI. <span class="fontstyle3">European Journal of Law and Economics, 57</span>, 239–273. https://doi.org/10.1007/s10657-024-09794-z</span></p>
<p><span class="fontstyle0">Buiten, M., de Streel, A., &amp; Peitz, M. (2023). The law and economics of AI liability. <span class="fontstyle3">Computer Law &amp; Security Review, 48</span>, Article 105794. https://doi.org/10.1016/j.clsr.2023.105794 </span></p>
<p><span class="fontstyle0">Campbell, I. C. (2021, March 23). The Apple Card doesn’t actually discriminate against women, investigators say. </span><span class="fontstyle2">The Verge. </span><span class="fontstyle0">https://www.theverge.com/2021/3/23/22347127/goldman-sachs-apple-card-no-gender-discrimination</span></p>
<p><span class="fontstyle0">Capgemini. (2024). </span><span class="fontstyle2">World retail banking report 2024. </span><span class="fontstyle0">Capgemini Research Institute.</span></p>
<p><span class="fontstyle0">Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. </span><span class="fontstyle2">Frontiers in Human Dynamics, 6</span><span class="fontstyle0">, Article 1421273. https://doi.org/10.3389/fhumd.2024.1421273</span></p>
<p><span class="fontstyle0">Contissa, G., Docter, K., Lagioia, F., Lippi, M., Micklitz, H. W., Palka, P., Sartor, G., &amp; Torroni, P. (2018). CLAUDETTE meets GDPR: Automating the evaluation of privacy policies using artificial intelligence. </span><span class="fontstyle2">SSRN Electronic Journal. </span><span class="fontstyle0">https://doi.org/10.2139/ssrn.3208596</span></p>
<p><span class="fontstyle0">Digital Poland. (2023). </span><span class="fontstyle2">Technologia w służbie społeczeństwu: Czy Polacy zostaną społeczeństwem 5.0? </span><span class="fontstyle0">[Technology in the service of society: Will Poland become a Society 5.0?]. Warsaw.</span></p>
<p><span class="fontstyle0">Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU as regards the better enforcement and modernisation of Union consumer protection rules. </span><span class="fontstyle2">Official Journal of the European Union, L 328, </span><span class="fontstyle0">7–28 (18 December 2019).</span></p>
<p><span class="fontstyle0">European Commission. (n.d.). </span><span class="fontstyle2">AI Board (European Artificial Intelligence Board)</span><span class="fontstyle0">. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/ai-board</span></p>
<p><span class="fontstyle0">European Data Protection Board (EDPB). (2018). </span><span class="fontstyle2">Zautomatyzowane podejmowanie decyzji i profilowanie </span><span class="fontstyle0">[Automated decision-making and profiling]. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/automated-decision-making-and-profiling_enpl</span></p>
<p><span class="fontstyle0">European Data Protection Board (EDPB). (n.d.). </span><span class="fontstyle2">Rola EROD </span><span class="fontstyle0">[Role of the EDPB]</span><span class="fontstyle2">. </span><span class="fontstyle0">https://www.edpb.europa.eu/role-edpb_enpl</span></p>
<p><span class="fontstyle0">European Union. (n.d.). </span><span class="fontstyle2">Europejski Inspektor Ochrony Danych (EDPS) [European Data Protection Supervisor]. </span><span class="fontstyle0">https://european-union.europa.eu/institutions-law-budget/institutions-and-bodies/search-all-eu-institutions-and-bodies/european-data-protection-supervisor-edps_enpl</span></p>
<p><span class="fontstyle0">EY (Ernst &amp; Young). (2024, September 18). </span><span class="fontstyle2">Badanie EY: Rosną obawy konsumentów o bezpieczeństwo ich danych </span><span class="fontstyle0">[EY study: Consumer concerns about data security are growing]</span><span class="fontstyle2">. </span><span class="fontstyle0">https://www.ey.com/pl_pl/newsroom/2024/09/rosna-obawy-konsumentow-o-bezpieczenstwo-ich-danych</span></p>
<p><span class="fontstyle0">GlobeNewswire / Precedence Research. (2025, February 11). </span><span class="fontstyle2">Artificial Intelligence skyrocketing, shaking the market with $3,680.47 Bn by 2034</span><span class="fontstyle0">. https://www.globenewswire.com/news-release/2025/02/11/3024340/0/en/Artificial-Intelligence-Skyrocketing-Shaking-the-Market-with-3-680-47-Bn-by-2034.html</span></p>
<p><span class="fontstyle0">Infinity Insurance Brokers. (n.d.). </span><span class="fontstyle2">Odpowiedzialność za szkody wywołane przez AI </span><span class="fontstyle0">[Liability for damages caused by AI]. https://ibu.pl/blog/odpowiedzialnosc-za-szkody-wywolane-przez-ai</span></p>
<p><span class="fontstyle0">Infor.pl. (2023, April 13). </span><span class="fontstyle2">Chatboty, algorytmy, sztuczna inteligencja a prawa konsumenta – stanowisko UOKiK </span><span class="fontstyle0">[Chatbots, algorithms, AI and consumer rights – UOKiK’s position]. https://ksiegowosc.infor.pl/wiadomosci/5722900,chatboty-algorytmy-sztuczna-inteligencja-a-prawa-konsumenta-stanowisko-uokik.html</span></p>
<p><span class="fontstyle0">Iron Mountain. (2025, July 15). </span><span class="fontstyle2">Sztuczna inteligencja w edukacji – szansa czy zagrożenie? </span><span class="fontstyle0">[Artificial intelligence in education – Opportunity or threat?]. https://www.ironmountain.com/pl-pl/resources/blogs-and-articles/a/artificial-intelligence-in-education-opportunity-or-threat</span></p>
<p><span class="fontstyle0">Jelińska-Sabatowska, A. (2025, May 28). </span><span class="fontstyle2">Prawa konsumentów w erze AI: Jak sztuczna inteligencja zmienia relacje w sferze B2C </span><span class="fontstyle0">[Consumer rights in the AI era]. Legalis. C.H.Beck. https://www.legalis.pl/prawa-konsumentow-w-erze-ai-jak-sztuczna-inteligencja-zmienia-relacje-w-sferze-b2c/</span></p>
<p><span class="fontstyle0">Jurczak, T. (2023, December 30). </span><span class="fontstyle2">UOKiK otrzymuje skargi na boty </span><span class="fontstyle0">[UOKiK receives complaints about bots]. </span><span class="fontstyle2">Gazeta Prawna</span><span class="fontstyle0">. https://serwisy.gazetaprawna.pl/poradnik-konsumenta/artykuly/8658758,chatboty-voiceboty-uokik-boty-prawa-konsumenta.html</span></p>
<p><span class="fontstyle0">KALASOFT. (n.d.). </span><span class="fontstyle2">Inteligentny dziekanat, inteligentna rekrutacja: Jak sztuczna inteligencja zmieniła komunikację uczelni ze studentami </span><span class="fontstyle0">[Smart dean’s office, smart admissions]. https://www.kalasoft.pl/sztuczna-inteligencja/</span></p>
<p><span class="fontstyle0">Kaniewski, P., &amp; Kowacz, K. (2023, October 3). </span><span class="fontstyle2">Co jeśli AI zawiedzie, czyli odpowiedzialność cywilna za sztuczną inteligencję </span><span class="fontstyle0">[What if AI fails? Civil liability for AI]. </span><span class="fontstyle2">ITwiz</span><span class="fontstyle0">. https://itwiz.pl/co-jesli-ai-zawiedzie-czyli-odpowiedzialnosc-cywilna-za-sztuczna-inteligencje/</span></p>
<p><span class="fontstyle0">Keller, A., Martins Pereira, C., &amp; Lucas Pires, M. (2024). The European Union’s approach to artificial intelligence and the challenge of systemic risk. In H. Sousa Antunes, P. M. Freitas, A. L. Oliveira, C. Martins Pereira, E. Vaz de Sequeira, &amp; L. Barreto Xavier (Eds.), </span><span class="fontstyle2">Multidisciplinary perspectives on artificial intelligence and the law </span><span class="fontstyle0">(pp. 415–439). Springer. https://doi.org/10.1007/978-3-031-41264-6_22</span></p>
<p><span class="fontstyle0">Kornaś, W. (2024, August 22). </span><span class="fontstyle2">Sztuczna inteligencja w szkolnictwie wyższym </span><span class="fontstyle0">[Artificial intelligence in higher education]. </span><span class="fontstyle2">Wyższa Szkoła Bezpieczeństwa </span><span class="fontstyle0">(WSB) Blog. https://www.wsb.net.pl/technologia/sztuczna-inteligencja-w-szkolnictwie-wyzszym/</span></p>
<p><span class="fontstyle0">Kroplewski, R. (2023). Odporność AI dla odpornej wspólnoty [AI resilience for a resilient community]. In A. Szczęsna &amp; M. Stachoń (Eds.), </span><span class="fontstyle2">Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie </span><span class="fontstyle0">(pp. 111–122). CyberPOLICY NASK – Państwowy Instytut Badawczy. https://cyberpolicy.nask.pl/wp-content/uploads/2023/09/Cyberbezpieczenstwo-AI.-AI-w-cyberbezpieczenstwie.pdf</span></p>
<p><span class="fontstyle0">Kruszyńska, A. (2024, January 4). </span><span class="fontstyle2">Wyłoniono Słowo Roku 2023. Kapituła podała wyniki </span><span class="fontstyle0">[The Word of the Year 2023 has been announced]. </span><span class="fontstyle2">Polska Agencja Prasowa (PAP). </span><span class="fontstyle0">https://www.pap.pl/aktualnosci/wyloniono-slowo-roku-2023-kapitula-podala-wyniki</span></p>
<p><span class="fontstyle0">Kulawik, T. (2024, September 17). </span><span class="fontstyle2">Zrozumieć decyzje algorytmów – wyjaśnialność sztucznej inteligencji </span><span class="fontstyle0">[Understanding algorithmic decisions – Explainability of AI]. </span><span class="fontstyle2">ING Tech Blog. </span><span class="fontstyle0">https://techblog.ing.pl/blog/zrozumiec-decyzje-algorytmow-wyjasnialnosc-sztucznej-inteligencji</span></p>
<p><span class="fontstyle0">Kulicki, Ł. (2025). </span><span class="fontstyle2">Szkody wyrządzone przez sztuczną inteligencję – Kto ponosi odpowiedzialność? </span><span class="fontstyle0">[Damages caused by AI – Who is liable?]. After Legal Kancelaria. https://umowywit.pl/szkody-wyrzadzone-przez-ai-kto-odpowiada/</span></p>
<p><span class="fontstyle0">Lagioia, F., Jabłonowska, A., Liepiņa, R., &amp; Drazewski, K. (2022). AI in search of unfairness in consumer contracts: The terms of service landscape. </span><span class="fontstyle2">Journal of Consumer Policy, 45</span><span class="fontstyle0">(3), 481–536. https://doi.org/10.1007/s10603-022-09520-9</span></p>
<p><span class="fontstyle0">Luguri, J., &amp; Strahilevitz, L. J. (2021). Shining a light on dark patterns. </span><span class="fontstyle2">Journal of Legal Analysis, 13</span><span class="fontstyle0">(1), 43–109. https://doi.org/10.1093/jla/laaa006</span></p>
<p><span class="fontstyle0">mp/dap. (2023, December 30). </span><span class="fontstyle2">Taki był rok 2023 w gospodarce. Dziesięć najważniejszych wydarzeń </span><span class="fontstyle0">[2023 in review: Ten key economic events]. </span><span class="fontstyle2">TVN24.pl</span><span class="fontstyle0">. https://tvn24.pl/biznes/z-kraju/rok-2023-w-gospodarce-dziesiec-najwazniejszych-wydarzen-st7537170</span></p>
<p><span class="fontstyle0">Myszakowska-Kaczała, D. (2024). </span><span class="fontstyle2">AI – Jak sztuczna inteligencja zmienia życie konsumentów? </span><span class="fontstyle0">[AI – How AI is changing consumers’ lives]. LexCultura. https://lexcultura.pl/ai-jak-sztuczna-inteligencja-zmienia-zycie-konsumentow/</span></p>
<p><span class="fontstyle0">Ness, S., Volkivskyi, M., Muhammad, T., &amp; Balzhyk, K. (2024). Banking 4.0: The impact of artificial intelligence on the banking sector and its transformation of modern banks. </span><span class="fontstyle2">International Journal of Innovative Science and Research Technology, 9</span><span class="fontstyle0">(2), 1064–1072. https://ijisrt.com/banking-40-the-impact-of-artificial-intelligence-on-the-banking-sector-and-its-transformation-of-modern-banks</span></p>
<p><span class="fontstyle0">Nogacki, R. (2024, April 5). </span><span class="fontstyle2">Prawne problemy ze sztuczną inteligencją: Czy prawo powstrzyma „bunt maszyn”? </span><span class="fontstyle0">[Legal issues with artificial intelligence: Will the law stop the “machine rebellion”?]. </span><span class="fontstyle2">Gazeta Prawna / Kancelaria Prawna Skarbiec. </span><span class="fontstyle0">https://www.gazetaprawna.pl/firma-i-prawo/artykuly/9422540,prawne-problemy-ze-sztuczna-inteligencja-czy-prawo-powstrzyma-bunt-m.html</span></p>
<p><span class="fontstyle0">Nogacki, R. (2025, February 10). </span><span class="fontstyle2">Odpowiedzialność prawna za decyzje systemów AI: Kto odpowiada, gdy algorytm się myli? </span><span class="fontstyle0">[Legal responsibility for AI system decisions: Who is responsible when the algorithm makes a mistake?]. </span><span class="fontstyle2">Business Centre Club / Kancelaria Prawna Skarbiec. </span><span class="fontstyle0">https://www.bcc.org.pl/odpowiedzialnosc-prawna-za-decyzje-systemow-ai-kto-odpowiada-gdy-algorytm-sie-myli/</span></p>
<p><span class="fontstyle0">Nogueira, E., Lopes, J. M., &amp; Gomes, S. (2025). The new era of artificial intelligence in consumption: Theoretical framing, review and research agenda. </span><span class="fontstyle2">Management Review Quarterly, 75</span><span class="fontstyle0">(3), 965–1000. https://doi.org/10.1007/s11301-024-00390-1</span></p>
<p><span class="fontstyle0">Nowakowski, M. (2021, August 3). </span><span class="fontstyle2">Czy zbyt samodzielne bankowe algorytmy AI mogą dyskryminować klientów ubiegających się o kredyty? </span><span class="fontstyle0">[Can overly autonomous AI algorithms in banking discriminate against loan applicants?]. </span><span class="fontstyle2">Bank.pl. </span><span class="fontstyle0">https://bank.pl/czy-zbyt-samodzielne-bankowe-algorytmy-ai-moga-dyskryminowac-klientow-ubiegajacych-sie-o-kredyty/</span></p>
<p><span class="fontstyle0">Organisation for Economic Co-operation and Development (OECD). (n.d.). </span><span class="fontstyle2">Artificial intelligence. </span><span class="fontstyle0">https://www.oecd.org/en/topics/policy-issues/artificial-intelligence.html</span></p>
<p><span class="fontstyle0">Paprocki, T. (2025, June 4). </span><span class="fontstyle2">Prawne wymogi automatyzacji: Co może zrobić AI, a co nadal wymaga pracy człowieka? AI Act od 2026 roku – co nowe przepisy zmienią w biznesie? </span><span class="fontstyle0">[Legal requirements for automation: What can AI do, and what still requires human work? The AI Act from 2026 – what will change for business?]. Infor.pl / Kancelaria Paprocki, Wojciechowski &amp; Partnerzy. https://kadry.infor.pl/zatrudnienie/umowa-o-prace/6960354,prawne-wymogi-automatyzacji-co-moze-zrobic-ai-a-co-nadal-wymaga-pracy-czlowieka-ai-act-od-2026-roku-co-nowe-przepisy-zmienia-w-biznesie.html</span></p>
<p><span class="fontstyle0">PARP Grupa PFR. (2023). </span><span class="fontstyle2">Rynek pracy, edukacja, kompetencje: Aktualne trendy i wyniki badań </span><span class="fontstyle0">[Labour market, education, skills: Current trends and research findings]. Wydanie specjalne. Polska Agencja Rozwoju Przedsiębiorczości.</span></p>
<p><span class="fontstyle0">Paterson, J. M. (2022). Misleading AI: Regulatory strategies for algorithmic transparency in technologies augmenting consumer decision-making. </span><span class="fontstyle2">Loyola Consumer Law Review, 34</span><span class="fontstyle0">(3), 558–589. https://doi.org/10.2139/ssrn.4164809</span></p>
<p><span class="fontstyle0">Polish Act on Counteracting Unfair Market Practices (2007). </span><span class="fontstyle2">Ustawa z dnia 23 sierpnia 2007 r. o przeciwdziałaniu nieuczciwym praktykom rynkowym </span><span class="fontstyle0">[Act of 23 August 2007 on Counteracting Unfair Market Practices], </span><span class="fontstyle2">Journal of Laws 2007, No. 171, item 1206, as amended.</span></p>
<p><span class="fontstyle0">Polish Civil Code (1964). </span><span class="fontstyle2">Ustawa z dnia 23 kwietnia 1964 r. – Kodeks cywilny </span><span class="fontstyle0">[Act of 23 April 1964 – Civil Code], </span><span class="fontstyle2">Journal of Laws 1964, No. 16, item 93, as amended.</span></p>
<p><span class="fontstyle0">Polish Competition and Consumer Protection Act (2007). </span><span class="fontstyle2">Ustawa z dnia 16 lutego 2007 r. o ochronie konkurencji i konsumentów </span><span class="fontstyle0">[Act of 16 February 2007 on Competition and Consumer Protection], </span><span class="fontstyle2">Journal of Laws 2007, No. 50, item 331, as amended.</span></p>
<p><span class="fontstyle0">Polish Consumer Rights Act (2014). </span><span class="fontstyle2">Ustawa z dnia 30 maja 2014 r. o prawach konsumenta </span><span class="fontstyle0">[Act of 30 May 2014 on Consumer Rights], </span><span class="fontstyle2">Journal of Laws 2014, item 827, as amended.</span></p>
<p><span class="fontstyle0">Polish Personal Data Protection Act (2018). </span><span class="fontstyle2">Ustawa z dnia 10 maja 2018 r. o ochronie danych osobowych </span><span class="fontstyle0">[Act of 10 May 2018 on the Protection of Personal Data], </span><span class="fontstyle2">Journal of Laws 2018, item 1000, as amended.</span></p>
<p><span class="fontstyle0">Precedence Research. (2025, February 11). </span><span class="fontstyle2">Artificial intelligence (AI) market size, share, and trends 2025 to 2034. </span><span class="fontstyle0">https://www.precedenceresearch.com/artificial-intelligence-market</span></p>
<p><span class="fontstyle0">ProgramistaJava.pl. (2025, April 9). </span><span class="fontstyle2">Prawo a AI – czy maszyna może mieć odpowiedzialność? </span><span class="fontstyle0">[Law and AI: Can a machine bear responsibility?]. https://programistajava.pl/2025/04/09/prawo-a-ai-czy-maszyna-mozemiec-odpowiedzialnosc/</span></p>
<p><span class="fontstyle0">Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). </span><span class="fontstyle2">Official Journal of the European Union, L 119, </span><span class="fontstyle0">1–88 (4 May 2016).</span></p>
<p><span class="fontstyle0">Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). </span><span class="fontstyle2">Official Journal of the European Union, L, </span><span class="fontstyle0">2024 (18 July 2024).</span></p>
<p><span class="fontstyle0">Sandelowski, M. (2000). Whatever happened to qualitative description? </span><span class="fontstyle2">Research in Nursing &amp; Health, 23</span><span class="fontstyle0">(4), 334–340. https://doi.org/10.1002/1098-240X(200008)23:4&lt;334::AID-NUR9&gt;3.0.CO;2-G</span></p>
<p><span class="fontstyle0">Sojkin, B., Bartkowiak, P., &amp; Skuza, A. (2012). Determinants of higher education choices and student satisfaction: The case of Poland. </span><span class="fontstyle2">Higher Education, 63</span><span class="fontstyle0">(5), 565–581. https://doi.org/10.1007/s10734-011-9459-2</span></p>
<p><span class="fontstyle0">Stecyk, A. (2025, June 9). </span><span class="fontstyle2">Manus AI: Rewolucja w tworzeniu prezentacji akademickich i zmiana paradygmatu oceniania w środowisku edukacyjnym </span><span class="fontstyle0">[Manus AI: A revolution in academic presentation creation and a paradigm shift in educational assessment]. </span><span class="fontstyle2">Uniwersytet Szczeciński – AI Blog. </span><span class="fontstyle0">https://ai.usz.edu.pl/2025/06/09/manusai-rewolucja-w-tworzeniu-prezentacji-akademickich-i-zmiana-paradygmatu-oceniania-w-srodowisku-edukacyjnym/</span></p>
<p><span class="fontstyle0">Stecyk, A. (2025, May 6). </span><span class="fontstyle2">Sztuczna inteligencja w edukacji – szansa i wyzwanie </span><span class="fontstyle0">[Artificial intelligence in education – opportunity and challenge]. </span><span class="fontstyle2">Uniwersytet Szczeciński – AI Blog. </span><span class="fontstyle0">https://ai.usz.edu.pl/2025/05/06/sztuczna-inteligencja-w-edukacji-szansa-i-wyzwanie/</span></p>
<p><span class="fontstyle0">Szostek, D., Bar, G., Prabucki, R. T., &amp; Nowakowski, M. (2022). </span><span class="fontstyle2">Zastosowanie sztucznej inteligencji w bankowości – szanse oraz zagrożenia </span><span class="fontstyle0">[The use of artificial intelligence in banking – opportunities and risks]. Program Analityczno-Badawczy Fundacji Warszawski Instytut Bankowości. Warszawa.</span></p>
<p><span class="fontstyle0">Tak Prawnik. (2025, April 30). </span><span class="fontstyle2">Sztuczna inteligencja a przedsiębiorcy: Kto ponosi odpowiedzialność? </span><span class="fontstyle0">[Artificial intelligence and entrepreneurs: Who bears responsibility?]. </span><span class="fontstyle2">Poradnik Przedsiębiorcy. </span><span class="fontstyle0">https://poradnikprzedsiebiorcy.pl/-sztuczna-inteligencja-a-przedsiebiorcy-kto-ponosi-odpowiedzialnosc</span></p>
<p><span class="fontstyle0">Taveira da Fonseca, A., Vaz de Sequeira, E., &amp; Barreto Xavier, L. (2024). Liability for AI-driven systems. In H. Sousa Antunes, P. M. Freitas, A. L. Oliveira, C. Martins Pereira, E. Vaz de Sequeira, &amp; L. Barreto Xavier (Eds.), </span><span class="fontstyle2">Multidisciplinary perspectives on artificial intelligence and the law </span><span class="fontstyle0">(pp. 395–414). Springer. https://doi.org/10.1007/978-3-031-41264-6_21</span></p>
<p><span class="fontstyle0">Terryn, E., &amp; Martos Marquez, S. (2025). AI and consumer protection. In N. A. Smuha (Ed.), </span><span class="fontstyle2">The Cambridge handbook of the law, ethics and policy of artificial intelligence </span><span class="fontstyle0">(pp. 401–418). Cambridge University Press. https://doi.org/10.1017/9781009264844.029</span></p>
<p><span class="fontstyle0">The Guardian. (2019, November 10). </span><span class="fontstyle2">Apple Card issuer investigated after claims of sexist credit checks. </span><span class="fontstyle0">https://www.theguardian.com/technology/2019/nov/10/apple-card-issuer-investigated-after-claims-of-sexist-credit-checks</span></p>
<p><span class="fontstyle0">The Guardian. (2023, February 2). </span><span class="fontstyle2">ChatGPT reaches 100 million users two months after launch. </span><span class="fontstyle0">https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app</span></p>
<p><span class="fontstyle0">Trzaska, K. (2024, June 12). </span><span class="fontstyle2">Wciąż nie wiadomo, kto ponosi odpowiedzialność za szkodę wyrządzoną przez AI </span><span class="fontstyle0">[It is still unclear who bears responsibility for damages caused by artificial intelligence]. </span><span class="fontstyle2">Prawo.pl / Kancelaria Prawna Maciej Panfil i Partnerzy. </span><span class="fontstyle0">https://www.prawo.pl/biznes/szkoda-wyrzadzona-przez-al-ktoponosi-odpowiedzialnosc-,528456.html</span></p>
<p><span class="fontstyle0">United Nations Conference on Trade and Development (UNCTAD). (2024). </span><span class="fontstyle2">Artificial intelligence and consumer protection. </span><span class="fontstyle0">Geneva: United Nations.</span></p>
<p><span class="fontstyle0">Urząd Ochrony Danych Osobowych (UODO). (n.d.). </span><span class="fontstyle2">Sztuczna inteligencja </span><span class="fontstyle0">[Artificial intelligence]. https://uodo.gov.pl/pl/p/sztuczna-inteligencja</span></p>
<p><span class="fontstyle0">Urząd Ochrony Konkurencji i Konsumentów (UOKiK). (2024, March 14). </span><span class="fontstyle2">Wielkie „wymiatanie” złych praktyk w e-commerce </span><span class="fontstyle0">[The great “cleanup” of unfair practices in e-commerce]. https://uokik.gov.pl/wielkiewymiatanie-zlych-praktyk-w-e-commerce</span></p>
<p><span class="fontstyle0">Urząd Ochrony Konkurencji i Konsumentów (UOKiK). (n.d.). </span><span class="fontstyle2">O UOKiK </span><span class="fontstyle0">[About UOKiK]. https://uokik.gov.pl/o-uokik</span></p>
<p><span class="fontstyle0">Villamin, P., Lopez, V., Thapa, D. K., &amp; Cleary, M. (2024). A worked example of qualitative descriptive design: A step-by-step guide for novice and early career researchers. </span><span class="fontstyle2">Journal of Advanced Nursing, 82</span><span class="fontstyle0">(8), 1729–1745. https://doi.org/10.1111/jan.15756</span></p>
<p><span class="fontstyle0">Warchoł-Lewucka, R. (2024, July 29). </span><span class="fontstyle2">Kto ponosi odpowiedzialność, gdy chatbot udzieli błędnej odpowiedzi? </span><span class="fontstyle0">[Who bears responsibility if a chatbot provides misleading or inaccurate information?]. </span><span class="fontstyle2">GSW Gorazda, Świstuń, Wątroba i Partnerzy – Adwokaci i Radcowie Prawni. </span><span class="fontstyle0">https://gsw.com.pl/publikacje/prawo-it/ktoponosi-odpowiedzialnosc-gdy-chatbot-udzieli-blednej-odpowiedzi/</span></p>
<p><span class="fontstyle0">Warszycki, M. (2019). </span><span class="fontstyle2">Wykorzystanie sztucznej inteligencji do predykcji emocji konsumentów </span><span class="fontstyle0">[The use of artificial intelligence for predicting consumer emotions]. </span><span class="fontstyle2">Studia i Prace Kolegium Zarządzania i Finansów, 173, </span><span class="fontstyle0">115–129. Warszawa: Oficyna Wydawnicza SGH.</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Automating the systematic literature review process in management science using artificial intelligence</title>
		<link>https://minib.pl/en/numer/no-2-2025/automating-the-systematic-literature-review-process-in-management-science-using-artificial-intelligence/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Thu, 19 Jun 2025 14:22:33 +0000</pubDate>
				<category><![CDATA[academic writing]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[management]]></category>
		<category><![CDATA[process management]]></category>
		<category><![CDATA[systematic literature review]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=8510</guid>

					<description><![CDATA[1. Introduction Systematic literature reviews (SLR) shape scholarship in many disciplines, functioning as a rigorous method for synthesizing existing primary research. They are particularly important in fields such as the health sciences and management, where the proliferation of publications entails a need for more effective and dependable methods to condense vast bodies of information into...]]></description>
										<content:encoded><![CDATA[<p><strong><span class="fontstyle0" style="font-size: 18pt;">1. Introduction</span></strong></p>
<p><span class="fontstyle2">Systematic literature reviews (SLR) shape scholarship in many disciplines, functioning as a rigorous method for synthesizing existing primary research. They are particularly important in fields such as the health sciences and management, where the proliferation of publications entails a need for more effective and dependable methods to condense vast bodies of information into practical insights (Tantawy et al., 2023; Tsafnat et al., 2013, 2014; Tranfield et al., 2003). The introduction of artificial intelligence (AI) into the SLR process promises to transform and greatly enhance its efficiency and accuracy through automation – especially in repetitive and time-consuming tasks, such as data extraction and synthesis (Clark et al., 2020; Lau, 2019).</span></p>
<p><span class="fontstyle2">The use of AI in SLRs represents more than just a technological advancement; it signifies a shift in the researcher’s role from a traditional examiner of literature to a manager of research processes. In process management, the manager plans, organizes, coordinates, and controls the work (Sommerville et al., 2010), whereas the employees execute the assigned tasks. Transferring this logic to the process of creating a systematic literature review, the researcher, acting as manager, can plan that process, organize the work, coordinate the use of AI applications, and monitor their effects on the outcomes. The AI algorithms carry out the instructions provided by the manager. The whole process remains grounded in the established methodological logic of systematic literature reviews (see Denyer &amp; Tranfield, 2009; Vrontis &amp; Christofi, 2021).</span></p>
<p><span class="fontstyle2">This shift brings both new opportunities and challenges that are redefining the academic research landscape (Vrontis &amp; Christofi, 2021; Wagner et al., 2022). AI tools can quickly become collaborative partners, enabling complex analyses that extend beyond simple automation, even supporting the generation of novel research questions and hypotheses (Saeidnia et al., 2024).</span></p>
<p><span class="fontstyle2">In this paper, we consider the role of AI in the SLR process. AI functions as a collaborator, with the potential to redefine the researcher’s role. Based on a systematic review of the relevant literature, this study explores how AI is currently utilized in SLRs and proposes a framework for future collaboration between humans and AI in academic writing and research. These practical and philosophical considerations highlight the evolving relationship between human researchers and AI technologies.</span></p>
<p><span class="fontstyle2">With the advancement of AI technologies, traditional ideas of authorship and the researcher&#8217;s role in knowledge creation are increasingly being challenged. AI can not only support the research process but also autonomously carry out certain tasks, raising questions about maintaining integrity and accountability in scientific output (Howard, 2024; Masukume, 2024).</span></p>
<p><span class="fontstyle0">This article also discusses the variability and difficulties associated with incorporating AI into management-focused systematic reviews, where the nuanced and contextual aspects of research may pose challenges for automation. The goal is to present a balanced perspective that acknowledges both the potential of AI to improve research methods and the need for researchers to ensure that AI applications align with academic standards and ethical considerations.</span></p>
<p><span class="fontstyle0">Building on this foundation, we formulated the following research question: How can AI support the SLR process in management? This question itself was then addressed through a systematic literature review.</span></p>
<p><span class="fontstyle0">This study adopts a transdisciplinary approach to research methodology, integrating perspectives from management, information science, and technology studies. By exploring how artificial intelligence can be meaningfully embedded in the process of conducting systematic literature reviews, the article addresses not only academic concerns but also the practical needs of external stakeholders – including research institutions, consulting firms, and organizations seeking evidence-based insights. The proposed human–AI collaboration framework encourages more inclusive and participatory models of knowledge creation, potentially involving non-academic actors in the innovation process by enabling faster and more accessible synthesis of research findings. In doing so, the paper aligns with broader efforts to make academic inquiry more responsive, collaborative, and relevant to real-world challenges in business and society.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">2. Research design</span></strong></p>
<p><strong><span class="fontstyle3">Data collection</span></strong></p>
<p><span class="fontstyle0">We conducted searches using commonly accepted search algorithms in the Scopus and Web of Science databases, which contain the largest collections of peer-reviewed academic publications (Glińska &amp; Siemieniako, 2018; Paul &amp; Criado, 2020). We formulated two search queries (one for each database) corresponding to the most common keywords of our basic research concepts, and we followed the database protocols regarding the use of Boolean operators AND, OR, and appropriate truncations (*).</span></p>
<p><span class="fontstyle0">1. (&#8220;Automation&#8221; OR &#8220;Automating&#8221; OR &#8220;Automated&#8221; OR &#8220;Automatic&#8221; </span><span class="fontstyle0">OR &#8220;Automates&#8221; OR &#8220;Mining&#8221;)</span></p>
<p><span class="fontstyle0">2. (“Systematic review*” OR “Systematic Literature Review*”)</span></p>
<p><span class="fontstyle0">3. (“Artificial intelligence” OR “AI”)</span></p>
<p><span class="fontstyle0">This yielded the following query for Scopus, which returned 1,297 studies:</span></p>
<p><span class="fontstyle0">TITLE-ABS-KEY((“Automation” OR “Automating” OR “Automated” OR “Automatic” </span><span class="fontstyle0">OR “Automates” OR “Mining”) AND (“Systematic review*” OR “Systematic Literature Review*”) AND (“Artificial intelligence” OR “AI”)).</span></p>
<p><span class="fontstyle0">On Web of Science, we used the following query, which returned 785 studies:</span></p>
<p><span class="fontstyle0">TS = ((“Automation” OR “Automating” OR “Automated” OR “Automatic” </span><span class="fontstyle0">OR “Automates” OR “Mining”) AND (“Systematic review*” OR “Systematic Literature Review*”) AND (“Artificial intelligence” OR “AI”)).</span></p>
<p><span class="fontstyle0">Together, both queries produced an initial sample of 2,082 studies.</span></p>
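<p><span class="fontstyle0">As an illustration only (this snippet is not part of the original search protocol), the two query strings above can be assembled programmatically from the three keyword groups; a minimal Python sketch:</span></p>

```python
# Illustrative sketch: assemble the Scopus and Web of Science query strings
# from the three keyword groups defined in the text.

AUTOMATION = ["Automation", "Automating", "Automated", "Automatic", "Automates", "Mining"]
REVIEW = ["Systematic review*", "Systematic Literature Review*"]
AI = ["Artificial intelligence", "AI"]

def or_group(terms):
    # Quote each term and join with OR, wrapped in parentheses.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(field_prefix):
    # AND the three OR-groups together inside the database's field wrapper.
    body = " AND ".join(or_group(g) for g in (AUTOMATION, REVIEW, AI))
    return f"{field_prefix}({body})"

scopus_query = build_query("TITLE-ABS-KEY")  # Scopus: title, abstract, keywords
wos_query = build_query("TS = ")             # Web of Science: topic search
```

<p><span class="fontstyle0">Generating both queries from one keyword list keeps the two database searches term-for-term identical, which aids replicability.</span></p>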
<p>&nbsp;</p>
<p><strong><span class="fontstyle2">Data selection</span></strong></p>
<p><span class="fontstyle0">Three inclusion criteria were applied to select which articles to review. Papers had to be scientific in nature, published in peer-reviewed scientific journals, and written in English. This reduced the initial sample to 1,649 papers. Next, two exclusion criteria were introduced during title and abstract screening. We excluded papers that merely mentioned AI automation in SLRs without describing its application, as well as studies focusing solely on specific phases of the SLR process rather than automation or AI in general. These were mainly technical articles not including any broader context or concept. We also eliminated duplicates from the two databases.</span></p>
<p><span class="fontstyle0">Following this exclusion process, 34 publications remained, to which we added four articles found through AI search engines (Elicit and SciSpace). We then conducted a backward citation search on these 38 articles, yielding 17 additional papers (55 in total). Finally, a one-layer forward citation search produced 38 further items: articles, proceedings, preprints, and one doctoral thesis. The final sample consisted of 93 publications, collected as of April 8, 2024.</span></p>
<p><span class="fontstyle0">We chose not to conduct a formal quality assessment due to the emerging nature of the topic. At this nascent stage of the research field, we deemed it more valuable to analyse all available sources to ensure comprehensive coverage.</span></p>
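<p><span class="fontstyle0">The inclusion filtering and cross-database deduplication described above can be sketched in a few lines of Python; the record fields used here (<em>type</em>, <em>peer_reviewed</em>, <em>language</em>, <em>doi</em>) are hypothetical illustrations, not the schema of any particular database export:</span></p>

```python
# Illustrative sketch of the selection step: apply the three inclusion
# criteria, then drop duplicates across the two database exports.

def passes_inclusion(rec):
    # Scientific papers in peer-reviewed journals, written in English.
    return (rec.get("type") == "journal-article"
            and rec.get("peer_reviewed", False)
            and rec.get("language") == "en")

def deduplicate(records):
    # Key on DOI when present, otherwise on the normalized title.
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

sample = [
    {"title": "AI for SLRs", "doi": "10.1/x", "type": "journal-article",
     "peer_reviewed": True, "language": "en"},   # from Scopus
    {"title": "AI for SLRs", "doi": "10.1/x", "type": "journal-article",
     "peer_reviewed": True, "language": "en"},   # same paper, from Web of Science
    {"title": "Ein Literatur-Review", "doi": "10.1/y", "type": "journal-article",
     "peer_reviewed": True, "language": "de"},   # excluded: not in English
]
screened = deduplicate([r for r in sample if passes_inclusion(r)])
```

<p><span class="fontstyle0">Title- and abstract-level exclusions, by contrast, required human judgement and were applied manually.</span></p>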
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8523" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1.jpg" alt="" width="1769" height="1750" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-300x297.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-1024x1013.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-768x760.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-1536x1520.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-figure-1-1320x1306.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">The final set of 93 publications was analysed using thematic analysis. We coded the material to identify recurring themes related to the integration of AI into the SLR process and grouped the themes according to the stages of the review. The analysis revealed that AI tools are used in scoping, research question formulation, literature identification and selection, data extraction, synthesis, and reporting. These findings form the basis for the human researcher–AI collaboration framework we propose.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0">3. Results</span></strong></span></p>
<p><strong><span class="fontstyle2">Systematic literature review as a form of scientific writing in management</span></strong></p>
<p><span class="fontstyle3">A </span><span class="fontstyle4">systematic literature review </span><span class="fontstyle3">(SLR) is a rigorous method for identifying, selecting, evaluating, analysing, and synthesizing existing research findings on a specific topic. It follows a precisely defined and replicable procedure, and its results are transparent and can be verified by other researchers (van Dinter et al., 2021). In contrast to the traditional literature reviews used in empirical articles, SLRs employ detailed criteria for selecting source articles, evaluating their quality, and assessing the applicability of their results in different contexts. They are used to identify research gaps, develop new ideas, and generate comprehensive reviews of the state of the art in specific research fields (Denyer &amp; Tranfield, 2009).</span></p>
<p><span class="fontstyle3">Automation of the SLR process has so far been most widely implemented in the health sciences (Laynor, 2022; Tsafnat et al., 2013, 2014). This trend is reflected in our findings, as more than 70% of the articles in our sample are from that domain. Systematic literature reviews in the health sciences are a comprehensive and scientifically rigorous approach to summarizing existing evidence on a specific topic. As the volume of research publications continues to increase, SLRs help researchers, healthcare providers, and medical practitioners stay informed about the latest evidence and practices (Laynor, 2022).</span></p>
<p><span class="fontstyle3">SLRs in management sciences, although no less important than in the health sciences, are considerably less developed. There therefore remains an unmet need for rigorous synthesis of research findings in the field (Siemieniako et al., 2022) that would provide a comprehensive and relatively unbiased analysis of the existing literature on particular topics in management. SLRs help identify research gaps, inform directions for future research, and reduce the time spent synthesizing existing sources (Denyer &amp; Tranfield, 2009). Scholars have advocated for the use of systematic review methods in management and organizational studies to advance evidence-based management practices (Tranfield et al., 2003). While certain adjustments may be expected in traditional systematic review methodologies to accommodate the unique characteristics of the management field, the benefits of using systematic literature reviews are widely recognized (Tranfield et al., 2003).</span></p>
<p><span class="fontstyle3">Given the significant progress achieved in automating SLRs within the health sciences and their growing importance in management, it is worth exploring how similar automation could be implemented in this context. To address our research question, the following section presents the various phases of SLRs in management sciences and examines the current possibilities of their automation, based on practices in the health sciences.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0">Systematic literature review phases in management</span></strong></p>
<p><span class="fontstyle2">As outlined by Tranfield et al. (2003) and Denyer and Tranfield (2009), the general phases of a systematic literature review in management typically include:</span></p>
<p><span class="fontstyle3">Planning the review</span><span class="fontstyle2">: The researcher plans the review and defines the scope, protocol, and process for conducting the literature review. In this step, the researcher considers which databases and tools to use, what skills are needed, how to allocate time, and how to search for high-quality resources.</span></p>
<p><span class="fontstyle3">Conducting the search</span><span class="fontstyle2">: Next, the researcher collects and selects primary studies that are relevant to the review topic. The researcher performs database searches, screens the citations, assesses the quality of the studies, extracts data, and monitors the activities.</span></p>
<p><span class="fontstyle3">Analyzing &amp; synthesizing the literature</span><span class="fontstyle2">: In the next phase, the researcher correlates the evidence from multiple sources, synthesizes results, and then arranges the data in order to address the research questions.</span></p>
<p><span class="fontstyle3">Reporting the findings</span><span class="fontstyle2">: This final stage involves preparing and disseminating the review results. This includes formatting the main report, reviewing the report, summarizing the findings, discussing limitations, formulating recommendations for policy and practice, and identifying future research areas.</span></p>
<p><span class="fontstyle2">For this study, we adopted the concise and clear procedure developed by Vrontis and Christofi (2021), which also corresponds to the process outlined by Denyer and Tranfield (2009). This procedure consists of the following steps.</span></p>
<p><span class="fontstyle3">Conducting a scoping review</span><span class="fontstyle2">: Scoping analysis defines the boundaries and focus of a research study, systematically determining which studies to include according to established criteria and the timeframe to be covered (Vrontis &amp; Christofi, 2021). The main aim is to develop a comprehensive, structured review of relevant literature. This analysis facilitates mapping the field; identifying the main trends, gaps, and opportunities for theoretical development; and providing solid and reliable evidence for further research. A scoping analysis, therefore, allows researchers to efficiently and effectively assemble, assess, and collate the available literature to inform study objectives and methodologies (Vrontis &amp; Christofi, 2021).</span></p>
<p><span class="fontstyle3">Identifying the research purpose and research question</span><span class="fontstyle2">: In the next step, the researcher identifies the research purpose and research question by defining the scope and the focus of the study. This process follows a comprehensive scoping review, which enhances awareness of gaps, trends, and what is already known on the subject of interest (Pereira et al., 2023). Finally, research questions are formulated based on this preliminary study to meet the review’s overall research objectives.</span></p>
<p><span class="fontstyle2">One effective way to formulate a research question is through the interplay between the researchers and feedback from experts in academia and from the relevant industries <span class="fontstyle0">(Vrontis &amp; Christofi, 2021). Such an iterative process can sharpen the research question so that it better captures the study&#8217;s intent. The research question should be grounded in an understanding of the interface between different variables or concepts under study (Billore et al., 2023).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">At this stage, it is also important to consider the inclusion criteria, regarding what the study will seek to address and what kinds of sources to include (Vrontis &amp; Christofi, 2021). Well-honed inclusion criteria ensure that a research question remains focused and relevant to the set research objectives. Generally, by following a structured methodology, researchers can formulate well-defined research questions in line with the overall research aim.</span></span></p>
<p><span class="fontstyle2">Identifying the research context<span class="fontstyle0">: The research context is the particular setting, condition, or background in which the study takes place. It incorporates the industry under study, participants’ cultural traits, geographical locations, time periods, and all those elements which may have an effect on the research topic or its findings (Vrontis et al., 2020).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">Understanding the research context is therefore crucial for interpreting and generalizing findings, since different contexts may lead researchers to varying outcomes with different implications (Christofi et al., 2017). Researchers usually design their studies with contextual factors in mind to ensure that their findings are relevant and applicable in particular situations (Baima et al., 2020). By examining different research contexts, scholars can gain new insights, refine theories, and enhance their understanding of a particular area of study (Vrontis et al., 2022).</span></span></p>
<p><span class="fontstyle2">Identifying the literature<span class="fontstyle0">: Literature identification is a systematic process of searching for, selecting, and analysing relevant publications and research studies with respect to a given topic or issue. This typically includes assessing the relevance and quality of the literature found and synthesizing key findings into insights about the current state of knowledge on the subject under investigation. Identifying the literature allows researchers to better grasp the theoretical approaches taken and the extant research gaps, trends, and challenges in the respective field of study. In other words, this step enables scholars to map out the current state of the subject and, consequently, to identify gaps and trends in order to support the development of scientific projects (Jain et al., 2022).</span></span></p>
<p><span class="fontstyle2">Selecting the literature<span class="fontstyle0">: In the fifth step, the relevant sources of information – such as research articles, books, and other publications – are selected for inclusion in the study or review. This process requires the setting of clear selection criteria, such as the studies’ research questions, objectives, and the quality of the sources. These criteria help to identify and screen potential sources and, finally, select relevant and high-quality literature to be further studied (Christofi et al., 2017). The systematic methodologies used in conducting literature reviews help researchers ensure a very rigorous and comprehensive selection process for this step of the review (Battisti et al., 2023). Through careful selection,</span> <span class="fontstyle0">researchers build a solid foundation of existing knowledge and findings relevant to their own study.</span></span></p>
<p><span class="fontstyle2">Extracting and synthesizing data<span class="fontstyle0">: Data extraction involves the systematic collection of relevant data from the selected articles or research papers, according to predefined criteria. This includes identifying and recording specific information such as publication details, author details, article type, methods used, key findings, and other relevant data points (Christofi et al., 2021). Data synthesis, by contrast, involves analysing the extracted material to identify patterns, relationships, or common themes in the literature. This stage aims at synthesizing the data from the different sources of information into a coherent framework or model that will then guide further research or provide practical implications (Christofi et al., 2021). This is then followed by thematic analysis to integrate the results into an overall framework, further enabling in-depth understanding of interrelating concepts (Battisti et al., 2023). In general, data synthesis facilitates the generation of meaningful inferences from the literature review and provides directions for future research.</span></span></p>
<p><span class="fontstyle2">Reporting and making recommendations<span class="fontstyle0">: This final stage involves preparing the report and recommendations, which requires summarizing and synthesizing the results of the reviewed studies in a structured and transparent manner. Principal results, themes, and lessons learned from the literature are organized and presented comprehensively. The authors of the review identify gaps in the literature, propose future directions, and offer recommendations for both academics and practitioners based on their analysis of the reviewed studies. The ultimate aim is to contribute valuable insights to the existing knowledge base of the research area and guide further research efforts (Christofi et al., 2017; Pereira et al., 2023).</span></span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2"><span class="fontstyle3">Automation of SLRs in management</span></span></strong></p>
<p><span class="fontstyle2"><span class="fontstyle0">In this section, we illustrate how artificial intelligence tools can be used to automate specific stages of the systematic literature review process. The examples come mainly from health sciences literature, but the same SLR procedures are increasingly being used in management (Denyer &amp; Tranfield, 2009).</span></span></p>
<p><span class="fontstyle2">Scientific automation <span class="fontstyle0">refers to the application of technological instruments and procedures to mechanize and enhance a number of scientific processes related to data collection, analysis, and reporting. Within the context of systematic reviews, it entails the use of software and algorithms to accelerate the review process and to efficiently and accurately synthesize evidence (Lau, 2019). The tasks that can be automated for systematic reviews include literature screening, data extraction, and meta-analysis (Tóth et al., 2023). More generally, science automation aims to improve efficiency, transparency, and</span> <span class="fontstyle0">reproducibility, while reducing costs by taking advantage of better technology and artificial intelligence (Laynor, 2022).</span></span></p>
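<p><span class="fontstyle0">To make the meta-analysis task concrete, the short Python sketch below pools hypothetical study estimates by inverse-variance weighting – the standard fixed-effect model; all effect sizes and variances are invented for illustration.</span></p>

```python
import math

def fixed_effect_pool(effects, variances):
    """Pool study effect sizes with inverse-variance weights (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))              # standard error of the pooled effect
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # 95% confidence interval
    return pooled, se, ci

# Three hypothetical studies: standardized mean differences and their variances
effects = [0.30, 0.45, 0.20]
variances = [0.04, 0.09, 0.02]
pooled, se, ci = fixed_effect_pool(effects, variances)
```

<p><span class="fontstyle0">Automation tools essentially script computations of this kind at scale over the data extracted from included studies.</span></p>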
<p><em><span class="fontstyle2">Scoping analysis</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI algorithms can help automate several tasks within scoping analysis, facilitating the extraction of key information from large bodies of scientific literature – such as author names, affiliations, keywords, citation counts, or topics (Saeidnia et al., 2024). By analysing citation networks, AI systems can identify highly cited and influential papers and reveal the dynamics of scientific knowledge diffusion. They may also predict the potential impact of scientific research based on a variety of factors. Moreover, they may detect and visualize research collaborations through co-authorship networks and publication histories. Applying natural language processing (NLP) techniques can make it easier for researchers to identify emerging trends and topics during the scoping analysis (Saeidnia et al., 2024).</span></span></p>
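<p><span class="fontstyle0">A minimal sketch of the citation-network analysis described above: the fragment ranks papers in a toy network (hypothetical paper IDs) by how often they are cited. Production systems use richer influence measures, such as PageRank-style scores over the full network.</span></p>

```python
# Toy citation network: each paper maps to the (hypothetical) papers it cites
citations = {
    "P1": ["P3"],
    "P2": ["P1", "P3"],
    "P4": ["P1", "P3", "P2"],
}

def rank_by_citations(graph):
    """Rank papers by in-degree (times cited), a simple proxy for influence."""
    counts = {}
    for citing, cited_list in graph.items():
        counts.setdefault(citing, 0)        # ensure uncited papers appear with 0
        for cited in cited_list:
            counts[cited] = counts.get(cited, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

ranking = rank_by_citations(citations)       # most-cited paper first
```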
<p><em><span class="fontstyle2">Identifying the research purpose and research questions</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI can assist researchers in posing research questions by providing data-driven insights and optimized methodologies. It can identify gaps in the available literature, generate hypotheses, and even predict probable correlations or causal relationships. AI tools can, therefore, enhance the brainstorming process with insights drawn from existing trends, historical data, and cross-disciplinary studies that may ultimately set researchers onto new investigative paths (Wagner et al., 2022). Moreover, given AI’s advanced capacity to analyse data faster and more accurately than is humanly possible, it can reveal hidden patterns, correlations, and emerging research trends that enable the researcher to find new directions to pursue (Saeidnia et al., 2024; Tomczyk et al., 2024).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">However, while AI can significantly increase the efficiency of the research process, human judgment and critical thinking remain indispensable for determining which research gaps merit exploration and how they should be addressed (Spillias et al., 2023). While AI can open up ways to fast-track the process of identifying relevant literature and proposing hypotheses, human judgment is necessary for generating meaningful questions through problematization (Wagner et al., 2022).</span></span></p>
<p><em><span class="fontstyle2">Identifying the research context </span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI can also contribute to defining research contexts by generating ideas, reviewing the literature, analysing data, and mapping out collaboration networks (Saeidnia et al., 2024). AI algorithms are able to process large amounts of data to pinpoint underexplored areas within a field (Khalifa &amp; Albadawy, 2024). In this respect, using natural language processing techniques, AI can extract keywords, topics, and trends from scientific publications that may be helpful for the research community to find new directions and emerging areas of focus in the respective domains (Saeidnia et al., 2024). Moreover, AI can contribute to the generation of ideas and hypotheses and to the development of robust designs by proposing relevant research problems as well as methodologies (Khalifa &amp; Albadawy, 2024). It can be applied to predict emerging research trends, identify potential collaborators and influential research networks, and measure the impact and visibility of scientific papers, authors, and journals (Saeidnia et al., 2024).</span></span></p>
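<p><span class="fontstyle0">The keyword-extraction idea can be sketched with a tiny TF-IDF scorer written in plain Python; the abstracts below are invented, and production NLP pipelines would add proper tokenization, stop-word removal, and phrase detection.</span></p>

```python
import math
from collections import Counter

def top_keywords(docs, k=3):
    """Score terms by TF-IDF and return the top-k keywords for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenized for term in set(doc))  # document frequency
    n = len(docs)
    results = []
    for doc in tokenized:
        tf = Counter(doc)
        # smoothed inverse document frequency down-weights ubiquitous terms
        scores = {t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}
        results.append([t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]])
    return results

abstracts = [
    "systematic review automation with machine learning",
    "machine learning for citation network analysis",
    "consumer behaviour in emerging markets",
]
keywords = top_keywords(abstracts)
```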
<p><em><span class="fontstyle2">Identifying literature</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI techniques can identify relevant literature in various ways. Algorithms can distinguish between authors with similar names by considering variables such as institutional affiliations and publication histories (Saeidnia et al., 2024). This guarantees that scholarly work is attributed correctly and also enhances the reliability of bibliometric analysis.</span></span></p>
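<p><span class="fontstyle0">The name-disambiguation idea can be sketched as a simple blocking step on normalized name and affiliation; the records are hypothetical, and real disambiguation additionally weighs publication histories and co-author networks, as noted above.</span></p>

```python
def disambiguate(records):
    """Group publication records into author clusters using normalized name plus
    affiliation as a blocking key (a deliberately simplified heuristic)."""
    clusters = {}
    for rec in records:
        key = (rec["name"].lower().replace(".", "").strip(), rec["affiliation"].lower())
        clusters.setdefault(key, []).append(rec["paper"])
    return clusters

records = [  # hypothetical records with superficially identical author names
    {"name": "J. Smith", "affiliation": "MIT", "paper": "A"},
    {"name": "J. Smith", "affiliation": "Oxford", "paper": "B"},
    {"name": "J Smith",  "affiliation": "MIT", "paper": "C"},
]
clusters = disambiguate(records)  # two distinct authors: MIT (A, C) and Oxford (B)
```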
<p><span class="fontstyle2"><span class="fontstyle0">Researchers are increasingly applying AI techniques such as machine learning (ML) and data mining in bibliometrics in order to predict future publication trends, emerging research areas, and research impact (Saeidnia et al., 2024). AI algorithms can recognize patterns and relationships in large bibliographic datasets, and then deliver critical insights regarding what the scientific enterprise of research may look like in the years to come. Such studies may significantly enhance researchers’ capacity to recognize and remain abreast of key trends and research collaborations.</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">As Saeidnia et al. (2024) observed, AI algorithms can automatically collect bibliographic data from a variety of sources, such as online databases, academic libraries, and digital repositories, and this may save a lot of time and effort for researchers engaged in data collection. AI analysis of citation networks also helps locate influential papers, authors, and journals, highlighting the impact and visibility of research outputs and spotting key trends.</span></span></p>
<p><em><span class="fontstyle2">Selecting literature</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI can facilitate the literature-selection stage through advanced methods for knowledge representation and inference, text manipulation, and learning from large amounts of data. These techniques are particularly useful for tasks that are laborious or repetitive for humans, such as the critical analysis of scientific literature (de la Torre-López et al., 2023). AI tools support the clear specification of problem domains and literature-selection criteria, thus enabling researchers to apply search and selection criteria, save time, and ensure transparency and quality in the literature review (Ngwenyama &amp; Rowe, 2024). AI-based tools can potentially deal with fuzzy, weakly structured, and unstructured data, providing abstraction and semantic meaning-based analysis that can support searching and screening tasks for literature selection (Wagner et al., 2022). Advanced supervised machine learning methods, such as deep learning (DL), are used to automate decisions on the relevance of papers. This relieves researchers of the tedious task of rule-codification and also makes the literature-selection processes more efficient (Wagner et al., 2022). Essentially, AI tools offer capabilities that can be harnessed to advance the effectiveness, efficiency, and accuracy of the literature-selection processes, thus proving instrumental for researchers in their quest to navigate the vast body of literature available in many domains.</span></span></p>
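<p><span class="fontstyle0">As a minimal stand-in for the supervised relevance classifiers described here, the sketch below trains a tiny multinomial Naive Bayes screener on a handful of invented labelled titles; deep-learning screeners follow the same train-then-predict pattern at far larger scale.</span></p>

```python
import math
from collections import Counter

def train_nb(labeled):
    """Train a tiny multinomial Naive Bayes screener on (text, relevant) pairs."""
    counts = {True: Counter(), False: Counter()}
    docs = {True: 0, False: 0}
    for text, label in labeled:
        counts[label].update(text.lower().split())
        docs[label] += 1
    vocab = set(counts[True]) | set(counts[False])
    return counts, docs, vocab

def predict(model, text):
    """Return True if the text is classified as relevant (Laplace smoothing)."""
    counts, docs, vocab = model
    total_docs = docs[True] + docs[False]
    best, best_lp = None, None
    for label in (True, False):
        lp = math.log(docs[label] / total_docs)  # class prior
        total = sum(counts[label].values())
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if best_lp is None or lp > best_lp:
            best, best_lp = label, lp
    return best

training = [  # invented labelled examples
    ("systematic review of management interventions", True),
    ("meta analysis of firm performance studies", True),
    ("recipe for chocolate cake", False),
    ("holiday travel photography tips", False),
]
model = train_nb(training)
decision = predict(model, "review of performance studies")
```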
<p><em><span class="fontstyle2">Extracting and synthesizing data</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">In the data-extraction phase, AI tools can automatically extract information from articles, whether structured through elements of the PICO framework or specific data points, using ML/DL/NLP methods (Santos et al., 2023). AI tools can assist in summarizing and interpreting the extracted information in formats that will enable graphic and statistical synthesis, including the generation of tables, diagrams, and graphs examining between-study heterogeneity, and in updating meta-analyses and related forest plots (Amezcua-Prieto et al., 2020). These capabilities of AI thus support faster data-extraction and synthesis processes in literature reviews, improving efficiency and quality in synthesizing evidence in scholarly research.</span></span></p>
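<p><span class="fontstyle0">A rule-based sketch of the data-extraction step: regular expressions pull a sample size and study design from an invented abstract. The ML/NLP extractors described in the text generalize far beyond such fixed patterns.</span></p>

```python
import re

def extract_fields(abstract):
    """Rule-based extraction of simple data points (sample size, study design)
    from an abstract; a simplified stand-in for ML/DL/NLP extractors."""
    fields = {}
    m = re.search(r"\bn\s*=\s*(\d+)", abstract, re.IGNORECASE)  # e.g. "N = 312"
    if m:
        fields["sample_size"] = int(m.group(1))
    for design in ("randomized controlled trial", "case study", "survey"):
        if design in abstract.lower():
            fields["design"] = design
            break
    return fields

abstract = "We conducted a survey of managers (N = 312) across three industries."
data = extract_fields(abstract)
```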
<p><em><span class="fontstyle2">Reporting and preparing recommendations</span></em></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI-driven tools can contribute significantly to improved manuscript preparation, assisting in such stages as grammar correction, text rewriting, and recommendation generation – often tailored to the users’ individual preferences and writing style (Chemaya &amp; Martin, 2023). AI systems can also automatically identify missing data, synthesize evidence from source studies, and identify topics through automated text clustering (Santos et al., 2023). Moreover, AI algorithms can digest large numbers of scientific publications to retrieve information about author names, affiliations, keywords, or citations, all of which may help researchers gain a better grasp of the publication patterns, underlying research networks, and collaborations in a scientific area (Saeidnia et al., 2024).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">AI-powered recommender systems can be used to recommend relevant scientific websites, online resources, and research collaborations based on user preferences, reading behaviour, and web data (Saeidnia et al., 2024). Natural language processing and machine learning techniques may play a central role in these systems, supporting the analysis of web-based documents, extraction of key information, understanding of research outputs, </span></span><span class="fontstyle2"><span class="fontstyle0">and assessment of impact and visibility of online scientific research (Saeidnia et al., 2024).</span> </span></p>
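<p><span class="fontstyle0">A content-based recommender of the kind described can be sketched as cosine similarity between a reading-history profile and candidate titles; all titles below are hypothetical, and real systems work over full texts and behavioural signals.</span></p>

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(read_titles, candidates, k=2):
    """Rank candidate papers by similarity to the user's reading history."""
    profile = Counter(w for t in read_titles for w in t.lower().split())
    scored = [(cosine(profile, Counter(c.lower().split())), c) for c in candidates]
    return [c for s, c in sorted(scored, reverse=True)[:k]]

history = ["artificial intelligence in marketing", "machine learning for management"]
pool = [
    "deep learning applications in marketing",
    "medieval pottery techniques",
    "intelligence and management research",
]
picks = recommend(history, pool)
```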
<p><span class="fontstyle2"> <span class="fontstyle0">The reviewed literature shows that AI capabilities in data extraction, analysis, and recommendation generation are transforming the process of reporting, explaining, and communicating research findings – bringing a revolution in how academic and research outputs are reported and shared. Table 1 presents a summary of this analysis.</span> </span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8533" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1.jpg" alt="" width="1769" height="2372" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-224x300.jpg 224w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-764x1024.jpg 764w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-768x1030.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-1146x1536.jpg 1146w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-1527x2048.jpg 1527w, https://minib.pl/wp-content/uploads/2025/06/2-2025-07-table-1-1320x1770.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">In summary, this section has demonstrated how artificial intelligence can support the automation of the various phases of systematic literature reviews, thereby answering our core research question. More specifically, we investigated how AI applications implemented in the SLR procedures for health sciences can be applied to management sciences. The SLR procedure adopted here follows the framework proposed by Vrontis and Christofi (2021), who extended that of Denyer and Tranfield (2009).</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0" style="font-size: 18pt;"> 4. Conclusions, Limitations, and Future Research</span></strong></p>
<p>The integration of artificial intelligence into systematic literature reviews represents not merely an evolution, but a revolution – one that challenges the very foundation of academic research. The traditional painstaking process of identifying, analysing, and synthesizing literature is being rapidly overtaken by AI-driven automation, fundamentally shifting the researcher’s role from that of an intellectual labourer to that of a process manager.</p>
<p>To further illustrate the balance between human oversight and machine capability, this transformation can be productively examined in terms of the data–prediction–judgment–action model (Agrawal, Gans, &amp; Goldfarb, 2018). According to this model, AI improves the prediction stage by processing large amounts of information, whereas the stages of judgment and action remain the responsibility of humans. Applied to SLRs, this implies that researchers are not passive supervisors. Rather, they must critically evaluate AI outputs, interpret them, and decide how to integrate them into existing theory. AI can automate the identification of literature and point to potential gaps, yet it cannot replace human judgment in assessing relevance or drawing conclusions. A useful analogy can be found in the military domain, where AI improves predictive capacities but decision-making authority ultimately remains with humans (Agrawal, Gans, &amp; Goldfarb, 2018). This framework thus reinforces our view that AI does not eliminate the researcher’s role. Instead, it redefines it. Researchers remain managers of the process, with their judgment and action ensuring rigor and depth.</p>
<p>However, this transformation is not universally welcomed. While it may lead to improved efficiency, scalability, and precision, one must ask: at what cost? Increased <span class="fontstyle0">reliance on AI threatens to erode the depth of critical engagement with literature, potentially reducing researchers to mere supervisors of algorithms rather than active participants in knowledge creation. Yet AI systems are not neutral; they inherit the biases of their training data, the priorities of their programmers, and the constraints of their algorithms. If left unchecked, these embedded biases could reshape academic discourse in ways we are only beginning to understand.</span></p>
<p><span class="fontstyle0">The present study has a number of limitations, which reflect broader concerns about AI’s role in research. The fact that most extant SLR automation techniques stem from health sciences raises a crucial question: Is management research even compatible with such mechanization? The field of management thrives on context, interpretation, and theoretical nuance – elements that AI, for all its computational power, struggles to grapple with. Applying automation techniques designed for medical trials to a discipline that values qualitative insight may, at best, be an oversimplification and, at worst, an intellectual misstep. Moreover, our reliance on peer-reviewed studies from established databases inadvertently sidelines alternative perspectives and cutting-edge discussions happening outside traditional academic publishing. If AI is trained only on what is deemed “acceptable” by established gatekeepers, are we not reinforcing the very same academic silos that researchers have long criticized? The omission of formal quality assessment further highlights the immaturity of this research area. We have embraced AI before rigorously questioning whether it genuinely improves the research process – or simply accelerates flawed methodologies.</span></p>
<p><span class="fontstyle0">As far as further limitations are concerned, the number of references included in this study could possibly have been larger, but it was the direct outcome of our systematic selection procedure. The final set of publications was determined through predefined keywords and strict inclusion and exclusion criteria, ensuring objectivity and transparency. As a result, the number of sources may have been smaller than expected, but it accurately reflects the available and relevant research within the scope of this emerging field.</span></p>
<p><span class="fontstyle0">The fact that our own study was itself conducted through the systematic literature </span><span class="fontstyle0">review method also invites some brief reflection on this process. We relied on established databases (Scopus and Web of Science) and complemented them with AI-based tools such </span><span class="fontstyle0">as Elicit and SciSpace to identify additional sources. While this approach provided a broad coverage of relevant studies, it also revealed challenges that are characteristic of AI-assisted reviews. For example, integrating results from traditional databases and AI tools required additional effort to ensure consistency and avoid duplication. Furthermore, while AI engines accelerated the retrieval of relevant articles, they sometimes produced results lacking sufficient context or theoretical framing, which required careful human judgment. These experiences confirm our broader argument: AI can support the prediction and data</span> <span class="fontstyle0">retrieval stages, but the stages of judgment and action remain dependent on researchers. By reflecting on our own process, we emphasise the importance of methodological transparency and show that the opportunities and limitations of AI-assisted SLRs are not only conceptual but also practical realities encountered during research.</span></p>
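<p><span class="fontstyle0">The duplication problem encountered when merging database and AI-tool results can be sketched as keying records by DOI when available and by normalized title otherwise; the records and sources shown are hypothetical.</span></p>

```python
def dedupe(records):
    """Merge records from multiple sources, keyed by DOI when present,
    otherwise by whitespace-normalized lowercase title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [  # hypothetical hits from traditional databases and AI search engines
    {"source": "Scopus", "doi": "10.1000/xyz", "title": "AI and SLRs"},
    {"source": "Web of Science", "doi": "10.1000/xyz", "title": "AI and SLRs"},
    {"source": "Elicit", "doi": None, "title": "Screening tools compared"},
    {"source": "Elicit", "doi": None, "title": "screening  tools compared"},
]
unique = dedupe(records)  # the Web of Science and second Elicit records are dropped
```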
<p><span class="fontstyle0">Looking ahead, future research must confront these uncomfortable realities rather than blindly celebrate AI’s capabilities. Instead of merely asking how AI can make SLRs more efficient, we should ask whether AI-assisted reviews do actually produce better knowledge at all. If AI is allowed to dictate research agendas by prioritizing what is most frequently cited, we risk creating an academic echo chamber where innovation is stifled in favour of algorithmic consensus.</span></p>
<p><span class="fontstyle0">The ethical implications are equally alarming. Who takes responsibility when AI-generated literature reviews misrepresent findings or reinforce biases? The obsession with automation must be tempered with a serious conversation about accountability and intellectual integrity. Scholars must resist the temptation to let AI do their thinking for them. The most pressing challenge is not improving AI but ensuring that human researchers remain the architects of inquiry rather than its passive facilitators. The future of AI-driven research is not inevitable – it is a choice. Whether that choice leads to a new era of intellectual empowerment or a hollowing out of academic rigor depends entirely on how critically we engage with this technology now.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">References</span></strong></p>
<p><span class="fontstyle0">Amezcua-Prieto, C., Fernández-Luna, J. M., Huete-Guadix, J. F., Bueno-Cavanillas, A., &amp; Khan, K. S. (2020). Artificial intelligence and automation of systematic reviews in women’s health. </span><span class="fontstyle3">Current Opinion in Obstetrics &amp; Gynecology</span><span class="fontstyle0">, </span><span class="fontstyle3">32</span><span class="fontstyle0">(5), 335–341. https://doi.org/10.1097/GCO.0000000000000643</span></p>
<p><span class="fontstyle0">Battisti, E., Graziano, E. A., Pereira, V., Vrontis, D., &amp; Giovanis, A. (2023). Talent management and firm performance in emerging markets: A systematic literature review and framework. </span><span class="fontstyle3">Management Decision</span><span class="fontstyle0">, </span><span class="fontstyle3">61</span><span class="fontstyle0">(9), 2757–2783. https://doi.org/10.1108/MD-10-2021-1327</span></p>
<p><span class="fontstyle0">Billore, S., Anisimova, T., &amp; Vrontis, D. (2023). Self-regulation and goal-directed behavior: A systematic literature review, public policy recommendations, and research agenda. </span><span class="fontstyle3">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle3">156</span><span class="fontstyle0">. https://doi.org/10.1016/j.jbusres.2022.113435</span></p>
<p><span class="fontstyle0">Chemaya, N., &amp; Martin, D. (2023). Perceptions and detection of AI use in manuscript preparation for academic journals. Preprint. http://arxiv.org/abs/2311.14720</span></p>
<p><span class="fontstyle0">Christofi, M., Leonidou, E., &amp; Vrontis, D. (2017). Marketing research on mergers and acquisitions: A systematic review and future directions. </span><span class="fontstyle3">International Marketing Review</span><span class="fontstyle0">, </span><span class="fontstyle3">34</span><span class="fontstyle0">(5), 629–651. https://doi.org/10.1108/IMR-03-2015-0100</span></p>
<p><span class="fontstyle0">Clark, J., Glasziou, P., Del Mar, C., Bannach-Brown, A., Stehlik, P., &amp; Scott, A. M. (2020). A full systematic review was completed in 2 weeks using automation tools: A case study. </span><span class="fontstyle3">Journal of Clinical Epidemiology</span><span class="fontstyle0">, </span><span class="fontstyle3">121</span><span class="fontstyle0">, 81–90. https://doi.org/10.1016/j.jclinepi.2020.01.008</span></p>
<p><span class="fontstyle0">de la Torre-López, J., Ramírez, A., &amp; Romero, J. R. (2023). Artificial intelligence to automate the systematic review of scientific literature. </span><span class="fontstyle3">Computing</span><span class="fontstyle0">, </span><span class="fontstyle3">105</span><span class="fontstyle0">(10), 2171–2194. https://doi.org/10.1007/s00607-023-01181-x</span></p>
<p><span class="fontstyle0">Denyer, D., &amp; Tranfield, D. (2009). Producing a systematic review. In D. A. Buchanan &amp; A. Bryman (Eds.), </span><span class="fontstyle2">Sage Handbook of Organizational Research Methods </span><span class="fontstyle0">(pp. 671–689). Sage Publications.</span></p>
<p><span class="fontstyle0">Glińska, E., &amp; Siemieniako, D. (2018). Binge drinking in relation to services – Bibliometric analysis of scientific research directions. </span><span class="fontstyle2">Engineering Management in Production and Services</span><span class="fontstyle0">, </span><span class="fontstyle2">10</span><span class="fontstyle0">(1), 45–54.</span></p>
<p><span class="fontstyle0">Howard, F. M., Li, A., Riffon, M. F., Garrett-Mayer, E., &amp; Pearson, A. T. (2024). Characterizing the increase in artificial intelligence content detection in oncology scientific abstracts from 2021 to 2023. </span><span class="fontstyle2">JCO Clinical Cancer Informatics</span><span class="fontstyle0">, </span><span class="fontstyle2">8</span><span class="fontstyle0">, e2400077. https://doi.org/10.1200/CCI.24.00077</span></p>
<p><span class="fontstyle0">Jain, R., Jain, K., Behl, A., Pereira, V., Del Giudice, M., &amp; Vrontis, D. (2022). Mainstreaming fashion rental consumption: A systematic and thematic review of literature. </span><span class="fontstyle2">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle2">139</span><span class="fontstyle0">, 1525–1539. https://doi.org/10.1016/j.jbusres.2021.10.071</span></p>
<p><span class="fontstyle0">Khalifa, M., &amp; Albadawy, M. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. </span><span class="fontstyle2">Computer Methods and Programs in Biomedicine Update</span><span class="fontstyle0">, </span><span class="fontstyle2">5. </span><span class="fontstyle0">https://doi.org/10.1016/j.cmpbup.2024.100145</span></p>
<p><span class="fontstyle0">Lau, J. (2019). Editorial: Systematic review automation thematic series. </span><span class="fontstyle2">Systematic Reviews</span><span class="fontstyle0">, </span><span class="fontstyle2">8</span><span class="fontstyle0">(1). https://doi.org/10.1186/s13643-019-0974-z</span></p>
<p><span class="fontstyle0">Laynor, G. (2022). Can systematic reviews be automated? </span><span class="fontstyle2">Journal of Electronic Resources in Medical Libraries</span><span class="fontstyle0">, </span><span class="fontstyle2">19</span><span class="fontstyle0">(3), 101–106.</span></p>
<p><span class="fontstyle0">Masukume, G. (2024). The impact of AI on scientific literature: A surge in AI-associated words in academic and biomedical writing. medRxiv, June 1, 2024. https://doi.org/10.1101/2024.05.31.24308296</span></p>
<p><span class="fontstyle0">Ngwenyama, O., &amp; Rowe, F. (2024). Should we collaborate with AI to conduct literature reviews? Changing epistemic values in a flattening world. </span><span class="fontstyle2">Journal of the Association for Information Systems</span><span class="fontstyle0">, </span><span class="fontstyle2">25</span><span class="fontstyle0">(1), 122–136. https://doi.org/10.17705/1jais.00869</span></p>
<p><span class="fontstyle0">Paul, J., &amp; Criado, A. R. (2020). The art of writing literature review: What do we know and what do we need to know? </span><span class="fontstyle2">International Business Review</span><span class="fontstyle0">, </span><span class="fontstyle2">29</span><span class="fontstyle0">(4), 101717.</span></p>
<p><span class="fontstyle0">Pereira, V., Hadjielias, E., Christofi, M., &amp; Vrontis, D. (2023). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. </span><span class="fontstyle2">Human Resource Management Review</span><span class="fontstyle0">, </span><span class="fontstyle2">33</span><span class="fontstyle0">(1). https://doi.org/10.1016/j.hrmr.2021.100857</span></p>
<p><span class="fontstyle0">Saeidnia, H. R., Hosseini, E., Abdoli, S., &amp; Ausloos, M. (2024). Unleashing the power of AI: A systematic review of cutting-edge techniques in AI-enhanced scientometrics, webometrics and bibliometrics. </span><span class="fontstyle2">Library Hi Tech</span><span class="fontstyle0">. https://doi.org/10.1108/LHT-10-2023-0514</span></p>
<p><span class="fontstyle0">Santos, Á. O. dos, da Silva, E. S., Couto, L. M., Reis, G. V. L., &amp; Belo, V. S. (2023). The use of artificial intelligence for automating or semi-automating biomedical literature analyses: A scoping review. </span><span class="fontstyle2">Journal of Biomedical Informatics</span><span class="fontstyle0">, </span><span class="fontstyle2">142</span><span class="fontstyle0">. https://doi.org/10.1016/j.jbi.2023.104389</span></p>
<p><span class="fontstyle0">Siemieniako, D., Mitręga, M., &amp; Kubacki, K. (2022). The antecedents to social impact in inter-organizational relationships – A systematic review and future research agenda. </span><span class="fontstyle2">Industrial Marketing Management</span><span class="fontstyle0">, </span><span class="fontstyle2">101</span><span class="fontstyle0">, 191–207. https://doi.org/10.1016/j.indmarman.2021.12.014</span></p>
<p><span class="fontstyle0">Sommerville, J., Craig, N., &amp; Hendry, J. (2010). The role of the project manager: All things to all people? </span><span class="fontstyle2">Structural Survey</span><span class="fontstyle0">, </span><span class="fontstyle2">28</span><span class="fontstyle0">(2), 132–141.</span></p>
<p><span class="fontstyle0">Spillias, S., Andreotta, M., Annand-Jones, R., Boschetti, F., Cvitanovic, C., Duggan, J., Fulton, E., Karcher, D., Paris, C., Shellock, R., &amp; Trebilco, R. (2023). Human-AI collaboration to identify literature for evidence synthesis. Preprint. https://doi.org/10.21203/rs.3.rs-3099291/v1</span></p>
<p><span class="fontstyle0">Tantawy, A., Amankwah-Amoah, J., &amp; Puthusserry, P. (2023). Political ties in emerging markets: A systematic review and research agenda. </span><span class="fontstyle2">International Marketing Review</span><span class="fontstyle0">, </span><span class="fontstyle2">40</span><span class="fontstyle0">(6), 1344–1378. https://doi.org/10.1108/imr-09-2022-0197</span></p>
<p><span class="fontstyle0">Tomczyk, P., Brüggemann, P., Mergner, N., &amp; Petrescu, M. (2024). Exploring AI’s role in literature searching: Traditional methods versus AI-based tools in analyzing topical e-commerce themes. In Francisco J. Martínez-López, Luis F. Martinez, Philipp Brüggemann (Eds.), </span><span class="fontstyle2">Advances in Digital Marketing &amp; eCommerce – 5</span><span class="fontstyle2">th </span><span class="fontstyle2">Annual Conference, 2024 </span><span class="fontstyle0">(pp. 141–148). Springer, Cham. https://doi.org/10.1007/978-3-031-62135-2_15</span></p>
<p><span class="fontstyle0">Tranfield, D., Denyer, D., &amp; Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. </span><span class="fontstyle2">British Journal of Management</span><span class="fontstyle0">, </span><span class="fontstyle2">14</span><span class="fontstyle0">(3), 207–222. https://doi.org/10.1111/1467-8551.00375</span></p>
<p><span class="fontstyle0">Tsafnat, G., Dunn, A., Glasziou, P., &amp; Coiera, E. (2013). The automation of systematic reviews. </span><span class="fontstyle2">BMJ </span><span class="fontstyle0">(Online), </span><span class="fontstyle2">345</span><span class="fontstyle0">(7891). https://doi.org/10.1136/bmj.f139</span></p>
<p><span class="fontstyle0">Tsafnat, G., Glasziou, P., Keen Choong, M., Dunn, A., Galgani, F., &amp; Coiera, E. (2014). Systematic review automation technologies. </span><span class="fontstyle2">Systematic Reviews</span><span class="fontstyle0">, </span><span class="fontstyle2">3</span><span class="fontstyle0">, 1–15. http://www.systematicreviewsjournal.com/content/3/1/74</span></p>
<p><span class="fontstyle0">van Dinter, R., Tekinerdogan, B., &amp; Catal, C. (2021). Automation of systematic literature reviews: A systematic literature review. </span><span class="fontstyle2">Information and Software Technology</span><span class="fontstyle0">, </span><span class="fontstyle2">136</span><span class="fontstyle0">. https://doi.org/10.1016/j.infsof.2021.106589</span></p>
<p><span class="fontstyle0">Vrontis, D., &amp; Christofi, M. (2021). R&amp;D internationalization and innovation: A systematic review, integrative framework and future research directions. </span><span class="fontstyle2">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle2">128</span><span class="fontstyle0">, 812–823. https://doi.org/10.1016/j.jbusres.2019.03.031</span></p>
<p><span class="fontstyle0">Vrontis, D., Hulland, J., Shaw, J. D., Gaur, A., Czinkota, M. R., &amp; Christofi, M. (2022). Guest editorial: Systematic literature reviews in international marketing: From the past to the future and beyond. </span><span class="fontstyle2">International Marketing Review</span><span class="fontstyle0">, </span><span class="fontstyle2">39</span><span class="fontstyle0">(5), 1025–1028. https://doi.org/10.1108/IMR-09-2022-390</span></p>
<p><span class="fontstyle0">Vrontis, D., Leonidou, E., Christofi, M., Kaufmann Hans, R., &amp; Kitchen, P. J. (2020). Intercultural service encounters: A systematic review and a conceptual framework on trust development. </span><span class="fontstyle2">EuroMed Journal of Business</span><span class="fontstyle0">, </span><span class="fontstyle2">16</span><span class="fontstyle0">(3), 306–323. https://doi.org/10.1108/EMJB-03-2019-0044</span></p>
<p><span class="fontstyle0">Wagner, G., Lukyanenko, R., &amp; Paré, G. (2022). Artificial intelligence and the conduct of literature reviews. </span><span class="fontstyle2">Journal of Information Technology</span><span class="fontstyle0">, </span><span class="fontstyle2">37</span><span class="fontstyle0">(2), 209–226. https://doi.org/10.1177/02683962211048201</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>It’s not the AI, it’s us: individual concerns and the challenge of AI adoption in organizations</title>
		<link>https://minib.pl/en/numer/no-2-2025/its-not-the-ai-its-us-individual-concerns-and-the-challenge-of-ai-adoption-in-organizations/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Thu, 19 Jun 2025 12:32:33 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[attitudinal clusters]]></category>
		<category><![CDATA[generative AI]]></category>
		<category><![CDATA[technology acceptance]]></category>
		<category><![CDATA[worries and concerns]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=8513</guid>

					<description><![CDATA[1. Introduction Artificial intelligence (AI) has gained significant attention both in public discourse (e.g., Special Committee on AIDA, 2022; White House, 2022; OECD, 2019) and in academic research (e.g., Gamma &#38; Magistretti, 2025; Querci et al., 2022). Many organizations continue to face challenges, particularly in the early stages of AI implementation (Atsmon et al., 2021)....]]></description>
										<content:encoded><![CDATA[<p><strong><span class="fontstyle0" style="font-size: 18pt;">1. Introduction</span></strong></p>
<p><span class="fontstyle2">Artificial intelligence (AI) has gained significant attention both in public discourse (e.g., Special Committee on AIDA, 2022; White House, 2022; OECD, 2019) and in academic research (e.g., Gamma &amp; Magistretti, 2025; Querci et al., 2022). Many organizations continue to face challenges, particularly in the early stages of AI implementation (Atsmon et al., 2021). One potential reason for this slow adoption is the public’s worries and concerns about AI and other emerging technologies. People remain concerned about AI applications in areas such as facial recognition, driverless cars, and detecting false information on social media (Rainie et al., 2022). While AI-driven innovations promise significant progress, widespread anxiety about their societal impact persists (Schiavo et al., 2024); meanwhile, managers often lack a clear scientific grasp of the conditions required for AI to generate organizational value (Gamma &amp; Magistretti, 2025).</span></p>
<p><span class="fontstyle2">Acceptance of AI is closely related to general acceptance of new technology. The literature on new technology acceptance highlights the social, economic, policy, and ethical challenges that arise with emerging technologies (Dwivedi et al., 2021). New technology acceptance is a complex process influenced by various motivators and inhibitors (Blut &amp; Wang, 2020). Researchers have analyzed this process using various models considering both technology-related factors, such as perceived usefulness, ease of use, and risk (Davis et al., 1989; Hubert et al., 2019), and individual-related factors, including anxiety, uncertainty, hedonic motivation, and emotional responses (Tamilmani et al., 2021). Notably, worries and concerns about new technology may act as significant inhibitors to adoption in organizations (Blut &amp; Wang, 2020). While these traits and feelings have been recognized, existing theories often treat worries and concerns as having only a weak or indirect impact on adoption decisions. Beyond AI as a specific new technology, its diffusion is also embedded within the broader digital transformation driven by information and communication technologies (ICT). The adoption of AI thus reflects not only technical change but also the evolution of organizational ICT capabilities and infrastructures that enable intelligent data use and automation (Chugh et al., 2025; Mariani &amp; Dwivedi, 2024). Positioning AI within the ICT continuum underscores that AI is a critical but integral component of modern digital ecosystems in organizations.</span></p>
<p><span class="fontstyle2">Despite growing public and organizational attention to AI, limited empirical research has systematically examined how individuals differ in their concerns about emerging technologies. Understanding such variation is important because individual perceptions and anxieties shape broader public attitudes and, indirectly, the environment in which organizations adopt AI. Such perceptions may also influence how individuals within organizations respond to AI initiatives, affecting their openness, trust, and readiness for <span class="fontstyle0">change. Therefore, the aim of this paper is to identify and classify patterns of individual worries and concerns about AI and related technologies, and to explore how these attitudinal clusters differ across technology domains.</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">Using data from over 10,000 U.S. adults collected through the Pew Research Center’s American Trends Panel (ATP) survey (Pew Research Center, 2021), this study examines worries and concerns about new technologies, particularly AI. We ask whether individuals worry uniformly about all technologies, or whether their worries vary depending on the specific technology in question. Identifying such differences, and grouping individuals accordingly, may help firms accelerate AI adoption and provide insights into how organizational members perceive AI in practice.</span></span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">2. Literature review</span></strong></p>
<p><strong><span class="fontstyle2">Acceptance of New Technology</span></strong></p>
<p><span class="fontstyle2"><span class="fontstyle0">The acceptance of new technology is often a gradual process, influenced by a variety of factors (Blut &amp; Wang, 2020). In many organizations, adoption takes considerable time, as seen in higher education (Skoumpopoulou et al., 2018), healthcare (Rahimi et al., 2018), media (Youn &amp; Lee, 2019), and the food sector (Siegrist &amp; Hartmann, 2020). In the literature, technology acceptance in organizations is recognized as often depending on individuals’ acceptance. Models like the Technology Acceptance Model (TAM) (Park et al., 2021), Technology Readiness Model (TRM) (Blut &amp; Wang, 2020), and Innovation Diffusion Theory (IDT) (Hubert et al., 2019) highlight the impact of individual traits and perceptions on technology adoption. The Technology Readiness Model (TRM), in addition, includes motivators (innovativeness, optimism) and inhibitors (insecurity, discomfort) (Blut &amp; Wang, 2020). Related frameworks such as the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) similarly integrate affective and risk-related beliefs but often model them as indirect antecedents of behavioral intention (Tamilmani et al., 2021).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">In the literature, both internal factors (e.g., the Technology Readiness Model, or TRM) and external factors (e.g., the Technology Acceptance Model, or TAM) are recognized as shaping individuals’ attitudes toward new technologies. Individuals may hold varying attitudes toward different technologies, with the TRM reflecting a general predisposition toward technology, though it has only limited impact on specific attitudes (Park et al., 2021; Blut &amp; Wang, 2020; Wixom &amp; Todd, 2005). Among the internal factors, individuals’ </span></span><span class="fontstyle2"><span class="fontstyle0">worries and concerns about new technologies often hinder their acceptance. These concerns include privacy and trust issues (Dhagarra et al., 2020), as well as fears of security breaches in online transactions (Mousavizadeh et al., 2016). In this context, AI adoption can be viewed as part of a wider trajectory of ICT innovation and digital transformation,</span> <span class="fontstyle0">where human and organizational readiness play central roles (Chugh et al., 2025). Insecurity and discomfort, in particular, hinder technology acceptance, and the TRM has evolved alongside technological developments (Parasuraman &amp; Colby, 2015). Worries and concerns are not minor factors but often central barriers to adoption.</span></span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2">Organizational Adoption of AI</span></strong></p>
<p><span class="fontstyle2"><span class="fontstyle0">Artificial Intelligence (AI), defined as a machine-based system that generates outputs such as predictions, recommendations, or decisions (White House, 2022; OECD, 2019), has received attention worldwide from governments and organizations (Special Committee on AIDA, 2022). It is projected to contribute $13 trillion to global economic growth by 2030 (AI Commission, 2023), driving competition for global leadership (Special Committee on AIDA, 2022). Applications may include AI-driven chatbots (Morsi, 2023), digital platforms (Gamma &amp; Magistretti, 2025), and process automation (Jha et al., 2019). AI has the potential to enhance organizational performance, with employee productivity serving as a key mediator between AI adoption and performance outcomes (Kassa &amp; Worku, 2025).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">Despite this potential, organizational adoption of AI remains challenging, particularly because it is shaped by organizational members’ attitudes and dispositions. AI’s impact on productivity and innovation differs markedly across organizations (Kim et al., 2025). Individuals’ resistance to change and lack of skills (Romeo &amp; Lacko, 2025), as well as social dynamics among organizational members, including team motivation and leader-follower bonds (Booyse &amp; Scheepers, 2024), may affect the adoption of AI. Individuals’ worries regarding privacy and the collection of personal data (Querci et al., 2022), potential disruptions to social norms (Dwivedi et al., 2021), and concerns such as the perceived immaturity of the technology (Morsi, 2023) often deter adoption. A Qualtrics study (2023) similarly highlights worries about privacy, transparency, and AI’s lack of emotional understanding (Ozsevim, 2023). Managers often lack a grounded understanding of these </span></span><span class="fontstyle2"><span class="fontstyle0">dynamics, limiting their ability to capture AI’s value (Gamma &amp; Magistretti, 2025), while AI may reduce employees’ psychological safety and increase stress (Kim et al., 2025). Recent research on organizational adoption of AI highlights not only technical challenges but also organizational readiness and human-AI collaboration (Raisch &amp; Krakowski, 2021). Understanding how different individuals perceive AI is therefore essential for mapping realistic adoption trajectories.</span></span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2">The Role of Generative AI </span></strong></p>
<p><span class="fontstyle2"><span class="fontstyle0">Generative AI (GenAI) represents a new stage in AI development, using large-scale models to generate novel outputs – such as text, images, or code – beyond the training data (Feuerriegel et al., 2024). It enhances creativity and efficiency (Stokel-Walker &amp; Van</span> <span class="fontstyle0">Noorden, 2023) but also introduces new challenges around governance, trust, and human–AI collaboration (Mariani &amp; Dwivedi, 2024; Romeo &amp; Lacko, 2025), and its adoption requires safeguards addressing risks such as bias, privacy, and intellectual property (IBM Institute for Business Value, 2024; Smith, 2025). While recent studies emphasize GenAI’s potential to facilitate employee performance and corporate innovation (Rana et al., 2024), with employee productivity identified as a key mediating factor (Liu et al., 2025; Kassa &amp; Worku, 2025), GenAI assistants such as ChatGPT, Grok, and DeepSeek also raise concerns that may hinder adoption among organizational members (Monteverde et al., 2025; Hornung &amp; Smolnik, 2021).</span></span></p>
<p><span class="fontstyle2"><span class="fontstyle0">In our study, the AI technologies (e.g., facial recognition, driverless vehicles, brain–computer interfaces, robotic exoskeletons) represent applied or embodied systems whose functional focus differs from that of GenAI, and which many individuals may find both more impressive and more unsettling. Informed by recent discussions on the evolution of AI and Generative AI (e.g., Chugh et al., 2025; Pandy et al., 2025; Rashidi et al., 2025; Reddy et al., 2025; Smith, 2025; Feuerriegel et al., 2024; Mariani &amp; Dwivedi, 2024), Table 1 synthesizes and extends these perspectives to highlight the contrasts most relevant to our study.</span> </span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8526" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-1.jpg" alt="" width="1769" height="1206" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-1.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-1-300x205.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-1-1024x698.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-1-768x524.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-1-1536x1047.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-1-1320x900.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">This contrast clarifies that the type of AI examined in this study represents a parallel branch of AI development, rather than an earlier stage. While sharing core concerns with GenAI, such as privacy, bias, and trust, these systems differ in their forms of interaction and risk emphasis, focusing more on safety, accountability, and real-world consequences than on generated content. Such distinctions suggest the technologies analyzed in our dataset tend to elicit stronger public anxieties and concerns, thereby making individual differences in perception more salient and offering a sharper lens for identifying attitudinal clusters than would be possible with GenAI at this stage. Because the dataset used here (Pew Research Center, 2021) predates the mainstream rise of GenAI, it captures public attitudes toward these applied AI systems that were already provoking strong societal reactions.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2">Research Gap and Hypotheses</span></strong></p>
<p><span class="fontstyle0">While the existing literature suggests that individual concerns often stem from a technology’s features and risks (Park et al., 2021; Blut &amp; Wang, 2020), the most recent research on AI has often mentioned the importance of internal factors such as emotions (Hornung &amp; Smolnik, 2021) and social anxiety (Yuan et al., 2022).</span></p>
<p><span class="fontstyle0">Recent studies have begun to examine individuals’ attitudes and emotional responses toward AI, including both traditional and generative forms, confirming that personal dispositions, perceived risks, and trust strongly influence adoption intentions, including within organizations (e.g., Daly et al., 2025; Montag et al., 2025). Several works in the past two years have also explored the psychological underpinnings of AI resistance and acceptance, such as trust (Daly et al., 2025), existing general attitudes to AI (Montag &amp; Ali, 2025), personality traits (Grassini &amp; Koivisto, 2024; Stein et al., 2024), personal experiences (Grassini &amp; Koivisto, 2024), demographic factors (Kaya et al., 2024), individual perceptions such as self-efficacy and perceived job threat (G. Wang et al., 2025), and social perceptions (C. Wang et al., 2025), as well as broader anxieties and societal concerns, including social anxiety (Yuan et al., 2022) and ethical or governance concerns (Mariani &amp; Dwivedi, 2024; Rashidi et al., 2025).</span></p>
<p><span class="fontstyle0">However, most of these studies rely on small or context-specific samples (e.g., healthcare, education, artwork, or organizational case studies) and do not systematically test </span><span class="fontstyle3">whether distinct clusters of individuals exist </span><span class="fontstyle0">across the general population. This gap highlights the need for a broader, data-driven approach to verify attitudinal heterogeneity in public concerns about AI; our study addresses this need through cluster analysis of a nationally representative dataset. Our research builds on the existing research stream by emphasizing that worries and concerns are not simply by-products of features, but are deeply shaped by individuals’ dispositions. Understanding such heterogeneity is increasingly important for organizations, where individual-level acceptance may shape adoption outcomes. Nevertheless, several important gaps remain.</span></p>
<p><span class="fontstyle0">The first such gap lies in attitudinal heterogeneity. While the TAM/TRM frameworks (e.g., Blut &amp; Wang, 2020) and some demographic segmentation (e.g., Park et al., 2021) exist, most large-scale surveys treat the public as uniform. Surveys on AI adoption often report aggregate percentages (Rainie et al., 2022) but rarely uncover latent clusters of attitudes. What is missing is systematic, cluster-based evidence of how individuals’ worries diverge at scale. The second gap concerns the consistency of individual worries and concerns. Prior studies often examine AI acceptance in narrow contexts such as healthcare, education, or surveillance (Rahimi et al., 2018; Skoumpopoulou et al., 2018). Yet, few have tested whether clusters show stable differences across diverse AI technology domains, or whether their relative concerns follow a consistent directional structure. The third gap involves the individual–organizational link. Organizational studies acknowledge privacy, bias, and intellectual-property worries as barriers to AI adoption (IBM Institute for Business Value, 2024), but these are often discussed at the organizational capability level. The connection between individual dispositions and organizational adoption therefore remains underexplored.</span></p>
<p>&nbsp;</p>
<p><span class="fontstyle0">Drawing on prior research, we advance the following hypotheses:</span></p>
<p><span class="fontstyle0"><strong><span class="fontstyle2">H1: </span></strong>Respondents can be grouped into distinct clusters reflecting different attitudes toward AI and related technologies.</span></p>
<p><span class="fontstyle0"><span class="fontstyle2"><strong>H2:</strong> </span>These clusters differ significantly in their reported concerns across a range of AI- and technology-related variables.</span></p>
<p><span class="fontstyle0"><span class="fontstyle2"><strong>H3:</strong> </span>Cluster differences will exhibit a consistent directional order across domains, rather than varying unpredictably by context.</span></p>
<p>&nbsp;</p>
<p><span class="fontstyle0" style="font-size: 18pt;"><span class="fontstyle3"><strong>3. Methods</strong> </span></span></p>
<p><span class="fontstyle0">To analyze individuals’ opinions on new technologies, particularly AI, we used data from the Pew Research Center’s American Trends Panel (ATP) survey (Pew Research Center, 2021). The ATP is a nationally representative online panel of over 10,000 U.S. adults surveyed on current issues, with questions available in both English and Spanish (Rainie et al., 2022; Keeter, 2019). This study specifically uses data from Wave 99 of the ATP survey, conducted from November 1 to November 7, 2021, with 10,260 U.S. adults as respondents, including residents of Hawaii and Alaska (Rainie et al., 2022).</span></p>
<p><span class="fontstyle0">This dataset is particularly well suited for our study because it is one of the largest nationally representative surveys focusing on AI and emerging technologies, and it contains uniquely detailed attitudinal measures across multiple domains. Respondents came from diverse demographic backgrounds (e.g., age, gender, race/ethnicity, education, income, and region). Pew provides survey weights to align the panel with U.S. population benchmarks; however, in this study we analyzed the unweighted dataset. This choice reflects our focus on relative patterns across clusters of respondents rather than on nationally generalizable point estimates. Accordingly, the results should be interpreted as revealing attitudinal structures within the sample, while acknowledging that weighted distributions would more closely reflect the demographic profile of the U.S. adult population. Basic demographics (e.g., age, gender) closely aligned with U.S. Census benchmarks (United States Census Bureau, 2023), confirming sample reliability (Pew Research Center, 2021). Respondents were members of the general U.S. adult population, not sampled by occupational role or management level. While the survey does not provide organizational subgroups, the findings are nonetheless relevant for organizational research, as employees and managers emerge from the broader public and bring these predispositions into organizations. To identify latent attitudinal clusters (H1), we used two composite variables capturing respondents’ excitement or concern about (a) AI applications (POSNEGAI) and (b) potential human enhancements (POSNEGHE). At the outset of the survey (Pew Research Center, 2021), all participants answered two multi-item questions: one asked how excited or concerned they would be if AI performed six specific types of work, while the other asked about potential new techniques that could change human abilities in six ways. For each item, respondents selected from five options ranging from “very excited” to “very concerned,” with nonresponses coded separately and excluded from analysis. These two question blocks offered a uniquely detailed set of attitudinal measures, each with six items and five-point response options, making them well suited for clustering analysis. Notably, while the first block focused directly on AI applications, the second addressed broader technological changes that may be facilitated by AI. Table 2 summarizes the two multi-item questions that served as clustering inputs for H1:</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8527" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-2.jpg" alt="" width="1769" height="1295" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-2.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-2-300x220.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-2-1024x750.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-2-768x562.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-2-1536x1124.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-2-1320x966.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">To test whether the clusters differed significantly in their overall attitudes (H2), we examined four additional variables measuring general orientations toward technology, science, and AI. These items extend beyond the specific clustering inputs (POSNEGAI and POSNEGHE) and provide a broader attitudinal context, allowing us to validate whether the clusters reflect meaningful differences in respondents’ general views. Two items assessed attitudes toward AI (CNCEXC and ALGFAIR), while two others measured general attitudes toward technology (TECH1) and science (SC1). Because TECH1 appeared only in Form 1 and SC1 only in Form 2, these variables also serve as a robustness check across subsamples. Table 3 below lists the four variables used to test H2 (TECH1, SC1, CNCEXC, and ALGFAIR), which capture broader general attitudes.</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8528" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-3.jpg" alt="" width="1769" height="1224" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-3.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-3-300x208.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-3-1024x709.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-3-768x531.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-3-1536x1063.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-3-1320x913.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">To assess whether cluster differences followed a consistent directional order across domains (H3), we analyzed six items from the Pew Research Center survey (2021) covering widely discussed AI-related technologies. These included three noninvasive applications (social media, facial recognition, and driverless vehicles) and three more invasive or embodied applications (brain-implanted chips, gene editing, and robotic exoskeletons). The survey was split into two forms, with approximately half of respondents answering the first set (noninvasive, Form 1) and the other half the second set (invasive, Form 2). This design enables us to test not only whether clusters differ in their overall orientations toward AI but also whether their relative positions remain consistent across contrasting domains. Table 4 below shows the six domain-specific variables analyzed to assess H3:</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8529" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-4.jpg" alt="" width="1769" height="971" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-4.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-4-300x165.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-4-1024x562.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-4-768x422.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-4-1536x843.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-4-1320x725.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">Note: Some variables (e.g., CNCEXC, BCHIP2) originally used a coding such as 1 = Good idea, 2 = Bad idea, 3 = Not sure. For comparability across items, we recoded them as 1 = Positive, 2 = Neutral, 3 = Negative. This ensured that lower scores consistently reflect more positive attitudes and higher scores more negative attitudes. All ANOVAs and post-hoc tests were conducted on the recoded variables. For all the variables in Table 3 and Table 4, responses representing nonresponse codes (e.g., “99”) were also excluded.</span></p>
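<p>The recoding and exclusion step described in the note can be sketched as follows. This is a minimal illustration with invented response values, not the actual Pew data; the variable names are hypothetical:</p>

```python
# Hypothetical raw responses using the original coding:
# 1 = Good idea, 2 = Bad idea, 3 = Not sure, 99 = nonresponse.
raw = [1, 2, 3, 99, 1, 3, 2, 99]

# Recode so that 1 = Positive, 2 = Neutral ("Not sure"), 3 = Negative,
# dropping nonresponse codes before analysis.
RECODE = {1: 1, 3: 2, 2: 3}
recoded = [RECODE[v] for v in raw if v != 99]
print(recoded)  # [1, 3, 2, 1, 2, 3]
```

After this step, lower scores consistently mean more positive attitudes, so cluster means on different items can be compared on the same scale.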
<p><span class="fontstyle0">As mentioned earlier, each of the two variables for clustering purposes (POSNEGAI and POSNEGHE) comprises six sub-questions (a–f). To facilitate clustering, we averaged these responses to create two composite variables, which were then standardized. We acknowledge that averaging the variables may reduce variance and mask item-level heterogeneity (Hair et al., 2019). However, in this study the averaged variables still produced distinct and interpretable clusters, suggesting that substantive group differences were preserved. Using these scaled variables, we applied k-means clustering to determine the optimal number of clusters. For our cluster analysis, we employed k-means clustering because it is computationally efficient for larger samples, produces non-overlapping clusters that are straightforward to interpret, and is widely applied in marketing and management research (Hair et al., 2019). The elbow method, a widely used approach for this purpose (Johnson &amp; Wichern, 1992), indicated that three clusters best fit the data.</span></p>
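<p>The clustering and elbow step described above can be sketched in a few lines of code. The points below are synthetic stand-ins for the two standardized composites (POSNEGAI, POSNEGHE); the function names, seeds, and group centers are illustrative assumptions, not the authors' code or the Pew data:</p>

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means on 2-D points; returns (labels, total within-cluster sum of squares)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid (squared Euclidean distance).
        labels = [min(range(k),
                      key=lambda j: (p[0] - centroids[j][0]) ** 2 + (p[1] - centroids[j][1]) ** 2)
                  for p in points]
        # Update step: move each centroid to the mean of its assigned points.
        new = []
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                new.append((sum(x for x, _ in members) / len(members),
                            sum(y for _, y in members) / len(members)))
            else:
                new.append(centroids[j])  # keep an empty cluster's centroid in place
        if new == centroids:
            break
        centroids = new
    wss = sum((p[0] - centroids[lab][0]) ** 2 + (p[1] - centroids[lab][1]) ** 2
              for p, lab in zip(points, labels))
    return labels, wss

def best_wss(points, k, restarts=5):
    """Best (lowest) within-cluster SS over several random restarts."""
    return min(kmeans(points, k, seed=s)[1] for s in range(restarts))

# Synthetic stand-in for the two standardized composites: three loose groups
# roughly mimicking "Skeptics", "Cautious", and "Optimists".
rng = random.Random(42)
centers = [(-1.2, -1.1), (0.0, 0.1), (1.2, 1.0)]
points = [(cx + rng.gauss(0, 0.3), cy + rng.gauss(0, 0.3))
          for cx, cy in centers for _ in range(200)]

# Elbow method: compute total within-cluster SS for k = 1..6 and look for
# the sharp bend, which for this synthetic data appears at k = 3.
curve = {k: best_wss(points, k) for k in range(1, 7)}
for k in sorted(curve):
    print(k, round(curve[k], 1))
```

In practice the same procedure is available in standard statistical packages; the sketch only shows how the elbow curve is obtained from the total within-cluster sum of squares as k increases.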
<p><span class="fontstyle0">To assess whether the identified clusters differed significantly in their attitudes toward AI-related technologies, we employed a series of one-way analyses of variance (ANOVAs) (Hair et al., 2019). Each ANOVA tested mean differences across clusters for a given survey item. Where overall F-tests were significant, we conducted Tukey’s Honest Significant Difference (HSD) post-hoc comparisons to identify which cluster means differed from one another while controlling for familywise error. This allowed us to distinguish whether differences followed a linear pattern or whether reversals emerged. In addition to significance testing, we reported effect sizes using eta squared (η²), which quantify the proportion of variance explained by cluster membership. Effect sizes (η²) were calculated for each ANOVA and interpreted following benchmarks recommended by Hair et al. (2019), with values of approximately .01, .06, and .14 indicating small, medium, and large effects, respectively. These effect sizes provide information about the substantive strength of differences beyond statistical significance. This analytic sequence allowed us to (a) identify latent clusters (H1), (b) test their overall attitudinal differences (H2), and (c) assess whether these differences followed a consistent directional structure across domains (H3).</span></p>
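<p>The ANOVA and effect-size computation can be illustrated with a small, hypothetical example. The group scores below are invented for illustration only; the Tukey HSD step is omitted because the studentized range distribution is not in the Python standard library and would normally come from a statistics package:</p>

```python
from statistics import mean

def one_way_anova(groups):
    """One-way ANOVA across groups; returns (F, df_between, df_within, eta_squared)."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    F = (ss_between / df_b) / (ss_within / df_w)
    # Eta squared: proportion of total variance explained by group membership.
    eta_sq = ss_between / (ss_between + ss_within)
    return F, df_b, df_w, eta_sq

# Hypothetical recoded scores (1 = Positive ... 3 = Negative) for three clusters.
skeptics  = [3, 3, 2, 3, 2, 3, 3, 2]
cautious  = [2, 2, 2, 1, 3, 2, 2, 2]
optimists = [1, 1, 2, 1, 1, 2, 1, 1]

F, df_b, df_w, eta_sq = one_way_anova([skeptics, cautious, optimists])
print(f"F({df_b}, {df_w}) = {F:.2f}, eta^2 = {eta_sq:.2f}")  # F(2, 21) = 14.81, eta^2 = 0.59
```

Under the benchmarks cited above (.01 / .06 / .14), this hypothetical η² would count as a large effect, meaning cluster membership explains a substantial share of the variance in the item.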
<p><span class="fontstyle0">The dataset was collected in November 2021, prior to the widespread emergence of generative AI applications such as ChatGPT. Survey items focused on technologies that were prominent in public debate at the time, including facial recognition, driverless cars, brain chip implants, gene editing, and robotic exoskeletons. Accordingly, the results capture attitudes toward these technologies rather than generative AI specifically. This temporal scope should be borne in mind when interpreting the findings.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0"><span class="fontstyle2">4. Results</span></span></strong></span></p>
<p><span class="fontstyle0">We conducted the analysis using R version 4.5.0. As mentioned earlier, we performed a cluster analysis based on POSNEGAI and POSNEGHE (Table 2) to examine whether different clusters exhibit distinct attitudes toward issues related to new technologies. Using the elbow method, the total within-cluster sum of squares declined sharply before stabilizing at three clusters, indicating that three clusters provided the optimal solution. The sample was divided into three clusters based on responses to POSNEGAI (focused on potential AI technologies) and POSNEGHE (focused on potential new technologies in general). Given this classification, differences in responses to these variables are expected.</span></p>
<p><span class="fontstyle0">The k-means analysis yielded three clusters of sizes 2,951, 4,336, and 2,639 respondents, respectively. Table 5 presents descriptive statistics for the two clustering variables (POSNEGAI and POSNEGHE) by cluster. The results show clear differentiation. These distinctions indicate that averaging did not obscure meaningful differences but produced interpretable clusters. </span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8530" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-5.jpg" alt="" width="1769" height="541" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-5.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-5-300x92.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-5-1024x313.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-5-768x235.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-5-1536x470.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-5-1320x404.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">For interpretability, we labeled the clusters according to their dominant attitudinal patterns. Cluster 1 (n = 2,951) scored highest on both sets of concern items and was labeled the &#8220;Skeptics.&#8221; Cluster 3 (n = 2,639) scored lowest on both and was labeled the &#8220;Optimists,&#8221; as these respondents appeared more excited about AI and related technologies. Cluster 2 (n = 4,336) showed moderate scores and was labeled the &#8220;Cautious.&#8221; POSNEGAI captures attitudes toward AI-specific applications, while POSNEGHE reflects attitudes toward potential human enhancements; neither measure covers invasive technologies such as brain chips, gene editing, or exoskeletons. These results provide strong evidence for the existence of three distinct attitudinal clusters, consistent with H1.</span></p>
<p><span class="fontstyle0">To further validate the clustering, we used one-way ANOVAs to compare the clusters on general attitudes toward technology and science (TECH1 and SC1) and toward AI (CNCEXC and ALGFAIR). The results are as follows:</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8531" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-6.jpg" alt="" width="1769" height="1187" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-6.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-6-300x201.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-6-1024x687.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-6-768x515.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-6-1536x1031.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-6-1320x886.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">The three clusters identified from POSNEGAI and POSNEGHE also differ significantly in their responses to the four variables listed, even though the items were split across two questionnaire forms. While POSNEGAI and POSNEGHE do not address invasive technologies, the analyses reported below show that the clusters maintain distinct response patterns even when invasive technologies are considered.</span></p>
<p><span class="fontstyle0">As expected, the clusters differed significantly on CNCEXC (F(2, 9923) = 2046, p &lt; 2 × 10⁻¹⁶). As shown in Table 6, the clusters also differed significantly on ALGFAIR (F(2, 9855) = 817.2, p &lt; 2 × 10⁻¹⁶), TECH1 (F(2, 4984) = 318.4, p &lt; 2 × 10⁻¹⁶), and SC1 (F(2, 4923) = 209.3, p &lt; 2 × 10⁻¹⁶). On all variables, the Optimists scored lowest, the Skeptics highest, and the Cautious fell in between, mirroring the patterns observed for POSNEGAI and POSNEGHE. These results confirm that the clusters not only exist but also differ systematically in their general orientations toward AI, technology, and science, consistent with H2.</span></p>
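<p><span class="fontstyle0">A one-way ANOVA of this kind can be sketched in Python. The paper's analysis was run in R; the scores below are simulated with hypothetical means, so only the degrees of freedom (computed from the reported cluster sizes), not the F values, correspond to the paper.</span></p>

```python
# Illustrative one-way ANOVA comparing the three clusters on a single concern
# item. Scores are simulated (hypothetical group means); the group sizes are
# the reported cluster sizes: 2,951 / 4,336 / 2,639.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Simulated scores following the reported ordering:
# Skeptics highest concern, Cautious in between, Optimists lowest.
skeptics  = rng.normal(3.2, 0.5, 2951)
cautious  = rng.normal(2.4, 0.5, 4336)
optimists = rng.normal(1.6, 0.5, 2639)

f_stat, p_value = f_oneway(skeptics, cautious, optimists)

# Degrees of freedom: k - 1 between groups, N - k within groups.
df_between = 3 - 1
df_within = len(skeptics) + len(cautious) + len(optimists) - 3  # = 9923
```

<p><span class="fontstyle0">When all three clusters answer an item, the within-group degrees of freedom come out at 9,923, matching the F(2, 9923) reported for CNCEXC; the smaller denominators for TECH1 or SC1 reflect items administered on only one of the two questionnaire forms.</span></p>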
<p><span class="fontstyle0">As outlined earlier, we next ran ANOVAs on six domain-specific items covering both noninvasive and invasive technologies (SMALG2, FACEREC2, DCARS2, BCHIP2, GENEV2, EXOV2): social media, facial recognition, driverless vehicles, brain-implanted computer chips, gene editing, and robotic exoskeletons (Pew Research Center, 2021). The results are presented below:</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8532" src="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-7.jpg" alt="" width="1769" height="1427" srcset="https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-7.jpg 1769w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-7-300x242.jpg 300w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-7-1024x826.jpg 1024w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-7-768x620.jpg 768w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-7-1536x1239.jpg 1536w, https://minib.pl/wp-content/uploads/2025/06/2-2025-09-table-7-1320x1065.jpg 1320w" sizes="auto, (max-width: 1769px) 100vw, 1769px" /></p>
<p><span class="fontstyle0">Clusters differed significantly across all six measures: SMALG2 (F(2, 4979) = 152.9, p &lt; 2 × 10⁻¹⁶), FACEREC2 (F(2, 4977) = 40.83, p &lt; 2 × 10⁻¹⁶), DCARS2 (F(2, 4977) = 592.4, p &lt; 2 × 10⁻¹⁶), BCHIP2 (F(2, 4910) = 452.8, p &lt; 2 × 10⁻¹⁶), GENEV2 (F(2, 4917) = 374, p &lt; 2 × 10⁻¹⁶), and EXOV2 (F(2, 4921) = 498.8, p &lt; 2 × 10⁻¹⁶). Across all the variables above, Skeptics reported the highest concern levels, Cautious respondents fell in between, and Optimists consistently scored lowest, similar to the pattern observed in POSNEGAI and POSNEGHE. These findings indicate that the clusters’ relative positions remain stable across diverse and contrasting domains, consistent with H3.</span></p>
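<p><span class="fontstyle0">The stability claim behind H3, namely that the clusters keep the same rank order in every domain, can be checked mechanically. Below is a minimal Python sketch using simulated per-domain cluster means (hypothetical values, not the study's estimates).</span></p>

```python
# Illustrative check of H3-style rank-order stability: across domains, do the
# clusters always order Skeptics > Cautious > Optimists in mean concern?
# The per-domain means are simulated stand-ins, not the study's estimates.
import numpy as np

rng = np.random.default_rng(2)
domains = ["SMALG2", "FACEREC2", "DCARS2", "BCHIP2", "GENEV2", "EXOV2"]
base = {"Skeptics": 3.2, "Cautious": 2.4, "Optimists": 1.6}

orderings = []
for domain in domains:
    # Mean concern per cluster in this domain, with small domain-specific noise.
    means = {cluster: rng.normal(level, 0.05) for cluster, level in base.items()}
    # Clusters ranked from highest to lowest mean concern.
    orderings.append(tuple(sorted(means, key=means.get, reverse=True)))

# Consistency across domains: the same ordering appears everywhere.
consistent = all(order == orderings[0] for order in orderings)
```
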
<p><span class="fontstyle0">Our research suggests that individuals can be meaningfully clustered by their attitudes toward new technology, and these attitudinal differences remain relatively stable across diverse AI-related domains. This consistency indicates that technology attitudes are not merely context-specific reactions but reflect deeper, enduring dispositions. The robustness of these patterns aligns with prior research highlighting the influence of individual characteristics on technology acceptance. For instance, studies have linked personality traits such as openness, conscientiousness, and agreeableness to technology adoption (Fuglsang, 2024; Stein et al., 2024; Park &amp; Woo, 2022; Barnett et al., 2015). However, while earlier work often reported only weak correlations (Fuglsang, 2024; Park &amp; Woo, 2022), our results suggest that individuals’ broader orientations toward new technology manifest as distinct and stable clusters. This implies that dispositional differences – potentially shaped by personality but not reducible to it – underlie how people respond to emerging technologies.</span></p>
<p><span class="fontstyle0">Consistent with H1, the cluster analysis revealed three distinct groups with differing attitudes toward AI. ANOVAs further confirmed significant between-group differences (H2). Moreover, in line with H3, these differences followed a consistent directional order across domains: Skeptics expressed the greatest concerns, Optimists the least, and Cautious respondents fell in between. This suggests that attitudinal clusters not only exist but also exhibit stability across diverse AI-related technologies, reinforcing the view that individuals’ underlying dispositions shape their responses more strongly than contextual variation.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2" style="font-size: 18pt;">5. Discussion</span></strong></p>
<p><strong><span class="fontstyle2">Key Findings and Theoretical Implications </span></strong></p>
<p><span class="fontstyle0">Across a large, nationally representative sample, we identified three stable attitudinal clusters based on concern/excitement composites about AI and human-enhancement technologies. These clusters differed systematically on general evaluations of AI, technology, and science, and the directional ordering (Skeptics &gt; Cautious &gt; Optimists in concerns) persisted across multiple domains of AI (e.g., social media detection, facial recognition, driverless vehicles, brain chips, gene editing, exoskeletons). This pattern supports the view that technology attitudes reflect enduring dispositions rather than context-specific reactions. Whereas most prior studies are context-bound, our cluster analysis of a large public sample documents population-level heterogeneity and shows that the same ordering of clusters appears across qualitatively different technology domains, strengthening the external validity of segmentation approaches for future work.</span></p>
<p><span class="fontstyle0">Classical models (TAM/TRM/UTAUT2) often treat inhibiting emotions (e.g., anxiety, discomfort) and risk beliefs as indirect or weak drivers of adoption (Tamilmani et al., 2021; Parasuraman &amp; Colby, 2015; Davis et al., 1989). Our findings suggest that concern-based dispositions operate more directly, segmenting the population into groups with distinct baseline orientations that persist across contexts. This complements literature about individual traits, particularly individual worries and concerns (e.g., Grassini &amp; Koivisto, 2024; Stein et al., 2024; Blut &amp; Wang, 2020), as well as recent evidence on individual-level factors encompassing demographic, perceptual, and socio-psychological dimensions (e.g., Daly et al., 2025; Montag &amp; Ali, 2025; C. Wang et al., 2025; G. Wang et al., 2025; Kaya et al., 2024; Yuan et al., 2022), by demonstrating that such factors aggregate into stable attitudinal profiles at scale.</span></p>
<p><span class="fontstyle0">Our findings also enrich the broader research on ICT and AI/GenAI. Framing AI within the broader ICT continuum clarifies that acceptance depends not only on technical features but also on organizational ICT readiness and data/analytics infrastructures that shape perceived risks and value realization (Chugh et al., 2025; Mariani &amp; Dwivedi, 2024). Our results show that individual predispositions toward applied or embodied AI (as in this study) form coherent, domain-general profiles that likely carry over as organizations integrate both applied AI and GenAI. Because the technologies analyzed here, such as facial recognition, driverless vehicles, or brain–computer interfaces, directly affect physical, ethical, and personal domains, they evoke more salient attitudinal differences, offering a sharper lens for identifying the underlying structure of individual concerns.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle2">Managerial Implications</span></strong></p>
<p><span class="fontstyle0">Our analysis captures individual-level attitudes in the general population. These are directly relevant for organizations, since employees bring these predispositions into the workplace, and managers must account for heterogeneous employee and consumer attitudes when implementing AI strategies. Our findings suggest that resistance to adoption stems more from psychological predispositions than from technological attributes. While organizations aim for the rapid adoption of emerging technologies such as AI, individual worries and concerns are often overlooked, despite their crucial influence on adoption outcomes (Meuter et al., 2003; Querci et al., 2022). As recognized inhibitors in models like TAM and TRM, these concerns create psychological barriers to adoption (Blut &amp; Wang, 2020; Park et al., 2021). While past studies often treated such factors as indirect influences, our findings show that concerns remain stable across technologies, indicating a more direct impact. Accordingly, organizations should focus less on modifying product features and more on building trust and reducing uncertainty. Targeted engagement strategies such as tailored education, trust-building initiatives, and identifying resistant individuals based on prior technology attitudes may help organizations improve adoption outcomes.</span></p>
<p>&nbsp;</p>
<p><strong><span class="fontstyle0"><span class="fontstyle2">Limitations</span></span></strong></p>
<p><span class="fontstyle0">This study is subject to several limitations. Our dataset captures only concerns, not motivators, so future research should test whether positive drivers of adoption show the same stability across technologies. The analysis also averaged multiple item-level responses into composite variables, which may have masked some heterogeneity, though the resulting clusters remained distinct and interpretable. In addition, while clustering provided meaningful attitudinal profiles, group boundaries are statistical rather than categorical, and individual variation within clusters should be expected. Finally, the data were collected in 2021 in the U.S. context, before the rise of generative AI. Although the findings capture enduring patterns of concern, future work should validate them in updated datasets and cross-cultural settings.</span></p>
<p><span class="fontstyle0">Despite these limitations, our results advance technology adoption research. Prior studies often treated concerns as weak or indirect inhibitors (e.g., Davis et al., 1989; Blut &amp; Wang, 2020; Parasuraman &amp; Colby, 2015), but our analysis demonstrates that they represent enduring dispositions rather than context-specific reactions. By identifying Skeptics, Cautious, and Optimists, we highlight systematic attitudinal heterogeneity beyond aggregate survey percentages. This provides both theoretical and methodological value, while also pointing to future research opportunities to refine cluster-based approaches and extend them internationally.</span></p>
<p>&nbsp;</p>
<p><span class="fontstyle0"><strong><span class="fontstyle2" style="font-size: 18pt;">6. Conclusions </span></strong></span></p>
<p><span class="fontstyle0">This study provides population-level evidence that attitudinal heterogeneity toward AI is structured, stable, and cross-domain. Three clusters (Skeptics, Cautious, Optimists) differ consistently in concern across both applied/embodied AI domains and general orientations toward AI, technology, and science. In line with our hypotheses, the analysis identified three distinct clusters with differing attitudes toward AI (H1) and confirmed significant between-group differences across multiple variables (H2). Importantly, H3 was also supported: cluster differences followed a consistent directional order across domains, with Skeptics expressing the greatest concerns, Optimists the least, and Cautious respondents falling in between. This consistency underscores that individuals&#8217; orientations toward AI remain stable across diverse contexts, reinforcing the role of underlying dispositions in shaping technology attitudes. The findings make theoretical advances and yield actionable managerial implications. The results highlight why organizational adoption hinges not only on technical performance but also on aligning governance, communication, and rollout strategies with personal dispositions. Practically, segmentation and tailored engagement may reduce resistance and accelerate value realization.</span></p>
<p><span class="fontstyle0">Limitations include the focus on concerns rather than motivators, the use of composite measures that may mask item-level nuances, and the pre-GenAI timing of the dataset. These limitations nonetheless point to meaningful avenues for future research using post-2023 data, cross-cultural samples, and designs that connect attitudinal segments with actual adoption behaviors. Overall, the results underscore a central insight: in organizational contexts, the decisive constraint is often not the capability of AI itself but the diversity of human dispositions toward it. Addressing this challenge requires not only strategic and governance-aligned interventions but also a deeper sense of empathy and understanding toward the individuals whose experiences ultimately shape the success of AI adoption.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt;"><strong><span class="fontstyle0"><span class="fontstyle2">References</span></span></strong></span></p>
<p><span class="fontstyle0">AI Commission. (2023). <span class="fontstyle3">Report and recommendations</span>. The Commission on Artificial Intelligence Competitiveness, Inclusion and Innovation. U.S. Chamber of Commerce Technology Engagement Center.</span></p>
<p><span class="fontstyle0">Atsmon, Y., Baroudy, K., Jain, P., Kishore, S., McCarthy, B., Nair, S., &amp; Saleh, T. (2021). Tipping the scales in AI: How leaders capture exponential returns. <span class="fontstyle3">McKinsey &amp; Company Report</span>.</span></p>
<p><span class="fontstyle0">Barnett, T., Pearson, A. W., Pearson, R., &amp; Kellermanns, F. W. (2015). Five-factor model personality traits as predictors of perceived and actual usage of technology. <span class="fontstyle3">European Journal of Information Systems, 24</span>(4), 374–390.</span></p>
<p><span class="fontstyle0">Bedué, P., &amp; Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. <span class="fontstyle3">Journal of Enterprise Information Management</span>, <span class="fontstyle3">35</span>(2), 530–549.</span></p>
<p><span class="fontstyle0">Blut, M., &amp; Wang, C. (2020). Technology readiness: A meta-analysis of conceptualizations of the construct and its impact on technology use. <span class="fontstyle3">Journal of the Academy of Marketing Science</span>, <span class="fontstyle3">48</span>(4), 649–669.</span></p>
<p><span class="fontstyle0">Booyse, D., &amp; Scheepers, C. B. (2024). Barriers to adopting automated organisational decision-making through the use of artificial intelligence. <span class="fontstyle3">Management Research Review</span>, <span class="fontstyle3">47</span>(1), 64–85.</span></p>
<p><span class="fontstyle0">Chugh, R., Turnbull, D., Morshed, A., Sabrina, F., Azad, S., Md Mamunur, R., &amp; Subramani, S. (2025). The promise and pitfalls: A literature review of generative artificial intelligence as a learning assistant in ICT education. <span class="fontstyle3">Computer Applications in Engineering Education</span>, <span class="fontstyle3">33</span>(2), e70002.</span></p>
<p><span class="fontstyle0">Daly, S. J., Wiewiora, A., &amp; Hearn, G. (2025). Shifting attitudes and trust in AI: Influences on organizational AI adoption. <span class="fontstyle3">Technological Forecasting and Social Change</span>, <span class="fontstyle3">215</span>, 124108.</span></p>
<p><span class="fontstyle0">Davis, F. D., Bagozzi, R. P., &amp; Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. <span class="fontstyle3">Management Science</span>, <span class="fontstyle3">35</span>(8), 982–1003.</span></p>
<p><span class="fontstyle0">Dhagarra, D., Goswami, M., &amp; Kumar, G. (2020). Impact of trust and privacy concerns on technology acceptance in healthcare: An Indian perspective. <span class="fontstyle3">International Journal of Medical Informatics</span>, <span class="fontstyle3">141</span>, 104164.</span></p>
<p><span class="fontstyle0">Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., … Williams, M. D. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. <span class="fontstyle3">International Journal of Information Management</span>, <span class="fontstyle3">57</span>, 102–147. </span></p>
<p><span class="fontstyle0">Feuerriegel, S., Hartmann, J., Janiesch, C., Zschech, P., Heinzl, A., &amp; Hund, A. (2024). Generative AI. </span><span class="fontstyle2">Business &amp; Information Systems Engineering</span><span class="fontstyle0">, </span><span class="fontstyle2">66</span><span class="fontstyle0">(2), 111–126.</span></p>
<p><span class="fontstyle0">Fuglsang, S. (2024). What if some people just do not like science? How personality traits relate to attitudes toward science and technology. </span><span class="fontstyle2">Public Understanding of Science</span><span class="fontstyle0">, </span><span class="fontstyle2">33</span><span class="fontstyle0">(5), 623–633.</span></p>
<p><span class="fontstyle0">Gamma, F., &amp; Magistretti, S. (2025). Artificial intelligence in innovation management: A review of innovation capabilities and a taxonomy of AI applications. </span><span class="fontstyle2">Journal of Product Innovation Management</span><span class="fontstyle0">, </span><span class="fontstyle2">42</span><span class="fontstyle0">(1), 76–111.</span></p>
<p><span class="fontstyle0">Gramlich, J. (2025). Q&amp;A: Why and how we compared the public’s views of artificial intelligence with those of AI experts. </span><span class="fontstyle2">Pew Research Center</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Grassini, S., &amp; Koivisto, M. (2024). Understanding how personality traits, experiences, and attitudes shape negative bias toward AI-generated artworks. </span><span class="fontstyle2">Scientific Reports</span><span class="fontstyle0">, </span><span class="fontstyle2">14</span><span class="fontstyle0">(1), 4113.</span></p>
<p><span class="fontstyle0">Hair, J. F., Black, W. C., Babin, B. J., &amp; Anderson, R. E. (2019). </span><span class="fontstyle2">Multivariate data analysis </span><span class="fontstyle0">(8th ed.). Cengage.</span></p>
<p><span class="fontstyle0">Hornung, O., &amp; Smolnik, S. (2021). AI invading the workplace: Negative emotions towards the organizational use of personal virtual assistants. </span><span class="fontstyle2">Electronic Markets</span><span class="fontstyle0">, </span><span class="fontstyle2">32</span><span class="fontstyle0">(1), 123–138.</span></p>
<p><span class="fontstyle0">Hubert, M., Blut, M., Brock, V., Zhang, R. W., Koch, V., &amp; Riedl, R. (2019). The influence of acceptance and adoption drivers on smart home usage. </span><span class="fontstyle2">European Journal of Marketing</span><span class="fontstyle0">, </span><span class="fontstyle2">53</span><span class="fontstyle0">(6), 1073–1098.</span></p>
<p><span class="fontstyle0">IBM Institute for Business Value. (2024). The ingenuity of generative AI: Unlock productivity and innovation at scale. </span><span class="fontstyle2">IBM</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Jha, K., Doshi, A., Patel, P., &amp; Shah, M. (2019). A comprehensive review on automation in agriculture using artificial intelligence. </span><span class="fontstyle2">Artificial Intelligence in Agriculture</span><span class="fontstyle0">, </span><span class="fontstyle2">2</span><span class="fontstyle0">, 1–12.</span></p>
<p><span class="fontstyle0">Johnson, R. A., &amp; Wichern, D. W. (1992). </span><span class="fontstyle2">Applied multivariate statistical analysis</span><span class="fontstyle0">. Prentice Hall.</span></p>
<p><span class="fontstyle0">Kaya, F., Aydin, F., Schepman, A., Rodway, P., Yetişensoy, O., &amp; Demir Kaya, M. (2024). The roles of personality traits, AI anxiety, and demographic factors in attitudes toward artificial intelligence. </span><span class="fontstyle2">International Journal of Human–Computer Interaction</span><span class="fontstyle0">, </span><span class="fontstyle2">40</span><span class="fontstyle0">(2), 497–514.</span></p>
<p><span class="fontstyle0">Kassa, B. Y., &amp; Worku, E. K. (2025). The impact of artificial intelligence on organizational performance: The mediating role of employee productivity. </span><span class="fontstyle2">Journal of Open Innovation: Technology, Market, and Complexity</span><span class="fontstyle0">, </span><span class="fontstyle2">11</span><span class="fontstyle0">, 100474.</span></p>
<p><span class="fontstyle0">Keeter, S. (2019). Growing and improving Pew Research Center’s American Trends Panel. </span><span class="fontstyle2">Pew Research Center</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Kelly, J. (2023). Goldman Sachs predicts 300 million jobs will be lost or degraded by artificial intelligence. </span><span class="fontstyle2">Forbes</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Kim, B. J., Kim, M. J., &amp; Lee, J. (2025). The dark side of artificial intelligence adoption: Linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership. </span><span class="fontstyle2">Humanities and Social Sciences Communications</span><span class="fontstyle0">, </span><span class="fontstyle2">12</span><span class="fontstyle0">, 704.</span></p>
<p><span class="fontstyle0">Liu, Y., Sheng, F., &amp; Liu, R. (2025). Generative AI adoption and employee outcomes: A conservation of resources perspective on job crafting, career commitment, and the moderating role of liking of AI. </span><span class="fontstyle2">Humanities and Social Sciences Communications</span><span class="fontstyle0">, </span><span class="fontstyle2">12</span><span class="fontstyle0">, 1376.</span></p>
<p><span class="fontstyle0">Mariani, M., &amp; Dwivedi, Y. K. (2024). Generative artificial intelligence in innovation management: A preview of future research developments. </span><span class="fontstyle2">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle2">175</span><span class="fontstyle0">, 114542.</span></p>
<p><span class="fontstyle0">Mariani, M. M., Perez-Vega, R., &amp; Wirtz, J. (2022). AI in marketing, consumer research and psychology: A systematic literature review and research agenda. </span><span class="fontstyle2">Psychology and Marketing</span><span class="fontstyle0">, </span><span class="fontstyle2">39</span><span class="fontstyle0">(4), 755–776.</span></p>
<p><span class="fontstyle0">Meuter, M. L., Ostrom, A. L., Bitner, M. J., &amp; Roundtree, R. (2003). The influence of technology anxiety on consumer use experiences with self-service technologies. </span><span class="fontstyle2">Journal of Business Research</span><span class="fontstyle0">, </span><span class="fontstyle2">56</span><span class="fontstyle0">(11), 899–906.</span></p>
<p><span class="fontstyle0">Montag, C., Ali, R., &amp; Davis, K. L. (2025). Affective neuroscience theory and attitudes towards artificial intelligence. </span><span class="fontstyle2">AI &amp; Society</span><span class="fontstyle0">, </span><span class="fontstyle2">40</span><span class="fontstyle0">(1), 167–174.</span></p>
<p><span class="fontstyle0">Montag, C., &amp; Ali, R. (2025). Can we assess attitudes toward AI with single items? Associations with existing attitudes toward AI measures and trust in ChatGPT. </span><span class="fontstyle2">Journal of Technology in Behavioral Science</span><span class="fontstyle0">, 1–11.</span></p>
<p><span class="fontstyle0">Monteverde, G., Cammarota, A., Serafini, L., &amp; Quadri, M. (2025). Are we human or are we voice assistants? Revealing the interplay between anthropomorphism and consumer concerns. </span><span class="fontstyle2">Journal of Marketing Management</span><span class="fontstyle0">, </span><span class="fontstyle2">41</span><span class="fontstyle0">(1–2), 1–25.</span></p>
<p><span class="fontstyle0">Mousavizadeh, M., Kim, D. J., &amp; Chen, R. (2016). Effects of assurance mechanisms and consumer concerns on online purchase decisions: An empirical study. </span><span class="fontstyle2">Decision Support Systems</span><span class="fontstyle0">, </span><span class="fontstyle2">92</span><span class="fontstyle0">, 79–90.</span></p>
<p><span class="fontstyle0">Morsi, S. (2023). Artificial intelligence in electronic commerce: Investigating the customers’ acceptance of using chatbots. </span><span class="fontstyle2">Electronic Commerce Research</span><span class="fontstyle0">, </span><span class="fontstyle2">13</span><span class="fontstyle0">(3), 156–176.</span></p>
<p><span class="fontstyle0">Organization for Economic Cooperation and Development (OECD). (2019). OECD AI principles overview. </span><span class="fontstyle2">OECD</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Ozsevim, I. (2023). Consumer concerns: AI privacy, transparency and emotionality. </span><span class="fontstyle2">AI Magazine</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Pandy, G., Pugazhenthi, V. J., &amp; Murugan, A. (2025). Generative AI: Transforming the landscape of creativity and automation. </span><span class="fontstyle2">International Journal of Computer Applications</span><span class="fontstyle0">, </span><span class="fontstyle2">186</span><span class="fontstyle0">(63), 7–13.</span></p>
<p><span class="fontstyle0">Parasuraman, A., &amp; Colby, C. L. (2015). An updated and streamlined technology readiness index: TRI 2.0. </span><span class="fontstyle2">Journal of Service Research</span><span class="fontstyle0">, </span><span class="fontstyle2">18</span><span class="fontstyle0">(1), 59–74.</span></p>
<p><span class="fontstyle0">Park, S. S., Tung, C. D., &amp; Lee, H. (2021). The adoption of AI service robots: A comparison between credence and experience service settings. </span><span class="fontstyle2">Psychology &amp; Marketing</span><span class="fontstyle0">, </span><span class="fontstyle2">38</span><span class="fontstyle0">(4), 691–703.</span></p>
<p><span class="fontstyle0">Park, J., &amp; Woo, S. E. (2022). Who likes artificial intelligence? Personality predictors of attitudes toward artificial intelligence. </span><span class="fontstyle2">Journal of Psychology</span><span class="fontstyle0">, </span><span class="fontstyle2">156</span><span class="fontstyle0">(1), 68–94.</span></p>
<p><span class="fontstyle0">Păvăloaia, V.-D., &amp; Necula, S.-C. (2023). Artificial intelligence as a disruptive technology – A systematic literature review. </span><span class="fontstyle2">Electronics</span><span class="fontstyle0">, </span><span class="fontstyle2">12</span><span class="fontstyle0">(5), 1102.</span></p>
<p><span class="fontstyle0">Pew Research Center. (2021). </span><span class="fontstyle2">American Trends Panel wave 99 </span><span class="fontstyle0">[Data files and questionnaire].</span></p>
<p><span class="fontstyle0">Qualtrics. (2023). Beyond chatbots, majority of consumers are open to AI in legal, medical or financial matters. </span><span class="fontstyle2">Qualtrics News</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Querci, I., Barbarossa, C., Romani, S., &amp; Ricotta, F. (2022). Explaining how algorithms work reduces consumers&#8217; concerns regarding the collection of personal data and promotes AI technology adoption. </span><span class="fontstyle2">Psychology &amp; Marketing</span><span class="fontstyle0">, </span><span class="fontstyle2">39</span><span class="fontstyle0">(10), 1888–1901.</span></p>
<p><span class="fontstyle0">Rahimi, B., Nadri, H., Afshar, H. L., &amp; Timpka, T. (2018). A systematic review of the technology acceptance model in health informatics. </span><span class="fontstyle2">Applied Clinical Informatics</span><span class="fontstyle0">, </span><span class="fontstyle2">9</span><span class="fontstyle0">(3), 604–634.</span></p>
<p><span class="fontstyle0">Rainie, L., Anderson, J., &amp; Vogels, E. A. (2021). Experts doubt ethical AI design will be broadly adopted as the norm within the next decade. </span><span class="fontstyle2">Pew Research Center</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Rainie, L., Funk, C., Anderson, M., &amp; Tyson, A. (2022). AI and human enhancement: Americans’ openness is tempered by a range of concerns. </span><span class="fontstyle2">Pew Research Center</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Raisch, S., &amp; Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. </span><span class="fontstyle2">Academy of Management Review</span><span class="fontstyle0">, </span><span class="fontstyle2">46</span><span class="fontstyle0">(1), 192–210.</span></p>
<p><span class="fontstyle0">Rana, N. P., Pillai, R., Sivathanu, B., &amp; Malik, N. (2024). Assessing the nexus of generative AI adoption, ethical considerations and organizational performance. </span><span class="fontstyle2">Technovation</span><span class="fontstyle0">, </span><span class="fontstyle2">135</span><span class="fontstyle0">, 103064.</span></p>
<p><span class="fontstyle0">Rashidi, H. H., Pantanowitz, J., Hanna, M. G., Tafti, A. P., Sanghani, P., Buchinsky, A., &amp; Pantanowitz, L. (2025). </span><span class="fontstyle2">Introduction to artificial intelligence and machine learning in pathology and medicine: Generative and nongenerative artificial intelligence basics. Modern Pathology, 38</span><span class="fontstyle0">(4), 100688.</span></p>
<p><span class="fontstyle0">Reddy, P., Ch, K., Sharma, K., Sharma, B., &amp; Sharma, S. (2025). </span><span class="fontstyle2">Evolution of generative artificial intelligence: A review of the developed and developing. Engineered Science, 35</span><span class="fontstyle0">, 1529.</span></p>
<p><span class="fontstyle0">Romeo, E., &amp; Lacko, J. (2025). Adoption and integration of AI in organizations: A systematic review of challenges and drivers towards future directions of research. </span><span class="fontstyle2">Kybernetes, Advance online publication</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Shell, M. A., &amp; Buell, R. W. (2022). Mitigating the negative effects of consumer anxiety through access to human contact (Harvard Business School Working Paper No. 19-089). </span><span class="fontstyle2">Harvard Business School</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Schiavo, G., Businaro, S., &amp; Zancanaro, M. (2024). Comprehension, apprehension, and acceptance: Understanding the influence of literacy and anxiety on acceptance of artificial intelligence. </span><span class="fontstyle2">Technology in Society</span><span class="fontstyle0">, </span><span class="fontstyle2">77</span><span class="fontstyle0">, 102537.</span></p>
<p><span class="fontstyle0">Sidoti, O., Park, E., &amp; Gottfried, J. (2025). About a quarter of U.S. teens have used ChatGPT for schoolwork – double the share in 2023. </span><span class="fontstyle2">Pew Research Center</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Siegrist, M., &amp; Hartmann, C. (2020). Consumer acceptance of novel food technologies. </span><span class="fontstyle2">Nature Food</span><span class="fontstyle0">, </span><span class="fontstyle2">1</span><span class="fontstyle0">(6), 343–350.</span></p>
<p><span class="fontstyle0">Skoumpopoulou, D., Wong, A., Ng, P., &amp; Lo, M. </span><span class="fontstyle2">F</span><span class="fontstyle0">. (2018). Factors that affect the acceptance of new technologies in the workplace: A cross case analysis between two universities. </span><span class="fontstyle2">International Journal of Education and Development Using Information and Communication Technology</span><span class="fontstyle0">, </span><span class="fontstyle2">14</span><span class="fontstyle0">(3), 209–222.</span></p>
<p><span class="fontstyle0">Smith, G. K. (2025). Strategic integration of generative AI: Opportunities, challenges, and organizational impacts. </span><span class="fontstyle2">Law, Economics and Society</span><span class="fontstyle0">, </span><span class="fontstyle2">1</span><span class="fontstyle0">(1), 156–179.</span></p>
<p><span class="fontstyle0">Special Committee on Artificial Intelligence in a Digital Age (AIDA). (2022). </span><span class="fontstyle2">Report on artificial intelligence in a digital age</span><span class="fontstyle0">. European Parliament.</span></p>
<p><span class="fontstyle0">Stein, J. P., Messingschlager, T., Gnambs, T., Hutmacher, </span><span class="fontstyle2">F</span><span class="fontstyle0">., &amp; Appel, M. (2024). Attitudes towards AI: Measurement and associations with personality. </span><span class="fontstyle2">Scientific Reports</span><span class="fontstyle0">, </span><span class="fontstyle2">14</span><span class="fontstyle0">(1), 2909.</span></p>
<p><span class="fontstyle0">Stokel-Walker, C., &amp; Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. </span><span class="fontstyle2">Nature</span><span class="fontstyle0">, </span><span class="fontstyle2">614</span><span class="fontstyle0">(7947), 214–216.</span></p>
<p><span class="fontstyle0">Tamilmani, K., Rana, N. P., Fosso Wamba, S., &amp; Dwivedi, R. (2021). The extended unified theory of acceptance and use of technology (UTAUT2): A systematic literature review and theory evaluation. </span><span class="fontstyle2">International Journal of Information Management</span><span class="fontstyle0">, </span><span class="fontstyle2">57</span><span class="fontstyle0">, 102269.</span></p>
<p><span class="fontstyle0">United States Census Bureau. (2023). </span><span class="fontstyle2">2023 population QuickFacts</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Wang, C., Li, X., Liang, Z., Sheng, Y., Zhao, Q., &amp; Chen, S. (2025). The roles of social perception and AI anxiety in individuals’ attitudes toward ChatGPT in education. </span><span class="fontstyle2">International Journal of Human– Computer Interaction</span><span class="fontstyle0">, </span><span class="fontstyle2">41</span><span class="fontstyle0">(9), 5713–5730.</span></p>
<p><span class="fontstyle0">Wang, G., Obrenovic, B., Gu, X., &amp; Godinic, D. (2025). Fear of the new technology: Investigating the factors that influence individual attitudes toward generative Artificial Intelligence (AI). </span><span class="fontstyle2">Current Psychology</span><span class="fontstyle0">, </span><span class="fontstyle2">44</span><span class="fontstyle0">, 8050–8067.</span></p>
<p><span class="fontstyle0">White House. (2022). </span><span class="fontstyle2">The impact of artificial intelligence on the future of work forces in the European Union and the United States of America</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Wilson, H. J., &amp; Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. </span><span class="fontstyle2">Harvard Business Review</span><span class="fontstyle0">.</span></p>
<p><span class="fontstyle0">Wixom, B. H., &amp; Todd, P. A. (2005). A theoretical integration of user satisfaction and technology acceptance. </span><span class="fontstyle2">Information Systems Research</span><span class="fontstyle0">, </span><span class="fontstyle2">16</span><span class="fontstyle0">(1), 85–102.</span></p>
<p><span class="fontstyle0">Youn, S., &amp; Lee, K.-H. (2019). Proposing value based technology acceptance model: Testing on paid mobile media service. </span><span class="fontstyle2">Fashion and Textiles</span><span class="fontstyle0">, </span><span class="fontstyle2">6</span><span class="fontstyle0">(13), 1–16.</span></p>
<p><span class="fontstyle0">Yuan, C., Zhang, C., &amp; Wang, S. (2022). Social anxiety as a moderator in consumer willingness to accept AI assistants based on utilitarian and hedonic values. </span><span class="fontstyle2">Journal of Retailing and Consumer Services</span><span class="fontstyle0">, </span><span class="fontstyle2">68</span><span class="fontstyle0">, 103101.</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Responsible innovation in e-health care: Empowering patients with emerging technologies</title>
		<link>https://minib.pl/en/numer/no-2-2024/responsible-innovation-in-e-health-care-empowering-patients-with-emerging-technologies/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Fri, 29 Mar 2024 09:30:55 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[e-health]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[management]]></category>
		<category><![CDATA[medicine]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=7997</guid>

					<description><![CDATA[Introduction In the 21st century, the technological world is evolving with increasing rapidity. This is especially true in the field of artificial intelligence (AI), which is transforming markets in revolutionary ways. The aim of this article is to explore the impact of the development of these new AI technologies on medical services and products,...]]></description>
										<content:encoded><![CDATA[<h2>Introduction</h2>
<p>In the 21st century, the technological world is evolving with increasing rapidity. This is especially true in the field of artificial intelligence (AI), which is transforming markets in revolutionary ways. The aim of this article is to explore the impact of the development of these new AI technologies on medical services and products, and to classify them according to patient needs and benefits. We contribute to the literature by demonstrating the added value for the patient, for the healthcare system, and for the physicians (service providers); the interconnectedness of the factors influencing the development of new technologies; and the benefits for key stakeholders. We focus on demonstrating key innovative solutions that enable new functionalities, higher standards of service and improved clinician competence.</p>
<p>The article is both theoretical and practical in nature. Our primary research method is analysis of the literature and of information we collected while managing the project “Implementation of a telemedicine model in the field of cardiology” by the Polish Mother&#8217;s Memorial Hospital – Research Institute (5/NMF/2066/00/62/2023/295), subsidized by the Norwegian Financial Mechanism and the state budget.</p>
<p>First, we consider the theoretical aspects of the empowerment that emerges from new technologies, products and services, and then focus more on AI based technology for healthcare. Next, we propose an original classification of new e-health technologies according to their added value to the main healthcare stakeholders (patients, clinicians, and the healthcare system itself). Then we discuss some of the challenges faced by the implementation of new e-health technologies, products and services, and finally offer some conclusions.</p>
<h2>Empowering new technologies, products and services — theoretical aspects</h2>
<p>The process of harnessing new technologies and products in providing healthcare services is deeply embedded within the healthcare system as a whole. In particular, this involves healthcare providers and the services they provide, aimed at strengthening and improving the health of individuals and societies through disease prevention, early detection, treatment, and rehabilitation. Unfortunately, the healthcare situation in most European countries is expected to deteriorate due to population ageing, price increases, and the increasing complexity of healthcare technologies (Marmot et al., 2012). This entails a demand for a disproportionately high level of financial, material, and human resources.</p>
<p>Hence, meeting the health needs of citizens, and thus ensuring the availability of health-related services, depends primarily on a number of critical factors (Sobiech, 1990, p. 10): the volume of financial resources flowing into the health care system of a given country, the number and qualifications of medical staff, their spatial distribution and efficiency, the availability and application of medical technology and apparatus, and access to medical expertise (know-how) (Bukowska-Piestrzyńska, 2013, p. 66). With healthcare funding becoming more constrained every day, it is becoming increasingly important to look more closely at the processes of purchasing, distributing, and harnessing new technologies. Metrics such as stock coverage, urgent purchases, and non-standard purchases are particularly important in the management of healthcare facilities (Santosa et al., 2022, pp. 1-6).</p>
<p>The integration of AI systems in handling some aspects of communications during the diagnosis and treatment process could prove crucial for patient well-being and thus for the doctor-patient relationship. AI-based technologies, products and services will raise new questions about the limits of usability, cost-effectiveness and the ever-increasing cost of healthcare – above all, the issue of optimality in the creation of new products and services (Ayad et al., 2023).</p>
<p>A promising avenue of opportunities to apply new solutions to this challenge lies in digital healthcare, particularly through the implementation of artificial intelligence. These implementations involve a wide range of technologies. Digital tools are “fine-tuning” the capabilities of medical staff and facilitating a shift towards “consumerized” healthcare. This allows citizens to become more involved in managing their family’s healthcare. Digitalization, however, also brings risks, particularly if the challenges it presents are not adequately understood. If these wonderful technological advances are misused, they may expose society to the “dark side” of digital innovation. For example, if smart homes are not designed with the patient’s needs firmly in mind, but instead for the convenience of the “system,” they may give patients a prison-like experience, with robotic and sensor monitoring and control (Stahl &amp; Cockelberg, 2016). Moreover, while AI can improve doctors’ technical skills to operate new technological solutions, it may also reduce their exposure to varied clinical experience (which in turn may make it more difficult to detect rare and atypical diseases).</p>
<p>Accountability in the health sector therefore raises critical questions: accountability for what, and to whom?</p>
<p>Ramachandran et al. (2015), for instance, reported that as many as 60% of patients with chronic diseases show interest in receiving healthcare via telephone, highlighting a growing demand for e-health services in recent years. The management of chronic diseases is costly for individual patients and their families, as well as for the national health service. Therefore, there is great potential for developing new e-health technologies to improve the management of chronic diseases. Implementing these e-health technologies in healthcare systems can yield significant improvements and facilitate the integration of different aspects of healthcare (Hunt, 2015).</p>
<p>The responsible development of new technologies and bringing innovations to market require the active involvement of all stakeholders, from the very beginning of the innovation process. This helps to accurately identify the needs and priorities of innovation for society (Owen et al., 2012; Stahl et al., 2017). In health care, this means involving patients, carers and other stakeholders in the innovation process, anticipating the risks associated with new solutions, and ensuring that the solutions offered are implemented in a responsible and safe way, with the patient at the forefront (Pawelec, 2022).</p>
<p>New technologies and products in medicine mean smarter, safer, and more patient-centered healthcare services. By improving fit-for-purpose design, efficiency, and effectiveness, they help to reduce errors and shorten the length of hospital stays. The marketing management of healthcare services increasingly focuses on the individual purchaser – a shift emerging in many healthcare organizations thanks in part to new technologies. It should be recognized that there is an important difference between an “ordinary” customer, who can opt out of a purchase, and the patient-consumer of a healthcare service, who relies on medical consultations that directly affect his or her health or life (Białowolski et al., 2012).</p>
<p>Traditionally, patients have often been passive recipients at the endpoint of the service delivery system, rather than active stakeholders. One of the dangers of powerful new technologies is that patients may become even more marginalized, as healthcare is provided and delivered in an increasingly administrative, programmed manner. The doctor may also become more like a robot, carrying out programmed tasks in what could be described as “inhumane services.” The alternative approach places the patient at the center and puts technologies, products, and services at their disposal that allow them to design and control their healthcare based on their own needs. In this, it is important to shift away from seeing patients as a homogeneous group, instead categorizing them as distributed across a spectrum, including:</p>
<p>1) “Informed Users,” who are in a position to use technology with a better understanding;</p>
<p>2) “Engaged Users,” who play an activist role in the wider healthcare system, empowered by technology;</p>
<p>3) “Innovative Users,” who contribute their own ideas based on a deep understanding of healthcare problems.</p>
<h2>Artificial Intelligence based technology for healthcare</h2>
<p>Work on advancing safety in the organization of health technology use underscores that while consumers generally trust mature and complex technologies, such advances often obscure our understanding of the basics of how these technologies operate. We rely on them not because we are unaware of the potential risks, but because we believe that these risks are properly managed both by control procedures and by human oversight (by a physician). For example, we use increasingly advanced medicines without fear, often without fully grasping the complex clinical trial process that validates their safety. Similarly, we consent to robotic surgeries without fear that our health will be compromised (Turpin et al., 2020).</p>
<p>The application of new technologies in healthcare should create new value, which may vary depending on the stakeholder. On the one hand, there are private companies that develop and market a technology, product or service, offering it to patients and hospitals in exchange for payment. This technology or product usually enables new functionality, a higher standard of healthcare, or a higher level of proficiency among doctors. On the other hand, there are hospitals that seek to generate maximum value, provided that does not exceed costs. Value is created when it increases revenue, enables more patients to access services, or allows diseases to be detected more quickly, improving quality of life. Value can also be derived from adhering to new global trends, such as the use of AI (Kulkov, 2021).</p>
<p>Artificial intelligence (AI) in healthcare involves the deployment of advanced mathematical algorithms and computer software to analyze complex medical data. The analysis of large datasets (“big data”) makes it possible to predict the probability of particular medical events. AI-supported programs are able to learn autonomously (machine learning) from the collected data and the analyses performed.</p>
<p>Some of the first medical applications of artificial intelligence emerged in the field of radiology. AI systems are able to automatically assimilate X-ray data from databases containing thousands of images and then use this knowledge to assess a particular case and even evaluate a patient&#8217;s skeletal age (Jankowski, 2018). Physicians from the Department of Radiology, School of Medicine, Stanford University conducted a study that enrolled 33 patients with nonspecific or common interstitial pneumonia. Participants were selected by radiologists with 15 years of experience. The same group of patients was then assessed by an AI algorithm and by two physicians who had attended a one-year training course in the field. The AUC (area under the curve) obtained by the AI was 0.81, indicating strong diagnostic ability. Interestingly, the trained doctors and the algorithm made different diagnostic errors, involving different patients. Such findings suggest the possibility of diminishing the risk associated with human error and of AI collaborating with physicians to further minimize incorrect diagnoses (Depeursinge et al., 2015).</p>
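<p>To make the reported metric concrete, the sketch below computes an AUC from a classifier&#8217;s scores using its rank-statistic definition: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The labels and scores are hypothetical illustrations, not data from the study cited above.</p>

```python
# Illustrative sketch of AUC via its rank-statistic definition.
# The labels and scores below are hypothetical, not the study's data.

def auc(labels, scores):
    """Probability that a random positive case is scored higher than
    a random negative case (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 1, 0, 0, 0, 0]                  # 1 = disease present
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2]  # model confidence
print(auc(labels, scores))  # → 0.8125
```

<p>In practice, the same quantity would be computed over real prediction data, for example with a library routine such as scikit-learn&#8217;s <code>roc_auc_score</code>.</p>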
<p>Artificial intelligence in the field of radiology facilitates the search and analysis process for lesions, and is additionally able to detect the smallest lesions that may have been overlooked by experts (Arbabshirani et al., 2018). Recent studies also show that deep learning can adaptively improve image reconstruction during MRI examinations, leading to shorter scan times and increased quality of the obtained images, and thereby to a higher diagnostic value of the examination performed. Such improvements are particularly notable in images obtained with the FLAIR (fluid-attenuated inversion recovery) MRI sequence, which is commonly used for imaging specific brain structures (Hagiwara et al., 2019).</p>
<p>A significant advantage of AI in healthcare is its potential to relieve doctors of many of their duties, allowing for more patients to be examined. An example of such an application is a study conducted on 154 diabetic patients, which investigated the efficacy of diabetic retinopathy detection based on ocular fundus examinations by the Remidio NM FOP 10, an AI-based device. Results showed concurrence in 85 cases between the device’s assessments and those of ophthalmologists. There were four instances where diabetic retinopathy lesions were identified and 81 cases with no lesions detected. Discrepancies arose in 21 cases, involving poor-quality images. The study revealed that the Remidio NM FOP 10 has a detection accuracy of 80.2%. Additionally, the device can be operated by a trained individual without an ophthalmologist’s direct involvement, potentially increasing the accessibility of preventive measures for individuals with diabetes (Kaczmarek, 2021). Deep learning holds promise for the automatic detection of diabetic retinopathy, offering consistency and precision due to its methodological approach and detailed analysis capabilities.</p>
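<p>The reported 80.2% figure is consistent with the case counts given above: the device agreed with the ophthalmologists in 85 cases and disagreed in 21, and 85 of those 106 gradable comparisons is just over 80%. A minimal check of the arithmetic:</p>

```python
# Reproducing the reported detection accuracy from the case counts
# cited above (85 concordant cases, 21 discrepant cases).
concordant = 85   # device and ophthalmologists agreed
discrepant = 21   # disagreements, attributed to poor-quality images

accuracy = concordant / (concordant + discrepant)
print(f"{accuracy:.1%}")  # → 80.2%
```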
<p>Another example of the application of intelligent algorithms is their use in supporting doctors in Czech medical units during appointments with specialists. Here, the AI system listens to the patient and the doctor during the appointment at the medical facility and then produces a transcription of their dialogue. After a few seconds, the AI generates a report from the visit, capturing the most important information provided by the patient as well as the diagnosis, recommendations, and treatment suggested by the doctor. The specialist can edit the report, adding or removing specific information that the algorithm has generated. This process not only streamlines the visit but also allows for detailed review of previous visits, increasing the potential for seeing more patients and reducing their waiting time.</p>
<p>The methodologies described above have not yet entered standard use. Many systems are still in the testing and observation stage in order to verify their correct functioning. Nevertheless, intelligent algorithms often yield results that are on par with, or sometimes even better than, those achieved by medical experts. The cooperation of AI systems and medical experts can minimize the risk of human error when making a diagnosis. Despite the attractive solutions that AI offers, however, there are several challenges that cannot be overlooked. It is crucial to collect, store and share medical data correctly, in accordance with current regulations. Intelligent algorithms are trained on huge databases, and data of sufficient quality can be difficult to access. The more information AI assimilates, the more precise the final results and diagnoses will be. Ultimately, it is crucial for results generated by AI to be verified and approved by experts in the relevant medical field (Char et al., 2018).</p>
<p>During the COVID-19 pandemic, new technology played an important role in allowing health services to function through increased Internet capabilities. Telemedicine, in particular, has seen significant advancements, catalyzing dynamic changes in the medical field. In addition, a variety of applications have been developed to facilitate the monitoring of patient health, as well as websites providing necessary information for those interested in such innovations. Some of these solutions are spreading globally, making it possible not only to treat patients but also to improve procedures and save lives, thus raising the standard of medical care in the healthcare sector.</p>
<h2>A classification of new e-health technologies according to benefits to the main healthcare stakeholders</h2>
<p>This section of the article explores the emerging importance of these and other cutting-edge technologies and products in healthcare. The multifaceted and highly complex nature of such technologies means that a broad range of traditional healthcare stakeholders must be taken into consideration in the analysis of their implementation. We evaluate the benefits and added value for various groups, including medical institutions, physicians, nurses, medical technicians, distributors, e-health providers, e-health systems managers, and patients. The needs of these stakeholders vary, necessitating tailored solutions that cater to specific requirements.</p>
<p>While some stakeholders are involved in R&amp;D on new technology and products, others function primarily as distributors or supporters, while still others are end-users. The literature on this topic offers various classifications of new technologies and products – notably including Herrmann et al.’s (2018) classification of over 400 different digital health projects and solutions. These were categorized according to their purpose into ten different types: software as a medical device, advanced analytics, artificial intelligence, cloud services, cybersecurity, interoperability, medical devices data systems, mobile medical applications, wireless technologies, and novel digital health solutions. However, this classification primarily focuses on products aimed at healthcare professionals, omitting those designed for the industry, the insurance companies and other stakeholders.</p>
<p>Severika and Ceranic (2020), in contrast, offer a broader classification of new technologies pertaining to healthcare professionals, industries, insurance companies and other stakeholders. Their proposed categories include: lifestyle intervention tools, diagnostics and prevention tools, research and development &amp; production optimization tools, remote tracing tools, clinical decision support tools, telemedicine tools, and workflow tools. The World Health Organization (2018) emphasizes that digital and mobile technologies are increasingly crucial in supporting the needs of health systems.</p>
<p>From a market perspective, new technologies and products that are implemented in medical units should first and foremost add value for patients and physicians. (The economic value of new technologies cannot be overlooked, of course, but it is not the focus of this article.) From this perspective of added value, we propose to segment the new technologies and products into seven categories: wearable devices, mobile applications, remote monitoring systems, technologies based on artificial intelligence algorithms, telemedicine platforms, electronic health records, and 3D printing technologies. These categories underscore the specialized development and implementation needs within medicine and their potential to offer significant value to clinicians and patients. In many cases, technologies and products span multiple categories.</p>
<p>A detailed table of benefits for patients, doctors and the healthcare system is presented in Table 1.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-7971" src="https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-scaled.jpg" alt="" width="953" height="2560" srcset="https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-scaled.jpg 953w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-112x300.jpg 112w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-381x1024.jpg 381w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-768x2063.jpg 768w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-572x1536.jpg 572w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-762x2048.jpg 762w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-4_t-1-1320x3546.jpg 1320w" sizes="auto, (max-width: 953px) 100vw, 953px" /></p>
<p>We will illustrate our classification further with a few examples from its first product group: wearable devices, in which the customer&#8217;s use of the product has medical applications. Several solutions are discussed below to illustrate their importance for patients.</p>
<p>HigoSense has created a device with 5 interchangeable tips that capture images and measurements of given areas of the patient&#8217;s body; it is also equipped with a module for listening to breathing. This allows anyone, anywhere, to collect detailed medical data of similar quality to that obtained by a doctor during direct contact with the patient in the office or during a home visit. Data collection, delivery and sharing are carried out by means of the Higo app, which also supports medical interviews, management of patient history, scheduling of examinations and communication with the doctor. (https://higosense.com/pl/produkt/)</p>
<p>MedApp&#8217;s CarnaLife Holo solution employs a revolutionary technology for the three-dimensional visualization of diagnostic data to assist in the planning and execution of medical procedures. The HoloLens 2 goggles, developed by Microsoft, provide the ability to view 3D holograms of anatomical structures in a real-world environment. The doctor is able to interact with the holograms using gestures and voice commands, rotating, scaling, moving and even entering inside the anatomical structures. The entire process is carried out without the risk of compromising sterility and without the need for additional technical staff. The goggles serve as an interactive screen that can be used for procedure planning anywhere in the operating room, even during the procedure itself. (https://medapp.pl/carnalife-holo/)</p>
<p>telDoc presents an innovative solution, the Virtual Medical Assistant, which provides initial, ad hoc assistance when the patient contacts the medical facility and then refers the patient to the appropriate tests and specialist. During the visit, the doctor receives the test results and the initial medical history, which has been taken by the Virtual Assistant via voice or chat, reducing the time spent on the administrative part of the visit. In addition, the company has created a Virtual Nurse Assistant, which regularly calls patients to ask how they are feeling, collects basic information about the patient&#8217;s vital functions, asks or reminds them to take prescribed medication, suggests contacting a doctor in an alarming situation, and notifies relatives of the patient&#8217;s situation. (https://www.teldoc.eu/projekty)</p>
<p>Nestmedic’s Pregnabit medical device is designed for remote/hybrid KTG monitoring for women from 32 weeks of pregnancy, with indications for examination or hospitalization. The system consists of a mobile KTG device and a Medical Telemonitoring Centre service, where the test results are analyzed by medical experts. The use of specially developed medical algorithms aids doctors and midwives in monitoring and decision-making. (https://nestmedic.com/pregnabit/)</p>
<p>For patients, the main advantages of using modern technology of this sort include reduced waiting time to see a doctor, increased intensity of treatment, a higher level of care resulting in better treatment outcomes, and a better standard of living with chronic diseases thanks to round-the-clock health monitoring. Doctors, in turn, can optimize medical care and gain faster, often immediate access to data in the form of electronic records or images. Artificial intelligence is also entering the operating theatre to assist the doctor, making procedures easier and reducing the number of repeat operations. For the healthcare system, it is the cost implications that matter most: estimating the real costs of new implementations, reducing unit costs as the number of interventions grows, and enabling the detection of new diseases (including rare diseases).</p>
<h2>Challenges for the implementation of new e-health technologies, products and services</h2>
<p>Some of the key challenges include:</p>
<ul>
<li>Data accuracy and reliability: The accuracy and reliability of data collected by medical devices incorporating intelligent technologies are critical to the effective and efficient management of healthcare services. Accountability to patients requires ensuring the timeliness, accuracy, relevance, appropriateness and consistency of the measurements provided by AI devices (Etemadi &amp; Khashei, 2020).</li>
<li>Data security and privacy: Smart technologies generate and transmit sensitive health data, raising concerns about data security and privacy. Protecting personal data and health information from unauthorized access, breaches and misuse is paramount in the development of e-health. Security measures, in line with data protection regulations and using techniques such as encryption, must be implemented to protect patient data (Fatima &amp; Colomo-Palacios, 2018).</li>
<li>Integration: Integration of different smart healthcare technologies and systems is essential for seamless data exchange and collaboration (Shah et al., 2021). However, the challenges of integrating different devices, digital platforms and electronic health record systems can hinder effective data sharing and communication between patients and their doctors or healthcare organizations. One solution to this involves standardization.</li>
<li>Adaptability: Devices, systems and platforms must be adapted to the type of patient, the level of health care reference, and the level of technological development of the organization implementing the new solutions (Chronaki et al., 2004).</li>
<li>User acceptance and involvement: The success of the implementation of intelligent technologies depends on user acceptance and involvement. Patients need to be motivated to make consistent use of these technologies and to take an active part in their own care (Jankowska-Polańska et al., 2014). Clinicians need to follow protocols and monitor the activity and accuracy of patients&#8217; use of the technologies. Overcoming barriers such as unfamiliarity with technology, usability concerns and resistance to change is key to the widespread use of digital technologies.</li>
<li>Legislation and regulation: Regulatory changes need to keep pace with the rapid development of smart technologies. Legislation needs to be put in place to ensure the safe, effective and ethical use of technology in healthcare, particularly artificial intelligence. In addition, reimbursement policies should take into account the value and cost-effectiveness of smart technologies, as this may have an impact on their availability and adoption (Orędziak, 2018).</li>
<li>Accessibility: Ensuring equal access to smart technologies is essential to address inequalities in healthcare. Price, usability and accessibility of new technologies need to be considered (Bokolo, 2021).</li>
<li>Validation: Rigorous clinical trials are needed for smart technologies, especially those based on artificial intelligence. Rigorous scientific research, randomized controlled trials and analysis of real-world data are needed to demonstrate the clinical value and safety of using smart technologies in healthcare. The margin for error in the use of new technologies in medicine, for example, is very small or may not exist at all. This has to do not only with protecting health, but also with protecting life (ICH Guidelines, 2016).</li>
<li>Cost-effectiveness: Cost-effectiveness is an important factor in the introduction of new medical technologies. However, its role in improving quality of life and standards of care should also be emphasized (Trzmielak, 2014).</li>
</ul>
<h2>Conclusions</h2>
<p>In the coming years, medical professionals can expect access to more advanced and highly specialized tools, increasing their competence and capabilities. Continued advances in artificial intelligence (AI) research in medicine are also likely to contribute to the thorough validation of both existing and future systems, which could lead to their widespread adoption. However, the integration of advanced technologies, particularly AI, into healthcare practices represents a significant paradigm shift towards improving patient care and enhancing healthcare delivery systems. Throughout this article, we have explored the multifaceted impact of these technologies, demonstrating how they not only augment clinical practices but also empower patients by offering more personalized and accessible healthcare solutions. The bibliographic analysis and examples discussed herein offer an overview of the practical applications and theoretical implications of AI in healthcare, emphasizing the dual benefit to both clinicians and patients.</p>
<p>Our findings illustrate that AI-driven tools can significantly relieve the workload of healthcare professionals, allowing for the expansion of healthcare services and specializations that cater more directly to patient needs. This not only improves the efficiency of healthcare delivery but also enhances the quality of patient care by enabling more accurate diagnoses and tailored treatment plans. The classification of new e-health technologies that we have proposed herein may serve as a clear framework for understanding the various ways in which these innovations can be implemented to maximize their benefits across different sectors of the healthcare industry.</p>
<p>Moving forward, the continuous advancement and deployment of these technologies necessitates a committed approach to research and validation, ensuring that they meet the highest standards of efficacy and safety. The collaborative acceptance by healthcare professionals and patients is crucial for these technological innovations to be successfully integrated into everyday medical practices. Such acceptance is dependent on clear demonstrations of the improvements these technologies bring to patient outcomes and healthcare workflows.</p>
<p>In conclusion, the successful deployment of AI and other innovative technologies in medicine requires ongoing analysis and adaptation to the evolving needs of the healthcare sector. By aligning these technological advancements with the real-world requirements of both healthcare providers and recipients, we can ensure that they lead to more effective, efficient, and empathetic healthcare services. The promising developments discussed in this article not only highlight the current achievements but also pave the way for future innovations that will continue to transform healthcare.</p>
<h2>Acknowledgements</h2>
<p>The article was funded by the project entitled “Implementation of a telemedicine model in the field of cardiology by Polish Mother&#8217;s Memorial Hospital – Research Institute subsidized by the Norwegian Financial Mechanism and the state budget,” under contract no. 5/NMF/2066/00/62/2023/295.</p>
<h2>References</h2>
<p>Arbabshirani, M. R., Fornwalt, B. K., Mongelluzzo, G. J., Suever, J. D., Geise, B. D., Patel, A. A., &amp; Moore, G. J. (2018). Advanced machine learning in action: Identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. <em>npj Digital Medicine, 1</em>, 9. https://doi.org/10.1038/s41746-017-0015-z</p>
<p>Auwal, F. I., Copeland, C., Clark, E. J., Naraynassamy, C., &amp; McClelland, G. R. (2023, September). A systematic review of models of patient engagement in the development and life cycle management of medicines. <em>Drug Discovery Today, 28</em>(9), 103702. https://doi.org/10.1016/j.drudis.2023.103702</p>
<p>Ayad, N., Schwendicke, F., Krois, J., van den Bosch, S., Bergé, S., Bohner, L., Hanisch, M., &amp; Vinayahalingam, S. (2023, December). Patients’ perspectives on the use of artificial intelligence in dentistry: A regional survey. <em>Head &amp; Face Medicine, 19</em>, Article 23. https://doi.org/10.1186/s13005-023-00368-z</p>
<p>Białowolski, P., Grabowska, I., Kotowska, I., Strzelecki, P., &amp; Węziak-Białowolska, D. (2012). <em>Social Diagnosis 2011: The objective and subjective quality of life in Poland</em> (J. Czapiński &amp; T. Panek, Eds.). The Council for Social Monitoring, Warsaw.</p>
<p>Bokolo, A. Jnr. (2020). Application of telemedicine and eHealth technology for clinical services in response to the COVID-19 pandemic. <em>Health and Technology, 11</em>, 359–366. https://doi.org/10.1007/s12553-020-00516-4</p>
<p>Bukowska-Piestrzyńska, A. (2013). Zmiany w systemie opieki zdrowotnej a dostępność usług stomatologicznych w Polsce w XXI wieku [Changes in the healthcare system and the availability of dental services in Poland in the 21st century]. Vol. <em>XIV</em>(10), pt. I. [in Polish]</p>
<p>Char, D. S., Shah, N. H., &amp; Magnus, D. (2018). Implementing machine learning in health care — Addressing ethical challenges. <em>New England Journal of Medicine, 378</em>(11), 981–983. https://doi.org/10.1056/NEJMp1714229</p>
<p>Chronaki, C. E., Lelis, P., Chiarugi, F., Trypakis, D., Moumouris, K., Stavrakis, H., Kavlentakis, G., Stathiakis, N., Tsiknakis, M., &amp; Orphanoudakis, S. C. (2004). An open eHealth platform for health management using adaptable service profiles. International Congress Series, <em>1268.</em> https://doi.org/10.1016/j.ics.2004.03.201</p>
<p>Depeursinge, A., Chin, A. S., Leung, A. N., Terrone, D., Bristow, M., Rosen, G., &amp; Rubin, D. L. (2015). Automated classification of usual interstitial pneumonia using regional volumetric texture analysis in high-resolution computed tomography. <em>Investigative Radiology, 50</em>(4), 261–267. https://doi.org/10.1097/RLI.0000000000000127</p>
<p>Etemadi, S., &amp; Khashei, M. (2020). Data accuracy and reliability. <em>Computers in Biology and Medicine, 141.</em> https://doi.org/10.1016/j.compbiomed.2021.105138</p>
<p>Fatima, A., &amp; Colomo-Palacios, R. (2018). Security aspects in healthcare information systems: A systematic mapping. <em>Procedia Computer Science, 138</em>, 12–19. https://doi.org/10.1016/j.procs.2018.10.003</p>
<p>Hagiwara, A., Otsuka, Y., Hori, M., Tachibana, Y., Yokoyama, K., Fujita, S., Andica, C., Kamagata, K., Irie, R., Koshino, S., Maekawa, T., Chougar, L., Wada, A., Takemura, M., Hattori, N., &amp; Aoki, S. (2019). Improving the quality of synthetic FLAIR images with deep learning using a conditional generative adversarial network for pixel-by-pixel image translation. <em>American Journal of Neuroradiology, 40</em>(2), 224–230.</p>
<p>Herrmann, M., Boehme, P., Mondritzki, T., Ehlers, J. P., Kavadias, S., &amp; Truebel, H. (2018). Digital transformation and disruption of the health care sector: Internet-based observational study. <em>Journal of Medical Internet Research, 20</em>, 104–112. https://doi.org/10.2196/jmir.9498</p>
<p>Hunt, C. W. (2015). Technology and diabetes self-management: An integrative review. <em>World Journal of Diabetes, 6</em>(2), 225–233. https://doi.org/10.4239/wjd.v6.i2.225</p>
<p>ICH Guidelines. (2016). <em>Harmonized ICH Guidelines, Integrated Addendum to ICH E6(R1): Good Clinical Practice, E6(R2)</em>, version 4. International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH). https://database.ich.org/sites/default/files/E6_R2_Addendum.pdf</p>
<p>Jankowski, M. (2018). Sztuczna inteligencja jako narzędzie wspomagające proces diagnostyczno-terapeutyczny [Artificial intelligence as a tool supporting the diagnostic and therapeutic process]. <em>Menadżer Zdrowia</em>, październik–listopad 8–9, 66–67. [in Polish]</p>
<p>Jankowska-Polańska, B., Ilko, A., &amp; Wleklik, M. (2014). Wpływ akceptacji choroby na jakość życia chorych z nadciśnieniem tętniczym [Influence of the acceptance of the disease on quality of life of patients with hypertension]. <em>Nadciśnienie Tętnicze, 18</em>(3), 143–150. [in Polish]</p>
<p>Kaczmarek, E. (2021). Sztuczna inteligencja – pomoc w wykryciu retinopatii cukrzycowej [Artificial Intelligence – Assistance in Detecting Diabetic Retinopathy]. <em>Optyka, 6</em>(73), 48–49. [in Polish]</p>
<p>Kulkov, I. (2023). Next-generation business models for artificial intelligence start-ups in the healthcare. <em>International Journal of Entrepreneurial Behavior &amp; Research, 29</em>(4).</p>
<p>Marmot, M., Allen, J., Bell, R., Bloomer, E., &amp; Goldblatt, P. (2012). WHO European review of social determinants of health and the health divide. <em>The Lancet, 380</em>, 1011–1029. https://doi.org/10.1016/S0140-6736(12)61228-8</p>
<p>Orędziak, B. (2018). Telemedycyna a konstytucyjne prawo do opieki zdrowotnej w kontekście wykluczenia cyfrowego [Telemedicine and the Constitutional Right to Healthcare in the Context of Digital Exclusion]. <em>Zeszyty Prawnicze, 18</em>(1). https://doi.org/10.21697/zp.2018.18.1.06 [in Polish]</p>
<p>Owen, R., Macnaghten, Ph., &amp; Stilgoe, J. (2012). Responsible Research and Innovation: From Science in Society to Science for Society, with Society. <em>Science and Public Policy, 39</em>(6), 751–760. https://doi.org/10.1093/scipol/scs093</p>
<p>Pawelec, G. (2022). Rola nowych technologii w podnoszeniu jakości usług zdrowotnych w dobie pandemii COVID-19. [The role of new technologies in improving the quality of health services in the era of the COVID-19 pandemic]. <em>Marketing i Rynek, XXIX</em>(2). https://doi.org/10.33226/1231-7853.2022.2.2 [in Polish]</p>
<p>Ramachandran, N., Srinivasan, M., Thekkur, P., Johnson, P., Chinnakali, P., &amp; Naik, B. N. (2015). Mobile phone usage and willingness to receive health-related information among patients attending a chronic disease clinic in rural Puducherry, India. <em>Journal of Diabetes Science and Technology, 9</em>(6), 1350–1. https://doi.org/10.1177/1932296815599005</p>
<p>Santosa, E. S., Fariab, S. C. M., Carvalhob, M. I. S., Molc, M. P. G., Silvab, M. N., &amp; Silva, K. R. (2022, December). Management of unused healthcare materials and medicines discarded in a Brazilian hospital from 2015 to 2019. <em>Cleaner Waste Systems, 3</em>. https://doi.org/10.1016/j.clwas.2022.100046</p>
<p>Severika, B., &amp; Ceranic, K. (2020, April). Digital Health Classification Systems. <em>Statistics &amp; Science.</em> https://www.5-ht.com/en/media/blog/digital-health-classification-systems</p>
<p>Shah, J. L., Bhat, H. F., &amp; Khan, A. I. (2021). Integration of Cloud and IoT for smart e-healthcare. In V. E. Balas &amp; S. Pal (Eds.), <em>Healthcare Paradigms in the Internet of Things Ecosystem</em> (pp. 101–136). Elsevier. https://doi.org/10.1016/B978-0-12-819664-9.00006-5</p>
<p>Stahl, B. C., &amp; Coeckelbergh, M. (2016). Ethics of healthcare robotics: Towards responsible research and innovation. <em>Robotics and Autonomous Systems, 86</em>, 152–161. https://doi.org/10.1016/j.robot.2016.08.018</p>
<p>Stahl, S., Morrissette, D. A., Faedda, G. L., Fava, M., Goldberg, J., Keck, P., Lee, Y., Malhi, G., Marangoni, C., McElroy, S., Ostacher, M., Rosenblat, J., Solé, E., Suppes, T., Takeshima, M., Thase, M., Vieta, E., Young, A., Zimmerman, M., &amp; McIntyre, R. (2017). Guidelines for the recognition and management of mixed depression. <em>CNS Spectrums, 22</em>(2), 203–219. https://doi.org/10.1017/S1092852917000165</p>
<p>Sobiech, J. (1990). Warunki wyboru ekonomiczno-finansowych mechanizmów kierowania opieką zdrowotną [Conditions for choosing economic and financial mechanisms of healthcare management]. <em>Zeszyty Naukowe, 109</em>. Wydawnictwo Akademii Ekonomicznej w Poznaniu. [in Polish]</p>
<p>Trzmielak, D. M. (2013). Komercjalizacja wiedzy i technologii – determinanty i strategia [Commercialization of knowledge and technology – determinants and strategy]. Łódź: Wydawnictwa Uniwersytetu Łódzkiego. [in Polish]</p>
<p>Turpin, R., Hoefer, E., Lewelling, J., &amp; Baird, P. (2020). Machine Learning AI in Medical Devices, Adapting Regulatory Frameworks and Standards to Ensure Safety and Performance. AAMI, BSI. https://www.medical-device-regulation.eu/wp-content/uploads/2020/09/machine_learning_ai_in_medical_devices.pdf</p>
<p>Ullah, M., Hamayun, S., Wahab, A., Khan, S. U., Rehman, M. U., Haq, Z. U., Rehman, K. U., Ullah, A., Mehreen, A., Awan, U. A., Qayum, M., &amp; Naeem, M. (2023, November). Smart Technologies used as Smart Tools in the Management of Cardiovascular Disease and their Future Perspective. <em>Current Problems in Cardiology, 48</em>. https://doi.org/10.1016/j.cpcardiol.2023.101922</p>
<p>Verbraecken, J. (2021, September). Telemedicine in Sleep-Disordered Breathing: Expanding the Horizons. <em>Sleep Medicine Clinics, 16</em>(3), 418–445.</p>
<p>World Health Organization. (2018). Classification of Digital Health Interventions v 1.0: A shared language to describe the uses of digital technology for health. https://apps.who.int/iris/bitstream/handle/10665/260480/WHO-RHR-18.06-eng.pdf</p>
<p>Online sources:</p>
<p><a href="https://higosense.com/pl/produkt/">https://higosense.com/pl/produkt/</a><br />
<a href="https://medapp.pl/carnalife-holo/">https://medapp.pl/carnalife-holo/</a><br />
<a href="https://nestmedic.com/pregnabit/">https://nestmedic.com/pregnabit/</a><br />
<a href="https://www.teldoc.eu/projekty">https://www.teldoc.eu/projekty</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Digital transformation in health care and its marketing dimension</title>
		<link>https://minib.pl/en/numer/no-3-2023/digital-transformation-in-health-care-and-its-marketing-dimension/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Sun, 10 Sep 2023 08:45:55 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[digital transformation]]></category>
		<category><![CDATA[generations X and Y]]></category>
		<category><![CDATA[health care]]></category>
		<category><![CDATA[marketing]]></category>
		<category><![CDATA[patient]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=7678</guid>

					<description><![CDATA[Introduction Digital transformation is a critical phenomenon in today&#8217;s global economy. Through its activities, it is forcing customer orientation and a focus on customer needs and expectations. Marketing, also undergoing profound transformations, plays a considerable role in the transformation processes (Mazurek, 2019). In the case of marketing medical services, it is essential to point out...]]></description>
										<content:encoded><![CDATA[<h2>Introduction</h2>
<p>Digital transformation is a critical phenomenon in today&#8217;s global economy. It forces customer orientation and a focus on customer needs and expectations. Marketing, itself undergoing profound transformations, plays a considerable role in the transformation processes (Mazurek, 2019). In the case of marketing medical services, it is essential to point out its social-creative role in the context of the digitalisation of the health sector. Marketing is now more strongly associated with creating value for the general public. This is manifested in a changed approach to value in marketing, which is increasingly tied to the resulting customer experience and thus personalised (Baran, 2013).</p>
<p>The phenomenon of co-creating value with the customer also manifests itself in the healthcare market in the context of digital innovation. E-health platforms and tools respond to the needs and expectations of the main stakeholders in the healthcare system: patients. The coronavirus pandemic has significantly accelerated the process of digital transformation (Baudier et al., 2022; Li, 2021; Park et al., 2022; Pauzi &amp; Juhari, 2020; Schiliro, 2020, 2021), including the health sector (Marx &amp; Padmanabhan, 2020; Wahab &amp; Saad, 2022), which has translated into an increase in innovative technical and technological solutions in medical records, medical services and preventive health care. In reaching the audience for these solutions, marketing communication is essential. The market for medical services is changing, and the marketing product is evolving. The digital maturity of patient customers is increasing, and the requirements for quality medical services are changing. The synergy of medicine, technology and telecommunications should translate into new medical services available to all. The role of marketing here is vast, ranging from informing patients about new products/services, through helping them learn about new features, to gathering feedback on digital solutions.</p>
<p>The article aims to present issues on digital transformation in the health sector with attention to its marketing dimension.</p>
<h2>Research Methodology</h2>
<p>The author used the desk research method, reviewing the literature on digital transformation in health care in terms of marketing. The bibliography comprises 82 items, including scientific articles, reports, books, chapters from monographs and electronic sources, mainly from 2020–2022. The following scientific databases were used in the desk research analysis: Google Scholar, ResearchGate, Taylor and Francis Online and ScienceDirect. In searching these databases, the author used the following combination of words joined with Boolean operators (AND, OR): &#8216;marketing&#8217; AND &#8216;digital transformation&#8217; AND (&#8216;healthcare&#8217; OR &#8216;health care&#8217; OR &#8216;health service&#8217; OR &#8216;healthcare sector&#8217; OR &#8216;health sector&#8217; OR &#8216;healthcare industry&#8217; OR &#8216;health industry&#8217; OR &#8216;medicine&#8217;). The collected literature was supplemented by searching the same databases for the following keywords: &#8216;blockchain&#8217;, &#8216;value&#8217;, &#8216;co-creation&#8217;, &#8216;4P medicine&#8217;, &#8216;artificial intelligence&#8217; and &#8216;machine learning&#8217;. These databases were chosen because they made it possible to collect literature relevant to the article&#8217;s purpose.</p>
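<p>A search string of the kind described above can also be composed programmatically; the following is a minimal illustrative sketch only, where the term lists mirror the article&#8217;s query and the helper name <code>build_query</code> is our own, not part of any cited database API:</p>

```python
# Illustrative sketch: compose a Boolean search string of the shape
# 'a' AND 'b' AND ('x' OR 'y' OR ...), as used in the desk research above.
core_terms = ["marketing", "digital transformation"]
domain_terms = [
    "healthcare", "health care", "health service", "healthcare sector",
    "health sector", "healthcare industry", "health industry", "medicine",
]

def build_query(core, domain):
    """Join core terms with AND; group domain synonyms with OR in parentheses."""
    quoted_core = " AND ".join(f"'{t}'" for t in core)
    quoted_domain = " OR ".join(f"'{t}'" for t in domain)
    return f"{quoted_core} AND ({quoted_domain})"

query = build_query(core_terms, domain_terms)
print(query)
```

<p>The same helper could be reused for the supplementary keyword searches by passing a different term list.</p>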
<h2>Digital Transformation: Essence and Significance</h2>
<p>Digital transformation looks and runs differently for every organisation or company. Hence it is not easy to point to a single universal definition. At the same time, it signifies a cultural change manifested in constant questioning of the status quo, frequent experimentation and dealing with failure. The process of digital transformation can sometimes also mean moving away from existing, proven business processes to relatively new, still-developing practices (Nius, 2022). Therefore, digital transformation can be understood as a change in an organisation&#8217;s people, processes, technology and data components, creating an organisation&#8217;s evolution (McCarthy et al., 2022).</p>
<p>In general, digital transformation refers to a process aimed at improving an entity by inducing significant changes in its operation through the interplay of information, computing, communication and connectivity technologies (Kraus et al., 2021; Vial, 2019). Digital transformation introduces strategy- and customer-focussed changes through innovative information and communication technologies. This process aims to implement improved or new processes in modern organisations (Pihir et al., 2019). Thus, the digital transformation process represents the innovative use of digital technologies to provide better offerings to customers, design efficient operations or create new revenue streams for the business. The technologies used in the transformation process may not be new, but their innovative combination here matters. Hence, strategy, not just technology, is at the core of digital transformation (Chawla &amp; Goyal, 2022; Kane et al., 2015; Vallero, 2019).</p>
<p>The transformation process is aided by digital platforms that create a socio-technical environment that mediates interactions between actors and uses data streams to create value, both individual and community value, by inducing business users and suppliers to innovate their existing business models (Pietronudo et al., 2022). As mentioned, digital platforms create value. This happens in two ways. First, they facilitate transactions and offer technological building blocks to create new products and services (Darius &amp; Maticiuc, 2022; Shan &amp; John, 2022). Transaction facilitation platforms are exchange platforms that create value for at least two different types of users who can benefit from interacting with each other. In contrast, platforms that offer technological building blocks aim to orchestrate industry innovation by co-creating value with external general partners (Hermes et al., 2020).</p>
<p>The importance of digital transformation is immense because it first forces companies to rethink the role and values that guide their business models. Second, it represents a significant change in companies&#8217; fundamental pattern of value creation. Third, the transformation process causes a fundamental change in how an organisation thinks and uses legacy systems and tools to reposition part or all of the organisation in terms of value creation (Mugge et al., 2020). Finally, digital transformation helps organisations engage customers in the conception and product development phases, supporting the co-creation (co-innovation) process, which increases customer centricity (Hauke-Lopes et al., 2022; Imran et al., 2021). As one of the critical elements of digital transformation, customer centricity manifests itself in anticipating and shaping customer expectations, managing the customer journey and creating customer communities that communicate market value. Customer centricity focuses on empathy mapping to gain the benefits of reaching the right stakeholders (Pileggi, 2021; Tomičić-Pupek et al., 2021).</p>
<p>The importance of digital transformation should also be considered in reducing the impact of the COVID-19 pandemic, as it forced the rapid and unexpected implementation of digital technologies into corporations&#8217; business models and organisational structures. In general, digital transformation has influenced socio-economic recovery, that is to say economic growth, health care and income inequality (Mohamed, 2022), while its nature and pace were determined by artificial intelligence (AI), changing customer preferences and global crises such as the coronavirus pandemic (McCausland, 2021). In summary, digital transformation is a comprehensive, holistic concept that enables an overhaul of core processes and changes culture, organisation, relationships and business models. It enables both the delivery of sustainable results in the long term and the value creation for people and organisations. Undoubtedly, the COVID-19 pandemic has awakened and revolutionised how we understand digitality and demonstrated the strategic importance of its transformation (Gabryelczyk, 2020).</p>
<h2>Digitisation of the Health Sector: Security and Stakeholder Benefits</h2>
<p>Digital transformation in health care is essential in societies&#8217; transition to a post-industrial, knowledge-based economy (Garcia-Perez et al., 2022). Digital technology is being deployed in health care to support and improve its traditional operations and create new value propositions for end users of health services (Ghosh et al., 2022). For patients, the digitisation of the health sector enables them to operate in a comprehensive multi-channel environment giving broad access to medical information, education and health monitoring through AI and machine learning (ML) (Kraus et al., 2021). AI technologies could address unwarranted disparities in medical care, reduce medical errors, reduce healthcare inequities, and reduce waste and low-quality, low-value care (Hashiguchi et al., 2022). ML, in turn, contributes to observing sick patients, analysing disease patterns, and diagnosing and prescribing medication. ML helps provide patient-centred care, make therapeutic decisions, and detect sepsis and high-risk emergencies in patients (Quazi, 2022). Deployment of AI systems in health care can further optimise healthcare resources, facilitate a better patient experience, reduce per capita costs and increase the satisfaction of medical professionals and patients (Dicuonzo et al., 2022).</p>
<p>The creation and co-creation of value for patients are mediated by digital platforms that manage the public health ecosystem. This process is taking place in collaboration with a much more comprehensive range of partners and stakeholders than was previously the case (Hermes et al., 2020). Therefore, the digitisation of health care should ensure a seamless but, at the same time, secure and protected exchange of data, such as medical data, interoperability and patient-generated data. According to Jahankhani &amp; Kendzierskyj (2019), blockchain is a mechanism that can ensure data security and privacy in the health sector&#8217;s digitisation. Blockchain is a computerised, distributed database of records, transactions and digital events made and shared among connected users (Rejeb &amp; Rejeb, 2020). Another definition states that blockchain is a digital, decentralised, distributed ledger that records and adds transactions chronologically to create permanent and tamper-proof records (Jain &amp; Jain, 2022; Treiblmaier, 2018). Blockchain is shared by a network of computers, allowing customers to securely exchange financial information with suppliers without needing a third party, such as a bank (Peres et al., 2022; Swan, 2015; Yli-Huumo et al., 2016; Zheng &amp; Yu, 2016).</p>
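<p>The &#8220;permanent and tamper-proof&#8221; property these definitions describe can be illustrated with a minimal hash-chained ledger. The sketch below is a toy model of the general mechanism only, not of any specific healthcare blockchain, and all names in it are illustrative:</p>

```python
# Toy ledger: records are appended chronologically, each bound to its
# predecessor by a SHA-256 hash, so any later edit breaks the chain.
import hashlib
import json

def record_hash(record: dict) -> str:
    # Canonical JSON (sorted keys) so the hash is deterministic.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, data: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "data": data, "prev_hash": prev}
    chain.append({**body, "hash": record_hash(body)})

def verify(chain: list) -> bool:
    """Recompute every hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "data", "prev_hash")}
        if block["hash"] != record_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger: list = []
append_record(ledger, {"patient": "anon-1", "event": "blood test ordered"})
append_record(ledger, {"patient": "anon-1", "event": "result recorded"})
assert verify(ledger)          # untouched chain verifies

ledger[0]["data"]["event"] = "tampered"
assert not verify(ledger)      # any retroactive edit is detectable
```

<p>Real deployments add distribution across many nodes and consensus rules on top of this hash-linking, which is what removes the need for a trusted third party in the exchange scenarios described above.</p>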
<p>In health care, blockchain is an effective tool for preventing data breaches, improving the accuracy of medical records and reducing costs (Reddy, 2022), and it finds applications in biomedical research, health data analytics, education, health insurance claims, remote patient monitoring and pharmaceutical supply chains (Elangovan et al., 2022). Blockchain technology offers potential for value creation in health care through compliance achievements, reduction of errors and fraud, better governance, collaborative value creation among entities, smart contracts, technology to support charity, greater trust, and integrity. These elements suggest that blockchain fosters the creation of multiple tangible and intangible values for individuals and organisations across the health ecosystem (Spano et al., 2021). Finally, blockchain technology is crucial to developing a platform to manage the COVID-19 pandemic effectively, now and in the future. Currently, the most significant difficulty facing most nations is the lack of a precise mechanism for detecting new infections and predicting their risk. Moreover, such features of blockchain technology as decentralisation, transparency and immutability can help manage a pandemic by detecting infection outbreaks early, speeding up drug distribution and protecting users&#8217; privacy throughout the treatment process (Jafri &amp; Singh, 2022).</p>
<p>Technological advances in medicine and, consequently, the digital transformation of the health sector must be accompanied by parallel advances in promoting patient and public participation throughout the process. To this end, perceptions of personalised medicine (4P) and assessments of its value and risks must be better understood. The 4Ps of personalised, preventive (preemptive), predictive and participatory medicine help refocus health services from a focus on treating established diseases to maintaining health and well-being (George et al., 2022; Horne, 2017). It represents a new paradigm of holistic and integrative patient management practices with equal participation of the patient and physician in holistic health care, combining precision medicine and medical experience across the patient&#8217;s lifetime (Bartold &amp; Ivanovski, 2022). Personalised medicine is otherwise known in the literature as precision medicine (Duffy, 2016; Hussain et al., 2021; Sharma et al., 2022; Verma et al., 2022), stratified medicine (Jorgensen, 2019; Olechno, 2016; Ruppert et al., 2016), individualised medicine (Rahimi, 2016), customised medicine (Miller &amp; Tucker, 2017; Sarvan &amp; Nori, 2021), molecular medicine (Ziv et al., 2016) or genomic medicine (Roden &amp; Tyndale, 2013), which corresponds to the 4P elements listed above (Slim et al., 2021).</p>
<p>Digital innovations in health care provide solutions to unmet health needs. Hence, they can take the form of new processes, therapies, tools, medical procedures or innovative approaches to education, training, management and procurement. Digital transformation emphasises the patient experience in delivering and improving health services in order to discover and identify patients&#8217; needs. Accordingly, healthcare users should be actively engaged in innovation to manage their health consciously. Patients are now co-producers of health services, and thanks to digital technologies, they can play a more active role in decision-making and innovation activities. Healthcare providers who continuously monitor, digitise and analyse patient data can better understand the desires and needs of healthcare users and tailor offerings and care to provide quality services (Santarsiero et al., 2022).</p>
<h2>Practical Aspects of Implementing Digital Technologies in Health Care</h2>
<p>The digitisation process in the health sector involves using innovative digital tools. They could improve the level of service to stakeholders and streamline the patient registration process. In addition, these IT solutions can direct patients&#8217; movement and monitor their health in wards. Using the latest digital technology to monitor such patients helps improve their quality of life and enables attending physicians to intervene immediately in life-threatening conditions.</p>
<p>A critical application of AI in medicine is the use of algorithms to aid diagnosis in various fields, such as radiology and cardiology. The advantage of AI is that the sensitivity and specificity of its diagnoses can exceed those of a doctor or team of medical professionals by up to several percentage points. In addition, vast potential lies in solutions that support diagnosis at the early stages of diseases such as cancer or cardiovascular disease (Żochowska, 2022). AI-based technology can reduce preparation times for head, neck and prostate cancers, for example, by as much as 90%, meaning that waiting times for potentially life-saving radiation therapy to begin can be drastically reduced. Critical future AI applications include immunomics, synthetic biology and drug discovery. These will find revolutionary use in the cancer, neurological and rare disease space, personalising the patient&#8217;s care experience (Bajwa et al., 2021). Studies further indicate that AI-based systems can outperform dermatologists in correctly classifying suspicious skin lesions. The advantage of AI systems stems from learning (more and faster) from successive cases and from exposure to multiple cases per minute, far more than a clinician can evaluate. AI-based decision-making approaches also find application where experts disagree, for example, in the identification of pulmonary tuberculosis on chest radiographs (Amisha et al., 2019).</p>
<p>Further practical applications of AI in the medical industry include support for telemedicine, body composition analysis, prediction of patient response to treatment, and the democratisation of prevention (Żochowska, 2022). A key element in the development of e-health is the telemonitoring of implantable devices, which is necessary to guarantee continuous, safe, high-quality health care for patients with such devices. New-generation implantable devices use Bluetooth technology to transmit data directly to the patient&#8217;s configured smartphone, from which, via a dedicated application, the data are transmitted to the provider through a server operated by the device manufacturer. In this case, no additional transmission devices are needed (Telemedyczna Grupa Robocza, 2021).</p>
<p>It is important to note that advances in wireless technology have created opportunities to provide on-demand healthcare services through health-tracking applications. Such innovative solutions have enabled a new form of healthcare delivery through remote interactions, available anywhere, anytime. These services are essential for regions with underdeveloped infrastructure and places that lack specialists. They help reduce costs and prevent unnecessary exposure to infectious diseases at the clinic. Telehealth technology is also essential in developing countries (Bohr &amp; Memarzadeh, 2020). In addition, it proves its worth in monitoring and observing elderly and disabled patients who live far from healthcare centres (Finco et al., 2023).</p>
<p>In conclusion, the practical aspects of implementing innovative digital solutions into the day-to-day operations of healthcare entities can be an essential source of competitive advantage in the healthcare market. On a global scale, meanwhile, AI can become a vital tool for improving health equality worldwide.</p>
<h2>Generations X and Y in the Digitisation of Health Care and the Dimension of Marketing</h2>
<p>Today&#8217;s medical market requires a change in the approach to the services offered, which should be personalised and accessible on the patient&#8217;s mobile devices. The marketing dimension is critical here, namely the design and communication of relevant medical content and digital applications that meet the expectations of demanding patient-clients. Appropriate patient-centred (patient-centric) activities should be carried out to achieve a positive patient experience. Patient experience management is now a sine qua non and a considerable challenge for the digitisation of health care.</p>
<p>Patient experience is the interaction between the patient and the healthcare provider and is integral to healthcare quality. In general, the quality of healthcare services is determined by, among other factors, easy access to health information, timely appointments and good communication with providers. In order to provide patient-centred care, healthcare providers need to understand the patient experience. Evaluating the patient experience, alongside other elements such as the safety and effectiveness of care, is the only way to create a complete picture of healthcare quality (Daffodil Software, n.d.). A precise understanding of the patient experience will benefit the healthcare industry and society in many ways, including, among other things, the establishment of tailored and personalised health care (Oben, 2020).</p>
<p>By 2025, generations X and Y will make up about 75% of Polish society (Kozak et al., 2022); hence, there is a need to address these generations with suitable activities and marketing messages related to the new digital health services resulting from the ongoing digital transformation of the health sector.</p>
<p>Generation X consists of those born between 1961 and 1983, also known as the communist generation, the Nothing for Real generation, the White Collar generation, the Blue Collar generation (Czerska, 2016), the MTV Generation and Gen-Xers (Berk, 2013). People of this generation value work and are attached to one employer, remaining loyal to it. They often prioritise work responsibilities over leisure despite rejecting the &#8216;rat race&#8217;. On the other hand, members of Generation X are unstable, insecure people, full of doubts, including about themselves. They are searching for the meaning of their existence and are characterised by colourlessness. Handling new technologies is not a problem for them (Czerska, 2016). Generation Y, also called the Millennium generation, the next generation, the digital generation, the generation of flip-flops and iPods (Bilińska-Reformat &amp; Stefańska, 2016), tech-savvy consumers (Dewalska-Opitek, 2017), generation me (Spinney, 2012), generation WHY, the gaming generation, the net generation, the Facebook generation or iGeneration (Kelan &amp; Lehnert, 2009), comprises people born between 1984 and 1995. They are shrewd, overconfident and even brash at times. They believe in their uniqueness and are intensely narcissistic.</p>
<p>On the other hand, Generation Y cannot make decisions independently. They expect constant attention and are impatient, yet well-educated, with excessive expectations. Compared to Generation X, they prefer flexible employment and freedom of action, which translates into an average working time with one employer of two years. Millennials do not respect their bosses, treating work as an avenue for personal development. They are eager to work in teams and are open to new challenges. They actively use new technologies (Czerska, 2016).</p>
<p>Given the above characteristics of both generations X and Y, which are open to new technological solutions, patients should be included in constructing complex health ecosystems designed to meet their needs.</p>
<p>One of the biggest challenges of digital transformation in the healthcare field is measuring the effectiveness of the personalisation of healthcare services and the impact of patient involvement in the treatment process. Given the attitudes of generations X and Y towards work and employers, it is necessary to be flexible in the design of health services and to focus, on the one hand, on brand loyalty and attachment and, on the other hand, on freedom of choice and frequent changes of decision. Undoubtedly, patients now actively using health services are informed and engaged. They play an active role in the decision-making process around innovative health tools and services: they search for information on preventive health care, health monitoring, specialist doctors, clinics and outpatient clinics, and appointment enrolment, and then actively use and consume the health services they have searched for. Thus, such patients can be considered prosumers of e-health services and tools (Wolny, 2013).</p>
<p>According to Deloitte Digital&#8217;s 2022 report, two post-pandemic patient archetypes in Poland represent their health and digital behaviour. The first group is the so-called Traditional Patients, who rarely use digital channels and use up to four apps. This group represents nearly 43% of the population. The second is the so-called Phygital Patients, frequent users of digital channels who are also interested in traditional channels. They make up more than 17% of the population. The Phygital Patient of the future expects the same level of service in all available channels, which complement each other (Deloitte Digital, 2022). This cross-channel model challenges marketers and managers to make each communication channel work smoothly and meet patient expectations, as the new standard of medical care is becoming an offering that spans multiple touchpoints across traditional and digital channels. Concerning Generation Y, Phygital patients are mainly women of the millennial generation working in large and medium-sized companies. It is primarily to this target group that marketing messages about innovative digital solutions should be personalised, as these people are more likely to take active care of their health when encouraged to do so by digital solutions. Besides, they need convenient access to specialists and multiple functionalities within a single application, such as automatic appointment reminders or the ability to share information about their health with a doctor (Okoniewska, 2022).</p>
<h2>Limitations</h2>
<p>The article is characterised by several limitations. Firstly, only articles indexed in the following databases were used in the analysis: Google Scholar, ResearchGate, Taylor and Francis Online and ScienceDirect, which may have resulted in the omission of valuable items on the issues under consideration. Secondly, the literature search in the above databases used a given combination of words with Boolean operators, which could have narrowed the search for relevant items. To complete the analysis, selected industry reports and electronic sources on the issues under consideration were used.</p>
<h2>Conclusions and Practical Implications</h2>
<p>The goal of the article, which was to present issues on digital transformation in health care and its marketing dimension, has been achieved.</p>
<p>The author&#8217;s findings, through a review of the literature on the subject, indicate that digital transformation in health care creates new business opportunities to solve various problems in medical practice and enables the creation of values that determine the quality of medical services. Marketing activities become helpful and even indispensable in this process.</p>
<p>The coronavirus pandemic has become an accelerator, so to speak, of digital health solutions. In the health industry, which until recently was considered traditional or even conservative, the Internet is now the critical tool for learning about products and services, using them, and forming opinions about healthcare providers and medical professionals. In parallel with the transformation of the health industry, a marketing transformation process is taking place. The most critical activities in this process are patient relationship management, patient experience management, patient engagement management, patient-centred marketing, hyper-personalisation of the message, the business-to-human (B2H) approach, ML and AI. In addition to the activities above, blockchain technology in the medical sector is also a new and growing phenomenon.</p>
<p>Several practical implications have been developed based on the analysed content of scientific and industry items. First, the transition to remote health care, even if only in preventive care, requires patients to change their mentality and be open to change. Second, the uptake of digital tools is impossible without marketing, through promotional campaigns for new e-solutions and presentations of mobile health products. Third, introducing innovative digital tools requires building and using new, complementary communication channels (a cross-channel model) between stakeholders in the health market. Finally, blockchain technology could transform existing healthcare management into a more efficient, secure system, potentially creating value across the health ecosystem.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-7716" src="https://minib.pl/wp-content/uploads/2023/09/Zrzut-ekranu-2023-11-03-122934-1.png" alt="" width="875" height="187" srcset="https://minib.pl/wp-content/uploads/2023/09/Zrzut-ekranu-2023-11-03-122934-1.png 875w, https://minib.pl/wp-content/uploads/2023/09/Zrzut-ekranu-2023-11-03-122934-1-300x64.png 300w, https://minib.pl/wp-content/uploads/2023/09/Zrzut-ekranu-2023-11-03-122934-1-768x164.png 768w" sizes="auto, (max-width: 875px) 100vw, 875px" /></p>
<h2>References</h2>
<p>1. Amisha, F., Malik, P., Pathania, M., &amp; Rathaur, V. K. (2019). Overview of artificial intelligence in medicine. <em>Journal of Family Medicine and Primary Care, 8</em>(7), 2328–2331. https://doi.org/10.4103/JFMPC.JFMPC_440_19<br />
2. Bajwa, J., Munir, U., Nori, A., &amp; Williams, B. (2021). Artificial intelligence in healthcare: Transforming the practice of medicine. <em>Future Healthcare Journal, 8</em>(2), e188. https://doi.org/10.7861/FHJ.2021-0095<br />
3. Baran, G. (2013). <em>Marketing współtworzenia wartości z klientem. Społecznotwórcza rola marketingu w procesie strukturacji interakcyjnego środowiska doświadczeń.</em> Instytut Spraw Publicznych Uniwersytetu Jagiellońskiego.<br />
4. Bartold, P. M., &amp; Ivanovski, S. (2022). P4 Medicine as a model for precision periodontal care. <em>Clinical Oral Investigations, 26</em>(9), 5517–5533. https://doi.org/10.1007/S00784-022-04469-Y/FIGURES/7<br />
5. Baudier, P., Kondrateva, G., Ammi, C., Chang, V., &amp; Schiavone, F. (2022). Digital transformation of healthcare during the COVID-19 pandemic: Patients&#8217; teleconsultation acceptance and trusting beliefs. <em>Technovation, 102547</em>. https://doi.org/10.1016/J.TECHNOVATION.2022.102547<br />
6. Berk, R. A. (2013). Multigenerational diversity in the academic workplace: Implications for practice. <em>Journal of Higher Education Management, 28</em>(1), 10–23.<br />
7. Bilińska-Reformat, K., &amp; Stefańska, M. (2016). Young consumers&#8217; behaviours in retail market and their impact on activities of retail chains. <em>Poslovna Izvrsnost, 10</em>(2), 123–134.<br />
8. Bohr, A., &amp; Memarzadeh, K. (2020). The rise of artificial intelligence in healthcare applications. In A. Bohr &amp; K. Memarzadeh (Eds.), <em>Artificial Intelligence in Healthcare</em> (pp. 25–60). Academic Press. https://doi.org/10.1016/B978-0-12-818438-7.00002-2<br />
9. Chawla, R. N., &amp; Goyal, P. (2022). Emerging trends in digital transformation: A bibliometric analysis. <em>Benchmarking, 29</em>(4), 1069–1112. https://doi.org/10.1108/BIJ-01-2021-0009/FULL/PDF<br />
10. Czerska, I. (2016). Pokolenie head down jako konsekwencja smartfonizacji społeczeństwa. <em>Prace Naukowe Uniwersytetu Ekonomicznego We Wrocławiu, 459</em>, 214–221. https://doi.org/10.15611/pn.2016.459.20<br />
11. Daffodil Software. (n.d.). <em>A complete guide to patient experience management.</em> Retrieved November 20, 2022, from https://www.daffodilsw.com/healthcare/patient-experiencemanagement/<br />
12. Darius, O. D., &amp; Maticiuc, M. D. (2022). Digital platforms — Integrate part of company performance. <em>Annals-Economy Series,</em> (4), 331–336.<br />
13. Deloitte Digital. (2022). <em>Phygitalowy Pacjent przyszłości. W jaki sposób technologia cyfrowa ukształtuje pacjenta nowej generacji?</em> Wyniki dla Polski.<br />
14. Dewalska-Opitek, A. (2017). Generation Y consumer preferences and mobility choices — An empirical approach. <em>Archives of Transport System Telematics, 10</em>(1), 17–23.<br />
15. Dicuonzo, G., Donofrio, F., Fusco, A., &amp; Shini, M. (2022). Healthcare system: Moving forward with artificial intelligence. <em>Technovation, 102510</em>. https://doi.org/10.1016/J.TECHNOVATION.2022.102510<br />
16. Duffy, D. J. (2016). Problems, challenges and promises: Perspectives on precision medicine. <em>Briefings in Bioinformatics, 17</em>(3), 494–504. https://doi.org/10.1093/bib/bbv060<br />
17. Elangovan, D., Long, C. S., Bakrin, F. S., Tan, C. S., Goh, K. W., Yeoh, S. F., Loy, M. J., Hussain, Z., Lee, K. S., Idris, A. C., &amp; Ming, L. C. (2022). The use of blockchain technology in the health care sector: Systematic review. <em>JMIR Medical Informatics, 10</em>(1), e17278. https://doi.org/10.2196/17278<br />
18. Finco, G., Kamal, M. A., Ismail, Z., Shehata, I. M., Djirar, S., Talbot, N. C., Ahmadzadeh, S., Shekoohi, S., Cornett, E. M., Fox, C. J., &amp; Kaye, A. D. (2023). Telemedicine, E-health, and multi-agent systems for chronic pain management. <em>Clinics and Practice, 13</em>(2), 470–482. https://doi.org/10.3390/CLINPRACT13020042<br />
19. Gabryelczyk, R. (2020). Has COVID-19 Accelerated digital transformation? Initial lessons learned for public administrations. <em>Information Systems Management, 37</em>(4), 303–309. https://doi.org/10.1080/10580530.2020.1820633<br />
20. Garcia-Perez, A., Cegarra-Navarro, J. G., Sallos, M. P., Martinez-Caro, E., &amp; Chinnaswamy, A. (2022). Resilience in healthcare systems: Cyber security and digital transformation. <em>Technovation, 102583</em>. https://doi.org/10.1016/J.TECHNOVATION.2022.102583<br />
21. George, C., Shetty, A., Graduate, P., &amp; Manjunath, N. (2022). Personalized periodontics. <em>International Journal of Innovative Science and Research Technology, 7</em>(3), 1376–1381.<br />
22. Ghosh, K., Dohan, M. S., Veldandi, H., &amp; Garfield, M. (2022). Digital transformation in healthcare: Insights on value creation. <em>Journal of Computer Information Systems.</em> https://doi.org/10.1080/08874417.2022.2070798<br />
23. Hashiguchi, T. C. O., Oderkirk, J., &amp; Slawomirski, L. (2022). Fulfilling the promise of artificial intelligence in the health sector: Let&#8217;s get real. <em>Value in Health, 25</em>(3), 368–373. https://doi.org/10.1016/J.JVAL.2021.11.1369<br />
24. Hauke-Lopes, A., Ratajczak-Mrozek, M., &amp; Wieczerzycki, M. (2022). Value co-creation and co-destruction in the digital transformation of highly traditional companies. <em>Journal of Business &amp; Industrial Marketing</em>. https://doi.org/10.1108/JBIM-10-2021-0474<br />
25. Hermes, S., Riasanow, T., Clemons, E. K., Böhm, M., &amp; Krcmar, H. (2020). The digital transformation of the healthcare industry: Exploring the rise of emerging platform ecosystems and their influence on the role of patients. <em>Business Research, 13</em>(3), 1033–1069. https://doi.org/10.1007/S40685-020-00125-X/TABLES/6<br />
26. Horne, R. (2017). The human dimension: Putting the person into personalised medicine. <em>The New Bioethics: A Multidisciplinary Journal of Biotechnology and the Body, 23</em>(1), 38–48. https://doi.org/10.1080/20502877.2017.1314894<br />
27. Hussain, S., Kaur, G., &amp; Deb, A. (2021). Mini-review on personalized medicine: A revolution in health care. <em>Precision Medicine Research, 3</em>(4). https://doi.org/10.53388/PMR2021080601<br />
28. Imran, F., Shahzad, K., Butt, A., &amp; Kantola, J. (2021). Digital transformation of industrial organizations: Toward an integrated framework. <em>Journal of Change Management: Reframing Leadership and Organizational Practice, 21</em>(4), 451–479. https://doi.org/10.1080/14697017.2021.1929406<br />
29. Jafri, R., &amp; Singh, S. (2022). Blockchain applications for the healthcare sector: Uses beyond Bitcoin. In S. Tanwar (Ed.), <em>Blockchain applications for healthcare informatics: Beyond 5G</em> (pp. 71–92). Academic Press. https://doi.org/10.1016/B978-0-323-90615-9.00022-0<br />
30. Jahankhani, H., &amp; Kendzierskyj, S. (2019). Digital transformation of healthcare. In H. Jahankhani, S. Kendzierskyj, A. Jamal, G. Epiphaniou, &amp; H. Al-Khateeb (Eds.), <em>Blockchain and clinical trial. Advanced sciences and technologies for security applications</em> (pp. 31–52). Springer. https://doi.org/10.1007/978-3-030-11289-9_2/COVER<br />
31. Jain, G., &amp; Jain, A. (2022). Blockchain for 5G-enabled networks in healthcare service based on several aspects. <em>Blockchain Applications for Healthcare Informatics: Beyond 5G</em>, 471–493. https://doi.org/10.1016/B978-0-323-90615-9.00018-9<br />
32. Jorgensen, J. T. (2019). Twenty years with personalized medicine: Past, present, and future of individualized pharmacotherapy. <em>The Oncologist, 24</em>(7), e432–e440. https://doi.org/10.1634/THEONCOLOGIST.2019-0054<br />
33. Kane, G. C., Palmer, D., Phillips, A. N., Kiron, D., &amp; Buckley, N. (2015). Strategy, not technology, drives digital transformation. Becoming a digitally mature enterprise. <em>MIT Sloan Management Review</em>. https://sloanreview.mit.edu/projects/strategy-drives-digital-transformation/<br />
34. Kelan, E. K., &amp; Lehnert, M. (2009). The millennial generation: Generation Y and the opportunities for a globalised, networked educational system. <em>Beyond Current Horizons.</em> https://www.researchgate.net/publication/255588891_The_Millennial_Generation_Generation_Y_and_the_Opportunities_for_a_Globalised_Networked_Educational_System<br />
35. Kozak, G., Kuna, B., &amp; Żelazna, A. (2022). Personalizacja usług medycznych jako kolejny kierunek transformacji cyfrowej obszaru ochrony zdrowia. https://www2.deloitte.com/pl/pl/pages/deloitte-digital/Articles/Personalizacja-uslug-medycznych-jako-kolejny-kierunek-transformacji-cyfrowej-obszaru-ochrony-zdrowia.html<br />
36. Kraus, S., Schiavone, F., Pluzhnikova, A., &amp; Invernizzi, A. C. (2021). Digital transformation in healthcare: Analyzing the current state-of-research. <em>Journal of Business Research, 123</em>, 557–567. https://doi.org/10.1016/J.JBUSRES.2020.10.030<br />
37. Li, S. (2021). How does COVID-19 speed the digital transformation of business processes and customer experiences? <em>Review of Business, 41</em>(1), 1–14.<br />
38. Marx, E. W., &amp; Padmanabhan, P. (2020). <em>Healthcare digital transformation: How consumerism, technology and pandemic are accelerating the future</em> (1st ed.). Productivity Press. https://doi.org/10.4324/9781003035695<br />
39. Mazurek, G. (2019). <em>Transformacja cyfrowa: perspektywa marketingu.</em> Wydawnictwo Naukowe PWN SA.<br />
40. McCarthy, P., Sammon, D., &amp; Alhassan, I. (2022). &#8216;Doing&#8217; digital transformation: Theorising the practitioner voice. <em>Journal of Decision Systems.</em> https://doi.org/10.1080/12460125.2022.2074650<br />
41. McCausland, T. (2021). Digital transformation. <em>Research-Technology Management, 64</em>(6), 64–67. https://doi.org/10.1080/08956308.2021.1974783<br />
42. Miller, A. R., &amp; Tucker, C. (2017). Frontiers of health policy: Digital data and personalized medicine. <em>Innovation Policy and the Economy, 17</em>(1), 49–75. https://doi.org/10.1086/688844<br />
43. Mohamed, H. A. (2022). The role of digital transformation in the socio-economic recovery post COVID-19. <em>Applied Economics.</em> https://doi.org/10.1080/00036846.2022.2117779<br />
44. Mugge, P., Abbu, H., Michaelis, T. L., Kwiatkowski, A., &amp; Gudergan, G. (2020). Patterns of digitization. A practical guide to digital transformation. <em>Research-Technology Management, 63</em>(2), 27–35. https://doi.org/10.1080/08956308.2020.1707003<br />
45. Nius, B. (2022). <em>Transformacja cyfrowa — Czym jest i po co to robić?</em> https://global4net.com/ecommerce/transformacja-cyfrowa-czym-jest-i-po-co-to-robic/<br />
46. Oben, P. (2020). Understanding the patient experience: A conceptual framework. <em>Journal of Patient Experience, 7</em>(6), 906–910. https://doi.org/10.1177/2374373520951672<br />
47. Okoniewska, M. (2022). <em>Pacjent phygitalowy to pacjent przyszłości.</em> https://medycynaprywatna.pl/pacjent-phygitalowy-to-pacjent-przyszlosci/<br />
48. Olechno, D. J. (2016). Individualized medicine vs. precision medicine. <em>DDNews, 12</em>(5). https://doi.org/10.1038/nbt.3514<br />
49. Park, J. Y., Lee, K., &amp; Chung, D. R. (2022). Public interest in the digital transformation accelerated by the COVID-19 pandemic and perception of its future impact. <em>The Korean Journal of Internal Medicine, 37</em>(6), 1223–1233. https://doi.org/10.3904/KJIM.2022.129<br />
50. Pauzi, M. F., &amp; Juhari, S. N. (2020). View of digital transformation of healthcare and medical education, within, and beyond pandemic COVID-19. <em>Asian Journal of Medicine and Biomedicine, 4</em>(2), 39–42. https://doi.org/10.37231/ajmb.2020.4.2.363<br />
51. Peres, R., Schreier, M., Schweidel, D. A., &amp; Sorescu, A. (2022). Blockchain meets marketing: Opportunities, threats, and avenues for future research. <em>International Journal of Research in Marketing.</em> https://doi.org/10.1016/J.IJRESMAR.2022.08.001<br />
52. Pietronudo, M. C., Zhou, F., Caporuscio, A., La Ragione, G., &amp; Risitano, M. (2022). New emerging capabilities for managing data-driven innovation in healthcare: The role of digital platforms. <em>European Journal of Innovation Management, 25</em>(6), 867–891. https://doi.org/10.1108/EJIM-07-2021-0327<br />
53. Pihir, I., Tomičić-Pupek, K., &amp; Furjan, M. T. (2019). Digital transformation playground — Literature review and framework of concepts. <em>Journal of Information and Organizational Sciences, 43</em>(1), 33–48. https://doi.org/10.31341/JIOS.43.1.3<br />
54. Pileggi, S. F. (2021). Knowledge interoperability and re-use in Empathy Mapping: An ontological approach. <em>Expert Systems with Applications, 180</em>, 115065. https://doi.org/10.1016/J.ESWA.2021.115065<br />
55. Quazi, S. (2022). Artificial intelligence and machine learning in precision and genomic medicine. <em>Medical Oncology, 39</em>, 120. https://doi.org/10.1007/s12032-022-01711-1<br />
56. Rahimi, A. (2016). The promising prospects of precision medicine. <em>Journal of Advanced Medical Sciences and Applied Technologies (JAMSAT), 2</em>(3). https://doi.org/10.18869/nrip.jamsat.2.3.244<br />
57. Reddy, M. (2022). <em>Digital transformation in healthcare in 2022: 7 key trends.</em> https://www.digitalauthority.me/resources/state-of-digital-transformation-healthcare/<br />
58. Rejeb, A., &amp; Rejeb, K. (2020). Blockchain and supply chain sustainability. <em>LogForum, 16</em>(3), 363–372. https://doi.org/10.17270/J.LOG.2020.467<br />
59. Roden, D. M., &amp; Tyndale, R. F. (2013). Genomic medicine, precision medicine, personalized medicine: What&#8217;s in a name? <em>Clinical Pharmacology Therapeutics, 94</em>(2), 169–172. https://doi.org/10.1038/clpt.2013.101<br />
60. Ruppert, T., Sydow, S., &amp; Stock, G. (2016). Personalized medicine: Consequences for drug research and therapy. <em>Advances in Precision Medicine, 1</em>(2). https://doi.org/10.18063/APM.2016.02.004<br />
61. Santarsiero, F., Schiuma, G., Carlucci, D., &amp; Helander, N. (2022). Digital transformation in healthcare organisations: The role of innovation labs. <em>Technovation, 102640</em>. https://doi.org/10.1016/J.TECHNOVATION.2022.102640<br />
62. Sarvan, M. S., &amp; Nori, L. P. (2021). Personalized medicine: A new normal for therapeutic success. <em>Indian Journal of Pharmaceutical Sciences, 83</em>(3), 416–429. https://doi.org/10.36468/PHARMACEUTICAL-SCIENCES.790<br />
63. Schiliro, D. (2020). Towards digital globalization and the Covid-19 challenge. <em>International Journal of Business Management and Economic Research, 2</em>(11), 1710–1716.<br />
64. Schiliro, D. (2021). Digital transformation, COVID-19, and the future of work. <em>International Journal of Business Management and Economic Research, 12</em>(3), 1945–1952.<br />
65. Shan, K., &amp; John, E. P. (2022). Adoption of digital health care — A reality in future. <em>International Journal of Innovative Research in Technology</em>, 8(12), 370–380. https://www.researchgate.net/publication/360614780<br />
66. Sharma, A., Guleria, V., Gupta, G., Lata, K., Patial, S. K., &amp; Jaiswal, V. (2022). Big data analytics for personalized medicine and healthcare. In G. Gupta, V. Jaiswal, M. Khari, &amp; N. Kumar (Eds.), <em>Mobile health: Advances in research and applications</em> (Vol. II, pp. 81–109). Nova Science Publishers, Inc. https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/86110<br />
67. Slim, K., Selvy, M., &amp; Veziant, J. (2021). Conceptual innovation: 4P Medicine and 4P surgery. <em>Journal of Visceral Surgery, 158</em>(3), S12–S17. https://doi.org/10.1016/J.JVISCSURG.2021.01.003<br />
68. Spano, R., Massaro, M., &amp; Iacuzzi, S. (2021). Blockchain for value creation in the healthcare sector. <em>Technovation, 102440</em>. https://doi.org/10.1016/J.TECHNOVATION.2021.102440<br />
69. Spinney, L. (2012). The curse of generation Y. <em>New Scientist, 214</em>(2862), 44–47. https://doi.org/10.1016/S0262-4079(12)61104-X<br />
70. Swan, M. (2015).<em> Blockchain: Blueprint for a new economy</em> (1st ed.). O&#8217;Reilly Media.<br />
71. Telemedyczna Grupa Robocza. (2021). <em>Wykorzystanie telemonitoringu urządzeń wszczepialnych w celu poprawy opieki nad pacjentami kardiologicznymi. Stan obecny i proponowane zmiany.</em> https://nil.org.pl/uploaded_files/art_1620382513_raporttgr.pdf<br />
72. Tomičić-Pupek, K., Tomičić Furjan, M., Pihir, I., &amp; Vrček, N. (2021). Disruptive business model innovation and digital transformation. <em>SSRN Electronic Journal.</em> https://doi.org/10.2139/SSRN.3975574<br />
73. Treiblmaier, H. (2018). The impact of the blockchain on the supply chain: A theorybased research framework and a call for action. <em>Supply Chain Management, 23</em>(6), 545–559. https://doi.org/10.1108/SCM-01-2018-0029/FULL/PDF<br />
74. Vallero, D. (2019). <em>Strategy, not technology, is at the core of digital transformation — Provided it proves agile.</em> https://www.linkedin.com/pulse/strategy-technology-coredigital-transformation-provided-vallero<br />
75. Verma, R., Patel, S., Minj, S. V., &amp; Bhat, A. (2022). Healthcare 5.0: A study on improving personalized care. <em>Proceedings — 2022 6th International Conference on Intelligent Computing and Control Systems, ICICCS 2022</em>, 1815–1818. https://doi.org/10.1109/ICICCS53718.2022.9788411<br />
76. Vial, G. (2019). Understanding digital transformation: A review and a research agenda. <em>The Journal of Strategic Information Systems, 28</em>(2), 118–144. https://doi.org/10.1016/J.JSIS.2019.01.003<br />
77. Wahab, S. M. A. A., &amp; Saad, M. (2022). Digital transformation acceleration in health sector during COVID-19: Drivers and consequences. <em>Journal of Business and Management Sciences, 10</em>(4), 164–179. https://doi.org/10.12691/jbms-10-4-1<br />
78. Wolny, R. (2013). Prosumpcja i prosument na rynku e-usług. <em>Konsumpcja i Rozwój, 1</em>(4), 152–163.<br />
79. Yli-Huumo, J., Ko, D., Choi, S., Park, S., &amp; Smolander, K. (2016). Where is current research on Blockchain technology? — A systematic review. <em>PLoS ONE, 11</em>(10). https://doi.org/10.1371/JOURNAL.PONE.0163477<br />
80. Zheng, S., &amp; Yu, B. (2016). Landsenses pattern design to mitigate gale conditions in the coastal city — A case study of Pingtan, China. <em>International Journal of Sustainable Development &amp; World Ecology, 24</em>(4), 352–361. https://doi.org/10.1080/13504509.2016.1230077<br />
81. Ziv, E., Durack, J. C., &amp; Solomon, S. B. (2016). The importance of biopsy in the era of molecular medicine. <em>Cancer Journal (Sudbury, Mass.), 22</em>(6), 418. https://doi.org/10.1097/PPO.0000000000000228<br />
82. Żochowska, D. (2022). <em>Praktyczne zastosowanie sztucznej inteligencji (AI) w medycynie 2022.</em> https://www.medonet.pl/magazyn-digital-health/digital-innovation,praktycznezastosowanie-sztucznej-inteligencji-ai-w-medycynie-2022,artykul,38817574.html</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
