<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>measurement &#8211; Marketing of Scientific and Research Organizations &#8211; The scientific journal by the Institute of Aviation</title>
	<atom:link href="https://minib.pl/en/tag/measurement/feed/" rel="self" type="application/rss+xml" />
	<link>https://minib.pl</link>
	<description></description>
	<lastBuildDate>Tue, 17 Feb 2026 13:00:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.4</generator>

<image>
	<url>https://minib.pl/wp-content/uploads/2020/04/cropped-favicon-32x32.png</url>
	<title>measurement &#8211; Marketing of Scientific and Research Organizations &#8211; The scientific journal by the Institute of Aviation</title>
	<link>https://minib.pl</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Stevens’ measurement scales in marketing research – A continuation of discussion on whether researchers can ignore the Likert scale’s limitations as an ordinal scale</title>
		<link>https://minib.pl/en/numer/no-1-2025/stevens-measurement-scales-in-marketing-research-a-continuation-of-discussion-on-whether-researchers-can-ignore-the-likert-scales-limitations-as-an-ordinal-scale/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Wed, 19 Mar 2025 09:30:55 +0000</pubDate>
				<category><![CDATA[Likert scale]]></category>
		<category><![CDATA[limitations of measurement scale]]></category>
		<category><![CDATA[marketing research]]></category>
		<category><![CDATA[measurement]]></category>
		<category><![CDATA[Stevens’ measurement scales]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=8205</guid>

					<description><![CDATA[1. Introduction A distinctive feature of contemporary marketing research, and more broadly speaking, of economic and social science research, is its advanced mathematization – understood here as the application of mathematical methods to capture the essence of some phenomenon. At its core, this process involves the permissible mathematical transformations that can be applied to a...]]></description>
										<content:encoded><![CDATA[<h2>1. Introduction</h2>
<p>A distinctive feature of contemporary marketing research, and more broadly speaking, of economic and social science research, is its advanced mathematization – understood here as the application of mathematical methods to capture the essence of some phenomenon. At its core, this process involves the permissible mathematical transformations that can be applied to a dataset, which determine the applicability of various statistical and econometric techniques with respect to the type of measurement scale. To facilitate the use of mathematics in drawing empirical conclusions from psychological data, which are often ordinal in nature, S.S. Stevens redefined measurement as “the assignment of numbers to objects and events in accordance with a rule” (Stevens, 1946). He introduced four fundamental types of scales that comprise measurement instruments – nominal, ordinal, interval, and ratio scales – and established criteria for the permissible statistical tests, methods and techniques that should be applied to each of them.</p>
<p>Stevens’ scales of measurement are still widely used in data analysis in the natural and social sciences, including marketing research. They were revolutionary, but they have certain flaws which have fueled an ongoing debate about the acceptability of using different tests and statistical techniques at different scales and levels of measurement (i.e. weak vs. strong scales). Instead of relying on Stevens’ scales, researchers may need to demonstrate the mathematical properties of their data and map them to analogous sets of numbers, making explicit claims about mathematization, defending them with proofs, and applying only those operations that are defined for that set (Thomas, 2019). Increasing mathematization can be explained by the needs of maximization, optimization, modeling, and forecasting.</p>
<p>However, we must ask whether the application of various statistical methods and techniques in marketing research has gone too far, limiting researchers&#8217; horizon of thought, leading to erroneous conclusions being drawn, and diverting attention from attempts to explain the non-quantitative attitudes, motives, opinions, needs, expectations, and preferences of consumers (who are people, not machines or AIs). Addressing this question is the main goal of the article.</p>
<p>People&#8217;s attitudes comprise three closely related components: cognitive, affective and behavioral dispositions. Together these elements form a set of beliefs about the nature of the attitude object, making it difficult to establish clear boundaries between them at the measurement stage. Therefore, it is impossible to indicate where purely descriptive knowledge about the attitude object, or ideas about its nature, ends and where emotions and assessments begin, or where knowledge and assessments of the object end and where readiness, intention or a sense of obligation to undertake specific behaviors towards the object (such as a purchase decision) begins (Nowak, 1973; Escher, 2010). This type of explanation is the key justification for the common practice of determining the direction and strength of attitudes solely on the basis of measurements of opinions expressed with varying degrees of acceptance.</p>
<p>The Likert scale is one of the most frequently used scales for measuring the direction and strength of attitudes of customers, consumers, and people in general. It was constructed to be applicable to measuring hidden phenomena (Likert, 1932) and was intended to overcome the limitations of simple scales, having the advantage of being multi-item. The method of appropriately using and analyzing data obtained on Likert-type measurement scales has been the subject of discussion for over 70 years. There are basically two main, competing views, which have evolved independently of each other, in the related literature and in the practice of empirical research. Historically, there has been a debate between those who support viewing the Likert scale in terms of ordinality (rank order – the present author is a supporter of this approach) and those who support intervalism – ascribing an interval-scale nature to the Likert scale (Burke, 1953; Glass, 1972; Walesiak, 1996; Kampen &amp; Swyngedouw, 2000; Francuz &amp; Mackiewicz, 2007; Jamieson, 2004; Carifio &amp; Perla, 2008; Kaczmarek &amp; Tarka, 2013; Kero &amp; Lee, 2016). Supporters of the first approach point to Siegel’s argument (“the properties of an ordinal scale are not isomorphic to the number system known as arithmetic”, Siegel, 1956, p. 26), while opponents point to the authority of K. Pearson (1909), who pointed out that measurements on an ordinal scale can be treated as a certain version of measurements on an interval scale (for discussion, see Francuz &amp; Mackiewicz, 2007, pp. 388–390).</p>
<p>The lack of a natural or arbitrary zero on the Likert scale creates a certain problem, so we do not know whether the distances on the scale are the same. For instance, is the distance on the Likert scale between “I completely agree”, coded as 5, and “I completely disagree”, coded as 1, equal to 4? Unfortunately, this remains unknown, because the “numbers” on this scale play a different role than they do, for instance, in the mathematical formula 5 – 1 = 4. The numbers on the Likert scale could be replaced, for example, with typographic symbols, such as emoticons. This fact determines the admissibility of using specific statistical methods and techniques in the process of data processing and inference. If a researcher calculates the arithmetic mean and standard deviation of data obtained on an ordinal scale, this is evidence either that they misunderstand measurement theory (whereby the type of scale is determined according to Stevens), or that they are implicitly assuming that the given scale is not an ordinal scale, but rather possesses interval properties.</p>
<p>Given that most of what is directly measured in marketing research &#8211; and often also in sociological, psychological, and even medical research &#8211; is measured using ordinal Likert-type scales, a critical question remains: Is expressing ratings on an n-point &#8220;scale&#8221; (with 5, 7, or more points) truly a measurement on an ordinal scale? This issue extends beyond concerns about the &#8220;distances&#8221; between scale values; it also pertains to whether a single numerical value can accurately represent a set of indistinguishable observations, as required by measurement theory.</p>
<h2>2. The issue of interval or lack of interval of the Likert measurement scale</h2>
<p>The Likert scale and its variants are situated on the ordinal level of measurement (Pett, 1997; Blaikie, 2003). This means that response categories, and therefore data obtained in ordinal-level measurement, are characterized by a rank order, and hence these empirical data can be compared and sorted. However, in research practice, these data are also often subjected to reduction processes, including latent variable analysis and correlation assessments, seeking to identify underlying factors that serve as the basis for empirical scaling and index construction. Yet it cannot be assumed that the intervals between values on the Likert scale are equal – although, as Blaikie (2003) points out, “researchers frequently assume that they are”. However, Cohen et al. (2000) claim that it is “unjustified” to conclude that the difference in intensity of feelings between “strongly disagree” and “disagree” is equivalent to the difference in intensity of feelings between other consecutive categories on the Likert scale. Nominal and ordinal variables (as well as interval and ratio variables) require different statistical approaches, and if an inappropriate statistical technique is applied, the risk of drawing erroneous conclusions from research findings (positive or negative verification of research hypotheses) significantly increases.</p>
<p>The scientific literature on statistics and research methodology consistently emphasizes that, for ordinal data, the median or mode should be used as a “measure of the central tendency”, rather than the mean. This is because the arithmetic manipulations required to calculate the mean (and the standard deviation) are inappropriate for data obtained by measurement on an ordinal scale, where numbers usually represent verbal statements (Clegg, 1998). Ordinal data can also be described using frequencies/percentages of responses in each category. Moreover, it is recommended that appropriate statistical inference for ordinal data be performed using nonparametric tests, such as Chi-square, Spearman’s Rho, or the Mann-Whitney U test, rather than parametric tests, because the latter require data at the level of interval or ratio scale measurement (Mann &amp; Whitney, 1947; Lieberson, 1964; Myers, 2003; Sobczyk, 2007).</p>
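<p>These recommendations can be illustrated with a minimal sketch (the responses and group names below are invented for illustration, not drawn from any study): ordinal Likert data are summarized with the median and mode, and two groups are compared with a rank-based Mann&#8211;Whitney U statistic implemented directly from its pairwise definition:</p>

```python
from statistics import median, mode

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
group_a = [2, 3, 3, 4, 4, 4, 5]
group_b = [1, 2, 2, 2, 3, 3, 4]

# Permissible central-tendency summaries for ordinal data: median and mode
print(median(group_a), mode(group_a))  # 4 4
print(median(group_b), mode(group_b))  # 2 2

def mann_whitney_u(x, y):
    """U statistic: number of pairs (xi, yj) with xi > yj, ties counting 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Values far from len(x) * len(y) / 2 suggest a shift between the groups
print(mann_whitney_u(group_a, group_b))  # 39.0 out of a maximum of 49
```

<p>In practice one would use <code>scipy.stats.mannwhitneyu</code>, which also supplies a p-value; the hand-rolled version above only makes the rank-based logic explicit.</p>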
<p>However, in practice these &#8220;rules&#8221; are often ignored by the authors of scientific articles, master&#8217;s theses, doctoral dissertations, and reports prepared by national and international research agencies. Such authors may, for instance, use a Likert scale, but describe and analyze the empirical data using means and standard deviations and conduct parametric analyses such as ANOVA. This is consistent with Blaikie&#8217;s observation that it has become common practice to assume that data obtained from a Likert scale can be processed like data obtained from an interval-scale measurement (i.e. at the interval level). In general, such authors do not clarify whether they are even aware that some would consider this to be invalid. There is often no explicit justification for assuming that Likert scale data have interval properties, nor is any argument provided to support this assumption.</p>
<h2>3. Permissible operations on numbers depending on the type of measurement scales</h2>
<p>In marketing research, in particular, the proper use of measurement scales is one of the basic problems. According to Stevens (1946), the permissible operations that can be performed on numerical data depend on the type of measurement scales used for the variables studied. Therefore, a different procedure is required when dealing with a data matrix that includes quantitative variables measured on scales of different types &#8211; i.e. when in addition to variables measured on strong measurement scales (i.e. interval and ratio scales), there are qualitative variables, characteristic of marketing research, measured on weak nominal and ordinal scales (e.g. data obtained from the measurement of attitudes, opinions, preferences and expectations of recipients; product architecture and image; data from measurements of the color, quality and taste of products, packaging properties, opinions on the price level).</p>
<p>When all variables in a dataset are measured on a single type of scale, especially strong scales, the choice of statistical and econometric methods for analysis and interpretation is relatively straightforward. The problem of the transformation of measurement scales and permissible mathematical and statistical transformations for data obtained in individual types of measurement scales nevertheless often becomes apparent in social, economic and marketing research (Walesiak, 2014). What approach should researchers adopt when specialist sources say one thing but common practice is different? The treatment of ordinal scales as interval scales, although common, has long been controversial (e.g. discussed by Walesiak, 1996) and &#8211; it seems &#8211; remains so. Kuzon et al. (1996) referred to the application of parametric tests to analyze ordinal data as the first of the &#8220;seven deadly sins of statistical analysis&#8221;. Knapp (1990), however, found some merit in the argument that sample size and distribution are more important than the level of measurement when determining whether it is appropriate to use parametric tests to assess specific parameter values for a given population from which the sample is drawn. These parameters may be the mean, variance or standard deviation.</p>
<p>Nevertheless, even if we accept that the status of intervals is justified in the case of data obtained using the Likert method, datasets generated using Likert-type scales often have a skewed or polarized distribution (e.g., when most respondents “agree” or “strongly agree” that a given brand of beer was tasty, or when respondents have polarized views on the “color of a beer bottle,” depending on their place of residence). Therefore, if we want to improve the quality of research in social sciences, and in marketing research in particular, such issues as the level of measurement and adequacy of mean, standard deviation, and parametric statistics should be taken into account already at the stage of research design, and authors must address them when discussing their chosen research methodology and the individual phases and stages of the research process, including specific activities, methods, and expected results at a given stage. Knapp (1990) proposed that researchers should decide what level of measurement is being used. To paraphrase: if data are measured on the interval level, for outcome x the researcher should be able to answer the question “x what?”. If the data are clearly ordinal, nonparametric tests should be used; and if the researcher is confident that the data can be reasonably classified as interval, attention should nevertheless be paid to the sample size, its representativeness, and whether the distribution is normal.</p>
<p>Finally, can we assume that Likert-type scales are interval scales? I remain convinced by the above arguments of Kuzon and Knapp. To paraphrase their reasoning: the average of “strongly agree” and “strongly disagree” is not “neutral and a half”, and this is true even when whole numbers are assigned to represent those who “disagree” and “agree”!</p>
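<p>This reasoning can be restated in executable form as a minimal sketch (the responses and the alternative coding below are invented, but the recoding is strictly increasing and therefore equally legitimate on an ordinal scale): under an order-preserving recoding of the labels, a comparison of group means can reverse, while the comparison of medians cannot:</p>

```python
from statistics import mean, median

# Hypothetical Likert responses under the usual coding 1..5
group_a = [1, 1, 5]  # polarized: two "strongly disagree", one "strongly agree"
group_b = [2, 2, 2]  # uniformly "disagree"

# Under this coding the mean ranks group A above group B ...
assert mean(group_a) > mean(group_b)  # 2.33... > 2.0

# ... but on an ordinal scale any strictly increasing recoding of the
# labels is equally legitimate, e.g.:
recode = {1: 1, 2: 4, 3: 5, 4: 6, 5: 7}
a2 = [recode[v] for v in group_a]
b2 = [recode[v] for v in group_b]

# ... and under the recoding the comparison of means reverses,
assert mean(a2) < mean(b2)  # 3.0 < 4.0

# while the median-based comparison is the same in both codings.
assert median(group_a) < median(group_b)
assert median(a2) < median(b2)
```

<p>The mean thus depends on the arbitrary choice of codes, which is precisely why it is not a permissible statistic for ordinal data.</p>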
<p>In the design phase and implementation phase of the research process, researchers must also resolve methodological issues. The basic distinction drawn is between qualitative and quantitative methods. The former are characterized by a holistic approach to the research object, treating it as an individualized entity and seeking to produce the deepest possible research findings and to understand the very essence of the phenomena being studied. Qualitative methods are therefore particularly suitable in social sciences, especially in marketing, for such purposes as the analysis of subjective customer experiences, the meanings of messages, the motivations and attitudes of participants in market exchange processes, or for holistically reconstructing or predicting the course of specific market processes (Bryman, 2005; Devine, 2006). Quantitative methods, on the other hand, are based on a completely different logic, assumptions and research goals. When applying them, the researcher should accept that the obtained results will not be as deep as in the case of qualitative research, that certain nuances and subtleties will be naturally omitted, and that the studied phenomena will be treated only in selected aspects and without an individualized approach. In exchange, the research results, i.e. the new information, may be more reliable and objective (not burdened with the subjectivity of the subject or object of cognition), unambiguous and precise in interpretation, while at the same time providing greater possibilities for generalization and, above all, making good decisions.</p>
<p>However, the potential benefits of the quantitative approach can be achieved only if the research is conducted carefully, methodically, and with strict control of the research process. The key element here is the measurement of variables. This is the thread connecting theoretical categories with empirical research and the means by which the former can be analyzed (Bryman, 2004). The essence of quantitative research is that the objects studied are not treated as holistic, ontologically separated entities, but as bundles of variables characterizing them. The main goal of quantitative research is to find relationships between these variables, through analyses revealing appropriate statistical relationships or their absence (Białas, 1999). However, in order for these analyses to be reliable and accurate, they must be based on input data of appropriate quality. Without this, they would be worthless, because statistics is only a tool and in itself cannot tell us anything valuable about market reality without solid work by the researcher and analyst.</p>
<p>This means that the element that determines the quality of the entire research process is measurement, understood as a sequence of research activities “aimed at determining the value of a specific quantity, and thus a numerical comparison of this value with a unit of measurement” (Szewczak, 2010). The activities that make up measurement may include the application of certain measuring tools, observation of their readings, as well as appropriate processing of directly obtained results – e.g. various calculations leading to determining the value sought. In short, “measurement is the assignment of numbers to objects in such a way that these numbers reflect the relations between these objects.” In the so-called representational approach to measurement, it is assumed that the measured properties are determined by means of empirical relations between objects, that can be characterized by them (Szewczak, 2010).</p>
<h2>4. Measurement scales in measurement theory and properties of measurement scales according to Stevens’ classification</h2>
<p>Measurement theory encompasses the entire scope of the measurement procedure, including the construction of measurement scales &#8211; the instruments by means of which the value of a variable is measured. The researcher therefore performs an operation by means of which the relations between certain objects can be observed, measured and interpreted. Regardless of whether we treat measurement scale construction as a separate research procedure or an integral part of measurement itself, it is one of the most important determinants of the reliability, validity and accuracy of a quantitative study, and to a large extent determines whether the results of a study (useful information) can be considered valuable in the decision-making process. Only reliable instruments or measurement tools can ensure that the values of the variables subject to analysis correspond to the actual characteristics of the objects studied, and that the results of these analyses accurately reflect the structure of the market reality under study (Lissowski et al., 2008).</p>
<p>Constructing a measurement scale is not an easy task; it requires appropriate methodological competences and knowledge about the phenomenon or event being measured. It is also a time-consuming process. Therefore, in research practice, there may be a temptation to take shortcuts &#8211; omitting certain important elements, or even creating an ad hoc scale based on related indicators, selected according to the criterion of data availability, and then assuming that when summed up, these will jointly measure a phenomenon, event or process. Such an approach is not advisable, because it leads to the creation of research artifacts and amounts to the mere simulation of scientific inquiry. A methodologically rigorous and reliable scale creation process, in contrast, ensures the reliability and credibility of the obtained instrument or tool. Although this process demands substantial effort and time, the benefits are significant: well-constructed measurement tools yield results that contribute to scientific knowledge and inform decision-making processes.</p>
<p>Researchers rely on multiple sources of information in their research, diagnostic, and prognostic endeavors, seeking to achieve both scientific and practical, utilitarian goals. A crucial part of this process involves selecting the appropriate measurement scales and research instruments to use. However, the measurement scales used must simultaneously meet several important criteria: (a) standardization, (b) reliability, (c) validity, (d) normalization, (e) feasibility of use (Stevens, 1956).</p>
<p>In marketing research, the concept of “scale” appears in three basic meanings (Sagan, 2003):</p>
<ul>
<li>in the relational sense, a scale defines the field of permissible transformations of sets of measured objects into a set of symbols while maintaining the principle of homomorphism, establishing a set of statistical analysis tools permissible for a given level;</li>
<li>as an outcome of the research procedure, a scale defines the positions of respondents at discrete points or along the continuum of the measured feature (discrete-step or continuous variables);</li>
<li>in data collection, a scale is a set of conventional categories or response patterns, in estimated graphic scales or so-called rank scales, which are instruments for collecting information and defining the direction and strength of respondents’ reactions to a given item within a complex measurement scale.</li>
</ul>
<p>In the relational meaning of scale, the classification of measurement scales by the aforementioned S.S. Stevens (1946 and 1951) is adopted in marketing research methodology. This approach assumes that the type of measurement scale is known in relation to a given level of measurement. However, this distinction may be problematic for researchers in empirical identification, especially in relation to ordinal and interval scales. In contrast, researchers should have no problems distinguishing qualitative and quantitative data, discrete/step variables and continuous variables. The problems with the classification of Stevens&#8217; measurement scales noticed in the literature may be related to the fact that researchers may not recognize the type of scale a priori. The measurement operation is also related to the theoretical construct adopted by researchers. The measurement procedure on Stevens&#8217; scales, however, ensures access to data that are &#8220;empirically&#8221; at the appropriate level of measurement, and the transformation of variables that is mathematically and statistically permissible for a given level does not change their position at the points of the scale or on its continuum (Townsend &amp; Ashby, 1984; Mitchell, 1986). A measurement scale can also be treated as the result of a research procedure that determines the position of respondents on a continuum (understood as a continuous, ordered set of an infinite number of elements that smoothly transition from one to another), or at discrete points of the measured feature. This is how the attitude scales of Likert (ordinal scale), Guttman (ordinal scale), and Thurstone (interval scale) are constructed and defined &#8211; the sum of the ratings for an individual respondent in relation to all items of a one-dimensional scale indicates the respondent&#8217;s position at points or on the continuum of the measured attitude, depending on its strength and direction.</p>
<p>In cases where a respondent’s position is determined by summing their individual scores across the scale, the result is essentially an attitude index (the scale is arbitrary in nature). However, when a respondent’s position is derived from specific mathematical procedures transforming raw scores (e.g. into factor values), then the resulting measure can be classified as an attitude scale (Sagan 2003).</p>
<p>Measurement scales are ordered from the weakest (nominal) to the strongest (ratio). In his foundational work, Stevens (1946) distinguished between intensive and extensive scales, emphasizing that the type of scale is associated with possible transformations that preserve its properties. The basic properties of Stevens&#8217; measurement scales are presented in Table 1. The type of scale used to measure the value of a given variable (statistical feature), or more precisely, the properties of the chosen scale, determine the statistical methods that can be applied (Adams et al., 1965). The first two scales are classified as nonmetric (weak) scales, and the remaining two as metric (strong) scales.</p>
<p><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-8232" src="https://minib.pl/wp-content/uploads/2025/03/01-2025-03-t1.jpg" alt="" width="781" height="1718" srcset="https://minib.pl/wp-content/uploads/2025/03/01-2025-03-t1.jpg 781w, https://minib.pl/wp-content/uploads/2025/03/01-2025-03-t1-136x300.jpg 136w, https://minib.pl/wp-content/uploads/2025/03/01-2025-03-t1-466x1024.jpg 466w, https://minib.pl/wp-content/uploads/2025/03/01-2025-03-t1-768x1689.jpg 768w, https://minib.pl/wp-content/uploads/2025/03/01-2025-03-t1-698x1536.jpg 698w" sizes="(max-width: 781px) 100vw, 781px" /></p>
<p>It is important to recognize that the order of scales determines their level (power, strength). Nominal and ordinal scales are non-metric and qualitative scales, while interval and ratio scales are metric and quantitative. The metric scales are commonly treated in research together as a quantitative scale – this is the case in most statistical packages, including SPSS and Statistica. In experimental sciences, variables measured on a nominal and ordinal scale are most often referred to as discrete, and those measured on a quantitative scale as continuous. The distinction between measurement scales can therefore be summarized as follows (Wiktorowicz et al., 2020):</p>
<ul>
<li>When comparing the values of a variable expressed on a nominal scale (e.g. gender), we are only able to indicate whether two people have the same or a different variant of the variable.</li>
<li>If we can additionally indicate which person has a higher variant of the variable (but we are not able to determine how much higher), we are dealing with a variable measured on an ordinal scale (this is the case, for example, with level of education or a feature measured on a Likert scale).</li>
<li>If we can additionally indicate how much higher or lower a given variant is (distances are fixed), we are dealing with a quantitative scale.</li>
</ul>
<p>And so, the stronger the measurement scale, the greater the accuracy of measurement, which in turn enables researchers to apply other advanced and complex methods of statistical analysis.</p>
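<p>The difference in permissible statistics across scale levels can be made concrete with a small sketch (the data are invented): Spearman&#8217;s rank correlation, admissible for ordinal data, depends only on the order of the values, whereas Pearson&#8217;s coefficient presupposes meaningful distances and is therefore sensitive to a monotone but non-linear change of spacing:</p>

```python
from math import sqrt

def ranks(xs):
    """Average ranks: tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    return pearson(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]  # y = x^2: same order as x, different spacing

assert abs(spearman(x, y) - 1.0) < 1e-12  # rank correlation sees perfect order
assert pearson(x, y) < 1.0                # Pearson reacts to the unequal spacing
```

<p>In a real analysis one would call <code>scipy.stats.spearmanr</code> and <code>scipy.stats.pearsonr</code>; the point of the sketch is only that the rank-based coefficient is invariant under any monotone transformation of the data, which is exactly the invariance an ordinal scale guarantees.</p>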
<p>The data matrix is the starting point for mathematization and the application of statistical methods. The problem of applying multivariate statistical analysis methods, for example, becomes more complicated when the variables in a data matrix are measured on mixed scales, or only on weak, non-metric scales (especially the ordinal scale).</p>
<p>The Likert scale is precisely such a non-metric scale, meaning it does not inherently possess the mathematical properties required for interval or ratio measurement. This raises the question of whether it is permissible to apply statistical tools designed for metric data to non-metric variables. One fundamental principle of measurement theory states that only measurement results on a stronger scale (interval, ratio) can be transformed into numbers belonging to a weaker scale (nominal, ordinal) (Steczkowski &amp; Zeliaś, 1981; Wiśniewski, 1987; Walesiak, 1996; Jezior, 2013). Direct transformation of scales in the opposite direction &#8211; strengthening them &#8211; is not possible, because information at level Xn cannot be used to derive information at level Xn+1 or higher (Walesiak, 1993). Whether mathematical manipulation of an empirical data matrix leads to valid research conclusions depends, among other things, on the validity of the initial mathematization of attitudes and the validity of the subsequent mathematization of empirical data, i.e. the permitted mathematical transformations, relations, and mathematical operations on these data. If attitudes are measured on an ordinal scale, respondents&#8217; answers are only coded as real numbers, and mathematical operations are performed that are defined only for real numbers, not ordinal numbers, then these mathematical operations on the data matrix have no empirical equivalent and do not provide a basis for inferences or conclusions about attitudes.</p>
<p>If attitudes and perceptions exhibit the mathematical properties of real numbers and are bounded, and the statements offered on the Likert scale correctly define the endpoints and consistent intervals on an attitude continuum, then there are two possibilities. First, the empirical data matrix can be mathematized as ordinal numbers, because the data have the mathematical properties of ordinal numbers; this, however, entails a loss of information, and the arithmetic mean and standard deviation cannot be calculated because they are undefined for ordinal numbers. Alternatively, the numerical values in the empirical data matrix can be treated as real numbers, with the conversion of data into numbers involving rescaling. In this case, the numbers contained in the data matrix are analogous to the object of study, the mathematical operations are defined, and mathematical and statistical inferences lead to valid empirical conclusions.</p>
<h2>5. Likert did not recommend calculating averages for data obtained on his scale</h2>
<p>Rensis Likert, in 1932, cited Thurstone and Chave when he assumed that attitudes were formed on a linear &#8220;continuum of attitudes,&#8221; which was the basis for his explanation of how to construct a scale to measure attitudes (Likert, 1932). Likert proposed measuring attitudes based on respondents&#8217; agreement with statements developed by the researcher, with the respondent marking various points on the &#8220;continuum&#8221; of attitudes. The statements should be arranged in order from one end of the continuum to the other. Likert then explained that the statements should be assigned numbers, from one to five, in the case of a question with five options, with the number &#8220;one&#8221; being assigned to one end of the continuum and &#8220;five&#8221; to the other. Likert did not explicitly discuss the mathematical properties of these numbers, but he recommended calculating a correlation coefficient for each statement to ensure that the statement was numbered correctly, and he provided a table as an example. He treated the answer numbers as if they were real numbers, and the continuum of attitudes as if it were bounded (Likert, 1932, p. 50).</p>
<p>Likert did not recommend calculating average values, as is confirmed by this quote from his work:</p>
<blockquote><p>The split-half reliability should be found by correlating the sum of the odd statements for each individual against the sum of the even statements. Since each statement is answered by each individual, calculations can be reduced by using the sum rather than the average. (Likert, 1932, p. 48)</p></blockquote>
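Likert’s split-half procedure translates directly into code. The sketch below is illustrative only (the response matrix is invented for the example): each row is a respondent, each column a statement scored 1–5, and each respondent’s sum over the odd-numbered statements is correlated with the sum over the even-numbered statements, exactly as the quote describes.

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def split_half(data):
    """Split-half reliability in Likert's sense: correlate each
    respondent's sum over odd-numbered statements with the sum
    over even-numbered statements."""
    odd = [sum(row[0::2]) for row in data]   # statements 1, 3, ...
    even = [sum(row[1::2]) for row in data]  # statements 2, 4, ...
    return pearson(odd, even)

# Invented 6-respondent x 4-statement matrix of 1-5 responses:
responses = [
    [5, 4, 5, 4],
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
    [5, 5, 4, 5],
]
r = split_half(responses)  # high for this internally consistent matrix
```

Note that only sums are used, never means, in line with Likert’s remark that the sum suffices when every respondent answers every statement.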
<p>This, in turn, yields a clear answer to the question of whether the use of various statistical methods and techniques in marketing research has gone too far in empirical research on the nature of attitudes.</p>
<p>Returning to the earlier discussion of the mathematical properties of the Likert measurement scale: that debate does not address the mathematical properties of attitudes themselves, on which the proper mathematization of the empirical data matrix depends. In fact, it is not even entirely clear whether attitudes can be ordered. There is ongoing debate in psychology, economics, and marketing about whether the evidence supports the idea that attitudes and preferences adhere to the principle of transitivity (if a &gt; b and b &gt; c, then a &gt; c) (see, e.g., Regenwetter &amp; Dana, 2011; Bleichrodt &amp; Wakker, 2015), which is a property of both ordinal and real numbers. Additionally, Johnson (1936) raised early concerns about whether attitudes are dynamically stable. Whether various statistical operations are defined on Likert items and scales depends on how the empirical data matrix is mathematized. Performing operations that are not defined in mathematics is not mathematics – and as a result, it does not provide a valid basis for drawing empirical conclusions.</p>
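The transitivity property under debate can be stated operationally. The helper below is a minimal illustration invented for this sketch (it is not drawn from the cited studies): a set of strict pairwise preferences is transitive when every chain a &gt; b and b &gt; c also yields a &gt; c.

```python
from itertools import permutations

def is_transitive(prefers):
    """Return True if the strict preference relation is transitive.
    `prefers` is a set of (preferred, dispreferred) pairs."""
    items = {x for pair in prefers for x in pair}
    return all(
        (a, c) in prefers                            # conclusion: a > c
        for a, b, c in permutations(items, 3)
        if (a, b) in prefers and (b, c) in prefers   # premises
    )

# Transitive ordering: coffee > tea > water
assert is_transitive({("coffee", "tea"), ("tea", "water"),
                      ("coffee", "water")})
# Intransitive cycle of the kind debated in the preference literature:
assert not is_transitive({("a", "b"), ("b", "c"), ("c", "a")})
```

Whether real choice data pass such a check, and how apparent violations should be interpreted, is exactly what the debate cited above concerns.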
<h2>6. Conclusions</h2>
<p>Given length constraints, this article concludes by proposing that future discussion should explore the following methodological issues regarding the incorrect treatment of different versions of the Likert scale as interval scales:</p>
<ul>
<li>the violation of the principle of equal intervals, which results from the principle of measurement isomorphism/homomorphism (especially “at the extremes” of the Likert scale, e.g. comparing distances 1–2 and 6–7);</li>
<li>the validity of applying Thurstone’s method of successive interval scaling and other transformational procedures to Likert scales;</li>
<li>the degree of suppression of Pearson correlation coefficients when calculated for Likert scales and the size of this suppression depending on the number of points – notably, 5–7 point scales are relatively resistant to the suppression effect;</li>
<li>alternative measures and methods for analyzing multi-item Likert scales, such as using polychoric correlation coefficients instead of Pearson’s in the analysis of data with Likert scales (Sagan, 2014).</li>
</ul>
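The attenuation (“suppression”) of Pearson’s r under Likert-type discretization, raised in the list above, can be illustrated by simulation. The sketch below uses invented parameters (a latent bivariate normal model with true correlation 0.7, cut into five categories at equally spaced thresholds); it does not reproduce any cited analysis.

```python
import math
import random

random.seed(42)

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def to_likert(z, cuts=(-1.5, -0.5, 0.5, 1.5)):
    """Map a latent continuous score to a 1-5 Likert response."""
    return 1 + sum(z > c for c in cuts)

RHO, N = 0.7, 20_000  # illustrative true correlation and sample size
x = [random.gauss(0, 1) for _ in range(N)]
y = [RHO * a + math.sqrt(1 - RHO ** 2) * random.gauss(0, 1) for a in x]

r_latent = pearson(x, y)                        # close to 0.7
r_likert = pearson([to_likert(a) for a in x],
                   [to_likert(b) for b in y])   # visibly attenuated
```

With five categories the attenuation is real but modest, which is consistent with the remark above that 5–7 point scales are relatively resistant to the suppression effect; coarser discretizations attenuate more.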
<p>The problem discussed herein is likely to become even more complex with the development of AI, machine learning, data science, and big data, because data scientists perform computational analyses but are often not involved in collecting the data or in deciding how it is represented. They may therefore lack information about the empirical mathematical properties of the object of study, the evidence supporting the mathematization, and the set of numbers used. Moreover, the programming languages they use may or may not allow data to be classified by its set of numbers, or may fail to restrict the mathematical operations performed on the data according to its type. This encourages the treatment of all numbers as real, reducing the validity of the empirical conclusions drawn from the research process.</p>
<h2>References</h2>
<p>Adams, E. W., Fagot, R. F., &amp; Robinson, R. E. (1965). A theory of appropriate statistics. <em>Psychometrika, 30</em>(1), 99–127.</p>
<p>Białas, S. (1999). <em>Metrologia techniczna z podstawami tolerowania wielkości geometrycznych dla mechaników</em> [Technical metrology with fundamentals of geometric dimensioning for mechanics]. Oficyna Wydawnicza Politechniki Warszawskiej.</p>
<p>Blaikie, N. (2003). <em>Analyzing quantitative data.</em> SAGE Publications.</p>
<p>Bleichrodt, H., &amp; Wakker, P. (2015). Regret theory: A bold alternative to the alternatives. <em>The Economic Journal, 125</em>(583), 493–532.</p>
<p>Bligh, J. (2004). Ring the changes: Some resolutions for the new year and beyond. <em>Medical Education, 38</em>(1), 2–4.</p>
<p>Bryman, A. (1988). <em>Quantity and quality in social research</em> (1st ed.). Routledge. https://doi.org/10.4324/9780203410028</p>
<p>Bryman, A. (2005). <em>Research methods and organization studies.</em> Routledge.</p>
<p>Burke, C. J. (1953). Additive scales and statistics. <em>Psychological Review, 60</em>(1), 73–75.</p>
<p>Carifio, J., &amp; Perla, R. (2007). Ten common misunderstandings, misconceptions, persistent myths, and urban legends about Likert scales and Likert response formats and their antidotes. <em>Journal of Social Sciences, 3</em>(2), 106–116.</p>
<p>Churchill, G. A. (2002). <em>Badania marketingowe. Podstawy metodologiczne</em> [Marketing research: Methodological foundations]. Wydawnictwo Naukowe PWN.</p>
<p>Clegg, F. (1998). <em>Simple statistics.</em> Cambridge University Press.</p>
<p>Cohen, L., Manion, L., &amp; Morrison, K. (2000). <em>Research methods in education</em> (5th ed.). Routledge Falmer. https://doi.org/10.4324/9780203224342</p>
<p>Devine, F. (2006). Metody jakościowe [Qualitative methods]. In D. Marsh &amp; G. Stoker (Eds.), <em>Teorie i metody w naukach politycznych</em> [Theories and methods in political sciences] (pp. 197–200).</p>
<p>Diamantopoulos, A., &amp; Winklhofer, H. M. (2001). Index construction with formative indicators: An alternative to scale development. <em>Journal of Marketing Research, 38</em>(2), 269–277.</p>
<p>Escher, I. (2010). Pomiar kierunku i siły marketingowej postawy pracownika – kompromis pomiędzy teorią a praktyką marketingową [Measuring the direction and strength of employee marketing attitudes – a compromise between theory and marketing practice]. <em>Acta Universitatis Nicolai Copernici, Ekonomia, 41</em>(397), 159–174.</p>
<p>Fornell, C., &amp; Bookstein, F. (1982). Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. <em>Journal of Marketing Research, 19</em>(4), 440–452.</p>
<p>Francuz, P., &amp; Mackiewicz, R. (2007). <em>Liczby nie wiedzą skąd pochodzą. Przewodnik po metodologii i statystyce nie tylko dla psychologów</em> [Numbers don’t know where they come from. A guide to methodology and statistics not only for psychologists]. Redakcja Wydawnictw Katolickiego Uniwersytetu Lubelskiego.</p>
<p>Frankfort-Nachmias, C., &amp; Nachmias, D. (2001). <em>Metody badawcze w naukach społecznych</em> [Research methods in social sciences]. Zysk i S-ka.</p>
<p>Glass, G. V., Peckham, P. D., &amp; Sanders, J. R. (1972). Consequences of failure to meet assumptions underlying the fixed effects analyses of variance and covariance. <em>Review of Educational Research, 42</em>(3), 237–288.</p>
<p>Główny Urząd Statystyczny. (n.d.). Pojęcia stosowane w statystyce publicznej [Concepts used in public statistics]. Główny Urząd Statystyczny. Retrieved March 1, 2025, from https://stat.gov.pl/metainformacje/slownik-pojec/pojecia-stosowane-w-statystyce-publicznej/2924,pojecie.html</p>
<p>Hansen, J. P. (2003). CAN’T MISS – Conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation. <em>Journal of Healthcare Quality, 25</em>(4), 19–24.</p>
<p>Jamieson, S. (2004). Likert scales: How to (ab)use them. <em>Medical Education, 38</em>(12), 1217–1218. https://doi.org/10.1111/j.1365-2929.2004.02012.x</p>
<p>Jezior, J. (2013). Metodologiczne problemy zastosowania skali Likerta w badaniach postaw wobec bezrobocia [Methodological problems of using the Likert scale in research on attitudes towards unemployment]. <em>Przegląd Socjologiczny, 62</em>(1), 117–138.</p>
<p>Johnson, H. M. (1936). Pseudo-mathematics in the mental and social sciences. <em>American Journal of Psychology, 48</em>(3), 342–351.</p>
<p>Kaczmarek, M., &amp; Tarka, P. (2013). Metoda gromadzenia danych a ekwiwalencja wyników pomiaru systemu wartości w 5- i 7-stopniowych skalach ratingowych Likerta [Data collection method and equivalence of value system measurement results in 5- and 7-point Likert rating scales]. <em>Handel Wewnętrzny, 5</em>(346), 42–56.</p>
<p>Kaczmarczyk, S. (2014). <em>Badania marketingowe. Podstawy metodyczne</em> [Marketing research: Methodological foundations]. PWE.</p>
<p>Kampen, J., &amp; Swyngedouw, M. (2000). The ordinal controversy revisited. <em>Quality and Quantity, 34</em>(1), 87–102.</p>
<p>Kero, P., &amp; Lee, D. (2016). Likert is pronounced ‘LICK-urt’ not ‘LIE-kurt’ and the data are ordinal not interval. <em>Journal of Applied Measurement, 17</em>(4), 502–509.</p>
<p>Knapp, T. R. (1990). Treating ordinal scales as interval scales: An attempt to resolve the controversy. <em>Nursing Research, 39</em>(2), 121–123.</p>
<p>Krajewski, W. (1977). <em>Correspondence principle and growth of science.</em> Reidel.</p>
<p>Kuzon, W. M., Urbanchek, M. G., &amp; McCabe, S. (1996). The seven deadly sins of statistical analysis. <em>Annals of Plastic Surgery, 37</em>(3), 265–272.</p>
<p>Lieberson, S. (1964). Limitations in the application of non-parametric coefficients of correlation. <em>American Sociological Review, 29</em>(5), 744–746.</p>
<p>Likert, R. (1932). A technique for the measurement of attitudes. <em>Archives of Psychology, 22</em>(140), 55.</p>
<p>Lissowski, G., Haman, J., &amp; Jasiński, M. (2008). <em>Podstawy statystyki dla socjologów</em> [Fundamentals of statistics for sociologists]. Wydawnictwo Naukowe Scholar.</p>
<p>Mann, H. B., &amp; Whitney, D. R. (1947). On a test of whether one of two random variables is stochastically larger than the other. <em>Annals of Mathematical Statistics, 18</em>, 50–60. https://doi.org/10.1214/aoms/1177730491</p>
<p>Mayntz, R., Holm, K., &amp; Hubner, P. (1985). <em>Wprowadzenie do metod socjologii empirycznej</em> [Introduction to methods of empirical sociology]. PWN.</p>
<p>Michell, J. (1986). Measurement scales and statistics: A clash of paradigms. <em>Psychological Bulletin, 100</em>(3), 398–407.</p>
<p>Myers, J. L., &amp; Well, A. D. (2003). <em>Research design and statistical analysis</em> (2nd ed.). Lawrence Erlbaum Associates.</p>
<p>Nowak, S. (1973). <em>Teorie postaw</em> [Theories of attitudes]. Państwowe Wydawnictwo Naukowe.</p>
<p>Nowak, S. (1985). <em>Metodologia badań społecznych</em> [Methodology of social research]. Państwowe Wydawnictwo Naukowe.</p>
<p>Pearson, K. (1909). On a new method of determining the correlation between a measured character A and a character B. <em>Biometrika, 7</em>, 96–105.</p>
<p>Pett, M. A. (1997). <em>Nonparametric statistics for health care research.</em> SAGE Publications.</p>
<p>Regenwetter, M., &amp; Dana, J. (2011). Transitivity of preferences. <em>Psychological Review, 118</em>(1), 42–56.</p>
<p>Sagan, A. (2003). Skale i indeksy jako narzędzia pomiaru w badaniach marketingowych [Scales and indices as measurement tools in marketing research]. <em>Zeszyty Naukowe / Akademia Ekonomiczna w Krakowie, 640</em>, 21–36.</p>
<p>Sagan, A. (2014). <em>Wprowadzenie do modelowania zjawisk społecznych i przykłady zastosowań w Statistica</em> [Introduction to modeling social phenomena and examples of applications in Statistica]. StatSoft Polska.</p>
<p>Santina, M., &amp; Perez, J. (2003). Health professionals’ sex and attitudes of health science students to health claims. <em>Medical Education, 37</em>(6), 509–513.</p>
<p>Siegel, S. (1956). <em>Nonparametric statistics for the behavioral sciences.</em> McGraw-Hill.</p>
<p>Sobczyk, M. (2007). <em>Statystyka</em> [Statistics]. Wydawnictwo Naukowe PWN.</p>
<p>Steczkowski, J., &amp; Zeliaś, A. (1981). <em>Statystyczne metody analizy cech jakościowych</em> [Statistical methods of qualitative trait analysis]. PWE.</p>
<p>Stevens, S. S. (1946). On the theory of scales of measurement. <em>Science, 103</em>(2684), 677–680.</p>
<p>Stevens, S. S. (1951). Mathematics, measurement and psychophysics. In S. S. Stevens (Ed.), <em>Handbook of experimental psychology</em> (pp. 1–49). John Wiley &amp; Sons.</p>
<p>Stevens, S. S. (1959). Measurement, psychophysics and utility. In C. W. Churchman &amp; P. Ratoosh (Eds.), <em>Measurement; definitions and theories</em> (pp. 18–61). Wiley.</p>
<p>Szewczak, W. (2010). Jak zmierzyć demokrację? Teoretyczne i metodologiczne podstawy budowy skal demokracji politycznej w politologii porównawczej [How to measure democracy? Theoretical and methodological foundations for constructing democracy scales in comparative political science]. <em>Przegląd Politologiczny, 4</em>, 98–100.</p>
<p>Thomas, M. A. (2019). Mathematization, not measurement: A critique of Stevens’ scales of measurement. <em>Journal of Methods and Measurement in the Social Sciences, 10</em>(2), 76–94.</p>
<p>Thurstone, L. L., &amp; Chave, E. J. (1930). Theory of attitude measurement. In L. L. Thurstone &amp; E. J. Chave (Eds.), <em>The measurement of attitude</em> (pp. 1–21). University of Chicago Press.</p>
<p>Townsend, J. T., &amp; Ashby, F. G. (1984). Measurement scales and statistics: The misconception misconceived. <em>Psychological Bulletin, 96</em>(3), 394–401.</p>
<p>Walesiak, M. (1993). Statystyczna analiza wielowymiarowa w badaniach marketingowych [Multivariate statistical analysis in marketing research]. <em>Prace Naukowe Akademii Ekonomicznej we Wrocławiu, 654</em>.</p>
<p>Walesiak, M. (1996). <em>Metody analizy danych marketingowych</em> [Methods of marketing data analysis]. Wydawnictwo Naukowe PWN.</p>
<p>Walesiak, M. (2014). Wzmacnianie skali pomiaru dla danych porządkowych w statystycznej analizie wielowymiarowej [Strengthening measurement scales for ordinal data in multivariate statistical analysis]. <em>Prace Naukowe Uniwersytetu Ekonomicznego We Wrocławiu, 327</em>, 60–68.</p>
<p>Wiktorowicz, J., Grzelak, M. M., &amp; Grzeszkiewicz-Radulska, K. (2020). <em>Analiza statystyczna z IBM SPSS Statistics</em> [Statistical analysis with IBM SPSS Statistics]. Wydawnictwo Uniwersytetu Łódzkiego. https://doi.org/10.18778/8220-387-5</p>
<p>Wiśniewski, J. W. (1987). Teoria pomiaru a teoria błędów w badaniach statystycznych [Measurement theory and error theory in statistical research]. <em>Wiadomości Statystyczne, 11</em>, 18–20.</p>
<p>Zeller, R. A., &amp; Carmines, E. G. (1980). Measurement in the social sciences: The link between theory and data. <em>American Political Science Review, 76</em>(4), 996–1008.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Three root causes for the impasse in reputation measurement for higher education institutions</title>
		<link>https://minib.pl/en/numer/no-2-2024/three-root-causes-for-the-impasse-in-reputation-measuremant-for-higher-education-institutions/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Fri, 29 Mar 2024 09:30:55 +0000</pubDate>
				<category><![CDATA[higher education]]></category>
		<category><![CDATA[higher education management]]></category>
		<category><![CDATA[measurement]]></category>
		<category><![CDATA[monitoring systems]]></category>
		<category><![CDATA[reputation]]></category>
		<guid isPermaLink="false">https://minib.pl/?post_type=numer&#038;p=7994</guid>

					<description><![CDATA[I. The relevance of measuring reputation for the management of Higher Education Institutions Today’s Higher Education Institutions (HEIs) operate in an increasingly competitive landscape (Garcia-Rodriguez &#38; Gutiérrez-Taño, 2021). Factors such as deregulation, globalization of educational markets, and rising student mobility are contributing to intensified competition among HEIs worldwide. Notably, traditional HEIs from established educational-supplier countries...]]></description>
										<content:encoded><![CDATA[<h2>I. The relevance of measuring reputation for the management of Higher Education Institutions</h2>
<p>Today’s Higher Education Institutions (HEIs) operate in an increasingly competitive landscape (Garcia-Rodriguez &amp; Gutiérrez-Taño, 2021). Factors such as deregulation, globalization of educational markets, and rising student mobility are contributing to intensified competition among HEIs worldwide. Notably, traditional HEIs from established educational-supplier countries are facing challenges from new rivals, including institutions in Asia and South America that cater to fee-paying international students (Manzoor et al., 2021). Furthermore, different types of HEIs – varying in structural conditions – compete within the same international marketplace for higher education (e.g. Elken &amp; Rosdal, 2017). This competition is particularly evident when comparing public and private universities.</p>
<p>The changing market dynamics coincide with the adoption of new public management practices in public HEIs. Developed in the 1980s, new public management offers an alternative framework for more efficient governance of public organizations (e.g., Broucker et al., 2016). This movement is observed in a number of countries. Particularly in Europe, these shifts reflect broader public sector reforms (de Boer et al., 2007). Notably, funding priorities within Higher Education Institutions are transitioning away from public sources toward non-public funding.</p>
<p>This transformation can be seen, as Wedlin (2008) suggests, as a result of university marketization. HEIs therefore face growing demands for external accountability. Consequently, monitoring practices – originally developed for corporate management – are increasingly being adopted in higher education environments (e.g., Engwall, 2008; Kethüda, 2023).</p>
<p>A pivotal construct for assessing HEI outcomes is reputation. Reputation serves as a signal of educational and scientific quality, influencing university evaluation and prospective student selection (Hemsley-Brown, 2012; Munisamy et al., 2014). From an institutional economics perspective, reputation’s signaling quality arises because educational and scientific quality cannot be fully evaluated until experienced (Suomi et al., 2014). Following the argument of Plewa et al. (2016), it is precisely this quality that makes reputation a key concept in HEI management in competitive situations.</p>
<p>When applied to HEIs, a good reputation can be interpreted as a long-term expression of the performance and the perceptions of an HEI by its many stakeholders. A good reputation will be related to instilling trust (e.g., Dass et al., 2021), accessing financial support more easily, attracting a higher number of top-quality students, or being of interest for the best researchers, teachers, and administration experts. Studies from a corporate reputation context (e.g., Eberl &amp; Schwaiger, 2005; Fombrun &amp; Shanley, 1990; Sabate &amp; Puente, 2003) have revealed a correlation between reputation and financial success, discussing the positive impact of reputation on diverse business goals. Consequently, reputation is appreciated as an intangible asset of organizations, the importance of which is even increasing in business valuation (Brønn, 2008; BrandFinance, 2019).</p>
<p>In today’s marketized landscape, professional monitoring is crucial for managing and marketing HEIs. However, adequate and accepted reputation measures are a prerequisite for effective monitoring. Although reputation has been extensively studied in corporate research, measuring the reputation of HEIs remains underexplored: while many studies discuss the various facets of reputation measurement for corporations (e.g., Alcaide-Pulido &amp; Gutiérrez-Villar, 2017; Chun, 2005; Walker, 2010), deplorably little research has focused on how reputation in HEI contexts should be measured and monitored.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-7959" src="https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-2_f-1.jpg" alt="" width="1776" height="809" srcset="https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-2_f-1.jpg 1776w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-2_f-1-300x137.jpg 300w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-2_f-1-1024x466.jpg 1024w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-2_f-1-768x350.jpg 768w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-2_f-1-1536x700.jpg 1536w, https://minib.pl/wp-content/uploads/2024/03/MINIB-2024_2-2_f-1-1320x601.jpg 1320w" sizes="(max-width: 1776px) 100vw, 1776px" /></p>
<h2>II. Capturing reputation in HEI contexts – the lack of a feasible approach</h2>
<p>The following sections outline three possible explanations for the logjam in research progress on measuring HEI reputation, which results in a lack of established reputation monitoring systems in HEIs. Figure 1 provides a summary of the argument.</p>
<h2>IIa. The reputation construct is not clear, either in terms of its definition or in terms of its dimensionality</h2>
<p>The concept of reputation, extensively examined in business contexts, has been approached from diverse perspectives (e.g., Fombrun &amp; van Riel, 2003; Chun, 2005). In one approach, rooted in Fombrun’s (1996) seminal work, reputation is often defined as the collective perception of an organization held by its stakeholders, shaped by their interactions and received communications (Fombrun &amp; Shanley, 1990; Walker, 2010). Alternatively, reputation may be conceptualized as stakeholders’ assessments of the organization’s ability to meet expectations (e.g., Fombrun &amp; van Riel, 2003), as the collective beliefs about an organization’s identity and prominence (Rao, 1994), or as a set of beliefs encompassing the organization’s capacities, intentions, history, and mission (Carpenter, 2010). Bromley (1993, 2002) and Grunig and Hung (2002) emphasize reputation as shared beliefs among social groups, while Deephouse (1997, 2000) underscores its relation to media visibility and favorability. Gotsi and Wilson (2001) define reputation as stakeholders’ overall evaluation of a company over time, informed by direct experiences or other forms of communication, which is rather similar to Grunig and Hung’s (2002) approach. Note that while reputation and image are intertwined, they represent distinct theoretical constructs (Alcaide-Pulido &amp; Gutiérrez-Villar, 2017; Manzoor et al., 2021).</p>
<p>The discourse surrounding reputation, both intricate and fundamental, has engendered debates about its dimensional structure (which need to be seen as intertwined with the challenges of defining and measuring reputation). Prominent contributions and viewpoints can be compiled as follows:</p>
<ul>
<li>Lange et al. (2011) sum up studies and essays which conceptualize reputation in a uni-dimensional way, resulting in three dominant directions: Reputation consisting of familiarity with the organization, reputation consisting of beliefs about what to expect from the organization in the future, and reputation consisting of impressions about the organization’s favorability.</li>
<li>Examining the application of organizational reputation to public administration, Carpenter (2010) distinguishes four dimensions. The performative dimension is related to stakeholders’ perceptions and evaluation of whether an organization delivers outputs that comply with its mission and activities (also Chapleo et al., 2011). In a way, this dimension refers to aspects of effectiveness and efficiency. The procedural dimension deals with the appropriateness of an organization’s procedural and legal requirements in its decision making. The technical dimension, in turn, draws attention to the knowledge and competencies within the organization which are necessary to handle complex tasks and changing environments. Finally, the moral dimension of reputation refers to the stakeholders’ perceptions of whether an organization is honest, humane, and even emotionally appealing (Carpenter &amp; Krause, 2012). These dimensions also reflect evaluations of whether an organization protects the interests of stakeholders and members. As these various authors point out, reputation should not be seen as one-dimensional; however, one consequence of a multi-dimensional construct is that striving to boost one dimension implies that another will most likely suffer: trade-offs are inherent. Consequently, organizations will strive to prioritize certain forms of reputation, e.g., the performative reputation, over others. To further add to the complication, different parts of the organization may favor and support different dimensions of reputation.</li>
<li>Fombrun and colleagues have developed a reputation definition for a marketing context, using a more specific operationalization (Fombrun, 1996; Fombrun et al., 2000; Fombrun &amp; van Riel, 2003). Their work, in turn, has served as one basis for the approach proposed by Eisenegger and Imhof (2009). They crafted a three-dimensional reputation approach by combining the reputation concept of Eberl and Schwaiger (2005) (who distinguished cognitive and affective reputations) with a normative dimension. The derived framework includes a functional, a social and an expressive dimension. The functional reputation refers to success and competence of the actor that can be expressed through key figures or ratings (Eisenegger &amp; Imhof, 2009). On top of this, a successful organization needs to adhere to the norms and values of society; this denotes the social reputation. While the first two dimensions focus more on the outside world, the individual world of the actor itself becomes the object of the expressive dimension. In other words, the emotional attractiveness, authenticity, or uniqueness of the organization is reflected in this third dimension. According to Eisenegger (2004, 2005, 2009), the three-dimensional reputation concept should have universal applicability and should be relevant to all types of organizations.</li>
<li>Agarwal et al. (2015) propose as many as six dimensions – basing reputation on stakeholders’ perceptions of quality level, vision, workplace, responsibility, financial performance, and emotional appeal.</li>
<li>The discussion about the dimensionality of reputation, on the one hand, and the debate about inherent trade-offs between them as advanced by Carpenter (2010) and Carpenter and Krause (2012), on the other, both lend support to the idea of different sub-types of reputation as we have presented in Morschheuser and Redler (2015). Building on the notion of scientific organizations as multi-sectional organizations, we have argued that scientific organizations’ reputations should be understood as being composed of four sub-reputations: reputation of administration (stakeholders’ perceptions about administrative performance), reputation of research (stakeholders’ perceptions about research performance), reputation of transfer (stakeholders’ perceptions about transfer performance) and reputation of teaching (stakeholders’ perceptions about teaching performance).</li>
</ul>
<p>In summary, various conceptualizations of the reputation construct have been discussed, all of them having their origins in the study of business. Nevertheless, no consistent understanding of the dimensional structure of this phenomenon has yet emerged from this research.</p>
<h2>IIb. There are many unresolved discussions regarding the appropriate measurement methods to apply to HEI reputation</h2>
<p>The preceding discussion highlights that reputation is linked to a number of factors, which presents challenges for its measurement and monitoring. One challenge lies in accurately measuring each factor, while another involves linking them to indicators of the overarching construct. A review of the literature on appropriate quantitative measurement design reveals several ambiguities and trade-offs, likely reflecting more general disputes. These ambiguities cause significant headaches for reputation managers, who struggle to design appropriate reputation management tools, and they often lead to initiatives going round in circles. The main ambiguities are as follows.</p>
<p>Subjective vs. objective measures: There is a debate about whether to use measures based on “objective” data or ones based on “subjective” data (e.g., Siefke, 1998). The first approach relies on intersubjectively verifiable measures and assumes that reputation can be measured using neutral third parties or objective, external indicators that are not subject to distorted perceptions. Examples include the use of figures and performance indicators or observational data. In contrast, methods based on subjective data accept intersubjectively different perceptions and apply measures that can capture these. Scale-based measures, incident-based procedures, and problem-based assessments are examples of this type of approach.</p>
<p>Formative vs. reflective measurement: The formative vs. reflective measurement controversy concerns how the factors within reputation are combined and whether they are a cause or a result of the construct under investigation. Helm (2005, p. 96) points out that “most researchers assume a reflective relationship, meaning that the observed latent variable is assumed to be a construct of all its indicators.” According to this reflective perspective, the observable factors change as the latent variable (e.g., reputation) changes – they “reflect” the latent variable (an “eliciting variable,” Rossiter, 2002). Formative measurement takes a different perspective. In a formative view, the factors cause the latent variable, they “form” it (a “formed variable,” Rossiter, 2002; Diamantopoulos et al., 2008). While Agarwal et al. (2015, p. 448) conceptualize (corporate) reputation as a reflective construct, the many indices or rankings are expressions of the formative approach. As Helm (2005) reminds us, these are “classic examples of formative construct conceptualisation” and, as is well known, they are often used to express reputation. A more recent overview of the reflective-formative debate can be found in Fleuren et al. (2018).</p>
<p>Measures for first-order vs. second-order constructs: Agarwal et al. (2015) discuss, among other things, whether reputation is a first-order or a second-order construct. While a first-order construct has observable variables as indicators, a second-order structure implies that the original construct (here: reputation) is an unobservable (latent) variable and has other latent variables as indicators. There seem to be theoretical grounds and empirical evidence for considering reputation as a second-order construct based on individual measurement dimensions (Agarwal et al., 2015). Similar paths, but for different objects of reputation, are outlined in papers by Dong et al. (2019) and Walsh and Beatty (2007). Although coming from different contexts, papers by Danneels (2016) and Potter (1991), for example, provide a deeper insight into the specifics of measuring first- or second-order constructs.</p>
<p>Single-item vs. multi-item measures: Theoretical considerations have also focused on whether a construct (such as reputation) should be measured by a single-item or multi-item measure. While single-item scales use only one item (question or indicator) to capture a construct, multi-item measures use a variety of items to assess the empirical situation of a construct. Today, the use of multi-item scales seems to be the standard in academic research. However, the conventional wisdom (in marketing research) has been challenged by Bergkvist and Rossiter (2007), referring to ideas of the C-OAR-SE procedure proposed by Rossiter (2002). In general, single-item measurement is discussed because of several advantages (see Sarstedt &amp; Wilczynski, 2009, or Bowling, 2005, for an overview), such as higher response rates, simplicity, increased flexibility, or lower costs. However, as Sarstedt and Wilczynski (2009) point out, the arguments in favor of single item measures apply only to reflective measures, as classical psychometric performance criteria are not applicable to formative constructs. On the other hand, there are convincing arguments in favor of multi-item measures (e.g. Sarstedt &amp; Wilczynski, 2009, for a review), such as increased reliability, higher construct validity or better predictive validity (e.g. Diamantopoulos et al., 2012). For the higher education context, Svensson (2008) examines the measures underlying scientific journal rankings and finds that these rankings are largely based on single-item measures (e.g., expert perceptions or citations) and therefore fail to provide estimates of psychometric quality such as reliability or validity. Consequently, he recommends the use of broader approaches based on multi-item measures.</p>
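The reliability argument for multi-item measures can be made concrete with the Spearman–Brown prophecy formula, a standard psychometric result rather than something drawn from the works cited above; the starting reliability of 0.60 below is an invented example value.

```python
def spearman_brown(r, k):
    """Predicted reliability of a test lengthened by a factor of k,
    given current reliability r (Spearman-Brown prophecy formula):
        r_k = k * r / (1 + (k - 1) * r)
    """
    return k * r / (1 + (k - 1) * r)

single = 0.60                          # invented single-item reliability
tripled = spearman_brown(single, 3)    # about 0.82 with three parallel items
halved = spearman_brown(single, 0.5)   # split-half direction: about 0.43
```

Lengthening a scale with parallel items raises its predicted reliability, which is one formal sense in which multi-item measures outperform single items; note that the formula assumes parallel items, an assumption that rarely holds exactly in practice.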
<p>In summary, notable sub-questions of the overall measurement problem are still left unanswered by measurement and scaling theory; rather, the discussion points to several forks in the road at which a direction must be chosen if a solution for measuring HEI reputation is to be derived. This is another reason why ideas about what might provide a sound solution for measuring HEI reputation have not yet been established.</p>
<h2>IIc. Traditional reputation measurement is poorly tailored to the distinctive nature of HEIs</h2>
<p>Many authors understand HEIs as any institutions involved in higher education, which includes all educational institutions authorized to provide two to three years of post-secondary education (e.g. World Conference on Higher Education, 1998; McCaffery, 2019). When discussing educational strategies, Pucciarelli and Kaplan (2016) highlight that universities (as HEIs) have three basic missions: teaching, research and public service, which have always been in conflict. HEIs can take many forms, for example they can be public vs. private, non-profit vs. for-profit, specialized in particular disciplines vs. very broad, focused on research vs. teaching vs. both. HEIs produce teaching, research and transfer outputs. In Morschheuser and Redler (2015), we discussed HEIs as a subtype of scientific organizations, which we define as tetra-sectional social systems that act in a goal-oriented way, produce knowledge or know-how, use and defend scientific methods, share their insights and ways of research with the public for the purpose of discussion, quality control and stimulation of further research, and are embedded in a complex network of stakeholders.</p>
<p>It is important to note that HEIs differ from corporate organizations in a number of significant ways. For instance, the literature has identified certain unique characteristics of HEIs:</p>
<ul>
<li>Telem (1981, p. 581) emphasizes that HEIs are large organizations that deal with thousands of students across various academic levels and programs. HEIs have hundreds of faculty members and administrative staff, numerous buildings, significant financial turnover, and a variety of research programs. This description highlights the size of the organization and the various stakeholders involved, as well as the complexity that characterizes HEIs, including the notion of multiple interrelated subsystems. Barnett (2015) illustrates that HEIs face super-complexity on several levels. Therefore, it is not surprising that HEIs are often described as one of the most complex organizational forms (Austin &amp; Jones, 2015). Cohen et al. (1972) and Cohen and March (1974) have described HEIs as &#8220;organized anarchy,&#8221; a term also acknowledged by Perkins (1973). For instance, no one holds absolute authority in a typical HEI.</li>
<li>A related idea is that of HEIs as “loosely coupled systems” (Weick, 1976, p. 1).</li>
<li>HEIs have a normative character, and Birnbaum (1988) emphasizes the typical roles of referent and expert power. According to Birnbaum, HEIs are unique organizations because they have little specialization of work but much specialization of expertise, a comparatively flat hierarchy, and a less visible role performance.</li>
<li>Birnbaum (1988) and Perkins (1973) both argue that accountability for cause and effect is low in HEIs.</li>
<li>HEIs have been described as tetra-sectional, integrating four distinct organizations into one. This concept is supported by research from Barnett (2003) and Kerr (1972), who both propose the idea of an internally fragmented “multiversity.”</li>
</ul>
<p>These arguments support the conclusion that HEIs cannot easily be compared to business organizations. Furthermore, there are issues with the market-related assumptions that have been implied in corporate reputation research. While firms operate within a market system, HEIs do not. Firms interact with a market in which supply and demand are the main forces; their interplay creates efficient solutions for all parties involved, resulting in the creation of value (Sheth &amp; Uslay, 2007).</p>
<p>The discussion of reputation in the context of HEIs has so far left certain aspects insufficiently considered – such as measuring and monitoring issues. Additionally, the current theory on reputation in HEIs is not specific and instead refers to reputation as it originated from business research (as we have noted, the conceptualizations of reputation outlined above are largely from business backgrounds). Returning to the key authors who have worked on defining reputation, such as Fombrun (1996), Fombrun and Shanley (1990), Walker (2010), or Fombrun and van Riel (2003), it is clear that their focus is on enterprises. As research has not yet addressed whether the concept of reputation and its related measurement solutions can be adequately applied to HEIs, or what adaptations may be necessary, this remains an important area for future investigation.</p>
<h2>III. The current landscape: a scarcity of research on measuring HEI reputation</h2>
<p>As an interim summary of the above discussion, it appears that the issue of measuring HEI reputation has not yet gained much attention. Only a limited number of tailored research contributions can be identified that examine HEI reputation at all.</p>
<p>Theus (1993), for instance, explores how university reputations develop and fade; moreover, she investigates attributes of reputation. Conard and Conard (2000) conducted a study with high school seniors to investigate the factors that contribute to a college&#8217;s reputation, including academic quality, career preparation, ethos, and exclusivity. Suomi&#8217;s (2014) findings emphasize the multidimensionality of the reputation construct in HEI backgrounds. The relationship between reputation and student loyalty has been investigated by various scholars, including Nguyen and Leblanc (2001) and García-Rodríguez and Gutiérrez-Taño (2021). Their studies have found that reputation has a positive effect on loyalty. Ressler and Abratt (2009) propose a framework for managing and testing university reputation. Vidaver-Cohen (2007) previously introduced a reputation model for higher education institutions, applying findings from reputation research to business schools. In certain academic papers, the perception of HEI reputation is often associated with branding concepts, including constructs such as university brands (Dass et al., 2021; Khoshtaria et al., 2020). Examples of this can be found in Chapleo (2004) and Balmer and Liao (2007).</p>
<h2>IV. Conclusions and call to action</h2>
<p>As we have sought to show in this paper, professional monitoring is crucial for managing the reputation of HEIs in today’s marketized landscape. However, to manage reputation effectively, it is necessary to measure it adequately. Nevertheless, as we have outlined above, there are serious problems with finding an acceptable and feasible way of measuring reputation that accounts for the specifics of HEIs.</p>
<p>Three main reasons have been highlighted to explain this situation: (a) that notable sub-questions of the overall measurement problems are still left unanswered, (b) that there are several, sometimes incommensurate, (construct-related) demands that need to be met for the same measurement task, and (c) that the generic challenges become further exacerbated when it comes to adapting these views more specifically to HEI reputation measurement and monitoring.</p>
<p>To resolve the resulting impasse and help facilitate discussion, several options might be worth considering (for a more detailed view see Redler &amp; Morschheuser, 2024). One option could be to advance both basic HEI theory and reputation theory. This stream could include a thorough evaluation of what reputation means in the context of HEIs, acknowledging that the object of reputation needs to play a more prominent role in monitoring issues. When considering HEIs, it is important to determine which characteristics are relevant and how they can be measured accurately.</p>
<p>Another approach might be to concentrate on practical solutions. To maintain traditional perspectives in reputation research and to make them valuable, it is necessary to be more direct and to assess the usefulness of proposed measurement guidelines in actual HEI management. This involves researchers stepping outside of their own narrow definitions and considering alternative perspectives on what constitutes appropriate measurement. Rather than producing more findings that add another small piece to the silo-based construction of reality, researchers need to devote time and resources to research that contributes to knowledge that has an impact on the reality of (HEI) managers. To do so, researchers may benefit from engaging with managers to analyze their views, understand their needs and behaviors, and use these insights to inform research strategies. Acceptance of more pragmatic solutions for measuring HEI reputation may be an outcome of engaging with the practical views of HEI managers. It should be noted that these solutions may not meet all requirements for optimal measurement from a theoretical perspective.</p>
<p>An alternative approach could be to focus on constructs that have a more widely accepted definition and valid measures, such as brand equity (Khoshtaria et al., 2020) or image (Alcaide-Pulido &amp; Gutiérrez-Villar, 2017), rather than relying on the reputation approach.</p>
<p>Finally, it may be worthwhile to explore the use of a scorecard to assess the reputation of an HEI. This approach shows potential, but it is crucial to first identify the HEI’s needs before determining the measurement dimensions, as suggested by Suomi (2014) or Nicolescu (2009). It may also be necessary to discard current constructs and their operationalizations and consider both qualitative and quantitative perspectives. Overall, there are valid reasons to use scorecard concepts as a starting point for new initiatives, particularly the Balanced Scorecard (BSC) developed by Kaplan and Norton (1996). As the originators of the scorecard view point out, “measurement was as fundamental to managers as it was for scientists” (Kaplan, 2009, p. 1253). The scorecard lens has an important advantage in that it approaches construct and measurement issues in a more integrative way.</p>
<h2>References</h2>
<p>Agarwal, J., Osiyevskyy, O., &amp; Feldman, P.M. (2015). Corporate reputation measurement: Alternative factor structures, nomological validity, and organizational outcomes.<em> Journal of Business Ethics, 130</em>(2), 485–506. https://doi.org/10.1007/s10551-014-2232-6</p>
<p>Alcaide-Pulido, P., Alves, H., &amp; Gutiérrez-Villar, B. (2017). Development of a model to analyze HEI image: A case based on a private and a public university. <em>Journal of Marketing for Higher Education, 27</em>(2), 162–187. https://doi.org/10.1080/08841241.2017.1388330</p>
<p>Austin, I., &amp; Jones, G. A. (2015). <em>Governance of higher education: Global perspectives, theories, and practices.</em> Routledge.</p>
<p>Baldridge, J. V., Curtis, D. V., Ecker, G., &amp; Riley, G. L. (1978). <em>Policy making and effective leadership: A national study of academic management.</em> Jossey-Bass.</p>
<p>Balmer, J. M. T., &amp; Liao, M. (2007). Student corporate brand identification: An exploratory case study. <em>Corporate Communications: An International Journal, 12</em>(4), 356–375. https://doi.org/10.1108/13563280710832515</p>
<p>Barnett, R. (2003). <em>Beyond all reason: Living with ideology in the university.</em> SRHE and Open University Press.</p>
<p>Barnett, R. (2015). <em>Understanding the university: Institution, idea, possibilities.</em> Routledge.</p>
<p>Bergkvist, L., &amp; Rossiter, J. (2007). The predictive validity of multiple-item versus single-item measures of the same constructs. <em>Journal of Marketing Research, 44</em>(2), 175–184. https://doi.org/10.1509/jmkr.44.2.175</p>
<p>Birnbaum, R. (1988). <em>How colleges work: The cybernetics of academic organization and leadership.</em> Jossey-Bass Publishers.</p>
<p>Bowling, A. (2005). Just one question: If one question works, why ask several? <em>Journal of Epidemiology &amp; Community Health, 59</em>(5), 342–345. https://doi.org/10.1136/jech.2004.021204</p>
<p>BrandFinance (Ed.) (2019). <em>Global intangible finance tracker (GIFT) – An annual review of the world’s intangible value.</em> Brand Finance.</p>
<p>Bromley, D. B. (1993). <em>Reputation, image and impression management.</em> John Wiley &amp; Sons.</p>
<p>Bromley, D. B. (2002). An examination of issues that complicate the concept of reputation in business studies. <em>International Studies of Management &amp; Organization, 32</em>(3), 65–81. https://www.jstor.org/stable/40397542</p>
<p>Broucker, B., De Wit, K., &amp; Leisyte, L. (2016). Higher education reform: A systematic comparison of ten countries from a new public management perspective. In R. Prichard, A. Pausitis, &amp; J. Williams (Eds.), <em>Positioning higher education institutions</em> (pp. 19–40). Brill.</p>
<p>Brønn, P. S. (2008). Intangible assets and communication. In A. Zerfass, B. van Ruler &amp; K. Sriramesh (Eds.), <em>Public relations research</em> (pp. 281–291). VS Verlag für Sozialwissenschaften.</p>
<p>Carpenter, D. (2010). <em>Reputation and power: Organizational image and pharmaceutical regulation at the FDA.</em> Princeton University Press.</p>
<p>Carpenter, D. P., &amp; Krause, G. A. (2012). Reputation and public administration. <em>Public Administration Review, 72</em>(1), 26–32. https://doi.org/10.1111/j.1540-6210.2011.02506.x</p>
<p>Chapleo, C. (2004). Interpretation and implementation of reputation/brand management by UK university leaders. <em>International Journal of Educational Advancement, 5</em>(1), 7–23. https://doi.org/10.1057/palgrave.ijea.2140201</p>
<p>Chapleo, C., Carrillo Durán, M. V., &amp; Castillo Díaz, A. (2011). Do UK universities communicate their brands effectively through their websites? <em>Journal of Marketing for Higher Education, 21</em>(1), 25–46. https://doi.org/10.1080/08841241.2011.569589</p>
<p>Chun, R. (2005). Corporate reputation: Meaning and measurement. <em>International Journal of Management Reviews, 7</em>(2), 91–109. https://doi.org/10.1111/j.1468-2370.2005.00109.x</p>
<p>Cohen, M. D., &amp; March, J. G. (1974). <em>Leadership and ambiguity: The American College President.</em> McGraw-Hill.</p>
<p>Cohen, M. D., March, J. G., &amp; Olsen, J. P. (1972). A garbage can model of organizational choice. <em>Administrative Science Quarterly, 17</em>(1), 1–25. https://doi.org/10.2307/2392088</p>
<p>Conard, M. J., &amp; Conard, M. A. (2000). An analysis of academic reputation as perceived by consumers of higher education. <em>Journal of Marketing for Higher Education, 9</em>(4), 69–80. https://doi.org/10.1300/J050v09n04_05</p>
<p>Dass, S., Popli, S., Sarkar, A., Sarkar, J. G., &amp; Vinay, M. (2021). Empirically examining the psychological mechanism of a loved and trusted business school brand. <em>Journal of Marketing for Higher Education, 31</em>(1), 23–40. https://doi.org/10.1080/08841241.2020.1742846</p>
<p>Danneels, E. (2016). Survey measures of first- and second-order competences. <em>Strategic Management Journal, 37</em>(10), 2174–2188. https://doi.org/10.1002/smj.2428</p>
<p>de Boer, H., Enders, J., &amp; Schimank, U. (2007). On the way towards new public management? The governance of university systems in England, the Netherlands, Austria, and Germany. In D. Jansen (Ed.), <em>New forms of governance in research organizations</em> (pp. 137–152). Springer.</p>
<p>Deephouse, D. L. (1997). Part IV – How do reputations affect corporate performance?: The effect of financial and media reputations on performance. <em>Corporate Reputation Review, 1</em>, 68–72. https://doi.org/10.1057/palgrave.crr.1540019</p>
<p>Deephouse, D. L. (2000). Media reputation as a strategic resource: An integration of mass communication and resource-based theories. <em>Journal of Management, 26</em>(6), 1091–1112. https://doi.org/10.1177/014920630002600602</p>
<p>Diamantopoulos, A., Riefler, P., &amp; Roth, K. P. (2008). Advancing formative measurement models. <em>Journal of Business Research, 61</em>(12), 1203–1218. https://doi.org/10.1016/j.jbusres.2008.01.009</p>
<p>Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P., &amp; Kaiser, S. (2012). Guidelines for choosing between multi-item and single-item scales for construct measurement: A predictive validity perspective. <em>Journal of the Academy of Marketing Science, 40</em>(3), 434–449. https://doi.org/10.1007/s11747-011-0300-3</p>
<p>Dong, Y., Sun, S., Xia, C., &amp; Perc, M. (2019). Second-order reputation promotes cooperation in the spatial prisoner’s dilemma game. <em>IEEE Access, 7</em>, 82532–82540. https://doi.org/10.1109/ACCESS.2019.2922200</p>
<p>Eberl, M., &amp; Schwaiger, M. (2005). Corporate reputation: Disentangling the effects on financial performance. <em>European Journal of Marketing, 39</em>(7–8), 838–854. https://doi.org/10.1108/03090560510601798</p>
<p>Eisenegger, M. (2004). Reputationskonstitution in der Mediengesellschaft. In K. Imhof, R. Blum, H. Bonfadelli, &amp; O. Jarren (Eds.), <em>Mediengesellschaft. Strukturen, Merkmale, Entwicklungsdynamiken</em> (pp. 262–292). Springer.</p>
<p>Eisenegger, M. (2005). <em>Reputation in der Mediengesellschaft – Konstitution, issues monitoring, issues management.</em> VS Verlag für Sozialwissenschaften.</p>
<p>Eisenegger, M. (2009). Trust and reputation in the age of globalisation. In J. Klewes &amp; R. Wreschniok (Eds.), <em>Reputation capital</em> (pp. 11–22). Springer.</p>
<p>Eisenegger, M., &amp; Imhof, K. (2009). Funktionale, soziale und expressive Reputation – Grundzüge einer Reputationstheorie. In U. Röttger (Ed.), <em>Theorien der Public Relations</em>, (pp. 243–264). VS Verlag für Sozialwissenschaften.</p>
<p>Elken, M., &amp; Rosdal, T. (2017). Professional higher education institutions as organizational actors. <em>Tertiary Education and Management, 23</em>, 376–387.</p>
<p>Engwall, L. (2008). The university: A multinational corporation? In L. Engwall &amp; D. Weaire (Eds.), <em>The university in the market</em> (pp. 9–21). Portland Press.</p>
<p>Fleuren, B. P., van Amelsvoort, L. G., Zijlstra, F. R., de Grip, A., &amp; Kant, I. (2018). Handling the reflective-formative measurement conundrum: A practical illustration based on sustainable employability. <em>Journal of Clinical Epidemiology, 103</em>, 71–81. https://doi.org/10.1016/j.jclinepi.2018.07.007</p>
<p>Fombrun, C. J. (1996). <em>Reputation. Realizing value from the corporate image.</em> Harvard Business School Press.</p>
<p>Fombrun, C. J., &amp; Shanley, M. (1990). What’s in a name? Reputation building and corporate strategy. <em>Academy of Management Journal, 33</em>(2), 233–258. https://doi.org/10.2307/256324</p>
<p>Fombrun, C. J., &amp; van Riel, C. (2003). <em>Fame &amp; fortune: How successful companies build winning reputations.</em> Pearson.</p>
<p>Fombrun, C. J., Gardberg, N. A., &amp; Sever, J. M. (2000). The reputation quotient SM: A multi-stakeholder measure of corporate reputation. <em>Journal of Brand Management, 7</em>(4), 241–255. https://doi.org/10.1057/bm.2000.10</p>
<p>García-Rodríguez, F. J., &amp; Gutiérrez-Taño, D. (2021). Loyalty to higher education institutions and the relationship with reputation: An integrated model with multi-stakeholder approach. <em>Journal of Marketing for Higher Education</em>, 1–23. https://doi.org/10.1080/08841241.2021.1975185</p>
<p>Gotsi, M., &amp; Wilson, A. M. (2001). Corporate reputation: Seeking a definition. <em>Corporate Communications: An International Journal, 6</em>(1), 24–30. https://doi.org/10.1108/13563280110381189</p>
<p>Grunig, J., &amp; Hung, C. (2002, March 8-10). <em>The effect of relationships on reputation and reputation on relationships: a cognitive, behavioral study</em> [Paper Presentation]. PRSA Educator’s Academy 5th Annual International Interdisciplinary Public Relations Research Conference, Miami.</p>
<p>Helm, S. (2005). Designing a formative measure for corporate reputation. <em>Corporate Reputation Review, 6</em>(2), 95–111. https://doi.org/10.1057/palgrave.crr.1540242</p>
<p>Hemsley-Brown, J. (2012). The best education in the world: Reality, repetition or cliché? International students’ reasons for choosing an English university. <em>Studies in Higher Education, 37</em>(8), 1005–1022. https://doi.org/10.1080/03075079.2011.562286</p>
<p>Kaplan, R. S. (2009). Conceptual foundations of the balanced scorecard. In C. S. Chapman, A. G. Hopwood &amp; M. D. Shields (Eds.), <em>Handbooks of management accounting research</em> (Vol. 3, pp. 1253–1269). Elsevier.</p>
<p>Kaplan, R. S., &amp; Norton, D. P. (1996).<em> The Balanced Scorecard: Translating Strategy Into Action.</em> Harvard Business School Press.</p>
<p>Kerr, C. (1972). <em>The uses of the university.</em> Cambridge.</p>
<p>Kethüda, Ö. (2023). Positioning strategies and rankings in the HE: congruence and contradictions. <em>Journal of Marketing for Higher Education, 33</em>(1), 97–123. https://doi.org/10.1080/08841241.2021.1892899</p>
<p>Khoshtaria, T., Datuashvili, D., &amp; Matin, A. (2020). The impact of brand equity dimensions on university reputation: an empirical study of Georgian higher education. <em>Journal of Marketing for Higher Education, 30</em>(2), 239–255. https://doi.org/10.1080/08841241.2020.1725955</p>
<p>Lange, D., Lee, P. M., &amp; Dai, Y. (2011). Organizational reputation: A review. <em>Journal of Management, 37</em>(1), 153–184. https://doi.org/10.1177/0149206310390963</p>
<p>Manzoor, S. R., Ho, J. S. Y., &amp; Al Mahmud, A. (2021). Revisiting the ‘university image model’ for higher education institutions’ sustainability. <em>Journal of Marketing for Higher Education, 31</em>(2), 220–239. https://doi.org/10.1080/08841241.2020.1781736</p>
<p>McCaffery, P. (2019). <em>The higher education manager’s handbook: Effective leadership and management in universities and colleges.</em> Routledge.</p>
<p>Morschheuser, P., &amp; Redler, J. (2015). Reputation management for scientific organisations — Framework development and exemplification. <em>Journal of Marketing of Scientific and Research Organizations (MINIB), 18</em>(4), 1–36. https://doi.org/10.14611/minib.18.04.2015.08</p>
<p>Munisamy, S., Jafaar, N. I. M., &amp; Nagaraj, S. (2014). Does reputation matter? Case study of undergraduate choice at a premier university. <em>Asia-Pacific Education Researcher, 23</em>(3), 451–462. https://doi.org/10.1007/s40299-013-0120-y</p>
<p>Nicolescu, L. (2009). Applying marketing to higher education: Scope and limits. <em>Management &amp; Marketing, 4</em>(2), 35–44.</p>
<p>Nguyen, N., &amp; Leblanc, G. (2001). Corporate image and corporate reputation in customers’ retention decisions in services. <em>Journal of Retailing and Consumer Services, 8</em>(4), 227–236. https://doi.org/10.1016/S0969-6989(00)00029-1</p>
<p>Perkins, J. A. (1973). <em>The university as an organization.</em> McGraw-Hill.</p>
<p>Plewa, C., Ho, J., Conduit, J., &amp; Karpen, I. O. (2016). Reputation in higher education: A fuzzy set analysis of resource configurations. <em>Journal of Business Research, 69</em>(8), 3087–3095. https://doi.org/10.1016/j.jbusres.2016.01.024</p>
<p>Potter, W. J. (1991). The relationships between first- and second-order measures of cultivation. <em>Human Communication Research, 18</em>(1), 92–113. https://doi.org/10.1111/j.1468-2958.1991.tb00530.x</p>
<p>Pucciarelli, F., &amp; Kaplan, A. (2016). Competition and strategy in higher education: Managing complexity and uncertainty. <em>Business Horizons, 59</em>(3), 311–320.</p>
<p>Rao, H. (1994). The social construction of reputation: Certification contests, legitimation, and the survival of organizations in the American automobile industry: 1895–1912. <em>Strategic Management Journal, 15</em>(S1), 29–44. https://doi.org/10.1002/smj.4250150904</p>
<p>Redler, J., &amp; Morschheuser, P. (2024). Somehow bogged down: Why current discussions on measuring HEI reputation go round in circles, and possible ways out. <em>Journal of Marketing for Higher Education</em>, 1–25. https://doi.org/10.1080/08841241.2024.2305637</p>
<p>Ressler, J., &amp; Abratt, R. (2009). Assessing the impact of university reputation on stakeholder intentions. <em>Journal of General Management, 35</em>(1), 35–45. https://doi.org/10.1177/030630700903500104</p>
<p>Rossiter, J. R. (2002). The C-OAR-SE procedure for scale development in marketing. <em>International Journal of Research in Marketing, 19</em>(4), 305–335. https://doi.org/10.1016/S0167-8116(02)00097-6</p>
<p>Sabate, J. M. de la Fuente, &amp; Puente, E. de Quevedo (2003). Empirical analysis of the relationship between corporate reputation and financial performance: A survey of the literature. <em>Corporate Reputation Review, 6</em>(2), 161–177. https://doi.org/10.1057/palgrave.crr.1540197</p>
<p>Sarstedt, M., &amp; Wilczynski, P. (2009). More for less? A comparison of single-item and multi-item measures. <em>Die Betriebswirtschaft, 69</em>(2), 211–227. https://www.researchgate.net/publication/281306739_More_for_Less_A_Comparison_of_Single-item_and_Multi-item_Measures</p>
<p>Sheth, J. N., &amp; Uslay, C. (2007). Implications of the revised definition of marketing: from exchange to value creation. <em>Journal of Public Policy &amp; Marketing, 26</em>(2), 302–307.</p>
<p>Siefke, A. (1998). <em>Zufriedenheit mit Dienstleistungen: ein phasenorientierter Ansatz zur Operationalisierung und Erklärung der Kundenzufriedenheit im Verkehrsbereich auf empirischer Basis.</em> Peter Lang.</p>
<p>Suomi, K. (2014). Exploring the dimensions of brand reputation in higher education – A case study of a Finnish master’s degree programme. <em>Journal of Higher Education Policy and Management, 36</em>(6), 646–660. https://doi.org/10.1080/1360080X.2014.957893</p>
<p>Suomi, K., Kuoppakangas, P., Hytti, U., Hampden-Turner, C., &amp; Kangaslahti, J. (2014). Focusing on dilemmas challenging reputation management in higher education. <em>International Journal of Educational Management, 28</em>(4), 461–478. https://doi.org/10.1108/IJEM-04-2013-0046</p>
<p>Svensson, G. (2008). Scholarly journal ranking(s) in marketing: Single- or multi-item measures? <em>Marketing Intelligence &amp; Planning, 26</em>(4), 340–352. https://doi.org/10.1108/02634500810879250</p>
<p>Telem, M. (1981). The institution of higher education – A functional perspective. <em>Higher Education, 10</em>(5), 581–596. https://doi.org/10.1007/BF01676903</p>
<p>Theus, K. T. (1993). Academic reputations: The process of formation and decay. <em>Public Relations Review, 19</em>(3), 277–291. https://doi.org/10.1016/0363-8111(93)90047-G</p>
<p>Vidaver-Cohen, D. (2007). Reputation beyond the rankings: A conceptual framework for business school research. <em>Corporate Reputation Review, 10</em>(4), 278–304. https://doi.org/10.1057/palgrave.crr.1550055</p>
<p>Walker, K. (2010). A systematic review of the corporate reputation literature: Definition, measurement, and theory. <em>Corporate Reputation Review, 12</em>(4), 357–387. https://doi.org/10.1057/crr.2009.26</p>
<p>Walsh, G., &amp; Beatty, S. E. (2007). Customer-based corporate reputation of a service firm: Scale development and validation. <em>Journal of the Academy of Marketing Science, 35</em>(1), 127–143. https://doi.org/10.1007/s11747-007-0015-7</p>
<p>Wedlin, L. (2008). University marketization: The process and its limits. <em>The University in the Market, 84</em>, 143–153.</p>
<p>Weick, K. E. (1976). Educational organizations as loosely coupled systems. <em>Administrative Science Quarterly, 21</em>(1), 1–19. https://doi.org/10.2307/2391875</p>
<p>World Conference on Higher Education (1998). <em>World declaration on higher education for the twenty-first century: Vision and action.</em> Retrieved August 21, 2022, from https://unesdoc.unesco.org/ark:/48223/pf0000141952</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The measurement and evaluation of PR communication</title>
		<link>https://minib.pl/en/numer/no-4-2018/the-measurement-and-evaluation-of-pr-communication/</link>
		
		<dc:creator><![CDATA[create24]]></dc:creator>
		<pubDate>Wed, 19 Dec 2018 11:55:39 +0000</pubDate>
				<category><![CDATA[AMEC’s integrated evaluation framework]]></category>
		<category><![CDATA[AVEs]]></category>
		<category><![CDATA[Barcelona Principles]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[measurement]]></category>
		<category><![CDATA[PR communications]]></category>
		<guid isPermaLink="false">https://minib.pl/beta/?post_type=numer&#038;p=7006</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
