1.3. Considering a Subset of Influential Attributes

While conjunctive and disjunctive rules define thresholds for each attribute and in that sense assume that individuals process all attributes, lexicographic models assume that attributes are considered sequentially and perhaps only partially. Usually, these models are based on some explicit measurement of attribute importance, on the basis of which the attributes are ordered in terms of decreasing importance. First, alternatives are evaluated on the most important attribute and the best one is identified. If there are ties, that is, if an individual is indifferent between two or more alternatives on that attribute, the choice alternatives are evaluated on the second most important attribute. This process continues until a choice can be made or until all attributes have been considered. Note that unlike the conjunctive and disjunctive rules, the lexicographic rule involves an explicit comparison of choice alternatives. Lexicographic choice behaviour may be expressed as follows. Assume that the attributes are ranked in order of importance. Then,

$i \succ j \iff \exists\, k^{*}: \; u_{ik^{*}} > u_{jk^{*}} \ \text{and} \ u_{ik} = u_{jk} \quad \forall\, k < k^{*}$ (1.23)

where the attributes $k = 1, \ldots, K$ are indexed in decreasing order of importance and $u_{ik}$ denotes the utility of choice alternative $i$ on attribute $k$.

Note that the model describes bounded rational behaviour in the sense that a subset of attributes is not considered if choice alternatives can be ranked in terms of more important attributes. The only exception occurs when two or more choice alternatives are jointly best on the most important and all subsequent attributes, and are thus identical on all attributes; in that case, they split the market share equally among them. At the same time, lexicographic models represent non-optimal behaviour in the sense that they do not guarantee that the first-ranked alternative is the one with the maximum utility, based on the combination of attribute levels.
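
To make the rule concrete, the following minimal Python sketch implements the lexicographic screening and equal market-share splitting described above and in Eq. (1.23). The function name, the representation of alternatives as dictionaries of attribute-specific utilities, and the three-mode example are hypothetical illustrations, not part of the original formulation.

def lexicographic_choice(alternatives, attribute_order):
    """Choice probabilities under the lexicographic rule: screen on attributes
    in decreasing order of importance, keeping only the best alternatives at
    each step; remaining ties split the market share equally (cf. Eq. 1.23).

    alternatives    : dict alternative id -> dict of attribute utilities
    attribute_order : list of attribute names, most important first
    """
    candidates = set(alternatives)
    for attr in attribute_order:
        best = max(alternatives[i][attr] for i in candidates)
        candidates = {i for i in candidates if alternatives[i][attr] == best}
        if len(candidates) == 1:
            break  # a unique best alternative is found; remaining attributes are ignored
    share = 1.0 / len(candidates)
    return {i: (share if i in candidates else 0.0) for i in alternatives}


# Hypothetical example: three transport modes evaluated on cost and time utilities
modes = {'car':  {'cost': 2.0, 'time': 3.0},
         'bus':  {'cost': 3.0, 'time': 1.0},
         'bike': {'cost': 3.0, 'time': 2.0}}
print(lexicographic_choice(modes, ['cost', 'time']))  # {'car': 0.0, 'bus': 0.0, 'bike': 1.0}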

Pure lexicographic rules indicate that individuals value some attributes so much that they are not willing to make any trade-offs. For example, if an individual wants to be on the first commercial flight to the moon, no other attribute will be taken into consideration and no price will be too high. Consequently, it is impossible to construct a utility function representing lexicographic preferences over multiple real-valued attributes (Varian, 1984). The reason is that any multi-attribute utility function is associated with indifference curves in attribute space that express the marginal rate of substitution between pairs of attributes. Because lexicographic preferences imply an infinite marginal rate of substitution, a utility function cannot be constructed. However, lexicographic preference functions may exist over discrete attributes (e.g. Kohli & Jedidi, 2007; Martignon & Schmitt, 1999). Alternatively, lexicographic rules could be applied to net utility differences greater than zero (e.g. Kawamoto & Setti, 1992).
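
To see why, consider a brief sketch of the argument for two attributes, using generic attribute levels $x_1$, $x_2$ and a utility function $U(x_1, x_2)$ that are introduced here only for illustration. Along an indifference curve, the marginal rate of substitution is

$\mathrm{MRS}_{12} = -\left.\dfrac{\mathrm{d}x_2}{\mathrm{d}x_1}\right|_{U=\text{const}} = \dfrac{\partial U/\partial x_1}{\partial U/\partial x_2},$

and if attribute 1 is lexicographically dominant, no finite increase in attribute 2 compensates for any loss in attribute 1, so this ratio would have to be infinite; no real-valued utility function can satisfy that requirement.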

Arana, Leon, and Hanemann (2008) considered two additional heuristics. The first is the complete ignorance heuristic. It represents those individuals who are not aware of the influence of the attributes or do not care about the consequences of their responses. It describes the decision-making process of individuals who choose based on a completely random process. A second heuristic is the satisfactory heuristic, originally proposed by Simon (1955). It describes a process in which an individual first selects the choice alternatives that meet the minimum requirements on all attributes. Subsequently, the choice is made at random between the selected candidate alternatives. Thus, this heuristic can be viewed as a combination of the conjunctive and the complete ignorance heuristics, in which the former serves to delineate the consideration set and the latter serves to describe random choice among the remaining choice alternatives. Analogously, individuals may also set a threshold below which an alternative is rejected. In the case of nominal attributes, indifference among attribute levels or rejection of alternatives can reflect absence of preference over a subset of attribute levels. The number of indifference classes lies between 1 and the number of attribute levels. A single indifference class signals the lack of preference across attribute levels, while a number of indifference classes equal to the number of attribute levels corresponds to the standard lexicographic model.
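
A minimal Python sketch of the satisficing process just described, assuming a hypothetical representation in which each alternative is a dictionary of attribute values and the minimum requirements are one threshold per attribute. The uniform fallback when no alternative passes the screen is an added assumption for illustration only, not part of the original description.

def satisficing_shares(alternatives, minimum_requirements):
    """Choice probabilities under the satisfactory (satisficing) heuristic:
    retain the alternatives that meet the minimum requirement on every
    attribute (conjunctive screening), then choose at random (uniformly)
    among the retained alternatives.

    alternatives         : dict alternative id -> dict of attribute values
    minimum_requirements : dict attribute name -> minimum acceptable value
    """
    acceptable = [i for i, attrs in alternatives.items()
                  if all(attrs[a] >= t for a, t in minimum_requirements.items())]
    if not acceptable:                   # assumption: nothing satisfices, so fall back to
        acceptable = list(alternatives)  # complete ignorance (purely random choice)
    share = 1.0 / len(acceptable)
    return {i: (share if i in acceptable else 0.0) for i in alternatives}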

The strict lexicographic model has been criticized for the fact that it assumes perfect discrimination and perfectly reliable information. The lexicographic semi-order model and the minimum difference lexicographic model have been formulated to relax these strict assumptions. The lexicographic semi-order model assumes that individuals consider the second most important attribute if two or more choice alternatives, rank-ordered in utility space, do not differ by more than some minimum threshold on the most important attribute. That is, given that the attributes are ranked in order of importance,

$i \succ j \iff u_{i1} - u_{j1} > \delta_{1}, \ \text{or} \ \left( |u_{i1} - u_{j1}| \le \delta_{1} \ \text{and} \ u_{i2} > u_{j2} \right)$ (1.24)

where $\delta_{1} \ge 0$ is the minimum threshold on the most important attribute.

The minimum difference lexicographic model constitutes a generalization of the lexicographic semi-order model by assuming that individuals consider the next most important attribute if two or more choice alternatives, rank-ordered in utility space, do not differ by more than some minimum threshold on the currently considered attribute. That is, given that the attributes are ranked in order of importance,

$i \succ j \iff \exists\, k^{*}: \; u_{ik^{*}} - u_{jk^{*}} > \delta_{k^{*}} \ \text{and} \ |u_{ik} - u_{jk}| \le \delta_{k} \quad \forall\, k < k^{*}$ (1.25)

where $\delta_{k} \ge 0$ is the minimum threshold for the $k$-th most important attribute.
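
A small Python sketch of the pairwise comparison implied by Eq. (1.25); the function and its arguments are hypothetical illustrations under the definitions above.

def min_diff_lexicographic(u_i, u_j, thresholds):
    """Pairwise comparison under the minimum difference lexicographic rule
    (cf. Eq. 1.25): walk down the attributes in decreasing order of importance
    and prefer the alternative whose utility advantage on the current attribute
    exceeds that attribute's threshold; if the difference is within the
    threshold, move on to the next attribute.

    u_i, u_j   : lists of attribute utilities, most important attribute first
    thresholds : list of non-negative minimum thresholds, one per attribute
    Returns 'i', 'j', or 'tie' (no attribute discriminates beyond its threshold).
    """
    for ui, uj, delta in zip(u_i, u_j, thresholds):
        if ui - uj > delta:
            return 'i'
        if uj - ui > delta:
            return 'j'
    return 'tie'

Setting a positive threshold on the most important attribute only (e.g. thresholds = [0.5, 0.0, 0.0]) reproduces the lexicographic semi-order of Eq. (1.24), while zero thresholds everywhere recover the strict lexicographic model.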

A special case is the 'just noticeable difference' lexicographic model (Russ, 1972), which is defined at the level of (cognitive) attribute differences. That is,

$i \succ j \iff \exists\, k^{*}: \; x_{ik^{*}} - x_{jk^{*}} > \Delta_{k^{*}} \ \text{and} \ |x_{ik} - x_{jk}| \le \Delta_{k} \quad \forall\, k < k^{*}$ (1.26)

where $x_{ik}$ is the (perceived) level of attribute $k$ for choice alternative $i$ and $\Delta_{k} \ge 0$ is the just noticeable difference for attribute $k$.

If this condition is not met at any level of importance, choices are made in a compensatory manner in the sense that all attributes are taken into consideration. Although these two models can be equivalent, in principle a threshold at the perception or cognition level is not necessarily consistent with a threshold on utility differences.

Recker and Golob (1979) also suggested combining a lexicographic attribute valuation model with thresholds. These thresholds were defined as a percentage deviation from the best alternative on the attribute under consideration. Foerster (1977) formulated the 'just effective difference' lexicographic model. Rather than setting conditions at the utility or attribute level, he assumed that choice will involve a compensatory process unless the importance differences satisfy the condition:

$\Gamma_{k} - \Gamma_{k+1} \ge \Psi$ (1.27)

where $\Gamma_{k}$ is the importance of the $k$-th most important attribute and $\Psi$ is the effective difference threshold. Kohli and Jedidi (2007) discussed another variant of the lexicographic model: the binary lexicographic model. This model relaxes the assumption of an attribute-by-attribute evaluation of choice alternatives. It assumes that individuals first classify the choice alternatives into two classes. One class consists of the choice alternatives with the most preferred attribute level across attributes, while the other class consists of the remaining choice alternatives. Each class is then further partitioned in the same manner for the second most preferred attribute level across attributes.
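
The following sketch illustrates, under hypothetical data structures, one reading of the binary lexicographic partitioning: possession of the k-th most preferred attribute level defines the k-th bit of a binary code, and sorting on these codes reproduces the repeated two-way classification described above.

def binary_lexicographic_rank(alternatives, ranked_levels):
    """Order alternatives by repeatedly splitting them on possession of the
    k-th most preferred attribute level (across attributes), which amounts to
    sorting on the binary codes built from those possession indicators.

    alternatives  : dict alternative id -> set of (attribute, level) pairs it possesses
    ranked_levels : list of (attribute, level) pairs, most preferred first
    """
    def code(alt_id):
        return tuple(1 if level in alternatives[alt_id] else 0 for level in ranked_levels)
    return sorted(alternatives, key=code, reverse=True)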

Another interesting model is Tversky's elimination-by-aspects model (Tversky, 1972). This model assumes that individuals first select a discriminating aspect or attribute and eliminate all choice alternatives that do not possess this attribute. Unlike the assumed sequential consideration of lexicographic models, attributes are selected with some probability that is equal to the ratio of the utility of that attribute to the total sum of utilities of all discriminating attributes. The probability of choosing a choice alternative is then equal to the sum, across discriminating attributes, of the probability that that attribute is selected multiplied by the probability that the choice alternative is chosen among the alternatives that possess the attribute. Although the original model was formulated in terms of dichotomous attributes, the formulation can also be used for more general conditions that eliminate certain choice alternatives. Manrai and Sinha (1989) formulated an extension, while Batley and Daly (2003) showed the equivalence with generalized extreme value models.
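
The recursive structure of these probabilities can be sketched in Python as follows; the representation of alternatives as sets of aspects and the positive aspect utilities are hypothetical inputs, and aspects shared by all remaining alternatives are treated as non-discriminating, as in the description above.

def eba_probability(i, alternatives, aspect_utility):
    """Choice probability of alternative i under elimination-by-aspects: a
    discriminating aspect is selected with probability proportional to its
    utility, alternatives lacking it are eliminated, and the process repeats
    among the survivors.

    alternatives   : dict alternative id -> set of aspects it possesses
    aspect_utility : dict aspect -> positive utility (scale) value
    """
    ids = list(alternatives)
    # An aspect discriminates only if some, but not all, remaining alternatives possess it.
    discriminating = [a for a in aspect_utility
                      if 0 < sum(a in alternatives[j] for j in ids) < len(ids)]
    if not discriminating:
        return 1.0 / len(ids)            # no aspect discriminates: choose at random
    total = sum(aspect_utility[a] for a in discriminating)
    prob = 0.0
    for a in discriminating:
        survivors = {j: alternatives[j] for j in ids if a in alternatives[j]}
        if i in survivors:               # alternatives without aspect a contribute zero
            prob += (aspect_utility[a] / total) * eba_probability(i, survivors, aspect_utility)
    return prob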

Lexicographic models are extreme in the sense that, in the absence of any ties, they assume choices are based on just a single attribute. More general approaches have recently been developed in travel behaviour analysis and environmental economics. A stream of publications has emerged, which has been labelled as ignoring of attributes (Hensher, 2006, 2010) and attribute non-attendance (Hole, 2011; Scarpa, Zanoli, Bruschi, & Naspetti, 2012). In addition to the notion that consumers may ignore particular attributes due to lack of time, selective information processing (Cameron & DeShazo, 2011; DeShazo & Fermo, 2004) or low involvement, non-attendance of particular attributes may be related to their lack of any inherent utility to an individual (Collins, Rose, & Hensher, 2013), who will therefore ignore these attributes. In the context of stated preference and choice experiments, unrealistic attribute levels and/or trade-offs (e.g. Alemu, Morkbak, Olsen, & Jensen, 2013; Hensher, Collins, & Greene, 2012) may cause attribute non-attendance. Failure to capture attribute non-attendance may lead to biases in model forecasts (Hensher, Rose, & Greene, 2005) and errors in the signs of random parameter coefficients (Hensher, 2007).

Two approaches can be distinguished in the literature to address attribute non-attendance: (i) directly asking respondents which attributes they did not consider in their responses or real-world decision-making processes; and (ii) identifying attribute non-attendance using econometric approaches. The first approach involves asking respondents to identify the attributes that were systematically varied in a stated choice experiment but that they ignored (Hensher et al., 2005). Although this is a straightforward and easy way of identifying attribute non-attendance, it is questionable whether respondents can rationalize their response strategy. Consequently, the reliability of direct statements of non-attendance has been criticized (e.g. Carlsson, Kataria, & Lampi, 2010; Hess & Hensher, 2010; Hess & Rose, 2007). Moreover, directly asking about attribute non-attendance is inconsistent with the very nature of stated preference and choice experiments. These models were developed as an alternative to models based on the direct measurement of part-worth utilities and importance weights, and on the composition of overall utility from explicitly and independently measured part-worth utilities and weights, because empirical evidence cast doubt on the validity and reliability of such models. In other words, directly asking about attribute non-attendance would introduce measurement error into an alternative modelling approach that tried to avoid such error from its very beginning.

As an alternative to direct measurement of attribute non-attendance, several econometric models have been suggested. In particular, the following econometric approaches can be distinguished. The so-called attribute non-attendance (ANA) model (Hess & Rose, 2007; Scarpa et al., 2012) is a latent class model in which each class represents a different attendance/non-attendance profile. For each class, the coefficients of the non-attended attributes are constrained to zero. This implies, and this is a disadvantage of the model, that the number of classes and the number of class membership parameters increase exponentially with the number of attributes. Hole (2011), therefore, assumed that non-attendance rates are uncorrelated across attributes, implying that the number of parameters to be estimated increases only linearly with the number of attributes. This may be a strong assumption, because one would expect that non-attendance of attributes pertaining to the same underlying higher order decision construct is correlated. This independent attribute non-attendance (IANA) model is a special case of the correlated attribute non-attendance (CANA; Scarpa et al., 2012) model, which allows correlations between non-attendance rates across attributes. Others have estimated latent threshold values (Hess & Hensher, 2010; Mariel, Hoyos, & Meyerhoff, 2011). Hensher (2010) used thresholds to model whether attributes with a common metric (e.g. free flow travel time and congestion time) are processed as separate attributes (if their squared difference exceeds a threshold) or are summed and processed as a single common-metric attribute with the same utility weight (if their squared difference does not exceed a threshold). The thresholds are assumed to be exponentially distributed in the population.
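
As an illustration of the latent class ANA structure, the following Python sketch computes choice probabilities for a single choice set, given hypothetical taste parameters, attendance masks (one 0/1 profile per class) and class membership probabilities; the estimation of these quantities is not shown, and a multinomial logit kernel within each class is an assumption made here for illustration. With K attributes, a full set of attendance profiles would contain 2^K masks, which is the exponential growth noted above.

import numpy as np

def ana_choice_probabilities(X, beta, class_masks, class_probs):
    """Choice probabilities under a latent class attribute non-attendance (ANA)
    structure: each class has its own attendance profile (a 0/1 mask over the
    attribute coefficients), the coefficients of non-attended attributes are
    constrained to zero, and the overall probability is the class-probability-
    weighted mixture of the class-specific logit probabilities.

    X           : (n_alternatives, n_attributes) attribute matrix for one choice set
    beta        : (n_attributes,) taste parameters, common across classes
    class_masks : (n_classes, n_attributes) 0/1 attendance profiles
    class_probs : (n_classes,) class membership probabilities, summing to one
    """
    probs = np.zeros(X.shape[0])
    for mask, pi in zip(class_masks, class_probs):
        v = X @ (beta * mask)          # utilities with non-attended attributes zeroed out
        p = np.exp(v - v.max())
        probs += pi * p / p.sum()      # multinomial logit kernel within the class
    return probs

Under the IANA assumption, the class probabilities would factor into independent per-attribute attendance probabilities, whereas CANA allows them to be correlated across attributes.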

In addition to these fixed effects attribute non-attendance models, several random effects models have been suggested in an attempt to incorporate taste heterogeneity. Campbell, Hensher, and Scarpa (2012) estimated point masses for cost only. Hensher et al. (2012) estimated a random parameters attribute non-attendance model (RPIANA). In principle, the random parameters model can be estimated under the assumption of independent attributes or under the assumption of correlated attributes. Hess, Stathapoulos, and Daly (2012) employed a latent class structure and estimated continuously distributed random parameters. Collins et al. (2013) argued not only that full correlation of non-attendance across attributes involves many parameters and considerable computation time, but also that the assumption of full correlation may be too strong. They therefore developed a generalized latent class structure that allows independence across subsets of attributes, whilst allowing attribute non-attendance to be correlated.

 