4.1. Introduction

Discrete choice models are widely employed across a range of disciplines. These models can provide powerful insights into the choices made by a group of decision-makers. However, the amount of information that decision-makers encounter and process as they make a choice may vary for a number of reasons, including natural variations in the market, decisions made by the analyst in more controlled experimental or survey settings, and further decisions by the decision-maker as to how much information to process. The influence of varying information load, sometimes equated to choice complexity, has been actively studied in the literature. In terms of the responses of the decision-maker, a widely acknowledged distinction, attributed to Heiner (1983), is as follows. One potential response to increasing information load is an attempt to process all information, with the rate of error increasing as the information load increases. DeShazo and Fermo (2004) refer to this as the passive bounded rationality model, while Arentze, Borgers, Timmermans, and Del Mistro (2003) refer to it as a type I strategy. In contrast, under the rationally adaptive model, or type II strategy, the decision-maker reduces the amount of information processed, possibly using some heuristic, to simplify the decision.

Discrete choice modellers have amassed evidence of passive bounded rationality by employing heteroskedastic choice models, in which measures of the information load or choice complexity moderate the size of the error variance. For example, DeShazo and Fermo (2002) found that the error variance was influenced by the number of attributes and alternatives, and by various measures of the structure of the information within and across alternatives. Arentze et al. (2003) linked error variance to the number of attributes but not the number of alternatives, although they suspected that the use of labelled alternatives undermined this second test. Swait and Adamowicz (2001a) moderated the error variance with a parsimonious measure of task complexity. Drawing on principles from information theory, this entropy measure combines the influence of the number of alternatives, the number of attributes, attribute correlation and preference similarity among alternatives. They noted, however, that the error variance might be capturing different decision strategies that result from choice complexity, and called for research into modelling different decision strategies in this context.
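To fix ideas, these heteroskedastic specifications typically let the scale of the error term vary with complexity. A minimal sketch, assuming a multinomial logit kernel and an exponential scale function (the exact functional forms differ across the cited studies):

\[ P_{ni} = \frac{\exp(\mu_n \beta' x_{ni})}{\sum_{j} \exp(\mu_n \beta' x_{nj})}, \qquad \mu_n = \exp(\theta' c_n), \]

where c_n collects complexity measures for decision-maker n, such as the numbers of alternatives and attributes, or an entropy measure over a set of preliminary choice probabilities,

\[ H_n = -\sum_{j} \hat{P}_{nj} \ln \hat{P}_{nj}. \]

Since the error variance in a logit model is inversely proportional to \mu_n^2, negative elements of \theta indicate that greater complexity inflates the error variance.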

Indeed, Swait and Adamowicz (2001b) answered this call by making the probability of alternative decision-making strategies a function of the entropy measure, using a latent class modelling approach. They found that one of the two strategies relied more on brand effects. Arentze et al. (2003) looked for, but did not find, evidence of lexicographic choice, although they detected differences in parameter weights. Both these examples are rationally adaptive models, and represent a departure from fully compensatory choice.

Another departure from compensatory choice is the phenomenon of attribute non-attendance (ANA), in which any given decision-maker might only consider some of the full set of attributes, and ignore, or not attend to, the remainder. This can be considered a non-compensatory decision strategy, as no amount of the ignored attribute will compensate for the attributes that are attended to (Campbell, Hutchinson, & Scarpa, 2008). Several studies have linked decision complexity or information load to ANA. DeShazo and Fermo (2004) simultaneously estimated the impact of choice complexity on the error variance and on the propensity to attend. They found evidence of both, with the inclusion of measures of the propensity to attend reducing the impact on the error variance, suggesting some confounding between the two. Cameron and DeShazo (2011) handled these complexity influences in a more flexible way. It is not certain, however, whether the interactions used in these papers uncover ANA, or merely a reduced sensitivity to the attributes.
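The non-compensatory character of ANA can be seen in a linear utility sketch (notation illustrative, not drawn from the cited papers). With an attendance indicator \delta_{nk} for decision-maker n and attribute k,

\[ U_{ni} = \sum_{k} \delta_{nk} \beta_k x_{nik} + \varepsilon_{ni}, \qquad \delta_{nk} \in \{0, 1\}, \]

so that when \delta_{nk} = 0 the marginal utility of attribute k is zero, and no change in x_{nik} can compensate for changes in the attended attributes.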

Hensher (2006) used a different approach to detect ANA, asking respondents in a stated choice study to indicate whether they ignored any attributes. Using a dataset in which the information load was systematically varied across respondents, he estimated a series of ordered heterogeneous logit models, with the dependent variable being the number of attributes ignored. Significant influences on ANA included the number of alternatives, the number of attribute levels and the range of those levels. This chapter revisits this dataset using an alternative modelling approach that does not rely on stated non-attendance responses. This is motivated by recent developments in the literature, both in our understanding of ANA and in the most appropriate way to identify and accommodate ANA in discrete choice models.

Evidence has emerged that stated non-attendance responses may be subject to reporting error. Hess and Hensher (2010) estimated separate taste coefficients for stated attenders and non-attenders. The coefficients for the non-attenders were found to be significant, but of smaller magnitude than those for attenders, suggesting that while some stated non-attenders may indeed have ignored the attribute, at least some were attending to it. Alemu, Mørkbak, Olsen, and Jensen (2013) asked respondents not just whether they ignored an attribute, but why. They found that when ANA was stated as being due to true indifference to the attribute, most coefficients were estimated as zero, but when it was to make the choice easier (i.e. rational adaptation), many coefficients were significant, and often of even greater magnitude than under stated attendance. This suggests that stated responses regarding non-attendance induced by information load may be particularly susceptible to reporting error. An alternative is to infer non-attendance, and here several important developments have been made in the literature.

One approach that is underappreciated in the ANA literature is the use of the censored normal distribution in a random parameters logit (RPL) model (Train & Sonnier, 2005). The censoring produces a mass of coefficients at zero, which can represent non-attendance to the associated attribute. The approach is easy to implement and integrate into existing models; however, a lack of flexibility in the distribution means that preference heterogeneity may be confounded with ANA, as both aspects are estimated with the same two structural parameters. For example, this would make it difficult to parameterise the influence of information load on just ANA without also influencing the variance of the distribution. Hess and Hensher (2010) investigated the conditional parameter estimates in an RPL model, and categorised as non-attending those individuals for whom the coefficient of variation of the estimate exceeded an arbitrary threshold. However, it is unclear what threshold to use, and Mariel, Meyerhoff, and Hoyos (2011) found that, problematically, the most accurate threshold differs as the true ANA rate differs.
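To illustrate the confound, consider coefficients drawn from a normal distribution censored from below at zero, \beta = \max(0, b + s\eta) with \eta \sim N(0,1), so that \Pr(\beta = 0) = \Phi(-b/s). The illustrative Python sketch below (hypothetical parameter values, not from the cited papers) shows that the same two structural parameters b and s jointly determine both the mass at zero, interpretable as a non-attendance rate, and the spread of preferences among attenders:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def censored_normal_draws(b, s, n_draws=100_000):
    # Draws from a normal(b, s^2) censored from below at zero.
    return np.maximum(0.0, b + s * rng.standard_normal(n_draws))

b, s = 0.5, 1.0  # hypothetical structural mean and spread
draws = censored_normal_draws(b, s)

# Share of zero coefficients, i.e. the implied non-attendance rate
print(f"mass at zero: empirical {np.mean(draws == 0.0):.3f}, "
      f"analytical {norm.cdf(-b / s):.3f}")

# Shifting s to change the implied ANA rate also changes the
# preference heterogeneity among the attending (positive) draws
for s_alt in (0.5, 1.0, 2.0):
    d = censored_normal_draws(b, s_alt)
    print(f"s = {s_alt}: Pr(beta = 0) = {np.mean(d == 0.0):.3f}, "
          f"sd(beta | beta > 0) = {d[d > 0].std():.3f}")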

An approach that has been applied extensively is the constrained latent class model, in which coefficients in some classes are constrained to zero (Hensher & Greene, 2010; Hess & Rose, 2007; Scarpa, Gilbride, Campbell, & Hensher, 2009). An alternative specification of the class assignment component of the model, in which the parametric cost can be reduced by assuming that non-attendance is independent across attributes, was proposed by Hole (2011). However, it is now clear that these approaches confound preference heterogeneity with ANA, and are very likely to bias measures of both (Collins, 2012; Hess, Stathopoulos, Campbell, O'Neill, & Caussade, 2013).
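In this constrained latent class setting, each class c corresponds to an attendance pattern \delta_c \in \{0,1\}^K over the K attributes, with non-attended coefficients fixed at zero. A sketch of the structure, using Hole's (2011) independence assumption to reduce the 2^K class probabilities to K attendance probabilities p_k (notation illustrative):

\[ \pi_c = \prod_{k=1}^{K} p_k^{\delta_{ck}} (1 - p_k)^{1 - \delta_{ck}}, \qquad L_n = \sum_{c} \pi_c \prod_{t} P_{nt}(\beta \odot \delta_c), \]

where \odot denotes element-wise multiplication, so attribute k enters the utility of class c only when \delta_{ck} = 1. Because \beta is fixed across classes, genuine taste heterogeneity can only be absorbed by the zero-coefficient classes, which is the source of the confound just noted.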

The latent class approach is less prone to bias when combined with continuously distributed random parameters (Collins, 2012). The random parameters capture preference heterogeneity, while the coefficients constrained to zero represent ANA. Collins, Rose, and Hensher (2013) proposed that the model be called the random parameters attribute non-attendance (RPANA) model. They detailed various identification considerations, proposed a fully flexible correlation structure in the non-attendance component of the model, and introduced covariates into this component[1] to allow the non-attendance rate to vary systematically. Hess et al. (2013) tested the model on multiple datasets, and Hensher, Collins, and Greene (2013) additionally incorporated the aggregation of common-metric attributes.

Application of the RPANA model is an appealing way to investigate the impact of varying information load, as the non-attendance rates can vary systematically with any number of covariates, such as measures of information load. There may also be a constant component, representing true indifference to the attribute, or other reasons for non-attendance that are not influenced by the varying load. Crucially, the model can still handle preference heterogeneity amongst those who do attend. This chapter applies the RPANA model to the dataset used by Hensher (2006). Unlike the earlier paper, the RPANA model used herein allows the influence of choice task complexity on inferred ANA to be modelled simultaneously with the choice model of interest. Unlike Cameron and DeShazo (2011), whose model handles the impact of varying information load through specific tradeoffs that can vary from one choice task to the next, we examine the impact through inferred non-attendance that remains invariant across all the choice tasks completed by each respondent.
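Under these assumptions, a sketch of the RPANA likelihood for respondent n (notation illustrative, not taken verbatim from the cited papers), with the attendance-class probabilities driven by covariates z_n such as information-load measures:

\[ L_n = \sum_{c} \pi_c(z_n) \int \prod_{t} P_{nt}(\beta \odot \delta_c) f(\beta \mid \Omega) \, d\beta, \qquad \pi_c(z_n) = \frac{\exp(\alpha_c + \gamma_c' z_n)}{\sum_{c'} \exp(\alpha_{c'} + \gamma_{c'}' z_n)}, \]

where f(\beta \mid \Omega) captures preference heterogeneity among attenders, the attendance pattern \delta_c is held fixed across the choice tasks t of each respondent, and the constants \alpha_c accommodate non-attendance that is unrelated to information load, such as true indifference.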

  • [1] As did Hess and Rose (2007) for the straight latent class version of the model without random parameters.
 