Reengineering Community Development for the 21st Century
Quasi-Experimental Methods for Measuring Impact

When experimental methods are not feasible, researchers typically use quasi-experimental methods to identify programmatic impacts. While quasi-experimental methods are not immune to validity threats (Hollister 2004), some may provide a sufficient degree of rigor – or at least yield estimates that are, on average, substantially more accurate than what we would have without them.

There are a variety of quasi-experimental evaluation methods that may be appropriate for assessing CDFI impact. Not all potential methods and their faults are reviewed here. Much of that has been done elsewhere (Dickstein and Thomas 2005; Hollister 2004; Hollister and Hill 1995). Rather, the focus here is on approaches that get serious attention in the literature.

These methods fall into three groups. First is econometric simulation, in which multivariate methods are used to control for differences between nonrandom treatment and comparison groups, including the difference in the likelihood of applying to or being recruited into a program. This latter difference is key. If we merely attempt to identify differences in groups that we believe will affect the outcome variable, we may not adequately control for selection bias. Conventional econometric methods are used either to predict some raw level of outcome indicator or to explain the "difference in differences" between the treatment and comparison groups. That is, multivariate estimation is used to explain differences in gain (or loss) in a key outcome measure following program intervention (e.g., receiving CDFI loans of some kind) between the treatment group and a comparison group.
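To make the difference-in-differences logic concrete, the following is an illustrative sketch on simulated data (not drawn from any study cited here): an outcome is generated for treated and comparison units before and after a hypothetical intervention with a built-in impact of 3.0, and the interaction coefficient in an ordinary least-squares fit recovers that impact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: outcome y for treated and comparison units, observed
# pre- and post-intervention. The simulated program impact is 3.0.
n = 2000
treated = rng.integers(0, 2, n)        # 1 = received the intervention (e.g., a CDFI loan)
post = rng.integers(0, 2, n)           # 1 = observed after the intervention
true_impact = 3.0
y = 10 + 2.0 * treated + 1.5 * post + true_impact * treated * post + rng.normal(0, 1, n)

# Design matrix: intercept, treatment dummy, period dummy, and their interaction.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[3] is the difference-in-differences estimate of program impact:
# the treated group's pre-post change minus the comparison group's pre-post change.
print(beta[3])
```

The interaction term nets out both the fixed gap between the groups and the common time trend, which is exactly the "difference in differences" the text describes.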

The second nonexperimental category of methods used to measure impact is called propensity-score matching. This approach is related to econometric methods that explicitly control for selection bias. A propensity score is the probability that, given certain features, a household, firm, or geography will receive “treatment” (e.g., receive a loan or investment). Households, firms, or geographies are grouped according to propensity scores. Within each group, outcomes for those receiving treatment are compared with those not receiving treatment.
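The stratification step can be sketched in code. This is a minimal illustration on simulated data, with the effect size, covariates, and quintile grouping all chosen for demonstration: a logistic model estimates each unit's propensity score, units are grouped into score quintiles, and treated-versus-untreated outcome differences are averaged across strata.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: treatment probability depends on covariates x, and the
# outcome carries a constant simulated treatment effect of 2.0.
n = 5000
x = rng.normal(0, 1, (n, 2))
p_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
t = rng.random(n) < p_true                       # True = received treatment
y = x[:, 0] + 2.0 * t + rng.normal(0, 1, n)

# Estimate propensity scores with a logistic regression fit by gradient ascent.
X = np.column_stack([np.ones(n), x])
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (t - p) / n
scores = 1 / (1 + np.exp(-X @ w))

# Group units into propensity-score quintiles; within each stratum, compare
# mean outcomes of treated vs. untreated units, then average across strata.
edges = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])
strata = np.digitize(scores, edges)
effects = [y[(strata == s) & t].mean() - y[(strata == s) & ~t].mean()
           for s in range(5)]
estimate = np.mean(effects)
print(estimate)
```

Because treated and untreated units within a stratum have similar probabilities of treatment, comparing them within strata removes most of the selection bias that a naive treated-versus-untreated comparison would carry.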

Propensity-score-based matching studies have been used in economic development evaluation. Greenbaum and Engberg (2000) used this method to evaluate the impact of enterprise zones on housing markets in six states by comparing changes in prices for zip codes that contain enterprise zones to those that do not. O'Keefe (2003) used propensity-score matching to measure the impact of the California Enterprise Zone program on census tracts. Differences in growth between the zone tract and the nonzone tract were then used as estimates of program impact.

Propensity-scoring techniques are not without their critics. Hollister (2004) argues that propensity-score-matching techniques have not provided estimates of impact that are "consistently close" to those obtained from experimental methods. However, there appears to have been relatively little testing of these methods for geographic applications.

The last general category of approaches reviewed here is that of geographically based adjusted interrupted time series (AITS) analysis, which is a special subset of the econometric simulation approaches (Galster, Temkin, Walker, and Sawyer 2004). In this approach, outcome data for geographies receiving treatment are compared to those not receiving treatment. However, in AITS, researchers utilize time series data collected at frequent intervals (e.g., annually or more often) over a relatively long period (e.g., several years or more), providing many observations over the study period – the more the better. This approach allows for the measurement of not only preintervention levels of the outcome indicator for treatment and comparison geographies but also the trends of the indicators in both groups before and after intervention. By being able to control for the differences in both the levels and trajectories of the treatment and comparison groups before and after the intervention, researchers can effectively control for omitted characteristics that might influence the outcome indicator.
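The level-and-trend logic of AITS can be sketched as a segmented regression. In this simulated illustration (all numbers are hypothetical, not from the cited study), annual series for a treated and a comparison geography each have their own preintervention level and trend, and the treated series also has a built-in post-intervention level shift of 2.0 and trend change of 0.5 per year; interaction terms in the fit recover both.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual outcome series: 10 years pre- and 10 years post-intervention.
years = np.arange(20)
post = (years >= 10).astype(float)             # 1 in post-intervention years
t_since = np.where(post == 1, years - 9, 0.0)  # years elapsed since intervention

def series(level, trend, shift, slope_change, noise):
    # Outcome = preintervention level and trend, plus any post-intervention
    # level shift and trend change, plus noise.
    return level + trend * years + shift * post + slope_change * t_since + noise

y_comp = series(5.0, 0.3, 0.0, 0.0, rng.normal(0, 0.1, 20))    # comparison geography
y_treat = series(4.0, 0.4, 2.0, 0.5, rng.normal(0, 0.1, 20))   # treated geography

# Stack both series and fit the AITS model: group-specific pre-levels and
# pre-trends, plus treated-specific post-intervention shifts in level and trend.
y = np.concatenate([y_comp, y_treat])
g = np.concatenate([np.zeros(20), np.ones(20)])   # 1 = treated geography
yr = np.concatenate([years, years])
po = np.concatenate([post, post])
ts = np.concatenate([t_since, t_since])

X = np.column_stack([np.ones(40), g, yr, g * yr, po, ts, g * po, g * ts])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[6]: treated geography's post-intervention level shift (simulated as 2.0).
# beta[7]: treated geography's change in trend (simulated as 0.5 per year).
print(beta[6], beta[7])
```

Because the model fits each group's own preintervention level and trajectory, the treated-specific post-intervention terms isolate the intervention's effect net of preexisting differences between the geographies.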

Predecessors of AITS approaches include methods that seek to match geographies using more limited historical data and so are likely to do a poorer job of controlling for selection bias. Instead, selection bias is discounted as a problem as long as the comparison group is in no better a position on the outcome indicators before the intervention than the treatment group. A relatively well-known example of this approach is Isserman and Rephann's (1995) study of the Appalachian Regional Commission (ARC), in which the authors measured the impact of the ARC on county population growth by identifying a matched "twin" county for each ARC county. Matching was based on both levels (in 1959) and trajectories (from 1950 to 1959), before the ARC began in 1965. The accuracy of each match was then assessed by examining how closely the treatment and comparison twins tracked each other on 1959-to-1965 growth – again, before the ARC was established. Because this method does not require the frequency of data that AITS requires, it may prove more feasible in some cases. However, the approach is more vulnerable to selection bias.

 