Difference in differences
Difference in differences (sometimes 'Difference-in-Differences',[1] 'DID',[2] or 'DD'[3]) is a statistical technique used in econometrics and quantitative sociology that attempts to mimic an experimental research design using observational study data. It calculates the effect of a treatment (i.e., an explanatory variable or an independent variable) on an outcome (i.e., a response variable or dependent variable) by comparing the average change over time in the outcome variable for the treatment group to the average change over time for the control group. This method may be subject to certain biases (mean reversion bias, etc.), although it is intended to eliminate some of the effect of selection bias. In contrast to a within-subjects estimate of the treatment effect (which measures differences over time) or a between-subjects estimate of the treatment effect (which measures the difference between the treatment and control groups), the DID measures the difference in the differences between the treatment and control group over time.
General definition
Difference in differences requires data measured at two or more different time periods. In the example pictured, the treatment group is represented by the line P and the control group is represented by the line S. Both groups are measured on the outcome (dependent) variable at Time 1 before either group has received the treatment (i.e., the independent or explanatory variable), represented by the points P1 and S1. The treatment group then receives or experiences the treatment and both groups are again measured after this at Time 2. Not all of the difference between the treatment and control groups at Time 2 (that is, the difference between P2 and S2) can be explained as being an effect of the treatment, because the treatment group and control group did not start out at the same point at Time 1. DID therefore calculates the "normal" difference in the outcome variable between the two groups (the difference that would still exist if neither group experienced the treatment), represented by the dotted line Q. (Notice that the slope from P1 to Q is the same as the slope from S1 to S2.) The treatment effect is the difference between the observed outcome and the "normal" outcome (the difference between P2 and Q).
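As a minimal numerical sketch of this logic (the values for P1, P2, S1 and S2 below are hypothetical, chosen only for illustration), the counterfactual point Q and the resulting treatment effect can be computed directly:

```python
# Hypothetical outcome means for the pictured example (illustrative values only).
P1, P2 = 10.0, 18.0   # treatment group at Time 1 and Time 2
S1, S2 = 8.0, 12.0    # control group at Time 1 and Time 2

# "Normal" outcome Q: where the treatment group would have ended up
# had it followed the control group's trend (parallel trends).
Q = P1 + (S2 - S1)

# Treatment effect: observed outcome minus the counterfactual outcome,
# equivalently the difference in the two groups' changes over time.
effect = P2 - Q
assert effect == (P2 - P1) - (S2 - S1)

print(f"Q = {Q}, treatment effect = {effect}")  # Q = 14.0, treatment effect = 4.0
```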
Formal definition
Consider the model

$$ y_{ist} = \gamma_s + \lambda_t + \delta D_{st} + \varepsilon_{ist}, $$

where $y_{ist}$ is the dependent variable for individual $i$, given group $s$ and time $t$. The dimensions $s$ and $t$ may for example be country and time. $\gamma_s$ and $\lambda_t$ are then the vertical intercepts for $s$ and $t$ respectively, $D_{st}$ is a dummy variable indicating treatment status, $\delta$ is the treatment effect, and $\varepsilon_{ist}$ is an error term.

Let

$$ \overline{y}_{st} = \frac{1}{n_s} \sum_{i=1}^{n_s} y_{ist}, \quad \overline{\gamma}_s = \frac{1}{n_s} \sum_{i=1}^{n_s} \gamma_s = \gamma_s, \quad \overline{\lambda}_t = \frac{1}{n_s} \sum_{i=1}^{n_s} \lambda_t = \lambda_t, $$

$$ \overline{D}_{st} = \frac{1}{n_s} \sum_{i=1}^{n_s} D_{st} = D_{st}, \quad \overline{\varepsilon}_{st} = \frac{1}{n_s} \sum_{i=1}^{n_s} \varepsilon_{ist}, $$

and suppose for simplicity that $s = 1, 2$ and $t = 1, 2$. Then

$$ (\overline{y}_{11} - \overline{y}_{12}) - (\overline{y}_{21} - \overline{y}_{22}) = \delta(D_{11} - D_{12}) + \delta(D_{22} - D_{21}) + \overline{\varepsilon}_{11} - \overline{\varepsilon}_{12} + \overline{\varepsilon}_{22} - \overline{\varepsilon}_{21}. $$

The strict exogeneity assumption then implies that

$$ \operatorname{E}\left[ (\overline{y}_{11} - \overline{y}_{12}) - (\overline{y}_{21} - \overline{y}_{22}) \right] = \delta(D_{11} - D_{12}) + \delta(D_{22} - D_{21}). $$

Without loss of generality, assume that $D_{22} = 1$ and $D_{11} = D_{12} = D_{21} = 0$, giving the DID estimator

$$ \hat{\delta} = (\overline{y}_{11} - \overline{y}_{12}) - (\overline{y}_{21} - \overline{y}_{22}), $$

which can be interpreted as the treatment effect of the treatment indicated by $D_{st}$.
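A small simulation can make the estimator concrete. The sketch below draws data from the model above with arbitrary illustrative parameter values (all numbers are hypothetical) and checks that the DID estimator recovers the true $\delta$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters for the model
# y_ist = gamma_s + lambda_t + delta * D_st + eps_ist
gamma = {1: 2.0, 2: 5.0}       # group intercepts
lam = {1: 1.0, 2: 3.0}         # time intercepts
delta = 4.0                    # true treatment effect
D = {(1, 1): 0, (1, 2): 0, (2, 1): 0, (2, 2): 1}  # only group 2 treated at t = 2

n = 100_000  # individuals per (group, time) cell
ybar = {}
for s in (1, 2):
    for t in (1, 2):
        eps = rng.normal(0.0, 1.0, n)
        y = gamma[s] + lam[t] + delta * D[(s, t)] + eps
        ybar[(s, t)] = y.mean()

# DID estimator from the formal definition
delta_hat = (ybar[(1, 1)] - ybar[(1, 2)]) - (ybar[(2, 1)] - ybar[(2, 2)])
print(delta_hat)  # close to the true delta = 4.0
```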
Assumptions
All the assumptions of the OLS model apply equally to DID. In addition, DID requires a parallel trend assumption. The parallel trend assumption says that the time trend $\lambda_2 - \lambda_1$ is the same in both $s = 1$ and $s = 2$. Given that the formal definition above accurately represents reality, this assumption automatically holds. However, a model with $\lambda_{st}$, in which $\lambda_{22} - \lambda_{21} \neq \lambda_{12} - \lambda_{11}$, may well be more realistic.
As illustrated to the right, the treatment effect is the difference between the observed value of y and what the value of y would have been with parallel trends, had there been no treatment. The Achilles' heel of DID is when something other than the treatment changes in one group but not the other at the same time as the treatment, implying a violation of the parallel trend assumption.
To guarantee the accuracy of the DID estimate, the composition of individuals of the two groups is assumed to remain unchanged over time. When using a DID model, various issues that may compromise the results, such as autocorrelation and Ashenfelter dips, must be considered and dealt with.
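The parallel trend assumption cannot be verified for the post-treatment period, but when more than two periods are available a common informal diagnostic is to compare the two groups' pre-treatment trajectories. A minimal sketch of that check is below; the column names are assumptions about the data layout, not part of any standard API:

```python
import pandas as pd

# df is assumed to be a long-format panel with columns:
#   "group"  - "treatment" or "control"
#   "period" - integer time index; treatment is assumed to start after period 0
#   "y"      - the outcome variable
def pretrend_means(df: pd.DataFrame) -> pd.DataFrame:
    """Mean outcome per group for each pre-treatment period.

    Roughly parallel lines across pre-treatment periods are consistent
    with (but do not prove) the parallel trend assumption.
    """
    pre = df[df["period"] <= 0]
    return pre.groupby(["group", "period"])["y"].mean().unstack("group")
```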
Implementation
The DID method can be implemented according to the table below, where the lower right cell is the DID estimator.
| $y_{st}$ | $s = 2$ (treatment) | $s = 1$ (control) | Difference |
|---|---|---|---|
| $t = 1$ | $\overline{y}_{21}$ | $\overline{y}_{11}$ | $\overline{y}_{21} - \overline{y}_{11}$ |
| $t = 2$ | $\overline{y}_{22}$ | $\overline{y}_{12}$ | $\overline{y}_{22} - \overline{y}_{12}$ |
| Change | $\overline{y}_{22} - \overline{y}_{21}$ | $\overline{y}_{12} - \overline{y}_{11}$ | $(\overline{y}_{22} - \overline{y}_{21}) - (\overline{y}_{12} - \overline{y}_{11})$ |
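In code, the table amounts to computing four cell means and differencing the "Change" row. A minimal pandas sketch, assuming a long-format DataFrame with indicator columns `treated` (group) and `post` (period) and an outcome column `y` (these names are assumptions, not fixed conventions):

```python
import pandas as pd

def did_from_means(df: pd.DataFrame) -> float:
    """2x2 difference in differences from cell means.

    Assumes columns: 'treated' (1 = treatment group), 'post' (1 = second
    period), and 'y' (outcome). Returns the lower-right cell of the table:
    (change in the treatment group) minus (change in the control group).
    """
    cell = df.groupby(["treated", "post"])["y"].mean()
    change_treated = cell[(1, 1)] - cell[(1, 0)]
    change_control = cell[(0, 1)] - cell[(0, 0)]
    return change_treated - change_control
```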
Running a regression analysis gives the same result. Consider the OLS model

$$ y = \beta_0 + \beta_1 T + \beta_2 S + \beta_3 (T \cdot S) + \varepsilon, $$

where $T$ is a dummy variable for $t = 2$, and $S$ is a dummy variable for $s = 2$. The composite variable $T \cdot S$ is then a dummy variable indicating when $S = T = 1$. Although it is not shown rigorously here, it turns out that the estimates in this model are

$$ \hat{\beta}_0 = \overline{y}_{11}, \quad \hat{\beta}_1 = \overline{y}_{12} - \overline{y}_{11}, \quad \hat{\beta}_2 = \overline{y}_{21} - \overline{y}_{11}, \quad \hat{\beta}_3 = (\overline{y}_{22} - \overline{y}_{21}) - (\overline{y}_{12} - \overline{y}_{11}), $$

the last of which is equivalent to

$$ \hat{\beta}_3 = (\overline{y}_{11} - \overline{y}_{12}) - (\overline{y}_{21} - \overline{y}_{22}). $$
But this is the expression for the treatment effect that was given in the formal definition and in the above table.
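A sketch of this regression in Python using statsmodels is shown below. The data are synthetic and the effect sizes purely illustrative; the point is only that the coefficient on the interaction term $T \cdot S$ is the DID estimate:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per individual-period observation.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "S": rng.integers(0, 2, n),   # dummy for the treated group (s = 2)
    "T": rng.integers(0, 2, n),   # dummy for the second period (t = 2)
})
# Illustrative true effects: group effect 5, time effect 1, treatment effect 4.
df["y"] = 2 + 1 * df["T"] + 5 * df["S"] + 4 * df["T"] * df["S"] + rng.normal(0, 1, n)

# beta_3, the coefficient on the interaction T:S, is the DID estimate.
model = smf.ols("y ~ T + S + T:S", data=df).fit()
print(model.params["T:S"])  # close to 4
```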
Card & Krueger (1994) example
Consider one of the most famous DID studies, the Card and Krueger article on the minimum wage in New Jersey, published in 1994.[4] Card and Krueger compared employment in the fast food sector in New Jersey and in Pennsylvania, in February 1992 and in November 1992, after New Jersey's minimum wage rose from $4.25 to $5.05 in April 1992. Observing a change in employment in New Jersey only, before and after the treatment, would fail to control for omitted variables such as weather and macroeconomic conditions of the region. By including Pennsylvania as a control in a difference-in-differences model, any bias caused by variables common to New Jersey and Pennsylvania is implicitly controlled for, even when these variables are unobserved. Assuming that New Jersey and Pennsylvania have parallel trends over time, Pennsylvania's change in employment can be interpreted as the change New Jersey would have experienced, had it not increased the minimum wage, and vice versa. The evidence suggested that the increased minimum wage did not induce an increase in unemployment in New Jersey, contrary to what standard economic theory would suggest. The table below shows Card & Krueger's estimates of the treatment effect on employment, measured in FTEs (full-time equivalents). Keeping in mind that the finding is controversial, Card and Krueger estimate that the $0.80 minimum wage increase in New Jersey led to a 2.75 FTE increase in employment.
|  | New Jersey | Pennsylvania | Difference |
|---|---|---|---|
| February | 20.44 | 23.33 | -2.89 |
| November | 21.03 | 21.17 | -0.14 |
| Change | 0.59 | -2.16 | 2.75 |
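The 2.75 FTE estimate can be reproduced directly from the four cell means in the table; a minimal check in Python:

```python
# Card & Krueger's mean FTE employment per store (from the table above).
nj_feb, nj_nov = 20.44, 21.03   # New Jersey (treatment)
pa_feb, pa_nov = 23.33, 21.17   # Pennsylvania (control)

did = (nj_nov - nj_feb) - (pa_nov - pa_feb)
print(round(did, 2))  # 2.75
```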
References
- Card, David; Krueger, Alan B. (1994). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania". American Economic Review. 84 (4): 772–793.
External links
- How Does Charitable Giving Respond to Incentives and Income? Dynamic Panel Estimates Accounting for Predictable Changes in Taxation, National Bureau of Economic Research, July 2005
- T. Conley and C. Taber, "Inference with "Difference in Differences" with a Small Number of Policy Changes", National Bureau of Economic Research, July 2005
- Difference in Difference Estimation, Healthcare Economist website