Synthetic control with pymc models

import arviz as az

import causalpy as cp
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
seed = 42

Load data

df = cp.load_data("sc")
treatment_time = 70

Run the analysis

Note

The random_seed keyword argument for the PyMC sampler is not necessary. We use it here so that the results are reproducible.

# Note, we do not want an intercept in this model
result = cp.pymc_experiments.SyntheticControl(
    df,
    treatment_time,
    formula="actual ~ 0 + a + b + c + d + e + f + g",
    model=cp.pymc_models.WeightedSumFitter(
        sample_kwargs={"target_accept": 0.95, "random_seed": seed}
    ),
)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [beta, sigma]
100.00% [8000/8000 00:12<00:00 Sampling 4 chains, 2 divergences]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 12 seconds.
There were 2 divergences after tuning. Increase `target_accept` or reparameterize.
Sampling: [beta, sigma, y_hat]
Sampling: [y_hat]
Sampling: [y_hat]
Sampling: [y_hat]
Sampling: [y_hat]
fig, ax = result.plot(plot_predictors=True)
[Figure: observed outcome and synthetic control (weighted sum of predictors) before and after the intervention, with the estimated causal impact]
result.summary()
==================================Pre-Post Fit==================================
Formula: actual ~ 0 + a + b + c + d + e + f + g
Model coefficients:
a                             0.34, 94% HDI [0.3, 0.38]
b                             0.048, 94% HDI [0.0098, 0.088]
c                             0.31, 94% HDI [0.26, 0.35]
d                             0.055, 94% HDI [0.012, 0.1]
e                             0.025, 94% HDI [0.0013, 0.067]
f                             0.19, 94% HDI [0.11, 0.26]
g                             0.038, 94% HDI [0.0026, 0.085]
sigma                         0.26, 94% HDI [0.22, 0.3]
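
The coefficients act as weights on the control units: the synthetic control is a weighted combination of the predictors. Assuming the weights are constrained to the simplex (e.g. via a Dirichlet prior, as in CausalPy's WeightedSumFitter), their posterior means should sum to approximately 1. A quick check using the values from the table above:

```python
import numpy as np

# Posterior mean weights copied from the summary table above. If the model
# constrains the weights to the simplex, their point estimates should sum
# to roughly 1 (small deviation expected from rounding in the table).
weights = np.array([0.34, 0.048, 0.31, 0.055, 0.025, 0.19, 0.038])
print(weights.sum())  # ≈ 1.0
```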

As well as the model coefficients, we might be interested in the average causal impact and the average cumulative causal impact.

Note

Better output for the summary statistics is in progress!

az.summary(result.post_impact.mean("obs_ind"))
    mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
x  -1.71  0.215  -2.104   -1.312      0.006    0.004    1352.0    2030.0    1.0

Warning

Care must be taken with the mean impact statistic. It only makes sense to use it if the intervention appears to have had a lasting (and roughly constant) effect on the outcome variable. If the effect is transient, then much of the post-intervention period will cover times where the impact of the intervention has 'worn off', making the mean impact's real meaning hard to interpret.
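
To see why a transient effect dilutes the mean impact statistic, here is a toy numerical sketch (synthetic numbers, not CausalPy output) comparing a sustained effect with one that decays back to zero:

```python
import numpy as np

# Toy illustration: mean post-intervention impact for a sustained effect
# versus a transient one over the same 30-observation window.
t = np.arange(30)                    # post-intervention time index
sustained = np.full(30, -1.7)        # effect persists at roughly -1.7
transient = -1.7 * np.exp(-t / 5.0)  # effect decays back towards zero

print(sustained.mean())  # -1.7: matches the true ongoing effect
print(transient.mean())  # ≈ -0.31: diluted, most of the window has ~zero effect
```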

We can also ask for the summary statistics of the cumulative causal impact.

# get index of the final time point
index = result.post_impact_cumulative.obs_ind.max()
# grab the posterior distribution of the cumulative impact at this final time point
last_cumulative_estimate = result.post_impact_cumulative.sel({"obs_ind": index})
# get summary stats
az.summary(last_cumulative_estimate)
      mean    sd   hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
x  -51.288  6.44  -63.121  -39.352      0.174    0.123    1352.0    2030.0    1.0
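
As a sanity check (toy arithmetic, not CausalPy output): the cumulative impact at the final time point is the sum of the per-time-step impacts, so with roughly 30 post-treatment observations, the mean impact of about -1.71 reported earlier implies a final cumulative impact of about -1.71 × 30 ≈ -51.3, consistent with the summary above:

```python
import numpy as np

# Toy check: the cumulative impact at the last time point is the running sum
# of per-time-step impacts. A constant impact of -1.71 over 30 post-treatment
# observations reproduces the ~ -51.3 reported above.
impact = np.full(30, -1.71)
cumulative = np.cumsum(impact)
print(cumulative[-1])  # ≈ -51.3
```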