Commit 5bdf3f6

2 parents 08dcc4c + 036cdbb commit 5bdf3f6

13 files changed

Lines changed: 934 additions & 521 deletions

File tree

dev/build/paper/paper.html

Lines changed: 216 additions & 104 deletions
Large diffs are not rendered by default.

dev/build/paper/paper.pdf

24.9 KB
Binary file not shown.

dev/build/paper/paper_files/libs/bootstrap/bootstrap.min.css

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

dev/paper/paper.qmd

Lines changed: 4 additions & 0 deletions
@@ -24,8 +24,12 @@ execute:
 
 {{< include sections/methodology.qmd >}}
 
+{{< include sections/methodology_2.qmd >}}
+
 {{< include sections/empirical.qmd >}}
 
+{{< include sections/empirical_2.qmd >}}
+
 {{< include sections/discussion.qmd >}}
 
 {{< include sections/limitations.qmd >}}

dev/paper/paper.tex

Lines changed: 537 additions & 288 deletions
Large diffs are not rendered by default.

dev/paper/sections/abstract.qmd

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 # Abstract
 
-Existing work on Counterfactual Explanations (CE) and Algorithmic Recourse (AR) has largely been limited to the static setting: given some classifier we are interested in finding close, actionable, realistic, sparse, diverse and ideally causally founded counterfactuals. The ability of CE to handle dynamics like data and model drift remains a largely unexplored research challenge at this point. Only one recent work considers the implications of exogenous domain and model shifts. This project instead focuses on endogenous dynamics, that is shifts that occur when AR is actually implemented by a proportion of individuals. Early findings suggest that the involved shifts may be large with important implications on the validity of AR and the overall characteristics of the sample population.
+Existing work on Counterfactual Explanations (CE) and Algorithmic Recourse (AR) has largely been limited to the static setting and focused on single individuals: given some estimated model, the goal is to find valid counterfactuals for individual instances that fulfill various desiderata. The ability of such counterfactuals to handle dynamics like data and model drift remains a largely unexplored research challenge at this point. There has also been surprisingly little work on the related question of how the actual implementation of recourse by one individual may affect other individuals. Through this work we aim to close that gap by systematizing and extending existing knowledge. We first show that many of the existing methodologies can be collectively described by a generalized framework. We then argue that the existing framework fails to account for a hidden external cost of recourse that only reveals itself when studying the endogenous dynamics of recourse at the group level. Through simulation experiments involving various popular counterfactual generators and several benchmark datasets, we generate a total of XX million Counterfactual Explanations and study the resulting domain and model shifts. We find that the induced shifts are substantial enough to likely impede the applicability of Algorithmic Recourse in practice. Fortunately, we find various potential mitigation strategies that can be used in combination with existing approaches. Our simulation framework for studying recourse dynamics is fast and open-sourced.

dev/paper/sections/empirical.qmd

Lines changed: 66 additions & 1 deletion
@@ -1,4 +1,69 @@
-# Experiments {#sec-empirical}
+# Experiment Setup {#sec-empirical}

## Data {#sec-empirical-data}

We have chosen to work with both synthetic and real-world datasets. Using synthetic data allows us to impose distributional properties that may affect the resulting recourse dynamics. Following @upadhyay2021towards, we generate synthetic data in $\mathbb{R}^2$, which also allows for a visual interpretation of the results. Real-world data is used to assess whether endogenous dynamics also occur in higher-dimensional settings.

### Synthetic data

```{julia}
using AlgorithmicRecourseDynamics
using Plots, PlotThemes
theme(:wong)

# Load the catalogue of synthetic datasets and plot them side by side.
catalogue = AlgorithmicRecourseDynamics.Data.load_synthetic()
function plot_data(data, title)
    plt = plot(title=title)
    scatter!(data)
    return plt
end
plts = [plot_data(data, name) for (name, data) in catalogue]
plt = plot(plts..., layout=(1, 4), size=(850, 200))
savefig(plt, "dev/paper/www/synthetic_data.png")
```

We use four synthetic binary classification datasets consisting of 1000 samples each. The datasets are presented in @fig-synthetic-data (see also Appendix A for a formal description). Samples from the negative class are marked in blue while samples of the positive class are marked in orange.

![Synthetic classification datasets used in our experiments.](www/synthetic_data.png){#fig-synthetic-data fig.pos="h" width="8cm" height="2cm"}

Ex-ante, we expect that by construction Wachter will create a new cluster of counterfactual instances in the proximity of the initial decision boundary. Thus, the choice of black-box model may have an impact on the recourse paths. For generators that use latent space search (REVISE @joshi2019towards, CLUE @antoran2020getting) or rely on (and have access to) probabilistic models (CLUE @antoran2020getting, Greedy @schut2021generating), we expect counterfactuals to end up in regions of the target domain that are densely populated by training samples. Of course, this expectation hinges on how effective said probabilistic models are at capturing predictive uncertainty. Finally, we expect the counterfactuals generated by DiCE to be spread uniformly around the feature space inside the target class^[As we mentioned earlier, the diversity constraint used by DiCE is only effective when at least two counterfactuals are being generated. We have therefore decided to always generate 5 counterfactuals for each generator and randomly pick one of them.]. In summary, we expect the endogenous shifts induced by Wachter to be larger than those induced by all other generators, since Wachter is the only approach that is not concerned with generating what we have defined as meaningful counterfactuals.

### Real-world data

We use three different real-world datasets from the Finance and Economics domain, all of which are tabular and can be used for binary classification. Firstly, we use the **Give Me Some Credit** dataset, which was open-sourced on Kaggle for the task of predicting whether a borrower is likely to experience financial difficulties in the next two years [@gmsc_data]. It originally consists of 250,000 instances with 11 numerical attributes. Secondly, we use the **UCI defaultCredit** dataset [@yeh2009comparisons], a benchmark dataset that can be used to train binary classifiers to predict whether credit card clients default on their payment. In its raw form it consists of 23 explanatory variables: 4 categorical features relating to demographic attributes^[These have been omitted from the analysis. See @sec-limit-data for details.] and 19 continuous features largely relating to individuals' payment histories and the amount of credit outstanding. Both of these datasets have been used in the literature on Algorithmic Recourse before (see for example @pawelczyk2021carla, @joshi2019towards and @ustun2019actionable), presumably because they constitute real-world classification tasks involving individuals that compete for access to credit.

As a third dataset we include the **California Housing** dataset, derived from the 1990 U.S. census and sourced through scikit-learn [@pedregosa2011scikit; @pace1997sparse]. It consists of 8 continuous features that can be used to predict the median house price for California districts. The continuous outcome variable is binarized as $\tilde{y}=\mathbb{I}_{y>\text{median}(Y)}$, indicating whether the median house price of a given district lies above the median of all districts. While we have not seen this dataset used in the previous literature on AR, others have used the Boston Housing dataset in a similar fashion (see for example @schut2021generating). We initially also conducted experiments on that dataset, but eventually discarded it, since it has been found to suffer from an ethical problem [@carlisle2019racist].

Since the simulations involve generating counterfactuals for a significant proportion of the entire sample of individuals, we have randomly undersampled each dataset to yield balanced subsamples consisting of 10,000 individuals each. We have also standardized all explanatory features, since our chosen classifiers are sensitive to scale.

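The sketch below illustrates the kind of preprocessing described above, including the binarization of the California Housing target from the previous paragraph. It is illustrative only: the function and variable names are our own and not part of the actual pipeline, and we assume each class contains at least 5,000 observations.

```julia
# Hedged sketch of the preprocessing: binarize a continuous target, undersample
# to a balanced subsample of 10,000 individuals and standardize all features.
using DataFrames, Statistics, StatsBase

function preprocess(df::DataFrame, target::Symbol; n=10_000)
    y = df[!, target] .> median(df[!, target])          # ỹ = 𝟙(y > median(Y)); skipped if already binary
    X = Matrix{Float64}(select(df, Not(target)))
    pos, neg = findall(y), findall(.!y)
    idx = [sample(pos, n ÷ 2; replace=false); sample(neg, n ÷ 2; replace=false)]
    X, y = X[idx, :], y[idx]
    X = (X .- mean(X, dims=1)) ./ std(X, dims=1)        # standardize each explanatory feature
    return X, y
end
```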
## Classifiers and Generative Models {#sec-empirical-classifiers}

For each dataset and generator we look at three different types of classifiers, all of which are built and trained using `Flux.jl` [@innes2018fashionable]: firstly, a simple linear classifier - **Logistic Regression** - implemented as a single linear layer with sigmoid activation; secondly, a multilayer perceptron (**MLP**); and finally, a **Deep Ensemble** composed of five MLPs following @lakshminarayanan2016simple, which serves as our only probabilistic classifier. We have chosen to work with deep ensembles both for their simplicity and their effectiveness at modelling predictive uncertainty. They are also the model of choice in @schut2021generating. The actual neural network architectures are kept simple (@tbl-mlp), since we are only marginally concerned with achieving good initial classifier performance. For the real-world datasets we use mini-batch training and dropout regularization.

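To give a sense of what these three model types look like, the following is a minimal `Flux.jl` sketch, not the exact training code used in our experiments; the layer sizes simply follow @tbl-mlp and the input dimension `d` and helper names are placeholders.

```julia
using Flux
using Statistics: mean

d = 11                                        # number of input features (example value)

# Logistic regression: a single linear layer with sigmoid activation.
logreg = Chain(Dense(d, 1, σ))

# MLP for the real-world datasets: two hidden layers of width 32 with dropout.
mlp() = Chain(
    Dense(d, 32, relu), Dropout(0.25),
    Dense(32, 32, relu), Dropout(0.25),
    Dense(32, 1),
)

# Deep ensemble: five independently initialized MLPs; predictions are averaged.
ensemble = [mlp() for _ in 1:5]
predict(ensemble, X) = mean(σ.(m(X)) for m in ensemble)
```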
The Latent Space generators rely on separate generative models. Following the authors of both REVISE and CLUE, we use Variational Autoencoders (**VAE**) for this purpose. As with the classifiers, we deliberately choose to work with fairly simple architectures (@tbl-vae). More expressive generative models generally also lead to more meaningful counterfactuals produced by Latent Space generators. But in our view this should simply be considered a vulnerability of counterfactual generators that rely on surrogate models to learn realistic representations of the underlying data.

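As an illustration, a stripped-down VAE of the kind we have in mind could be set up in `Flux.jl` roughly as follows; this is merely a sketch under our own naming conventions, and the exact architecture, loss weighting and training loop used in our experiments differ.

```julia
using Flux

# A deliberately small VAE: the encoder maps inputs to the mean and log-std of a
# latent Gaussian, the decoder maps latent samples back to feature space.
struct VAE
    encoder
    μ
    logσ
    decoder
end
Flux.@functor VAE

VAE(d::Int, h::Int) = VAE(
    Dense(d, h, relu),
    Dense(h, h),
    Dense(h, h),
    Chain(Dense(h, h, relu), Dense(h, d)),
)

function (m::VAE)(x)
    hidden = m.encoder(x)
    μ, logσ = m.μ(hidden), m.logσ(hidden)
    z = μ .+ exp.(logσ) .* randn(Float32, size(μ))    # reparameterization trick
    return m.decoder(z), μ, logσ
end

# Reconstruction error plus KL divergence to the standard normal prior.
function elbo_loss(m::VAE, x)
    x̂, μ, logσ = m(x)
    kl = 0.5f0 * sum(exp.(2 .* logσ) .+ μ .^ 2 .- 1 .- 2 .* logσ) / size(x, 2)
    return Flux.Losses.mse(x̂, x) + kl
end
```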
All classifiers and generative models are retrained for 10 epochs in each round $t$ of the experiment. Rather than retraining models from scratch, we initialize all parameters at their previous values ($t-1$) and backpropagate for 10 epochs using the new training data as inputs to the existing model.

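A hedged sketch of this warm-start retraining step is shown below; the loss and optimizer are placeholders and we assume a classifier that outputs logits.

```julia
using Flux

# Warm-start retraining: the model carried over from round t-1 is reused as-is
# and trained for a further 10 epochs on the updated data (X, y).
function retrain!(model, X, y; epochs=10, opt=ADAM())
    loss(x, y) = Flux.Losses.logitbinarycrossentropy(model(x), y)
    for _ in 1:epochs
        Flux.train!(loss, Flux.params(model), [(X, y)], opt)
    end
    return model
end
```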
::: {#tbl-panel layout-ncol=1}

|            | Hidden Dim. | Hidden Layers | Batch | Dropout |
|------------|-------------|---------------|-------|---------|
| Synthetic  | 32          | 1             | -     | -       |
| Real-World | 32          | 2             | 50    | 0.25    |

: MLP {#tbl-mlp}

|            | Hidden Dim. | Epochs |
|------------|-------------|--------|
| Synthetic  | 2           | 100    |
| Real-World | 8           | 250    |

: Variational Autoencoder {#tbl-vae}

Model Architectures
:::

dev/paper/sections/empirical_2.qmd

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
# Experiments {#sec-empirical-2}

## Endogenous Macrodynamics

## Potential Mitigation Strategies

### Gravitational Counterfactual Explanations {#sec-empirical-2-mitigate}

A straightforward choice simply extends the baseline approach by @wachter2017counterfactual: instead of only penalizing the distance of the individual's counterfactual to its factual, we propose to also penalize its distance to some sensible point in the target domain, for example the sample average $\bar{\mathbf{x}}$. For such a recourse objective, higher choices of $\lambda_2$ relative to $\lambda_1$ will lead counterfactuals to gravitate towards the specified point in the target domain. In the remainder of this paper we will therefore refer to this approach as the **Gravitational** generator when we investigate its potential usefulness for mitigating endogenous macrodynamics^[Note that despite the naming convention our goal here is not to provide yet another counterfactual generator, but merely to investigate the most simple penalty we can think of with respect to its effectiveness.].

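To make the structure of this penalty explicit, one way of writing the corresponding recourse objective is sketched below; the notation follows the loss term $\ell(M(f(s^\prime)),t)$ used elsewhere in this paper, while the exact choice of distance function is left open and $x$ denotes the factual:

$$
s^\prime = \arg\min_{s^\prime} \left\{ \ell(M(f(s^\prime)),t) + \lambda_1 \, \text{dist}\left(f(s^\prime), x\right) + \lambda_2 \, \text{dist}\left(f(s^\prime), \bar{\mathbf{x}}\right) \right\}
$$

Setting $\lambda_2=0$ recovers the baseline approach, while increasing $\lambda_2$ relative to $\lambda_1$ strengthens the gravitational pull towards $\bar{\mathbf{x}}$.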
#### A note on convergence

For this simple mitigation strategy underlying the Gravitational generator to work as expected, one needs to ensure that the counterfactual search continues even after a predetermined threshold probability $\gamma$ has already been reached. @fig-convergence illustrates this distinction: if one chooses to terminate the search once the desired threshold is reached (left panel), the gravitational pull towards $\bar{\mathbf{x}}$ never fully takes effect (compare to the right panel). More generally, if convergence is defined simply in terms of flipping the predicted label with some desired degree of confidence, this corresponds to essentially ignoring any parts of the counterfactual search objective that do not involve $\ell(M(f(s_k^\prime)),t)$ beyond that point. While this may be appropriate for some applications, in general this seems like an odd convention. Since we have nonetheless seen convergence specified simply in terms of reaching the threshold probability in some places^[@joshi2019towards define convergence of Algorithm 1 in this way. The implementation of @wachter2017counterfactual in CARLA is also defined in this way.], we thought it worth making this distinction explicit.

![Comparison of counterfactual search outcome with simple (left) and strict convergence (right).](www/gravitational_generator_comparison.png){#fig-convergence fig.pos="h" width=45%}
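To make the distinction concrete, the following sketch contrasts the two stopping criteria for a gradient-based counterfactual search; it is our own pseudocode rather than the implementation used in our experiments, and the function names, step size and tolerances are placeholders.

```julia
using Zygote: gradient
using LinearAlgebra: norm

# `objective` is the full recourse objective in s′ (including distance penalties);
# `target_prob` returns the predicted probability of the target class.
function search(s′, objective, target_prob; γ=0.75, η=0.1, strict=true, max_iter=1000)
    for _ in 1:max_iter
        g = gradient(objective, s′)[1]
        s′ = s′ .- η .* g                      # gradient step on the full objective
        if strict
            norm(g) < 1e-3 && break            # strict: stop only once the objective has stabilized
        else
            target_prob(s′) >= γ && break      # simple: stop as soon as the threshold γ is reached
        end
    end
    return s′
end
```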
