
Commit fe64d1d ("some minor things")
1 parent 9eb6f0e

6 files changed: 6 additions & 6 deletions


paper/paper.pdf (14.5 KB)

Binary file not shown.

paper/paper.tex

Lines changed: 3 additions & 3 deletions
@@ -594,7 +594,7 @@ \subsection{Domain and Model Shifts}\label{related-shifts}}
 \hypertarget{related-benchmark}{%
 \subsection{Benchmarking Counterfactual Generators}\label{related-benchmark}}

-Despite the large and growing number of approaches to counterfactual search, there have been surprisingly few benchmark studies that compare different methodologies. This may be partially due to limited software availability in this space. Recent work has started to address this gap: firstly, Oliveira and Martens (2021) run a large benchmarking study using different algorithmic aproaches and numerous tabular datasets; secondly, Pawelczyk et al. (2021) introduce a Python framework - CARLA - that can be used to apply and benchmark different methodologies; finally, \texttt{CounterfactualExplanations.jl} (Altmeyer 2022) provides an extensible, fast and language-agnostic implementation in Julia. Since the experiments presented here involve extensive simulations, we have relied on and extended the Julia implementation due to the associated performance benefits. In particular, we have built a framework on top of \texttt{CounterfactualExplanations.jl} that extends the functionality from static benchmarks to simulation experiments: \texttt{AlgorithmicRecourseDynamics.jl}\footnote{The package is available from \ldots{}}. The core concepts implemented in that package reflect what is presented in Section \ref{method-2} of this paper.
+Despite the large and growing number of approaches to counterfactual search, there have been surprisingly few benchmark studies that compare different methodologies. This may be partially due to limited software availability in this space. Recent work has started to address this gap: firstly, Oliveira and Martens (2021) run a large benchmarking study using different algorithmic approaches and numerous tabular datasets; secondly, Pawelczyk et al. (2021) introduce a Python framework - CARLA - that can be used to apply and benchmark different methodologies; finally, \texttt{CounterfactualExplanations.jl} (Altmeyer 2022) provides an extensible, fast and language-agnostic implementation in Julia. Since the experiments presented here involve extensive simulations, we have relied on and extended the Julia implementation due to the associated performance benefits. In particular, we have built a framework on top of \texttt{CounterfactualExplanations.jl} that extends the functionality from static benchmarks to simulation experiments: \href{https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md}{\texttt{AlgorithmicRecourseDynamics.jl}}\footnote{The code is available from the following anonymized GitHub repository: \url{https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md}.}. The core concepts implemented in that package reflect what is presented in Section \ref{method-2} of this paper.

 \hypertarget{method}{%
 \section{Gradient-Based Recourse Revisited}\label{method}}
@@ -636,7 +636,7 @@ \subsection{\ldots{} towards collective recourse}\label{towards-collective-recou
 \hypertarget{method-2}{%
 \section{Modeling Endogenous Macrodynamics in Algorithmic Recourse}\label{method-2}}

-In the following we describe the framework we propose for modeling and analysing endogenous macrodynamics in Algorithmic Recourse. We first describe the basic simulations that were generated to produce the findings in this work and also constitute the core of \texttt{AlgorithmicRecourseDynamics.jl} - the Julia package we introduced earlier. The remainder of this section then introduces various evaluation metrics that can be used to benchmark different counterfactual generators with respect to how they perform in the dynamic setting.
+In the following, we describe the framework we propose for modeling and analysing endogenous macrodynamics in Algorithmic Recourse. We first describe the basic simulations that were generated to produce the findings in this work and that also constitute the core of \href{https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md}{\texttt{AlgorithmicRecourseDynamics.jl}} - the Julia package we introduced earlier. The remainder of this section then introduces various evaluation metrics that can be used to benchmark different counterfactual generators with respect to how they perform in the dynamic setting.

 \hypertarget{method-2-experiment}{%
 \subsection{Simulations}\label{method-2-experiment}}
@@ -904,7 +904,7 @@ \subsection{Classifiers}\label{classifiers}}
 \hypertarget{conclusion}{%
 \section{Concluding Remarks}\label{conclusion}}

-This work has revisited and extended some of the most general and defining concepts underlying the literature on Counterfactual Explanations and, in particular, Algorithmic Recourse. We demonstrate that long-held beliefs as to what defines optimality in AR, are too short-sighted to serve as a foundation for applications of recourse in practice. Specifically, we run multiple experiments that simulate the application of recourse in practice using various popular counterfactual generators and find that all of them induce substantial domain and model shifts. We argue that these shifts should be considered as an expected external cost of individual recourse and call for a paradigm shift from individual to collective recourse. By proposing an adapted counterfactual search objective that incorporates this cost, we make that paradigm shift explicit. We show that this modified objective lends itself to mitigation strategies that can be used to effectively decrease the magnitude of induced domain and model shifts. Through our work we hope to inspire future research on this important topic. To this end we have open-sourced all of our code along with a Julia package - \texttt{AlgorithmicRecourseDynamics.jl}. The package is built on top of \texttt{CounterfactualExplanations.jl} and inherits its extensibility (Altmeyer 2022). That is to say that future researchers should find it relatively easy to replicate, modify and extend the simulation experiments presented here and apply to their own custom counterfactual generators.
+This work has revisited and extended some of the most general and defining concepts underlying the literature on Counterfactual Explanations and, in particular, Algorithmic Recourse. We demonstrate that long-held beliefs as to what defines optimality in AR are too short-sighted to serve as a foundation for applications of recourse in practice. Specifically, we run multiple experiments that simulate the application of recourse in practice using various popular counterfactual generators and find that all of them induce substantial domain and model shifts. We argue that these shifts should be considered as an expected external cost of individual recourse and call for a paradigm shift from individual to collective recourse. By proposing an adapted counterfactual search objective that incorporates this cost, we make that paradigm shift explicit. We show that this modified objective lends itself to mitigation strategies that can be used to effectively decrease the magnitude of induced domain and model shifts. Through our work we hope to inspire future research on this important topic. To this end, we have open-sourced all of our code along with a Julia package - \href{https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md}{\texttt{AlgorithmicRecourseDynamics.jl}}. The package is built on top of \texttt{CounterfactualExplanations.jl} and inherits its extensibility (Altmeyer 2022). That is to say, future researchers should find it relatively easy to replicate, modify and extend the simulation experiments presented here and apply them to their own custom counterfactual generators.

 \hypertarget{references}{%
 \section*{References}\label{references}}

paper/sections/conclusion.rmd

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 # Concluding Remarks {#conclusion}

-This work has revisited and extended some of the most general and defining concepts underlying the literature on Counterfactual Explanations and, in particular, Algorithmic Recourse. We demonstrate that long-held beliefs as to what defines optimality in AR, are too short-sighted to serve as a foundation for applications of recourse in practice. Specifically, we run multiple experiments that simulate the application of recourse in practice using various popular counterfactual generators and find that all of them induce substantial domain and model shifts. We argue that these shifts should be considered as an expected external cost of individual recourse and call for a paradigm shift from individual to collective recourse. By proposing an adapted counterfactual search objective that incorporates this cost, we make that paradigm shift explicit. We show that this modified objective lends itself to mitigation strategies that can be used to effectively decrease the magnitude of induced domain and model shifts. Through our work we hope to inspire future research on this important topic. To this end we have open-sourced all of our code along with a Julia package - `AlgorithmicRecourseDynamics.jl`. The package is built on top of `CounterfactualExplanations.jl` and inherits its extensibility [@altmeyer2022CounterfactualExplanations]. That is to say that future researchers should find it relatively easy to replicate, modify and extend the simulation experiments presented here and apply to their own custom counterfactual generators.
+This work has revisited and extended some of the most general and defining concepts underlying the literature on Counterfactual Explanations and, in particular, Algorithmic Recourse. We demonstrate that long-held beliefs as to what defines optimality in AR are too short-sighted to serve as a foundation for applications of recourse in practice. Specifically, we run multiple experiments that simulate the application of recourse in practice using various popular counterfactual generators and find that all of them induce substantial domain and model shifts. We argue that these shifts should be considered as an expected external cost of individual recourse and call for a paradigm shift from individual to collective recourse. By proposing an adapted counterfactual search objective that incorporates this cost, we make that paradigm shift explicit. We show that this modified objective lends itself to mitigation strategies that can be used to effectively decrease the magnitude of induced domain and model shifts. Through our work we hope to inspire future research on this important topic. To this end, we have open-sourced all of our code along with a Julia package - [`AlgorithmicRecourseDynamics.jl`](https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md). The package is built on top of `CounterfactualExplanations.jl` and inherits its extensibility [@altmeyer2022CounterfactualExplanations]. That is to say, future researchers should find it relatively easy to replicate, modify and extend the simulation experiments presented here and apply them to their own custom counterfactual generators.

paper/sections/methodology_2.rmd

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Modeling Endogenous Macrodynamics in Algorithmic Recourse {#method-2}

-In the following we describe the framework we propose for modeling and analysing endogenous macrodynamics in Algorithmic Recourse. We first describe the basic simulations that were generated to produce the findings in this work and also constitute the core of `AlgorithmicRecourseDynamics.jl` - the Julia package we introduced earlier. The remainder of this section then introduces various evaluation metrics that can be used to benchmark different counterfactual generators with respect to how they perform in the dynamic setting.
+In the following, we describe the framework we propose for modeling and analysing endogenous macrodynamics in Algorithmic Recourse. We first describe the basic simulations that were generated to produce the findings in this work and that also constitute the core of [`AlgorithmicRecourseDynamics.jl`](https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md) - the Julia package we introduced earlier. The remainder of this section then introduces various evaluation metrics that can be used to benchmark different counterfactual generators with respect to how they perform in the dynamic setting.

 ## Simulations {#method-2-experiment}

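For intuition, the simulation loop that this paragraph refers to (counterfactual search for negatively classified individuals, implementation of recourse, retraining, and measurement of the induced shifts) can be illustrated with a minimal, self-contained toy. The sketch below is hypothetical Python, not the actual API of `AlgorithmicRecourseDynamics.jl` or `CounterfactualExplanations.jl`; all names (`train_logit`, `counterfactual`, the shift metrics) are made up for illustration, and a Wachter-style gradient search stands in for the paper's various generators:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_logit(X, y, lr=0.1, epochs=500):
    """Fit a logistic classifier (no intercept) by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def counterfactual(x, w, target=0.6, lr=0.1, lam=0.05, steps=200):
    """Wachter-style search: descend on -log p(y=1|x') + (lam/2)||x' - x||^2
    until the predicted probability reaches the target."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(x_cf @ w)
        if p >= target:
            break
        x_cf -= lr * (-(1 - p) * w + lam * (x_cf - x))
    return x_cf

# Toy data: two Gaussian blobs (class 0 around -2, class 1 around +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0.0, 1.0], 100)
w0 = train_logit(X, y)

# One simulation round: every negatively classified individual implements
# recourse (an extreme case; a real simulation would sample a fraction) ...
neg = sigmoid(X @ w0) < 0.5
X1, y1 = X.copy(), y.copy()
for i in np.where(neg)[0]:
    X1[i] = counterfactual(X[i], w0)
y1[neg] = 1.0  # ... and is subsequently labelled positive.

# Retrain on the shifted data and quantify the endogenous shifts.
w1 = train_logit(X1, y1)
domain_shift = np.linalg.norm(X1.mean(axis=0) - X.mean(axis=0))
model_shift = np.linalg.norm(w1 - w0)
print(f"domain shift: {domain_shift:.3f}, model shift: {model_shift:.3f}")
```

Iterating this round and replacing the mean-difference and parameter-distance placeholders with the evaluation metrics introduced in the section above would give the basic shape of the benchmarking experiments.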
paper/sections/related.rmd

Lines changed: 1 addition & 1 deletion
@@ -32,6 +32,6 @@ In the context of Algorithmic Recourse, domain and model shifts were first broug

 ## Benchmarking Counterfactual Generators {#related-benchmark}

-Despite the large and growing number of approaches to counterfactual search, there have been surprisingly few benchmark studies that compare different methodologies. This may be partially due to limited software availability in this space. Recent work has started to address this gap: firstly, @de2021framework run a large benchmarking study using different algorithmic aproaches and numerous tabular datasets; secondly, @pawelczyk2021carla introduce a Python framework - CARLA - that can be used to apply and benchmark different methodologies; finally, `CounterfactualExplanations.jl` [@altmeyer2022CounterfactualExplanations] provides an extensible, fast and language-agnostic implementation in Julia. Since the experiments presented here involve extensive simulations, we have relied on and extended the Julia implementation due to the associated performance benefits. In particular, we have built a framework on top of `CounterfactualExplanations.jl` that extends the functionality from static benchmarks to simulation experiments: `AlgorithmicRecourseDynamics.jl`^[The package is available from ...]. The core concepts implemented in that package reflect what is presented in Section \@ref(method-2) of this paper.
+Despite the large and growing number of approaches to counterfactual search, there have been surprisingly few benchmark studies that compare different methodologies. This may be partially due to limited software availability in this space. Recent work has started to address this gap: firstly, @de2021framework run a large benchmarking study using different algorithmic approaches and numerous tabular datasets; secondly, @pawelczyk2021carla introduce a Python framework - CARLA - that can be used to apply and benchmark different methodologies; finally, `CounterfactualExplanations.jl` [@altmeyer2022CounterfactualExplanations] provides an extensible, fast and language-agnostic implementation in Julia. Since the experiments presented here involve extensive simulations, we have relied on and extended the Julia implementation due to the associated performance benefits. In particular, we have built a framework on top of `CounterfactualExplanations.jl` that extends the functionality from static benchmarks to simulation experiments: [`AlgorithmicRecourseDynamics.jl`](https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md)^[The code is available from the following anonymized GitHub repository: [https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md](https://anonymous.4open.science/r/AlgorithmicRecourseDynamics/README.md).]. The core concepts implemented in that package reflect what is presented in Section \@ref(method-2) of this paper.

paper/www/synthetic_results.png (39.3 KB)

Binary file not shown.
