
Commit eb913e7

Fun with latex templates
1 parent ffc4331 commit eb913e7

35 files changed

Lines changed: 680 additions & 2765 deletions

Project.toml

Lines changed: 2 additions & 1 deletion
```diff
@@ -1,8 +1,9 @@
 [deps]
-AlgorithmicRecourse = "2f13d31b-18db-44c1-bc43-ebaf2cff0be0"
 BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
+BayesLaplace = "c52c1a26-f7c5-402b-80be-ba1e638ad478"
 CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
+CounterfactualExplanations = "2f13d31b-18db-44c1-bc43-ebaf2cff0be0"
 DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 EvalMetrics = "251d5f9e-10c1-4699-ba24-e0ad168fa3e4"
```

_quarto.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,9 +1,9 @@
 project:
   title: "Algorithmic Recourse Dynamics"
 filters:
-  - include-files.lua
+  - lua/include-files.lua
   - quarto
-  - abstract-to-meta.lua
+  - lua/abstract-to-meta.lua
 bibliography: https://raw.githubusercontent.com/pat-alt/bib/main/bib.bib
 date: today
 date-format: long
```

dev/submissions/aies/extended_abstract/extended_abstract.fdb_latexmk

Lines changed: 217 additions & 0 deletions
Large diffs are not rendered by default.
83.6 KB
Binary file not shown.
Lines changed: 31 additions & 0 deletions
---
title: Dynamics in Algorithmic Recourse
format:
  pdf:
    documentclass: acmconf
    number-sections: true
    include-in-header: preamble.tex
    keep-tex: true
---
Recent advances in artificial intelligence (AI) have propelled its adoption in domains outside of computer science, including health care, bioinformatics and genetics. In finance, economics and other social sciences, applications of AI are still relatively limited. Decision-making in these fields has traditionally been guided by Generalized Linear Models (GLMs), which are theoretically founded, interpretable and often sufficient to model relationships between variables. Model interpretability is crucial in the social sciences, because inference is typically at least as important as predictive performance. Decision-makers in the social sciences are also typically required to explain their decisions to human stakeholders: central bankers, for example, are held accountable by the public for the policies they decide on. It is therefore not surprising that practitioners and academics in these fields are reluctant to adopt AI technologies that ultimately cannot be trusted. Deep learning models, for example, are generally considered black boxes and are therefore difficult to apply in a context that demands explanations.
In my research I explore and develop methodologies that improve the trustworthiness of AI. I would like to understand how we can unlock the enormous potential of AI without sacrificing the human aspect of decision-making in finance and economics. My work so far has focused primarily on counterfactual explanations, algorithmic recourse and probabilistic machine learning. Counterfactual explanations are intuitive, largely model-agnostic and straightforward to implement. They are also intrinsically linked to the potential outcomes framework for causal inference and should therefore be somewhat familiar to social scientists. Counterfactual explanations that involve realistic and actionable changes can be used for the purpose of algorithmic recourse, that is, to help individuals facing adverse decisions. Probabilistic machine learning can be leveraged in this context and more generally facilitates inference and interpretability. It is also closely related to Bayesian statistics, which has played an important role in both finance and economics for many years.
In @sec-main, I first present one particular research question I have explored during the first months of my PhD: how do counterfactual explanations handle dynamics? I then briefly describe related projects I have worked on (@sec-related) and ideas for future work (@sec-future).
## Dynamics in Algorithmic Recourse {#sec-main}
Existing work on counterfactual explanations and algorithmic recourse has largely been limited to the following static setting: given some classifier $M: \mathcal{X} \mapsto \mathcal{Y}$, we are interested in finding close [@wachter2017counterfactual], actionable [@ustun2019actionable], plausible [@joshi2019towards; @antoran2020getting; @schut2021generating], sparse [@schut2021generating], diverse [@mothilal2020explaining] and ideally causally founded [@karimi2021algorithmic] counterfactual explanations for some individual $x$. How counterfactual explanations handle dynamics such as data and model shifts remains a largely unexplored research challenge at this point [@verma2020counterfactual]. Only one recent work considers the implications of **exogenous** domain shifts for the validity of recourse [@upadhyay2021towards]. The authors propose a simple minimax objective that minimizes the counterfactual loss function under a maximal domain and model shift, and show that this yields more robust counterfactuals than existing approaches. In my project I investigate **endogenous** domain and model shifts, i.e. shifts that occur as algorithmic recourse is actually implemented by a proportion of individuals. Preliminary findings indicate that individuals who receive and implement algorithmic recourse end up forming a distinct subgroup within the target class, which may leave them vulnerable to discrimination (@fig-dynamics). This is work in progress that I would like to present and discuss at AIES.
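The generic recipe behind this static setting is to minimize a loss that trades off validity against distance to the factual [@wachter2017counterfactual]. The following is a minimal, self-contained Julia sketch of that recipe for a toy linear classifier; it is not the API of any existing package, and the classifier, weights and hyperparameters are all illustrative assumptions.

```julia
# Wachter-style counterfactual search for a toy linear logistic
# classifier M(x) = σ(w'x + b). We minimise
#     L(x′) = (M(x′) − y*)² + λ‖x′ − x‖²
# by plain gradient descent. All names and values are illustrative.
σ(z) = 1 / (1 + exp(-z))

w, b = [2.0, -1.0], 0.0          # toy classifier parameters
M(x) = σ(w' * x + b)

function counterfactual(x; target = 1.0, λ = 0.1, η = 0.5, steps = 500)
    x′ = copy(x)
    for _ in 1:steps
        p = M(x′)
        # ∂L/∂x′ = 2(p − y*) p(1 − p) w + 2λ(x′ − x)
        g = 2 * (p - target) * p * (1 - p) .* w .+ 2λ .* (x′ - x)
        x′ -= η .* g
    end
    return x′
end

x = [-1.0, 1.0]        # factual instance: M(x) ≈ 0.05
x′ = counterfactual(x) # crosses the decision boundary: M(x′) > 0.5
```

The refinements cited above (actionability constraints, plausibility terms, diversity penalties) swap out pieces of this objective, but the basic optimization loop stays the same.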
![PLACEHOLDER: The dynamics of algorithmic recourse.](www/dynamics.png){#fig-dynamics fig-pos="h" width=250px}
## Related Projects {#sec-related}
Alongside my research I have developed open-source implementations related to explainable AI. [CounterfactualExplanations.jl](https://www.paltmeyer.com/CounterfactualExplanations.jl/stable/) is a Julia package that can be used to generate counterfactual explanations for models developed and trained not only in Julia, but also in other popular programming languages like Python and R. I have recently submitted the package along with a companion paper as a proposal for a main talk at [JuliaCon](https://juliacon.org/2022/). [BayesLaplace.jl](https://www.paltmeyer.com/BayesLaplace.jl/dev/) is a small Julia package that can be used to recover Bayesian representations of deep neural networks through post-hoc Laplace approximation. It is inspired by a recent paper [@daxberger2021laplace] and has also been submitted to JuliaCon. Finally, [deepvars](https://github.com/pat-alt/deepvars) is an R package that implements an approach to vector autoregression that leverages deep learning. This work originated as my master's thesis and was later presented at the NeurIPS 2021 MLECON workshop. I have also published several blog posts on explainable AI and probabilistic ML in an effort to make my research accessible to a broad audience.
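The idea underlying the Laplace approximation can be illustrated in a few lines. What follows is a toy sketch of the general principle, not the API of BayesLaplace.jl: given MAP parameters $\hat\theta$, the posterior is approximated by a Gaussian whose covariance is the inverse Hessian of the negative log-posterior at $\hat\theta$. The data and hyperparameters below are made up for illustration.

```julia
# Post-hoc Laplace approximation for a toy one-parameter logistic model
# p(y = 1 | x, θ) = σ(θx) with a N(0, 1/α) prior on θ.
# Laplace: p(θ | D) ≈ N(θ̂, H(θ̂)⁻¹), where θ̂ is the MAP estimate and
# H the Hessian of the negative log-posterior.
σ(z) = 1 / (1 + exp(-z))

xs, ys = [1.0, 2.0, -1.0, -2.0], [1, 1, 0, 0]  # toy data
α = 1.0                                        # prior precision

# Gradient and Hessian of the negative log-posterior in θ
g(θ) = -sum((y - σ(θ * x)) * x for (x, y) in zip(xs, ys)) + α * θ
H(θ) = sum(σ(θ * x) * (1 - σ(θ * x)) * x^2 for x in xs) + α

# Newton's method finds the MAP; the Gaussian posterior then comes for free.
function map_estimate(; iters = 20)
    θ = 0.0
    for _ in 1:iters
        θ -= g(θ) / H(θ)
    end
    return θ
end

θ̂ = map_estimate()
posterior_var = 1 / H(θ̂)   # Laplace: p(θ | D) ≈ N(θ̂, posterior_var)
```

The appeal of the post-hoc approach is exactly what this sketch shows: the model is trained as usual, and the Bayesian representation is recovered afterwards from curvature information at the trained weights.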
## Future Projects {#sec-future}
Data sets in finance and economics typically involve time series. I am therefore naturally interested in the application of explainable AI to sequential data, an area that has so far not been explored extensively. In the future, I want to work on counterfactual explanations for time series models. I am also interested in whether and how Laplace approximation can be used for Bayesian deep learning with time series data. I hope that the findings from both of these projects can ultimately be used to build complex but interpretable time series models for classification and forecasting in finance and economics. For example, I would like to leverage effortless Bayesian deep learning to make our proposed Deep Vector Autoregression model explainable.
## References
