This repository contains code to generate all figures and output for "Taming the Curse of Dimensionality: Quantitative Economics with Deep Learning" (Jesús Fernández-Villaverde, Galo Nuño, Jesse Perla).
It runs on all major operating systems and does not require any accelerators (e.g., GPUs). While you can use any Python environment manager, we recommend uv, a faster and reproducible alternative to Conda, albeit with incomplete support for challenging binary dependencies.
- Install uv, which is usually a one-line install, such as on macOS and Linux:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- On Windows:

  ```powershell
  winget install --id=astral-sh.uv -e
  ```

  or

  ```powershell
  powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  ```

- If you have any installation issues, see the uv docs for troubleshooting.
- Synchronize the environment (if you directly use `uv run python` as below, this step is done automatically):

  ```bash
  uv sync
  ```
- Run the main script using one of two approaches:
  - If you activated the environment (see the activation details below), run:

    ```bash
    python generate_paper_figures_pytorch.py
    ```

  - If you didn't activate, use `uv run` to automatically activate and then run:

    ```bash
    uv run python generate_paper_figures_pytorch.py
    ```
All output is in the `.figures` directory, including generated figures and a `results.json` file that summarizes numerical values used in the paper.
Note: The baseline solution is computed using Newton's method, which converges in a few iterations to machine precision.
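As a toy illustration (not the repo's solver) of why Newton's method gets there so quickly: its convergence is quadratic near the root, so the number of correct digits roughly doubles each iteration. A minimal one-dimensional sketch:

```python
# Toy Newton iteration on f(x) = x^2 - 2 (root at sqrt(2)); not the repo's
# solver, just a sketch of quadratic convergence to machine precision.
def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

x = 1.0  # initial guess
for i in range(6):
    x -= f(x) / fprime(x)  # Newton step
    print(f"iter {i + 1}: |f(x)| = {abs(f(x)):.3e}")
```

The residual drops from 2.5e-1 to roughly machine epsilon within about five iterations, mirroring the behavior of the baseline solver.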
Quick start on Linux or macOS:

The following will install the required packages, clone the repository, and generate all paper figures.

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/HighDimensionalEconLab/taming.git
cd taming
uv run python generate_paper_figures_pytorch.py
```

The following sections provide a few variations on the setup and execution.
Ensure you activate the Python environment (i.e., the `.venv` or conda virtual environment):

- In VS Code, use `>Python: Select Interpreter` to select the local `.venv`.
- Outside VS Code, a platform-specific command will activate `.venv` in your terminal (e.g., `source .venv/bin/activate` on macOS/Linux, or `.venv\Scripts\activate` on Windows).
If you prefer to use conda for your environment, then you can use the provided `requirements.txt`:

```bash
conda create -n taming python=3.13
conda activate taming
pip install -r requirements.txt
```

You can run the code for a particular set of parameters directly on the command line.
The execution with the default arguments (as used in the paper and figures) is simply:

```bash
python stochastic_growth_pytorch.py
```

However, you can change parameters on the command line. A few variations:

```bash
python stochastic_growth_pytorch.py --k_0_multiplier=0.9 --seed=53
python stochastic_growth_pytorch.py --mlp_width=128
python stochastic_growth_pytorch.py --opt_set.max_iter=15 --data_set.train_T=30
python stochastic_growth_pytorch.py --base_solver_set.num_z_points=41 --base_solver_set.num_k_points=100
```

The baseline solver defaults to Newton's method; pass `--base_solver_set.solver=lbfgs` to use L-BFGS instead.
Finally, to conduct experiments you can import the `stochastic_growth_pytorch` module and call `stochastic_growth` with whatever arguments you wish. See the `generate_paper_figures_pytorch.py` file for examples of how to do this.
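For instance, a minimal sketch of a programmatic call. The keyword names below are hypothetical (they mirror the CLI flags shown earlier); check the actual `stochastic_growth` signature in the repo, and run from the repository root with the environment synchronized:

```python
# Hypothetical sketch: keyword names mirror the CLI flags above, but verify
# them against the actual stochastic_growth signature before relying on them.
try:
    from stochastic_growth_pytorch import stochastic_growth

    results = stochastic_growth(mlp_width=128, seed=53)
except ImportError:
    # Outside the repository (or without the environment), the import fails.
    print("Run from the repository root with the environment synchronized.")
```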
- `pyproject.toml` and the associated `uv.lock` contain the package dependencies and versions used in the experiments. The `requirements.txt` was generated by `uv pip freeze > requirements.txt` for compatibility with `pip` and `conda`.
- `stochastic_growth_pytorch.py`: Solves the stochastic growth model end-to-end (both the baseline solution and the deep-learning solution described in the paper) in a single module.
  - The baseline solver supports two algorithms, selectable via `base_solver_set.solver`:
    - `"newton"` (default): Newton's method in float64, using `torch.func.jacrev` for the Jacobian and `torch.linalg.solve` for the Newton step, globalized with backtracking line search. Achieves machine-precision Euler residuals (~1e-15 mean) in roughly 5 iterations.
    - `"lbfgs"`: L-BFGS minimizes the sum of squared Euler residuals. Because it is an optimizer rather than a root-finder, it can stall at saddle points where the gradient vanishes but residuals remain nonzero, limiting accuracy to ~1e-5 regardless of iteration count or dtype.
- `generate_paper_figures_pytorch.py`: Generates all figures and numerical results used in the paper.
  - It imports and calls `stochastic_growth_pytorch.py` and contains a summary of all default parameters for easy reference.
  - It saves all output to the `.figures` directory, including a `results.json` file that summarizes numerical results.
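To see why the least-squares reformulation used by the `lbfgs` option can stall short of machine precision, here is a toy one-dimensional sketch using plain gradient descent as a stand-in for L-BFGS (not the repo's solver). The gradient of the squared residual is 2·r(x)·r'(x), which can vanish while r(x) itself does not; here the stall point is a minimum of the squared objective, while in higher dimensions the analogous stall occurs at saddle points:

```python
# Toy stand-in for the least-squares reformulation: gradient descent on
# r(x)^2 with r(x) = x^2 + 0.01. The gradient 2*r(x)*r'(x) vanishes at
# x = 0, so the iterates converge there, yet the residual stays at 0.01.
def r(x):
    return x * x + 0.01

def grad_sq(x):
    return 2.0 * r(x) * (2.0 * x)  # d/dx [r(x)^2]

x = 1.0
for _ in range(10_000):
    x -= 0.1 * grad_sq(x)  # fixed-step gradient descent

print(f"gradient = {grad_sq(x):.1e}, residual = {r(x):.4f}")
```

The optimizer declares victory (the gradient is numerically zero), but the Euler-equation residual never improves past 0.01; a root-finder like Newton's method has no such failure mode because it works on the residual directly.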