
HighDimensionalEconLab/taming


This repository contains code to generate all figures and output for "Taming the Curse of Dimensionality: Quantitative Economics with Deep Learning" (Jesús Fernández-Villaverde, Galo Nuño, Jesse Perla).

It runs on all major operating systems and does not require any accelerators (e.g., GPUs). While you can use any Python environment manager, we recommend uv, a faster and more reproducible alternative to Conda, albeit with incomplete support for some challenging binary dependencies.

Replication Instructions

  1. Install uv, which is usually a one-line install, such as on macOS and Linux:
    curl -LsSf https://astral.sh/uv/install.sh | sh
    • On Windows: winget install --id=astral-sh.uv -e or powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
    • If you have any installation issues, see the uv documentation for troubleshooting.
  2. Synchronize the environment. If you use uv run python directly, as below, this step happens automatically.
    uv sync
  3. Run the main script using one of two approaches:
    • If you activated the environment (see Activation in VS Code below for details), then run:
      python generate_paper_figures_pytorch.py
    • If you didn't activate: Use uv run to automatically activate and then run:
      uv run python generate_paper_figures_pytorch.py

All output is in the .figures directory, including generated figures and a results.json that summarizes numerical values used in the paper.

Note: The baseline solution is computed using Newton's method, which converges in a few iterations to machine precision.
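As a toy illustration of why Newton's method reaches machine precision so quickly (this example is not from the repository; the repository applies the same idea to the full system of Euler residuals), here is a pure-Python Newton iteration on a single nonlinear equation. Quadratic convergence roughly doubles the number of correct digits per step:

```python
# Toy Newton iteration: solve f(k) = k**3 - 2 = 0 for k = 2**(1/3).
# Quadratic convergence roughly doubles the correct digits each step,
# so machine precision arrives in a handful of iterations.

def f(k):
    return k**3 - 2.0

def f_prime(k):
    return 3.0 * k**2

k = 1.0  # initial guess
for i in range(10):
    k -= f(k) / f_prime(k)   # Newton step
    if abs(f(k)) < 1e-15:    # machine-precision residual reached
        break

print(i + 1, k, abs(f(k)))   # k approaches 2**(1/3) ≈ 1.2599
```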

Quick start on Linux or macOS:

The following will install the required packages, clone the repository, and generate all paper figures.

curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/HighDimensionalEconLab/taming.git
cd taming
uv run python generate_paper_figures_pytorch.py

Additional Instructions

The following section provides a few variations on the setup and execution.

Activation in VS Code

Ensure you activate the Python environment (i.e., the .venv or conda virtual environment):

  • In VS Code, run >Python: Select Interpreter from the Command Palette and select the local .venv
  • Outside VS Code, a platform-specific command will activate .venv in your terminal.
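Activation itself is just the standard venv mechanism. As a quick self-contained illustration (using a throwaway environment name, demo-venv, rather than the repository's .venv, so it can run anywhere):

```shell
# Illustration of venv activation; in the repository you would activate
# the existing .venv the same way.
python3 -m venv --without-pip demo-venv   # throwaway environment, no pip needed
. demo-venv/bin/activate                  # POSIX shells; on Windows: demo-venv\Scripts\Activate.ps1
echo "$VIRTUAL_ENV"                       # prints the absolute path of demo-venv
```

For the repository itself, the same pattern is source .venv/bin/activate (macOS/Linux) or .venv\Scripts\Activate.ps1 (Windows PowerShell).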

Setup with Conda

If you prefer to use conda for your environment, you can use the provided requirements.txt:

conda create -n taming python=3.13
conda activate taming
pip install -r requirements.txt

Self-contained Execution

You can run the code for a particular set of parameters directly on the command line.

The execution with the default arguments (as used in the paper and figures) is simply:

python stochastic_growth_pytorch.py

However, you can change parameters on the command line. A few variations:

python stochastic_growth_pytorch.py --k_0_multiplier=0.9 --seed=53
python stochastic_growth_pytorch.py --mlp_width=128
python stochastic_growth_pytorch.py --opt_set.max_iter=15 --data_set.train_T=30
python stochastic_growth_pytorch.py --base_solver_set.num_z_points=41 --base_solver_set.num_k_points=100

The baseline solver defaults to Newton's method; pass --base_solver_set.solver=lbfgs to use L-BFGS instead.

Finally, to conduct experiments, you can import the stochastic_growth_pytorch module and call stochastic_growth with whatever arguments you wish; see generate_paper_figures_pytorch.py for examples of how to do this.

Summary of Files

  • pyproject.toml and associated uv.lock contain the package dependencies and versions used in the experiments. The requirements.txt was generated by uv pip freeze > requirements.txt for compatibility with pip and conda.
  • stochastic_growth_pytorch.py: Solves the stochastic growth model end-to-end — both the baseline solution and the deep-learning solution described in the paper — in a single module.
    • The baseline solver supports two algorithms, selectable via base_solver_set.solver:
      • "newton" (default): Newton's method in float64 using torch.func.jacrev for the Jacobian and torch.linalg.solve for the Newton step, globalized with backtracking line search. Achieves machine-precision Euler residuals (~1e-15 mean) in roughly 5 iterations.
      • "lbfgs": L-BFGS minimizes the sum of squared Euler residuals. Because it is an optimizer rather than a root-finder, it can stall at saddle points where the gradient vanishes but residuals remain nonzero, limiting accuracy to ~1e-5 regardless of iteration count or dtype.
  • generate_paper_figures_pytorch.py: Generates all figures and numerical results used in the paper.
    • It imports and calls stochastic_growth_pytorch.py and contains a summary of all default parameters for easy reference.
    • It saves all output to the .figures directory, including a results.json file that summarizes numerical results.
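The globalized Newton iteration described above can be sketched in a few self-contained lines. This is a pure-Python stand-in on a toy two-equation system, not the repository's actual solver: the Jacobian is hand-coded in place of torch.func.jacrev, and a 2x2 Cramer's-rule solve stands in for torch.linalg.solve.

```python
# Newton's method with backtracking line search on the toy system
# F(x, y) = (x**2 + y**2 - 4, x - y) = 0, whose positive root is
# x = y = sqrt(2). The repository's solver applies the same scheme
# to the Euler residuals, with an autodiff Jacobian.

def F(x, y):
    return (x**2 + y**2 - 4.0, x - y)

def jacobian(x, y):
    # Hand-coded Jacobian of F (automatic differentiation in the repo).
    return ((2.0 * x, 2.0 * y),
            (1.0, -1.0))

def solve2(J, r):
    # Solve the 2x2 linear system J @ d = r by Cramer's rule.
    (a, b), (c, d) = J
    det = a * d - b * c
    return ((r[0] * d - b * r[1]) / det,
            (a * r[1] - r[0] * c) / det)

def residual_norm(r):
    return max(abs(r[0]), abs(r[1]))

x, y = 3.0, 0.5  # initial guess
for _ in range(50):
    r = F(x, y)
    if residual_norm(r) < 1e-14:   # machine-precision residuals
        break
    dx, dy = solve2(jacobian(x, y), r)   # Newton direction
    # Backtracking line search: halve the step until the residual shrinks.
    t = 1.0
    while residual_norm(F(x - t * dx, y - t * dy)) >= residual_norm(r) and t > 1e-8:
        t *= 0.5
    x, y = x - t * dx, y - t * dy

print(x, y)  # both approach sqrt(2) ≈ 1.41421356
```

The line search is what "globalizes" the iteration: far from the root a full Newton step can overshoot, so the step is shrunk until the residual actually decreases, after which the usual quadratic convergence takes over near the solution.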
