Commit cdb1b48 (1 parent: e7f9df6)

R code: changed dir structure. Python code: bugfixes. Updated readme and added gitignore.

Former-commit-id: 3e6b609 [formerly 3e6b609 [formerly 1b8ecaa]] Former-commit-id: 2ea797e99b184ddfc2777e9f647f6c7b1ec574e6 Former-commit-id: 1a8705a

24 files changed: 318 additions & 387 deletions

.gitignore

Lines changed: 135 additions & 0 deletions
@@ -0,0 +1,135 @@
+.vscode/
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+
+# History files
+.Rhistory
+.Rapp.history
+
+# Session Data files
+.RData
+
+# Example code in package build process
+*-Ex.R
+
+# Output files from R CMD build
+/*.tar.gz
+
+# Output files from R CMD check
+/*.Rcheck/
+
+# RStudio files
+.Rproj.user/
+
+# produced vignettes
+vignettes/*.html
+vignettes/*.pdf
+
+# OAuth2 token, see https://github.com/hadley/httr/releases/tag/v0.3
+.httr-oauth
+
+# knitr and R markdown default cache directories
+/*_cache/
+/cache/
+
+# Temporary files created by R markdown
+*.utf8.md
+*.knit.md
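As an aside, the byte-code patterns above use glob character classes such as `[cod]`. Python's `fnmatch` module implements the same shell-style wildcard syntax for single path components, so it can be used to sanity-check which filenames a pattern covers (the filenames here are hypothetical examples, not files from this repository):

``` python
from fnmatch import fnmatch

# Patterns copied from the .gitignore above; [cod] matches exactly one
# of the characters c, o or d, so *.py[cod] covers .pyc/.pyo/.pyd files.
PATTERNS = ["*.py[cod]", "*$py.class", "*.so"]

def is_ignored(filename):
    """True if the bare filename matches any of the glob patterns."""
    return any(fnmatch(filename, pattern) for pattern in PATTERNS)

print(is_ignored("module.pyc"))  # True: 'c' matches the class [cod]
print(is_ignored("module.py"))   # False: source files are kept
```

Note that `fnmatch` only approximates gitignore semantics: it has no notion of directory-only patterns (trailing `/`) or negation (`!`).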

.vscode/.ropeproject/config.py

Lines changed: 0 additions & 100 deletions
This file was deleted.

.vscode/.ropeproject/objectdb

-6 Bytes
Binary file not shown.

README.md

Lines changed: 8 additions & 42 deletions
@@ -1,55 +1,21 @@
 # pmh-tutorial

-This code was downloaded from < https://github.com/compops/pmh-tutorial > or from < http://work.johandahlin.com/ > and contains the code used to produce the results in the tutorial
+This code was downloaded from < https://github.com/compops/pmh-tutorial > and contains the code used to produce the results in the tutorial:

 J. Dahlin and T. B. Schön, **Getting started with particle Metropolis-Hastings for inference in nonlinear models**. Pre-print, arXiv:1511:01707, 2017.

-The papers are available as a preprint from < http://arxiv.org/pdf/1511.01707 > and < http://work.johandahlin.com/ >.
+The paper is available as a preprint from < http://arxiv.org/pdf/1511.01707 >.

-Requirements
+Included material
 --------------
-The code is written and tested for R 3.2.2, Matlab R2014a and Python 2.7.6 with some additional libraries/packages (see below).
+**r/** The main implementation: the complete R code developed in the tutorial. This code was used to make all the numerical illustrations in the tutorial, including the figures and tables. The workspaces from these runs are also provided, together with the code to reproduce all the figures.

-The implementation in R makes use of the packages Quandl and mvtnorm. They can be installed by the command
-``` R
-install.packages(c("mvtnorm", "Quandl"))
-```
-The implementation in Python makes use of NumPy 1.9.2, SciPy 0.15.1, Matplotlib 1.4.3, Pandas 0.13.1 and Quandl 2.8.9. On Ubuntu, these packages can be installed/upgraded using
-``` bash
-sudo pip install --upgrade package-name
-```
-For more information about the Quandl library, see < https://www.quandl.com/tools/python >.
+**python/** Code for Python to implement the basic algorithms covered in the tutorial. Implementations for the advanced topics are not provided. Only simple plotting is implemented, and no figures or saved data from runs are provided.

-The implementation in Matlab makes use of the statistics toolbox and the Quandl package. See < https://github.com/quandl/Matlab > for more installation and to download the toolbox. Note that urlread2 is required by the Quandl toolbox and should be installed as detailed in the README file of the Quandl toolbox.
+**matlab/** Code for MATLAB to implement the basic algorithms covered in the tutorial. Implementations for the advanced topics are not provided. Only simple plotting is implemented, and no figures or saved data from runs are provided.

-Included files (folders matlab, python and r)
---------------
-**example1-lgss.[R,py,m]** Implements the numerical illustration in Section 3.2 of state estimation in a linear Gaussian state space (LGSS) model using the fully-adapted particle filter (faPF). The output is the filtered state estimated as presented in Figure 3.
-
-**example2-lgss.[R,py,m]** Implements the numerical illustration in Section 3.4 of parameter estimation of the parameter phi in the LGSS model using particle Metropolis-Hastings (PMH) with the faPF as the likelihood estimator. The output is the estimated parameter posterior as presented in Figure 4.
-
-**example3-sv.[R,py,m]** Implements the numerical illustration in Section 4 of parameter estimation of the three parameters in the stochastic volatility (SV) model using particle Metropolis-Hastings (PMH) with the bootstrap particle filter (bPF) as the likelihood estimator. The output is the estimated parameter posterior as presented in Figure 5. The code takes some time (hours to execute).
-
-**example4-sv.[R,py,m]** Implements the numerical illustration in Section 5.3.1, which makes use of the same setup as in Section 4.1 but with a tuned proposal distribution. The output is the estimated ACF and IACT as presented in Figure 7. The code takes some time (hours to execute).
-
-**example5-sv.[R]** Implements the numerical illustration in Section 5.3.2, which makes use of the same setup as in Section 4.1 but with a tuned proposal distribution. The output is the estimated ACF and IACT as presented in Figure 7. The code takes some time (hours to execute).
-
-**example*.[RData,mat,spdata]** A saved copy of the workspace after running the corresponding code. Can be used to directly recreate the plots in the tutorial and to conduct additional analysis.
+**r-package/** The files for the R package pmhtutorial on CRAN. These are very similar to the files in *r/*, which is more useful for learning the algorithms covered in the tutorial. However, the R package is simple to download and use directly.

-Supporting files (folders matlab, python and r)
---------------
-**stateEstimationHelper.[py,R]**
-Implements the data generation for the LGSS model (generateData), the faPF for the LGSS model (sm), the Kalman filter for the LGSS model (kf) and the bPF for the SV model (sm_sv). In Matlab, these functions are defined in four separate m-files with the corresponding file names.
-
-**parameterEstimationHelper.[py,R]**
-Implements the PMH algorithm for the LGSS model (pmh), the SV model (pmh_sv) and the reparameterised SV model (pmh_sv_reparametrised). In Matlab, these functions are defined in two separate m-files with the corresponding file names.
-
-Included files (folder r-package)
---------------
-The files for the R package pmhtutorial. These should be virtually the same as the files in the folder r but packaged as an R package. The folder r is maintained to keep the code for the three languages as similar as possible. However, the R package is simple to download and use directly.
-
-Included files (folder matlab-skeleton)
---------------
-Skeleton code files for MATLAB to help step-by-step implementation during courses and seminars.
+**matlab-skeleton/** Skeleton code files for MATLAB to help step-by-step implementation during courses and seminars.
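For orientation, the LGSS model that these examples revolve around can be simulated in a few lines. The sketch below follows the model form stated in the tutorial paper (x[t] = phi*x[t-1] + sigma_v*v[t], y[t] = x[t] + sigma_e*e[t]); the function and parameter names here are illustrative, not the repository's actual API:

``` python
import numpy as np

def generate_data(theta, no_obs, initial_state):
    """Simulate the LGSS model x[t] = phi*x[t-1] + sigma_v*v[t],
    y[t] = x[t] + sigma_e*e[t], with v, e standard Gaussian noise."""
    phi, sigma_v, sigma_e = theta
    state = np.zeros(no_obs + 1)
    obs = np.zeros(no_obs + 1)
    state[0] = initial_state
    for t in range(1, no_obs + 1):
        state[t] = phi * state[t - 1] + sigma_v * np.random.randn()
        obs[t] = state[t] + sigma_e * np.random.randn()
    return state, obs

# Parameter values matching the tutorial's LGSS illustration
state, obs = generate_data((0.75, 1.0, 0.1), 250, 0.0)
```

Because the measurement noise (sigma_e = 0.1) is small relative to the process noise (sigma_v = 1.0), the observations track the latent state closely, which is why state estimation in this model is comparatively easy.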

python/README.md

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
+# Python code for PMH tutorial
+
+This Python code implements the Kalman filter (KF), particle filter (PF) and particle Metropolis-Hastings (PMH) algorithm for two different dynamical models: a linear Gaussian state-space (LGSS) model and a stochastic volatility (SV) model. Note that the Kalman filter can only be employed for the first of these two models. The details of the code are described in the tutorial paper available at < http://arxiv.org/pdf/1511.01707 >.
+
+Note that the Python code in this folder covers the basic implementations in the paper. See the R code in r/ for all the implementations and to recreate the results in the tutorial.
+
+Requirements
+--------------
+The code is written and tested for Python 2.7.6/3.6.0 together with NumPy 1.9.2/1.11.3, SciPy 0.15.1/0.18.1, Matplotlib 1.4.3/2.0.0 and Quandl 2.8.9/3.1.0. These packages are easily available via Anaconda (https://docs.continuum.io/anaconda/install): install the distribution for your preferred Python version and then execute
+``` bash
+conda install numpy scipy matplotlib quandl
+```
+For more information about the Quandl library, see < https://www.quandl.com/tools/python >.
+
+Main script files
+--------------
+These are the main script files that implement the various algorithms discussed in the tutorial.
+
+**example1-lgss.py** State estimation in an LGSS model using the KF and a fully-adapted PF (faPF). The code is discussed in Section 3.1 and the results are presented in Section 3.2 as Figure 4 and Table 1.
+
+**example2-lgss.py** Parameter estimation of one parameter in the LGSS model using PMH with the faPF as the likelihood estimator. The code is discussed in Section 4.1 and the results are presented in Section 4.2 as Figure 5.
+
+**example3-sv.py** Parameter estimation of three parameters in the SV model using PMH with the bootstrap PF as the likelihood estimator. The code is discussed in Section 5.1 and the results are presented in Section 5.2 as Figure 6. The code takes about an hour to run.
+
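The SV model estimated in example3-sv.py gives log-returns a time-varying variance driven by a latent log-volatility that mean-reverts as an AR(1) process. A minimal simulation sketch (the model form is taken from the tutorial paper; names and the stationary initialisation are assumptions of this sketch):

``` python
import numpy as np

def simulate_sv(mu, phi, sigma_v, no_obs):
    """Simulate the SV model (form assumed from the tutorial paper):
       x[t] = mu + phi * (x[t-1] - mu) + sigma_v * v[t]   (log-volatility)
       y[t] = exp(x[t] / 2) * e[t]                        (observed log-return)
    with v, e standard Gaussian and x[0] from the stationary distribution."""
    log_vol = np.zeros(no_obs)
    returns = np.zeros(no_obs)
    log_vol[0] = mu + sigma_v / np.sqrt(1.0 - phi**2) * np.random.randn()
    returns[0] = np.exp(log_vol[0] / 2.0) * np.random.randn()
    for t in range(1, no_obs):
        log_vol[t] = mu + phi * (log_vol[t - 1] - mu) + sigma_v * np.random.randn()
        returns[t] = np.exp(log_vol[t] / 2.0) * np.random.randn()
    return log_vol, returns
```

Because the likelihood of this model is intractable, example3 estimates it with a bootstrap particle filter inside PMH rather than with a Kalman filter.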
+Supporting files (helpers/)
+--------------
+**stateEstimation.py**
+Implements data generation for the LGSS model (generateData), the faPF for the LGSS model (particleFilter), the Kalman filter for the LGSS model (kalmanFilter) and the bPF for the SV model (paticleFilterSVmodel).
+
+**parameterEstimation.py**
+Implements the PMH algorithm for the LGSS model (particleMetropolisHastings) and the SV model (particleMetropolisHastingsSVModel).
python/example1-lgss.py

Lines changed: 7 additions & 7 deletions
@@ -32,9 +32,9 @@
 np.random.seed(10)


-##############################################################################
+#=============================================================================
 # Define the model
-##############################################################################
+#=============================================================================

 # Here, we use the following model
 #
@@ -56,11 +56,11 @@
 initialState = 0


-##############################################################################
+#=============================================================================
 # Generate data
-##############################################################################
+#=============================================================================

-(x, y) = generateData(theta, T, initialState)
+x, y = generateData(theta, T, initialState)

 # Plot the measurement
 plt.subplot(3, 1, 1)
@@ -75,9 +75,9 @@
 plt.ylabel("latent state")


-##############################################################################
+#=============================================================================
 # State estimation
-##############################################################################
+#=============================================================================

 # Using N = 100 particles and plot the estimate of the latent state
 xHatFilteredParticleFilter, _ = particleFilter(y, theta, 100, initialState)