
Commit 2384d93

Author: Johan Dahlin
Updated code and uploaded saved workspace for R
1 parent ef3ac58 commit 2384d93

14 files changed

Lines changed: 425 additions & 72 deletions

README.md

Lines changed: 4 additions & 2 deletions
@@ -18,9 +18,11 @@ Included files
  **example2-lgss.[R,py,m]** Implements the numerical illustration in Section 3.2 of parameter estimation of the parameter phi in the LGSS model using particle Metropolis-Hastings (PMH) with the faPF as the likelihood estimator. The output is the estimated parameter posterior as presented in Figure 4.

- **example3-sv.[R,py,m]** Implements the numerical illustration in Section 4.1 of parameter estimation of the three parameters in the stochastic volatility (SV) model using particle Metropolis-Hastings (PMH) with the bootstrap particle filter (bPF) as the likelihood estimator. The output is the estimated parameter posterior as presented in Figure 5.
+ **example3-sv.[R,py,m]** Implements the numerical illustration in Section 4.1 of parameter estimation of the three parameters in the stochastic volatility (SV) model using particle Metropolis-Hastings (PMH) with the bootstrap particle filter (bPF) as the likelihood estimator. The output is the estimated parameter posterior as presented in Figure 5. The code takes some time (hours) to execute.

- **example3-sv.[R,py,m]** Implements the numerical illustration in Section 4.2, which makes use of the same setup as in Section 4.1 but with a tuned proposal distribution. The output is the estimated ACF and IACT as presented in Figure 7.
+ **example4-sv.[R,py,m]** Implements the numerical illustration in Section 4.2, which makes use of the same setup as in Section 4.1 but with a tuned proposal distribution. The output is the estimated ACF and IACT as presented in Figure 7. The code takes some time (hours) to execute.
+
+ **example*.[RData,mat]** A saved copy of the workspace after running the code.

  Supporting files
  --------------

python/example1-lgss.py

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
##############################################################################
#
# Example of particle filtering
# in a linear Gaussian state space model
#
# (c) 2015 Johan Dahlin
# johan.dahlin (at) liu.se
#
##############################################################################

# Import the libraries needed for plotting, numerics and the helpers
import matplotlib.pyplot as plt
import numpy as np
import stateEstimationHelper as helpState

# Set the random seed to replicate the results in the tutorial
np.random.seed(10)

##############################################################################
# Define the model
##############################################################################

# Here, we use the following model
#
# x[tt+1] = phi * x[tt] + sigmav * v[tt]
# y[tt]   = x[tt] + sigmae * e[tt]
#
# where v[tt] ~ N(0,1) and e[tt] ~ N(0,1)

# Set the parameters of the model (par[0], par[1], par[2])
par = np.zeros(3)
par[0] = 0.75
par[1] = 1.00
par[2] = 1.00

# Set the number of time steps to simulate
T = 250

# Set the initial state
x0 = 0

##############################################################################
# Generate data
##############################################################################

(x, y) = helpState.generateData(par, T, x0)

# Plot the measurements
plt.subplot(3, 1, 1)
plt.plot(y, color='#1B9E77', linewidth=1.5)
plt.xlabel("time"); plt.ylabel("measurement")

# Plot the states
plt.subplot(3, 1, 2)
plt.plot(x, color='#D95F02', linewidth=1.5)
plt.xlabel("time"); plt.ylabel("latent state")

##############################################################################
# State estimation using the particle filter
##############################################################################

plt.subplot(3, 1, 3)

# Use N = 100 particles and plot the estimate of the latent state
(xhat, ll) = helpState.pf(y, par, 100, T, x0)
plt.plot(xhat, color='#7570B3', linewidth=1.5)
plt.xlabel("time"); plt.ylabel("state estimate")

##############################################################################
# State estimation using the Kalman filter
##############################################################################

xhat = helpState.kf(y, par, T, x0, 0.01)
plt.plot(xhat, 'k', linewidth=1.5)

##############################################################################
# End of file
##############################################################################
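The helper routines `stateEstimationHelper.generateData` and `stateEstimationHelper.pf` are imported above but are not part of this commit. A minimal sketch of what they might look like for the LGSS model, matching the call signatures used in the script (the actual helper file may differ):

```python
import numpy as np

def generateData(par, T, x0):
    # Simulate x[t+1] = phi*x[t] + sigmav*v[t], y[t] = x[t] + sigmae*e[t]
    phi, sigmav, sigmae = par
    x = np.zeros(T)
    y = np.zeros(T)
    x[0] = x0
    for t in range(T):
        if t > 0:
            x[t] = phi * x[t - 1] + sigmav * np.random.randn()
        y[t] = x[t] + sigmae * np.random.randn()
    return x, y

def pf(y, par, nPart, T, x0):
    # Bootstrap particle filter: returns the filtered state estimate
    # and an unbiased estimate of the log-likelihood
    phi, sigmav, sigmae = par
    particles = x0 * np.ones(nPart)
    xhat = np.zeros(T)
    ll = 0.0
    for t in range(T):
        if t > 0:
            # Multinomial resampling, then propagation through the dynamics
            idx = np.random.choice(nPart, nPart, p=weights)
            particles = phi * particles[idx] + sigmav * np.random.randn(nPart)
        # Weight by the Gaussian measurement density (log-scale for stability)
        logw = -0.5 * np.log(2.0 * np.pi * sigmae**2) \
               - 0.5 * (y[t] - particles)**2 / sigmae**2
        wmax = np.max(logw)
        w = np.exp(logw - wmax)
        ll += wmax + np.log(np.sum(w)) - np.log(nPart)
        weights = w / np.sum(w)
        xhat[t] = np.sum(weights * particles)
    return xhat, ll
```

The log-sum-exp shift by `wmax` keeps the weight normalisation numerically stable when the log-weights are strongly negative.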

python/example2-lgss.py

Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
##############################################################################
#
# Example of particle Metropolis-Hastings
# in a linear Gaussian state space model
#
# (c) 2015 Johan Dahlin
# johan.dahlin (at) liu.se
#
##############################################################################

# Import the libraries needed for plotting, numerics and the helpers
import matplotlib.pyplot as plt
import numpy as np
import stateEstimationHelper as helpState
import parameterEstimationHelper as helpParam

# Set the random seed
np.random.seed(10)

##############################################################################
# Define the model
##############################################################################

# Here, we use the following model
#
# x[tt+1] = phi * x[tt] + sigmav * v[tt]
# y[tt]   = x[tt] + sigmae * e[tt]
#
# where v[tt] ~ N(0,1) and e[tt] ~ N(0,1)

# Set the parameters of the model (par[0], par[1], par[2])
par = np.zeros(3)
par[0] = 0.75
par[1] = 1.00
par[2] = 1.00

# Set the number of time steps to simulate
T = 250

# Set the initial state
x0 = 0

##############################################################################
# Generate data
##############################################################################

(x, y) = helpState.generateData(par, T, x0)

##############################################################################
# Parameter estimation using PMH
##############################################################################

# The initial guess of the parameter
initPar = 0.50

# No. particles in the particle filter (choose nPart ~ T)
nPart = 500

# The length of the burn-in and the no. iterations of PMH (nBurnIn < nRuns)
nBurnIn = 1000
nRuns = 5000

# The standard deviation in the random walk proposal
stepSize = 0.10

# Run the PMH algorithm
res = helpParam.pmh(y, initPar, par, nPart, T, x0, helpState.pf, nRuns, stepSize)

##############################################################################
# Plot the results
##############################################################################

# Plot the parameter posterior estimate
# Solid black line indicates the posterior mean
plt.subplot(2, 1, 1)
plt.hist(res[nBurnIn:nRuns], bins=int(np.sqrt(nRuns - nBurnIn)), density=True, facecolor='#7570B3')
plt.xlabel("par[0]"); plt.ylabel("posterior density estimate")
plt.axvline(np.mean(res[nBurnIn:nRuns]), linewidth=2, color='k')

# Plot the trace of the Markov chain after burn-in
# Solid black line indicates the posterior mean
plt.subplot(2, 1, 2)
plt.plot(np.arange(nBurnIn, nRuns, 1), res[nBurnIn:nRuns], color='#E7298A')
plt.xlabel("iteration"); plt.ylabel("par[0]")
plt.axhline(np.mean(res[nBurnIn:nRuns]), linewidth=2, color='k')

##############################################################################
# End of file
##############################################################################
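The routine `parameterEstimationHelper.pmh` is likewise not included in this diff. A minimal, hypothetical sketch of a random-walk PMH update for phi, matching the call signature used above (the flat prior restricted to stable |phi| < 1 is an assumption, and the `particleFilter` argument is assumed to follow the `pf` signature, returning a state estimate and a log-likelihood estimate):

```python
import numpy as np

def pmh(y, initPar, par, nPart, T, x0, particleFilter, nRuns, stepSize):
    # Markov chain over phi; sigmav and sigmae are kept at their true values
    theta = np.zeros(nRuns)
    theta[0] = initPar
    _, ll = particleFilter(y, (theta[0], par[1], par[2]), nPart, T, x0)
    for k in range(1, nRuns):
        # Gaussian random-walk proposal
        thetaProp = theta[k - 1] + stepSize * np.random.randn()
        _, llProp = particleFilter(y, (thetaProp, par[1], par[2]), nPart, T, x0)
        # Accept with probability min(1, exp(llProp - ll)); reject unstable phi
        if np.abs(thetaProp) < 1.0 and np.log(np.random.uniform()) < llProp - ll:
            theta[k], ll = thetaProp, llProp
        else:
            theta[k] = theta[k - 1]
    return theta
```

Because the particle filter's likelihood estimate is unbiased, this accept/reject step targets the exact parameter posterior despite the noisy likelihood (the pseudo-marginal argument in the tutorial).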

python/example3-lgss.py

Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
##############################################################################
#
# Example of particle Metropolis-Hastings in a stochastic volatility model
#
# (c) 2015 Johan Dahlin
# johan.dahlin (at) liu.se
#
##############################################################################

# Import the libraries needed for plotting, numerics and the helpers
import matplotlib.pyplot as plt
import numpy as np
import stateEstimationHelper as helpState
import parameterEstimationHelper as helpParam

# Set the random seed to replicate the results in the tutorial
np.random.seed(10)

##############################################################################
# Define the model
##############################################################################

# Here, we use the following model
#
# x[tt+1] = par[0] * x[tt] + par[1] * v[tt]
# y[tt]   = par[2] * exp( x[tt]/2 ) * e[tt]
#
# where v[tt] ~ N(0,1) and e[tt] ~ N(0,1)

# Set the number of time steps
T = 500

##############################################################################
# Load data
##############################################################################

y = np.loadtxt('omxs30data.csv')

##############################################################################
# Parameter estimation using PMH
##############################################################################

# The initial guess of the parameters
initPar = np.array((0.99, 0.13, 0.90))

# No. particles in the particle filter (choose nPart ~ T)
nPart = 500

# The length of the burn-in and the no. iterations of PMH (nBurnIn < nRuns)
nBurnIn = 2500
nRuns = 7500

# The standard deviations in the random walk proposal
stepSize = np.array((0.01, 0.05, 0.05))

# Run the PMH algorithm
res = helpParam.pmh_sv(y, initPar, nPart, T, helpState.pf_sv, nRuns, stepSize)

# State inference using the posterior mean of the parameters
(xhat, ll) = helpState.pf_sv(y, np.mean(res[nBurnIn:nRuns, :], axis=0), nPart, T)

##############################################################################
# Plot the results
##############################################################################

plt.figure(1)

# Plot the log-returns with the volatility estimate
plt.subplot(4, 2, (1, 2))
plt.plot(y, color='#1B9E77', linewidth=1.5)
plt.plot(xhat, color='#D95F02', linewidth=1.5)
plt.xlabel("time"); plt.ylabel("log-return")

# Plot the parameter posterior estimates and the traces of the Markov chain
# after burn-in; solid black lines indicate the posterior means
plt.subplot(4, 2, 3)
plt.hist(res[nBurnIn:nRuns, 0], bins=int(np.sqrt(nRuns - nBurnIn)), density=True, facecolor='#7570B3')
plt.xlabel("phi"); plt.ylabel("posterior density estimate")
plt.axvline(np.mean(res[nBurnIn:nRuns, 0]), linewidth=1.5, color='k')

plt.subplot(4, 2, 4)
plt.plot(np.arange(nBurnIn, nRuns, 1), res[nBurnIn:nRuns, 0], color='#7570B3')
plt.xlabel("iteration"); plt.ylabel("trace of phi")
plt.axhline(np.mean(res[nBurnIn:nRuns, 0]), linewidth=1.5, color='k')

plt.subplot(4, 2, 5)
plt.hist(res[nBurnIn:nRuns, 1], bins=int(np.sqrt(nRuns - nBurnIn)), density=True, facecolor='#E7298A')
plt.xlabel("sigmav"); plt.ylabel("posterior density estimate")
plt.axvline(np.mean(res[nBurnIn:nRuns, 1]), linewidth=1.5, color='k')

plt.subplot(4, 2, 6)
plt.plot(np.arange(nBurnIn, nRuns, 1), res[nBurnIn:nRuns, 1], color='#E7298A')
plt.xlabel("iteration"); plt.ylabel("trace of sigmav")
plt.axhline(np.mean(res[nBurnIn:nRuns, 1]), linewidth=1.5, color='k')

plt.subplot(4, 2, 7)
plt.hist(res[nBurnIn:nRuns, 2], bins=int(np.sqrt(nRuns - nBurnIn)), density=True, facecolor='#66A61E')
plt.xlabel("beta"); plt.ylabel("posterior density estimate")
plt.axvline(np.mean(res[nBurnIn:nRuns, 2]), linewidth=1.5, color='k')

plt.subplot(4, 2, 8)
plt.plot(np.arange(nBurnIn, nRuns, 1), res[nBurnIn:nRuns, 2], color='#66A61E')
plt.xlabel("iteration"); plt.ylabel("trace of beta")
plt.axhline(np.mean(res[nBurnIn:nRuns, 2]), linewidth=1.5, color='k')

##############################################################################
# End of file
##############################################################################
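`helpState.pf_sv` is also not part of this commit. Assuming the parameter ordering (phi, sigmav, beta) suggested by the plot labels, a bootstrap particle filter for the SV model might be sketched as follows (the initialisation from the stationary distribution is an assumption):

```python
import numpy as np

def pf_sv(y, par, nPart, T):
    # Bootstrap particle filter for the SV model
    #   x[t+1] = phi * x[t] + sigmav * v[t]
    #   y[t]   = beta * exp(x[t]/2) * e[t]
    phi, sigmav, beta = par
    # Initialise from the stationary distribution of the log-volatility
    particles = sigmav / np.sqrt(1.0 - phi**2) * np.random.randn(nPart)
    xhat = np.zeros(T)
    ll = 0.0
    for t in range(T):
        if t > 0:
            # Multinomial resampling, then propagation
            idx = np.random.choice(nPart, nPart, p=weights)
            particles = phi * particles[idx] + sigmav * np.random.randn(nPart)
        # Measurement density: y[t] ~ N(0, beta^2 * exp(x[t]))
        sd = beta * np.exp(particles / 2.0)
        logw = -0.5 * np.log(2.0 * np.pi * sd**2) - 0.5 * (y[t] / sd)**2
        wmax = np.max(logw)
        w = np.exp(logw - wmax)
        ll += wmax + np.log(np.sum(w)) - np.log(nPart)
        weights = w / np.sum(w)
        xhat[t] = np.sum(weights * particles)
    return xhat, ll
```

The only structural difference from the LGSS filter is the state-dependent measurement standard deviation `beta * exp(x/2)`, which is what makes the faPF unavailable and motivates the bPF here.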

r/example1-lgss.R

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ x = data$x
  y = data$y

  # Export plot to file
- #cairo_pdf("lgss-state.pdf", height = 8, width = 8)
+ #cairo_pdf("example1-lgss.pdf", height = 10, width = 8)

  # Plot the latent state and observations
  layout(matrix(1:3, 3, 1, byrow = TRUE));
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+ 5f7bb25e089d2065578cf5b15059510548653bc8

r/example2-lgss.R

Lines changed: 2 additions & 1 deletion
@@ -69,7 +69,7 @@ res = pmh(data$y,initPar,sigmav,sigmae,nPart,T,x0,nIter,stepSize)
  ##############################################################################

  # Export plot to file
- #cairo_pdf("lgss-parameter.pdf", height = 9, width = 8)
+ #cairo_pdf("example2-lgss.pdf", height = 10, width = 8)

  # Plot the parameter posterior estimate
  # Solid black line indicates the posterior mean
@@ -89,6 +89,7 @@ abline(h=mean(res[nBurnIn:nIter]),lwd=1,lty="dotted")
  # Plot the ACF of the Markov chain
  foo=acf( res[nBurnIn:nIter], plot=F)
  plot(foo$lag,foo$acf,col = '#7570B3', type="l",xlab="iteration",ylab="ACF",bty="n",lwd=2)
+ polygon(c(foo$lag,rev(foo$lag)),c(foo$acf,rep(0,length(foo$acf))),border=NA,col=rgb(t(col2rgb("#7570B3"))/256,alpha=0.25))
  abline(h=1.96/sqrt(nIter-nBurnIn),lty="dotted")
  abline(h=-1.96/sqrt(nIter-nBurnIn),lty="dotted")
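The ACF curve and the dotted ±1.96/sqrt(n) bands drawn here feed into the IACT that the README says is presented in Figure 7. A short Python sketch of both quantities (the fixed-lag truncation rule is an assumption, not necessarily the tutorial's choice):

```python
import numpy as np

def acf(chain, maxLag=100):
    # Sample autocorrelation function of a scalar Markov chain
    c = chain - np.mean(chain)
    full = np.correlate(c, c, mode='full')[len(c) - 1:]
    return full[:maxLag + 1] / full[0]

def iact(chain, maxLag=100):
    # Integrated autocorrelation time: 1 + 2 * sum of autocorrelations,
    # truncated at maxLag
    rho = acf(chain, maxLag)
    return 1.0 + 2.0 * np.sum(rho[1:])
```

For an i.i.d. sequence the IACT is near 1; a sticky PMH chain gives a much larger value, which is why the tuned proposal in Section 4.2 is compared on this scale.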

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+ d1941306502b78c61bf5f00ef77ec1cc6403b461
