Publications

KFAS: Exponential Family State Space Models in R

Abstract State space modeling is an efficient and flexible method for statistical inference for a broad class of time series and other data. This paper describes the R package KFAS for state space modeling with observations from the exponential family, namely Gaussian, Poisson, binomial, negative binomial and gamma distributions. After introducing the basic theory behind Gaussian and non-Gaussian state space models, an illustrative example of Poisson time series forecasting is provided.
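
As a flavor of the interface, here is a minimal sketch of Poisson forecasting with KFAS, using simulated counts rather than the data of the paper; the model is a local-level (random walk) specification with the unknown state variance marked by NA:

    library(KFAS)
    # Simulated Poisson counts with a slowly drifting level (illustration only)
    set.seed(1)
    y <- rpois(100, lambda = exp(2 + cumsum(rnorm(100, sd = 0.05))))
    # Local-level model; NA marks the state disturbance variance to be estimated
    model <- SSModel(y ~ SSMtrend(1, Q = list(matrix(NA))),
                     distribution = "poisson")
    fit <- fitSSM(model, inits = 0, method = "BFGS")
    # Ten-step-ahead forecasts with 95% prediction intervals
    # (non-Gaussian models use importance sampling with nsim replications)
    pred <- predict(fit$model, n.ahead = 10, interval = "prediction", nsim = 1000)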

Introducing libeemd: A program package for performing the ensemble empirical mode decomposition

Abstract The ensemble empirical mode decomposition (EEMD) and its complete variant (CEEMDAN) are adaptive, noise-assisted data analysis methods that improve on the ordinary empirical mode decomposition (EMD). All these methods decompose possibly nonlinear and/or nonstationary time series data into a finite number of components separated by their instantaneous frequencies. This decomposition provides a powerful way to look into the different processes behind a given time series and to separate short time-scale events from a general trend.

By Perttu J.J. Luukko, Jouni Helske and Esa Räsänen in R package

January 1, 2016
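
The decomposition is exposed in R through the Rlibeemd package; below is a minimal sketch (my own illustration, assuming the default tuning parameters are acceptable) that runs CEEMDAN on a built-in seasonal series:

    library(Rlibeemd)
    # Decompose the built-in UKgas series into intrinsic mode functions (IMFs)
    # plus a residual trend; ensemble_size sets the number of noise replicates
    imfs <- ceemdan(UKgas, ensemble_size = 1000)
    plot(imfs)  # short time-scale components first, the general trend last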

Analysing Complex Life Sequence Data with Hidden Markov Modelling

Abstract When analysing complex sequence data with multiple channels (dimensions) and long observation sequences, describing and visualizing the data can be a challenge. Hidden Markov models (HMMs) and their mixtures (MHMMs) offer a probabilistic, model-based framework in which the information in such data can be compressed into hidden states (general life stages) and clusters (general patterns in life courses). We studied two different approaches to analysing clustered life sequence data with sequence analysis (SA) and hidden Markov modelling.
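
The abstract does not name software, but as a hypothetical illustration of the multichannel HMM setup it describes, a sketch along these lines could be written with the seqHMM R package (the dataset and object names below are for illustration only):

    library(seqHMM)
    library(TraMineR)  # for building state sequence objects with seqdef()
    data("biofam3c", package = "seqHMM")  # example three-channel life-course data
    # One sequence object per channel (marriage, children, residence)
    marr  <- seqdef(biofam3c$married)
    child <- seqdef(biofam3c$children)
    left  <- seqdef(biofam3c$left)
    # Multichannel HMM with four hidden states (general life stages),
    # randomly initialized and estimated by maximum likelihood
    init_hmm <- build_hmm(observations = list(marr, child, left), n_states = 4)
    fit <- fit_model(init_hmm)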

Improved frequentist prediction intervals for autoregressive models by simulation

Abstract It is well known that the so-called plug-in prediction intervals for autoregressive processes with Gaussian disturbances are too short, i.e. their coverage probabilities fall below the nominal ones. However, simulation experiments show that the formulas borrowed from ordinary linear regression theory yield one-step prediction intervals whose coverage probabilities are very close to the nominal levels. From a Bayesian point of view, the resulting intervals are posterior predictive intervals when uniform priors are assumed for both the autoregressive coefficients and the logarithm of the disturbance variance.
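
To see the undercoverage phenomenon concretely, here is a small Monte Carlo sketch (our own illustration, not the paper's code) that estimates the one-step coverage of the Gaussian plug-in interval for a short AR(1) series:

    # Coverage of the nominal 95% plug-in interval for an AR(1), n = 30
    set.seed(123)
    n <- 30; phi <- 0.9; nsim <- 2000; hits <- 0
    for (i in seq_len(nsim)) {
      y <- arima.sim(model = list(ar = phi), n = n + 1)
      fit <- arima(y[1:n], order = c(1, 0, 0), include.mean = FALSE)
      fc <- predict(fit, n.ahead = 1)
      lower <- as.numeric(fc$pred - qnorm(0.975) * fc$se)
      upper <- as.numeric(fc$pred + qnorm(0.975) * fc$se)
      hits <- hits + (y[n + 1] >= lower && y[n + 1] <= upper)
    }
    hits / nsim  # typically falls below 0.95 for series this short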