Linear Dimensionality Reduction for Time Series

Abstract:

Visualisation by dimensionality reduction is an important tool for data exploration. In this work we are interested in visualising time series. To that end, we formulate a latent variable model that mirrors probabilistic principal component analysis (PPCA). However, unlike PPCA, which maps the latent variables directly to the data space, we first map the latent variables to the parameter space of a recurrent neural network, i.e. each latent projection instantiates a recurrent network. Each instantiated recurrent network is in turn responsible for modelling one time series in the dataset; hence, each latent variable is indirectly mapped to a time series. Incorporating the recurrent network in the latent variable model helps us account for the temporal nature of the time series and capture their underlying dynamics. The proposed algorithm is demonstrated on two benchmark problems and a real-world dataset.
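The core idea of the abstract, a PPCA-like linear projection from a low-dimensional latent space into the parameter space of a recurrent network whose rollout then produces a time series, can be sketched as follows. This is an illustrative toy under stated assumptions, not the paper's implementation: the dimensions, the simple Elman-style recurrence, and the random initialisation of the linear map `A` are all choices made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
Q = 2   # latent dimension, as in 2-D PPCA-style visualisation
H = 5   # hidden units of the instantiated recurrent network
T = 50  # length of each generated time series

# Parameter count of a small autonomous RNN with scalar output:
# h_{t+1} = tanh(W h_t + b),  y_t = v . h_t
P = H * H + H + H  # W (H*H), b (H), v (H)

# PPCA-like linear map from latent space to RNN parameter space;
# randomly initialised here purely for the sketch
A = rng.normal(scale=0.1, size=(P, Q))
mu = rng.normal(scale=0.1, size=P)

def instantiate_rnn(z):
    """Map a latent point z to the parameters of a small RNN."""
    theta = A @ z + mu
    W = theta[:H * H].reshape(H, H)
    b = theta[H * H:H * H + H]
    v = theta[H * H + H:]
    return W, b, v

def generate_series(z, T=T):
    """Roll out the RNN instantiated by z to produce a time series."""
    W, b, v = instantiate_rnn(z)
    h = np.zeros(H)
    y = np.empty(T)
    for t in range(T):
        h = np.tanh(W @ h + b)
        y[t] = v @ h
    return y

# Each 2-D latent point thus indirectly maps to a whole time series
z = np.array([0.5, -1.0])
series = generate_series(z)
print(series.shape)  # (50,)
```

In the model proper, `A` and `mu` would be fitted so that the series generated from each latent point matches the corresponding observed time series; here they are random, which suffices to show the latent-to-parameters-to-series pipeline.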

SEEK ID: https://publications.h-its.org/publications/451

DOI: 10.1007/978-3-319-70087-8_40

Research Groups: Astroinformatics

Publication type: InProceedings

Journal: Neural Information Processing

Citation: Neural Information Processing 10634:375-383, Springer International Publishing

Date Published: 2017

Registered Mode: by DOI

Citation
Gianniotis, N. (2017). Linear Dimensionality Reduction for Time Series. In Neural Information Processing (pp. 375–383). Springer International Publishing. https://doi.org/10.1007/978-3-319-70087-8_40
Activity

Created: 18th Oct 2019 at 09:39

Last updated: 5th Mar 2024 at 21:23

