
rsta.royalsocietypublishing.org

Research
Article submitted to journal

Subject Areas:
Deep learning, time series modelling

Keywords:
Deep neural networks, time series forecasting, uncertainty estimation, hybrid models, interpretability, counterfactual prediction

Author for correspondence:
Bryan Lim
e-mail: blim@robots.ox.ac.uk
Time Series Forecasting With Deep Learning: A Survey

Bryan Lim 1 and Stefan Zohren 1

1 Department of Engineering Science, University of Oxford, Oxford, UK
Numerous deep learning architectures have been developed to accommodate the diversity of time series datasets across different domains. In this article, we survey common encoder and decoder designs used in both one-step-ahead and multi-horizon time series forecasting – describing how temporal information is incorporated into predictions by each model. Next, we highlight recent developments in hybrid deep learning models, which combine well-studied statistical models with neural network components to improve on pure methods of either type. Lastly, we outline some ways in which deep learning can also facilitate decision support with time series data.
1. Introduction
Time series modelling has historically been a key area of academic research – forming an integral part of applications in topics such as climate modelling [1], biological sciences [2] and medicine [3], as well as commercial decision making in retail [4] and finance [5], to name a few.
While traditional methods have focused on parametric models informed by domain expertise – such as autoregressive (AR) [6], exponential smoothing [7] or structural time series models [8] – modern machine learning methods provide a means to learn temporal dynamics in a purely data-driven manner [9]. As data availability and computing power have increased in recent times, machine learning has become a vital part of the next generation of time series forecasting models.
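To make the contrast with data-driven methods concrete, the classical AR(p) model can be estimated in a few lines. The sketch below (function names `fit_ar` and `forecast_one_step` are illustrative, not from any particular library) fits the coefficients by ordinary least squares and produces a one-step-ahead forecast:

```python
import numpy as np

def fit_ar(y, p):
    """Fit an AR(p) model y_t = c + sum_i phi_i * y_{t-i} + eps_t by least squares."""
    # Design matrix: each row is [1, y_{t-1}, ..., y_{t-p}] for t = p, ..., n-1.
    X = np.column_stack(
        [np.ones(len(y) - p)] + [y[p - i:len(y) - i] for i in range(1, p + 1)]
    )
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef  # [c, phi_1, ..., phi_p]

def forecast_one_step(y, coef):
    """One-step-ahead forecast from the most recent p observations."""
    p = len(coef) - 1
    lags = y[-1:-p - 1:-1]  # y_t, y_{t-1}, ..., y_{t-p+1}
    return coef[0] + np.dot(coef[1:], lags)
```

Such models are transparent and cheap to fit, but their fixed linear form is precisely what the representation-learning approaches surveyed here relax.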
Deep learning in particular has gained popularity in recent times, inspired by notable achievements in image classification [10], natural language processing [11] and reinforcement learning [12]. By incorporating bespoke architectural assumptions – or inductive biases [13] – that reflect the nuances of underlying datasets, deep neural networks are able to learn complex data representations [14], which alleviates the need for manual feature engineering and model design. The availability of open-source backpropagation frameworks [15, 16] has also simplified network training, allowing for the customisation of network components and loss functions.
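As an illustration of the kind of custom loss such frameworks make easy to plug in, consider the pinball (quantile) loss, which is commonly used when forecasts must quantify uncertainty. The NumPy sketch below shows the loss itself; in a backpropagation framework, the gradients needed for training would be supplied automatically by autodiff:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for quantile level q in (0, 1).

    Penalises under-prediction with weight q and over-prediction
    with weight (1 - q), so minimising it targets the q-th quantile.
    """
    err = y_true - y_pred
    return np.mean(np.maximum(q * err, (q - 1) * err))
```

For example, with q = 0.9 an under-prediction of 1 unit costs 0.9 while an over-prediction of the same size costs only 0.1, pushing the model towards the upper tail of the predictive distribution.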
© The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
arXiv:2004.13408v1 [stat.ML] 28 Apr 2020