Authors: Andrej Gajdoš, Jozef Hanč, Martina Hančová
Faculty of Science, P. J. Šafárik University in Košice, Slovakia
email: andrej.gajdos@student.upjs.sk
EBLUP-NE for electricity consumption
R-based computational tools - CVXR
- solver - the convex optimization solver (ECOS, OSQP, or SCS), chosen according to the given problem
* **OSQP** for convex quadratic problems
* `max_iter` - maximum number of iterations (default: 4000).
* `eps_abs` - absolute accuracy (default: 1e-3).
* `eps_rel` - relative accuracy (default: 1e-4).
* **ECOS** for convex second-order cone problems
* `maxit` - maximum number of iterations (default: 100).
* `abstol` - absolute accuracy (default: 1e-8).
* `reltol` - relative accuracy (default: 1e-8).
* `feastol` - tolerance for feasibility conditions (default: 1e-8).
* `abstol_inacc` - absolute accuracy for inaccurate solution (default: 5e-5).
* `reltol_inacc` - relative accuracy for inaccurate solution (default: 5e-5).
* `feastol_inacc` - tolerance for feasibility condition for inaccurate solution (default: 1e-4).
* **SCS** for large-scale convex cone problems
* `max_iters` - maximum number of iterations (default: 5000).
* `eps` - convergence tolerance (default: 1e-5).
* `alpha` - relaxation parameter (default: 1.5).
* `scale` - factor by which the data is rescaled, only used if `normalize` is TRUE (default: 1.0).
* `normalize` - whether the heuristic data rescaling should be used (default: TRUE).
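These options can be passed through CVXR's `solve()` call, which forwards solver arguments to the chosen solver. The calls below are a sketch only, shown commented out, since the exact forwarding behaviour and accepted argument names depend on the installed CVXR version:

```r
# sketch (not run): selecting a solver and tuning its options in CVXR;
# argument names must match the chosen solver's own option names, and
# the forwarding behaviour varies across CVXR versions
# library(CVXR)
# sol <- solve(prob, solver = "ECOS", maxit = 200, abstol = 1e-10)
# sol <- solve(prob, solver = "OSQP", max_iter = 10000, eps_abs = 1e-6)
# sol <- solve(prob, solver = "SCS", max_iters = 10000, eps = 1e-8)
```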
Important note: In our testing, we found that the standard CVXR package installation, unlike a local installation from the Anaconda R distribution or from CRAN, did not work in a Binder repository. We are in contact with the authors of the package to fix this. For now, you have to use your local installation of Jupyter with the R kernel, or you can view our live Binder Python notebooks, which use CVXPY with the same results and very similar commands.
library(CVXR)
Warning message:
"package 'CVXR' was built under R version 3.5.3"

Attaching package: 'CVXR'

The following object is masked from 'package:stats':

    power
In this FDSLRM application, we model an econometric time series representing hourly observations of electric energy consumption in a department store. The number of time series observations is $n=24$. The data and model were adapted from Štulajter & Witkovský, 2004.
The consumption data can be fitted by the following Gaussian orthogonal FDSLRM:
$\begin{array}{rcl} X(t) & \!=\! & \beta_1+\beta_2\cos\left(\tfrac{2\pi t}{24}\right)+\beta_3\sin\left(\tfrac{2\pi t}{24}\right)+{}\\ & & +Y_1\cos\left(\tfrac{2\pi t\cdot 2}{24}\right)+Y_2\sin\left(\tfrac{2\pi t\cdot 2}{24}\right)+{}\\ & & +Y_3\cos\left(\tfrac{2\pi t\cdot 3}{24}\right)+Y_4\sin\left(\tfrac{2\pi t\cdot 3}{24}\right)+w(t), \quad t\in \mathbb{N}, \end{array}$
where $\boldsymbol{\beta}=(\beta_1,\,\beta_2,\,\beta_3)' \in \mathbb{R}^3\,,\mathbf{Y} = (Y_1, Y_2, Y_3, Y_4)' \sim \mathcal{N}_4(\boldsymbol{0}, \mathrm{D})\,, w(t) \sim \mathrm{iid}\; \mathcal{N} (0, \sigma_0^2)\,, \boldsymbol{\nu}= (\sigma_0^2, \sigma_1^2, \sigma_2^2, \sigma_3^2, \sigma_4^2) \in \mathbb{R}_{+}^5.$
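As a minimal base-R sketch, one realization of this FDSLRM can be simulated directly from the model equation; the parameter values below are illustrative assumptions, not estimates from the data:

```r
# simulate one realization of the Gaussian orthogonal FDSLRM above;
# beta, D = diag(nu_1,...,nu_4) and sigma0^2 are assumed toy values
set.seed(1)
n <- 24; t <- 1:n
beta <- c(45, -2, -3)               # fixed-effect coefficients (assumed)
nu   <- c(1.1, 3.0, 1.8, 0.4, 1.9)  # (sigma0^2, sigma1^2, ..., sigma4^2)
Y    <- rnorm(4, 0, sqrt(nu[2:5]))  # random effects Y ~ N_4(0, D)
w    <- rnorm(n, 0, sqrt(nu[1]))    # white noise w(t)
X <- beta[1] + beta[2]*cos(2*pi*t/24) + beta[3]*sin(2*pi*t/24) +
     Y[1]*cos(2*pi*2*t/24) + Y[2]*sin(2*pi*2*t/24) +
     Y[3]*cos(2*pi*3*t/24) + Y[4]*sin(2*pi*3*t/24) + w
length(X)   # one simulated "day" of 24 hourly values
```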
# data - time series observation
x <- c(40.3,40.7,38.5,37.9,38.6,41.1,45.2,45.7,46.7,46.5,
45.2,45.1,45.8,46.3,47.5,48.5,49.1,51.7,50.6,48.0,
44.7,41.2,40.0,40.3)
t <- 1:length(x)
# auxiliary functions to create design matrices F, V and the projection matrix M_F
makeF <- function(times, freqs) {
n <- length(times)
f <- length(freqs)
c1 <- matrix(1, n)
F <- matrix(c1, n, 1)
for (i in 1:f) {
F <- cbind(F, cos(2 * pi * freqs[i] * times))
F <- cbind(F, sin(2 * pi * freqs[i] * times))
}
return(F)
}
makeV <- function(times, freqs) {
V <- makeF(times, freqs)
V <- V[, -1]
return(V)
}
makeM_F <- function(F) {
n <- nrow(F)
c1 <- rep(1, n)
I <- diag(c1)
M_F <- I - F %*% solve((t(F) %*% F)) %*% t(F)
return(M_F)
}
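A quick sanity check of these helpers (definitions repeated so the snippet is self-contained): the projector $\mathrm{M_F}$ must be symmetric, idempotent and annihilate the columns of F.

```r
# self-contained sanity check of makeF/makeM_F as defined above
makeF <- function(times, freqs) {
  F <- matrix(1, length(times), 1)
  for (f in freqs) F <- cbind(F, cos(2*pi*f*times), sin(2*pi*f*times))
  F
}
makeM_F <- function(F) diag(nrow(F)) - F %*% solve(t(F) %*% F) %*% t(F)

F <- makeF(1:24, c(1/24))
M <- makeM_F(F)
all.equal(M, t(M))      # symmetry
all.equal(M %*% M, M)   # idempotency
max(abs(M %*% F))       # ~ 0: M_F annihilates col(F)
```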
# model parameters
n <- 24
k <- 3
l <- 4
# model - design matrices F, V
F <- makeF(t, c(1/24))
V <- makeV(t, c(2/24, 3/24))
# columns vj of V and their squared norm ||vj||^2
nv2 <- colSums(V ^ 2)
# auxiliary matrices and vectors
# Gram matrices GF, GV
GF <- t(F) %*% F
GV <- t(V) %*% V
InvGF <- solve(GF)
InvGV <- solve(GV)
# projectors PF, MF, PV, MV
In <- diag(n)
PF <- F %*% InvGF %*% t(F)
PV <- V %*% InvGV %*% t(V)
MF <- In - PF
MV <- In - PV
# residuals e
e <- MF %*% x
using formula (4.1) from Hancova et al 2019
$
\renewcommand{\arraystretch}{1.4} \tilde{\boldsymbol{\nu}}(\mathbf{e}) = \begin{pmatrix} \tfrac{1}{n-k-l}\,\mathbf{e}'\,\mathrm{M_V}\,\mathbf{e} \\ (\mathbf{e}'\mathbf{v}_1)^2/||\mathbf{v}_1||^4 \\ \vdots \\ (\mathbf{e}'\mathbf{v}_l)^2/||\mathbf{v}_l||^4 \end{pmatrix} $
# auxiliary function to calculate the norm of a vector
norm_vec <- function(x) sqrt(sum(x^2))
# NE according to formula (4.1)
NE0 <- 1/(n-k-l) * t(e) %*% MV %*% (e)
NEj <- (t(e) %*% V) ^ 2 / nv2 ^ 2
NE <- c(NE0, NEj)
print(NE)
print(norm_vec(NE))
[1] 1.093045 2.965717 1.761859 0.371935 1.863479 [1] 4.087207
NE as a convex optimization problem
$\begin{array}{ll} \textit{minimize} & \quad f_0(\boldsymbol{\nu})=||\mathbf{e}\mathbf{e}' - \mathrm{VDV'}||^2+||\mathrm{M_V}\mathbf{e}\mathbf{e}'\mathrm{M_V}-\nu_0\mathrm{M_F}\mathrm{M_V}||^2 \\[6pt] \textit{subject to} & \quad \boldsymbol{\nu} = \left(\nu_0, \ldots, \nu_l\right)'\in [0, \infty)^{l+1} \end{array}$
# the optimization variable, objective function
v <- Variable(l+1)
fv <- sum_squares(e%*%t(e)-V%*%diag(v[2:(l+1)])%*%t(V)) + sum_squares(MV%*%e%*%t(e)%*%MV-v[1]%*%MF%*%MV)
# the optimization problem for NE
objective <- Minimize(fv)
constraints <- list(v >= 0)
prob <- Problem(objective, constraints)
# solve the NE problem
sol <- solve(prob)
cat("NEcvxr =",as.vector(sol$getValue(v)))
cat("\n")
cat("norm =", norm_vec(as.vector(sol$getValue(v))))
NEcvxr = 1.093047 2.965724 1.761866 0.3719285 1.863487 norm = 4.087219
using formula (3.9) from Hancova et al 2019.
$
\mathring{\nu}_j = \rho_j^2 \tilde{\nu}_j, \quad j = 0,1,\ldots, l; \qquad \rho_0 = 1, \quad \rho_j = \dfrac{\hat{\nu}_j||\mathbf{v}_j||^2}{\hat{\nu}_0+\hat{\nu}_j||\mathbf{v}_j||^2} $
where $\boldsymbol{\tilde{\nu}}$ are NE, $\boldsymbol{\hat{\nu}}$ are initial estimates for EBLUP-NE
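Before implementing this, a quick numeric illustration (with toy values, not from the data) of how the factor $\rho_j$ behaves: it lies in $(0,1)$ and tends to $1$ as the signal variance $\hat{\nu}_j||\mathbf{v}_j||^2$ dominates the noise variance $\hat{\nu}_0$, so EBLUP-NE shrinks the NE of weak components most.

```r
# toy illustration of the shrinkage factor rho_j (before squaring)
rho <- function(nu0, nuj, nvj2) nuj * nvj2 / (nu0 + nuj * nvj2)
rho(1, 0.1, 12)   # weak signal  -> strong shrinkage of NE
rho(1, 10, 12)    # strong signal -> rho_j close to 1
```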
# EBLUP-NE based on formula (3.9)
rho2 <- function(est) {
result <- c(1)
for(j in 2:length(est)) {
result <- c(result, (est[j]*nv2[j-1]/(est[1]+est[j]*nv2[j-1])) ^ 2)
}
return(result)
}
EBLUPNE <- function(est) {
result <- NE * rho2(est)
return(result)
}
# numerical results
print(rho2(NE))
[1] 1.0000000 0.9412917 0.9041006 0.6452540 0.9089674
print(EBLUPNE(NE))
print(norm_vec(EBLUPNE(NE)))
[1] 1.0930447 2.7916050 1.5928975 0.2399925 1.6938421 [1] 3.801556
using the KKT algorithm (tab.3, Hancova et al 2019)
$~$
$
\qquad \mathbf{q} = \left(\begin{array}{c} \mathbf{e}' \mathbf{e}\\ (\mathbf{e}' \mathbf{v}_{1})^2 \\ \vdots \\ (\mathbf{e}' \mathbf{v}_{l})^2 \end{array}\right) $
$\qquad\mathrm{G} = \left(\begin{array}{ccccc}
n^* & ||\mathbf{v}_{1}||^2 & ||\mathbf{v}_{2}||^2 & \ldots & ||\mathbf{v}_{l}||^2 \\
||\mathbf{v}_{1}||^2 & ||\mathbf{v}_{1}||^4 & 0 & \ldots & 0 \\
||\mathbf{v}_{2}||^2 & 0 & ||\mathbf{v}_{2}||^4 & \ldots & 0 \\
\vdots & \vdots & \vdots & \ldots & \vdots \\
||\mathbf{v}_{l}||^2 & 0 & 0 & \ldots & ||\mathbf{v}_{l}||^4
\end{array}\right)$
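The matrix $\mathrm{G}$ has an arrow structure (dense first row and column, diagonal elsewhere), so it is cheap to form with one `rbind`/`cbind`. A self-contained toy check, with illustrative values for $n^*$ and the squared norms $u_j = ||\mathbf{v}_j||^2$:

```r
# toy check of the arrow structure of G (n_star and u are assumed values)
n_star <- 24
u <- c(12, 12, 12, 12)   # ||v_j||^2 for j = 1..l, here l = 4
G <- rbind(c(n_star, u), cbind(u, diag(u^2)))
dim(G)          # (l+1) x (l+1) = 5 x 5
G[1, ]          # first row: n_star, ||v_1||^2, ..., ||v_l||^2
G[2:5, 2:5]     # remaining block: diag(||v_j||^4)
```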
NNMDOOLSE_kkt <- function(X, F, V, method = "NNDOOLSE") {
n_star <- length(X)
k <- ncol(F)
l <- ncol(V)
if(method == "NNMDOOLSE") {
n_star <- n_star - k
}
MF <- makeM_F(F)
u <- diag(t(V) %*% V)
G <- rbind(c(n_star,u),cbind(u, diag(u^2)))
b_comb <- expand.grid(rep(list(0:1), l))
eps <- c(MF %*% X)
q <- c(eps %*% eps, (eps %*% V)^2)
K <- G
s <- vector()
for(i in 1:nrow(b_comb)) {
b <- as.vector(unlist(b_comb[i,]))
# modify K according to the current sign pattern b
for(j in 1:length(b)) {
if(b[j] == 0) {
K[1,j+1] <- 0
K[j+1,j+1] <- -1
}
}
# invert K once per candidate pattern, not once per coordinate
K_inv <- solve(K)
beta <- c(K_inv %*% q)
if(all(beta >= 0)) {
s <- beta[1]
for(m in 1:l) {
if(b[m] == 0) {
s <- c(s, 0)
} else {
s <- c(s, beta[m+1])
}
}
break
} else {
K <- G
}
}
return(list("estimates" = s, "b" = b))
}
NN_DOOLSE <- NNMDOOLSE_kkt(x, F, V)
print(NN_DOOLSE$estimates)
print(norm_vec(NN_DOOLSE$estimates))
print(NN_DOOLSE$b)
[1] 0.929088 2.888293 1.684435 0.294511 1.786055 [1] 3.914013 [1] 1 1 1 1
nonnegative DOOLSE as a convex optimization problem
$\begin{array}{ll} \textit{minimize} & f_0(\boldsymbol{\nu})=||\mathbf{e}\mathbf{e}'-\Sigma_\boldsymbol{\nu}||^2 \\[6pt] \textit{subject to} & \boldsymbol{\nu} = \left(\nu_0, \ldots, \nu_l\right)'\in [0, \infty)^{l+1} \end{array}$
NNDOOLSE_CVXR <- function(X, F, V) {
n <- length(X)
k <- ncol(F)
l <- ncol(V)
I <- diag(n)
MF <- makeM_F(F)
SX <- MF %*% X %*% t(X) %*% MF
s <- Variable(l+1)
p_obj <- Minimize(sum_squares(SX - (s[1] %*% I) - (V %*% diag(s[2:(l+1)]) %*% t(V))))
constr <- list(s >= 0)
prob <- Problem(p_obj, constr)
sol <- solve(prob)
return(as.vector(sol$getValue(s)))
}
NN_DOOLSEcvxr <- NNDOOLSE_CVXR(x, F, V)
print(NN_DOOLSEcvxr)
print(norm_vec(NN_DOOLSEcvxr))
[1] 0.9289981 2.8882994 1.6844382 0.2944570 1.7860596 [1] 3.913995
using the equivalent MLE convex problem (Proposition 5, Hancova et al 2019)
$\begin{array}{ll} \textit{minimize} & \quad f_0(\mathbf{d})=-(n^*\!-l)\ln d_0 - \displaystyle\sum\limits_{j=1}^{l} \ln(d_0-d_j||\mathbf{v}_j||^2)+d_0\mathbf{e}'\mathbf{e}-\mathbf{e}'\mathrm{V}\,\mathrm{diag}\{d_j\}\mathrm{V}'\mathbf{e} \\[6pt] \textit{subject to} & \quad d_0 > \max\{d_j||\mathbf{v}_j||^2, j = 1, \ldots, l\} \\ & \quad d_j \geq 0, j=1,\ldots, l \\ & \\ & \quad\text{for MLE: } n^* = n, \text{ for REMLE: } n^* = n-k \\ \textit{back transformation:} & \quad \nu_0 = \dfrac{1}{d_0}, \quad \nu_j = \dfrac{d_j}{d_0\left(d_0 -d_j||\mathbf{v}_j||^2\right)} \end{array}$
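Solving the stated back transformation for $\mathbf{d}$ gives $d_0 = 1/\nu_0$ and $d_j = \nu_j/\big(\nu_0(\nu_0+\nu_j||\mathbf{v}_j||^2)\big)$; this inversion is our own derivation, so a base-R round-trip check with toy values is worthwhile:

```r
# round-trip check of the back transformation (toy values assumed)
nu <- c(1.0, 2.9, 1.7, 0.3, 1.8)   # (nu_0, nu_1, ..., nu_l)
u  <- c(12, 12, 12, 12)            # ||v_j||^2 for j = 1..l
d0 <- 1 / nu[1]                            # inverse transform, derived
dj <- nu[-1] / (nu[1] * (nu[1] + nu[-1] * u))
# applying the stated back transformation must recover nu
nu0_back <- 1 / d0
nuj_back <- dj / (d0 * (d0 - dj * u))
all.equal(c(nu0_back, nuj_back), nu)   # TRUE
```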
MLE_CVXR <- function(X, F, V){
n <- length(X)
l <- ncol(V)
MF <- makeM_F(F)
GV <- t(V) %*% V
p <- n - l
e <- as.vector(MF %*% X)
ee <- as.numeric(t(e) %*% e)
eV <- t(e) %*% V
Ve <- t(V) %*% e
d <- Variable(l+1)
logdetS <- p * log(d[1]) + sum(log(d[1] - GV %*% d[2:(l+1)]))
obj <- Maximize(logdetS - ((d[1] * ee) - (eV %*% diag(d[2:(l+1)]) %*% Ve)))
constr <- list(d[2:(l+1)] >= 0, d[1] >= max_entries(GV %*% d[2:(l+1)]))
p_MLE <- Problem(obj, constr)
sol <- solve(p_MLE)
s <- 1 / sol$getValue(d)[1]
sj <- vector()
for(j in 2:(l+1)) {
sj <- c(sj, sol$getValue(d)[j]/(sol$getValue(d)[1] * (sol$getValue(d)[1] - sol$getValue(d)[j] * GV[j-1,j-1])))
}
result <- c(s, sj)
return(as.vector(result))
}
MLEcvxr <- MLE_CVXR(x, F, V)
print(MLEcvxr)
print(norm_vec(MLEcvxr))
[1] 0.929088 2.888294 1.684435 0.294511 1.786056 [1] 3.914013
# numerical results
print(rho2(NN_DOOLSE$estimates))
[1] 1.0000000 0.9484689 0.9140421 0.6270020 0.9186301
print(EBLUPNE(NN_DOOLSE$estimates))
print(norm_vec(EBLUPNE(NN_DOOLSE$estimates)))
[1] 1.093045 2.812891 1.610413 0.233204 1.711848 [1] 3.832146
using the KKT algorithm (tab.3, Hancova et al 2019)
NN_MDOOLSE <- NNMDOOLSE_kkt(x, F, V, method = "NNMDOOLSE")
print(NN_MDOOLSE$estimates)
print(norm_vec(NN_MDOOLSE$estimates))
print(NN_MDOOLSE$b)
[1] 1.0930447 2.8746303 1.6707717 0.2808479 1.7723924 [1] 3.933189 [1] 1 1 1 1
nonnegative MDOOLSE as a convex optimization problem
$\begin{array}{ll} \textit{minimize} & f_0(\boldsymbol{\nu})=||\mathbf{e}\mathbf{e}'-\mathrm{M_F}\Sigma_\boldsymbol{\nu}\mathrm{M_F}||^2 \\[6pt] \textit{subject to} & \boldsymbol{\nu} = \left(\nu_0, \ldots, \nu_l\right)'\in [0, \infty)^{l+1} \end{array}$
NNMDOOLSE_CVXR <- function(X, F, V) {
n <- length(X)
k <- ncol(F)
l <- ncol(V)
MF <- makeM_F(F)
MFV <- MF %*% V
SX <- MF %*% X %*% t(X) %*% MF
s <- Variable(l+1)
p_obj <- Minimize(sum_squares(SX - (s[1] %*% MF) - (MFV %*% diag(s[2:(l+1)]) %*% t(MFV))))
constr <- list(s >= 0)
prob <- Problem(p_obj, constr)
sol <- solve(prob)
return(as.vector(sol$getValue(s)))
}
NN_MDOOLSEcvxr <- NNMDOOLSE_CVXR(x, F, V)
print(NN_MDOOLSEcvxr)
print(norm_vec(NN_MDOOLSEcvxr))
[1] 1.0930430 2.8746304 1.6707717 0.2808461 1.7723924 [1] 3.933188
using the equivalent REMLE convex problem (Proposition 5, Hancova et al 2019)
$\begin{array}{ll} \textit{minimize} & \quad f_0(\mathbf{d})=-(n^*\!-l)\ln d_0 - \displaystyle\sum\limits_{j=1}^{l} \ln(d_0-d_j||\mathbf{v}_j||^2)+d_0\mathbf{e}'\mathbf{e}-\mathbf{e}'\mathrm{V}\,\mathrm{diag}\{d_j\}\mathrm{V}'\mathbf{e} \\[6pt] \textit{subject to} & \quad d_0 > \max\{d_j||\mathbf{v}_j||^2, j = 1, \ldots, l\} \\ & \quad d_j \geq 0, j=1,\ldots, l \\ & \\ & \quad\text{for MLE: } n^* = n, \text{ for REMLE: } n^* = n-k \\ \textit{back transformation:} & \quad \nu_0 = \dfrac{1}{d_0}, \quad \nu_j = \dfrac{d_j}{d_0\left(d_0 -d_j||\mathbf{v}_j||^2\right)} \end{array}$
REMLE_CVXR <- function(X, F, V){
n <- length(X)
k <- ncol(F)
l <- ncol(V)
MF <- makeM_F(F)
GV <- t(V) %*% V
p <- n - l - k
e <- as.vector(MF %*% X)
ee <- as.numeric(t(e) %*% e)
eV <- t(e) %*% V
Ve <- t(V) %*% e
d <- Variable(l+1)
logdetS <- p * log(d[1]) + sum(log(d[1] - GV %*% d[2:(l+1)]))
obj <- Maximize(logdetS - ((d[1] * ee) - (eV %*% diag(d[2:(l+1)]) %*% Ve)))
constr <- list(d[2:(l+1)] >= 0, d[1] >= max_entries(GV %*% d[2:(l+1)]))
p_remle <- Problem(obj, constr)
sol <- solve(p_remle)
s <- 1 / sol$getValue(d)[1]
sj <- vector()
for(j in 2:(l+1)) {
sj <- c(sj, sol$getValue(d)[j]/(sol$getValue(d)[1] * (sol$getValue(d)[1] - sol$getValue(d)[j] * GV[j-1,j-1])))
}
result <- c(s, sj)
return(as.vector(result))
}
REMLEcvxr <- REMLE_CVXR(x, F, V)
print(REMLEcvxr)
print(norm_vec(REMLEcvxr))
[1] 1.0930447 2.8746311 1.6707719 0.2808479 1.7723927 [1] 3.93319
# numerical results
print(rho2(NN_MDOOLSE$estimates))
[1] 1.0000000 0.9395166 0.8992740 0.5701753 0.9046291
print(EBLUPNE(NN_MDOOLSE$estimates))
print(norm_vec(EBLUPNE(NN_MDOOLSE$estimates)))
[1] 1.0930447 2.7863408 1.5843938 0.2120681 1.6857577 [1] 3.788865
This notebook belongs to the supplementary materials of the paper submitted to Statistical Papers and available at https://arxiv.org/abs/1905.07771.
We propose a two-stage estimation method of variance components in time series models known as FDSLRMs, whose observations can be described by a linear mixed model (LMM). We base the estimation of variances, which are fundamental quantities in the time series forecasting approach called kriging, on the empirical (plug-in) best linear unbiased predictions of unobservable random components in FDSLRM.
The method, providing invariant non-negative quadratic estimators, can be used for any absolutely continuous probability distribution of time series data. As a result of applying the convex optimization and the LMM methodology, we resolved two problems $-$ the theoretical existence and equivalence between least squares estimators, non-negative (M)DOOLSE, and maximum likelihood estimators, (RE)MLE, as possible starting points of our method, and a practical lack of computational implementation for FDSLRM. As for computing (RE)MLE in the case of $n$ observed time series values, we also discovered a new algorithm of order $\mathcal{O}(n)$, which at the default precision is $10^7$ times more accurate and $n^2$ times faster than the best current Python (or R)-based computational packages, namely CVXPY, CVXR, nlme, sommer and mixed.
We illustrate our results on three real data sets $-$ electricity consumption, tourism and cyber security $-$ which are easily available, reproducible, sharable and modifiable in the form of interactive Jupyter notebooks.
Štulajter, F., & Witkovský, V. (2004). [Estimation of Variances in Orthogonal Finite Discrete Spectrum Linear Regression Models](https://link.springer.com/article/10.1007/s001840300299). Metrika, Vol. 60, No. 2, pp. 105–118.