#!/usr/bin/env python
# coding: utf-8

# # Problem 1 [35 points] Symmetric positive (semi)definite matrices

# ## (a) [(1+4) + (1+4) = 10 points] Outer product matrix
#
# For $x\in\mathbb{R}^{n}\setminus\{0\}$, consider the matrix $X = x x^{\top}$.
#
# (i) Is $X\in\mathbb{S}_{+}^{n}$? Explain why/why not.
#
# (ii) Is $X\in\mathbb{S}_{++}^{n}$? Explain why/why not.

# ## Solution for 1(a):
#
# (i) **Yes, $X\in\mathbb{S}_{+}^{n}$.** To see why, notice first that $X\in\mathbb{S}^{n}$ since $X^{\top}=\left(xx^{\top}\right)^{\top}=xx^{\top}=X$. Next, for any $v\in\mathbb{R}^{n}\setminus\{0\}$, we have $v^{\top}X v = \left(v^{\top}x\right)\left(x^{\top}v\right) = \left(v^{\top}x\right)^{2} \geq 0$. Therefore, $X\in\mathbb{S}_{+}^{n}$.
#
# (ii) **No, the matrix $X\notin\mathbb{S}_{++}^{n}$** (assuming $n\geq 2$; for $n=1$, $X=x^{2}>0$ is in fact positive definite). This is because from (i), we know $v^{\top}X v = \left(v^{\top}x\right)^{2}$ for all $v\in\mathbb{R}^{n}\setminus\{0\}$. Thus, $v^{\top}X v = 0$ whenever the nonzero vectors $x$ and $v$ are orthogonal to each other, i.e., $v^{\top}x=0$; such a nonzero $v$ always exists when $n\geq 2$.

# ## (b) [(1+4) + (1+4) = 10 points] Copositive matrix
#
# Let $\mathcal{X}$ denote the set of $n\times n$ real copositive matrices (see Lec. 3, p. 1-2).
#
# (i) True or false: $\mathcal{X} \subset \mathbb{S}_{+}^{n}$. Explain your answer.
#
# (ii) True or false: $\mathbb{S}_{+}^{n} \subset \mathcal{X}$. Explain your answer.

# ## Solution for 1(b):
#
# (i) **False.** For $X\in\mathcal{X}$, copositivity only guarantees $v^{\top}Xv \geq 0$ for the componentwise nonnegative vectors $v\in\mathbb{R}_{+}^{n}\setminus\{0\}$; it does not guarantee $v^{\top}Xv \geq 0$ for all $v\in\mathbb{R}^{n}\setminus\{0\}$. A counterexample was given in Lec. 3, p. 1-2 (in green). For instance, $\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$ is copositive (since $v^{\top}Xv = 2v_{1}v_{2}\geq 0$ for $v\geq 0$) but has eigenvalue $-1$, hence is not positive semidefinite.
#
# (ii) **True,** because the set $\mathbb{R}^{n}_{+}\setminus\{0\}$ is a subset of $\mathbb{R}^{n}\setminus\{0\}$. Thus, all positive semidefinite matrices are copositive.

# ## (c) [15 points] Visualizing $\mathbb{S}_{+}^{2}$
#
# Consider $X = \begin{pmatrix} x & z\\ z & y \end{pmatrix} \in \mathbb{S}_{+}^{2}$ where $x,y,z$ are scalars. Discretize $x,y$ within appropriate intervals and use your favorite programs such as MATLAB/Python/Julia to visualize $\mathbb{S}_{+}^{2}$ as a subset of $\mathbb{R}^{3}$, that is, plot it as a three dimensional set.
#
# **Insert** the plot in the notebook. **Submit your code in the zip file** so that we can reproduce your plot.
#
# (**Hint:** consider the principal minor characterization from Lec. 3, p. 18-20)

# ## Solution for 1(c):
#
# From Lec. 3, p. 18-20, for $X\in\mathbb{S}^{2}_{+}$, all principal minors of the given symmetric matrix $X$ must be nonnegative, that is,
#
# $$x\geq 0, \quad y\geq 0, \quad xy - z^{2} \geq 0.$$
#
# Thus, the boundary of the set $\mathbb{S}_{+}^{2}$ must be of the form $z=\pm\sqrt{xy}$ with $x,y$ nonnegative.
#
# A transparent surface plot of this boundary is shown in the following figure. The points on this boundary correspond to positive semidefinite matrices, while the **interior** of this set is $\mathbb{S}_{++}^{2}$ (this is because the **leading** principal minor condition for strict positive definiteness gives $xy>z^{2}$ with positive $x,y$). The vertex of this set, shown as a red dot below, is the $2\times 2$ zero matrix. We learnt in Lec. 5, p. 3-4, that this set is, in fact, a convex cone, which confirms the visual intuition gleaned from the plot below.
#
# A sample MATLAB code $\texttt{VisualizeSPSD.m}$ to generate the above figure is available in the CANVAS Files section.
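# Although the submitted figure was generated with the MATLAB code mentioned above, the following is a minimal Python sketch of the same construction, assuming a standard `numpy`/`matplotlib` environment (the grid interval $[0,2]$ and the plot styling are arbitrary illustrative choices, not part of the original submission).

# In[ ]:

import numpy as np
import matplotlib.pyplot as plt

# Discretize x, y >= 0 and plot the two boundary sheets z = +/- sqrt(xy)
x = np.linspace(0, 2, 100)
y = np.linspace(0, 2, 100)
Xg, Yg = np.meshgrid(x, y)
Zg = np.sqrt(Xg * Yg)  # boundary of S^2_+ satisfies z^2 = xy

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(Xg, Yg, Zg, alpha=0.5, color='steelblue')   # upper sheet
ax.plot_surface(Xg, Yg, -Zg, alpha=0.5, color='steelblue')  # lower sheet
ax.scatter(0, 0, 0, color='red', s=40)  # vertex: the 2x2 zero matrix
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_zlabel('$z$')
plt.show()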
# # Problem 2 [5 x 6 = 30 points] Vector unit norm balls

# For any fixed $p$ satisfying $0\leq p \leq \infty$, the vector unit $p$-norm ball is the set
#
# $$ \{x\in\mathbb{R}^{n} : \|x\|_{p} \leq 1\} \subset \mathbb{R}^{n}.$$
#
# Clearly, the above set is centered at the origin. For the definition of the vector $p$-norm, see Lec. 3, p. 5.
#
# The following plot shows the **two dimensional** $p$-norm balls for $p\in\{0.5,1,1.5,2,3.5,\infty\}$ (from left to right, top to bottom).
#
# Use your favorite programs such as MATLAB/Python/Julia to plot the **three dimensional** $p$-norm balls for the same $p$ as above. **Insert** the plot in the notebook. **Submit your code in the zip file** so that we can reproduce your plot.

# ## Solution for 2:
#
# The desired plot is shown below.
#
# The Python code $\texttt{Plot3DUnitNormBall.py}$ to generate the above plot is available in the CANVAS Files section.
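# The submitted plot was produced by $\texttt{Plot3DUnitNormBall.py}$; the snippet below is a minimal inline sketch of one way to do it (an illustrative reconstruction, assuming `numpy`/`matplotlib`, not the actual submitted file). The idea: for each direction $u$ on the unit Euclidean sphere, positive homogeneity of $\|\cdot\|_{p}$ implies that $u/\|u\|_{p}$ lies on the $p$-norm unit sphere.

# In[ ]:

import numpy as np
import matplotlib.pyplot as plt

def p_norm(V, p):
    """p-norm (p = inf gives the max-norm) along the last axis of V."""
    if np.isinf(p):
        return np.abs(V).max(axis=-1)
    return (np.abs(V) ** p).sum(axis=-1) ** (1.0 / p)

# Spherical parametrization of unit directions u(theta, phi)
theta, phi = np.meshgrid(np.linspace(0, np.pi, 60),
                         np.linspace(0, 2 * np.pi, 120))
U = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)

fig = plt.figure(figsize=(12, 8))
for k, p in enumerate([0.5, 1, 1.5, 2, 3.5, np.inf]):
    R = 1.0 / p_norm(U, p)  # radial rescaling onto the p-norm unit sphere
    ax = fig.add_subplot(2, 3, k + 1, projection='3d')
    ax.plot_surface(R * U[..., 0], R * U[..., 1], R * U[..., 2], alpha=0.7)
    ax.set_title(r'$p = \infty$' if np.isinf(p) else f'$p = {p}$')
plt.tight_layout()
plt.show()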
# # Problem 3 [35 points] Schatten $p$-norm of a matrix

# In Lec. 3, p. 6-8, we discussed the **induced** $p$-norm of any matrix $X\in\mathbb{R}^{m\times n}$. A different way to define a matrix norm is to simply take the $p$-norm of the vector comprising the singular values of $X$.
#
# Specifically, the **Schatten** $p$-norm of a matrix $X\in\mathbb{R}^{m\times n}$ is
#
# $$\|X\|_{\text{Schatten}\;p} := \left(\displaystyle\sum_{i=1}^{\min\{m,n\}}\left(\sigma_{i}(X)\right)^{p}\right)^{1/p}.$$
#
# In other words, if we define a vector $\sigma := (\sigma_1, ..., \sigma_{\min\{m,n\}})$, then $\|X\|_{\text{Schatten}\;p} = \|\sigma\|_{p}$.

# ## (a) [10 points] Schatten 2-norm
#
# Prove that $\|X\|_{\text{Schatten}\;2} = \|X\|_{\text{F}}$, the Frobenius norm (see Lec. 3, p. 9 bottom).

# ## Solution for 3(a):
#
# We have $\|X\|_{\rm{F}} = \sqrt{\text{trace}(X^{\top}X)} = \sqrt{\sum_{i=1}^{n}\lambda_{i}(X^{\top}X)} = \left(\displaystyle\sum_{i=1}^{\min\{m,n\}}\left(\sigma_{i}(X)\right)^{2}\right)^{1/2} = \|X\|_{\text{Schatten}\;2}.$
#
# The second equality above follows from the fact that the trace of a square matrix is equal to the sum of its eigenvalues. The third equality follows because the nonzero eigenvalues of $X^{\top}X$ are exactly the squared nonzero singular values of $X$ (and the remaining eigenvalues, if any, are zero).

# ## (b) [10 points] Schatten $\infty$-norm
#
# Prove that $\|X\|_{\text{Schatten}\;\infty} = \|X\|_{\text{Induced}\;2}$, the spectral norm (see Lec. 3, p. 8 bottom).

# ## Solution for 3(b):
#
# We have $\|X\|_{\text{Schatten}\;\infty} := \|\sigma\|_{\infty} = \sigma_{\max}(X)$, the maximum singular value of $X$.
#
# On the other hand, from Lec. 3, p. 8, $\|X\|_{\text{Induced}\;2} = \sqrt{\lambda_{\max}\left(X^{\top}X\right)} =: \sigma_{\max}(X)$. Hence the claim.

# ## (c) [10 points] Schatten 1-norm
#
# Prove that if $X\in\mathbb{S}^{n}_{+}$, then $\|X\|_{\text{Schatten}\;1} = \text{trace}(X)$.
#
# **Remark:** The Schatten 1-norm is also called the nuclear norm, and is extremely important in convex optimization. We will learn more about it later in this course.

# ## Solution for 3(c):
#
# Since $X\in\mathbb{S}^{n}_{+}$, we have $\sigma_{i}(X) := \sqrt{\lambda_{i}(X^{\top}X)} = \sqrt{\lambda_{i}(X^{2})} = \sqrt{\left(\lambda_{i}(X)\right)^{2}} = \lambda_{i}(X)$. The last equality follows from the fact that $\lambda_{i}(X)\geq 0$ for $X\in\mathbb{S}^{n}_{+}$ (see Lec. 3, p. 17).
#
# Therefore, in this case, $\|X\|_{\text{Schatten}\;1} = \sum_{i=1}^{n}\lambda_{i}(X) = \text{trace}(X)$, as desired.
#
# **Remark:** Notice that the above calculations show that for $X\in\mathbb{S}^{n}$, we have $\|X\|_{\text{Schatten}\;1} = \sum_{i=1}^{n}|\lambda_{i}(X)|$.

# ## (d) [1+4 = 5 points] Schatten 0-norm
#
# The vector $0$-norm is defined as the cardinality (that is, the number of nonzero entries) of the vector.
#
# **What is the interpretation** of the Schatten 0-norm $\|\sigma\|_{0}$? **Explain** your answer.

# ## Solution for 3(d):
#
# $\|\sigma\|_{0}$ equals $\text{rank}(X)$.
#
# The reason is that $\|X\|_{\text{Schatten}\;0} := \|\sigma\|_{0} := \text{cardinality}(\sigma) = $ the number of nonzero singular values, which is indeed the rank of $X\in\mathbb{R}^{m\times n}$ (Lec. 3, p. 9).
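# Though not required by the problem, the identities proved in parts (a), (b), and (c) are easy to confirm numerically. Below is a minimal sketch assuming a standard `numpy` environment; the matrix sizes and the random seed are arbitrary illustrative choices.

# In[ ]:

import numpy as np

rng = np.random.default_rng(0)

# A generic rectangular matrix and its singular values
X = rng.standard_normal((4, 6))
sigma = np.linalg.svd(X, compute_uv=False)

# (a) Schatten 2-norm equals the Frobenius norm
print(np.isclose(np.sqrt((sigma ** 2).sum()), np.linalg.norm(X, 'fro')))

# (b) Schatten infinity-norm equals the spectral (induced 2-) norm
print(np.isclose(sigma.max(), np.linalg.norm(X, 2)))

# (c) For a PSD matrix, the Schatten 1-norm (nuclear norm) equals the trace
A = rng.standard_normal((5, 5))
P = A @ A.T  # A A^T is symmetric positive semidefinite
print(np.isclose(np.linalg.svd(P, compute_uv=False).sum(), np.trace(P)))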
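# A similar sanity check applies to part (d): counting the singular values above a small numerical tolerance recovers the rank. The sketch below again assumes `numpy`; the tolerance `1e-10` is an arbitrary illustrative threshold (internally, `numpy.linalg.matrix_rank` also counts singular values above a tolerance).

# In[ ]:

import numpy as np

rng = np.random.default_rng(1)

# A 6x5 matrix of rank 3, built as a product of thin factors
B = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))
sigma = np.linalg.svd(B, compute_uv=False)

# Schatten 0-"norm": the number of (numerically) nonzero singular values
card = int((sigma > 1e-10).sum())
print(card, np.linalg.matrix_rank(B))  # both equal 3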