1. Check the following functions are inner products:
(a) Let C[-1,1] be the vector space of continuous complex-valued functions on the interval [-1,1]. Define \langle\cdot,\cdot\rangle:C[-1,1]\times C[-1,1]\rightarrow\mathbb{C} by \langle f,g \rangle=\int_{-1}^1f(x)\overline{g(x)}\mathrm{d}x for all f(x),g(x)\in C[-1,1].
- Well-definedness: \forall f,g\in C[-1,1], the product f(x)\overline{g(x)} is a continuous function [-1,1]\rightarrow\mathbb{C}, so the integral exists and \langle f,g\rangle\in\mathbb{C}.
Conjugate symmetry: \forall f,g\in C[-1,1],\langle f,g\rangle=\int_{-1}^1f(x)\overline{g(x)}\mathrm{d}x=\overline{\int_{-1}^1g(x)\overline{f(x)}\mathrm{d}x}=\overline{\langle g,f\rangle}.
Linearity: \langle f+kg,h\rangle=\int_{-1}^1(f+kg)(x)\overline{h(x)}\mathrm{d}x=\int_{-1}^1f(x)\overline{h(x)}+kg(x)\overline{h(x)}\mathrm{d}x=\int_{-1}^1f(x)\overline{h(x)}\mathrm{d}x+k\int_{-1}^1g(x)\overline{h(x)}\mathrm{d}x=\langle f,h\rangle+k\langle g,h \rangle,\forall f,g,h\in C[-1,1],k\in\mathbb{C}.
Positive-definiteness: \forall f\in C[-1,1]\setminus\{0\},\langle f,f\rangle=\int_{-1}^1f(x)\overline{f(x)}\mathrm{d}x=\int_{-1}^1|f(x)|^2\mathrm{d}x. Since f is continuous and not identically zero, |f|^2\ge0 everywhere and |f|^2>0 on some subinterval, so \langle f,f\rangle>0; and \langle 0,0\rangle=0.
Q.E.D.
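As a quick numerical sanity check of (a) (a sketch, not part of the proof), the following snippet approximates the integral on a grid and tests conjugate symmetry, linearity in the first slot, and positivity on a few arbitrarily chosen continuous functions; numpy is assumed to be available.

```python
# Numerical sanity check (not a proof) of the inner product in 1(a).
import numpy as np

x = np.linspace(-1.0, 1.0, 20001)            # fine grid on [-1, 1]
dx = x[1] - x[0]

def ip(f, g):
    """<f, g> = integral_{-1}^{1} f(x) * conj(g(x)) dx (composite trapezoid rule)."""
    vals = f(x) * np.conj(g(x))
    return np.sum((vals[:-1] + vals[1:]) / 2) * dx

f = lambda t: np.exp(1j * np.pi * t)          # arbitrary sample functions
g = lambda t: t**2 + 1j * t
h = lambda t: np.cos(t) + 2j
k = 0.7 - 1.3j

print(abs(ip(f, g) - np.conj(ip(g, f))))                                   # ~0 (conjugate symmetry)
print(abs(ip(lambda t: f(t) + k * g(t), h) - (ip(f, h) + k * ip(g, h))))   # ~0 (linearity)
print(ip(f, f).real > 0)                                                   # True (positivity, f != 0)
```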
(b) The function \langle\cdot,\cdot\rangle:\mathbb{R}[x]\times\mathbb{R}[x]\rightarrow\R is defined by \langle p,q\rangle=p(0)q(0)+\int_{-1}^{1}p'(x)q'(x)\mathrm{d}x for all p(x),q(x)\in\R[x].
- Well-definedness: \forall p(x),q(x)\in\R[x],p(0)q(0)\in\R and p'(x)q'(x)\in\R[x] is integrable on [-1,1], so \langle p,q\rangle\in\R.
Conjugate symmetry: \forall p(x),q(x)\in\R[x],\langle p,q\rangle=p(0)q(0)+\int_{-1}^1p'(x)q'(x)\mathrm{d}x=q(0)p(0)+\int_{-1}^1q'(x)p'(x)\mathrm{d}x=\langle q,p\rangle.
Linearity: \langle p+kq,r\rangle=(p+kq)(0)r(0)+\int_{-1}^1(p+kq)'(x)r'(x)\mathrm{d}x=p(0)r(0)+\int_{-1}^1p'(x)r'(x)\mathrm{d}x+k(q(0)r(0)+\int_{-1}^1q'(x)r'(x)\mathrm{d}x)=\langle p,r\rangle+k\langle q,r \rangle,\forall p(x),q(x),r(x)\in\R[x],k\in\R.
Positive-definiteness: \forall p(x)\in\R[x]\setminus\{0\},\langle p,p\rangle=(p(0))^2+\int_{-1}^1(p'(x))^2\mathrm{d}x\ge0. If \langle p,p\rangle=0, then p(0)=0 and p'\equiv0 on [-1,1]; a polynomial with p'\equiv0 is constant, so p\equiv p(0)=0, a contradiction. Hence \langle p,p\rangle>0 for p\neq0, and \langle 0,0\rangle=0.
Q.E.D.
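A small symbolic check of (b) on sample polynomials (the polynomials and the scalar k below are arbitrary choices), assuming sympy is available; the last line exercises the edge case p(0)=0 from the positivity argument above.

```python
# Exact symbolic check (sympy) of the form in 1(b).
import sympy as sp

x = sp.symbols('x')

def ip(p, q):
    """<p, q> = p(0) q(0) + integral_{-1}^{1} p'(x) q'(x) dx."""
    return sp.simplify(p.subs(x, 0) * q.subs(x, 0)
                       + sp.integrate(sp.diff(p, x) * sp.diff(q, x), (x, -1, 1)))

p, q, r = x**2 + 1, 3*x - 2, x**3
k = sp.Rational(5, 7)

print(sp.simplify(ip(p, q) - ip(q, p)) == 0)                       # symmetry
print(sp.simplify(ip(p + k*q, r) - (ip(p, r) + k*ip(q, r))) == 0)  # linearity
print(ip(x, x))   # p(x) = x has p(0) = 0, yet <p, p> = 2 > 0
```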
(c) The function \langle\cdot,\cdot\rangle:\R[x]\times\R[x]\rightarrow\R is defined by \langle p,q\rangle=\int_0^\infty p(x)q(x)e^{-x}\mathrm{d}x for all p(x),q(x)\in\R[x].
- Well-definedness: \forall p(x),q(x)\in\R[x], the integrand p(x)q(x)e^{-x} is real-valued, and since \int_0^\infty x^ne^{-x}\mathrm{d}x=n!<\infty for every n\ge0, the improper integral converges, so \langle p,q\rangle\in\R.
Conjugate symmetry: \forall p(x),q(x)\in\R[x],\langle p,q\rangle=\int_0^\infty p(x)q(x)e^{-x}\mathrm{d}x=\int_0^\infty q(x)p(x)e^{-x}\mathrm{d}x=\langle q,p\rangle.
Linearity: \langle p+kq,r\rangle=\int_0^\infty (p+kq)(x)r(x)e^{-x}\mathrm{d}x=\int_0^\infty p(x)r(x)e^{-x}\mathrm{d}x+k\int_0^\infty q(x)r(x)e^{-x}\mathrm{d}x=\langle p,r\rangle+k\langle q,r \rangle,\forall p(x),q(x),r(x)\in\R[x],k\in\R.
Positive-definiteness: \forall p(x)\in\R[x]\setminus\{0\},\langle p,p\rangle=\int_0^\infty p^2(x)e^{-x}\mathrm{d}x. The integrand p^2(x)e^{-x}\ge0, and it vanishes only at the finitely many roots of p, so \langle p,p\rangle>0; and \langle 0,0\rangle=0.
Q.E.D.
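For (c), the improper integrals can be evaluated exactly via \int_0^\infty x^ne^{-x}\mathrm{d}x=n!, which gives a quick check on coefficient lists; the sample polynomials below are arbitrary.

```python
# Check of 1(c) on coefficient lists, using <x^a, x^b> = (a + b)!.
from math import factorial

def ip(p, q):
    """Inner product of polynomials given as coefficient lists (p[i] = coeff of x^i)."""
    return sum(p[a] * q[b] * factorial(a + b)
               for a in range(len(p)) for b in range(len(q)))

p = [1, -2, 0, 3]      # 1 - 2x + 3x^3
q = [0, 1, 4]          # x + 4x^2
r = [2, 0, -1]         # 2 - x^2
k = 1.5

print(ip(p, q) == ip(q, p))                                        # symmetry
pk = [a + k * b for a, b in zip(p + [0]*len(q), q + [0]*len(p))]   # coefficients of p + k q
print(abs(ip(pk, r) - (ip(p, r) + k * ip(q, r))) < 1e-9)           # linearity
print(ip(p, p) > 0)                                                # positivity for p != 0
```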
2. Let (V,\langle\cdot,\cdot\rangle) be an inner product space. Show that the function \|\cdot\|:V\rightarrow\R defined by \|v\|=\sqrt{\langle v,v\rangle}, for any v\in V, is a norm.
- Well-definedness: \forall v\in V,\langle v,v\rangle\ge0\Rightarrow\|v\|=\sqrt{\langle v,v\rangle}\in[0,\infty).
Positive-definiteness: \|v\|=0\Leftrightarrow\sqrt{\langle v,v\rangle}=0\Leftrightarrow\langle v,v\rangle=0\Leftrightarrow v=0.
Absolute homogeneity: \forall v\in V,\lambda\in \mathbb{K},\|\lambda v\|=\sqrt{\langle \lambda v,\lambda v\rangle}=\sqrt{\lambda\overline\lambda\langle v,v\rangle}=\sqrt{|\lambda|^2\langle v,v\rangle}=|\lambda|\|v\|.
Subadditivity: First we prove the Cauchy-Schwarz inequality. \forall u,v\in V,\lambda\in\mathbb{K}, we have 0\leq\langle u+\lambda v,u+\lambda v\rangle=\langle u,u\rangle+\overline\lambda\langle u,v\rangle+\lambda\overline{\langle u,v\rangle}+\lambda\overline\lambda\langle v,v\rangle. When v=0 the inequality |\langle u,v\rangle|^2\leq\langle u,u\rangle\langle v,v\rangle holds trivially because both sides are 0, so assume v\neq0 and take \lambda=-\frac{\langle u,v\rangle}{\langle v,v\rangle}\in\mathbb{K} (this is allowed since \langle u,v\rangle\in\mathbb{K} and \langle v,v\rangle\in\R^+). Then 0\leq\langle u,u\rangle-2\frac{|\langle u,v\rangle|^2}{\langle v,v\rangle}+\frac{|\langle u,v\rangle|^2}{\langle v,v\rangle}=\langle u,u\rangle-\frac{|\langle u,v\rangle|^2}{\langle v,v\rangle}\Rightarrow|\langle u,v\rangle|^2\leq\langle u,u\rangle\langle v,v\rangle. Consequently, \forall u,v\in V,\|u+v\|=\sqrt{\langle u+v,u+v\rangle}=\sqrt{\langle u,u\rangle+\langle v,v\rangle+2\mathrm{Re}\langle u,v\rangle}\leq\sqrt{\|u\|^2+\|v\|^2+2|\langle u,v\rangle|}\leq\sqrt{\|u\|^2+\|v\|^2+2\|u\|\|v\|}=\|u\|+\|v\|.
Q.E.D.
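A numerical spot-check (not a proof) of the two inequalities used above, Cauchy-Schwarz and the triangle inequality, for the standard inner product on \mathbb{C}^5 with random vectors; numpy is assumed.

```python
# Random-vector check of Cauchy-Schwarz and the triangle inequality on C^5.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.normal(size=5) + 1j * rng.normal(size=5)
    v = rng.normal(size=5) + 1j * rng.normal(size=5)
    ip = np.vdot(v, u)                     # <u, v> = sum_i u_i * conj(v_i)
    assert abs(ip)**2 <= np.vdot(u, u).real * np.vdot(v, v).real + 1e-12
    assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v) + 1e-12
print("Cauchy-Schwarz and the triangle inequality hold on all samples")
```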
3. Find a polynomial q(x)\in\R_2[x] such that \int_0^1p(x)\cos(\pi x)\mathrm{d}x=\int_0^1p(x)q(x)\mathrm{d}x for every p(x)\in\R_2[x].
- Consider \langle p,q\rangle=\int_0^1p(x)q(x)\mathrm{d}x on C[0,1] (so the formula also makes sense with \cos(\pi x) in one slot). Notice that \alpha=\{1,x-\frac{1}{2},x^2-x+\frac{1}{6}\} is an orthogonal basis for \R_2[x]. The required q(x) must satisfy \langle p,\cos(\pi x)\rangle=\langle p,q\rangle for all p\in\R_2[x]; by linearity this is equivalent to \langle \alpha_i,\cos(\pi x)\rangle=\langle\alpha_i,q\rangle for each \alpha_i\in\alpha. Expanding q in the orthogonal basis \alpha, q(x)=\sum_{i=1}^3\frac{\langle \alpha_i,q\rangle}{\langle\alpha_i,\alpha_i\rangle}\alpha_i=\sum_{i=1}^3\frac{\langle \alpha_i,\cos(\pi x)\rangle}{\langle\alpha_i,\alpha_i\rangle}\alpha_i=0\cdot1-\frac{24}{\pi^2}(x-\frac{1}{2})+0\cdot(x^2-x+\frac{1}{6})=-\frac{24}{\pi^2}x+\frac{12}{\pi^2}.
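A quick numerical check with scipy that this q(x) reproduces the \cos(\pi x) functional on the basis 1, x, x^2 of \R_2[x]; scipy and numpy are assumed to be available.

```python
# Verify integral_0^1 p(x) cos(pi x) dx = integral_0^1 p(x) q(x) dx for p = 1, x, x^2.
import numpy as np
from scipy.integrate import quad

q = lambda t: (-24.0 * t + 12.0) / np.pi**2
for p in (lambda t: 1.0, lambda t: t, lambda t: t**2):
    lhs, _ = quad(lambda t: p(t) * np.cos(np.pi * t), 0.0, 1.0)
    rhs, _ = quad(lambda t: p(t) * q(t), 0.0, 1.0)
    print(lhs, rhs, np.isclose(lhs, rhs))   # the two integrals agree for each basis vector
```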
4. Suppose v_1,\dots,v_m is a linearly independent list in V. Explain why the orthonormal list produced by the formulas of the Gram-Schmidt procedure is the only orthonormal list e_1,\dots,e_m in V such that \langle v_k,e_k\rangle > 0 and \mathrm{Span}(v_1,\dots,v_k) = \mathrm{Span}(e_1,\dots,e_k) for each k = 1,\dots,m.
- Existence: Since v_1,\dots,v_m is linearly independent, the Gram-Schmidt vectors f_k:=v_k-\sum_{i=1}^{k-1}\frac{\langle v_k,f_i\rangle}{\langle f_i,f_i\rangle}f_i are nonzero, so e_k=\frac{f_k}{\|f_k\|} is defined. By the definition of e_k we can write (e_1,e_2,\dots,e_k)=(v_1,v_2,\dots,v_k)M_{k\times k}, where M_{k\times k} is upper triangular with diagonal entries \frac{1}{\|f_i\|}\neq0. So M_{k\times k} is invertible, which proves that \mathrm{Span}(v_1,\dots,v_k) = \mathrm{Span}(e_1,\dots,e_k) for each k = 1,\dots,m. Moreover, \langle v_k,e_k\rangle=\frac{1}{\|f_k\|}\langle v_k,f_k\rangle=\frac{1}{\|f_k\|}\langle f_k+\sum_{i=1}^{k-1}\frac{\langle v_k,f_i\rangle}{\langle f_i,f_i\rangle}f_i,f_k\rangle=\frac{1}{\|f_k\|}\langle f_k,f_k\rangle>0.
Uniqueness: Suppose e'_1,\dots,e'_m is an orthonormal list such that \langle v_k,e'_k\rangle > 0 and \mathrm{Span}(v_1,\dots,v_k) = \mathrm{Span}(e'_1,\dots,e'_k) for each k = 1,\dots,m. When k=1, \mathrm{Span}(v_1)=\mathrm{Span}(e_1)=\mathrm{Span}(e'_1)\Rightarrow e'_1=\lambda e_1. Since \|e'_1\|=1, |\lambda|=1; and since \R\ni\langle v_1,e'_1\rangle=\overline\lambda\langle v_1,e_1\rangle>0 with \langle v_1,e_1\rangle>0, we get \lambda>0, hence \lambda=1 and e'_1=e_1. Suppose e'_i=e_i for all i\in\{1,\dots,t\}, t<m. When k=t+1, \mathrm{Span}(v_1,\dots,v_{t+1})=\mathrm{Span}(e_1,\dots,e_{t+1})=\mathrm{Span}(e'_1,\dots,e'_{t+1})=\mathrm{Span}(e_1,\dots,e_t,e'_{t+1}), so e'_{t+1}=\lambda e_{t+1}+\sum_{i=1}^t\langle e'_{t+1},e_i\rangle e_i=\lambda e_{t+1}+\sum_{i=1}^t\langle e'_{t+1},e'_i\rangle e_i=\lambda e_{t+1}. As before, \|e'_{t+1}\|=|\lambda|\|e_{t+1}\|=1 gives |\lambda|=1, and \langle v_{t+1},e'_{t+1}\rangle=\overline\lambda\langle v_{t+1},e_{t+1}\rangle>0 gives \lambda>0, so \lambda=1 and e'_{t+1}=e_{t+1}. By induction, e'_i=e_i for every i, which proves uniqueness.
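The following Python sketch illustrates problem 4 numerically on a random independent list: classical Gram-Schmidt gives an orthonormal list with \langle v_k,e_k\rangle>0 and the span property, and it agrees with a QR factorization whose R has positive diagonal, matching the uniqueness statement. numpy is assumed; the matrix sizes are arbitrary.

```python
# Gram-Schmidt on random columns, checked against sign-fixed QR.
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(6, 4))            # columns v_1..v_4, independent with probability 1

E = np.zeros_like(V)
for k in range(V.shape[1]):
    f = V[:, k] - E[:, :k] @ (E[:, :k].T @ V[:, k])   # subtract projections onto e_1..e_{k-1}
    E[:, k] = f / np.linalg.norm(f)

print(np.allclose(E.T @ E, np.eye(4)))                # orthonormal list
print(all(V[:, k] @ E[:, k] > 0 for k in range(4)))   # <v_k, e_k> > 0
print(all(np.allclose(V[:, :k], E[:, :k] @ (E[:, :k].T @ V[:, :k]))
          for k in range(1, 5)))                      # span(v_1..v_k) = span(e_1..e_k)

# QR with the diagonal of R forced positive produces the same orthonormal list.
Q, R = np.linalg.qr(V)
Q = Q * np.sign(np.diag(R))
print(np.allclose(Q, E))
```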
5. Suppose \langle\cdot,\cdot\rangle_1 and \langle\cdot,\cdot\rangle_2 are inner products on V such that \langle u,v\rangle_1 = 0 if and only if \langle u,v\rangle_2 = 0. Prove that there is a positive number c such that \langle u,v\rangle_1 = c\langle u,v\rangle_2 for every u,v \in V.
- If V=\{0\}, any positive c works, so assume V\neq\{0\}. Accepting the axiom of choice, Zorn's lemma gives an orthonormal basis \alpha of (V,\langle\cdot,\cdot\rangle_1). Then \forall i\neq j,\langle \alpha_i,\alpha_j\rangle_1=0\Rightarrow\langle \alpha_i,\alpha_j\rangle_2=0, and \langle\alpha_i,\alpha_i\rangle_1=1 while \langle\alpha_i,\alpha_i\rangle_2=:c_i>0 by positive-definiteness. Notice that \langle\alpha_i+\alpha_j,\alpha_i-\alpha_j\rangle_1=\langle\alpha_i,\alpha_i\rangle_1-\langle\alpha_j,\alpha_j\rangle_1=0\Rightarrow\langle\alpha_i+\alpha_j,\alpha_i-\alpha_j\rangle_2=\langle\alpha_i,\alpha_i\rangle_2-\langle\alpha_j,\alpha_j\rangle_2=0\Rightarrow c_i=c_j. Call the common value c_0>0 and set c:=\frac{1}{c_0}>0. Then \langle\alpha_i,\alpha_j\rangle_1=c\langle\alpha_i,\alpha_j\rangle_2 for all i,j, and since every u,v\in V is a finite linear combination of the \alpha_i and both forms are linear in the first slot and conjugate-linear in the second, \langle u,v\rangle_1 = c\langle u,v\rangle_2 for every u,v \in V.
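A small \R^2 illustration of problem 5 (not part of the proof): writing \langle u,v\rangle_2=u^TAv for a symmetric positive definite A, the two inner products disagree about which vectors are orthogonal unless A is a multiple of the identity, in which case they are exactly proportional. The matrices below are arbitrary choices; numpy is assumed.

```python
# Orthogonality comparison for <u,v>_1 = u.v and <u,v>_2 = u^T A v in R^2.
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 5.0]])     # SPD but not a multiple of I
u, v = np.array([1.0, 1.0]), np.array([1.0, -1.0])
print(u @ v, u @ A @ v)                    # 0.0 and -3.0: orthogonal for <.,.>_1 only

B = 3.0 * np.eye(2)                        # A = cI: the two products are proportional
rng = np.random.default_rng(2)
for _ in range(3):
    u, v = rng.normal(size=2), rng.normal(size=2)
    print(np.isclose(u @ B @ v, 3.0 * (u @ v)))   # True: <u,v>_2 = 3 <u,v>_1
```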
6. Define an operator S:F^2\rightarrow F^2 by S(w,z)=(-z,w).
(a) Find a formula for S^*.
- \langle S^*(a,b),(c,d)\rangle=\langle(a,b),S(c,d)\rangle=\langle(a,b),(-d,c)\rangle=-a\overline d+b\overline c=\langle(b,-a),(c,d)\rangle for every (c,d)\in F^2, so S^*(a,b)=(b,-a), i.e. S^*(w,z)=(z,-w).
(b) Show that S is normal but not self-adjoint.
- Notice that S^*S(w,z)=S^*(-z,w)=(w,z) and SS^*(w,z)=S(z,-w)=(w,z), so SS^*=S^*S=\mathrm{id} and S is normal. However, S^*(w,z)=(z,-w)\neq(-z,w)=S(w,z) in general (e.g. S(1,0)=(0,1) while S^*(1,0)=(0,-1)), so S is not self-adjoint.
(c) Find all eigenvalues of S.
- S(w,z)=\lambda(w,z)\Rightarrow(-z,w)=(\lambda w,\lambda z)\Rightarrow\lambda w=-z,\ \lambda z=w\Rightarrow-\lambda^2w=w. For an eigenvector (w,z)\neq(0,0) we must have w\neq0 (otherwise z=-\lambda w=0), so \lambda^2=-1. Hence the eigenvalues are \lambda=\pm i when F=\mathbb{C}, and S has no eigenvalues when F=\R.
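A matrix check of all three parts of problem 6 over \mathbb{C}, identifying S with its matrix in the standard basis and S^* with the conjugate transpose; numpy is assumed.

```python
# S as a matrix: adjoint, normality, non-self-adjointness, eigenvalues.
import numpy as np

S = np.array([[0, -1], [1, 0]], dtype=complex)   # S(w, z) = (-z, w)
S_adj = S.conj().T                                # matrix of S*, i.e. S*(w, z) = (z, -w)

print(S_adj)                                      # [[0, 1], [-1, 0]]
print(np.allclose(S @ S_adj, S_adj @ S))          # True: S is normal
print(np.allclose(S, S_adj))                      # False: S is not self-adjoint
print(np.linalg.eigvals(S))                       # approximately [i, -i]
```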
7. Suppose T \in \mathcal{L}(V) and a_0 +a_1z +a_2z^2 +\cdots+a_{m−1}z^{m−1} +z^m is the minimal polynomial of T. Find the minimal polynomial of T^*.
- Let f(z):=a_0 +a_1z +a_2z^2 +\cdots+a_{m-1}z^{m-1} +z^m and \overline f(z):=\overline{a_0}+\overline{a_1}z+\overline{a_2}z^2+\cdots+\overline{a_{m-1}}z^{m-1}+z^m. Notice that \langle a_k T^ku,v\rangle=a_k\langle T^{k-1}u,T^*v\rangle=\cdots=\langle u,\overline{a_k}(T^*)^kv\rangle for all u,v\in V, so (a_kT^k)^*=\overline{a_k}(T^*)^k and hence (f(T))^*=\overline f(T^*)=0^*=0. On the other hand, for every nonzero g\in\mathbb{K}[z] with \deg g<m, the minimality of f gives g(T)\neq0; since an operator is zero if and only if its adjoint is zero, \overline g(T^*)=(g(T))^*\neq0. As g\mapsto\overline g is a bijection on the polynomials of degree less than m, no nonzero polynomial of degree less than m annihilates T^*. Combining the two conclusions, the minimal polynomial of T^* is \overline f(z)=\overline{a_0} +\overline{a_1}z +\cdots+\overline{a_{m-1}}z^{m-1} +z^m.
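A numerical illustration of the key identity (g(T))^*=\overline g(T^*) used above, on a random complex matrix. Since computing a minimal polynomial numerically is delicate, the characteristic polynomial (which annihilates T by Cayley-Hamilton) stands in for f here; numpy is assumed.

```python
# Check f(T) = 0  =>  f_bar(T*) = 0, plus the general identity (g(T))* = g_bar(T*).
import numpy as np

def poly_of_matrix(coeffs, A):
    """Evaluate sum_k coeffs[k] * A^k (coeffs[0] is the constant term) by Horner's rule."""
    out = np.zeros_like(A)
    for c in reversed(coeffs):
        out = out @ A + c * np.eye(A.shape[0])
    return out

rng = np.random.default_rng(3)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

f = np.poly(T)[::-1]          # characteristic polynomial coefficients, constant term first
print(np.allclose(poly_of_matrix(f, T), 0))                     # f(T) = 0 (Cayley-Hamilton)
print(np.allclose(poly_of_matrix(np.conj(f), T.conj().T), 0))   # f_bar(T*) = 0

g = np.array([1.0 + 2j, -0.5, 3j, 1.0])                         # arbitrary test polynomial
lhs = poly_of_matrix(g, T).conj().T
rhs = poly_of_matrix(np.conj(g), T.conj().T)
print(np.allclose(lhs, rhs))                                    # (g(T))* = g_bar(T*)
```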
8. Prove or give a counterexample: If T \in \mathcal{L}(V) and there is an orthonormal basis e_1,\dots, e_n of V such that \|Te_k\| = \|T^∗e_k\| for each k = 1,\dots,n, then T is normal.
- Counterexample: Consider V=\R^2 with the standard basis and T:e_1\mapsto e_1+e_2,\ e_2\mapsto -e_1-e_2. Then \langle T^*(a,b),(c,d)\rangle=\langle(a,b),T(c,d)\rangle=\langle(a,b),(c-d,c-d)\rangle=\langle(a+b,-a-b),(c,d)\rangle, so T^*:e_k\mapsto e_1-e_2 for k=1,2. Notice that \|Te_k\|=\sqrt2=\|T^*e_k\| for k=1,2, while TT^*:e_k\mapsto2e_1+2e_2 for k=1,2 and T^*T:e_1\mapsto2e_1-2e_2,\ e_2\mapsto-2e_1+2e_2. Therefore TT^*\neq T^*T, so T is not normal.
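The counterexample can be confirmed by a direct matrix computation; numpy is assumed.

```python
# Matrix check of the counterexample in problem 8.
import numpy as np

T = np.array([[1.0, -1.0], [1.0, -1.0]])       # columns are T e_1 and T e_2
T_adj = T.T                                     # real case: adjoint = transpose

print(np.linalg.norm(T, axis=0))                # ||T e_k||  = [sqrt(2), sqrt(2)]
print(np.linalg.norm(T_adj, axis=0))            # ||T* e_k|| = [sqrt(2), sqrt(2)]
print(T @ T_adj)                                # [[2, 2], [2, 2]]
print(T_adj @ T)                                # [[2, -2], [-2, 2]]
print(np.allclose(T @ T_adj, T_adj @ T))        # False: T is not normal
```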