
Inner Product Spaces

Let $V$ be a $\mathbb{K}$-vector space. An inner product on $V$ is a function $\langle \cdot , \cdot \rangle : V \times V \to \mathbb{K}$ satisfying the following properties:

  1. $\langle \mathbf{v}_1 + \mathbf{v}_2, \mathbf{w} \rangle = \langle \mathbf{v}_1, \mathbf{w} \rangle + \langle \mathbf{v}_2, \mathbf{w} \rangle$ and
     $\langle \mathbf{v}, \mathbf{w}_1 + \mathbf{w}_2 \rangle = \langle \mathbf{v}, \mathbf{w}_1 \rangle + \langle \mathbf{v}, \mathbf{w}_2 \rangle$, for all $\mathbf{v}, \mathbf{v}_1, \mathbf{v}_2, \mathbf{w}, \mathbf{w}_1, \mathbf{w}_2 \in V$.

  2. $\langle \lambda \mathbf{v}, \mathbf{w} \rangle = \lambda \langle \mathbf{v}, \mathbf{w} \rangle$ and
     $\langle \mathbf{v}, \lambda \mathbf{w} \rangle = \overline{\lambda} \, \langle \mathbf{v}, \mathbf{w} \rangle$, for all $\mathbf{v}, \mathbf{w} \in V$, $\lambda \in \mathbb{K}$.

  3. $\langle \mathbf{v}, \mathbf{w} \rangle = \overline{\langle \mathbf{w}, \mathbf{v} \rangle}$, for all $\mathbf{v}, \mathbf{w} \in V$.

  4. $\langle \mathbf{v}, \mathbf{v} \rangle \geq 0$, and $\langle \mathbf{v}, \mathbf{v} \rangle > 0 \iff \mathbf{v} \neq 0$, for all $\mathbf{v} \in V$.

Properties 1 and 2 say the form is linear in the first entry and conjugate-linear in the second; a form with these two properties is called sesquilinear.

Proofs:

\[
\begin{aligned}
\langle \mathbf{v}, \mathbf{w}_1 + \mathbf{w}_2 \rangle &= \overline{\langle \mathbf{w}_1 + \mathbf{w}_2, \mathbf{v} \rangle} = \overline{\langle \mathbf{w}_1, \mathbf{v} \rangle + \langle \mathbf{w}_2, \mathbf{v} \rangle} = \overline{\langle \mathbf{w}_1, \mathbf{v} \rangle} + \overline{\langle \mathbf{w}_2, \mathbf{v} \rangle} = \langle \mathbf{v}, \mathbf{w}_1 \rangle + \langle \mathbf{v}, \mathbf{w}_2 \rangle \\
\langle \mathbf{v}, \lambda \mathbf{w} \rangle &= \overline{\langle \lambda \mathbf{w}, \mathbf{v} \rangle} = \overline{\lambda \langle \mathbf{w}, \mathbf{v} \rangle} = \overline{\lambda} \cdot \overline{\langle \mathbf{w}, \mathbf{v} \rangle} = \overline{\lambda} \cdot \langle \mathbf{v}, \mathbf{w} \rangle \\
\langle 0, \mathbf{v} \rangle &= \langle 0 + 0, \mathbf{v} \rangle = \langle 0, \mathbf{v} \rangle + \langle 0, \mathbf{v} \rangle \implies \langle 0, \mathbf{v} \rangle = 0
\end{aligned}
\]

Example:

For $V = \mathbb{R}^n$, the canonical inner product is defined by:

\[
\langle (x_1, \ldots, x_n), (y_1, \ldots, y_n) \rangle = x_1 y_1 + \cdots + x_n y_n
\]
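As a quick sanity check, the canonical inner product and its additivity in the first entry can be verified numerically. A minimal Python sketch (the helper name `dot` is ours):

```python
def dot(x, y):
    """Canonical inner product on R^n: sum of coordinatewise products."""
    return sum(xi * yi for xi, yi in zip(x, y))

v1, v2, w = [1, 2, 3], [0, 1, -1], [2, 0, 5]

# Additivity in the first entry: <v1 + v2, w> = <v1, w> + <v2, w>
lhs = dot([a + b for a, b in zip(v1, v2)], w)
rhs = dot(v1, w) + dot(v2, w)
print(lhs == rhs)  # True
```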

Let $(V, \langle \cdot , \cdot \rangle)$ be an inner product space. The norm on $V$ is the function
$\| \cdot \| : V \to \mathbb{R}_{\geq 0}$ given by $\| \mathbf{v} \| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$. It satisfies:

  1. $\| \mathbf{v} \| \geq 0$ for all $\mathbf{v} \in V$, and $\| \mathbf{v} \| = 0 \iff \mathbf{v} = 0$.
  2. $\| \lambda \mathbf{v} \| = |\lambda| \cdot \| \mathbf{v} \|$ for all $\lambda \in \mathbb{K}$, $\mathbf{v} \in V$.
  3. Cauchy-Schwarz inequality: $|\langle \mathbf{v}, \mathbf{w} \rangle| \leq \| \mathbf{v} \| \cdot \| \mathbf{w} \|$ for all $\mathbf{v}, \mathbf{w} \in V$.
  4. Triangle inequality: $\| \mathbf{v} + \mathbf{w} \| \leq \| \mathbf{v} + \mathbf{w} \| \leq \| \mathbf{v} \| + \| \mathbf{w} \|$ for all $\mathbf{v}, \mathbf{w} \in V$.
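Both inequalities are easy to check on concrete vectors. A short Python sketch with the canonical inner product (the vectors are arbitrary examples of ours):

```python
import math

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

v, w = [1.0, -2.0, 2.0], [3.0, 0.0, 4.0]

# Cauchy-Schwarz: |<v, w>| <= ||v|| * ||w||  (here |11| <= 3 * 5)
cauchy_schwarz = abs(dot(v, w)) <= norm(v) * norm(w)

# Triangle inequality: ||v + w|| <= ||v|| + ||w||
vw = [a + b for a, b in zip(v, w)]
triangle = norm(vw) <= norm(v) + norm(w)

print(cauchy_schwarz, triangle)  # True True
```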

Orthogonality

Let $(V, \langle \cdot , \cdot \rangle)$ be a Euclidean space (over $\mathbb{R}$):
Two vectors $\mathbf{v}, \mathbf{w} \in V$ are orthogonal if $\langle \mathbf{v}, \mathbf{w} \rangle = 0$.
A subset $S \subseteq V$ is orthogonal if every pair of distinct elements of $S$ is orthogonal, i.e. if
$\langle \mathbf{v}, \mathbf{w} \rangle = 0$ for all $\mathbf{v}, \mathbf{w} \in S$ with $\mathbf{v} \neq \mathbf{w}$.
A subset $S \subseteq V$ is orthonormal if it is orthogonal and $\| \mathbf{v} \| = 1$ for all $\mathbf{v} \in S$.

Pythagorean Theorem

If $\mathbf{v}, \mathbf{w} \in V$ are orthogonal, then $\| \mathbf{v} + \mathbf{w} \|^2 = \| \mathbf{v} \|^2 + \| \mathbf{w} \|^2$.

This follows from:

\[
\begin{aligned}
\| \mathbf{v} + \mathbf{w} \|^2 &= \langle \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w} \rangle \\
&= \langle \mathbf{v}, \mathbf{v} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle + \langle \mathbf{w}, \mathbf{v} \rangle + \langle \mathbf{w}, \mathbf{w} \rangle \\
&= \langle \mathbf{v}, \mathbf{v} \rangle + 0 + 0 + \langle \mathbf{w}, \mathbf{w} \rangle \\
&= \| \mathbf{v} \|^2 + \| \mathbf{w} \|^2
\end{aligned}
\]
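The identity can be illustrated on a concrete orthogonal pair. A minimal Python sketch using the canonical inner product (the example vectors are ours):

```python
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

v, w = [3.0, 4.0, 0.0], [0.0, 0.0, 5.0]
assert dot(v, w) == 0.0  # v and w are orthogonal

vw = [a + b for a, b in zip(v, w)]
lhs = dot(vw, vw)            # ||v + w||^2
rhs = dot(v, v) + dot(w, w)  # ||v||^2 + ||w||^2
print(lhs, rhs)  # 50.0 50.0
```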

Gram-Schmidt Orthogonalization

Let $(V, \langle \cdot , \cdot \rangle)$ be a Euclidean space and let $B = \{ \mathbf{v}_1, \ldots, \mathbf{v}_n \}$ be a basis of $V$.
There exists an orthonormal basis $\tilde{B} = \{ \mathbf{w}_1, \ldots, \mathbf{w}_n \}$ of $V$ such that:

\[
\langle \mathbf{v}_1, \ldots, \mathbf{v}_k \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_k \rangle, \quad k = 1, \ldots, n
\]

(here $\langle \cdots \rangle$ applied to a list of vectors denotes their span).
Recursively, $\mathbf{w}_k = \dfrac{\mathbf{w}_k'}{\| \mathbf{w}_k' \|}$, where:

\[
\mathbf{w}_k' = \mathbf{v}_k - \sum_{j=1}^{k-1} \frac{\langle \mathbf{v}_k, \mathbf{w}_j' \rangle}{\| \mathbf{w}_j' \|^2} \, \mathbf{w}_j'
\]
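The recursion translates directly into code. A sketch for $\mathbb{R}^n$ with the canonical inner product (the function name `gram_schmidt` is ours; it normalizes each $\mathbf{w}_k'$ as it goes, so it returns the orthonormal basis and assumes the input vectors are linearly independent):

```python
import math

def gram_schmidt(basis):
    """Orthonormalize a list of R^n vectors via Gram-Schmidt."""
    def dot(x, y):
        return sum(xi * yi for xi, yi in zip(x, y))

    ortho = []
    for v in basis:
        # w'_k = v_k - sum_j <v_k, w_j> w_j  (the w_j are already orthonormal)
        w = list(v)
        for u in ortho:
            c = dot(v, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        # Normalize; ||w'|| > 0 because the input is linearly independent.
        n = math.sqrt(dot(w, w))
        ortho.append([wi / n for wi in w])
    return ortho
```

For example, `gram_schmidt([[1, 1, 0], [0, 1, 1], [1, 0, 1]])` reproduces the orthonormal basis computed in the exercise below.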

Proof:

We construct the vectors $\mathbf{w}_k'$ recursively as in the statement and prove that
$\langle \mathbf{v}_1, \ldots, \mathbf{v}_k \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_k \rangle$. Since the resulting set is orthogonal (and, after normalizing, orthonormal),
it is in particular linearly independent, $\therefore \tilde{B} = \{ \mathbf{w}_1, \ldots, \mathbf{w}_n \}$ is a basis of $V$.

For $k = 1$:

\[
\mathbf{w}_1' = \mathbf{v}_1, \quad \mathbf{w}_1 = \frac{\mathbf{w}_1'}{\| \mathbf{w}_1' \|} \implies \langle \mathbf{v}_1 \rangle = \langle \mathbf{w}_1 \rangle
\]

Continuing with the recursive step:

Assume $\langle \mathbf{v}_1, \ldots, \mathbf{v}_k \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_k \rangle$ with $\{ \mathbf{w}_1, \ldots, \mathbf{w}_k \}$ orthonormal.

We define:

\[
\begin{aligned}
\mathbf{w}_{k+1}' &= \mathbf{v}_{k+1} - \sum_{j=1}^{k} \frac{\langle \mathbf{v}_{k+1}, \mathbf{w}_j' \rangle}{\| \mathbf{w}_j' \|^2} \, \mathbf{w}_j' \\
&= \mathbf{v}_{k+1} - \sum_{j=1}^{k} \left\langle \mathbf{v}_{k+1}, \frac{\mathbf{w}_j'}{\| \mathbf{w}_j' \|} \right\rangle \frac{\mathbf{w}_j'}{\| \mathbf{w}_j' \|} \\
&= \mathbf{v}_{k+1} - \sum_{j=1}^{k} \langle \mathbf{v}_{k+1}, \mathbf{w}_j \rangle \, \mathbf{w}_j
\end{aligned}
\]

Then, for any $l \in \{1, \ldots, k\}$:

\[
\begin{aligned}
\langle \mathbf{w}_{k+1}', \mathbf{w}_l \rangle &= \left\langle \mathbf{v}_{k+1} - \sum_{j=1}^{k} \langle \mathbf{v}_{k+1}, \mathbf{w}_j \rangle \, \mathbf{w}_j, \; \mathbf{w}_l \right\rangle \\
&= \langle \mathbf{v}_{k+1}, \mathbf{w}_l \rangle - \sum_{j=1}^{k} \langle \mathbf{v}_{k+1}, \mathbf{w}_j \rangle \cdot \underbrace{\langle \mathbf{w}_j, \mathbf{w}_l \rangle}_{\delta_{jl}} \\
&= \langle \mathbf{v}_{k+1}, \mathbf{w}_l \rangle - \langle \mathbf{v}_{k+1}, \mathbf{w}_l \rangle \\
&= 0
\end{aligned}
\]

$\therefore \{ \mathbf{w}_1, \ldots, \mathbf{w}_k, \mathbf{w}_{k+1}' \}$ is orthogonal $\implies \{ \mathbf{w}_1, \ldots, \mathbf{w}_k, \mathbf{w}_{k+1} \}$ is as well, and moreover it is orthonormal.

We want to show that $\langle \mathbf{v}_1, \ldots, \mathbf{v}_{k+1} \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_{k+1} \rangle$.
We know that $\langle \mathbf{v}_1, \ldots, \mathbf{v}_k \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_k \rangle$ and that $\langle \mathbf{w}_1, \ldots, \mathbf{w}_{k+1} \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_k, \mathbf{w}_{k+1}' \rangle$.

  • Since $\mathbf{v}_{k+1} = \mathbf{w}_{k+1}' + \displaystyle{\sum_{j=1}^{k}} \langle \mathbf{v}_{k+1}, \mathbf{w}_j \rangle \, \mathbf{w}_j \in \langle \mathbf{w}_1, \ldots, \mathbf{w}_k, \mathbf{w}_{k+1}' \rangle$
    and $\mathbf{v}_i \in \langle \mathbf{v}_1, \ldots, \mathbf{v}_k \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_k \rangle \subseteq \langle \mathbf{w}_1, \ldots, \mathbf{w}_k, \mathbf{w}_{k+1}' \rangle$ for all $i = 1, \ldots, k$,
    $\implies \langle \mathbf{v}_1, \ldots, \mathbf{v}_{k+1} \rangle \subseteq \langle \mathbf{w}_1, \ldots, \mathbf{w}_k, \mathbf{w}_{k+1}' \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_{k+1} \rangle$.

  • Conversely, solving for $\mathbf{w}_{k+1}'$:

\[
\begin{aligned}
\mathbf{w}_{k+1}' = \mathbf{v}_{k+1} - \sum_{j=1}^{k} \langle \mathbf{v}_{k+1}, \mathbf{w}_j \rangle \, \mathbf{w}_j \in \; & \langle \mathbf{v}_{k+1} \rangle + \langle \mathbf{w}_1, \ldots, \mathbf{w}_k \rangle \\
= \; & \langle \mathbf{v}_{k+1} \rangle + \langle \mathbf{v}_1, \ldots, \mathbf{v}_k \rangle \\
= \; & \langle \mathbf{v}_1, \ldots, \mathbf{v}_{k+1} \rangle
\end{aligned}
\]

Moreover, for all $i = 1, \ldots, k$:

\[
\begin{aligned}
&\mathbf{w}_i \in \langle \mathbf{w}_1, \ldots, \mathbf{w}_k \rangle = \langle \mathbf{v}_1, \ldots, \mathbf{v}_k \rangle \subseteq \langle \mathbf{v}_1, \ldots, \mathbf{v}_{k+1} \rangle \\
&\implies \langle \mathbf{w}_1, \ldots, \mathbf{w}_k, \mathbf{w}_{k+1}' \rangle \subseteq \langle \mathbf{v}_1, \ldots, \mathbf{v}_{k+1} \rangle \\
&\implies \langle \mathbf{w}_1, \ldots, \mathbf{w}_{k+1} \rangle \subseteq \langle \mathbf{v}_1, \ldots, \mathbf{v}_{k+1} \rangle
\end{aligned}
\]

Therefore, $\langle \mathbf{v}_1, \ldots, \mathbf{v}_{k+1} \rangle = \langle \mathbf{w}_1, \ldots, \mathbf{w}_{k+1} \rangle$.


Orthogonal Complement

Let $(V, \langle \cdot , \cdot \rangle)$ be a Euclidean space of finite dimension $n$ and let $S \subseteq V$ be a subset.
The orthogonal complement of $S$ is the set:

\[
S^{\perp} = \{ \mathbf{v} \in V \mid \langle \mathbf{v}, \mathbf{w} \rangle = 0, \; \forall \mathbf{w} \in S \}
\]

Some properties of $S^{\perp}$:

  1. $S^{\perp}$ is a subspace of $V$.
  2. $V = S \oplus S^{\perp}$ if $S$ is a subspace of $V$.
  3. $(S^{\perp})^{\perp} = S$ if $S$ is a subspace of $V$.
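When $S$ is given by a spanning set, membership in $S^{\perp}$ reduces (by linearity) to checking finitely many inner products. A minimal Python sketch (the helper `in_orthogonal_complement` is ours; a tolerance handles floating point):

```python
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def in_orthogonal_complement(v, spanning_set, tol=1e-12):
    """v lies in S-perp iff <v, w> = 0 for every w spanning S."""
    return all(abs(dot(v, w)) <= tol for w in spanning_set)

S = [[1.0, 1.0, 0.0]]  # S is the line spanned by (1, 1, 0) in R^3
print(in_orthogonal_complement([1.0, -1.0, 5.0], S))  # True
print(in_orthogonal_complement([1.0, 0.0, 0.0], S))   # False
```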

Solved Exercises - Orthogonality

1.

Apply the Gram-Schmidt process to the ordered basis $\{(1, 1, 0), (0, 1, 1), (1, 0, 1)\}$ to obtain an orthonormal basis of $\mathbb{R}^3$.

The first step is to obtain an orthogonal basis. To that end we apply the Gram-Schmidt process:

\[
\begin{aligned}
\mathbf{w}_1' &= (1, 1, 0) \\
\mathbf{w}_2' &= (0, 1, 1) - \frac{\langle (0,1,1), (1,1,0) \rangle}{\| (1,1,0) \|^2} (1, 1, 0) = (0, 1, 1) - \frac{1}{2} (1, 1, 0) = \left( -\tfrac{1}{2}, \tfrac{1}{2}, 1 \right) \\
\mathbf{w}_3' &= (1, 0, 1) - \frac{\langle (1,0,1), (1,1,0) \rangle}{\| (1,1,0) \|^2} (1, 1, 0) - \frac{\left\langle (1,0,1), \left( -\tfrac{1}{2}, \tfrac{1}{2}, 1 \right) \right\rangle}{\left\| \left( -\tfrac{1}{2}, \tfrac{1}{2}, 1 \right) \right\|^2} \left( -\tfrac{1}{2}, \tfrac{1}{2}, 1 \right) \\
&= (1, 0, 1) - \left( \tfrac{1}{2}, \tfrac{1}{2}, 0 \right) - \left( -\tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{3} \right) \\
&= \left( \tfrac{2}{3}, -\tfrac{2}{3}, \tfrac{2}{3} \right)
\end{aligned}
\]

The orthogonal basis is:

\[
\left\{ (1, 1, 0), \left( -\tfrac{1}{2}, \tfrac{1}{2}, 1 \right), \left( \tfrac{2}{3}, -\tfrac{2}{3}, \tfrac{2}{3} \right) \right\}
\]

Next, we normalize the vectors to obtain the orthonormal basis (note $\left\| \left( \tfrac{2}{3}, -\tfrac{2}{3}, \tfrac{2}{3} \right) \right\| = \tfrac{2}{\sqrt{3}}$, so the third scaling factor is $\tfrac{\sqrt{3}}{2}$):

\[
\begin{aligned}
\mathbf{w}_1 &= \frac{1}{\sqrt{2}} (1, 1, 0) = \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0 \right) \\
\mathbf{w}_2 &= \frac{1}{\sqrt{6}} (-1, 1, 2) = \left( -\tfrac{1}{\sqrt{6}}, \tfrac{1}{\sqrt{6}}, \tfrac{2}{\sqrt{6}} \right) \\
\mathbf{w}_3 &= \frac{\sqrt{3}}{2} \left( \tfrac{2}{3}, -\tfrac{2}{3}, \tfrac{2}{3} \right) = \left( \tfrac{1}{\sqrt{3}}, -\tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}} \right)
\end{aligned}
\]

The resulting orthonormal basis is:

\[
\left\{ \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0 \right), \left( -\tfrac{1}{\sqrt{6}}, \tfrac{1}{\sqrt{6}}, \tfrac{2}{\sqrt{6}} \right), \left( \tfrac{1}{\sqrt{3}}, -\tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}} \right) \right\}
\]
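As a final check, the basis can be verified to be orthonormal numerically: $\langle \mathbf{w}_i, \mathbf{w}_j \rangle = \delta_{ij}$ up to floating-point error. A short Python sketch:

```python
import math

B = [
    (1 / math.sqrt(2),  1 / math.sqrt(2), 0.0),
    (-1 / math.sqrt(6), 1 / math.sqrt(6), 2 / math.sqrt(6)),
    (1 / math.sqrt(3), -1 / math.sqrt(3), 1 / math.sqrt(3)),
]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

# <w_i, w_j> should be 1 when i == j (unit norm) and 0 otherwise (orthogonal)
orthonormal = all(
    abs(dot(B[i], B[j]) - (1.0 if i == j else 0.0)) < 1e-12
    for i in range(3) for j in range(3)
)
print(orthonormal)  # True
```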