
Commit

new files
crichton-ogle committed Jun 19, 2019
1 parent a5ebbb4 commit a37b4c3
Showing 5 changed files with 77 additions and 8 deletions.
39 changes: 37 additions & 2 deletions linearTransformations/changeOfBasis.tex
@@ -31,9 +31,44 @@
\begin{exercise} Verify these three properties. (Notice that the second and third properties are closely related, in light of the first. Note also that the third property shows that base transition matrices are always non-singular.)
\end{exercise}

For vector spaces other than $\mathbb R^n$ (such as the function space $F[a,b]$ we looked at earlier) where the vectors do not naturally look like column vectors, we always use the above notation when working with their coordinate representations.
\vskip.1in
However, in the case of $\mathbb R^n$, the vectors were {\it defined} as column vectors even before discussing coordinate representations. So what should we do here?
\vskip.1in
The answer is that the vectors in $\mathbb R^n$ are, by convention, identified with their coordinate representations in the {\it standard basis} ${\bf e} = \{{\bf e}_1,\dots,{\bf e}_n\}$ for $\mathbb R^n$. So, for example, in $\mathbb R^3$ when we wrote ${\bf v} = \begin{bmatrix} 2\\3\\-1\end{bmatrix}$ what we really meant was that $\bf v$ is the vector in $\mathbb R^3$ with coordinate representation in the standard basis given by ${}_{\bf e}{\bf v} = \begin{bmatrix} 2\\3\\-1\end{bmatrix}$.
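Written out, this says that
\[
{\bf v} = 2{\bf e}_1 + 3{\bf e}_2 + (-1){\bf e}_3,
\]
so the entries of ${}_{\bf e}{\bf v}$ are exactly the coefficients appearing in the expansion of $\bf v$ in the standard basis.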
\vskip.1in
Because it is very important to keep track of bases whenever determining base transition matrices and computing new coordinate representations, when doing so we will {\it always} use base-subscript notation when working with coordinate vectors, even when the vectors are in $\mathbb R^n$ and are being represented in the standard basis.

\begin{example} Suppose $S = \{{\bf u}_1, {\bf u}_2, {\bf u}_3\}$ is a set of three vectors in $\mathbb R^3$ with
\[
{}_{\bf e}{\bf u}_1 = \begin{bmatrix} 1\\0\\2\end{bmatrix},\quad
{}_{\bf e}{\bf u}_2 = \begin{bmatrix} -2\\3\\1\end{bmatrix},\quad
{}_{\bf e}{\bf u}_3 = \begin{bmatrix} 0\\1\\-1\end{bmatrix}
\]
Suppose we want to
\begin{itemize}
\item show that the set of vectors $S$ is a basis for $\mathbb R^3$,
\item compute the base transition matrix ${}_{S}T_{\bf e}$,
\item for $\bf v$ in $\mathbb R^3$ with ${}_{\bf e}{\bf v} = \begin{bmatrix} 2\\3\\-1\end{bmatrix}$, compute the coordinate representation of $\bf v$ with respect to the basis $S$.
\end{itemize}
To perform the first step, note that since $S$ has the right number of vectors to be a basis for $\mathbb R^3$, it suffices to show that the vectors are linearly independent. We know how to do this: form the matrix $A = [{}_{\bf e}{\bf u}_1\ {}_{\bf e}{\bf u}_2\ {}_{\bf e}{\bf u}_3]$ and show that its columns are linearly independent by showing $rref(A) = Id^{3\times 3}$ (exercise: do this, using MATLAB or Octave). This verifies that $S$ is a basis.
\vskip.1in
Next, we look at the matrix $A$ itself. The columns of $A$ are the coordinate representations of the vectors in $S$ with respect to the standard basis $\bf e$. But $S$ is a basis, so $A$ must be a base transition matrix; we know it is either ${}_S T_{\bf e}$ or ${}_{\bf e}T_S$. But which one?
\vskip.1in
This is where the notation helps us. The columns of $A$ are coordinate vectors with ``$\bf e$'' in the lower left, and the rule is that this must match the lower-left subscript in the notation for the transition matrix. So:
\[
{}_{\bf e}T_S = A
\]
To complete the second step, we then compute
\[
{}_S T_{\bf e} = \left({}_{\bf e} T_S\right)^{-1}
\]
Finally, we can use this to compute ${}_S {\bf v}$ as
\[
{}_S {\bf v} = {}_S T_{\bf e}* {}_{\bf e}{\bf v}
\]
\end{example}
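For concreteness, here is one way the computations in this example might be carried out in Octave or MATLAB (as suggested in the exercise above); the variable names \verb|eTS|, \verb|STe|, \verb|ev|, \verb|Sv| are ad hoc stand-ins for ${}_{\bf e}T_S$, ${}_ST_{\bf e}$, ${}_{\bf e}{\bf v}$ and ${}_S{\bf v}$.
\begin{verbatim}
% Columns of A are the standard-basis coordinate vectors of u1, u2, u3
A = [1 -2 0; 0 3 1; 2 1 -1];
rref(A)               % the 3x3 identity, so the columns are independent
eTS = A;              % A is the base transition matrix eTS
STe = inv(eTS);       % the base transition matrix STe is its inverse
ev  = [2; 3; -1];     % coordinates of v in the standard basis e
Sv  = STe * ev        % coordinates of v with respect to S
\end{verbatim}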
\vskip.2in

\begin{exercise} Let $S_1,S_2$ be two bases for $V$, and $L:V\to V$ a linear transformation from $V$ to itself. We can consider the representations ${}_{S_1}L_{S_1}$ and ${}_{S_2}L_{S_2}$ of $L$ with respect to the bases $S_1$ and $S_2$. Using the above identities, show that
\[
{}_{S_1}L_{S_1} = A * {}_{S_2}L_{S_2} * A^{-1}
\]
where $A = {}_{S_1}T_{S_2}$.
\end{exercise}

Note: Square matrices $B,C$ which satisfy the equality $B = A*C*A^{-1}$ for some non-singular matrix $A$ are called {\it similar}. This is an important relation between square matrices, and it plays a prominent role in the theory of eigenvalues and eigenvectors, as we will see later on.
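As a quick numerical illustration (an Octave/MATLAB sketch with arbitrarily chosen matrices, offered here only as a preview), one can form a matrix similar to a given $C$ and observe that the two share the same eigenvalues:
\begin{verbatim}
A = [1 2; 0 1];               % any non-singular matrix
C = [4 1; 2 3];               % any square matrix of the same size
B = A * C * inv(A);           % B is similar to C
sort(eig(B)) - sort(eig(C))   % numerically zero: same eigenvalues
\end{verbatim}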
\vskip.5in

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
20 changes: 18 additions & 2 deletions linearTransformations/spacesOfMaps.tex
@@ -26,10 +26,26 @@
transformation.

\begin{theorem} Suppose $V$ and $W$ are vector spaces. The collection
$Lin(V,W)$ of linear transformations from $V$ to $W$ forms a vector space under the above operations.
\end{theorem}

In fact, using bases we can say something more. If $S$ is a basis for an $n$-dimensional vector space $V$, and $T$ a basis for an $m$-dimensional vector space $W$, then, as we have seen above, this data can be used to associate to each linear transformation $L:V\to W$ an $m\times n$ matrix
\[
L\mapsto \phi_{S,T}(L) := {}_T L_S\in \mathbb R^{m\times n}
\]
which is essentially the coordinate representation of $L$ with respect to the pair of bases $S,T$. It is easily seen that under the above association
\begin{align*}
&\phi_{S,T}(L_1 + L_2) = {}_T (L_1)_S + {}_T (L_2)_S\\
& \phi_{S,T}(\alpha L) = \alpha ({}_T L_S)
\end{align*}

In other words,

\begin{theorem}
Given a basis $S$ for an $n$-dimensional vector space $V$ and a basis $T$ for an $m$-dimensional vector space $W$, the map $\phi_{S,T}: Lin(V,W)\to \mathbb R^{m\times n}$ is an isomorphism between the vector space of linear transformations from $V$ to $W$ and the vector space of $m\times n$ matrices with entries in $\mathbb R$.
\end{theorem}
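To make the correspondence concrete in the simplest case, here is a short Octave/MATLAB sketch (the two maps below are our own choices) for $V = W = \mathbb R^2$ with $S = T$ the standard basis: the $j$-th column of the matrix representing $L$ is $L$ applied to the $j$-th standard basis vector, and the matrix representing $L_1 + L_2$ is the sum of the matrices representing $L_1$ and $L_2$.
\begin{verbatim}
L1 = @(v) [v(1) + 2*v(2); -v(2)];   % a linear map R^2 -> R^2
L2 = @(v) [3*v(1); v(1) + v(2)];    % another linear map R^2 -> R^2
e1 = [1; 0];  e2 = [0; 1];
phi = @(L) [L(e1), L(e2)];          % matrix of L in the standard bases
Lsum = @(v) L1(v) + L2(v);          % the sum (L1 + L2)(v) = L1(v) + L2(v)
phi(Lsum) - (phi(L1) + phi(L2))     % the zero matrix: phi respects addition
\end{verbatim}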


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
23 changes: 19 additions & 4 deletions vectorSpaces/definition.tex
@@ -11,19 +11,34 @@
\maketitle

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Generalizing the setup for $\mathbb R^n$, we have

\begin{definition} A {\it vector space} is a set $V$ equipped with two operations - vector addition \lq\lq$+$\rq\rq\ and scalar multiplication \lq\lq$\cdot$\rq\rq - which satisfy the two closure axioms C1, C2 as well as the eight vector space axioms A1 - A8:
\begin{description}
\item[C1] (Closure under vector addition) Given ${\bf v}, {\bf w}\in V$, ${\bf v} + {\bf w}\in V$.
\item[C2] (Closure under scalar multiplication) Given ${\bf v}\in V$ and a scalar $\alpha$, $\alpha{\bf v}\in V$.
\end{description}
\vskip.2in

For $\bf u$, $\bf v$, $\bf w$ arbitrary vectors in $V$, and $\alpha,\beta$ arbitrary scalars in $\mathbb R$,

\begin{description}
\item[A1] (Commutativity of addition) ${\bf v} + {\bf w} = {\bf w} + {\bf v}$.
\item[A2] (Associativity of addition) $({\bf u} + {\bf v}) + {\bf w} = {\bf u} + ({\bf v} + {\bf w})$.
\item[A3] (Existence of a zero vector) There is a vector ${\bf z}\in V$ with ${\bf z} + {\bf v} = {\bf v} + {\bf z} = {\bf v}$.
\item[A4] (Existence of additive inverses) For each $\bf v$, there is a vector $-{\bf v}\in V$ with ${\bf v} + (-{\bf v}) = (-{\bf v}) + {\bf v} = {\bf z}$.
\item[A5] (Distributivity of scalar multiplication over vector addition) $\alpha({\bf v} + {\bf w}) = \alpha{\bf v} + \alpha{\bf w}$.
\item[A6] (Distributivity of scalar addition over scalar multiplication) $(\alpha + \beta){\bf v} = \alpha{\bf v} + \beta{\bf v}$.
\item[A7] (Associativity of scalar multiplication) $(\alpha \beta){\bf v} = \alpha(\beta {\bf v})$.
\item[A8] (Scalar multiplication with 1 is the identity) $1{\bf v} = {\bf v}$.
\end{description}
\vskip.2in
\end{definition}

In this way, a vector space should properly be represented as a triple $(V,+,\cdot)$, to emphasize the fact that the algebraic structure depends not just on the underlying set of vectors, but on the choice of operations representing addition and scalar multiplication.
\vskip.2in

\begin{example}\label{example:Rmn} Let $V = \mathbb R^{m\times n}$, the space of $m\times n$ matrices, with addition given by matrix addition and scalar multiplication as previously defined for matrices. Then $(\mathbb R^{m\times n},+,\cdot)$ is a vector space. Again, as with $\mathbb R^n$, the closure axioms are seen to be satisfied as a direct consequence of the definitions, while the other properties follow from Theorem \ref{thm:matalg} together with direct construction of the $m\times n$ \lq\lq zero vector\rq\rq\ $0^{m\times n}$, as well as additive inverses as indicated in [A4].
\end{example}
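Although the axioms are established by proof (via Theorem \ref{thm:matalg}), a quick numerical spot-check in Octave or MATLAB can help build intuition; the sketch below (with matrices and scalars chosen at random purely for illustration) tests axioms [A5] and [A6] in $\mathbb R^{2\times 3}$.
\begin{verbatim}
v = randn(2,3);  w = randn(2,3);            % two "vectors" in R^{2x3}
alpha = 2;  beta = -3;                      % two scalars
norm(alpha*(v + w) - (alpha*v + alpha*w))   % [A5]: zero up to rounding
norm((alpha + beta)*v - (alpha*v + beta*v)) % [A6]: zero up to rounding
\end{verbatim}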

Before proceeding to other examples, we need to discuss an important point regarding how theorems about vector spaces are typically proven. In any system of mathematics, one operates with a certain set of assumptions, called {\it axioms}, together with various results previously proven (possibly in other areas of mathematics) and which one is allowed to assume true without further verification.
@@ -131,7 +146,7 @@
\end{proof}
\vskip.2in

\begin{exercise} Show that $(\mathbb R^{m\times n}, +,\cdot)$ is a vector space, where \lq\lq +\rq\rq\ denotes matrix addition, and \lq\lq$\cdot$\rq\rq\ denotes scalar multiplication for matrices (in other words, verify the claim in Example \ref{example:Rmn}. Hint: use the results of Theorem \ref{thm:matalg}).
\end{exercise}
\vskip.3in

1 change: 1 addition & 0 deletions vectorSpaces/linearCombinations.tex
@@ -53,5 +53,6 @@
\end{proof}
\vskip.3in

Given a collection of vectors $S = \{{\bf v}_1,\dots, {\bf v}_n\}$, a fundamental question one can ask is whether the collection (or set) $S$ is {\it linearly independent}. One of our main goals in the following sections will be to develop numerical methods for answering this question.
\end{document}

2 changes: 2 additions & 0 deletions vectorSpaces/subspaces.tex
@@ -42,4 +42,6 @@
Axioms A3 and A4 then follow immediately for $W$, by virtue of the fact that $W$ is closed under scalar multiplication.
\end{proof}

We will explore several different methods for constructing subspaces. However, we first need to revisit the operation of forming linear combinations in the more general setting of vector spaces.

\end{document}
