Shortcut to this page: ntrllog.netlify.app/diff_alg
Notes provided by Professor Guershon Harel (UCSD)
We have the following setup:
`{(y'(t) = Ay(t)),(y(0)=C):}`
where `A` is an `nxxn` matrix and `C` is an `nxx1` vector, and we want to find a solution to that system, i.e., we want to solve for `y(t)`. Before we begin, we need to introduce some concepts from calculus.
The derivative of `e^(alphax)` with respect to `x` is `alpha*e^(alphax)`.
Notice that the derivative looks very similar to the original function. This is a special case where taking the derivative of something results in a constant times itself.
`e^x` is defined to be
`e^color(red)(x) = sum_(i=0)^(oo)1/(i!)color(red)(x)^i`
(which expands to `1/(0!)x^0 + 1/(1!)x^1 + 1/(2!)x^2 + 1/(3!)x^3 + ...`, though we won't need the expanded form)
Using that definition, we can plug in anything for `x` to get different results.
If we plug in `2x` for `x`, then
`e^(color(red)(2x)) = sum_(i=0)^(oo)1/(i!)(color(red)(2x))^i`
`= sum_(i=0)^(oo)1/(i!)2^ix^i`
(we can distribute the exponent to each factor inside the parentheses)
If we plug in `t` for `x`, then
`e^(color(red)(t)) = sum_(i=0)^(oo)1/(i!)color(red)(t)^i`
If we plug in `lambdat` for `x`, then
`e^(color(red)(lambdat)) = sum_(i=0)^(oo)1/(i!)(color(red)(lambdat))^i`
`= sum_(i=0)^(oo)1/(i!)lambda^it^i`
(we can distribute the exponent to each factor inside the parentheses)
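Since these notes eventually lean on software for computations (MATLAB gets a mention later), here's a minimal Python sketch of the series definition; `exp_series` is just a made-up name for illustration, and `math.exp` is Python's built-in exponential:

```python
import math

def exp_series(x, terms=30):
    # Partial sum of e^x = sum_(i=0)^(oo) x^i / i!
    return sum(x**i / math.factorial(i) for i in range(terms))

# "Plugging in lambda*t for x" just means evaluating the series at that number.
lam, t = 0.5, 2.0
print(exp_series(lam * t))  # ~2.718281828... (this is e^(0.5*2) = e^1)
print(math.exp(lam * t))    # the built-in exponential agrees
```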
Going back to our system
`{(y'(t) = Ay(t)),(y(0)=C):}`
we notice that the derivative of `y(t)`, denoted by `y'(t)`, is a constant (the matrix `A`) times `y(t)`, just like the derivative of `e^(alphat)` is `alpha` times `e^(alphat)`. This suggests that `y(t) = Ce^(At)` (refer back to the first box in the calculus stuff).
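As a quick numerical sanity check of this suggestion (a sketch assuming SciPy is available; the matrix `A` and vector `C` below are arbitrary made-up values), we can approximate `y'(t)` with a finite difference and compare it against `Ay(t)`:

```python
import numpy as np
from scipy.linalg import expm  # the matrix exponential e^(At)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # arbitrary example matrix
C = np.array([1.0, 0.0])                  # initial condition y(0) = C

def y(t):
    # e^(At) is an n x n matrix, so it multiplies the vector C on the left
    return expm(A * t) @ C

t, h = 0.7, 1e-6
print((y(t + h) - y(t - h)) / (2 * h))  # central-difference estimate of y'(t)...
print(A @ y(t))                         # ...matches A y(t), as the system requires
print(y(0.0))                           # and y(0) = C
```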
We can simplify `y(t)` by introducing eigenvalues and eigenvectors.
So suppose `C` is an eigenvector of `A`, i.e., `AC = lambdaC` for some scalar `lambda`.
Recall that
`e^(color(red)(x)) = sum_(i=0)^(oo)1/(i!)color(red)(x)^i`
So we can plug in the matrix `At` as well (this is in fact how the matrix exponential is defined):
`e^(color(red)(At)) = sum_(i=0)^(oo)1/(i!)(color(red)(At))^i`
We can rewrite `y(t) = Ce^(At)` (note that `e^(At)` is an `nxxn` matrix, so it has to multiply the vector `C` on the left) to get
`y(t) = e^(color(red)(At))C`
`= sum_(i=0)^(oo)1/(i!)(color(red)(At))^iC`
`= sum_(i=0)^(oo)1/(i!)A^it^iC`
`= sum_(i=0)^(oo)1/(i!)t^icolor(blue)(A^i)C`
There's a result in linear algebra that `color(blue)(A^iC = lambda^iC)` whenever `C` is an eigenvector of `A` with eigenvalue `lambda`.
The proof isn't needed here, but I'll provide it anyway.
Strictly speaking, this isn't a full proof because it only covers a specific case (`i=2`), but the same argument generalizes (by induction) to any `i`.
`A^2C = A AC`
`= A(AC)`
`= A(lambdaC)`
`= AlambdaC`
`= lambdaAC`
`= lambda(AC)`
`= lambda(lambdaC)`
`= lambdalambdaC`
`= lambda^2C`
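Before continuing, here's a quick numeric check of this fact (the matrix and eigenvector below are made-up example values):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])  # upper triangular: eigenvalues 2 and 3
C = np.array([1.0, 0.0])                # eigenvector with AC = 2C, so lambda = 2
lam = 2.0

print(np.linalg.matrix_power(A, 5) @ C)  # A^5 C
print(lam**5 * C)                        # equals lambda^5 C = [32, 0]
```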
So we can continue rewriting to get
`= sum_(i=0)^(oo)1/(i!)t^icolor(blue)(lambda^i)C`
`= sum_(i=0)^(oo)1/(i!)lambda^it^iC`
`= sum_(i=0)^(oo)1/(i!)(color(red)(lambdat))^iC`
`= e^(color(red)(lambdat))C`
(refer back to the last example in the second box in the calculus stuff)
So we found that `y(t) = Ce^(lambdat)`. This is an easily computable solution because we can just plug that into good ol' MATLAB. However, to get this result, we made a very important assumption: `C` is an eigenvector of `A`.
`y(t) = Ce^(lambdat)` is the easily computable solution if `C` is an eigenvector of `A`.
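Numerically, the claim looks like this (a sketch assuming SciPy, reusing the made-up eigen-pair from before):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
C = np.array([1.0, 0.0])  # eigenvector of A with lambda = 2
lam, t = 2.0, 0.5

print(expm(A * t) @ C)      # the general solution e^(At) C
print(np.exp(lam * t) * C)  # the easily computable one: C e^(lambda t)
```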
Does that mean that there's no hope if `C` is not an eigenvector of `A`? Actually, no.
If `C` is a linear combination of the eigenvectors of `A`, then there is still an easily computable solution.
Being a linear combination of the eigenvectors of `A` means that there exist numbers `c_1, c_2, ..., c_k` such that
`C = c_1a_1 + c_2a_2 + ... + c_ka_k`
where `a_1, a_2, ..., a_k` are the eigenvectors of `A`.
So if we know the eigenvectors (and eigenvalues) of `A`, we can rewrite `y(t)` as
`y(t) = e^(At)C`
`= e^(At)(c_1a_1 + c_2a_2 + ... + c_ka_k)`
`= c_1e^(At)a_1 + c_2e^(At)a_2 + ... + c_ke^(At)a_k`
`= c_1e^(lambda_1t)a_1 + c_2e^(lambda_2t)a_2 + ... + c_ke^(lambda_kt)a_k`
(in the last step, each `a_j` is an eigenvector of `A`, so the earlier derivation applies with `C = a_j` and gives `e^(At)a_j = e^(lambda_jt)a_j`)
So we get that `y(t) = c_1e^(lambda_1t)a_1 + c_2e^(lambda_2t)a_2 + ... + c_ke^(lambda_kt)a_k`, which is also an easily computable solution.
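Putting the whole recipe together as a sketch (assuming NumPy; the matrix and vector are made-up illustration values, and `np.linalg.eig` returns the eigenvalues along with the eigenvectors as the columns of a matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # made-up matrix with eigenvalues 5 and 2
C = np.array([1.0, 1.0])

lams, V = np.linalg.eig(A)  # eigenvalues lams[j], eigenvectors V[:, j]
c = np.linalg.solve(V, C)   # coefficients with C = c_1 a_1 + c_2 a_2

def y(t):
    # y(t) = c_1 e^(lambda_1 t) a_1 + c_2 e^(lambda_2 t) a_2
    return sum(c[j] * np.exp(lams[j] * t) * V[:, j] for j in range(len(lams)))

print(y(0.0))  # reproduces C = [1, 1]
```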
Is there an easily computable solution for the system
`{(y'(t) = Ay(t)),(y(0)=C):}`
given that
`A = [[1, -2],[0, -1]]` and `C = [[1],[1]]`?
If so, find an easily computable solution for the system.
Recall the two results we found: if `C` is an eigenvector of `A` with eigenvalue `lambda`, then `y(t) = Ce^(lambdat)`; and if `C` is a linear combination `c_1a_1 + ... + c_ka_k` of the eigenvectors of `A`, then `y(t) = c_1e^(lambda_1t)a_1 + ... + c_ke^(lambda_kt)a_k`.
First, we should check if `C` is an eigenvector of `A`.
`C` is an eigenvector of `A` if there exists a scalar `lambda` such that `AC = lambdaC`.
So we should multiply `A` and `C` and see if we can find a `lambda` that satisfies that equation.
`AC = [[1, -2],[0, -1]][[1],[1]] = [[-1],[-1]] = -1[[1],[1]]`
Well, it looks like `lambda=-1`, so `C` is an eigenvector of `A`.
This means the easily computable solution is
`y(t) = Ce^(lambdat) = color(red)(e^(-t)[[1],[1]])`
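A quick check of this answer in code (assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, -2.0], [0.0, -1.0]])
C = np.array([1.0, 1.0])

print(A @ C)         # [-1, -1] = -1 * C, confirming lambda = -1
t = 1.3
yt = np.exp(-t) * C  # y(t) = e^(-t) [1, 1]
print(A @ yt)        # equals y'(t) = -e^(-t) [1, 1], so the system is satisfied
print(-yt)
```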
Is there an easily computable solution for the system
`{(y'(t) = Ay(t)),(y(0)=C):}`
given that
`A = [[2, 3],[-1, -2]]` and `C = [[-1],[2]]`?
If so, find an easily computable solution for the system.
Recall the two results we found: `y(t) = Ce^(lambdat)` if `C` is an eigenvector of `A`, and `y(t) = c_1e^(lambda_1t)a_1 + ... + c_ke^(lambda_kt)a_k` if `C` is a linear combination of the eigenvectors of `A`.
First, we should check if `C` is an eigenvector of `A`.
`C` is an eigenvector of `A` if there exists a scalar `lambda` such that `AC = lambdaC`.
So we should multiply `A` and `C` and see if we can find a `lambda` that satisfies that equation.
`AC = [[2, 3],[-1, -2]][[-1],[2]] = [[4],[-3]]`
There is no single number that multiplies `[[-1],[2]]` to give `[[4],[-3]]` (we would need `-lambda = 4` and `2lambda = -3` at the same time), so `C` is not an eigenvector of `A`.
Next, we should check if `C` is a linear combination of the eigenvectors of `A`.
To do this, we need to find the eigenvectors of `A`. And to do that, we need to find the eigenvalues of `A`.
To find the eigenvalues of `A`, we solve the characteristic equation `det(A - lambdaI) = 0`:
`det([[2-lambda, 3],[-1, -2-lambda]]) = (2-lambda)(-2-lambda) - (3)(-1) = lambda^2 - 4 + 3 = lambda^2 - 1`
Setting `lambda^2 - 1 = 0` gives the eigenvalues `1` and `-1`.
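We can double-check these numerically (assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 3.0], [-1.0, -2.0]])
print(np.linalg.eigvals(A))  # approximately [1., -1.]
```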
Now that we have the eigenvalues, we can find the eigenvectors.
We have
`Ax = lambdax`
`implies Ax - lambdax = 0`
`implies (A - lambdaI)x = 0`
For `lambda = 1`, we have
`(A-I)x = 0`
`([[2, 3],[-1, -2]] - [[1, 0], [0, 1]])x = 0`
`[[1, 3],[-1, -3]]x = 0`
`x` is the eigenvector, so we have to try and solve for it. We do this by setting up the augmented matrix and row reducing.
`[[1, 3, |, 0], [-1, -3, |, 0]] rarr [[1, 3, |, 0], [0, 0, |, 0]]`
From this, we conclude that
`x_2` is a free variable, let it be equal to `1`
`x_1 = -3x_2`.
So an eigenvector `x` for the eigenvalue `1` is equal to
`x = [[x_1],[x_2]] = [[-3x_2],[x_2]] = x_2[[-3],[1]] = [[-3],[1]]`
For `lambda = -1`, we have
`(A+I)z = 0`
`([[2, 3],[-1, -2]] + [[1, 0], [0, 1]])z = 0`
`[[3, 3],[-1, -1]]z = 0`
`z` is the eigenvector, so we have to try and solve for it. We do this by setting up the augmented matrix and row reducing.
`[[3, 3, |, 0], [-1, -1, |, 0]] rarr [[1, 1, |, 0], [-1, -1, |, 0]] rarr [[1, 1, |, 0], [0, 0, |, 0]]`
From this we conclude that
`z_2` is a free variable, let it be equal to `1`
`z_1 = -z_2`.
So an eigenvector `z` for the eigenvalue `-1` is equal to
`z = [[z_1],[z_2]] = [[-z_2],[z_2]] = z_2[[-1],[1]] = [[-1],[1]]`
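Both eigenvectors are easy to sanity-check in code (note that `np.linalg.eig` would return rescaled versions of these, since eigenvectors are only determined up to a scalar multiple):

```python
import numpy as np

A = np.array([[2.0, 3.0], [-1.0, -2.0]])
x = np.array([-3.0, 1.0])  # claimed eigenvector for lambda = 1
z = np.array([-1.0, 1.0])  # claimed eigenvector for lambda = -1

print(A @ x, 1 * x)   # both print [-3, 1]
print(A @ z, -1 * z)  # both print [1, -1]
```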
So two eigenvectors of `A` are `[[-3],[1]]` and `[[-1],[1]]`. Recall that the goal was to determine if `C = [[-1],[2]]` was a linear combination of the eigenvectors of `A`. To determine this, we need to row reduce yet again.
For `C = [[-1],[2]]` to be a linear combination of `[[-3],[1]]` and `[[-1],[1]]`, there need to be numbers `c_1` and `c_2` such that `[[-1],[2]] = c_1[[-3],[1]] + c_2[[-1],[1]]`.
This is equivalent to setting up the augmented matrix `[[-3, -1, |, -1], [1, 1, |, 2]]` and row reducing to find `c_1` and `c_2`.
`[[-1],[2]] = c_1[[-3],[1]] + c_2[[-1],[1]]` is equivalent to
`-3c_1 - c_2 = -1`
`c_1 + c_2 = 2`
Now consider the equation
`[[-3, -1], [1, 1]][[c_1],[c_2]] = [[-1],[2]]`
Notice that if we do the multiplication, we get the exact two equations above.
Also notice that solving for `c_1` and `c_2` requires setting up the augmented matrix and row reducing (that's how we've learned to solve these types of problems).
This is why we row reduce the augmented matrix when we want to find out if something is a linear combination of something else.
`[[-3, -1, |, -1], [1, 1, |, 2]] rarr [[1, 1, |, 2], [-3, -1, |, -1]] rarr [[1, 1, |, 2], [0, 2, |, 5]]`
From this we conclude that
`2c_2 = 5 implies c_2 = 5/2`
`c_1 + c_2 = 2 implies c_1 = 2 - 5/2 = -1/2`
This means that if we do `c_1[[-3],[1]] + c_2[[-1],[1]] = -1/2[[-3],[1]] + 5/2[[-1],[1]]`, we should get `[[-1],[2]]`, which we do.
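The same coefficients fall out of a linear solve (a minimal sketch, assuming NumPy):

```python
import numpy as np

V = np.array([[-3.0, -1.0], [1.0, 1.0]])  # eigenvectors as columns
C = np.array([-1.0, 2.0])

c = np.linalg.solve(V, C)
print(c)      # [-0.5, 2.5], i.e. c_1 = -1/2 and c_2 = 5/2
print(V @ c)  # reconstructs C = [-1, 2]
```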
So `C` is a linear combination of the eigenvectors of `A`, which means there exists an easily computable solution:
`y(t) = c_1e^(lambda_1t)a_1 + c_2e^(lambda_2t)a_2 = color(red)(-1/2e^t[[-3],[1]] + 5/2e^(-t)[[-1],[1]])`
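As a final sanity check (assuming SciPy), this formula agrees with the full matrix exponential solution `e^(At)C` at any `t`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 3.0], [-1.0, -2.0]])
C = np.array([-1.0, 2.0])

def y(t):
    return (-0.5 * np.exp(t) * np.array([-3.0, 1.0])
            + 2.5 * np.exp(-t) * np.array([-1.0, 1.0]))

t = 0.8
print(y(t))             # the easily computable solution...
print(expm(A * t) @ C)  # ...matches e^(At) C
```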