Linear Algebra Diagonalization?

2010-04-27 12:39 am
A, P and D are n×n matrices.

Check the true statements below:

A. If AP=PD, with D diagonal, then the nonzero columns of P must be eigenvectors of A.
B. If A is diagonalizable, then A has n distinct eigenvalues.
C. A is diagonalizable if A has n distinct eigenvectors.
D. If A is invertible, then A is diagonalizable.

Answers (3)

2010-04-27 12:59 am
✔ Best answer
A)

(PD)_ij = Σ_x P_ix · D_xj

But if D is diagonal, then only the x = j term is non-zero.

Therefore:

(PD)_ij = P_ij · D_jj

That is, the entry in the ith row and jth column of the product is just the (i, j) entry of P times the jth diagonal entry of D.

So you should notice that the entries of column j are ALL multiplied by the same amount: D_jj.

So the columns of the product PD are just the columns of P, each scaled by the corresponding diagonal entry of D (first column times D_11, second column times D_22, etc.).
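A quick numerical check of this column-scaling fact, as a minimal sketch in Python/numpy (the matrices P and D below are made up purely for illustration):

import numpy as np

# Made-up example: P is an arbitrary 3x3 matrix, D is diagonal.
P = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 1.0],
              [5.0, 6.0, 2.0]])
D = np.diag([2.0, -1.0, 0.5])

PD = P @ D

# Column j of PD should equal column j of P scaled by D[j, j].
for j in range(3):
    assert np.allclose(PD[:, j], P[:, j] * D[j, j])
print("each column of PD is the matching column of P times D's diagonal entry")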

A cleaner way to see what this means for AP: when you multiply two matrices, you can read the product off column by column,

AP = [ A·P₁ | A·P₂ | ... | A·Pn ]

(where Pj denotes the jth column of P), i.e. the jth column of AP is just A applied to the jth column of P. But we already said that each column on the right-hand side (PD) is just the corresponding column of P times the diagonal entry D_jj.
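To check this column-wise picture numerically (another small sketch with made-up matrices):

import numpy as np

# Made-up 2x2 examples, just to illustrate the column-wise view of AP.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
P = np.array([[1.0, 4.0],
              [2.0, 5.0]])

AP = A @ P
for j in range(2):
    # Column j of AP is A applied to column j of P.
    assert np.allclose(AP[:, j], A @ P[:, j])
print("column j of AP equals A times column j of P")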

So, comparing the columns of AP with the columns of PD, we get n equations:

A·Pj = D_jj·Pj → this is exactly the definition of an eigenvector/eigenvalue pair (Av = λv, with v = Pj and λ = D_jj).

So every nonzero column of P is an eigenvector of A, and the corresponding diagonal entries of D are eigenvalues of A. Statement A is true.
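To see statement A in action numerically, here is a minimal sketch (P and D are made up, and A is constructed as P·D·P⁻¹ precisely so that AP = PD holds):

import numpy as np

# Made-up invertible P and diagonal D.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
D = np.diag([3.0, -2.0, 5.0])

A = P @ D @ np.linalg.inv(P)      # by construction, A P = P D
assert np.allclose(A @ P, P @ D)

# Each (nonzero) column of P is an eigenvector of A with eigenvalue D[j, j].
for j in range(3):
    v = P[:, j]
    assert np.allclose(A @ v, D[j, j] * v)
print("the columns of P are eigenvectors of A; D's diagonal holds the eigenvalues")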

B) This isn't correct: take the identity matrix, which is clearly diagonalizable (it is already diagonal), yet it has only one eigenvalue, namely 1.

It DOES have n linearly independent eigenvectors, but NOT n distinct eigenvalues.

(counterexample):
1 0 0
0 1 0 --> only ONE eigenvalue = 1
0 0 1

on the other hand, there ARE three linearly independent eigenvectors: (1, 0, 0), (0, 1, 0), and (0, 0, 1), so the identity matrix is diagonalizable even though its eigenvalues are not distinct. Statement B is false.
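The same counterexample, checked numerically (a small sketch using numpy's eigendecomposition):

import numpy as np

I = np.eye(3)

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(I)

print(set(eigenvalues))                     # {1.0}: only one distinct eigenvalue
print(np.linalg.matrix_rank(eigenvectors))  # 3: the eigenvectors still span R^3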

D) and C) are where you have to be careful, because both of them are actually false.

D) Invertibility does NOT imply diagonalizability; the two properties are independent of each other. Counterexample:

1 1
0 1

This matrix has determinant 1, so it is invertible, but its only eigenvalue is 1 and the corresponding eigenspace is only one-dimensional (spanned by (1, 0)). There is no basis of eigenvectors, so it cannot be diagonalized. (Going the other way, the zero matrix is diagonal, hence diagonalizable, but certainly not invertible.)

C) This is also false as stated. "Distinct" eigenvectors are not the same as linearly independent eigenvectors: any nonzero multiple of an eigenvector is again an eigenvector, so even the non-diagonalizable matrix above has plenty of distinct eigenvectors, e.g. (1, 0) and (2, 0), but they all lie on a single line. The correct criterion is: A is diagonalizable if and only if A has n linearly independent eigenvectors.
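A numerical sketch of that shared counterexample (again numpy; this just illustrates the claim, it is not a proof):

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.det(A))        # 1.0, so A is invertible

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)             # [1., 1.]: the only eigenvalue is 1

# The returned eigenvector columns are (numerically) parallel: the eigenspace
# for eigenvalue 1 is one-dimensional, so A has no basis of eigenvectors and
# is not diagonalizable, despite being invertible.
print(np.linalg.matrix_rank(eigenvectors, tol=1e-8))   # 1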

None of this is a real proof, but hopefully this gets you going in some direction.
2017-04-17 12:30 pm
A: Correct
B: Incorrect
C: Incorrect
D: Incorrect

Now we need someone to come up with the explanations.
2016-12-04 4:35 pm
The matrix version of the problem is AX = b, where A is the 2x2 matrix [4, k; k, 1], X is the vector of unknowns [x; y], and b is the constant vector [7; 0]. There will be a unique solution as long as A is non-singular; there is no unique solution if A *is* singular. For A to be singular, det(A) = 0, where det() is the determinant of a square matrix. For a 2x2 matrix [a, b; c, d], det([a, b; c, d]) = ad - bc. So in this case: 4·1 - k·k = 0, i.e. k² = 4, so k = ±2.
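A quick check of that arithmetic (a sketch; the matrix entries and the determinant formula are taken from the answer above):

import numpy as np

for k in (2.0, -2.0, 3.0):
    A = np.array([[4.0, k],
                  [k, 1.0]])
    det = 4.0 * 1.0 - k * k   # ad - bc for a 2x2 matrix
    print(k, det, np.isclose(det, np.linalg.det(A)))
# det is 0 exactly when k = +/-2, i.e. when AX = b fails to have a unique solution.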


Archived: 2021-04-13 17:13:25
Original link [permanently dead]:
https://hk.answers.yahoo.com/question/index?qid=20100426163938AAQNmUh

View the Wayback Machine backup