First, let's name the entries in the row r1, r2, and so on. The number of columns in the first matrix is the same as the number of rows in the second, so the two are compatible. Now that you know how to multiply a row by a column, multiplying larger matrices is easy: for the entry in the i-th row and j-th column of the product matrix, multiply each entry in the i-th row of the first matrix by the corresponding entry in the j-th column of the second matrix and add the results. The entry of the product matrix in the i-th row and j-th column is called e_ij.
To get e_11, multiply Row 1 of the first matrix by Column 1 of the second. To get e_12, multiply Row 1 of the first matrix by Column 2 of the second. To get e_21, multiply Row 2 of the first matrix by Column 1 of the second. And so on.
Matrices also give a compact way of writing systems of linear equations. Take, for instance, the two equations x - 2y = 0 and x - y = 1. The matrix that depicts those two equations would be a two-by-two grid of numbers: the top row would be [1 -2], and the bottom row would be [1 -1], corresponding to the coefficients of the variables in the two equations. The point (2, 1) solves the system, and it is also where the graphs of the two equations intersect.
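As a quick sketch of this connection (assuming right-hand sides of 0 and 1, which is consistent with the intersection point (2, 1)), multiplying each row of the coefficient matrix by the point reproduces those right-hand sides:

```python
# Coefficient matrix for the two equations: top row [1 -2], bottom row [1 -1].
A = [[1, -2],
     [1, -1]]

# The candidate solution (x, y) = (2, 1).
point = [2, 1]

# Row-by-column multiplication: multiply each row of A by the point and sum.
result = [sum(a * p for a, p in zip(row, point)) for row in A]

print(result)  # [0, 1]: the point satisfies both equations
```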
In a range of applications from image processing to genetic analysis, computers are often called upon to solve systems of linear equations, usually with many more than two variables. Matrix multiplication can be thought of as solving linear equations for particular variables. Suppose, for instance, that the expressions t + 2p + 3h, 4t + 5p + 6h, and 7t + 8p + 9h describe three different operations involving temperature (t), pressure (p), and humidity (h) measurements. They could be represented as a matrix with three rows: [1 2 3], [4 5 6], and [7 8 9]. Now suppose that, at two different times, you take temperature, pressure, and humidity readings outside your home.
Those readings could be represented as a matrix as well, with the first set of readings in one column and the second in the other.
Multiplying these matrices together means matching up rows from the first matrix — the one describing the equations — and columns from the second — the one representing the measurements — multiplying the corresponding terms, adding them all up, and entering the results in a new matrix. The numbers in the final matrix might, for instance, predict the trajectory of a low-pressure system.
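The process just described can be sketched as follows (the measurement values below are made up purely for illustration):

```python
# Coefficient matrix: one row per expression, as in the text.
equations = [[1, 2, 3],
             [4, 5, 6],
             [7, 8, 9]]

# Hypothetical readings: temperature, pressure, humidity (rows),
# taken at two different times (columns).
readings = [[20, 22],   # temperature
            [ 1,  2],   # pressure
            [50, 40]]   # humidity

# Match each row of the first matrix with each column of the second,
# multiply corresponding terms, add them up, and store the result.
rows, inner, cols = len(equations), len(readings), len(readings[0])
product = [[sum(equations[i][k] * readings[k][j] for k in range(inner))
            for j in range(cols)]
           for i in range(rows)]

for row in product:
    print(row)
```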
Of course, reducing the complex dynamics of weather-system models to a system of linear equations is itself a difficult task. But that points to one of the reasons that matrices are so common in computer science: They allow computers to, in effect, do a lot of the computational heavy lifting in advance. Decoding digital video, for instance, requires matrix multiplication; earlier this year, MIT researchers were able to build one of the first chips to implement the new high-efficiency video-coding standard for ultrahigh-definition TVs, in part because of patterns they discerned in the matrices it employs.
In the same way that matrix multiplication can help process digital video, it can help process digital sound.

Point-matrix multiplication can be written out in pseudocode in just a few lines; we will give the version for 4x4 matrices later. The identity matrix (or unit matrix) is a square matrix whose coefficients are all 0, except the coefficients along the diagonal, which are set to 1. The result of P multiplied by the identity matrix is P.
If we replace the coefficients in the point-matrix multiplication code with those of the identity matrix, we can clearly see why: when the diagonal coefficients are set to 1 and all the other coefficients of the matrix are set to 0, each of the point's coordinates is multiplied by 1 and comes out unchanged. However, when the coefficients along the diagonal are different from 1 (whether smaller or bigger than 1), they act as a multiplier on the point's coordinates; in other words, the point's coordinates are scaled up or down by some amount.
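Here is a minimal sketch of point-matrix multiplication with a 4x4 identity matrix, assuming the row-vector convention (the point multiplies the matrix from the left):

```python
def point_times_matrix(p, m):
    """Multiply a 3D point (treated as the row vector [x, y, z, 1])
    by a 4x4 matrix and return the transformed x, y, z."""
    x, y, z = p
    tx = x * m[0][0] + y * m[1][0] + z * m[2][0] + m[3][0]
    ty = x * m[0][1] + y * m[1][1] + z * m[2][1] + m[3][1]
    tz = x * m[0][2] + y * m[1][2] + z * m[2][2] + m[3][2]
    return [tx, ty, tz]

# The identity matrix: 1s along the diagonal, 0s everywhere else.
identity = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

P = [1, 2, 3]
print(point_times_matrix(P, identity))  # [1, 2, 3]: P is unchanged
```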
If you remember what we said in the chapter on coordinate systems, multiplying the coordinates of a point by some real numbers results in scaling the point's coordinates. The scaling matrix can therefore be written as a matrix whose diagonal coefficients are the scale factors Sx, Sy, and Sz, with every other coefficient set to 0. As an example, imagine a point P whose coordinates are (1, 2, 3): scaling it by (2, 2, 2) gives (2, 4, 6). Note that if any of the scaling coefficients in the matrix is negative, then the point's coordinate for the corresponding axis will be flipped (it will be mirrored to the other side of the axis).
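A sketch of this, using illustrative scale factors of (2, 2, 2) and then a negative factor on x to show the mirroring effect (row-vector convention assumed):

```python
def scale_matrix(sx, sy, sz):
    """Build a 4x4 scaling matrix: scale factors along the diagonal."""
    return [[sx, 0,  0,  0],
            [0,  sy, 0,  0],
            [0,  0,  sz, 0],
            [0,  0,  0,  1]]

def point_times_matrix(p, m):
    """Multiply the row vector [x, y, z, 1] by a 4x4 matrix."""
    x, y, z = p
    return [x * m[0][0] + y * m[1][0] + z * m[2][0] + m[3][0],
            x * m[0][1] + y * m[1][1] + z * m[2][1] + m[3][1],
            x * m[0][2] + y * m[1][2] + z * m[2][2] + m[3][2]]

P = [1, 2, 3]
print(point_times_matrix(P, scale_matrix(2, 2, 2)))   # [2, 4, 6]
print(point_times_matrix(P, scale_matrix(-1, 1, 1)))  # [-1, 2, 3]: x is mirrored
```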
In this paragraph, we will build a matrix that rotates a point or a vector around one axis of the Cartesian coordinate system. To do so, we will need trigonometric functions.
Let's take a point P defined in a three-dimensional coordinate system with coordinates (1, 0, 0). Let's ignore the z-axis for a while and assume that the point lies in the xy plane. Suppose we want to move this point to the position (0, 1, 0).
As you can see in figure 1, this can be done by rotating the point around the z-axis by 90 degrees counterclockwise. Considering what we know about matrix multiplication, let's see how we can rewrite a point-matrix multiplication to isolate the computation of each of the transformed point's coordinates, and in particular the second line of code, the one computing the transformed y-coordinate. Don't worry for now if you don't understand why the coefficients have the values they have; that will be explained soon.
Figure 3: cosine and sine can be used to determine the coordinates of a point on the x- and y-axes of the unit circle.

This is where our knowledge of trigonometric functions comes in handy: we can rewrite the coefficients of the matrix R in terms of the cosine and sine of the rotation angle. Thus, it seems that we can generalise the notation for R (a matrix that rotates points around the z-axis) and write its rows as [cos(θ) sin(θ) 0], [-sin(θ) cos(θ) 0], and [0 0 1]. Let's check: starting from the point with coordinates (0, 1, 0) and setting θ to 90 degrees, the transformation produces (-1, 0, 0) instead of the (1, 0, 0) we expected. That doesn't seem quite right.
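The check described above can be sketched in code (row-vector convention assumed, with a counterclockwise rotation): rotating (1, 0, 0) by 90 degrees lands on (0, 1, 0) as intended, while the same rotation applied to (0, 1, 0) gives (-1, 0, 0), the surprising result the text points out.

```python
import math

def rotation_z(theta):
    """4x4 matrix rotating row-vector points around the z-axis by
    theta radians, counterclockwise (P' = P * R)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[ c, s, 0, 0],
            [-s, c, 0, 0],
            [ 0, 0, 1, 0],
            [ 0, 0, 0, 1]]

def point_times_matrix(p, m):
    """Multiply the row vector [x, y, z, 1] by a 4x4 matrix."""
    x, y, z = p
    return [x * m[0][0] + y * m[1][0] + z * m[2][0] + m[3][0],
            x * m[0][1] + y * m[1][1] + z * m[2][1] + m[3][1],
            x * m[0][2] + y * m[1][2] + z * m[2][2] + m[3][2]]

R = rotation_z(math.pi / 2)  # 90 degrees
p1 = point_times_matrix([1, 0, 0], R)
p2 = point_times_matrix([0, 1, 0], R)
print([round(v, 6) for v in p1])  # [0.0, 1.0, 0.0]
print([round(v, 6) for v in p2])  # [-1.0, 0.0, 0.0]
```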
The problem is the direction of rotation: to bring (0, 1, 0) onto (1, 0, 0) we need to rotate clockwise, that is, by -θ. In that case, we would get for R the rows [cos(θ) -sin(θ) 0], [sin(θ) cos(θ) 0], and [0 0 1]. To find the matrices that rotate a point around the x- and y-axes (or in the yz and xz planes), we can simply follow the same logic we used to find the matrix that rotates points and vectors around the z-axis (in the xy plane).
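Following that same logic, the x- and y-axis rotation matrices can be sketched as below (same row-vector, counterclockwise convention as before; the 1 on the diagonal simply moves to the row of the axis being rotated around). Rotating the y-axis 90 degrees around x brings it onto the z-axis, and rotating the z-axis 90 degrees around y brings it onto the x-axis:

```python
import math

def rotation_x(theta):
    """Rotate around the x-axis (in the yz plane), row-vector convention."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0],
            [0, c, s],
            [0, -s, c]]

def rotation_y(theta):
    """Rotate around the y-axis (in the xz plane), row-vector convention."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, -s],
            [0, 1, 0],
            [s, 0, c]]

def vec_times_matrix(v, m):
    """Multiply a row vector by a 3x3 matrix."""
    return [sum(v[k] * m[k][j] for k in range(3)) for j in range(3)]

a = vec_times_matrix([0, 1, 0], rotation_x(math.pi / 2))
b = vec_times_matrix([0, 0, 1], rotation_y(math.pi / 2))
print([round(t, 6) for t in a])  # [0.0, 0.0, 1.0]
print([round(t, 6) for t in b])  # [1.0, 0.0, 0.0]
```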