In the previous post we explained how an image can be represented as a matrix of pixels, where each pixel is expressed as a tridimensional vector composed of the amounts of red, green and blue in the color. In this post, we are going to give some examples of the use of linear algebra in digital image processing.
There are two main kinds of image processing:

- When the color of every pixel is changed, using a function that takes as input the original pixel or, in more complex cases, a submatrix of pixels (usually a submatrix around the pixel, whose size depends on an extra factor).
- When the pixels change their position inside the image or, more precisely, when every pixel of the new matrix is built from another pixel of the original matrix, but without altering its color.
Image processing procedures of the first kind are usually called filters. Among the most common are: adjustment of brightness, contrast and colors; grayscale conversion; color inversion (negative); gamma correction; blur; and noise reduction.
In the second kind we can mention rotation, flips, scaling, skewing and translation.
Filters
From the point of view of linear algebra, filters are applied to each pixel of the matrix using the filter function. As explained before, the input of this function can be just one pixel, as in brightness adjustment, or a submatrix of pixels, as in the blur, where the order of the submatrix depends on the blur radius.
Let's consider the matrix M as the matrix associated with a full-color image:
$$M = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m1} & p_{m2} & \cdots & p_{mn} \end{pmatrix}$$
Here, p_{ij} is the pixel in the position (i, j), which is represented as the vector:

$$p_{ij} = \begin{pmatrix} r_{ij} \\ g_{ij} \\ b_{ij} \end{pmatrix}$$

where r_{ij}, g_{ij} and b_{ij} are its red, green and blue components.
In the simplest case, where the filter needs only a pixel as input, the function that transforms one tridimensional vector (pixel) into another may or may not be a linear transformation. When it is a linear transformation, it can be represented as a 3×3 matrix T, where:

$$p'_{ij} = T \, p_{ij}$$
Some of the filters that use linear transformations are:
Grayscale conversion
$$T = \begin{pmatrix} 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \end{pmatrix}$$
Each component of the new pixel is obtained by calculating the average of the three components of the original pixel.
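The transformation above can be sketched in a few lines, assuming NumPy and an image stored as an H × W × 3 array of 8-bit values; `apply_pixel_transform` is a hypothetical helper name:

```python
import numpy as np

# Grayscale conversion as a linear transformation: every entry of T
# is 1/3, so each output component is the average of r, g and b.
T = np.full((3, 3), 1.0 / 3.0)

def apply_pixel_transform(image, T):
    """Multiply every pixel (as a column vector) by the 3x3 matrix T."""
    out = image.astype(np.float64) @ T.T  # image @ T.T applies T to each pixel
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

# A 1x2 image: one pure red pixel and one pure blue pixel.
img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
gray = apply_pixel_transform(img, T)
# Both pixels become (85, 85, 85): the average of a single 255 and two 0s.
```

Rounding and clipping before converting back to 8-bit values keeps the result in the valid range, which matters for the filters that follow.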
Sepia effect
$$T = \begin{pmatrix} 0.393 & 0.769 & 0.189 \\ 0.349 & 0.686 & 0.168 \\ 0.272 & 0.534 & 0.131 \end{pmatrix}$$
The entries of the matrix T in the previous example give the image the reddish-brown color of early 20th-century monochrome photographs.
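The sepia matrix can produce components above 255, so the result must be clipped. A minimal sketch, again assuming NumPy and an H × W × 3 uint8 image; `sepia` is a hypothetical helper name:

```python
import numpy as np

# The sepia matrix from the post.
T_sepia = np.array([
    [0.393, 0.769, 0.189],
    [0.349, 0.686, 0.168],
    [0.272, 0.534, 0.131],
])

def sepia(image):
    out = image.astype(np.float64) @ T_sepia.T  # apply T to every pixel
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

# A pure white pixel: the first two rows of T sum to more than 1,
# so the red and green products are clipped back to 255.
img = np.array([[[255, 255, 255]]], dtype=np.uint8)
sepia(img)  # -> [[[255, 255, 239]]]
```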
Other common transformations are those where the resulting pixel is obtained by adding a 3×1 matrix (tridimensional vector) v to the original pixel:

$$p'_{ij} = p_{ij} + v$$

Although these transformations are very simple, they are not linear transformations; they use the concept of the sum of matrices instead. Examples of this kind of transformation are:
- Red channel adjustment: v = (f, 0, 0)^T
- Green channel adjustment: v = (0, f, 0)^T
- Blue channel adjustment: v = (0, 0, f)^T
- Brightness adjustment: v = (f, f, f)^T
In the four previous examples, f is a number that depends on the degree of adjustment to apply; it can be positive or negative, usually ranging from −150 to 150.
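The brightness case, for example, can be sketched as follows (assuming NumPy and a uint8 image; `adjust_brightness` is a hypothetical helper name):

```python
import numpy as np

# Brightness adjustment: add the vector (f, f, f) to every pixel
# and clip the result to [0, 255].
def adjust_brightness(image, f):
    out = image.astype(np.int16) + f  # int16 avoids uint8 wrap-around
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[[100, 200, 30]]], dtype=np.uint8)
adjust_brightness(img, 60)   # -> [[[160, 255, 90]]]  (200 + 60 clips to 255)
adjust_brightness(img, -50)  # -> [[[ 50, 150,  0]]]  (30 - 50 clips to 0)
```

The widening cast before the addition is the important detail: adding directly to a uint8 array would wrap around instead of clipping.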
Some other transformations can be obtained by combining the previous two. This is the case of finding the negative of an image (color inversion), where each component of the new pixel is obtained by subtracting the current value from 255. The matrices are:

$$T = -I = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \qquad v = \begin{pmatrix} 255 \\ 255 \\ 255 \end{pmatrix}$$

so that p'_{ij} = T p_{ij} + v.
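Since −p + 255 stays inside [0, 255] for any 8-bit component, the inversion needs no clipping at all. A minimal sketch assuming NumPy; `negative` is a hypothetical helper name:

```python
import numpy as np

# Color inversion: p' = -I p + (255, 255, 255), i.e. each
# component c becomes 255 - c.
def negative(image):
    return 255 - image  # stays within uint8 range, no clipping needed

img = np.array([[[0, 128, 255]]], dtype=np.uint8)
negative(img)  # -> [[[255, 127, 0]]]
```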
For the contrast adjustment, the operation is a little more complicated: each component c of the pixel is transformed as

$$c' = f\,(c - 128) + 128$$

that is, T = f·I and v = (128(1 − f), 128(1 − f), 128(1 − f))^T. In this case f is computed using the formula f = (259 × (value + 255)) / (255 × (259 − value)), where value is the degree of adjustment, usually ranging from −100 to 100.
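Putting the factor and the per-component formula together (a sketch assuming NumPy; `adjust_contrast` is a hypothetical helper name):

```python
import numpy as np

# Contrast adjustment: c' = f * (c - 128) + 128 per component,
# with f = 259(value + 255) / (255(259 - value)).
def adjust_contrast(image, value):
    f = (259.0 * (value + 255.0)) / (255.0 * (259.0 - value))
    out = f * (image.astype(np.float64) - 128.0) + 128.0
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

img = np.array([[[64, 128, 200]]], dtype=np.uint8)
adjust_contrast(img, 0)  # value = 0 gives f = 1: the image is unchanged
```

Positive values push components away from the midpoint 128 (more contrast), negative values pull them toward it.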
For gamma correction we need more than adding and multiplying matrices: we need the exponentiation operator. The filter can be computed using the following formula:
$$\begin{pmatrix} r' \\ g' \\ b' \end{pmatrix} = 255 \cdot \begin{pmatrix} (r/255)^{1/f} \\ (g/255)^{1/f} \\ (b/255)^{1/f} \end{pmatrix}$$
The factor f is a number ranging between 0 and 10, but without ever reaching 0.
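The formula translates directly into code (a sketch assuming NumPy; `gamma_correction` is a hypothetical helper name):

```python
import numpy as np

# Gamma correction: c' = 255 * (c / 255) ** (1 / f) per component.
def gamma_correction(image, f):
    out = 255.0 * (image.astype(np.float64) / 255.0) ** (1.0 / f)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

img = np.array([[[0, 128, 255]]], dtype=np.uint8)
# f > 1 brightens the midtones; 0 and 255 are fixed points.
gamma_correction(img, 2)
```

Normalizing to [0, 1] before exponentiating is what keeps 0 and 255 fixed while the values in between are curved up or down.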
The blur, as mentioned before, needs a submatrix of pixels as input, where the order of the submatrix depends on the blur radius. The primary idea is that each component of every output pixel is computed as the average of the corresponding components of all the pixels around it.
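A naive box blur along these lines, assuming NumPy and truncating the averaging window at the image borders (`box_blur` is a hypothetical helper name; real implementations use separable or convolution-based versions for speed):

```python
import numpy as np

# Box blur: every output component is the average of the corresponding
# component over a (2 * radius + 1)-square neighborhood.
def box_blur(image, radius):
    h, w, _ = image.shape
    src = image.astype(np.float64)
    out = np.empty_like(src)
    for i in range(h):
        for j in range(w):
            window = src[max(i - radius, 0):i + radius + 1,
                         max(j - radius, 0):j + radius + 1]
            out[i, j] = window.mean(axis=(0, 1))  # per-channel average
    return np.rint(out).astype(np.uint8)

# A constant image is unchanged by blurring.
img = np.full((4, 4, 3), 100, dtype=np.uint8)
(box_blur(img, 1) == 100).all()  # -> True
```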
In the next post we'll be talking about the second kind of image processing: the one related to the change of the position of the pixels.