Last updated on May 7, 2022, 1:05 a.m.

Image filtering is a mathematical operation applied at each pixel of the image matrix, generally in order to extract features from the input image. To understand it in more detail, we can think of an image, which is composed of individual pixel values, as a function, f. In the case of a grayscale image, the function f is a 2-D matrix of shape, say, n×n, with values ranging from 0 to 255.

Let f be the image and g be the filter kernel. The output of filtering `f` with `g`, denoted `f*g`, is given by:

$$ (f*g)[m,n] = \sum_{k,l} f[m+k, n+l]\, g[k,l] $$
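The formula above translates almost directly into code. Below is a minimal sketch in NumPy (the `cross_correlate` helper name and the valid-mode output size are choices made for illustration, not part of the original text):

```python
import numpy as np

def cross_correlate(f, g):
    """out[m, n] = sum_{k,l} f[m+k, n+l] * g[k, l] (valid-mode output)."""
    kh, kw = g.shape
    oh = f.shape[0] - kh + 1  # output shrinks so every window fits inside f
    ow = f.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for m in range(oh):
        for n in range(ow):
            # elementwise product of the window under the kernel, then sum
            out[m, n] = np.sum(f[m:m+kh, n:n+kw] * g)
    return out

f = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
g = np.ones((2, 2)) / 4.0                     # 2x2 averaging kernel
out = cross_correlate(f, g)
# out[0, 0] averages f[0:2, 0:2] = {0, 1, 4, 5}, giving 2.5
```

In practice a vectorized routine would replace the explicit loops, but the double loop mirrors the summation in the formula one-for-one.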

The filtering operation computes the correlation between g and f at each location. We use filtering for three main reasons:

**1. To Enhance Images:** For noisy images, we use filters such as the Moving Average, Median Filter, or Gaussian Filter to smooth out values in the grid.
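A small sketch of the smoothing idea, assuming NumPy; the `filter2d` and `median3` helpers (with zero padding for the box filter and edge padding for the median) are written for this example only:

```python
import numpy as np

def filter2d(img, kernel):
    """Zero-padded cross-correlation producing a same-size output."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for m in range(img.shape[0]):
        for n in range(img.shape[1]):
            out[m, n] = np.sum(padded[m:m+kh, n:n+kw] * kernel)
    return out

def median3(img):
    """3x3 median filter with edge padding."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for m in range(img.shape[0]):
        for n in range(img.shape[1]):
            out[m, n] = np.median(padded[m:m+3, n:n+3])
    return out

img = np.zeros((5, 5))
img[2, 2] = 255.0  # a single "salt" noise pixel

box = np.ones((3, 3)) / 9.0      # moving-average (box) kernel
smoothed = filter2d(img, box)    # spike is spread over its 3x3 neighborhood
denoised = median3(img)          # median discards the outlier entirely
```

Note the qualitative difference: the moving average attenuates the spike to 255/9 but spreads it around, whereas the median filter removes the isolated outlier completely, which is why it is preferred for salt-and-pepper noise.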

**2. To Extract Features:** Filtering can also be used to extract important information such as edges. In such scenarios, filters such as the Prewitt and Sobel filters can be applied.
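As an illustration of edge extraction, here is a sketch with the standard Sobel kernels applied to a synthetic step edge (NumPy assumed; `correlate_valid` is a small helper written for this example):

```python
import numpy as np

# Sobel kernels: Kx responds to horizontal intensity changes, Ky to vertical
Kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Ky = Kx.T

def correlate_valid(f, g):
    """Valid-mode cross-correlation of image f with kernel g."""
    kh, kw = g.shape
    out = np.zeros((f.shape[0] - kh + 1, f.shape[1] - kw + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(f[m:m+kh, n:n+kw] * g)
    return out

# a vertical step edge: dark left half, bright right half
img = np.zeros((5, 6))
img[:, 3:] = 255.0

gx = correlate_valid(img, Kx)        # strong response at the step
gy = correlate_valid(img, Ky)        # zero: no vertical change
mag = np.sqrt(gx**2 + gy**2)         # gradient magnitude highlights the edge
```

The gradient magnitude peaks exactly along the step and vanishes in the flat regions, which is the basis of Sobel edge maps.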

**3. To Detect Patterns:** When building a machine learning model to recognize objects or learn patterns of objects in an image, we can use more advanced filters. For example, if we are writing an algorithm for table detection, we know a table has edges and corners, but at the same time the photo can be taken from different angles, so the detector has to be position invariant. Considering this, we can use detectors such as the Marr-Hildreth edge detector or the Canny edge detector to extract interest points and later use them to compute a similarity score.
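A rough sketch of the Marr-Hildreth idea, filtering with a Laplacian-of-Gaussian (LoG) kernel and locating edges at the sign changes ("zero-crossings") of the response. NumPy is assumed, and the kernel size and sigma are arbitrary choices for illustration:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel (Marr-Hildreth operator)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = ((r2 - 2 * sigma**2) / sigma**4) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # force zero sum: flat regions give zero response

def correlate_valid(f, g):
    """Valid-mode cross-correlation of image f with kernel g."""
    kh, kw = g.shape
    out = np.zeros((f.shape[0] - kh + 1, f.shape[1] - kw + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(f[m:m+kh, n:n+kw] * g)
    return out

# vertical step edge: dark left half, bright right half
img = np.zeros((15, 20))
img[:, 10:] = 255.0

response = correlate_valid(img, log_kernel())
row = response[3]  # any row; the image is constant vertically
# the LoG response is zero in flat regions and flips sign across the step,
# so the zero-crossing between the positive and negative lobes marks the edge
```

A full detector would scan the response for all such sign changes (and Canny adds gradient-based non-maximum suppression and hysteresis), but the zero-crossing criterion shown here is the core of the Marr-Hildreth method.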

Later, this ability to extract features became the root idea behind convolutional neural networks.
