Bilateral Filter

Gaussian filtering has been one of the most successful image filters in history. However, its shortcomings, as highlighted in the post on anisotropic diffusion, have propelled the development of new methods that smooth piecewise constant regions without dislocating or blurring the edges. Anisotropic diffusion is one such method. Anisotropic diffusion, however, is an iterative process and can therefore be computationally intensive. A method that overcomes the computational demands of anisotropic diffusion is the bilateral filter. To understand the development of the bilateral filter, it is imperative to first understand the Gaussian filter.
The Gaussian filter is a neighbourhood operator that replaces the pixel under consideration with the weighted average of its neighbours. That may sound like gibberish, so let me break it down.
Consider a man who lives in a rich neighbourhood. The premise of this neighbourhood operator is this: if a man lives in a rich neighbourhood, he must be a rich man. So, if a supposedly poor man lives in a rich neighbourhood, it means his wealth has not been correctly evaluated; although he was previously considered poor, the mistake is now rectified and he is considered rich. This seems like a logical approach in all regards. One question comes to light, though: how do we define the neighbourhood?


Consider a rich neighbourhood that shares a boundary with a poor neighbourhood. How do you characterise a man based on his neighbours, when he might live close to the boundary and hence might be poor or merely “average”? Mr Gaussian Filtering then says, let me solve this:


“The farther a neighbour lives from the man whose actual wealth is being considered, the less likely he is to contribute any information about that man's wealth. The probable information as a function of distance corresponds to a Gaussian curve.”


A problem arose. Poor people living on the boundary of the poor neighbourhood were considered averagely wealthy because they lived close to rich people on the other side of the boundary. The hard boundary disappeared, and it became hard to distinguish poor from rich, especially at the boundary.


Two men thus decided to end the rift between both neighbourhoods. Their names were Tomasi and Manduchi. They said: “Mr Gaussian Filtering is right, but his hypothesis is incomplete. Let us characterise a man not only by how close his neighbours are, but also by how much money they have compared to how much he appears to have.” It worked perfectly: people were assigned their true worth, and the boundary remained.


Now to images: de-noising is the art of removing noise and giving pixels their true worth.
The familiar Gaussian filter penalises the contributions of neighbouring pixels based on their distance from the pixel under consideration. The weights assigned in the convolution are determined by the predefined variance of the Gaussian.


Enough talk in plain English; let us translate this into a mathematical representation. To evolve into the bilateral filter, we first provide a mathematical representation of the Gaussian filter.

Let $q$ be the position of a neighbouring pixel in the neighbourhood $\mathcal{N}(p)$ of the pixel under consideration at position $p$, and let $I(\cdot)$ denote the pixel intensity at a given position. The neighbourhood operator that filters a given pixel is given by

$$\hat{I}(p) = \frac{\sum_{q \in \mathcal{N}(p)} w(p, q)\, I(q)}{\sum_{q \in \mathcal{N}(p)} w(p, q)}$$

The parameter $w(p, q)$ defines a weight for the neighbouring pixel under consideration. As we have iterated earlier, this weight is obtained by penalising the spatial distance using a Gaussian. $w(p, q)$ is thus defined as

$$w(p, q) = \exp\!\left(-\frac{\|p - q\|^2}{2\sigma_s^2}\right)$$


The bilateral filter introduces an extra weighting term $r(p, q)$, a penalisation based on the difference in pixel (or voxel) values. The addition of this term changes the neighbourhood function to

$$\hat{I}(p) = \frac{\sum_{q \in \mathcal{N}(p)} w(p, q)\, r(p, q)\, I(q)}{\sum_{q \in \mathcal{N}(p)} w(p, q)\, r(p, q)}$$


As you can guess from the earlier explanation, this term can also be computed using a Gaussian:

$$r(p, q) = \exp\!\left(-\frac{\left(I(p) - I(q)\right)^2}{2\sigma_r^2}\right)$$

Applying this function to an image smooths its different regions without any loss of boundaries or edges. We will practically implement this filter and apply it to an image in the next “episode”.
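In the meantime, the two weights can be combined into a brute-force sketch of the filter. This is a minimal, unoptimised illustration, not the implementation from the next post; the parameter names `sigma_s` and `sigma_r`, the window radius, and all default values are my own choices:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Brute-force bilateral filter on a 2D grayscale image."""
    img = img.astype("float64")
    out = np.zeros_like(img)
    # The spatial (distance) weights are the same for every pixel, so
    # precompute them once over the (2*radius+1) x (2*radius+1) window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    w_spatial = np.exp(-(xx ** 2 + yy ** 2) / (2. * sigma_s ** 2))
    padded = np.pad(img, radius, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalise the difference in pixel values
            w_range = np.exp(-((patch - img[i, j]) ** 2) / (2. * sigma_r ** 2))
            w = w_spatial * w_range
            # Normalised weighted average of the neighbourhood
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a hard step edge, neighbours across the edge receive a near-zero range weight, so the edge survives the smoothing while each flat side is still averaged.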


Anisotropic Diffusion 2: The Implementation




As is well known, deviations occur in the implementation of algorithms, especially if the algorithm was defined with reference to the continuous domain. The same applies to the implementation of the principle already defined for anisotropic diffusion. We will implement here, in Python, a discrete model of anisotropic diffusion suited to digital images, which are of course discrete.


First of all, we import the necessary libraries used throughout the implementation. Our implementation hinges most importantly on the numpy and scipy libraries.

 import numpy as np  
 import scipy.ndimage as sp  
 import scipy.misc as sv  
 from scipy.ndimage.filters import gaussian_filter as gaussian  


It is much easier to work with a grayscale image, so we load the image in grayscale:


 img = sp.imread("fruits.jpg", mode = "L")  




As previously defined, anisotropic diffusion is based on computing a first-order gradient. Two operators are commonly used to compute it: the forward difference and the backward difference. In our implementation we use the forward difference operator, and as the familiar reader will expect, we compute the gradient along both the x and y axes of the image. A problem manifests, however: the forward difference returns an array that is one element shorter along the axis of computation, meaning the x gradient of a 5 by 5 matrix would be 5 by 4. We will analyse and take care of this problem later; for now, we zero pad the gradient to maintain the 5 by 5 image shape.


 def forwardDifferenceGradient(img):  
   diffY = np.zeros_like(img)  
   diffX = np.zeros_like(img)  
   diffY[:-1, :] = np.diff(img, axis = 0)  
   diffX[:, :-1] = np.diff(img, axis = 1)  
   return diffY, diffX  
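To see the shape problem and the zero padding concretely, consider a small ramp array (the test values below are arbitrary, chosen only for illustration):

```python
import numpy as np

# A 5x5 ramp image: each row increases by 1 from left to right
test = np.arange(25, dtype="float32").reshape(5, 5)

# The raw forward difference is one element short along the chosen axis
raw = np.diff(test, axis=1)
print(raw.shape)  # (5, 4)

# Zero padding the last column restores the 5x5 shape,
# exactly as forwardDifferenceGradient does above
dX = np.zeros_like(test)
dX[:, :-1] = raw
print(dX[0])  # [1. 1. 1. 1. 0.]
```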

Now we define the diffusion functions. As stated in the previous post, we can have a function resembling the sigmoidal function or one stemming from the tanh function; we hence implement both schemes.


 def sigmoidFormula(gradientY, gradientX, k):  
   cGradientY = np.exp(-(gradientY/k) **2.)  
   cGradientX = np.exp(-(gradientX/k) **2.)  
   YVal = cGradientY * gradientY  
   XVal = cGradientX * gradientX  
   return YVal, XVal  

 def tanhFormula(gradientY, gradientX, k):  
   cGradientY = 1./(1. + ((gradientY/k)**2))  
   cGradientX = 1./(1. + ((gradientX/k)**2))  
   YVal = cGradientY * gradientY  
   XVal = cGradientX * gradientX  
   return YVal, XVal  

Anisotropic diffusion is an iterative process, with each iteration working on the previous image. To control how the diffusion at the current iteration affects the whole image, a constant lambda is introduced: the diffusion computed at a particular iteration is multiplied by lambda before being added to the image. We set lambda to 0.25.


A shift in the gradient would gradually shift the edges, and if the number of iterations is large enough it would completely dislocate the image. We therefore compensate for the shift of the gradient operator by introducing new variables that shift the gradient back, mitigating any edge shift over time.

We also choose K equal to 20.

 img = sp.imread("fruits.jpg", mode = "L")  
 img = img.astype("float32")  
 gauss = gaussian(img, 11)  
 shiftedY = np.zeros_like(img)  
 shiftedX = np.zeros_like(img)  
 for i in range(10):  
   dY, dX = forwardDifferenceGradient(img)  
   cY, cX = sigmoidFormula(dY, dX, 20)  
   shiftedY[:] = cY  
   shiftedX[:] = cX  
   shiftedY[1:,:] -= cY[:-1,:]  
   shiftedX[:,1:] -= cX[:,:-1]  
   img += 0.25*(shiftedY+shiftedX)  
 sv.imsave('anisotropic.jpg', img)  
 sv.imsave('gaussian.jpg', gauss)  


Anisotropic Diffusion (iterations = 10, K = 20)

Gaussian filtering (sigma=11)



We compare anisotropic diffusion after 10 iterations with Gaussian filtering using a sigma of 11 and notice that the Gaussian filter completely blurs out the edges; information within the image is almost completely lost. Anisotropic diffusion, however, maintains the edge information.

partly derived with reference to this





Anisotropic Diffusion in Image Processing






Images provide insight into the world: our lives, places, things, etc. Human beings have the implicit ability to understand and make sense of an image both semantically and syntactically. We are thus able (assuming I am human) to easily draw visual conclusions from the information presented in images. The same is not true for artificial intelligence, which that ease eludes.

Image processing is a science that furnishes the world with algorithms that extract information embedded in images, with the eventual aim of making sense of that information. However, factors such as imperfect image capture introduce noise into images. It is therefore important to be able to differentiate noise from information, and filtering techniques help in this separation.

Gaussian filtering is a process of noise elimination in images by the convolution of an image with a Gaussian. This Gaussian of scale, or standard deviation, $\sigma$ isotropically blurs the image, thereby reducing the noise. Since noise exists in images at high frequencies, this replaces every pixel by a weighted combination of its neighbours. The weights are determined by $\sigma$, which also determines the window size for the convolution. Mathematically, a Gaussian in 2D corresponds to

$$G_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

It is, however, generally known that this corresponds to a low pass filter, which blurs the image, reducing not only the noise but also the edges in the data, which correspond to high frequency information. Edges are very important in image processing, just as they are in human vision, because they correspond to object boundaries; these boundaries define the localised objects within the image scene. The blurring of edges is thus an undesirable effect. Another undesirable quality: because a Gaussian filter replaces a pixel value by a weighted combination of its neighbouring pixels, there is an edge shift, so the eventual location of an edge is not its correct location. Assume we are designing the vision system of a robot and we decide to filter the incoming images with a Gaussian; this shift in edges (if large) would make the robot bump into walls and objects.
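To make the formula concrete, a discrete convolution kernel can be built by sampling the 2D Gaussian on a grid. This is a minimal sketch; the kernel size of 7 and sigma of 1.5 are arbitrary choices of mine:

```python
import numpy as np

def gaussian_kernel_2d(size, sigma):
    # Sample the 2D Gaussian on a size x size grid centred at the origin
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2. * sigma ** 2))
    # Normalise so the discrete weights sum to one:
    # the filter then computes a true weighted average
    return g / g.sum()

kernel = gaussian_kernel_2d(7, 1.5)
# The centre pixel receives the largest weight; weights fall off
# isotropically with distance, which is exactly what blurs edges
```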

To combat these effects, Perona and Malik introduced a new filter that blurs homogeneous regions whilst not blurring the edges.

The anisotropic diffusion filter, introduced by Perona and Malik, is built on the mathematical formulation of the Gaussian filter, which can be correlated with the diffusion law.

Fick's law of equilibration is defined by the formula

$$j = -D \nabla u$$
This formula states that a concentration gradient creates a flux which aims to compensate for the concentration gradient. During this flow, no new mass is created or destroyed. The mathematically savvy reader will be drawn to the fact that this can be represented using the continuity equation

$$\partial_t u = -\operatorname{div} j$$
If we plug Fick's law into the continuity equation, we get the diffusion law:

$$\partial_t u = \operatorname{div}(D \nabla u)$$
The diffusion tensor $D$ is a positive definite symmetric matrix. When it is a scalar, it corresponds to isotropic diffusion throughout the image, such as with a Gaussian. The real task is finding a $D$ that allows the blurring of homogeneous regions while protecting the edges. Perona and Malik solved this by introducing a dependence of $D$ on the gradient of the image. The gradient of an image is a rough estimate of its edges, so a diffusion tensor that depends on the gradient creates a non-uniform diffusion process that avoids the edges (pretty obvious, isn't it?):

$$\partial_t u = \operatorname{div}\!\left(g(\|\nabla u\|)\, \nabla u\right)$$
As is observed, the new diffusion tensor depends on the gradient and thus, provided the function $g$ is chosen carefully, can avoid the blurring of the edges. Two functions were proposed:

$$g(\|\nabla u\|) = \exp\!\left(-\left(\frac{\|\nabla u\|}{K}\right)^2\right)$$

and

$$g(\|\nabla u\|) = \frac{1}{1 + \left(\frac{\|\nabla u\|}{K}\right)^2}$$
Those familiar with the paradigms of machine learning will immediately recognise that these functions assign weights to the diffusion tensor, scaling the weight based on edge strength. $K$ is called the conduction parameter. From the analysis of the equations above, it can be inferred that $K$ helps generate the weights by regularising the gradient matrix. It sets the emphasis on how strong an edge should be before it is avoided in the filtering or diffusion process. This parameter is important because noise is present in the gradient of an image, so $K$ should be optimised to separate true edges from false ones; men from boys. Now that we have defined the mathematics of anisotropic diffusion, we will go on to implement it in Python. Not in this post, though; in the next one.
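To see how the conduction parameter K separates weak gradients from strong edges, the two proposed functions can be evaluated directly. A small sketch; the sample gradient magnitudes and the value K = 20 are arbitrary choices of mine:

```python
import numpy as np

def g_exp(grad, k):
    # First Perona-Malik conduction function: exp(-(|grad| / K)^2)
    return np.exp(-(grad / k) ** 2)

def g_frac(grad, k):
    # Second conduction function: 1 / (1 + (|grad| / K)^2)
    return 1. / (1. + (grad / k) ** 2)

grads = np.array([0., 10., 20., 100.])  # sample gradient magnitudes
K = 20.
# Both functions give full weight (1.0) where the gradient is zero and
# weights near zero at strong edges, so diffusion stops at boundaries
weights_exp = g_exp(grads, K)
weights_frac = g_frac(grads, K)
```

Note that both weights decrease monotonically with gradient magnitude; the exponential form suppresses strong edges far more aggressively than the rational form.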
