Image Processing and Edge Detection Masks
Image Processing and Edge Detection 2
Computer Vision
© Department of Computing, Imperial College
GZ Yang and DF Gillies • http://www.doc.ic.ac.uk/~gzy

Image Pre-processing

Pre-processing is an operation on images at the lowest level of abstraction - both input and output are intensity images. Image pre-processing methods generally exploit the considerable redundancy in images. Neighbouring pixels corresponding to one object in real images have essentially the same or similar brightness values, so if a distorted pixel can be picked out from the image, it can usually be restored as an average value of neighbouring pixels. General pre-processing methods can be categorised into three groups depending upon the use of a priori information:

- No knowledge about the nature of the degradation is used; only very general properties of the degradation are assumed.
- Knowledge about the properties of the image acquisition device and the conditions under which the image was obtained is employed; the nature of the noise is sometimes known.
- Knowledge about the objects that are searched for in the image is used to guide and simplify the pre-processing.

Elementary noise reduction

Local average:

g(i,j) = (1/|W|) Σ_{(m,n)∈W} f(m,n)

where W is a window centred on (i,j). To prevent the change taking place unless the pixel is considerably different from its neighbours, the following refinement can be made:

g'(i,j) = g(i,j)  if |f(i,j) − g(i,j)| > T;  otherwise g'(i,j) = f(i,j)

where T is a predefined threshold value.

Median filtering:

Instead of calculating the mean inside the window, the median value is used, i.e.

g(i,j) = med{ f(m,n) : (m,n) ∈ W }

It requires at least a partial sort of the pixels in the window.

K-closest averaging:

The pixels in the window are ordered, and the K pixels closest in intensity to the target pixel are averaged. (Question: what is the advantage?)

Convolution and convolution masks

Convolution defines a way of combining two functions. In its most general
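The elementary noise-reduction schemes above are easy to prototype. Below is a minimal sketch in Python/NumPy (the function names and the toy image are my own, not from the notes), showing the threshold-limited local average and median filtering on an image with a single impulse-noise pixel:

```python
import numpy as np

def local_average(f, T):
    """Replace a pixel by its 3x3 neighbourhood mean only when it
    differs from that mean by more than the threshold T."""
    g = f.astype(float).copy()
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            mean = f[i-1:i+2, j-1:j+2].mean()
            if abs(f[i, j] - mean) > T:
                g[i, j] = mean
    return g

def median_filter(f):
    """3x3 median filtering of the interior pixels."""
    g = f.astype(float).copy()
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            g[i, j] = np.median(f[i-1:i+2, j-1:j+2])
    return g

# Flat image with one corrupted ("salt") pixel.
img = np.full((5, 5), 10.0)
img[2, 2] = 200.0
avg = local_average(img, T=25)   # outlier pulled to the window mean; neighbours kept
med = median_filter(img)         # the impulse is removed entirely
```

The median's robustness to the impulse also hints at the advantage asked about for K-closest averaging: averaging only intensities close to the target pixel avoids mixing in values from a different region or from outliers.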
form it is defined as a continuous integral, which in one dimension is:

g(x) = ∫ f(u) h(u − x) du

or in two dimensions:

g(x,y) = ∫∫ f(u,v) h(u − x, v − y) du dv

The function h(·) in the above equations is usually called a filter. If both f(·) and h(·) are discrete, the convolution integral simplifies to a sum:

g(i,j) = Σ_{(m,n)∈O} f(m,n) h(m − i, n − j)

Usually the convolution kernel h(·) only has non-zero values in a small neighbourhood O, and it is also called a convolution mask. Rectangular neighbourhoods O are often used with an odd number of pixels in rows and columns, enabling the specification of the central pixel of the neighbourhood. The local average process mentioned in the previous section can then be expressed as convolution of the original image with the mask

        1 1 1
h = 1/9 1 1 1
        1 1 1

Structure adaptive filtering

Most smoothing methods use a fixed convolution mask for the whole image, with no consideration of local structure changes. This may result in images with a loss of fine structural detail.

Averaging according to inverse gradient

The convolution mask is calculated at each pixel according to the inverse gradient, the idea being that the brightness change within a region is usually smaller than between neighbouring regions. The inverse gradient δ at the point (m,n) with respect to (i,j) is defined as:

δ(i,j,m,n) = 1 / |f(m,n) − f(i,j)|   if f(m,n) ≠ f(i,j);
δ(i,j,m,n) = 2                       otherwise.

The inverse gradient δ is then in the interval (0,2], and δ is smaller on an edge than in the interior of a homogeneous region. Weight coefficients in the convolution mask h are normalised by the sum of the inverse gradients over the neighbourhood N, and the whole term is multiplied by 0.5 to keep brightness values in the original range:

h(i,j,m,n) = 0.5 δ(i,j,m,n) / Σ_{(m,n)∈N} δ(i,j,m,n)

The convolution mask coefficient corresponding to the central pixel is defined as h(i,j) = 0.5. This method preserves sharp edges during the noise removal process.
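As a concrete check of the discrete convolution and the 1/9 averaging mask, here is a minimal sketch (pure NumPy; the helper name and toy image are my own). With the notes' definition, the sum reduces to a windowed product of the image neighbourhood with the mask:

```python
import numpy as np

def convolve2d(f, h):
    """Discrete convolution with an odd-sized mask h, following the
    windowed-sum form of the definition in the notes (interior pixels only)."""
    r = h.shape[0] // 2
    g = np.zeros_like(f, dtype=float)
    for i in range(r, f.shape[0] - r):
        for j in range(r, f.shape[1] - r):
            g[i, j] = (f[i-r:i+r+1, j-r:j+r+1] * h).sum()
    return g

h = np.ones((3, 3)) / 9.0                # the averaging mask from the text
img = np.arange(25, dtype=float).reshape(5, 5)
smoothed = convolve2d(img, h)            # smoothed[2, 2] ≈ 12.0, the centre 3x3 mean
```

Border pixels are left at zero here for brevity; in practice the image is padded or the output is cropped to the valid region.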
Averaging using a rotating mask

This is another method that avoids edge blurring by searching for the homogeneous part of the current pixel neighbourhood, and it gives a sharpened output image. The process starts by calculating the dispersion in the mask for all possible mask rotations about a given pixel (i,j):

σ² = (1/n) Σ_{(k,l)∈R} [ f(k,l) − (1/n) Σ_{(k,l)∈R} f(k,l) ]²

where R is the rotated mask region containing n pixels. The mask with minimum dispersion is then chosen to perform the averaging.

Edge Detection

Edge detection examines whether an edge passes through or near a given pixel. This is done by examining the rate of change of intensity near the pixel - sharp changes (steep gradients) are good evidence of an edge; slow changes suggest the contrary.

Roberts operator

The convolution masks used by the Roberts operator are:

h1 =  1  0      h2 =  0  1
      0 -1           -1  0

The outputs are then combined to give the edge strength and direction.

Prewitt operator

h1 =  1  1  1   h2 =  0  1  1   h3 = -1  0  1
      0  0  0        -1  0  1        -1  0  1
     -1 -1 -1        -1 -1  0        -1  0  1

Sobel operator

h1 =  1  2  1   h2 =  0  1  2   h3 = -1  0  1
      0  0  0        -1  0  1        -2  0  2
     -1 -2 -1        -2 -1  0        -1  0  1

Robinson operator

h1 =  1  1  1   h2 =  1  1  1   h3 = -1  1  1
      1 -2  1        -1 -2  1        -1 -2  1
     -1 -1 -1        -1 -1  1        -1  1  1

Kirsch operator

h1 =  3  3  3   h2 =  3  3  3   h3 = -5  3  3
      3  0  3        -5  0  3        -5  0  3
     -5 -5 -5        -5 -5  3        -5  3  3

Laplace operator

The Laplace operator ∇² is usually used for approximating the second derivative of f(i,j). A 3x3 mask h is often used; for 4-neighbourhoods and 8-neighbourhoods it is defined as

h =  0  1  0    h =  1  1  1
     1 -4  1         1 -8  1
     0  1  0         1  1  1

3.4 Marr-Hildreth Edge Detector

From neurophysiological experiments, Marr concluded in the seventies that object boundaries are the most important cues that link an intensity image with its interpretation. He proposed the use of zero crossings of the second derivative for accurate edge detection (the Marr-Hildreth edge detector).
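To illustrate how a pair of directional masks is combined into an edge strength, here is a sketch using the Sobel h1 mask and its transpose on a step-edge image (the function name and toy image are my own, not from the notes):

```python
import numpy as np

def sobel_strength(f):
    """Combine the two principal Sobel masks into an edge magnitude."""
    h1 = np.array([[ 1.0,  2.0,  1.0],
                   [ 0.0,  0.0,  0.0],
                   [-1.0, -2.0, -1.0]])   # responds to horizontal edges
    h2 = h1.T                             # responds to vertical edges
    g = np.zeros_like(f, dtype=float)
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            win = f[i-1:i+2, j-1:j+2]
            # edge strength = magnitude of the two directional responses
            g[i, j] = np.hypot((win * h1).sum(), (win * h2).sum())
    return g

# Vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
edges = sobel_strength(img)   # strong response at the step, zero in flat regions
```

The direction of the edge can be recovered from the same two responses with arctan2; the compass operators (Robinson, Kirsch) instead take the maximum response over all mask rotations.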
The first derivative of the image function should have an extremum at the position corresponding to the edge in the image, so the second derivative should be zero at the same position; however, it is much easier and more precise to find a zero-crossing position than an extremum. The operator can be represented as:

LoG = ∇²{ G(x,y,σ) * f(x,y) }

where LoG stands for Laplacian of Gaussian, and G is given by

G(x,y,σ) = (1/(2πσ²)) e^{−(x² + y²)/(2σ²)}

The order of performing differentiation and convolution can be interchanged due to the linearity of the operators involved, so

LoG = {∇²G(x,y,σ)} * f(x,y)

(what is the advantage of doing this?). This operator has the following features:

- A larger area surrounding the current pixel is taken into account compared to classical edge operators of small size; the influence of more distant points decreases according to the σ of the Gaussian.
- Varying σ does not affect the location of the zero crossings, but as σ increases less significant edges are suppressed.
- The ∇²G operator can be decomposed into row and column filters, which permits a significant increase in computation speed (A. Huertas and G. Medioni, "Detection of intensity changes with subpixel accuracy using Laplacian-Gaussian masks", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8:651-664, 1986).
- Neurophysiological experiments provide evidence that the human retina performs operations very similar to the ∇²G operation.

The ∇²G operator can be very effectively approximated by convolution with a mask that is the difference of two Gaussian masks with substantially different σ (why?).

3.5 Image Restoration

Pre-processing methods that aim to suppress degradation using knowledge about its nature are called image restoration. Image restoration techniques can be classified into two groups: deterministic and stochastic.
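Since LoG = {∇²G} * f, the mask can be built by sampling the closed form of ∇²G, which follows from differentiating G twice: ∇²G = ((x² + y² − 2σ²)/σ⁴) G(x,y,σ). A small sketch (the function name is my own):

```python
import numpy as np

def log_mask(sigma, size):
    """Sample the analytic Laplacian-of-Gaussian at integer pixel offsets."""
    r = size // 2
    y, x = np.mgrid[-r:r+1, -r:r+1].astype(float)
    s2 = sigma * sigma
    gauss = np.exp(-(x*x + y*y) / (2.0 * s2)) / (2.0 * np.pi * s2)
    return (x*x + y*y - 2.0 * s2) / (s2 * s2) * gauss

k = log_mask(1.0, 9)
# The coefficients sum to (almost) zero, so the mask gives no
# response on regions of constant brightness.
```

Edges are then located at the zero crossings of the filtered image, not at its extrema, which is what makes the detector precise.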
Deterministic methods are applicable to images with little noise and a known degradation function. The original image is obtained from the degraded one by a transformation inverse to the degradation. Stochastic techniques try to find the best restoration according to a particular stochastic criterion, e.g. a least squares method. It is always advantageous to know the degradation function explicitly: the better this knowledge is, the better are the results of the restoration.
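The deterministic case can be illustrated by inverse filtering: with a known degradation function and negligible noise, dividing the degraded image's spectrum by the degradation's spectrum inverts the transformation. A minimal sketch (the toy blur, the eps guard, and all names are my own choices, not from the notes):

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """Restore g by dividing its spectrum by that of the known
    degradation h, guarding against near-zero spectral values."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    H = np.where(np.abs(H) < eps, eps, H)   # avoid blow-up where H ~ 0
    return np.real(np.fft.ifft2(G / H))

# Degrade a toy image with a known 1x3 circular blur, then restore it.
f = np.zeros((8, 8)); f[3:5, 3:5] = 1.0
h = np.zeros((8, 8)); h[0, :3] = 1.0 / 3.0
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
restored = inverse_filter(g, h)   # recovers f up to floating-point error
```

With noise present, the division amplifies exactly those frequencies where H is small, which is why stochastic criteria such as least squares are preferred in practice.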