
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 5, MAY 2010

Image Reconstruction Using Particle Filters and Multiple Hypotheses Testing

Noura Azzabou, Nikos Paragios, Senior Member, IEEE, and Frédéric Guichard

Abstract—In this paper, we introduce a reconstruction framework that explicitly accounts for image geometry when defining the spatial interaction between pixels in the filtering process. To this end, image structure is captured using local co-occurrence statistics and is incorporated into the enhancement algorithm in a sequential fashion using the particle filtering technique. In this context, the reconstruction process is modeled as a dynamical system with multiple states, whose evolution is guided by the prior density describing the image structure. Towards optimal exploration of the image geometry, the state of the system is evaluated at each iteration. The resulting framework optimally explores the spatial dependencies between image content towards variable-bandwidth image reconstruction. Promising results using additive noise models demonstrate the potential of such an explicit modeling of the geometry.

Index Terms—Additive noise, co-occurrence matrices, multiple hypothesis testing, nonparametric densities, particle filtering, speckle noise, statistical models, structure enhancement.

I. INTRODUCTION

THE core components of an image enhancement approach are: (i) the selection of the bandwidth, or scale of interaction between pixels, and (ii) the way interactions between pixels are introduced based on the image content. Prior art consists of neighborhood filtering methods, PDE-based methods, and energy minimization approaches [3], [6], [22], [25], [26]. Neighborhood filters perform a weighted sum of noisy pixels over a given neighborhood (the filtering window). Such approaches include the sigma filter [17], the bilateral filter [24], the NL-mean filter [5], and the fast version of the NL-mean filter [18]. Although these methods include the image structure in the weight definition, the filtering window is always fixed in shape and size. Their main advantage is computational efficiency, but they are constrained by the amount of information present in the considered window. Variable-size filtering windows are introduced in [2], [10], [14], [21], where the authors propose iterative algorithms to select the most appropriate bandwidth for each pixel. Their experimental results show that adapting the bandwidth per pixel improves the quality of restoration compared to a fixed-size window.

A further step in this direction consists in defining more appropriate means of selecting the spatial scale of interaction between pixels. In this paper, we study the geometric structure of the image and define novel mechanisms for pixel selection that explicitly account for the observed local image structure. Such a process involves two steps: (i) a learning stage where the image structure is modeled, and (ii) a reconstruction step that consists of a novel mathematical approach to encode measurements driven by the geometric image model. Our aim is to introduce a strategy that selects, as well as possible, the pixels contributing to the reconstruction process, driven by the observed image geometry. Such geometry is expected to be anisotropic. More explicitly, we take the image structure into account to retrieve pixels similar to a reference one (the one we want to reconstruct) without any exhaustive scan of the image domain. To this end, the reconstruction is done in a sequential fashion using notions from the evolution of dynamical systems. The idea is to consider multiple random walks/trajectories as potential candidates for the filtering window and to evaluate the pertinence of each trajectory. Such a process involves two issues: (i) the selection of the trajectory, and (ii) the evaluation of its appropriateness. Each walk is composed of a number of possible neighboring sites/pixels in the image, determined according to the observed image structure. Towards optimizing the selection of candidate pixels within a walk, as well as the overall performance of the method, image structure at a local scale is considered through a learning stage. It consists in computing a probability density function that describes the spatial relation between similar image patches at a local scale. Random perturbations according to these densities guide the "trajectories" of a discrete number of walkers. The weighted integration of the average values of these walks is the reconstructed image value. Image similarity, spatial smoothness, as well as photometric consistency of the samples are the criteria used to determine the quality of the reconstructed value associated with a given walk. An overview of our method is presented in Fig. 1.

The remainder of the paper is organized as follows. In Section II, we present the learning of image structure at a local scale. Random walks and particle filters are presented in Section III. Section IV is devoted to the application of particle filtering to image denoising in the case of additive Gaussian noise. In Section V, we present experimental results and comparisons with other filtering methods.

Manuscript received August 24, 2008. First published December 01, 2009; current version published April 16, 2010. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Luminita Vese.
N. Azzabou and N. Paragios are with the Ecole Centrale, Grande Voie des Vignes, 92 295 Chatenay-Malabry, France (e-mail: n.azzabou@institut-myologie.org; nikos.paragios@ecp.fr).
F. Guichard is with DxOLabs, 92100 Boulogne Billancourt, France (e-mail: f.guichard@dxo.com).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2009.2037468

II. STATISTICAL DESCRIPTION OF IMAGE STRUCTURE

Fig. 1. Overview of "Random Walks" based image enhancement.

During the past three decades, significant effort was devoted to providing efficient tools for the characterization of natural image content. In the context of computer vision, and in particular image reconstruction, the performance of the designed solutions is highly dependent on the accuracy of such models. Image models can be either global or local. Global models focus on statistics over the entire domain, while local ones aim to capture the local co-dependencies within the observed image structure. The most popular work in this field refers to the generalized Laplacian distribution of the wavelet coefficients [19], as well as the joint statistics of pairs or triplets of pixels in the wavelet domain [13], [16]. These statistical models are generic for all natural images, but they are global and do not provide a clear understanding of how the information is spatially distributed in the image. The Gray Level Co-occurrence Matrix (GLCM) [12] is a similar tool for the statistical description and characterization of texture. For a displacement vector d, it is defined by the N x N matrix C_d (where N is the number of gray levels in the image). The coefficient C_d(i, j) is the number of occurrences of the pair of gray levels i and j after a displacement d. In other words, it is proportional to the joint probability of the intensities of two pixels separated by the displacement vector d:

  C_d(i, j) = |{(x, x + d) ∈ Ω x Ω : I(x) = i, I(x + d) = j}|   (1)

where I is the observed noisy image,¹ Ω the spatial domain, and |.| refers to the number of elements in a set. This matrix is of significant importance since it implicitly provides information on the geometric structure of the image.

¹ I = U + n, where U is the noise-free image and n is the additive noise.

As far as our denoising algorithm is concerned, we aim to find a description of the spatial relationship between similar pixels belonging to the same structure in the image. Incorporating such information helps the algorithm cope with different image components, leading to an adapted filtering window. More explicitly, given the position of an observation, our objective is to know the most likely pixel positions in its vicinity with similar image content. This helps define the transitions on the image lattice towards finding similar pixels within a window of a fixed radius. Such a move is by definition anisotropic and is deduced from the observations. For this purpose, we introduce a probability function with respect to the displacement vector d. For a given intensity i, it is defined as

  p_i(d) = Pr( I(x + d) = i | I(x) = i )   (2)

which corresponds to the probability of observing the intensity i at the position x + d knowing that I(x) = i. Such co-occurrences will likely be sensitive to noise; therefore, the constraint of exact matching can be relaxed, leading to

  p_i(d) = Pr( |I(x + d) − I(x)| ≤ ε | I(x) = i )   (3)

with ε a constant determined according to the observed noise level.

With this formulation, for each gray level in the image, we associate with each spatial transition the probability of leading to similar pixels. It is important to note that this probability measure relies only on gray-level similarity, which is not sufficient to characterize image structure. For instance, if we consider an image that involves texture as well as smooth regions, and assume the same observed intensity in both classes, then a generic model will fail to capture the distinct characteristics of the two classes. Therefore, we rather consider the local mean instead of the intensity at pixel level, to deal with the noise effect, and the local variance, to deal with multiple image components of similar appearance. We use the local variance because it is a simple primitive capable of describing texture at a local scale. Considering both the local mean and the local variance results in a more specific pdf, describing the spatial relation only between patches having close mean intensity and variance.
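The relaxed co-occurrence statistics of (2)-(3) can be sketched as follows. This is our own simplified illustration, not the authors' implementation: the conditioning on the gray level i is dropped, and all names are ours.

```python
import numpy as np

def displacement_probabilities(img, radius, eps):
    """Relaxed co-occurrence statistics (cf. eqs. (2)-(3)): for every
    displacement d in a (2*radius+1)^2 window, estimate the probability
    that I(x+d) stays within eps of I(x). Conditioning on the gray
    level of x is omitted here for brevity."""
    h, w = img.shape
    probs = {}
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # overlapping regions of the image and its d-shifted copy
            a = img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            probs[(dy, dx)] = float(np.mean(np.abs(a - b) <= eps))
    return probs
```

On a column ramp, vertical displacements keep the intensity constant while horizontal ones change it, so the estimated probability singles out the vertical structure, which is exactly the anisotropy the text describes.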
The probability measure that describes the spatial relationship between similar patches, knowing their local mean and variance, is then

  p_{μ,σ}(d) = Pr( |M(x + d) − M(x)| ≤ ε₁ and |S(x + d) − S(x)| ≤ ε₂ | M(x) = μ, S(x) = σ )   (4)

where M (resp. S) corresponds to the function that computes the local mean (resp. local standard deviation) at a given position, and ε₁, ε₂ are constants to be fixed according to the noise level.

We must point out that we compute the probabilities of displacement values included in a window of radius r, which corresponds to the local scale. The selection of this parameter is important and must be adapted to the image content: choosing a high value for r increases the computational complexity, while a small value of the scale parameter results in a less accurate description of the structure.

The outcome of this process is a probability density function p_{μ,σ}(d) that aims to find a spatial representation of the different structures through the computation of the relative positions of similar patches. Such a density is obviously far from parametric, due to the randomness of the observed image geometry. In practice, the estimation of the pdf is therefore based on a nonparametric kernel density approximation [27], such as Parzen windows. Let {d_1, ..., d_N} be the displacement samples observed for a given couple (μ, σ). Then the probability density function estimator is given by the following expression

  p̂_{μ,σ}(d) = (1/N) Σ_{k=1..N} K_H(d − d_k)   (5)

with K_H a symmetric positive definite kernel of bandwidth H. Gaussian kernels are the most common choice for such an approach, and they were considered in our case to approximate p_{μ,σ}. The bandwidth H is a diagonal matrix with coefficients h_x and h_y. For the mean and variance values, we considered a fixed quantization step. Examples of these densities are shown in Fig. 2.

Fig. 2. Two pdf distributions p_{μ,σ}(d) for different values of μ and σ [top (μ = 39, σ = 11.67); bottom (μ = 96, σ = 3.55)], and sample generation according to these pdfs (red pixel) for two different positions.

Once the structural model has been constructed from the image, we are able, starting from a given image position characterized by the couple (μ, σ), to determine the probable transitions towards the pixels that belong to the same structure. This property is illustrated in Fig. 2, where several positions (in green) similar to the origin (in red) are drawn without scanning its entire neighborhood. The displacements with high probability are those that guarantee the preservation of local statistics in terms of appearance. We recall that the learning of the probability density function describing the spatial interactions between similar pixels was restricted to a local neighborhood of radius r; the generalization to larger scales is done in a sequential fashion, as explained in the following section.

Given this density, our aim is to determine the most appropriate set of neighbors for estimating the noise-free intensity of a given pixel without an exhaustive scan of a large image domain. To this end, one needs a strategy for generating candidate neighborhood windows that takes the image content into account. In the present work, this is done through the particle filtering technique, which is the focus of the following section.
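The learning stage above (collecting, per (mean, std) couple, the displacements that lead to patches with close local statistics, cf. eq. (4)) can be sketched as follows. This is a naive illustration under our own assumptions: all names are ours, the quantization step of 10 is an arbitrary default (the paper does not state its value), and the local statistics are computed without any optimization.

```python
import numpy as np

def local_stats(img, k):
    """Local mean and standard deviation over a (2k+1)x(2k+1) window
    (computed naively; integral images would be faster)."""
    h, w = img.shape
    mean = np.zeros((h, w))
    std = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = img[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
            mean[y, x] = patch.mean()
            std[y, x] = patch.std()
    return mean, std

def displacement_samples(img, k, radius, eps_mu, eps_sigma, step=10):
    """Collect, for each quantized (mean, std) couple, the displacements d
    whose endpoint has close local statistics (cf. eq. (4)). These samples
    would then feed the kernel density estimate of eq. (5)."""
    mean, std = local_stats(img, k)
    h, w = img.shape
    samples = {}
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            key = (int(mean[y, x] // step), int(std[y, x] // step))
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if (dy, dx) == (0, 0):
                        continue
                    if (abs(mean[y + dy, x + dx] - mean[y, x]) <= eps_mu and
                            abs(std[y + dy, x + dx] - std[y, x]) <= eps_sigma):
                        samples.setdefault(key, []).append((dy, dx))
    return samples
```

Smoothing the resulting per-couple samples with a Gaussian kernel (eq. (5)) would give the density p̂_{μ,σ}(d) from which transitions are later drawn.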
III. OVERVIEW OF PARTICLE FILTERING TECHNIQUE

As stated before, our purpose is to determine the most appropriate window or neighborhood shape and size for estimating the image intensity at a given position. One attempt in this direction was presented in [23], where the authors perform filtering by selecting the neighboring pixels in a random fashion but without taking image structure into account. There, the image lattice is represented as a graph whose edge weights encode the gray-level difference between pixels, and the transition of a walker from one position to another is based only on these weights, without considering the previous positions or the local image statistics. In our algorithm, we explore the image domain using a sequential procedure that takes into account the image structure as well as the previous observations and positions. To this end, we consider the trajectory of a particle that starts from the pixel we want to recover and moves to neighboring ones. Each trajectory corresponds to a statistical hypothesis and results in a reconstructed value. Two aspects must be addressed in this context: (i) the trajectories themselves, i.e., the transitions between stages, and (ii) the "appropriateness" of each trajectory given the origin and the associated observations. Such appropriateness should depend on the consistency of the observations, the ability to encode local image structure, and the relationship between the origin value and the contributing pixels.

Toward optimizing the trajectory computation, we have to estimate the probability density of visiting a position x_t at time t knowing all the past observations, i.e., the intensities at the previously visited positions. This pdf is noted p(x_t | z_{1:t}), where x_t is the current position of the particle and z_{1:t} are the observations related to the previous and current states of the particle, that is, the different positions and the image properties at these positions.

In order to estimate such a pdf, we use particle filtering techniques, an efficient family of sequential Monte-Carlo methods introduced in [11]. We give a brief overview of particle filtering as a way of sequentially estimating the posterior pdf, focusing on the Sequential Importance Resampling technique; for an extended review, the reader can refer to [1], [7], [8].

A. Preliminaries

Let us first introduce the problem of Bayesian filtering, where one has to estimate the evolution of the hidden state of a system given a set of observations. Such a system involves a function that relates states with observations and a function that relates the present state with the past ones. To introduce this notion, let x_{0:t} denote the state sequence and z_{1:t} the sequence of measurements, which verify

  x_t = f_t(x_{t−1}, v_t),  z_t = h_t(x_t, n_t)

where f_t is the transition function, h_t is the observation function, and v_t and n_t are independent and identically distributed noise sequences. Filtering is then equivalent to determining an estimate of the state vector x_t knowing the set of all available observations z_{1:t}. In the Bayesian framework, one has to compute the posterior pdf p(x_t | z_{1:t}), which can be obtained recursively in two stages: prediction and update.
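The prediction/update recursion can be sketched on a discrete state space. This is a generic toy illustration of Bayesian filtering, not the paper's algorithm; the names are ours.

```python
import numpy as np

def bayes_step(prior, transition, likelihood):
    """One prediction/update cycle of Bayesian filtering on a discrete
    state space. prior: p(x_{t-1} | z_{1:t-1}) as a vector;
    transition[i, j] = p(x_t = i | x_{t-1} = j);
    likelihood[i] = p(z_t | x_t = i)."""
    predicted = transition @ prior      # prediction (Chapman-Kolmogorov)
    posterior = likelihood * predicted  # update (Bayes rule numerator)
    return posterior / posterior.sum()  # normalize by p(z_t | z_{1:t-1})
```

Iterating this step over incoming observations yields the exact posterior on a finite state space; the particle filter described next replaces the explicit distributions with weighted samples when this computation is intractable.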
The main idea is to suppose that the pdf p(x_{t−1} | z_{1:t−1}) is available and to calculate p(x_t | z_{1:t}) via the Bayes rule

  p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t−1}) / p(z_t | z_{1:t−1})   (6)

where the prior pdf is computed via the Chapman-Kolmogorov equation

  p(x_t | z_{1:t−1}) = ∫ p(x_t | x_{t−1}) p(x_{t−1} | z_{1:t−1}) dx_{t−1}   (7)

Here p(z_t | x_t) is defined by the observation model and p(x_t | x_{t−1}) by the transition model. One then has to evaluate expression (7) in order to determine the probability of the state x_t given the observations z_{1:t}. The recursive computation of the prior and posterior pdfs leads, in principle, to the exact posterior density. Nevertheless, in practice it is impossible to compute the posterior pdf p(x_t | z_{1:t}) explicitly, and an approximation method needs to be introduced.

B. Sampling Importance Resampling Filter (SIR)

The key idea is to approximate the posterior pdf by M random state samples x_{0:t}^m with associated weights w_t^m that reflect the importance of each sample in the pdf. The posterior pdf can thus be approximated by a discrete weighted sum

  p(x_{0:t} | z_{1:t}) ≈ Σ_{m=1..M} w_t^m δ(x_{0:t} − x_{0:t}^m),  with  Σ_{m=1..M} w_t^m = 1

where δ is the Kronecker delta function, defined as δ(x) = 1 if x = 0 and δ(x) = 0 otherwise. The generation of the samples is done through the principle of Importance Sampling [4], [7]. This approach suggests an iterative scheme where the particles are generated using a proposal distribution q, and the particle weights are computed iteratively according to

  w_t^m ∝ w_{t−1}^m p(z_t | x_t^m) p(x_t^m | x_{t−1}^m) / q(x_t^m | x_{t−1}^m, z_t)   (8)

Hence, particle weights are updated using two main sources of information: the observation pdf, which reflects the likelihood of an observation knowing the state, and the transition model, which controls the evolution of the particle state. In particular, when the transition model itself is used as the proposal, (8) reduces to w_t^m ∝ w_{t−1}^m p(z_t | x_t^m).

To summarize, the concept of particle filtering consists of three main steps:
• hypothesis generation according to the transition law p(x_t | x_{t−1});
• computation of the likelihood p(z_t | x_t) of the observations generated by this hypothesis;
• weight updating according to (8).

A common problem of this estimation strategy is sample degeneration, where many particles carry little information and the variance of the weights increases. In such a situation, many particles have weights close to zero and do not contribute to the approximation of the posterior. To overcome this problem, a resampling step is necessary. The basic idea is to eliminate the particles that have limited capacity to approximate the density and to focus on the ones that contribute substantially to its reconstruction (important weights). Many algorithms have been proposed to perform particle selection. In the present work, we considered the systematic resampling scheme [15], which eliminates particles with small weights and makes several copies of the other particles in proportion to their weights.

Now, towards the application of such an algorithm in the context of image denoising, we have to define the following elements:
• a state vector at a given time that refers to the state of the reconstruction process; it corresponds to the position of the new pixel added to the filtering window, as well as the currently reconstructed value;
• an observation, which corresponds to the local image statistics (patch, local mean, local standard deviation) relative to a new candidate pixel for the current state vector;
• a measure that evaluates the contribution of each new candidate pixel to the estimation process;
• a systematic way of generating new samples from the existing ones that accounts for the expected image geometry.

IV. APPLICATION TO IMAGE RESTORATION

Given an origin pixel x₀, the process aims to recover a number of "random" positions with similar properties to x₀, in order to reconstruct the corrupted origin value. To this end, we estimate the posterior in a sequential fashion using the previously described algorithm. One can imagine multiple particles moving randomly on the image domain; the path (or random walk) of a particle corresponds to a candidate filtering window. The weighted mean of the intensities along a given particle trajectory is used to restore the intensity of x₀. As far as observations are concerned, they correspond to the local patches around the pixel x₀ and around the candidate. To perform filtering, our approach requires the definition of a perturbation model as well as a likelihood measure that reflects the contribution of a trajectory to the denoising process.

A. Transition Model

An important issue for the proposed algorithm is the definition of an appropriate strategy for sample perturbation. An intuitive solution is to perform Gaussian perturbations. Such a choice implies an isotropic particle transition, which can be justified in homogeneous regions but not in the presence of texture or edges. An efficient way to adapt the particle transitions to the image content is to rely on the statistical model of image structure introduced in Section II. The distribution p_{μ,σ} determines the transition model between the particle positions x_{t−1} and x_t. We recall that p_{μ,σ}(d) is a probability density function that indicates, for a pixel whose local mean and variance are μ and σ, the probability that the position reached through the transition vector d has local statistics close to (μ, σ). This pdf is computed at the beginning of the process for each couple (mean, variance). Using the local statistics (μ₀, σ₀) of the origin pixel x₀, the transition model is

  x_t = x_{t−1} + d,  d ∼ p_{μ₀,σ₀}(d)   (9)

Transitions according to this pdf guarantee that a maximum number of particles explore sites that are similar to the origin pixel x₀. Such a strategy makes it possible to use a reduced number of particles and can also prevent their degeneration.
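The SIR machinery above (propagation by the transition law, reweighting by the observation likelihood, and systematic resampling) can be sketched generically as follows. This is our own minimal sketch, not the paper's code: the learned transition density and the likelihood are abstracted as callables, and after resampling the weights are reset to uniform, as is standard in SIR.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: particles with small weights are dropped,
    heavy ones are duplicated in proportion to their weight. Returns
    the indices of the surviving particles."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n  # one stratified draw
    return np.searchsorted(np.cumsum(weights), positions)

def sir_step(particles, weights, sample_transition, likelihood, rng):
    """One SIR iteration: propagate each particle with the transition
    density (cf. eq. (9)), reweight by the observation likelihood (eq. (8)
    with the transition prior as proposal), normalize, and resample."""
    particles = np.array([sample_transition(p, rng) for p in particles])
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    idx = systematic_resample(weights, rng)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

In the denoising setting, `sample_transition` would draw a displacement from p_{μ₀,σ₀}(d) and `likelihood` would implement the patch-based measure defined in the next subsection.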
The essence of the proposed algorithm is to explore the image domain in a random fashion and to find the maximum number of sites similar to the origin one. Our approach performs this task and explores the image domain in a smart fashion, such that particle trajectories follow the image structure without an exhaustive scan of the pixel neighborhood. In this context, the statistical description of the image geometry is a convenient transition model. Once such transitions have been identified, the next step consists of defining a likelihood measure for each particle based on its trajectory and the intensity observations relative to the particle positions.

B. Likelihood Measure

Within the proposed approach, the random walk, that is, the set of image sites contributing to the reconstruction of a given pixel, is determined using a sequential approach. To evaluate the relevance of a new particle state, a likelihood measure must be introduced. This measure evaluates the significance of a new candidate pixel position. The first criterion considered is the similarity between the image content observed at the origin pixel x₀ and at the current path position x_t. We adopt the comparison of image patches, motivated by the fact that measuring similarities between image patches is an efficient and popular metric used in many computer vision tasks such as texture synthesis and restoration [5], [9]. In this context, we define the observation model as follows.
• The L₂ distance between the local patch around the origin pixel x₀ and the one corresponding to the current particle position x_t. Calling this similarity measure d₂, we have

  d₂²(x₀, x_t) = Σ_{u∈P} G_a(u) (I(x₀ + u) − I(x_t + u))²   (10)

where P is the patch support and G_a is a kernel of bandwidth a, which must be carefully selected to obtain a reliable measure of similarity while remaining computationally efficient.
• An alternative to the L₂ distance is the Sobolev distance between patches, defined as

  d_S²(x₀, x_t) = d₂²(x₀, x_t) + Σ_{u∈P} G_a(u) ‖∇I(x₀ + u) − ∇I(x_t + u)‖²   (11)

Such a distance also accounts for the image gradient, which carries important information on image structures and their orientation. Thus, similar patches have to be similar both in terms of their intensities and of their gradients.

In addition to this measure of the quality of a new position in the particle walk, we consider a measure of its consistency within the whole trajectory: the intra-variability of the intensities at the different positions visited by the particle during its walk. This constraint gives more importance, in the filtering process, to sets of pixels with homogeneous properties. This is mainly interesting in the case of edges or other linear structures in the image, where walks that exactly follow the structures are favored over trajectories that fail to take this structure into consideration during their evolution. This feature can enhance image structures in an efficient manner. Hence, in order to account for the intra-variability of a trajectory, we consider as an additional observation the walk variance with respect to the origin value

  v_t = (1/t) Σ_{k=1..t} (I(x_k) − I(x₀))²   (12)

Note that such an observation provides a measure of the "uniformity" of the trajectory and could also be computed over a larger neighborhood (not at the pixel level). Finally, the importance of a new sample, given the prior state of the walk and the observation (the patch around x_t), is defined as an exponential function of the two metrics

  p(z_t | x_t) ∝ exp(−d²(x₀, x_t) / (2σ₁²)) exp(−v_t / (2σ₂²))   (13)

where σ₁ and σ₂ are constants that determine the bandwidth of the weighting function. The choice of these values depends on the noise level of the image: high values of σ₁ and σ₂ relax the constraints on trajectory evolution and would therefore result in a loss of image details.

C. Intensity Reconstruction

Having defined the transition model as well as the observation model, we focus on the intensity reconstruction process. To this end, for each pixel x₀ of the image, we generate M particles by applying perturbations starting from the initial position x₀ according to the transition law (9). Iterating the transition process T times for each particle m yields a walk, that is, the set (x₁^m, ..., x_T^m) of successive positions of the particle. We then define the walk value as the mean value along the walk; it corresponds to the value used to reconstruct the origin pixel according to the walk of particle m

  Î^m = (1/T) Σ_{t=1..T} I(x_t^m)   (14)

As discussed above, the pertinence of a walk, as well as of the associated reconstructed value, is computed using the likelihood measure defined in expression (13).
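The likelihood and reconstruction steps can be sketched as follows. This is our own simplified illustration under stated assumptions: the names are ours, the kernel G_a of (10) is replaced by a plain mean over the patch, and for brevity the patch distance is evaluated only at the walk's last position rather than accumulated along the walk.

```python
import numpy as np

def patch(img, y, x, k):
    """(2k+1)x(2k+1) patch around (y, x); callers must stay k away from borders."""
    return img[y - k:y + k + 1, x - k:x + k + 1]

def walk_weight(img, origin, walk, k, s1, s2):
    """Importance of a walk (cf. eqs. (10), (12), (13)): a Gaussian of the
    patch distance between the origin and the last visited position, times
    a Gaussian of the intra-walk variability around the origin value."""
    oy, ox = origin
    y, x = walk[-1]
    d2 = np.mean((patch(img, oy, ox, k) - patch(img, y, x, k)) ** 2)  # eq. (10)
    v = np.mean([(img[p] - img[origin]) ** 2 for p in walk])          # eq. (12)
    return np.exp(-d2 / (2 * s1 ** 2)) * np.exp(-v / (2 * s2 ** 2))   # eq. (13)

def reconstruct(img, origin, walks, k, s1, s2):
    """Walk values (eq. (14)) combined with normalized weights (eq. (15))."""
    values = np.array([np.mean([img[p] for p in w]) for w in walks])
    weights = np.array([walk_weight(img, origin, w, k, s1, s2) for w in walks])
    return float(np.sum(weights * values) / np.sum(weights))
```

On a flat image every walk gets the same weight and the weighted average reduces to the constant value, which is the expected degenerate behavior.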
Once the weights of all the particles are determined, we normalize them so that they sum to one. Next, we perform resampling in order to replace the particles with insignificant weights; such particles are not consistent with the reference pixel, and using them would introduce a bias in the reconstruction process. Finally, the reconstructed value is a weighted average of the mean intensities of the walks, defined as

  Û(x₀) = Σ_{m=1..M} w^m Î^m   (15)

where the w^m are the normalized walk weights. The whole process (transition, weight computation, and resampling) is repeated T times. An example of particle walks is shown in Fig. 3, where the pixel in green is the origin pixel we want to denoise and each trajectory is a candidate filtering window. In practice we use M = 60 particles, with T = 6 pixels contributing to each walk. An overview of the whole random-walk filtering process is presented in Fig. 1, and the denoising algorithm is given in Table I. We should point out that the process is performed individually per pixel; the reconstruction is therefore always done using the observed image, and there is no interaction between reconstructed values.

Fig. 3. Example of three particle walks (red pixels) starting from the same origin position (green pixel); the walks follow the image structure.

TABLE I
DESCRIPTION OF THE PROPOSED DENOISING ALGORITHM

It is important to note that the model described above, and mainly the likelihood measure, assumes an additive zero-mean noise model; the framework, however, can be used with different noise models. Extending the method to cope with other noise models would involve both the statistical modeling of the co-dependencies between pixels and the likelihood used to determine the importance of a new sample.

D. Comparison With NL-Mean

In recent years, the NL-mean algorithm [5] has become the state of the art in image denoising. In this section we discuss the theoretical differences between the two algorithms. The NL-mean algorithm assumes a fixed-size local filtering window (a window centered at the origin pixel) and performs filtering based on the similarity between the center patch and the candidate patches. This is not the case in the proposed approach, where the filtering window is adapted to the pixel content. More explicitly, the set of candidate pixels in our approach is not fixed and changes per pixel location according to the local pixel properties. In other words, the filtering window is reconstructed on line, to ensure the selection of a greater number of similar local patches for the reconstruction with minimum computational effort. In our filtering process, we consider a local neighborhood and then enlarge it iteratively, following the structure present in the image. This is done through the transition step of the particle filter, where at each position we select the most appropriate transition to find similar patches. The candidate pixels are selected according to the probability of co-occurrence of similar pixels at a given scale and orientation. Towards recovering these probabilities, we perform a nonparametric density approximation per couple (intensity/variance): for each such couple, we define the probability of stepping onto a similar pixel given a specific transition.

This aspect of our algorithm is the fundamental difference with respect to the NL-mean approach, as well as to its fast version proposed in [18]. Indeed, the technique proposed in [18] assumes a fixed-size window, where local image characteristics such as variance and gradient orientation are only used to preselect pixels in order to avoid unnecessary computation of weight values; the number of pixels considered in the denoising is simply defined by the initial size of the filtering window. In our case, the local variance is used to describe the image structure and the spatial relationship between similar pixels, and this information is used to reconstruct the filtering window iteratively. To illustrate this concept, we considered a textured image and one pixel in it (marked with a red cross), and we computed the set of candidates using the NL-mean with a fixed 19 x 19 window centered on the pixel, and using our algorithm at similar complexity (60 particles and six iterations). The results are presented in Fig. 4. One iteration of our algorithm using 60 particles detects a number of pixels similar to the central one that is equivalent to the NL-mean [Fig. 4(a) and (b)]. After six iterations, the particle filter is able to recover more candidate pixels at a larger scale. This is the main strength of our algorithm: the candidate pixels are sampled from a distribution that takes the local pixel statistics into account. Regarding the NL-mean, the number of candidates is related to the search window size, and the image content is not taken into account in the selection of the size and shape of the filtering window. This is also the case for the fast denoising algorithm based on the NL-mean [18].

Fig. 4. Example of a textured image where the pixel in red is the central pixel to be denoised. (a) The set of candidate pixels provided by the NL-mean. (b) The set of candidate pixels provided by one transition of the particle filter. (d) The set of candidate pixels provided by six transitions of the particle filter.

We have to emphasize that, if we consider the very simple case where the co-occurrence densities are set to uniform with respect to the displacement vector, then we can assume, to a certain extent, that our approach is equivalent to the NL-mean in terms of bandwidth and filtering window. However, in the presence of structured and textured components, our technique adapts the search window to these structures. As far as weight computation is concerned, as in the NL-mean case we can also encode any imaginable patch-based similarity (intensity-driven or probability-driven) in the process, through a slight modification of the cost related to the appropriateness of the considered walk. Furthermore, as opposed to the simple patch-based similarity weights considered in the NL-mean, we adopt a metric that combines this measure with the consistency of the walk, defined according to the homogeneity of the samples visited during the walk. Such a choice introduces a more robust behavior with respect to isolated and unstructured data, for which the notion of geometric consistency across the candidate filtering pixels is not considered. Last but not least, the method is modular, gradient-free, and can be used to deal with various noise models. Finally, to compare the performance of the two algorithms, we present experimental results in the next section.

V. EXPERIMENTAL RESULTS

In order to evaluate the performance of the proposed framework, the case of additive Gaussian noise was considered and compared with some of the state-of-the-art methods with assumptions and aims similar to those of our approach. The evaluation considers the capacity of the algorithm to suppress noise while preserving image details. In addition, we considered a more objective indicator, the Peak Signal-to-Noise Ratio (PSNR), defined as

  PSNR = 10 log₁₀( 255² / MSE ),  MSE = (1/|Ω|) Σ_{x∈Ω} (U(x) − Û(x))²

In the first part, we present a comparison with other state-of-the-art techniques; next, we focus on the performance difference between the NL-mean algorithm and the proposed approach.

A. Comparison With State of the Art

For performance comparison, we considered competitive approaches based on filters that implicitly encode the image structure through the definition of the weights, such as the bilateral filter [24], the NL-mean [5], and a neighborhood filtering technique using variable bandwidth [14]. Furthermore, we considered PDE-based approaches, namely total variation minimization [22] and anisotropic filtering [20] using an edge-stopping function of the image gradient. Our simulation consists in adding synthetic white Gaussian noise of known standard deviation to a set of images frequently adopted by the image restoration community for validation. The parameters of each method were tuned manually to achieve optimal performance in terms of PSNR and visual quality. Regarding our approach, each walk consists of six steps (T = 6) and for each pixel we considered M = 60 particles. Given this choice, we are able to explore 6 x 60 positions in the image for each pixel. The radius of the explored region is of the order of T times the maximum distance that a particle can travel in one transition. The window size used for likelihood computation is 7 x 7, and the parameters σ₁ and σ₂ of the Gaussian used for likelihood computation (13) are set according to the noise level. The same set of parameters was considered when the Sobolev distance between patches was used; in that case, the image gradient is computed on a version of the image smoothed by a Gaussian kernel. For the bilateral filter, the spatial and photometric bandwidth parameters were fixed in the same manner. The NL-mean filter was used with its bandwidth tuned similarly, a patch size of 7 x 7 for comparison, and a weighted average computed over a local neighborhood of size 11 x 11. As far as the variable bandwidth method is concerned, the results were provided by the authors, and more details can be found in [14]. In Table II, we provide the PSNR values for each method. From this table, we can conclude that, in general, computing similarity on image patches performs better than pixel-wise comparison. The bilateral filter, the total variation minimization technique (TV), and the anisotropic diffusion (AD) have smaller PSNR values because they are not based on patch comparison, which provides a better measure of similarity than pixel-wise comparison or gradient information. The approach we presented and the NL-mean algorithm have about the same performance, while the method presented in [14] out-
We used natural performs all the others which is natural since the underlying idea images corrupted by a additive Gaussian noise with different of this approach is to minimize a bound of the quadratic error standard deviations. Performance evaluation involves a qualita- mean. Moreover, our algorithm selects the patches for compar- tive criterion (visual assessment) that reﬂect the ability of the ison randomly while being guided at the same time by the image 1188 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 5, MAY 2010 TABLE II PSNR VALUES FOR THE DENOISED IMAGES (THE PSNR OF THE IMAGE CORRUPTED BY ADDITIVE GAUSSIAN NOISE OF STD = 20 IS EQUAL TO 22.15) Fig. 5. (a) Noise free image. (b)Image corrupted with Gaussian noise = 20. Fig. 6. (a) Restoration result obtained with NL-mean algorithm. (b) NL-mean (c) Bilateral ﬁlter restoration. (d) Residual of the bilateral ﬁlter. residual. (c) Restoration result obtained with random walks algorithm (L dis- tance). (d) Residual of the random walk based approach. structure. The number of pixels contributing to the walks has a tremendous impact on the performance of the method. When only a few samples have an important weight in the ﬁltering process, it produces under smoothing in homogeneous areas and the texture is not well reconstructed. On the other hand, the vari- able bandwidth approach [14] involves all the pixels within a neighborhood in the ﬁltering process and uses more candidates than our method. We recall that for our method, in homogeneous zones, particles evolve in an isotropic fashion which is equiva- lent with considering a window of a ﬁxed bandwidth. In tex- tured regions, hypothesis generation relies on image structure, and, therefore, capturing local image characteristics is favored. In terms of photometric distance comparison, one can observe that the Sobolev norm has a slightly better performance than the distance because it involves other features than the image intensity. 
The qualitative results presented in Figs. 5–7 show that the restoration quality of our algorithm when compared to the one Fig. 7. (a) Restoration result obtained with random walks algorithm (Sobolev using an adaptive window size as well as the NL-mean algo- distance). (b) Residual of the random walk based approach. (c) Restoration re- rithm is satisfactory. In Fig. 8, restoration results emphasizing sult obtained with KB06 [14]. (d) KB06 [14] residual. an area with a very ﬁne texture scale are presented. This ﬁgure [as well as Fig. 7(c)] shows the presence of some artifacts in the denoised image using [14] due to over smoothing. Such a result of important noise level the efﬁciency of the learning step of the is due to the large window size selected in this region that re- image structures is compromised. In such a situation the transi- duced excessively the variance of the estimator. This is not the tion model is not too robust with respect to noise. Considering case for our algorithm, as it yields results with a more natural such a model for sampling new particle positions could result aspect. In terms of qualitative results, our approach does equally in small numbers of relevant pixels which may not be sufﬁcient well if compared to the NL-mean algorithm. Note that in case for the restoration of some structures. AZZABOU et al.: IMAGE RECONSTRUCTION USING PARTICLE FILTERS AND MULTIPLE HYPOTHESES TESTING 1189 TABLE III PSNR VALUES FOR THE DENOISED IMAGES (THE PSNR OF THE IMAGE CORRUPTED BY ADDITIVE GAUSSIAN NOISE OF std = 20 IS EQUAL TO 22.15) USING THE NL-MEAN AND OUR TECHNIQUE WITH DIFFERENT PARAMETERS VALUES ( ) N 0T small. We can notice also that for equivalent complexity, our method outperforms globally the NL-mean algorithm. However, in the case of images that are composed of irregular texture and a dominant local smooth component (like lena and house), an ex- haustive search of local neighborhood (as done in the NL-mean) outperforms our approach. 
In our case the sampling is not ex- haustive because we use a small number of particles. Besides, for stochastic texture our model reaches its limit because spa- tial relationship between similar patches cannot be extended to larger scale. For the case of images that present a signiﬁcant amount of texture (barbara, ﬁngerprint, baboon), our method performs better since we consider larger scale then the NL-mean for texture restoration. Fig. 8. Zoom on the Baboon image. (a) Original image. (b) Random walk restoration. (c) NL-Mean restoration. (d) Adaptive window size restoration [14]. VI. CONCLUSION In this paper, we introduced a new denoising technique that explicitly accounts for geometric relationships of image con- B. Comparison With the NL-Mean tent. This was achieved using a particle ﬁltering technique with In order to better understand the potential of the method, we a perturbation model determined using the observed statistical have considered for a ﬁxed number of candidate pixels different behavior of the image. The structural model was learned in a geometric exploitation strategies. This is achieved by varying ﬁrst step in order to guide the particle evolution toward image the number of particles and the number of transitions to structure such as edges. Based on such a model, the approach build the trajectories. In terms of computation complexity this acts like an isotropic ﬁlter when considering smooth areas, and is equivalent to considering a window containing pixels anisotropic when dealing with edges. Besides, when we con- for the NL-mean algorithm. 
The use of a small number of par- sider the case of regular texture with a repetitive pattern, the ticles with a large number of iterations (or number of pixels per presented technique is able to recover similar patches in a large walk) would have the ability to perform a sparse and elongated neighborhood using a small number of particles while visiting a sampling and to seek similar structure far from the origin pixel. limited number of positions using a learning step at local scale On the other hand, while considering small number of pertur- (Section II). However, in the case of random texture, gener- bation, we perform local dense sampling to seek similar con- alizing at larger scale the spatial relationship between similar tent within limited-bandwidth anisotropic window. In this set pixels learned at local scale is not straightforwrd. In such a case, of experiments we compared the performance of the NL-mean the evolution of particles is not optimal and a higher number using , a patch size equal to 7 7 for weight com- of particles is needed. However, both the theoretical framework putation while the weighted average was computed over a local and the experimental validation suggest that the use of the image neighborhood of size 11 11. For our algorithm the patch size structure to determine the perturbation model and the considera- was set to 7 7 and . The tested set of tion of multiple hypotheses testing can yield interesting results. couple are selected in order to have the same com- We have also shown that our method is ﬂexible with respect to plexity with NL-mean. In Table III, we provide SNR value of re- the choice of the similarity measure between image patches. stored images. It is clear that the choice made between sparsity Improving the learning stage of the image structural model versus dense sampling affects signiﬁcantly the results. 
For in- and guiding the particles to the most appropriate directions stance, considering a small number of particles and an important could be a step toward increasing the efﬁciency of particle number of iterations will result in a large scale research window. transitions. On the other hand, an automatic technique to In that case ( and ), the SNR values show that capture the scale of texture should be considered. It enables non local sampling is not always a good choice which is a nat- to determine the patch size considered when computing the ural outcome since similar patches are more likely to be found at likelihood of a new particle state on one hand and the size local scale in natural images. This statement is also valid for the of the maximum radius of transitions on the other hand. The NL-mean algorithm since using a large size neighborhood is ef- likelihood measure could be also modiﬁed to be more speciﬁc ﬁcient only with an adequate choice of which must be very to each noise distribution and more robust. Last, but not least, 1190 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 5, MAY 2010 the ability to perform the process in parallel for all pixels [26] L. Vese and S. Osher, “Modeling textures with total variation minimiza- and beneﬁt from the reconstructed values might improve the tion and oscillating patterns in image processing,” J. Sci. Comput., vol. 19, pp. 553–572, 2003. performance of the method. [27] M. Wand and M. Jones, Kernel Smoothing. New York: Chapman & Hall, 1995. REFERENCES [1] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on Noura Azzabou received the engineering diploma particle ﬁlters for on-line non-linear/non-gaussian Bayesian tracking,” from the Ecole Polytechnique of Tunisia in 2001, the IEEE Trans. Signal Process., vol. 50, pp. 174–188, 2002. M.S. degree in applied mathematics and computer vi- [2] N. Azzabou, N. Paragios, and F. 
Guichard, “Random walks, con- sion from the Ecole Normale Supérieure de Cachan, strained multiple hypothesis testing and image enhancement,” in Proc. France, in 2004, and the Ph.D. degree from the Ecole Eur. Conf. Computer Vision, 2006, pp. 379–390. Nationale de Ponts et Chaussees, France, in 2008. [3] N. Azzabou, N. Paragios, F. Guichard, and F. Cao, “Variable bandwidth Her Ph.D. work was carried in the computer vision image denoising using image-based noise models,” in Proc. IEEE Int. group in the Applied Mathematic Department, Ecole Conf. Computer Vision and Pattern Recognition, 2007, pp. 1–7. Centrale de Paris and DxOLabs. [4] N. Bergman, “Recursive Bayesian Estimation: Navigation and Her research activities were focused on variable Tracking Applications,” Ph.D. dissertation, Linkoping Univ., bandwidth image models for texture-preserving en- Linkoping, Sweden, 1999. hancement of natural images. She is currently a research scientist in the RMN [5] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image Laboratory in the Myology Institute (France), working on medical image pro- denoising,” in Proc. IEEE Int. Conf. Computer Vision and Pattern cessing and enhancement. Her research interests include registration, segmen- Recognition, 2005, pp. 60–65. tation, inhomogeneity correction, and denoising. She is the author of several [6] V. Caselles, J. M. Morel, G. Sapiro, and A. Tannenbaum, “Introduc- paper published in intrenatioal conference and journals. tion to the special issue on partial-differential equations and geom- etry-driven diffusion in image-processing and analysis,” IEEE Trans. Image Process., vol. 7, no. 3, pp. 269–273, Mar. 1998. [7] A. Doucet, On Sequential Simulation-Based Methods for Bayesian Fil- tering, Dept. Eng., Cambridge Univ., Cambridge, U.K., 1998, Tech. Nikos Paragios (SM’03) received the B.Sc. and Rep. CUED/F-INFENG/TR. 310. M.Sc. degrees from the University of Crete, Greece, [8] A. Doucet, J. de Freitas, and N. 
Gordon, Sequential Monte Carlo in 1994 and 1996, respectively, the Ph.D. degree Methods in Practice. New York: Springer-Verlag, 2001. from I.N.R.I.A./University of Nice/Sophia Antipolis, [9] A. Efros and T. Leung, “Texture synthesis by non-parametric sam- France, in 2000, and the HDR (Habilitation a Diriger pling,” in Proc. Int. Conf. Computer Vision, 1999, pp. 1033–1038. de Recherches) from the University of Nice/Sophia [10] K. Egiazarian, V. Katkovnik, and J. Astola, “Adaptive window size Antipolis in 2005. image denoising based on ICI rule,” in Proc. IEEE Int. Conf. Acoustic, He is currently a Professor at the Ecole Centrale Speech and Signal Processing, 2001, pp. 1869–1872. de Paris—one of most exclusive engineering schools [11] N. Gordon, D. Salmond, and A. Smith, “Novel approach to non- “Grange Ecoles”—leading the Medical Imaging and linear/non-gaussian bayesian state estimation,” in Proc. IEEE Radar Computer Vision Group at the Applied Mathematics and Signal Processing, 1993, vol. 140, pp. 107–113. Department. He is also afﬁliated with INRIA Saclay Ile-de-France, the French [12] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for Research Institute in Informatics and Control heading the GALEN group. Prior image classiﬁcation,” IEEE Trans. Syst., Man, Cybern., vol. SMC-6, to that, he was Professor/Research Scientist (2004–2005) at the Ecole Nationale pp. 610–621, 1973. de Ponts et Chaussees-France afﬁliated with Siemens Corporate Research [13] J. Huang and D. Mumford, “Statistics of natural images and models,” in (Princeton, NJ) as a project manager (smart cameras and machine vision Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, 1999, group) (2002–2004), senior research scientist (2004), and research scientist pp. 541–547. (1999–2003). In 2002, he was an adjunct Professor at Rutgers University and [14] C. Kervrann and J. 
Boulanger, “Unsupervised patch-based image reg- in 2004 at New York University and a visiting Professor at Yale University, ularization and representation,” in Proc. Eur. Conf. Computer Vision, New Haven, CT, in 2007. He has co-edited four books, published more than 2006, pp. 555–567. 100 papers in the most prestigious journals and peer reviewed conferences of [15] G. Kitagawa, “Monte carlo ﬁlter and smoother for non-gaussian non- computer vision and medical imaging, and holds nine U.S. issued patents and linear state space models,” J. Comput. Graph. Statist., vol. 5, pp. 1–25, more than 20 pending. His research interests include computer vision, medical 1996. image analysis, remote sensing, and human computer interaction. [16] A. Lee, K. Pedersen, and D. Mumford, “The nonlinear statistics of Prof. Paragios serves as a member of the Editorial Board of the International high-contrast patches in natural images,” Int. J. Comput. Vis., pp. Journal of Computer Vision (IJCV), the Computer Vision and Image Under- 83–103, 2003. standing Journal (CVIU), and Medical Image Analysis (MedIA). In 2008, he [17] S. Lee, “Digital image smoothing and the sigma ﬁlter,” CVGIP, vol. was the laureate of one of Greece’s highest honor for young academics and sci- 24, no. 2, pp. 255–269, Nov. 1983. entists, the Bodossaki Foundation Prize in the ﬁeld of applied sciences, while in [18] M. Mahmoudi and G. Sapiro, “Fast image and video denoising via non- 2006, he received the TR35 prize from the MIT Technology Review journal. local means of similar neighborhoods,” IEEE Signal Process. Lett., vol. 12, pp. 839–842, 2005. [19] S. Mallat, “A theory for multiresolution signal decomposition: The waveletrepresentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. Frédéric Guichard graduated in mathematics from 11, pp. 674–693, 1989. the Ecole Normale Supérieure, Paris, France, and is [20] P. Perona and J. 
Malik, “Scale space and edge detection using also a member of the “Corps des Ponts et Chaussées”. anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell., vol. He received the Ph.D. degree in applied mathematics 12, pp. 629–639, 1990. and image processing from the University Paris IX in [21] J. Polzehl and V. Spokoiny, “Adaptive weights smoothing with applica- 1994. tions to image restoration,” J. Roy. Statist. Soc. B, vol. 62, pp. 335–354, He is currently Chief Scientist and CTO at DxO 2000. Labs, which he cofounded in 2003. He is responsible [22] L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based for the image processing research and development noise removal,” Phys. D, vol. 60, pp. 259–268, 1992. for all products of the company. Prior to that, he oc- [23] B. Smolka and K. Wojciechowski, “Random walk approach to image cupied several academic and industrial research po- enhancement,” Signal Process., vol. 81, pp. 465–482, 2001. sitions. His research interests include digital photography, image processing, [24] C. Tomasi and R. Manduchi, “Bilateral ﬁltering for gray and color im- computer vision, and applied math. He has published over 100 papers in inter- ages,” in Proc. Int. Conf. Computer Vision, 1998, pp. 839–846. national journals, conferences, or patents. [25] D. Tschumperlé, “PDE’s Based Regularization of Multivalued Images Dr. Guichard has received several awards, such as the “Prix Sciences et and Applications,” Ph.D. dissertation, Univ. Nice Sophia Antipolis, Défense” in 1996 and a “grand prix” from the French Academy of Sciences in France, 2002. 2006.
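To make the per-pixel loop of Section IV concrete (transition, patch-likelihood weighting, normalization, resampling, repeated T times, then a weighted average of each walk's mean intensity as in (15)), the following NumPy sketch illustrates the structure. It is not the paper's implementation: the transition here is a plain bounded isotropic random jump rather than a sample from the learned co-occurrence statistics, and all names and parameter defaults (`h`, `r_max`, the 3 x 3 patch) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_pixel(img, y, x, n_particles=60, n_steps=6, patch=3, h=10.0, r_max=3):
    """Sketch of one pixel's random-walk filtering: transition -> weight ->
    normalize -> resample, repeated n_steps times. Assumes (y, x) is an
    interior pixel of the 2-D grayscale array `img`."""
    H, W = img.shape
    pad = patch // 2
    ref = img[y - pad:y + pad + 1, x - pad:x + pad + 1]  # reference patch
    pos = np.tile([y, x], (n_particles, 1))              # every walk starts at (y, x)
    sums = np.zeros(n_particles)                         # running intensity sum per walk
    counts = np.zeros(n_particles)
    w = np.full(n_particles, 1.0 / n_particles)          # particle weights
    for _ in range(n_steps):
        # transition: bounded random displacement, clipped so patches stay valid
        pos = pos + rng.integers(-r_max, r_max + 1, size=pos.shape)
        pos[:, 0] = np.clip(pos[:, 0], pad, H - pad - 1)
        pos[:, 1] = np.clip(pos[:, 1], pad, W - pad - 1)
        # weight: Gaussian of the patch distance to the reference (cf. (13))
        for i, (py, px) in enumerate(pos):
            cand = img[py - pad:py + pad + 1, px - pad:px + pad + 1]
            d2 = np.mean((cand - ref) ** 2)
            w[i] *= np.exp(-d2 / (2.0 * h * h))
            sums[i] += img[py, px]
            counts[i] += 1
        if w.sum() == 0:                                 # degenerate case: keep all walks
            w[:] = 1.0
        w /= w.sum()                                     # normalize: weights sum to one
        # resample: walks with insignificant weight are replaced by likelier ones
        idx = rng.choice(n_particles, size=n_particles, p=w)
        pos, sums, counts = pos[idx], sums[idx], counts[idx]
        w = np.full(n_particles, 1.0 / n_particles)
    # weighted average of the mean intensity of each walk
    return float(np.sum(w * (sums / counts)))
```

On a perfectly homogeneous region every candidate patch matches the reference, so the walks spread isotropically and the estimate reduces to a plain average of the visited intensities, matching the paper's remark that the filter behaves isotropically in smooth areas.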
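The PSNR figures quoted in Tables II and III follow the standard definition; the original displayed formula did not survive extraction, so here is a minimal implementation assuming the usual 8-bit peak value of 255:

```python
import numpy as np

def psnr(clean, noisy, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((clean.astype(float) - noisy.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

As a sanity check, an error of 20 gray levels at every pixel (MSE = 400) gives roughly 22.1 dB, consistent with the 22.15 dB the tables report for images corrupted by Gaussian noise of std = 20.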
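Section V contrasts the L2 patch distance with a Sobolev distance that also involves image gradients. The exact form used in the paper is not recoverable from this text, so the sketch below assumes a common H1-style combination of an intensity term and a gradient term; the weighting `alpha` and the use of `np.gradient` (without the paper's Gaussian pre-smoothing) are our choices.

```python
import numpy as np

def patch_dist_l2(p, q):
    """Mean squared intensity difference between two equal-size patches."""
    return np.mean((p - q) ** 2)

def patch_dist_sobolev(p, q, alpha=1.0):
    """Assumed H1-style distance: L2 term plus a gradient-difference term."""
    gy_p, gx_p = np.gradient(p)
    gy_q, gx_q = np.gradient(q)
    grad_term = np.mean((gy_p - gy_q) ** 2) + np.mean((gx_p - gx_q) ** 2)
    return patch_dist_l2(p, q) + alpha * grad_term
```

Under this form, two patches that differ only by a constant intensity offset have identical gradients, so the Sobolev distance adds nothing over L2 there; it penalizes differing edge structure, which is in line with the observation that the Sobolev norm "involves other features than the image intensity".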
