# Study on the variational models and dictionary learning

Abstract: This dissertation is dedicated to the use of dictionaries in image analysis and image restoration. We are interested in various mathematical and practical aspects of this kind of method: modeling, analysis of the solutions of such models, numerical analysis, dictionary learning and experimentation. After Chapter \ref{ch:intro}, which reviews the most significant works in this field, we present in Chapter \ref{ch:genral} the implementation and the results obtained with the model consisting in solving $$\label{tv-infen} \left\{\begin{array}{l} \min_{w} TV(w), \\ \mbox{subject to } |\PS{w-v}{\psi}|\leq \tau, \forall \psi \in \DD \end{array}\right.$$ for an initial image $v\in\RRN$, a parameter $\tau>0$, the total variation $TV(\cdot)$ and a {\em translation invariant} dictionary $\DD$. The dictionary is built as the set of all translations of a collection $\FF$ of elements of $\RRN$ (features, or patches). The implementation of this model with this kind of dictionary is new. (Before this dissertation, authors only considered dictionaries of wavelet bases/packets or curvelets.) The flexibility of this construction of the dictionary leads to several experiments, which we report in Chapters \ref{ch:genral} and \ref{ch:knowfeathers}. The experiments of Chapter \ref{ch:genral} confirm that, to obtain good denoising results with the above model, the dictionary must represent the curvature of textures well. Hence, when one uses a Gabor dictionary, it is better to use Gabor filters whose support is isotropic (or almost isotropic). Indeed, to represent the curvature of a texture with a given frequency and living on a support $\Omega$, the spatial support of the Gabor filters must allow a "paving" of $\Omega$ with few elements. Insofar as, for a general class of images, the support $\Omega$ is independent of the frequency of the texture, it is most reasonable to choose Gabor filters whose support is isotropic.
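As an aside, checking feasibility for \eqref{tv-infen} with a translation invariant dictionary does not require enumerating translates: the inner products of the residual with all translates of a feature are exactly the entries of their cross-correlation, so one correlation per feature suffices. The following 1D sketch (hypothetical helper name; the thesis works with 2D images) makes this concrete:

```python
import numpy as np

def constraint_satisfied(w, v, features, tau):
    """Check |<w - v, psi>| <= tau for every translate psi of every
    feature in the translation-invariant dictionary.  The inner products
    of the residual with all integer translates of a feature are exactly
    the entries of their cross-correlation, so one 'full' correlation
    per feature covers the whole dictionary.  1D illustration."""
    r = w - v
    for f in features:
        # corr[k] = <r, translate_k(f)> for every admissible shift k
        corr = np.correlate(r, f, mode="full")
        if np.max(np.abs(corr)) > tau:
            return False
    return True
```

Here the cost is one correlation per feature instead of one inner product per dictionary element.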
This isotropy argument is a strong point in favor of wavelet packets, which in addition provide several sizes of spatial support (for a given frequency) and for which \eqref{tv-infen} can be solved quickly.

In Chapter \ref{ch:knowfeathers}, we present experiments in which the dictionary contains the curvatures of known shapes (letters). The data-fidelity term of the model \eqref{tv-infen} allows any structure to appear in the residual $w^*-v$, except the shapes used to build the dictionary. Thus, we can expect these shapes to remain in the result $w^*$ while the other structures disappear. Our experiments are carried out on a source separation problem and confirm this intuition. The starting image contains (known) letters on a highly structured background (an image). We show that it is possible, with \eqref{tv-infen}, to obtain a reasonable separation of these structures. Finally, this work clearly illustrates that the dictionary $\DD$ must contain the {\em curvature} of the elements we seek to preserve, and not the elements themselves, as one might naively think.

Chapter \ref{ch:k-svd} presents a work in which we try to integrate the K-SVD method with the model \eqref{tv-infen}. Our starting idea is to use the fact that a few iterations of the algorithm we use to solve \eqref{tv-infen} can restore structures that were lost from the image used to initialize the algorithm (and whose curvature is present in the dictionary). We thus apply a few of these iterations to the result of K-SVD and recover the lost textures well. This yields a gain both visually and in PSNR.

In Chapter \ref{primaldualbasispursuit}, we present a numerical scheme to solve a variant of Basis Pursuit. It consists in applying a proximal point algorithm to this model. The interest is to transform a non-differentiable convex problem into a (quickly converging) sequence of very regular convex problems.
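To fix ideas, here is a minimal one-dimensional sketch of the proximal point principle on a toy stand-in (not the scheme of the chapter): the non-differentiable objective $|x| + \frac{1}{2}(x-b)^2$ is minimized through a sequence of strongly convex, well-conditioned subproblems, each solved in closed form by soft-thresholding.

```python
def soft(x, t):
    """Soft-thresholding: the proximal operator of t*|.| in 1D."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def proximal_point(b, lam=1.0, n_iter=50, x0=0.0):
    """Proximal point iteration on the 1D toy problem
        min_x |x| + 0.5*(x - b)^2,
    whose closed-form minimizer is soft(b, 1).  Each step solves the
    regularized (strongly convex) subproblem
        x_{k+1} = argmin_y |y| + 0.5*(y - b)^2 + (1/(2*lam))*(y - x_k)^2,
    which has the closed-form solution below even though the original
    objective is not differentiable."""
    x = x0
    c = 1.0 + 1.0 / lam        # curvature of the quadratic part
    for _ in range(n_iter):
        m = (b + x / lam) / c  # minimizer of the quadratic part
        x = soft(m, 1.0 / c)   # prox step on the |.| term
    return x
```

The iterates converge geometrically to the minimizer, illustrating the "rapidly converging sequence of regular problems" idea on the simplest possible case.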
We prove the theoretical convergence of this proximal point scheme, and the experiments confirm it. The algorithm remarkably improves the quality (in terms of sparsity) of the solution compared to the state of the art in the practical resolution of Basis Pursuit, and should have a significant impact in this rapidly developing field.

In Chapter \ref{ch:sparseandmpthresholding}, we adapt a result of D. Donoho (see [55]) to the case of a variational model whose regularization term is that of Basis Pursuit and whose data-fidelity term is that of the model \eqref{tv-infen}. We show that, under a condition relating the dictionary defining the regularization term to the dictionary defining the data-fidelity term, it is possible to extend the results of D. Donoho to the models of interest in this chapter. The result obtained says that, if the given data is very sparse, the solution of the model is close to its sparsest decomposition. This guarantees the stability of the model within this framework and establishes a link between $l^1$ and $l^0$ regularization for this type of data-fidelity term.

Chapter \ref{ch:mpshrinkage} contains the study of a variant of Matching Pursuit. In this variant, we propose to shrink the scalar product with the element best correlated with the residual before updating the residual, for a general threshold function. Using simple properties of these threshold functions, we show that the resulting algorithm converges towards the orthogonal projection of the data onto the linear space spanned by the dictionary (up to an approximation error quantified by the characteristics of the threshold function). Finally, under a weak assumption on the threshold function (satisfied, for instance, by the hard threshold), the algorithm converges in a finite time that can be deduced from the properties of the threshold function.
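A minimal sketch of this thresholded Matching Pursuit (hypothetical function names; `theta` stands for the general threshold function, here instantiated as a hard threshold):

```python
import numpy as np

def mp_shrinkage(v, D, theta, n_iter=200):
    """Matching Pursuit variant: at each step the scalar product with
    the best-correlated atom is passed through the threshold function
    `theta` before the residual is updated.  D has unit-norm columns."""
    r = v.astype(float).copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = D.T @ r                  # correlations with all atoms
        j = np.argmax(np.abs(c))     # best-correlated atom
        a = theta(c[j])              # shrink before updating
        if a == 0.0:
            break                    # nothing above threshold remains
        x[j] += a
        r -= a * D[:, j]
    return x, r

def hard(c, t=0.1):
    """Hard threshold: keep the coefficient only if |c| > t."""
    return c if abs(c) > t else 0.0
```

With the hard threshold the loop stops as soon as every remaining correlation falls below the threshold, which is the finite-time behavior mentioned above.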
Typically, this thresholded Matching Pursuit could be used to compute the orthogonal projections in the Orthogonal Matching Pursuit algorithm; we have not done this yet.

Finally, Chapter \ref{ch:mcmc} explores the dictionary learning problem. The point of view developed there is to regard this problem as a parameter estimation problem in a family of additive generative models. Introducing random Bernoulli on/off switches that activate or deactivate each element of the translation invariant dictionary to be estimated makes identification possible under rather general conditions, in particular when the coefficients are Gaussian. Using a variational EM technique and a mean-field approximation of the posterior distribution, we derive from the maximum likelihood estimation principle a new effective dictionary learning algorithm, which can be related in certain aspects to the K-SVD algorithm. Experimental results on synthetic data illustrate the possibility of correctly identifying a source dictionary, together with several applications to image decomposition and image denoising.
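Such an additive generative model can be sketched as follows (a hypothetical 1D parameterization for illustration: independent Bernoulli switches, i.i.d. Gaussian coefficients, translation invariance realized by convolution):

```python
import numpy as np

def sample_generative_model(features, n, p, sigma, rng):
    """Sample from an additive Bernoulli-Gaussian generative model:
    each translate of each feature is switched on by an independent
    Bernoulli(p) variable and, when active, weighted by a Gaussian
    N(0, sigma^2) coefficient.  Convolving the sparse code with the
    feature places a copy of the feature at each active position,
    which realizes translation invariance.  1D illustration."""
    v = np.zeros(n)
    for f in features:
        s = rng.binomial(1, p, size=n)          # on/off switches
        a = rng.normal(0.0, sigma, size=n)      # Gaussian coefficients
        code = s * a                            # sparse code
        v += np.convolve(code, f, mode="same")  # superpose translates
    return v
```

Dictionary learning then amounts to estimating the features (and the switch/coefficient parameters) from samples of `v`, which is the estimation problem addressed in the chapter.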
Document type : Theses

Cited literature [67 references]

https://tel.archives-ouvertes.fr/tel-00178024
Contributor : Tieyong Zeng
Submitted on : Friday, November 9, 2007 - 10:35:41 AM
Last modification on : Wednesday, April 28, 2021 - 6:45:34 PM
Long-term archiving on : Thursday, September 23, 2010 - 4:45:42 PM

### Identifiers

• HAL Id : tel-00178024, version 3

### Citation

Tieyong Zeng. Study on the variational models and dictionary learning. Mathematics [math]. Université Paris-Nord - Paris XIII, 2007. English. ⟨tel-00178024v3⟩
