Abstract: Two fundamental types of methods for non-linear black-box modeling are linear prediction of random processes, or Kriging, and kernel-based regularized regression (which includes Splines, Radial Basis Functions and Support Vector Regression as special cases). Mathematical links between these approaches have been noticed in the past, but this dissertation presents an original synthesis of these links, drawn from the literature on approximation, learning, time series, geostatistics, etc. Though little known up to now, these links are nevertheless essential, for instance in order to understand how kernel-based regularized regression should be formulated for the approximation of vector-valued functions (for multivariable, or MIMO, systems). In all of these approaches, the choice of an adequate kernel is crucial, since it has a direct impact on the quality of the resulting model. The main theoretical results are provided by statistics. Although mainly asymptotic in nature, these results have important practical consequences, which are recalled and illustrated. Classical kernels constitute a relatively restricted family that offers little flexibility. This led us to propose methods that build a kernel by combining a large number of elementary kernels. These methods yielded promising results, for example on a classical benchmark from the time-series prediction literature. Finally, the question of how kernel-based regression methods can be applied is addressed through the consideration of real problems. The choice of appropriate kernels for these problems is discussed. We show how prior knowledge can be taken into account using Intrinsic Kriging (semi-regularized regression). Some contributions to experimental design are also presented.
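The idea of building a kernel as a combination of many elementary kernels can be illustrated with a minimal sketch. This is not the dissertation's actual method: it is a generic kernel ridge regression on a toy 1-D problem, where the kernel is a uniformly weighted sum of Gaussian kernels with several bandwidths (all function names, bandwidth values, and the regularization constant are illustrative assumptions).

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Elementary Gaussian (RBF) kernel with a fixed bandwidth.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def combined_kernel(X, Y, bandwidths, weights):
    # Kernel built as a weighted sum of elementary kernels;
    # a convex combination of PSD kernels is again a valid kernel.
    K = np.zeros((X.shape[0], Y.shape[0]))
    for w, s in zip(weights, bandwidths):
        K += w * gaussian_kernel(X, Y, s)
    return K

def kernel_ridge_fit(X, y, bandwidths, weights, reg=1e-3):
    # Regularized regression: solve (K + reg * I) alpha = y.
    K = combined_kernel(X, X, bandwidths, weights)
    return np.linalg.solve(K + reg * np.eye(len(y)), y)

def kernel_ridge_predict(Xtest, X, alpha, bandwidths, weights):
    return combined_kernel(Xtest, X, bandwidths, weights) @ alpha

# Toy example: recover sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=60)
bandwidths = [0.2, 0.5, 1.0, 2.0]   # dictionary of elementary kernels
weights = [0.25, 0.25, 0.25, 0.25]  # here fixed; in practice, learned
alpha = kernel_ridge_fit(X, y, bandwidths, weights)
Xt = np.linspace(-3, 3, 50)[:, None]
pred = kernel_ridge_predict(Xt, X, alpha, bandwidths, weights)
```

In methods of the multiple-kernel type, the weights themselves are optimized rather than fixed in advance; the sketch keeps them uniform only to show that the combined object behaves as an ordinary kernel.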