L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition, 1996.
DOI : 10.1007/978-1-4612-0711-5

H. Hotelling, Analysis of a complex of statistical variables into principal components., Journal of Educational Psychology, vol.24, issue.6, pp.417-441, 1933.
DOI : 10.1037/h0071325

K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Annales Academiae Scientiarum Fennicae, 1947.

J. MacQueen, Some methods for classification and analysis of multivariate observations, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, pp.281-296, 1967.

Y. Linde, A. Buzo, and R. M. Gray, An Algorithm for Vector Quantizer Design, IEEE Transactions on Communications, vol.28, issue.1, pp.84-95, 1980.
DOI : 10.1109/TCOM.1980.1094577

T. Kohonen, Clustering, taxonomy, and topological maps of patterns, Proceedings of the 6th International Conference on Pattern Recognition, pp.114-128, 1982.

M. A. Kramer, Nonlinear principal component analysis using autoassociative neural networks, AIChE Journal, vol.37, issue.2, pp.233-243, 1991.
DOI : 10.1002/aic.690370209

S. T. Roweis and L. K. Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding, Science, vol.290, issue.5500, pp.2323-2326, 2000.
DOI : 10.1126/science.290.5500.2323

URL : http://astro.temple.edu/~msobel/courses_files/saulmds.pdf

J. B. Tenenbaum, V. de Silva, and J. C. Langford, A Global Geometric Framework for Nonlinear Dimensionality Reduction, Science, vol.290, issue.5500, pp.2319-2323, 2000.
DOI : 10.1126/science.290.5500.2319

R. I. Kondor and J. Lafferty, Diffusion kernels on graphs and other discrete input spaces, Proceedings of the 19th International Conference on Machine Learning, 2002.

M. Belkin and P. Niyogi, Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, Neural Computation, vol.15, issue.6, pp.1373-1396, 2003.
DOI : 10.1162/089976603321780317

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.3745

G. E. Hinton and R. R. Salakhutdinov, Reducing the Dimensionality of Data with Neural Networks, Science, vol.313, issue.5786, pp.504-507, 2006.
DOI : 10.1126/science.1127647

D. Erhan, Y. Bengio, A. Courville, P. Manzagol, P. Vincent et al., Why does unsupervised pre-training help deep learning?, Journal of Machine Learning Research, vol.11, pp.625-660, 2010.

B. Kégl, A. Krzyżak, T. Linder, and K. Zeger, Learning and design of principal curves, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.22, issue.3, pp.281-297, 2000.
DOI : 10.1109/34.841759

B. Kégl and A. Krzyżak, Piecewise linear skeletonization using principal curves, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.24, issue.1, pp.59-74, 2002.
DOI : 10.1109/34.982884

A. Imiya and N. Yamagishi, Principal curve analysis for temporal data, Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), pp.475-478, 2004.
DOI : 10.1109/ICPR.2004.1334267

D. Chen, J. Zhang, S. Tanks, and J. Wang, Freeway Traffic Stream Modeling Based on Principal Curves and Its Analysis, IEEE Transactions on Intelligent Transportation Systems, vol.5, issue.4, pp.246-258, 2004.
DOI : 10.1109/TITS.2004.838226

I. Cleju, P. Fränti, and X. Wu, Clustering Based on Principal Curve, Image Analysis, pp.61-73, 2005.
DOI : 10.1007/11499145_88

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.85.6232

B. Bhushan, J. A. Romagnoli, and W. D., Fault detection using radial basis function network and polygonal line, 16th IFAC World Congress, 2005.
DOI : 10.3182/20050703-6-CZ-1902.01827

P. N. Bernasconi, D. M. Rust, and D. Hakim, Advanced Automated Solar Filament Detection And Characterization Code: Description, Performance, And Results, Solar Physics, vol.563, issue.1-2, pp.97-117, 2005.
DOI : 10.1007/s11207-005-2766-y

F. Zhang, Nonlinear feature extraction and dimension reduction by polygonal principal curves, International Journal of Pattern Recognition and Artificial Intelligence, vol.20, issue.01, pp.63-78, 2006.
DOI : 10.1142/S0218001406004508

W. Wilbur and A. Chung, Principal curves to extract vessels in 3D angiograms, Computer Vision and Pattern Recognition Workshops, pp.1-8, 2008.

C. Rubbia, Underground operation of the ICARUS T600 LAr-TPC: first results, Journal of Instrumentation, vol.6, issue.07, 2011.
DOI : 10.1088/1748-0221/6/07/P07011

S. Sandilya and S. R. Kulkarni, Principal curves with bounded turn, IEEE Transactions on Information Theory, vol.48, issue.10, pp.2789-2793, 2002.
DOI : 10.1109/TIT.2002.802614

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.7227

J. J. Verbeek, N. Vlassis, and B. Kröse, A k-segments algorithm for finding principal curves, Pattern Recognition Letters, vol.23, issue.8, pp.1009-1017, 2002.
DOI : 10.1016/S0167-8655(02)00032-6

URL : https://hal.archives-ouvertes.fr/inria-00321497

J. Einbeck, G. Tutz, and L. Evers, Local principal curves, Statistics and Computing, vol.15, issue.4, pp.301-313, 2005.
DOI : 10.1007/s11222-005-4073-8

A. Gorban and A. Zinovyev, Elastic Principal Graphs and Manifolds and their Practical Applications, Computing, vol.75, issue.4, pp.359-379, 2005.
DOI : 10.1007/s00607-005-0122-6

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.585

U. Ozertem and D. Erdogmus, Locally defined principal curves and surfaces, Journal of Machine Learning Research, vol.12, pp.1249-1286, 2011.

G. Biau and A. Fischer, Parameter Selection for Principal Curves, IEEE Transactions on Information Theory, vol.58, issue.3, 2011.
DOI : 10.1109/TIT.2011.2173157

URL : https://hal.archives-ouvertes.fr/hal-00565540

T. Hastie, Principal curves and surfaces, Ph.D. thesis, Stanford University, 1984.

T. Hastie and W. Stuetzle, Principal Curves, Journal of the American Statistical Association, vol.84, issue.406, pp.502-516, 1989.
DOI : 10.1080/01621459.1989.10478797

P. L. Bartlett, S. Boucheron, and G. Lugosi, Model Selection and Error Estimation, Machine Learning, pp.85-113, 2001.
DOI : 10.2139/ssrn.248567

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.141.4491

G. Biau, L. Devroye, and G. Lugosi, On the Performance of Clustering in Hilbert Spaces, IEEE Transactions on Information Theory, vol.54, issue.2, pp.781-790, 2008.
DOI : 10.1109/TIT.2007.913516

URL : https://hal.archives-ouvertes.fr/hal-00290855

L. Birgé and P. Massart, Minimal Penalties for Gaussian Model Selection, Probability Theory and Related Fields, pp.33-73, 2007.
DOI : 10.1007/s00440-006-0011-8

P. Delicado, Another Look at Principal Curves and Surfaces, Journal of Multivariate Analysis, vol.77, issue.1, pp.84-116, 2002.
DOI : 10.1006/jmva.2000.1917


T. Kohonen, The Self-Organizing Map, 1997.

E. Chávez, G. Navarro, R. Baeza-Yates, and J. Marroquín, Searching in metric spaces, ACM Computing Surveys, vol.33, issue.3, 2001.
DOI : 10.1145/502807.502808

P. Grassberger and I. Procaccia, Measuring the strangeness of strange attractors, Physica D: Nonlinear Phenomena, vol.9, issue.1-2, pp.189-208, 1983.
DOI : 10.1016/0167-2789(83)90298-1

F. Camastra and A. Vinciarelli, Estimating intrinsic dimension of data with a fractal-based approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002.

A. Belussi and C. Faloutsos, Self-spatial join selectivity estimation using fractal concepts, ACM Transactions on Information Systems, vol.16, issue.2, pp.161-201, 1998.
DOI : 10.1145/279339.279342

Y. Freund and R. E. Schapire, A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences, vol.55, issue.1, pp.119-139, 1997.
DOI : 10.1006/jcss.1997.1504

R. E. Schapire and Y. Singer, Improved boosting algorithms using confidence-rated predictions, Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT '98, pp.297-336, 1999.
DOI : 10.1145/279943.279960

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.2440

B. Kégl and R. Busa-Fekete, Boosting products of base classifiers, Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pp.497-504, 2009.
DOI : 10.1145/1553374.1553439

J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl, Algorithms for hyper-parameter optimization, Advances in Neural Information Processing Systems, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00642998

D. S. Johnson and F. P. Preparata, The densest hemisphere problem, Theoretical Computer Science, vol.6, issue.1, pp.93-107, 1978.
DOI : 10.1016/0304-3975(78)90006-3

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature, vol.323, issue.6088, pp.533-536, 1986.
DOI : 10.1038/323533a0

C. M. Bishop, Neural Networks for Pattern Recognition, 1995.

B. Boser, I. Guyon, and V. Vapnik, A training algorithm for optimal margin classifiers, Proceedings of the fifth annual workshop on Computational learning theory , COLT '92, pp.144-152, 1992.
DOI : 10.1145/130385.130401

C. Cortes and V. Vapnik, Support-vector networks, Machine Learning, pp.273-297, 1995.
DOI : 10.1007/BF00994018

V. N. Vapnik, Statistical Learning Theory, 1998.

P. Bartlett, For valid generalization, the size of the weights is more important than the size of the network, Advances in Neural Information Processing Systems, pp.134-140, 1997.

P. Bartlett, The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network, IEEE Transactions on Information Theory, vol.44, issue.2, pp.525-536, 1998.
DOI : 10.1109/18.661502

I. Nabney, Netlab: Algorithms for Pattern Recognition, 2002.

L. Bottou and O. Bousquet, The tradeoffs of large scale learning, Advances in Neural Information Processing Systems, pp.161-168, 2008.

L. Mason, P. Bartlett, J. Baxter, and M. Frean, Boosting algorithms as gradient descent, Advances in Neural Information Processing Systems, pp.512-518, 2000.

L. Mason, P. Bartlett, and J. Baxter, Improved generalization through explicit optimization of margins, Machine Learning, pp.243-255, 2000.

M. Collins, R. E. Schapire, and Y. Singer, Logistic regression, AdaBoost and Bregman distances, Machine Learning, pp.253-285, 2002.

R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, Boosting the margin: a new explanation for the effectiveness of voting methods, The Annals of Statistics, vol.26, issue.5, pp.1651-1686, 1998.
DOI : 10.1214/aos/1024691352

P. Viola and M. Jones, Robust Real-Time Face Detection, International Journal of Computer Vision, vol.57, issue.2, pp.137-154, 2004.
DOI : 10.1023/B:VISI.0000013087.49260.fb

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.9805

O. Chapelle and Y. Chang, Yahoo! learning to rank challenge overview, Yahoo Learning to Rank Challenge (JMLR W&CP), pp.1-24, 2010.

L. von Ahn, B. Maurer, C. McMillen, D. Abraham, and M. Blum, reCAPTCHA: Human-Based Character Recognition via Web Security Measures, Science, vol.321, issue.5895, pp.1465-1468, 2008.
DOI : 10.1126/science.1160379

L. von Ahn and L. Dabbish, Labeling images with a computer game, Proceedings of the 2004 Conference on Human Factors in Computing Systems, CHI '04, pp.319-326, 2004.
DOI : 10.1145/985692.985733

L. von Ahn, Games with a Purpose, Computer, vol.39, issue.6, 2006.
DOI : 10.1109/MC.2006.196

J. Bennett and S. Lanning, The Netflix prize, 2007.

N. Casagrande, D. Eck, and B. Kégl, Geometry in sound: A speech/music audio classifier inspired by an image classifier, International Computer Music Conference, 2005.

J. Bergstra, N. Casagrande, D. Erhan, D. Eck, and B. Kégl, Aggregate features and AdaBoost for music classification, Machine Learning, vol.65, issue.2-3, pp.473-484, 2006.
DOI : 10.1007/s10994-006-9019-7

URL : https://hal.archives-ouvertes.fr/inria-00176062

The Pierre Auger Collaboration, Pierre Auger project design report, Tech. Rep., 1997.

R. Busa-Fekete, B. Kégl, T. Éltető, and G. Szarvas, Ranking by calibrated AdaBoost, Yahoo! Learning to Rank Challenge (JMLR W&CP), pp.37-48, 2010.
URL : https://hal.archives-ouvertes.fr/hal-00643001

R. Busa-Fekete, B. Kégl, T. Éltető, and G. Szarvas, A Robust Ranking Methodology Based on Diverse Calibration of AdaBoost, European Conference on Machine Learning, 2011.
DOI : 10.1007/978-3-642-23780-5_27

URL : https://hal.archives-ouvertes.fr/hal-00643000

Y. Takizawa, T. Ebisuzaki, Y. Kawasaki, M. Sato, M. E. Bertaina et al., JEM-EUSO: Extreme Universe Space Observatory on JEM/ISS, Nuclear Physics B - Proceedings Supplements, pp.72-76, 2007.
DOI : 10.1016/j.nuclphysbps.2006.12.007

V. Gligorov, A single track HLT1 trigger, 2011.

L. Bourdev and J. Brandt, Robust Object Detection via Soft Cascade, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), pp.236-243, 2005.
DOI : 10.1109/CVPR.2005.310

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.190.3554

R. Xiao, L. Zhu, and H. J. Zhang, Boosting chain learning for object detection, Ninth IEEE International Conference on Computer Vision, pp.709-715, 2003.

J. Sochman and J. Matas, WaldBoost - Learning for Time Constrained Sequential Detection, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), pp.150-156, 2005.
DOI : 10.1109/CVPR.2005.373

B. Póczos, Y. Abbasi-Yadkori, C. Szepesvári, R. Greiner, and N. Sturtevant, Learning when to stop thinking and do something!, Proceedings of the 26th International Conference on Machine Learning, pp.825-832, 2009.

M. Saberian and N. Vasconcelos, Boosting classifier cascades, Advances in Neural Information Processing Systems, pp.2047-2055, 2010.

T. Aaltonen, Observation of Electroweak Single Top-Quark Production, Physical Review Letters, vol.103, issue.9, 2009.
DOI : 10.1103/PhysRevLett.103.092002

URL : https://hal.archives-ouvertes.fr/in2p3-00366602

V. M. Abazov, Observation of Single Top-Quark Production, Physical Review Letters, vol.103, issue.9, 2009.
DOI : 10.1103/PhysRevLett.103.092001

URL : https://hal.archives-ouvertes.fr/in2p3-00365919

R. Busa-Fekete, D. Benbouzid, and B. Kégl, MDDAG: designing sparse decision DAGs using Markov decision processes, 2011.

W. Cohen, Learning trees and rules with set-valued features, Proceedings of the AAAI Conference on Artificial Intelligence, pp.709-716, 1996.

W. Cohen and Y. Singer, A simple, fast, and effective rule learner, Proceedings of the AAAI Conference on Artificial Intelligence, pp.335-342, 1999.

J. Quinlan, Induction of decision trees, Machine Learning, pp.81-106, 1986.
DOI : 10.1007/BF00116251

J. Quinlan, Bagging, boosting, and C4.5, Proceedings of the 13th National Conference on Artificial Intelligence, pp.725-730, 1996.

T. Cormen, C. Leiserson, and R. Rivest, Introduction à l'algorithmique, Dunod, 1994.

N. Srebro, J. D. Rennie, and T. Jaakkola, Maximum margin matrix factorization, Advances in Neural Information Processing Systems, pp.1329-1336, 2005.

I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, 2005.

R. Salakhutdinov and G. Hinton, Learning a nonlinear embedding by preserving class neighbourhood structure, International Conference on Artificial Intelligence and Statistics, pp.412-419, 2007.

B. Marlin, Collaborative filtering: a machine learning perspective, 2004.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, vol.86, issue.11, pp.2278-2324, 1998.
DOI : 10.1109/5.726791

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.138.1115

S. Robertson and H. Zaragoza, The Probabilistic Relevance Framework: BM25 and Beyond, Foundations and Trends in Information Retrieval, vol.3, issue.4, pp.333-389, 2009.
DOI : 10.1561/1500000019

P. Li, C. Burges, and Q. Wu, McRank: Learning to rank using multiple classification and gradient boosting, Advances in Neural Information Processing Systems, pp.897-904, 2007.

Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer, An efficient boosting algorithm for combining preferences, Journal of Machine Learning Research, vol.4, pp.933-969, 2003.

H. Valizadegan, R. Jin, R. Zhang, and J. Mao, Learning to rank by optimizing NDCG measure, Advances in Neural Information Processing Systems 22, pp.1883-1891, 2009.

D. Cossock and T. Zhang, Statistical Analysis of Bayes Optimal Subset Ranking, IEEE Transactions on Information Theory, vol.54, issue.11, pp.5140-5154, 2008.
DOI : 10.1109/TIT.2008.929939

O. Chapelle, D. Metzler, Y. Zhang, and P. Grinspan, Expected reciprocal rank for graded relevance, Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09, pp.621-630, 2009.
DOI : 10.1145/1645953.1646033

URL : http://ciir.cs.umass.edu/~metzler/metzler-cikm09.pdf

A. Niculescu-Mizil and R. Caruana, Obtaining calibrated probabilities from boosting, Proceedings of the 21st International Conference on Uncertainty in Artificial Intelligence, pp.413-420, 2005.

Q. Wu, C. J. Burges, K. M. Svore, and J. Gao, Adapting boosting for information retrieval measures, Information Retrieval, vol.10, issue.3, pp.254-270, 2010.
DOI : 10.1007/s10791-009-9112-1

O. Chapelle and M. Wu, Gradient descent optimization of smoothed information retrieval metrics, Information Retrieval, vol.20, issue.4, pp.216-235, 2010.
DOI : 10.1007/s10791-009-9110-3

J. Xu and H. Li, AdaRank: a boosting algorithm for information retrieval, Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '07, pp.391-398, 2007.
DOI : 10.1145/1277741.1277809

Z. Cao, T. Qin, T. Liu, M. Tsai, and H. Li, Learning to rank: from pairwise approach to listwise approach, Proceedings of the 24th International Conference on Machine Learning, ICML '07, pp.129-136, 2007.
DOI : 10.1145/1273496.1273513

R. Herbrich, T. Graepel, and K. Obermayer, Large margin rank boundaries for ordinal regression, Advances in Large Margin Classifiers, pp.115-132, 2000.

O. Chapelle, Y. Chang, and T. Y. Liu, Future directions in learning to rank, Yahoo Learning to Rank Challenge (JMLR W&CP), pp.91-100, 2011.

H. Lee, A. Battle, R. Raina, and A. Y. Ng, Efficient sparse coding algorithms, Advances in Neural Information Processing Systems, pp.801-808, 2007.

M. Ranzato, C. Poultney, S. Chopra, and Y. Lecun, Efficient learning of sparse representations with an energy-based model, Advances in Neural Information Processing Systems 19, pp.1137-1144, 2007.

H. Larochelle and G. Hinton, Learning to combine foveal glimpses with a third-order Boltzmann machine, Advances in Neural Information Processing Systems, pp.1243-1251, 2010.

Y. Freund and L. Mason, The alternating decision tree learning algorithm, Proceedings of the 16th International Conference on Machine Learning, pp.124-133, 1999.

G. Neu, A. György, and C. Szepesvári, The online loop-free stochastic shortest-path problem, Proceedings of the 23rd Annual Conference on Computational Learning Theory, pp.231-243, 2010.

R. S. Sutton and A. G. Barto, Reinforcement learning: an introduction, Adaptive computation and machine learning, 1998.
DOI : 10.1007/978-1-4615-3618-5

G. A. Rummery and M. Niranjan, On-line Q-learning using connectionist systems, 1994.

Cs. Szepesvári, Algorithms for Reinforcement Learning, 2010.

C. J. Watkins and P. Dayan, Q-learning, Machine Learning, pp.279-292, 1992.

G. Davis, S. Mallat, and M. Avellaneda, Adaptive greedy approximations, Constructive Approximation, vol.21, issue.1, pp.57-98, 1997.
DOI : 10.1007/BF02678430

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.7530

V. Ejov, J. Filar, and J. Gondzio, An Interior Point Heuristic for the Hamiltonian Cycle Problem via Markov Decision Processes, Journal of Global Optimization, vol.29, issue.3, pp.315-334, 2004.
DOI : 10.1023/B:JOGO.0000044772.11089.1a

S. Munder and D. M. Gavrila, An Experimental Study on Pedestrian Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.28, issue.11, pp.1863-1868, 2006.
DOI : 10.1109/TPAMI.2006.217

B. B. Cambazoglu, H. Zaragoza, O. Chapelle, J. Chen, C. Liao et al., Early exit optimizations for additive machine learned ranking systems, Proceedings of the third ACM international conference on Web search and data mining, WSDM '10, pp.411-420, 2010.
DOI : 10.1145/1718487.1718538

K. Järvelin and J. Kekäläinen, Cumulated gain-based evaluation of IR techniques, ACM Transactions on Information Systems, vol.20, issue.4, pp.422-446, 2002.
DOI : 10.1145/582415.582418

A. Antos, B. Kégl, T. Linder, and G. Lugosi, Data-dependent margin-based generalization bounds for classification, Journal of Machine Learning Research, pp.73-98, 2002.

B. Kégl, T. Linder, and G. Lugosi, Data-Dependent Margin-Based Generalization Bounds for Classification, Proceedings of the 14th Conference on Computational Learning Theory, pp.368-384, 2001.
DOI : 10.1007/3-540-44581-1_24

B. Kégl and L. Wang, Boosting on manifolds: adaptive regularization of base classifiers, Advances in Neural Information Processing Systems, pp.665-672, 2004.

S. Gambs, B. Kégl, and E. Aïmeur, Privacy-preserving boosting, Data Mining and Knowledge Discovery, vol.3, issue.1, pp.131-170, 2007.
DOI : 10.1007/s10618-006-0051-9

URL : https://hal.archives-ouvertes.fr/inria-00176059

R. Busa-Fekete and B. Kégl, Accelerating AdaBoost using UCB, pp.111-122, 2009.

R. Busa-Fekete and B. Kégl, Fast boosting using adversarial bandits, International Conference on Machine Learning, pp.143-150, 2010.
URL : https://hal.archives-ouvertes.fr/in2p3-00614564

B. Kégl and D. Veberič, Single muon response: Tracklength, 2009.

R. Bardenet, B. Kégl, and D. Veberič, Single muon response: The signal model, 2010.

H. Haario, E. Saksman, and J. Tamminen, An Adaptive Metropolis Algorithm, Bernoulli, vol.7, issue.2, pp.223-242, 2001.
DOI : 10.2307/3318737

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8948

R. Bardenet, O. Cappé, G. Fort, and B. Kégl, Adaptive Metropolis with online relabeling, 2011.
DOI : 10.3150/13-bej578

URL : https://hal.archives-ouvertes.fr/in2p3-00698479

P. J. Green, Reversible jump Markov chain Monte Carlo computation and Bayesian model determination, Biometrika, vol.82, issue.4, pp.711-732, 1995.
DOI : 10.1093/biomet/82.4.711

H. Robbins, An empirical Bayes approach to statistics, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, pp.157-163, 1956.
DOI : 10.1007/978-1-4612-5110-1_3

URL : http://projecteuclid.org/download/pdf_1/euclid.bsmsp/1200501653

B. P. Carlin and T. A. Louis, Bayes and Empirical Bayes Methods for Data Analysis, 2000.

P. S. Allison and D. Barnhill, Calculation of the number of photoelectrons produced per tank based on PMT test data and station monitoring data, 2004.

M. Aglietta, A direct measurement of the photoelectron number per vertical muon in the Capisa SD detector, 2005.

D. Dornic, Développement et caractérisation de photomultiplicateurs hémisphériques pour les expériences d'astroparticules : étalonnage des détecteurs de surface et analyse des gerbes horizontales de l'Observatoire Pierre Auger, 2006.

D. Ravignani, Calculation of the number of photoelectrons with the water Cherenkov detector model, 1997.

B. Genolini, T. Nguyen Trung, J. Pouthas, P. Lavoute, C. Meunier et al., Photonis XP1805 and PAO SD bases: effects of the temperature and of the Earth's magnetic field, 2003.

G. Roberts, A. Gelman, and W. Gilks, Weak convergence and optimal scaling of random walk Metropolis algorithms, The Annals of Applied Probability, vol.7, issue.1, pp.110-120, 1997.
DOI : 10.1214/aoap/1034625254

G. O. Roberts and J. S. Rosenthal, Optimal scaling for various Metropolis-Hastings algorithms, Statistical Science, vol.16, issue.4, pp.351-367, 2001.
DOI : 10.1214/ss/1015346320

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.3153

S. Richardson and P. J. Green, On Bayesian Analysis of Mixtures with an Unknown Number of Components (with discussion), Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.59, issue.4, pp.731-792, 1997.
DOI : 10.1111/1467-9868.00095

G. Celeux, Bayesian Inference for Mixture: The Label Switching Problem, COMPSTAT 98, 1998.
DOI : 10.1007/978-3-662-01131-7_26

M. Stephens, Dealing with label switching in mixture models, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.62, issue.4, pp.795-809, 2000.
DOI : 10.1111/1467-9868.00265

J. M. Marin, K. Mengersen, and C. P. Robert, Bayesian Modelling and Inference on Mixtures of Distributions, Handbook of Statistics, vol.25, 2004.
DOI : 10.1016/S0169-7161(05)25016-2

A. Jasra, C. C. Holmes, and D. A. Stephens, Markov Chain Monte Carlo Methods and the Label Switching Problem in Bayesian Mixture Modeling, Statistical Science, vol.20, issue.1, pp.50-67, 2005.
DOI : 10.1214/088342305000000016

A. Jasra, Bayesian inference for mixture models via Monte Carlo, 2005.

M. Sperrin, T. Jaki, and E. Wit, Probabilistic relabelling strategies for the label switching problem in Bayesian mixture models, Statistics and Computing, vol.62, issue.2, pp.357-366, 2010.
DOI : 10.1007/s11222-009-9129-8

N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of State Calculations by Fast Computing Machines, The Journal of Chemical Physics, vol.21, issue.6, pp.1087-1092, 1953.
DOI : 10.1063/1.1699114

E. Saksman and M. Vihola, On the ergodicity of the adaptive Metropolis algorithm on unbounded domains, The Annals of Applied Probability, vol.20, issue.6, pp.2178-2203, 2010.
DOI : 10.1214/10-AAP682

G. Fort, E. Moulines, and P. Priouret, Convergence of adaptive MCMC algorithms: ergodicity and law of large numbers, Tech. Rep, 2010.

M. Vihola, Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?, Electronic Journal of Probability, vol.16, issue.0, pp.45-75, 2011.
DOI : 10.1214/EJP.v16-840

X. Garrido, A. Cordier, S. Dagoret-campagne, B. Kégl, D. Monnier-ragaigne et al., Measurement of the number of muons in Auger tanks by the FADC jump counting method, 2007.

X. Garrido, B. Kégl, A. Cordier, S. Dagoret-campagne, D. Monnier-ragaigne et al., Update and new results from the FADC jump counting method, 2009.

A. Dempster, N. Laird, and D. Rubin, Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society, Series B, vol.39, issue.1, pp.1-38, 1977.