K. Ali, Learning probabilistic relational concept descriptions. Doctoral dissertation, 1995.

K. M. Ali and M. J. Pazzani, On the link between error correlation and error reduction in decision tree ensembles. Technical report, University of California, 1995.

K. M. Ali and M. J. Pazzani, Error reduction through learning multiple descriptions, Machine Learning, pp.173-202, 1996.
DOI : 10.1007/BF00058611

D. Angluin, Learning regular sets from queries and counterexamples, Information and Computation, vol.75, issue.2, pp.337-350, 1987.
DOI : 10.1016/0890-5401(87)90052-6

D. Angluin, Queries and concept learning, Machine Learning, pp.319-342, 1987.
DOI : 10.1007/BF00116828

S. C. Bagui and N. R. Pal, A multistage generalization of the rank nearest neighbor classification rule, Pattern Recognition Letters, vol.16, issue.6, pp.601-614, 1995.
DOI : 10.1016/0167-8655(95)80006-F

D. Bahler and L. Navarro, Methods for combining heterogeneous sets of classifiers, 17th National Conference on Artificial Intelligence, 2000.

A. Berger, Error-correcting output coding for text classification. IJCAI'99: Workshop on machine learning for information filtering, 1999.

A. Blum and T. Mitchell, Combining labeled and unlabeled data with co-training, Proceedings of the eleventh annual conference on Computational learning theory , COLT' 98, 1998.
DOI : 10.1145/279943.279962

L. Breiman, Bagging predictors, Machine Learning, pp.123-140, 1996.
DOI : 10.1007/BF00058655

L. Breiman, Bias, variance, and arcing classifiers. Technical report, Statistics Department, University of California, Berkeley, 1996.

L. Breiman, Arcing classifiers. Technical report, Statistics Department, 1998.

L. Breiman, Prediction games and arcing algorithms, Neural Computation, vol.11, issue.7, pp.1493-1517, 1999.

L. Breiman, Random forests, Machine Learning, pp.5-32, 2001.

L. Breiman, J. H. Friedman, R. H. Olshen, and C. J. Stone, Classification and regression trees, 1984.

C. Brodley and M. A. Friedl, Identifying and eliminating mislabeled training instances, Proc. of the 13th National Conference on Artificial Intelligence, pp.799-805, 1996.

G. Brown, J. Wyatt, R. Harris, and X. Yao, Diversity creation methods: a survey and categorisation, Information Fusion, vol.6, issue.1, pp.5-20, 2005.
DOI : 10.1016/j.inffus.2004.04.004

W. Buntine, A theory of learning classification rules. Doctoral dissertation, School of Computing Science, 1990.

R. Carrasco and J. Oncina, Learning stochastic regular grammars by means of a state merging method, Proc. 2nd International Colloquium on Grammatical Inference - ICGI '94, pp.139-150, 1994.
DOI : 10.1007/3-540-58473-0_144

P. K. Chan and S. J. Stolfo, Toward parallel and distributed learning by meta-learning, Working Notes AAAI Work. Knowledge Discovery in Databases, pp.227-240, 1993.

K. Cherkauer, Human expert-level performance on a scientific image analysis task by a system using combined artificial neural networks, Working Notes of the AAAI Workshop on Integrating Multiple Learned Models, pp.15-21, 1996.

S. Cost and S. Salzberg, A weighted nearest neighbor algorithm for learning with symbolic features, Machine Learning, pp.57-78, 1993.
DOI : 10.1007/BF00993481

F. Coste, State merging inference of finite state classifiers, 1999.

T. Cover and P. Hart, Nearest neighbor pattern classification, IEEE Transactions on Information Theory, vol.13, issue.1, pp.13-21, 1967.
DOI : 10.1109/TIT.1967.1053964

S. Dasgupta and P. M. Long, Boosting with Diverse Base Classifiers, Proc. of the 16th International Conference on Computational Learning Theory, pp.273-287, 2003.
DOI : 10.1007/978-3-540-45167-9_21

C. de la Higuera, Characteristic sets for polynomial grammatical inference, Machine Learning, pp.125-138, 1997.
DOI : 10.1007/BFb0033342

C. de la Higuera, A bibliographical study of grammatical inference, Pattern Recognition, vol.38, issue.9, pp.1332-1348, 2005.
DOI : 10.1016/j.patcog.2005.01.003

URL : https://hal.archives-ouvertes.fr/ujm-00376590

J. J. de Oliveira Jr., M. N. Kapp, C. O. de Almendra Freitas, J. M. de Carvalho, and R. Sabourin, Handwritten recognition with multiple classifiers for restricted lexicon, Proceedings. 17th Brazilian Symposium on Computer Graphics and Image Processing, pp.82-89, 2004.
DOI : 10.1109/SIBGRA.2004.1352947

T. Dietterich, Machine learning research: Four current directions, AI Magazine, vol.18, issue.4, pp.97-136, 1997.

T. G. Dietterich, Ensemble Methods in Machine Learning, Lecture Notes in Computer Science, vol.1857, pp.1-15, 2000.
DOI : 10.1007/3-540-45014-9_1

T. G. Dietterich, An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization, Machine Learning, pp.139-157, 2000.

T. G. Dietterich and G. Bakiri, Error-correcting output codes: a general method for improving multiclass inductive learning programs, Proc. of the Ninth AAAI National Conference on Artificial Intelligence, pp.572-577, 1991.

T. G. Dietterich and G. Bakiri, Solving multiclass learning problems via error-correcting output codes, Journal of Artificial Intelligence Research, vol.2, pp.263-286, 1995.

T. G. Dietterich and E. B. Kong, Machine learning bias, statistical bias, and statistical variance of decision tree algorithms, 1995.

C. Domingo and O. Watanabe, MadaBoost: A modification of AdaBoost, Proc. 13th Annu. Conference on Comput. Learning Theory, pp.180-189, 2000.

N. Duffy and D. P. Helmbold, Potential boosters?, Advances in Neural Information Processing Systems 12, pp.258-264, 1999.

P. Dupont, Noisy sequence classification with smoothed Markov chains, Actes de la huitième Conférence francophone sur l'apprentissage automatique, Trégastel, 2006.

P. Dupont and L. Miclet, Inférence grammaticale régulière : fondements théoriques et principaux algorithmes, INRIA, 1998.

S. Dzeroski and B. Zenko, Stacking with Multi-response Model Trees, MCS '02: Proc. of the Third International Workshop on Multiple Classifier Systems, pp.201-211, 2002.
DOI : 10.1007/3-540-45428-4_20

S. Dzeroski and B. Zenko, Is combining classifiers with stacking better than selecting the best one? Machine Learning, pp.255-273, 2004.

B. Efron and R. Tibshirani, An introduction to the bootstrap, 1993.
DOI : 10.1007/978-1-4899-4541-9

B. Efron and R. Tibshirani, Cross-validation and the bootstrap: Estimating the error rate of a prediction rule, 1995.

W. Fan, S. J. Stolfo, and J. Zhang, The application of AdaBoost for distributed, scalable and on-line learning, Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining , KDD '99, pp.362-366, 1999.
DOI : 10.1145/312129.312283

Y. Freund, Boosting a Weak Learning Algorithm by Majority, Information and Computation, vol.121, issue.2, pp.256-285, 1995.
DOI : 10.1006/inco.1995.1136

Y. Freund, An adaptive version of the boost by majority algorithm, Proceedings of the twelfth annual conference on Computational learning theory , COLT '99, pp.102-113, 1999.
DOI : 10.1145/307400.307419

Y. Freund and R. Schapire, A short introduction to boosting, Journal of Japanese Society for Artificial Intelligence, vol.14, issue.5, pp.771-780, 1999.

Y. Freund and R. E. Schapire, Experiments with a new boosting algorithm, International Conference on Machine Learning, pp.148-156, 1996.

Y. Freund and R. E. Schapire, Game theory, on-line prediction and boosting, Proceedings of the ninth annual conference on Computational learning theory , COLT '96, pp.325-332, 1996.
DOI : 10.1145/238061.238163

Y. Freund and R. E. Schapire, A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences, vol.55, issue.1, pp.119-139, 1997.
DOI : 10.1006/jcss.1997.1504

J. Friedman, T. Hastie, and R. Tibshirani, Additive logistic regression: a statistical view of boosting, 1998.

J. Gama and P. Brazdil, Cascade generalization, Machine Learning, pp.315-343, 2000.

S. Garcia-Salicetti, C. Beumier, G. Chollet, B. Dorizzi, J. L. Jardins et al., BIOMET: A Multimodal Person Authentication Database Including Face, Voice, Fingerprint, Hand and Signature Modalities, Fourth International Conference on Audio and Video-Based Biometric Person Authentication, 2003.
DOI : 10.1007/3-540-44887-X_98

G. Giacinto and F. Roli, Adaptive selection of image classifiers, Proc. of the 9th International Conference on Image Analysis and Processing- Volume I, pp.38-45, 1997.
DOI : 10.1007/3-540-63507-6_182

E. M. Gold, Language identification in the limit, Information and Control, vol.10, issue.5, pp.447-474, 1967.
DOI : 10.1016/S0019-9958(67)91165-5

E. M. Gold, Complexity of automaton identification from given data, Information and Control, vol.37, issue.3, pp.302-320, 1978.
DOI : 10.1016/S0019-9958(78)90562-4

J. Goodman, A bit of progress in language modeling, Computer Speech & Language, vol.15, issue.4, 2001.
DOI : 10.1006/csla.2001.0174

S. Günter and H. Bunke, Generating Classifier Ensembles from Multiple Prototypes and Its Application to Handwriting Recognition, MCS '02: Proc. of the Third International Workshop on Multiple Classifier Systems, pp.179-188, 2002.
DOI : 10.1007/3-540-45428-4_18

L. K. Hansen and P. Salamon, Neural network ensembles, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.12, issue.10, pp.993-1001, 1990.
DOI : 10.1109/34.58871

T. Hastie, R. Tibshirani, and J. H. Friedman, The elements of statistical learning, 2001.

T. Heskes, Balancing between bagging and bumping, Advances in Neural Information Processing Systems, p.466, 1997.

T. K. Ho, The random subspace method for constructing decision forests, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.20, pp.832-844, 1998.

N. R. Howe and C. Cardie, Weighting unusual feature types, 1999.

M. Islam, X. Yao, and K. Murase, A constructive algorithm for training cooperative neural network ensembles, IEEE Transactions on Neural Networks, vol.14, issue.4, pp.820-834, 2003.
DOI : 10.1109/TNN.2003.813832

S. Jacquemont, F. Jacquenet, and M. Sebban, Constrained sequence mining based on probabilistic finite state automata, Proc. of the Workshop on Mining Graphs, Trees and Structured Data at ECML/PKDD, 2005.
URL : https://hal.archives-ouvertes.fr/hal-00374061

G. H. John and P. Langley, Estimating continuous distributions in Bayesian classifiers, Proc. of the 11th Conference on Uncertainty in Artificial Intelligence, pp.338-345, 1995.

M. Kearns and Y. Mansour, On the boosting ability of top-down decision tree learning algorithms, STOC '96: Proc. of the twenty-eighth annual ACM symposium on Theory of computing, pp.459-468, 1996.

M. J. Kearns and U. V. Vazirani, An Introduction to Computational Learning Theory, 1994.

J. Kittler, M. Hatef, R. P. Duin, and J. Matas, On combining classifiers, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.20, issue.3, pp.226-239, 1998.
DOI : 10.1109/34.667881

J. Kivinen and M. Warmuth, Boosting as entropy projection, Proceedings of the twelfth annual conference on Computational learning theory , COLT '99, pp.134-144, 1999.
DOI : 10.1145/307400.307424

R. Kohavi, Feature subset selection as search with probabilistic estimates, AAAI Fall Symposium on Relevance, pp.122-126, 1994.

R. Kohavi and D. H. Wolpert, Bias plus variance decomposition for zero-one loss functions, Machine Learning: Proc. of the Thirteenth International Conference, pp.275-283, 1996.

V. Koltchinskii and D. Panchenko, Empirical margin distributions and bounding the generalization error of combined classifiers, Annals of Statistics, vol.30, pp.1-50, 2002.

E. B. Kong and T. G. Dietterich, Error-Correcting Output Coding Corrects Bias and Variance, International Conference on Machine Learning, pp.313-321, 1995.
DOI : 10.1016/B978-1-55860-377-6.50046-3

N. Krause and Y. Singer, Leveraging the margin more carefully, Twenty-first international conference on Machine learning , ICML '04, 2004.
DOI : 10.1145/1015330.1015344

L. Kuncheva and R. K. Kounchev, Generating classifier outputs of fixed accuracy and diversity, Pattern Recognition Letters, vol.23, issue.5, pp.593-600, 2002.
DOI : 10.1016/S0167-8655(01)00155-6

L. Kuncheva, M. Skurichina, and R. Duin, An experimental study on diversity for bagging and boosting with linear classifiers, Information Fusion, vol.3, issue.4, pp.245-258, 2002.
DOI : 10.1016/S1566-2535(02)00093-3

L. I. Kuncheva and C. J. Whitaker, Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy, Machine Learning, pp.181-207, 2003.

T. Lane and C. E. Brodley, Data reduction techniques for instance-based learning from human/computer interface data, Proc. 17th International Conf. on Machine Learning, pp.519-526, 2000.

K. Lang, B. Pearlmutter, and R. Price, Results of the Abbadingo One DFA learning competition, 4th Int. Coll. on Grammatical Inference, pp.1-12, 1998.

A. Lazarevic and Z. Obradovic, Adaptive boosting techniques in heterogeneous and spatial databases, Intelligent Data Analysis, vol.5, pp.285-308, 2001.

V. Di Lecce, G. Dimauro, A. Guerriero, S. Impedovo, G. Pirlo et al., Classifier combination: The role of a-priori knowledge, Proc. of the seventh International Workshop on Frontiers in Handwriting Recognition, pp.143-152, 2000.

V. Levenshtein, Binary codes capable of correcting deletions, insertions, and reversals, Cybernetics and Control Theory, vol.10, pp.707-710; original in Doklady Akademii Nauk SSSR, vol.163, issue.4, 1965.

N. Littlestone and M. K. Warmuth, The weighted majority algorithm, IEEE Symposium on Foundations of Computer Science, pp.256-261, 1989.

H. Liu and R. Setiono, A probabilistic approach to feature selection -a filter solution, International Conference on Machine Learning, pp.319-327, 1996.

R. Maclin, Boosting classifiers regionally, AAAI '98/IAAI '98: Proc. of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence, pp.700-705, 1998.

R. Maclin and D. Opitz, An empirical evaluation of bagging and boosting, 1997.

D. D. Margineantu and T. G. Dietterich, Pruning adaptive boosting, Proc. 14th International Conference on Machine Learning, pp.211-218, 1997.

J. Mary, Étude de l'apprentissage actif : application à la conduite d'expériences. Doctoral dissertation, 2005.

L. Mason, J. Baxter, P. Bartlett, and M. Frean, Boosting algorithms as gradient descent, Advances in Neural Information Processing Systems 12, pp.512-518, 2000.

R. Meir and G. Rätsch, An Introduction to Boosting and Leveraging, Advanced Lectures on Machine Learning, LNCS, pp.119-184, 2003.
DOI : 10.1007/3-540-36434-X_4

C. J. Merz, Dynamical selection of learning algorithms, Learning from Data: Artificial Intelligence and Statistics, 1996.

C. J. Merz, Using correspondence analysis to combine classifiers, Machine Learning, pp.33-58, 1999.

L. Mico and J. Oncina, Comparison of fast nearest neighbour classifiers for handwritten character recognition, Pattern Recognition Letters, vol.19, issue.3-4, pp.351-356, 1998.
DOI : 10.1016/S0167-8655(98)00007-5

T. M. Mitchell, The need for biases in learning generalizations, Readings in machine learning, pp.184-191, 1990.

K. Nigam, A. K. Mccallum, S. Thrun, and T. M. Mitchell, Text classification from labeled and unlabeled documents using EM, Machine Learning, pp.103-134, 2000.

R. Nock and M. Sebban, A Bayesian boosting theorem, Pattern Recognition Letters, vol.22, issue.3-4, pp.413-419, 2001.
DOI : 10.1016/S0167-8655(00)00137-9

J. Oncina and P. García, Inferring regular languages in polynomial update time, Pattern Recognition and Image Analysis, Series in Machine Perception and Artificial Intelligence, pp.49-61, 1992.

J. Oncina and M. Sebban, Learning stochastic edit distance: Application in handwritten character recognition, Pattern Recognition, vol.39, issue.9, 2006.
DOI : 10.1016/j.patcog.2006.03.011

URL : https://hal.archives-ouvertes.fr/hal-00114106

D. W. Opitz and J. W. Shavlik, Actively Searching for an Effective Neural Network Ensemble, Connection Science, vol.8, issue.3-4, pp.337-353, 1996.
DOI : 10.1080/095400996116802

M. P. Perrone and L. N. Cooper, When networks disagree: Ensemble methods for hybrid neural networks, Neural networks for speech and image processing, pp.126-142, 1993.
DOI : 10.1142/9789812795885_0025

L. Pitt, Inductive inference, DFA's, and computational complexity, Analogical and inductive inference, no. 397 in LNAI, pp.18-44, 1989.

J. R. Quinlan, Simplifying decision trees, Knowledge acquisition for knowledge-based systems, pp.239-252, 1988.

J. R. Quinlan, C4.5: programs for machine learning, 1993.

J. R. Quinlan, Bagging, boosting, and C4.5, AAAI/IAAI, vol.1, pp.725-730, 1996.

F. Ricci and D. W. Aha, Error-correcting output codes for local learners, ECML '98: Proc. of the 10th European Conference on Machine Learning, pp.280-291, 1998.
DOI : 10.1007/BFb0026698

E. S. Ristad and P. N. Yianilos, Learning string-edit distance, ICML '97: Proc. of the Fourteenth International Conference on Machine Learning, pp.287-295, 1997.
DOI : 10.1109/34.682181

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning internal representations by error propagation. Parallel distributed processing: explorations in the microstructure of cognition, pp.318-362, 1986.

G. Rätsch, T. Onoda, and K. R. Müller, Regularizing AdaBoost, Proc. of the 1998 conference on Advances in neural information processing systems II, pp.564-570, 1999.

G. Rätsch and M. Warmuth, Efficient margin maximizing with boosting, Journal of Machine Learning Research, vol.6, 2002.

R. Schapire, The boosting approach to machine learning: An overview. MSRI Workshop on Nonlinear Estimation and Classification, 2001.

R. E. Schapire, The strength of weak learnability, Machine Learning, pp.197-227, 1990.

R. E. Schapire, Using output codes to boost multiclass learning problems, Proc. 14th International Conference on Machine Learning, pp.313-321, 1997.

R. E. Schapire, A brief introduction to boosting, IJCAI '99: Proc. of the Sixteenth International Joint Conference on Artificial Intelligence, pp.1401-1406, 1999.

R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, Boosting the margin: a new explanation for the effectiveness of voting methods, Proc. 14th International Conference on Machine Learning, pp.322-330, 1997.
DOI : 10.1214/aos/1024691352

R. E. Schapire and Y. Singer, Improved boosting algorithms using confidence-rated predictions, Proceedings of the eleventh annual conference on Computational learning theory , COLT' 98, pp.297-336, 1999.
DOI : 10.1145/279943.279960

R. E. Schapire and Y. Singer, BoosTexter: A boosting-based system for text categorization, Machine Learning, pp.135-168, 2000.

M. Sebban and J. Janodet, On state merging in grammatical inference: a statistical approach for dealing with noisy data, Proc. of the 20th International Conference on Machine Learning (see also CAp'03), pp.688-695, 2003.

M. Sebban, J. Janodet, H. Suchier, and R. Nock, Boosting grammatical inference with confidence oracles, International Conference on Machine Learning (ICML), pp.425-432, 2004.

M. Sebban, R. Nock, and S. Lallich, Boosting neighborhood-based classifiers, ICML '01: Proc. of the Eighteenth International Conference on Machine Learning, pp.505-512, 2001.

M. Sebban, R. Nock, and S. Lallich, Stopping criterion for boosting-based data reduction techniques: from binary to multiclass problems, Journal of Machine Learning Research, pp.863-885, 2003.

M. Sebban and H. Suchier, On Boosting Improvement: Error Reduction and Convergence Speed-Up, 14th European Conference on Machine Learning (ECML), pp.349-360, 2003.
DOI : 10.1007/978-3-540-39857-8_32

M. Seeger, Learning with labeled and unlabeled data, 2000.

R. A. Servedio, Smooth Boosting and Learning with Malicious Noise, 14th Annual Conference on Computational Learning Theory (COLT 2001) and 5th European Conference on Computational Learning Theory, Proc., pp.473-489, 2001.
DOI : 10.1007/3-540-44581-1_31

C. E. Shannon, A Mathematical Theory of Communication, Bell System Technical Journal, vol.27, issue.3, pp.379-423 and 623-656, 1948.
DOI : 10.1002/j.1538-7305.1948.tb01338.x

C. A. Shipp and L. Kuncheva, Relationships between combination methods and measures of diversity in combining classifiers, Information Fusion, vol.3, issue.2, pp.135-148, 2002.
DOI : 10.1016/S1566-2535(02)00051-9

S. A. Solla, T. K. Leen, and K. Müller (eds.), Advances in Neural Information Processing Systems 12 [NIPS Conference], 1999.

F. Thollard, P. Dupont, and C. de la Higuera, Probabilistic DFA inference using Kullback-Leibler divergence and minimality, Proc. 17th International Conf. on Machine Learning, pp.975-982, 2000.

K. M. Ting, An empirical study of MetaCost using boosting algorithms, Machine Learning: ECML 2000, 11th European Conference on Machine Learning, Proc., pp.413-425, 2000.

K. M. Ting and I. H. Witten, Issues in stacked generalization, Journal of Artificial Intelligence Research, vol.10, pp.271-289, 1999.

G. Tsoumakas, I. Katakis, and I. P. Vlahavas, Effective Voting of Heterogeneous Classifiers, Machine Learning: ECML 2004, 15th European Conference on Machine Learning Proceedings, pp.465-476, 2004.
DOI : 10.1007/978-3-540-30115-8_43

A. Tsymbal, S. Puuronen, and V. Y. Terziyan, Arbiter Meta-Learning with Dynamic Selection of Classifiers and its Experimental Investigation, Advances in Databases and Information Systems, pp.205-217, 1999.
DOI : 10.1007/3-540-48252-0_16

L. G. Valiant, A theory of the learnable, Proc. of the sixteenth annual ACM symposium on Theory of computing, pp.436-445, 1984.

V. Vapnik and A. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, pp.264-280, 1971.

V. N. Vapnik, Statistical Learning Theory, 1998.

P. Verlinde, G. Chollet, and M. Acheroy, Multi-modal identity verification using expert fusion, Information Fusion, vol.1, issue.1, pp.17-33, 2000.
DOI : 10.1016/S1566-2535(00)00002-6

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.7260

P. Viola and M. Jones, Rapid object detection using a boosted cascade of simple features, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, p.551, 2001.
DOI : 10.1109/CVPR.2001.990517

R. A. Wagner and M. J. Fischer, The String-to-String Correction Problem, Journal of the ACM, vol.21, issue.1, pp.168-173, 1974.
DOI : 10.1145/321796.321811

D. Wilson and T. Martinez, Instance pruning techniques, Proc. of the 14th International Conference on Machine Learning, pp.404-411, 1997.

D. R. Wilson and T. R. Martinez, Reduction techniques for instance-based learning algorithms, Machine Learning, pp.257-286, 2000.

D. H. Wolpert, Stacked generalization, Neural Networks, vol.5, issue.2, pp.241-259, 1992.
DOI : 10.1016/S0893-6080(05)80023-1
