Bibliography

J. S. Borrero and R. Akhavan-Tabatabaei, Time and inventory dependent optimal maintenance policies for single machine workstations: An MDP approach, European Journal of Operational Research, vol.228, issue.3, pp.545-555, 2013.

P. Bianchi, L. Decreusefond, G. Fort, and J. Najim, Cours de probabilités - MDI 104.

D. P. Bertsekas, Dynamic programming: deterministic and stochastic models, 1987.

V. I. Bogachev, Measure Theory, 2007.

V. Bally and G. Pagès, A quantization algorithm for solving multidimensional discrete-time optimal stopping problems, Bernoulli, vol.9, issue.6, pp.1003-1049, 2003.
DOI : 10.3150/bj/1072215199

URL : https://hal.archives-ouvertes.fr/hal-00104798

N. Bäuerle and U. Rieder, Markov decision processes with applications to finance, 2011.
DOI : 10.1007/978-3-642-18324-9

D. P. Bertsekas and S. E. Shreve, Stochastic optimal control: the discrete-time case, Mathematics in Science and Engineering, vol.139, 1978.

H. S. Chang, M. C. Fu, J. Hu, and S. I. Marcus, An Asymptotically Efficient Simulation-Based Algorithm for Finite Horizon Stochastic Dynamic Programming, IEEE Transactions on Automatic Control, vol.52, issue.1, pp.89-94, 2007.
DOI : 10.1109/TAC.2006.887917

H. S. Chang, J. Hu, M. C. Fu, and S. I. Marcus, Simulation-based algorithms for Markov decision processes, Communications and Control Engineering Series, 2013.

M. A. Cohen and H. L. Lee, Strategic analysis of integrated production-distribution systems: models and methods, Operations Research, vol.36, issue.2, pp.216-228, 1988.

G. B. Dantzig, Linear programming and extensions, Princeton Landmarks in Mathematics, 1998.

E. V. Denardo, Dynamic programming: models and applications, 2003.

F. Dufour and A. Piunovskiy, Multiobjective Stopping Problem for Discrete-Time Markov Processes: Convex Analytic Approach, Journal of Applied Probability, vol.47, issue.4, pp.947-966, 2010.

F. Dufour and T. Prieto-Rumeau, Approximation of Markov decision processes with general state space, Journal of Mathematical Analysis and Applications, vol.388, issue.2, pp.1254-1267, 2012.
DOI : 10.1016/j.jmaa.2011.11.015

URL : https://hal.archives-ouvertes.fr/hal-00648223

F. Dufour and T. Prieto-Rumeau, Finite Linear Programming Approximations of Constrained Discounted Markov Decision Processes, SIAM Journal on Control and Optimization, vol.51, issue.2, pp.1298-1324, 2013.
DOI : 10.1137/120867925

URL : https://hal.archives-ouvertes.fr/hal-00925862

F. Dufour and T. Prieto-Rumeau, Approximation of average cost Markov decision processes using empirical distributions and concentration inequalities, Stochastics: An International Journal of Probability and Stochastic Processes, vol.87, issue.2, pp.273-307, 2015.

URL : https://hal.archives-ouvertes.fr/hal-01246225

B. de Saporta, F. Dufour, H. Zhang, and C. Elegbede, Arrêt optimal pour la maintenance prédictive, 17e Congrès de Maîtrise des Risques et de Sûreté de Fonctionnement, pp.4-7, 2010.

C. Elegbede, D. Bérard-Bergery, J. Béhar, and T. Stauffer, Dynamical modelling and stochastic optimization for the design of a launcher integration process, in R. D. Steenbergen, editor, Safety, Reliability and Risk Analysis: Beyond the Horizon, pp.3039-3046, 2013.

S. Graf and H. Luschgy, Foundations of quantization for probability distributions, volume 1730 of Lecture Notes in Mathematics, 2000.

E. A. Hansen and Z. Feng, Dynamic programming for POMDPs using a factored state representation, in AIPS, pp.130-139, 2000.

J. Hu, M. C. Fu, and S. I. Marcus, A model reference adaptive search method for stochastic global optimization, Communications in Information and Systems, vol.8, issue.3, pp.245-275, 2008.

J. Hu, P. Hu, and H. S. Chang, A stochastic approximation framework for a class of randomized optimization algorithms, IEEE Transactions on Automatic Control, vol.57, issue.1, pp.165-178, 2012.

O. Hernández-Lerma, Adaptive Markov control processes, Applied Mathematical Sciences, 1989.

O. Hernández-Lerma and J. B. Lasserre, Discrete-Time Markov Control Processes: Basic Optimality Criteria, Applications of Mathematics, 1996.

O. Hernández-Lerma and J. B. Lasserre, Further topics on discrete-time Markov control processes, Applications of Mathematics, 1999.

R. A. Howard, Dynamic programming and Markov processes, The Technology Press of M.I.T., 1960.

J. Hoey and P. Poupart, Solving POMDPs with continuous or large discrete observation spaces, IJCAI, pp.1332-1338, 2005.

D. P. Heyman and M. J. Sobel, Stochastic models in operations research, Vol. II, 2004.

K. Jensen, Coloured Petri nets: basic concepts, analysis methods and practical use, Monographs in Theoretical Computer Science, An EATCS Series, 1997.

L. A. Johnson and D. C. Montgomery, Operations research in production planning, scheduling, and inventory control, 1974.

K. G. Murty, Linear programming, in Operations research methodologies, Oper. Res. Ser., pp.1-35, 2009.

F. Dufour, J. Béhar, D. Bérard-Bergery, and C. Elegbede, Modeling and optimization of a launcher integration process, in Luca Podofillini et al., editors, Safety and Reliability of Complex Engineered Systems: ESREL 2015, pp.2281-2288, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01202585

G. Pagès, A space quantization method for numerical integration, Journal of Computational and Applied Mathematics, vol.89, issue.1, pp.1-38, 1998.
DOI : 10.1016/S0377-0427(97)00190-8

G. Pagès and J. Printems, Optimal quadratic quantization for numerics: the Gaussian case, Monte Carlo Methods and Applications, vol.9, issue.2, pp.135-165, 2003.
DOI : 10.1515/156939603322663321

G. Pagès, H. Pham, and J. Printems, Optimal quantization methods and applications to numerical problems in finance, in Handbook of computational and numerical methods in finance, 2004.

H. Pham, W. Runggaldier, and A. Sellami, Approximation by quantization of the filter process and applications to optimal stopping problems under partial observation, Monte Carlo Methods and Applications, vol.11, issue.1, pp.57-81, 2005.
DOI : 10.1515/1569396054027283

URL : https://hal.archives-ouvertes.fr/hal-00002973

M. L. Puterman, Markov decision processes: discrete stochastic dynamic programming, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, 1994.

S. Ross, J. Pineau, S. Paquet, and B. Chaib-Draa, Online planning algorithms for POMDPs, Journal of Artificial Intelligence Research, vol.32, pp.663-704, 2008.

M. M. Srinivasan and H. Lee, Production-Inventory Systems with Preventive Maintenance, IIE Transactions, vol.3, issue.11, pp.879-890, 1994.

C. C. White III and D. J. White, Markov decision processes, European Journal of Operational Research, vol.39, issue.1, pp.1-16, 1989.
DOI : 10.1016/0377-2217(89)90348-2

F. Ye and E. Zhou, Optimal Stopping of Partially Observable Markov Processes: A Filtering-Based Duality Approach, IEEE Transactions on Automatic Control, vol.58, issue.10, pp.2698-2704, 2013.
DOI : 10.1109/TAC.2013.2257970

E. Zhou, M. C. Fu, and S. I. Marcus, Solving Continuous-State POMDPs via Density Projection, IEEE Transactions on Automatic Control, vol.55, issue.5, pp.1101-1116, 2010.
DOI : 10.1109/TAC.2010.2042005

E. Zhou, Optimal stopping under partial observation: near-value iteration, IEEE Transactions on Automatic Control, vol.58, issue.2, pp.500-506, 2013.