R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, and T. Tuytelaars, Memory aware synapses: Learning what (not) to forget, Proceedings of the European Conference on Computer Vision (ECCV), pp.139-154, 2018.

M. Arjovsky, S. Chintala, and L. Bottou, Wasserstein GAN, 2017.

B. Ans and S. Rousset, Avoiding catastrophic forgetting by coupling two reverberating neural networks, Comptes Rendus de l'Académie des Sciences-Series III-Sciences de la Vie, vol.320, pp.989-997, 1997.
URL : https://hal.archives-ouvertes.fr/hal-00171579

A. Besedin, P. Blanchart, M. Crucianu, and M. Ferecatu, Evolutive deep models for online learning on data streams with no storage, Workshop on Large-scale Learning from Data Streams in Evolving Environments, 2017.
URL : https://hal.archives-ouvertes.fr/cea-01832986

A. Besedin, P. Blanchart, M. Crucianu, and M. Ferecatu, Deep online storage-free learning on unordered image streams, Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp.103-112, 2018.
URL : https://hal.archives-ouvertes.fr/hal-02454302

J. Blitzer, M. Dredze, and F. Pereira, Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pp.440-447, 2007.

R. Calandra, T. Raiko, M. P. Deisenroth, and F. M. Pouzols, Learning deep belief networks from non-stationary streams, International Conference on Artificial Neural Networks, pp.379-386, 2012.

R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu et al., Natural language processing (almost) from scratch, Journal of Machine Learning Research, vol.12, pp.2493-2537, 2011.

E. L. Denton, S. Chintala, A. Szlam, and R. Fergus, Deep generative image models using a Laplacian pyramid of adversarial networks, Advances in neural information processing systems, pp.1486-1494, 2015.

P. Domingos and G. Hulten, Mining high-speed data streams, Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, pp.71-80, 2000.

T. J. Draelos, N. E. Miner, C. C. Lamb et al., Neurogenesis deep learning: Extending deep networks to accommodate new classes, 2017 International Joint Conference on Neural Networks (IJCNN), pp.526-533, 2017.

C. Fernando, D. Banarse, C. Blundell, Y. Zwols et al., PathNet: Evolution channels gradient descent in super neural networks, 2017.

X. Glorot and Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, In Aistats, vol.9, pp.249-256, 2010.

A. Gepperth and C. Karaoguz, A bio-inspired incremental learning architecture for applied perceptual problems, Cognitive Computation, vol.8, issue.5, pp.924-934, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01418123

I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio, An empirical investigation of catastrophic forgetting in gradient-based neural networks, 2013.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley et al., Generative adversarial nets, Advances in neural information processing systems, pp.2672-2680, 2014.

S. Grossberg, How does a brain build a cognitive code?, Studies of mind and brain, pp.1-52, 1982.

J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, A survey on concept drift adaptation, ACM Computing Surveys (CSUR), vol.46, issue.4, p.44, 2014.

T. L. Hayes, N. D. Cahill, and C. Kanan, Memory efficient experience replay for streaming learning, 2018.

K. He, G. Gkioxari, P. Dollár, and R. Girshick, Mask R-CNN, 2017.

T. L. Hayes, R. Kemker, N. D. Cahill, and C. Kanan, New metrics and experimental paradigms for continual learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp.2031-2034, 2018.

G. E. Hinton, S. Osindero, and Y. Teh, A fast learning algorithm for deep belief nets, Neural computation, vol.18, issue.7, pp.1527-1554, 2006.

K. He, X. Zhang, S. Ren, and J. Sun, Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, The IEEE International Conference on Computer Vision (ICCV), 2015.

K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, Proceedings of the IEEE conference on computer vision and pattern recognition, pp.770-778, 2016.

S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, 2015.

T. Karras, T. Aila, S. Laine, and J. Lehtinen, Progressive growing of GANs for improved quality, stability, and variation, 2017.

D. Kingma and J. Ba, Adam: A method for stochastic optimization, 2014.

T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim, Learning to discover cross-domain relations with generative adversarial networks, Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.1857-1865, 2017.

N. Kamra, U. Gupta, and Y. Liu, Deep generative dual memory network for continual learning, 2017.

R. Kemker and C. Kanan, FearNet: Brain-inspired model for incremental learning, 2017.

R. Kemker, M. McClure, A. Abitino, T. L. Hayes, and C. Kanan, Measuring catastrophic forgetting in neural networks, 2017.

T. Kohonen, Self-organized formation of topologically correct feature maps, Biological cybernetics, vol.43, issue.1, pp.59-69, 1982.

J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins et al., Overcoming catastrophic forgetting in neural networks, Proceedings of the National Academy of Sciences, p.201611835, 2017.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in neural information processing systems, pp.1097-1105, 2012.

D. P. Kingma and M. Welling, Auto-encoding variational Bayes, 2013.

Z. Li and D. Hoiem, Learning without forgetting, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

D. Lopez-Paz and M. Ranzato, Gradient episodic memory for continual learning, Advances in Neural Information Processing Systems, pp.6467-6476, 2017.

M. Mermillod, A. Bugaiska, and P. Bonin, The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects, Frontiers in psychology, vol.4, p.504, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00965078

M. McCloskey and N. J. Cohen, Catastrophic interference in connectionist networks: the sequential learning problem, Psychology of learning and motivation, vol.24, pp.109-165, 1989.

T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar et al., Never-ending learning, Communications of the ACM, vol.61, issue.5, pp.103-115, 2018.

A. L. Maas, A. Y. Hannun, and A. Y. Ng, Rectifier nonlinearities improve neural network acoustic models, Proc. ICML, vol.30, p.3, 2013.

M. Mirza and S. Osindero, Conditional generative adversarial nets, 2014.

V. Nair and G. E. Hinton, Rectified linear units improve restricted Boltzmann machines, Proceedings of the 27th international conference on machine learning (ICML-10), pp.807-814, 2010.

H. Nguyen, Y. Woon, and W. Ng, A survey on data stream clustering and classification, Knowledge and Information Systems, vol.45, pp.535-569, 2015.

A. Odena, C. Olah, and J. Shlens, Conditional image synthesis with auxiliary classifier GANs, 2016.

N. C. Oza, Online bagging and boosting, 2005.

G. I. Parisi, R. Kemker, J. L. Part, C. Kanan et al., Continual lifelong learning with neural networks: A review, 2018.

J. L. Part and O. Lemon, Incremental on-line learning of object classes using a combination of self-organizing incremental neural networks and deep convolutional neural networks, Workshop on Bio-inspired Social Robot Learning in Home Scenarios (IROS), 2016.

P. Rai, H. Daumé III, and S. Venkatasubramanian, Streamed learning: One-pass SVMs, IJCAI, vol.9, pp.1211-1216, 2009.

S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, iCaRL: Incremental classifier and representation learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

A. Radford, L. Metz, and S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, 2015.

A. Robins, Catastrophic forgetting, rehearsal and pseudorehearsal, Connection Science, vol.7, issue.2, pp.123-146, 1995.

A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell, Progressive neural networks, 2016.

T. Seidl, I. Assent, P. Kranen, R. Krieger et al., Indexing density models for incremental learning and anytime classification on data streams, Proceedings of the 12th international conference on extending database technology: advances in database technology, pp.311-322, 2009.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, Striving for simplicity: The all convolutional net, 2014.

T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford et al., Improved techniques for training gans, Advances in Neural Information Processing Systems, pp.2226-2234, 2016.

N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, Journal of Machine Learning Research, vol.15, issue.1, pp.1929-1958, 2014.

P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. Lecun, Pedestrian detection with unsupervised multi-stage feature learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.3626-3633, 2013.

H. Shin, J. K. Lee, J. Kim, and J. Kim, Continual learning with deep generative replay, Advances in Neural Information Processing Systems, pp.2990-2999, 2017.

I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton, On the importance of initialization and momentum in deep learning, ICML, vol.28, issue.3, pp.1139-1147, 2013.

R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber, Compete to compute, Advances in neural information processing systems, pp.2310-2318, 2013.

J. Serra, D. Surís, M. Miron, and A. Karatzoglou, Overcoming catastrophic forgetting with hard attention to the task, 2018.

I. Sutskever, O. Vinyals, and Q. Le, Sequence to sequence learning with neural networks, Advances in neural information processing systems, pp.3104-3112, 2014.

R. Venkatesan, H. Venkateswara, S. Panchanathan, and B. Li, A strategy for an uncompromising incremental learner, 2017.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature, vol.323, issue.6088, pp.533-536, 1986.

G. I. Webb, R. Hyde, H. Cao, H. L. Nguyen, and F. Petitjean, Characterizing concept drift, Data Mining and Knowledge Discovery, vol.30, issue.4, pp.964-994, 2016.

J. Yoon, E. Yang, J. Lee, and S. J. Hwang, Lifelong learning with dynamically expandable networks, 2018.

F. Zenke, B. Poole, and S. Ganguli, Continual learning through synaptic intelligence, 2017.

G. Zhou, K. Sohn, and H. Lee, Online incremental feature learning with denoising autoencoders, In Aistats, 2012.

H. Zhang, X. Xiao, and O. Hasegawa, A load-balancing self-organizing incremental neural network, IEEE Transactions on Neural Networks and Learning Systems, vol.25, issue.6, pp.1096-1105, 2014.

H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang et al., StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks, Proceedings of the IEEE International Conference on Computer Vision, pp.5907-5915, 2017.