R. Weber, J. Li, C. Soladié, and R. Séguier, A survey on databases of facial macro-expression and micro-expression, Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2018), Communications in Computer and Information Science, vol. 997, Cham, 2019.

J. Li, C. Soladié, R. Séguier, S.-J. Wang, and M. H. Yap, Spotting Micro-Expressions on Long Videos Sequences, 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), pp. 1-5, 2019.

J. See, M. H. Yap, J. Li, S.-J. Wang et al., MEGC 2019 - The Second Facial Micro-Expressions Grand Challenge, 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), pp. 1-5, 2019.

J. Li, C. Soladié, and R. Séguier, A Survey on Databases for Facial Micro-expression Analysis, VISIGRAPP (5: VISAPP), 2019.
URL : https://hal.archives-ouvertes.fr/hal-02181454

J. Li, C. Soladie, and R. Seguier, LTP-ML: Micro-Expression Detection by Recognition of Local Temporal Pattern of Facial Movements, Automatic Face & Gesture Recognition (FG 2018), pp.634-641, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01831201

Bibliography

M. Aadit, M. T. Mahin, and S. Juthi, Spontaneous micro-expression recognition using optimal firefly algorithm coupled with ISO-FLANN classification, Humanitarian Technology Conference (R10-HTC), pp. 714-717, 2017.

H. Abdi and L. J. Williams, Principal component analysis, Wiley interdisciplinary reviews: computational statistics, vol.2, pp.433-459, 2010.
URL : https://hal.archives-ouvertes.fr/hal-01259094

I. P. Adegun and H. B. Vadapalli, Automatic recognition of microexpressions using local binary patterns on three orthogonal planes and extreme learning machine, Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech), pp. 1-5, 2016.

B. Allaert, I. M. Bilasco, and C. Djeraba, Advanced local motion patterns for macro and micro facial expression recognition, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01857093

B. Allaert, I. M. Bilasco, C. Djeraba, A. Dahmane, and S. Larabi, Consistent optical flow maps for full and micro facial expression recognition, VISIGRAPP (5: VISAPP), pp. 235-242, 2017.

X. Ben, X. Jia, R. Yan, X. Zhang, and W. Meng, Learning effective binary descriptors for micro-expression recognition transferred by macroinformation, Pattern Recognition Letters, 2017.

L. A. Bernotas, P. E. Crago, and H. J. Chizeck, A discrete-time model of electrically stimulated muscle, IEEE Transactions on Biomedical Engineering, issue 9, pp. 829-838, 1986.

R. L. Birdwhistell, Communication without words, Ekistics, pp. 439-444, 1968.

D. Borza, R. Danescu, R. Itu, and A. Darabant, High-speed video system for micro-expression detection and recognition, Sensors, vol.17, issue.12, p.2913, 2017.

D. Borza, R. Itu, and R. Danescu, Micro expression detection and recognition from high speed cameras using convolutional neural networks, VISIGRAPP (5: VISAPP), 2018.

C. Chang and C. Lin, LIBSVM: a library for support vector machines, ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, p. 27, 2011.

M. Chiu, H. L. Liaw, Y. Yu, and C. Chou, Facial micro-expression states as an indicator for conceptual change in students' understanding of air pressure and boiling points, British Journal of Educational Technology

D. Chou, Efficacy of Hammerstein models in capturing the dynamics of isometric muscle stimulated at various frequencies, 2006.

A. K. Davison, C. Lansley, N. Costen, K. Tan, and M. H. Yap, SAMM: A spontaneous micro-facial movement dataset, IEEE Transactions on Affective Computing, vol. 9, issue 1, pp. 116-129, 2018.

A. Davison, W. Merghani, C. Lansley, C. Ng, and M. Yap, Objective micro-facial movement detection using FACS-based regions and baseline evaluation, Automatic Face & Gesture Recognition (FG 2018), pp. 642-649, 2018.

A. K. Davison, W. Merghani, and M. H. Yap, Objective classes for micro-facial expression recognition (submitted), Royal Society Open Science.

A. K. Davison, M. H. Yap, N. Costen, K. Tan, C. Lansley et al., Micro-facial movements: an investigation on spatio-temporal descriptors, European Conference on Computer Vision, pp. 111-123, 2014.

A. K. Davison, M. H. Yap, and C. Lansley, Micro-facial movement detection using individualised baselines and histogram-based descriptors, 2015 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1864-1869, 2015.

C. Duque, O. Alata, R. Emonet, A. Legrand, and H. Konik, Micro-expression spotting using the Riesz pyramid, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01699355

P. Ekman, Emotions revealed, 2003.

P. Ekman, Lie catching and microexpressions. The philosophy of deception, pp.118-133, 2009.

P. Ekman and W. Friesen, Facial action coding system: a technique for the measurement of facial movement, Consulting Psychologists, 1978.

P. Ekman and W. V. Friesen, Nonverbal leakage and clues to deception, Psychiatry, vol.32, issue.1, pp.88-106, 1969.

P. Ekman and W. V. Friesen, Constants across cultures in the face and emotion, Journal of Personality and Social Psychology, vol. 17, issue 2, p. 124, 1971.

J. Endres and A. Laidlaw, Micro-expression recognition training in medical students: a pilot study, BMC medical education, vol.9, issue.1, p.47, 2009.

M. G. Frank, M. Herbasz, K. Sinuk, A. Keller, and C. Nolan, I see how you feel: Training laypeople and professionals to recognize fleeting emotions, The Annual Meeting of the International Communication Association, 2009.

Y. S. Gan and S.-T. Liong, Bi-directional vectors from apex in CNN for micro-expression recognition.

I. Goodfellow, J. Pouget-abadie, M. Mirza, B. Xu, D. Warde-farley et al., Generative adversarial nets, Advances in neural information processing systems, pp.2672-2680, 2014.

J. Grobova, M. Colovic, M. Marjanovic, A. Njegus, H. Demirel et al., Automatic hidden sadness detection using micro-expressions, Automatic Face & Gesture Recognition (FG 2017), pp. 828-832, 2017.

J. Guo, S. Zhou, J. Wu, J. Wan, X. Zhu et al., Multi-modality network with visual and geometrical information for micro emotion recognition, Automatic Face & Gesture Recognition (FG 2017), pp. 814-819, 2017.

Y. Guo, Y. Tian, X. Gao, and X. Zhang, Micro-expression recognition based on local binary patterns from three orthogonal planes and nearest neighbor method, Neural Networks (IJCNN), 2014 International Joint Conference on, pp.3473-3479, 2014.

Y. Guo, C. Xue, Y. Wang, and M. Yu, Micro-expression recognition based on CBP-TOP feature with ELM, Optik - International Journal for Light and Electron Optics, vol. 126, issue 23, pp. 4446-4451, 2015.

E. A. Haggard and K. S. Isaacs, Micromomentary facial expressions as indicators of ego mechanisms in psychotherapy, Methods of research in psychotherapy, pp.154-165, 1966.

Y. Han, B. Li, Y. Lai, and Y. Liu, CFD: A collaborative feature difference method for spontaneous micro-expression spotting, 25th IEEE International Conference on Image Processing (ICIP), pp. 1942-1946, 2018.

X. Hao and M. Tian, Deep belief network based on double Weber local descriptor in micro-expression recognition, Advanced Multimedia and Ubiquitous Engineering, pp. 419-425, 2017.

S. L. Happy and A. Routray, Fuzzy histogram of optical flow orientations for micro-expression recognition, IEEE Transactions on Affective Computing, 2017.

S. L. Happy and A. Routray, Recognizing Subtle Micro-facial Expressions Using Fuzzy Histogram of Optical Flow Orientations and Feature Selection Methods, pp.341-368, 2018.

J. He, J. Hu, X. Lu, and W. Zheng, Multi-task mid-level feature learning for micro-expression recognition, Pattern Recognition, vol.66, pp.44-52, 2017.

C. House and R. Meyer, Preprocessing and descriptor features for facial micro-expression recognition, 2015.

X. Huang, S. Wang, X. Liu, G. Zhao, X. Feng et al., Discriminative spatiotemporal local binary pattern with revisited integral projection for spontaneous facial micro-expression recognition, IEEE Transactions on Affective Computing, 2017.

X. Huang, S. Wang, G. Zhao, and M. Pietikäinen, Facial micro-expression recognition using spatiotemporal local binary pattern with integral projection, Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 1-9, 2015.

X. Huang, S. Wang, X. Liu, G. Zhao, X. Feng et al., Spontaneous facial micro-expression recognition using discriminative spatiotemporal local binary pattern with an improved integral projection, 2016.

X. Huang and G. Zhao, Spontaneous facial micro-expression analysis using spatiotemporal local Radon-based binary pattern, Frontiers and Advances in Data Science (FADS), pp. 159-164, 2017.

X. Huang, G. Zhao, X. Hong, W. Zheng, and M. Pietikäinen, Spontaneous facial micro-expression analysis using spatiotemporal completed local quantized patterns, Neurocomputing, vol.175, pp.564-578, 2016.

P. Husák, J. Čech, and J. Matas, Spotting facial micro-expressions in the wild, 22nd Computer Vision Winter Workshop, 2017.

R. E. Jack, O. G. B. Garrod, H. Yu, R. Caldara, and P. G. Schyns, Facial expressions of emotion are not culturally universal, Proceedings of the National Academy of Sciences, vol. 109, pp. 7241-7244, 2012.

D. K. Jain, Z. Zhang, and K. Huang, Random walk-based feature learning for micro-expression recognition, Pattern Recognition Letters, 2018.

X. Jia, X. Ben, H. Yuan, K. Kpalma, and W. Meng, Macro-to-micro transformation model for micro-expression recognition, Journal of Computational Science, 2017.

H.-Q. Khor, J. See, R. C.-W. Phan, and W. Lin, Enriched long-term recurrent convolutional network for facial micro-expression recognition, Automatic Face & Gesture Recognition (FG 2018), pp. 667-674, 2018.

D. H. Kim, W. J. Baddar, and Y. M. Ro, Micro-expression recognition with expression-state constrained spatio-temporal feature representations, Proceedings of the 2016 ACM on Multimedia Conference, pp. 382-386, 2016.

A. C. Le Ngo, A. Johnston, R. C.-W. Phan, and J. See, Micro-expression motion magnification: Global Lagrangian vs. local Eulerian approaches, Automatic Face & Gesture Recognition (FG 2018), pp. 650-656, 2018.

A. C. Le Ngo, Y.-H. Oh, R. C.-W. Phan, and J. See, Eulerian emotion magnification for subtle expression recognition, Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pp. 1243-1247, 2016.

A. C. Le Ngo, R. C.-W. Phan, and J. See, Spontaneous subtle expression recognition: Imbalanced databases and solutions, Asian Conference on Computer Vision, pp. 33-48, 2014.

A. C. Le Ngo, J. See, and R. C.-W. Phan, Sparsity in dynamics of spontaneous subtle emotion: Analysis & application, IEEE Transactions on Affective Computing, 2017.

J. Li, C. Soladié, and R. Séguier, LTP-ML: Micro-expression detection by recognition of local temporal pattern of facial movements, Automatic Face & Gesture Recognition (FG 2018), pp. 634-641, 2018.

J. Li, C. Soladié, R. Séguier, S.-J. Wang, and M. H. Yap, Spotting micro-expressions on long videos sequences, Automatic Face & Gesture Recognition (FG 2019), 2019.

X. Li, X. Hong, A. Moilanen, X. Huang, T. Pfister et al., Towards reading hidden emotions: A comparative study of spontaneous micro-expression spotting and recognition methods, IEEE Transactions on Affective Computing, 2017.

X. Li, T. Pfister, X. Huang, G. Zhao, and M. Pietikäinen, A spontaneous micro-expression database: Inducement, collection and baseline, pp.1-6, 2013.

X. Li, J. Yu, and S. Zhan, Spontaneous facial micro-expression detection based on deep learning, 2016 IEEE 13th International Conference on, pp.1130-1134, 2016.

Y. Li, X. Huang, and G. Zhao, Can micro-expression be recognized based on single apex frame?, 25th IEEE International Conference on Image Processing (ICIP), pp.3094-3098, 2018.

C. H. Lim and K. M. Goh, Fuzzy qualitative approach for micro-expression recognition, Proceedings of APSIPA Annual Summit and Conference 2017, pp. 12-15, 2017.

C. Lin, F. Long, J. Huang, and J. Li, Micro-expression recognition based on spatiotemporal Gabor filters, Eighth International Conference on Information Science and Technology (ICIST), pp. 487-491, 2018.

S.-T. Liong, Y. S. Gan, W.-C. Yau, Y.-C. Huang, and L.-K. Tan, OFF-ApexNet on micro-expression recognition system, 2018.

S.-T. Liong, J. See, R. C.-W. Phan, A. C. Le Ngo et al., Subtle expression recognition using optical strain weighted features, Asian Conference on Computer Vision, pp. 644-657, 2014.

S.-T. Liong, J. See, R. C.-W. Phan, Y.-H. Oh et al., Spontaneous subtle expression detection and recognition based on facial strain, Signal Processing: Image Communication, vol. 47, pp. 170-182, 2016.

S.-T. Liong, J. See, R. C.-W. Phan, K. Wong et al., Hybrid facial regions extraction for micro-expression recognition system, Journal of Signal Processing Systems, pp. 1-17, 2017.

S.-T. Liong, J. See, R. C.-W. Phan, and K. Wong, Less is more: Micro-expression recognition from video using apex frame, 2016.

S.-T. Liong, J. See, K. Wong, A. C. Le Ngo, Y.-H. Oh et al., Automatic apex frame spotting in micro-expression database, Pattern Recognition (ACPR), 2015 3rd IAPR Asian Conference on, pp. 665-669, 2015.

S.-T. Liong, J. See, K. Wong, and R. C.-W. Phan, Less is more: Micro-expression recognition from video using apex frame, Signal Processing: Image Communication, vol. 62, pp. 82-92, 2018.

S.-T. Liong, J. See, K. Wong, and R. C.-W. Phan, Automatic micro-expression recognition from long video using a single spotted apex, Asian Conference on Computer Vision, pp. 345-360, 2016.

S.-T. Liong and K. Wong, Micro-expression recognition using apex frame with phase information, Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 534-537, 2017.

Y. Liu, B. Li, and Y. Lai, Sparse MDMO: Learning a discriminative feature for spontaneous micro-expression recognition, IEEE Transactions on Affective Computing, 2018.

Y. Liu, J. Zhang, W. Yan, S. Wang, G. Zhao et al., A main directional mean optical flow feature for spontaneous microexpression recognition, IEEE Transactions on Affective Computing, vol.7, issue.4, pp.299-310, 2016.

H. Lu, K. Kpalma, and J. Ronsin, Micro-expression detection using integral projections, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01622626

H. Lu, K. Kpalma, and J. Ronsin, Motion descriptors for microexpression recognition, Image Communication, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01834879

Z. Lu, Z. Luo, H. Zheng, J. Chen, and W. Li, A Delaunay-based temporal coding model for micro-expression recognition, Asian Conference on Computer Vision, pp. 698-711, 2014.

I. Lusi, J. Junior, J. Gorbova, X. Baró, S. Escalera et al., Joint challenge on dominant and complementary emotion recognition using micro emotion features and head-pose estimation: Databases, 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp.809-813, 2017.

H. Ma, G. An, S. Wu, and F. Yang, A region histogram of oriented optical flow (RHOOF) feature for apex frame spotting in micro-expression, Intelligent Signal Processing and Communication Systems (ISPACS), 2017 International Symposium on, pp. 281-286, 2017.

B. W. Matthews, Comparison of the predicted and observed secondary structure of T4 phage lysozyme, Biochimica et Biophysica Acta (BBA) - Protein Structure, vol. 405, issue 2, pp. 442-451, 1975.

S. M. Mavadati, M. H. Mahoor, K. Bartlett, P. Trinh, and J. F. Cohn, DISFA: A spontaneous facial action intensity database, IEEE Transactions on Affective Computing, vol. 4, issue 2, pp. 151-160, 2013.

W. Merghani, A. Davison, and M. Yap, Facial micro-expressions grand challenge 2018: Evaluating spatio-temporal features for classification of objective classes, Automatic Face & Gesture Recognition (FG 2018), pp.662-666, 2018.

A. Moilanen, G. Zhao, and M. Pietikäinen, Spotting rapid facial movements from videos using appearance-based feature difference analysis, Pattern Recognition (ICPR), 2014 22nd International Conference on, pp.1722-1727, 2014.

Y.-H. Oh, A. C. Le Ngo, R. C.-W. Phan, J. See, and H. Ling, Intrinsic two-dimensional local structures for micro-expression recognition, Acoustics, Speech and Signal Processing (ICASSP), pp. 1851-1855, 2016.

Y.-H. Oh, A. C. Le Ngo, J. See, S.-T. Liong, R. C.-W. Phan et al., Monogenic Riesz wavelet representation for micro-expression recognition, Digital Signal Processing (DSP), pp. 1237-1241, 2015.

P. Anju, M. K. Sureshbabu, and K. P. Arjun, Facial micro-expression recognition using feature extraction, International Journal of Computer Science and Engineering Communications, vol.5, issue.4, pp.1702-1708, 2017.

S.-Y. Park, S.-H. Lee, and Y. M. Ro, Subtle facial expression recognition using adaptive magnification of discriminative facial motion, Proceedings of the 23rd ACM International Conference on Multimedia, pp. 911-914, 2015.

D. Patel, G. Zhao, and M. Pietikäinen, Spatiotemporal integration of optical flow vectors for micro-expression detection, International Conference on Advanced Concepts for Intelligent Vision Systems, pp.369-380, 2015.

M. Peng, C. Wang, T. Chen, G. Liu, and X. Fu, Dual temporal scale convolutional neural network for micro-expression recognition. Frontiers in Psychology, vol.8, p.1745, 2017.

M. Peng, Z. Wu, Z. Zhang, and T. Chen, From macro to micro expression recognition: Deep learning on small datasets using transfer learning, Automatic Face & Gesture Recognition (FG 2018), pp.657-661, 2018.

T. Pfister, X. Li, G. Zhao, and M. Pietikäinen, Recognising spontaneous facial micro-expressions, Computer Vision (ICCV), 2011 IEEE International Conference on, pp.1449-1456, 2011.

S. Polikovsky, Facial micro-expressions recognition using high speed camera and 3D-gradients descriptor, Conference on Imaging for Crime Detection and Prevention, vol. 6, 2009.

S. Polikovsky, Y. Kameda, and Y. Ohta, Facial micro-expression detection in hi-speed video based on facial action coding system (FACS), IEICE Transactions on Information and Systems, vol. 96, pp. 81-92, 2013.

D. M. W. Powers, Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation, 2011.

F. Qu, S. Wang, W. Yan, H. Li, S. Wu et al., CAS(ME)²: a database for spontaneous macro-expression and micro-expression spotting and recognition, IEEE Transactions on Affective Computing, 2017.

K. Radlak, M. Bozek, and B. Smolka, Silesian deception database: Presentation and analysis, Proceedings of the 2015 ACM on Workshop on Multimodal Deception Detection, pp.29-35, 2015.

M. Shreve, S. Godavarthy, D. Goldgof, and S. Sarkar, Macro- and micro-expression spotting in long videos using spatio-temporal strain, pp. 51-56, 2011.

M. Shreve, S. Godavarthy, V. Manohar, D. Goldgof, and S. Sarkar, Towards macro-and micro-expression spotting in video using strain patterns, Applications of Computer Vision (WACV), pp.1-6, 2009.

P. A. Stewart, B. M. Waller, and J. N. Schubert, Presidential speechmaking style: Emotional response to micro-expressions of facial affect, Motivation and Emotion, vol.33, issue.2, p.125, 2009.

N. Stoiber, Modeling emotional facial expressions and their dynamics for realistic interactive facial animation on virtual characters, 2010.
URL : https://hal.archives-ouvertes.fr/tel-00558851

M. Takalkar and M. Xu, Image based facial micro-expression recognition using deep learning on small datasets, The International Conference on Digital Image Computing: Techniques and Applications, 2017.

T. Tran, X. Hong, and G. Zhao, Sliding window based micro-expression spotting: A benchmark, International Conference on Advanced Concepts for Intelligent Vision Systems, pp.542-553, 2017.

A. Vinciarelli, A. Dielmann, S. Favre, and H. Salamin, Canal9: A database of political debates for analysis of social interactions, Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on, 2009.

S. Wang, H. Chen, W. Yan, Y. Chen, and X. Fu, Face recognition and micro-expression recognition based on discriminant tensor subspace analysis plus extreme learning machine, Neural Processing Letters, vol. 39, pp. 25-43, 2014.

S. Wang, B. Li, Y. Liu, W. Yan, X. Ou et al., Micro-expression recognition with small sample size by transferring long-term convolutional neural network, Neurocomputing, 2018.

S. Wang, S. Wu, and X. Fu, A main directional maximal difference analysis for spotting micro-expressions, Asian Conference on Computer Vision, pp.449-461, 2016.

S. Wang, S. Wu, X. Qian, J. Li, and X. Fu, A main directional maximal difference analysis for spotting facial movements from longterm videos, Neurocomputing, vol.230, pp.382-389, 2017.

S. Wang, W. Yan, X. Li, G. Zhao, and X. Fu, Microexpression recognition using dynamic textures on tensor independent color space, Pattern Recognition (ICPR), 2014 22nd International Conference on, pp.4678-4683, 2014.

S. Wang, W. Yan, X. Li, G. Zhao, C. Zhou et al., Micro-expression recognition using color spaces, IEEE Transactions on Image Processing, vol.24, issue.12, pp.6034-6047, 2015.

S. Wang, W. Yan, T. Sun, G. Zhao, and X. Fu, Sparse tensor canonical correlation analysis for micro-expression recognition, Neurocomputing, vol.214, pp.218-232, 2016.

S. Wang, W. Yan, G. Zhao, X. Fu, and C. Zhou, Micro-expression recognition using robust principal component analysis and local spatiotemporal directional features, ECCV Workshops (1), pp.325-338, 2014.

Y. Wang, J. See, Y. Oh, R. Phan, Y. Rahulamathavan et al., Effective recognition of facial micro-expressions with video motion magnification, Multimedia Tools and Applications, vol.76, issue.20, pp.21665-21690, 2017.

Y. Wang, J. See, R. C.-W. Phan, and Y.-H. Oh, LBP with six intersection points: Reducing redundant information in LBP-TOP for micro-expression recognition, Asian Conference on Computer Vision, pp. 525-537, 2014.

Y. Wang, J. See, R. C.-W. Phan, and Y.-H. Oh, Efficient spatiotemporal local binary patterns for spontaneous facial micro-expression recognition, PLoS ONE, vol. 10, issue 5, e0124674, 2015.

G. Warren, E. Schertler, and P. Bull, Detecting deception from emotional and unemotional cues, Journal of Nonverbal Behavior, vol.33, issue.1, pp.59-69, 2009.

R. Weber, J. Li, C. Soladié, and R. Séguier, A survey on databases of facial macro-expression and micro-expression (submitted), Communications in Computer and Information Science.

R. Weber, C. Soladié, and R. Séguier, A survey on databases for facial expression analysis, Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), vol. 5, pp. 73-84, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01831396

Q. Wu, X. Shen, and X. Fu, The machine knows what you are hiding: an automatic micro-expression recognition system, Affective Computing and Intelligent Interaction, pp.152-162, 2011.

Z. Xia, X. Feng, J. Peng, X. Peng, and G. Zhao, Spontaneous micro-expression spotting via geometric deformation modeling, Computer Vision and Image Understanding, vol.147, pp.87-94, 2016.

X. Xiong and F. De la Torre, Supervised descent method and its applications to face alignment, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 532-539, 2013.

F. Xu, J. Zhang, and J. Z. Wang, Microexpression identification and categorization using a facial dynamics map, IEEE Transactions on Affective Computing, vol.8, issue.2, pp.254-267, 2017.

W. Yan and Y. Chen, Measuring dynamic micro-expressions via feature extraction methods, Journal of Computational Science, 2017.

W. Yan, X. Li, S. Wang, G. Zhao, Y. Liu et al., CASME II: An improved spontaneous micro-expression database and the baseline evaluation, PLoS ONE, vol. 9, issue 1, e86041, 2014.

W. Yan, S. Wang, Y. Liu, Q. Wu, and X. Fu, For microexpression recognition: Database and suggestions, Neurocomputing, vol.136, pp.82-87, 2014.

W. Yan, Q. Wu, Y. Liu, S. Wang, and X. Fu, CASME database: a dataset of spontaneous micro-expressions collected from neutralized faces, pp.1-7, 2013.

S. Yao, N. He, H. Zhang, and O. Yoshie, Micro-expression recognition by feature points tracking, 10th International Conference on, pp.1-4, 2014.

M. H. Yap, J. See, X. Hong, and S.-J. Wang, Facial micro-expressions grand challenge 2018 summary, Automatic Face & Gesture Recognition (FG 2018), pp. 675-678, 2018.

E. Zarezadeh and M. Rezaeian, Micro expression recognition using the Eulerian video magnification method, BRAIN. Broad Research in Artificial Intelligence and Neuroscience, vol. 7, issue 3, pp. 43-54, 2016.

P. Zhang, X. Ben, R. Yan, C. Wu, and C. Guo, Micro-expression recognition system, Optik-International Journal for Light and Electron Optics, vol.127, issue.3, pp.1395-1400, 2016.

S. Zhang, B. Feng, Z. Chen, and X. Huang, Microexpression recognition by aggregating local spatio-temporal patterns, International Conference on Multimedia Modeling, pp.638-648, 2017.

X. Zhang, L. Yin, J. F. Cohn, S. Canavan et al., BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database, Image and Vision Computing, vol. 32, issue 10, pp. 692-706, 2014.

Z. Zhang, T. Chen, H. Meng, G. Liu, and X. Fu, SMEConvNet: A convolutional neural network for spotting spontaneous facial micro-expression from long videos, IEEE Access, vol. 6, pp. 71143-71151, 2018.

Y. Zhao and J. Xu, Necessary morphological patches extraction for automatic micro-expression recognition, Applied Sciences, vol.8, issue.10, p.1811, 2018.

H. Zheng, Micro-expression recognition based on 2D Gabor filter and sparse representation, Journal of Physics: Conference Series, vol. 787, p. 012013, 2017.

H. Zheng, X. Geng, and Z. Yang, A relaxed K-SVD algorithm for spontaneous micro-expression recognition, Pacific Rim International Conference on Artificial Intelligence, pp. 692-699, 2016.

X. Zhu, X. Ben, S. Liu, R. Yan, and W. Meng, Coupled source domain targetized with updating tag vectors for micro-expression recognition, Multimedia Tools and Applications, pp.1-20, 2017.

Y. Zong, X. Huang, W. Zheng, Z. Cui, and G. Zhao, Learning a target sample re-generator for cross-database micro-expression recognition, Proceedings of the 2017 ACM on Multimedia Conference, pp.872-880, 2017.

Y. Zong, X. Huang, W. Zheng, Z. Cui, and G. Zhao, Learning from hierarchical spatiotemporal descriptors for micro-expression recognition, IEEE Transactions on Multimedia, 2018.

Y. Zong, W. Zheng, X. Huang, J. Shi, Z. Cui et al., Domain regeneration for cross-database micro-expression recognition, IEEE Transactions on Image Processing, vol.27, issue.5, pp.2484-2498, 2018.

Research trend in MESR. The number of articles on MESR grows year by year, mainly in the field of ME recognition (bottom column). Research on ME spotting has not yet attracted enough attention (top column).

Example of a local temporal pattern (LTP) during an ME located in the right eyebrow region, over a period of 300 ms (the average ME duration).

This curve forms an S-shaped pattern: it reaches its peak after about 150 ms, then remains stable or decreases slightly. This pattern is specific to ME movements and is called an S-pattern because of the shape of the curve. (Video: Sub01_EP01_5 of CASME I)

Overview of our method. The proposed method has three processing steps: pre-processing, feature extraction and micro-expression spotting. We mix global processing (the whole face) and local processing (the ROIs). One specificity of the process lies in the use of local temporal patterns (LTPs), which are relevant for micro-expression spotting since micro-expressions are brief, local movements. The LTPs, including the S-patterns, are then used as training samples to build the machine learning model (SVM) for classification. Finally, a spatial and temporal fusion is performed to eliminate false positives such as eye blinks.

A local video sequence ROI j with N frames (video duration of about 3 s) is processed by PCA along the time axis. The first PCA components preserve the main motion of the grey-level texture over this ROI.
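The PCA step described in this caption can be sketched as follows; this is an illustrative NumPy version in which the ROI size, the frame count and the function name are assumptions, not the thesis configuration:

```python
import numpy as np

def roi_pca_features(roi_frames, n_components=3):
    """Project a local ROI video (N frames) onto its first principal
    components along the time axis, keeping the main texture motion."""
    # roi_frames: (N, H, W) grey-level patches for one ROI
    n = roi_frames.shape[0]
    X = roi_frames.reshape(n, -1).astype(float)  # one flattened frame per row
    X -= X.mean(axis=0)                          # centre over time
    # SVD of the centred data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T               # (N, n_components) trajectory

rng = np.random.default_rng(0)
frames = rng.random((30, 8, 8))                  # toy 30-frame ROI sequence
feats = roi_pca_features(frames)
print(feats.shape)                               # (30, 3)
```

Each row of the result is the projection of one frame, so the first column traced over time is the kind of one-dimensional motion curve from which the LTPs are built.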

The video sample comes from CASME I (© Xiaolan Fu).

The LTPs go through 3 annotation steps: per-frame annotation, ROI selection from the AUs and LTP pattern selection. The annotated S-patterns and non-S-patterns are then passed to the SVM classifier training stage. 3-8a and 3-8b show the LTPs during the micro-expression movement at ROI 32 and ROI 38 (the right and left corners of the mouth) in the video Sub01_EP01_5; the emotion of this video is labeled as joy, which is often expressed by mouth-corner displacement. 3-8e and 3-8f are the LTPs during the micro-expression movement at ROI 5 and ROI 6 (the inside corners of the right and left eyebrows) in the video Sub04_EP12; the emotion of this video is labeled as stress, which is often expressed by the movement of the eyebrows. The pattern of the curves in these four images is similar even though the ROIs and subjects are different; we call it the S-pattern. 3-8c and 3-8g show the LTPs of other ROIs at the same time as 3-8a/3-8b and 3-8e/3-8f respectively; the pattern differs from the S-pattern because the micro-expression does not occur in these regions. 3-8d and 3-8h illustrate the LTPs in the same ROIs as 3-8a and 3-8f respectively, but at a different moment in the video.

Label annotation per frame. The rectangle represents an entire video, and the interval from onset to offset is the ME sequence. The S-pattern is expressed at the beginning of the onset; hence, frames with an S-pattern lie in this range. LTP selection for the training stage: all the LTPs are classified into 2 classes, S-patterns and non-S-patterns.

The Hammerstein model represents muscle movement well. S-patterns are caused by facial muscle movement, and their variation curve is similar to that of the local muscle movement; thus, S-patterns can be well synthesized by a Hammerstein model. The model is a concatenation of two modules: a static non-linearity module (which manipulates the magnitude) and a second-order linear dynamics module (which simulates the movement pattern). LTP pattern selection of the training stage for local classification: LTPs labeled as S-patterns pass through this process so that only reliable S-patterns are kept.
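As a rough illustration of this two-module structure, the sketch below feeds a constant command through an example static non-linearity and an Euler-discretised second-order linear system; the quadratic non-linearity and all parameter values are invented for the example, not taken from the thesis:

```python
import numpy as np

def hammerstein_step_response(p, zeta, omega, T=0.3, fs=100.0):
    """Toy Hammerstein model: a static non-linearity followed by a
    second-order linear system, driven by a constant command (step).
    All parameters here are illustrative, not the thesis values."""
    n = int(round(T * fs))
    dt = 1.0 / fs
    u = np.ones(n)                       # constant command
    w = p * u**2 + u                     # example static non-linearity
    y = np.zeros(n)                      # system output (the S-like curve)
    yd = 0.0                             # first derivative of y
    # Euler discretisation of y'' + 2*zeta*omega*y' + omega**2*y = omega**2*w
    for k in range(1, n):
        ydd = omega**2 * (w[k - 1] - y[k - 1]) - 2.0 * zeta * omega * yd
        yd += dt * ydd
        y[k] = y[k - 1] + dt * yd
    return y

# 300 ms curve at 100 fps: rises, peaks, then settles near its plateau
curve = hammerstein_step_response(p=0.2, zeta=0.8, omega=40.0)
```

With an under-damped second-order module (zeta below 1), the response rises, peaks and stabilises, which is the S-like shape the captions describe.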

The basic system identification process for the Hammerstein model. Given the data constructed from the constant command and a chosen S-pattern_O, the corresponding Hammerstein model can be estimated by system identification. In other words, the parameters of the non-linearity module (p), those of the linear module (ζ, ω), and the system estimation error (E_H) are determined.
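As a rough illustration of this identification step, the sketch below fits a magnitude gain p and second-order parameters (ζ, ω) to one S-pattern curve by least squares. It assumes the static non-linearity reduces to a simple gain and the command is a unit step, which is a simplification of the pipeline described in the caption; all function names are illustrative, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def second_order_step(t, zeta, omega):
    """Step response of an underdamped second-order linear system."""
    wd = omega * np.sqrt(max(1.0 - zeta ** 2, 1e-9))  # damped frequency
    return 1.0 - np.exp(-zeta * omega * t) * (
        np.cos(wd * t) + zeta * omega / wd * np.sin(wd * t)
    )

def identify_hammerstein(t, s_pattern):
    """Estimate (p, zeta, omega) and the estimation error E_H for one
    S-pattern_O, assuming a static gain p followed by a second-order
    linear module driven by a constant (step) command."""
    def residual(theta):
        p, zeta, omega = theta
        return p * second_order_step(t, zeta, omega) - s_pattern

    sol = least_squares(residual, x0=[1.0, 0.5, 10.0],
                        bounds=([0.0, 0.01, 0.1], [10.0, 0.99, 100.0]))
    p, zeta, omega = sol.x
    e_h = np.sqrt(np.mean(residual(sol.x) ** 2))  # RMS estimation error
    return p, zeta, omega, e_h
```

Fitting a curve generated by the same model family recovers the parameters with a small E_H, mirroring how a well-formed S-pattern yields a low estimation error.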

One example of a real S-pattern (S-pattern_O) and the corresponding S-pattern (S-pattern_ST) synthesized by the estimated Hammerstein model.

Each S-pattern (S-pattern_OS) has been associated with its own identified Hammerstein model (ζ, ω, E_H). The upper-left figure shows the distribution of (ζ, ω) (x: ζ, y: ω) and the associated error E_H (heat map); (ζ, ω) is densely distributed in the top-left corner with a small error. In the six images below, the blue curves show the original S-patterns (S-pattern_OS). Then, based on the estimated Hammerstein model with the original (ζ, ω), the synthesized S-patterns (S-pattern_ST0) are generated (red curves). The curve shape of the S-patterns in these six figures varies along with the change of (ζ, ω).

The corresponding (ζ, ω) for the last three curve images lie far from the upper-left region, and these curves have different shapes compared with the first three. The distribution of (ζ, ω) is thus associated with the dynamic property of the ME (the shape of the S-pattern). Hence, we are able both to filter wrongly-labeled S-patterns_O using the E_H values and to synthesize virtual S-patterns based on the value range of (ζ, ω).

Parameter configuration for LTP filtering and S-pattern synthesizing. For one database, each selected S-pattern is treated separately to estimate its specific Hammerstein model. Based on the obtained data, the mean values (ζ̄, ω̄, Ē_H) and the standard deviations (σ_ζ, σ_ω) can be calculated.

Each original S-pattern (S-pattern_O) passes through system identification to obtain its estimation error E_H. By comparing E_H with the threshold T_E, the S-pattern_O is either kept as an S-pattern after LTP filtering (S-pattern_OF) or removed from the training dataset.
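The filtering rule described here is a simple threshold test; a minimal sketch could look as follows, with the default T_E = 0.25 taken from the evaluation chapter and the function name being illustrative.

```python
def filter_s_patterns(s_patterns, errors, t_e=0.25):
    """LTP filtering: keep only the S-patterns_O whose Hammerstein
    estimation error E_H stays below the threshold T_E; the others are
    removed from the training dataset as likely wrongly-labeled."""
    return [sp for sp, e_h in zip(s_patterns, errors) if e_h < t_e]
```

For example, with errors [0.1, 0.3, 0.2], only the first and third patterns survive the filter.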

The curve shape in 4-9a represents a movement which begins to fade out; 4-9b and 4-9c show facial movements which are about to begin.
Flow chart of S-pattern synthesizing. The number of generation loops is defined by n. Once (ζ_i, ω_i) is determined, along with the S-pattern_OF, the specific Hammerstein model is constructed for synthesizing.
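A sketch of this synthesis loop, assuming (ζ_i, ω_i) are drawn from normal distributions around the database means (the normal-distribution variant is the one compared against Poisson sampling later in the tables) and the linear module is simulated as a second-order step response scaled by a magnitude p. Parameter names are illustrative.

```python
import numpy as np

def synthesize_s_patterns(t, p, zeta_mean, zeta_std, omega_mean, omega_std,
                          n=5, seed=None):
    """Generate n virtual S-patterns: draw (zeta_i, omega_i) from normal
    distributions fitted on the filtered S-patterns_OF, then simulate the
    second-order step response scaled by the magnitude p."""
    rng = np.random.default_rng(seed)
    curves = []
    for _ in range(n):
        zeta = float(np.clip(rng.normal(zeta_mean, zeta_std), 0.01, 0.99))
        omega = max(float(rng.normal(omega_mean, omega_std)), 0.1)
        wd = omega * np.sqrt(1.0 - zeta ** 2)  # damped frequency
        curve = p * (1.0 - np.exp(-zeta * omega * t)
                     * (np.cos(wd * t) + zeta * omega / wd * np.sin(wd * t)))
        curves.append(curve)
    return curves
```

Each generated curve starts at zero (no displacement at onset) and settles toward the magnitude p, which matches the qualitative shape of an S-pattern.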

In the first step, the face image is divided into several blocks, and LBP is computed per pixel. The feature for the j-th block on the current frame (CF) is the LBP histogram per ROI after normalization. The second step is the χ²-distance computation, where i denotes the i-th bin in the histogram. The AFF (average feature frame) is the feature vector averaging the tail frame and the head frame, where the tail frame is the K/2-th frame before the CF and the head frame is the K/2-th frame after the CF. The third step obtains the final feature difference value C_CF for the current frame; F is obtained from the first M blocks with the biggest feature difference values. The fourth step spots micro-expressions by setting a threshold, where p is an empirical value.
Organization of Chapter 5 based on our four contributions.
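The core of the second step is the χ² distance between the current-frame histogram and the AFF. A minimal sketch, with illustrative function names:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalized LBP histograms,
    summed over the histogram bins i (eps avoids division by zero)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def feature_difference(cf_hist, tail_hist, head_hist):
    """Feature difference for one block: chi2 distance between the
    current-frame (CF) histogram and the AFF, i.e. the average of the
    tail-frame and head-frame histograms."""
    aff = (np.asarray(tail_hist, float) + np.asarray(head_hist, float)) / 2.0
    return chi2_distance(cf_hist, aff)
```

Identical histograms give a distance of zero, while a frame whose appearance departs from its temporal neighborhood (as during a micro-expression) yields a large value.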

Result evaluation according to the number of generation loops n for the combination of LTP filtering and S-pattern synthesizing on CASME II. T_E is set to 0.25 for LTP filtering. 5-11a shows the ratio between S-patterns and the total quantity of LTPs for training, and 5-11b illustrates the F1-score for ME spotting. The parameter n for S-pattern synthesizing stops at 5 for CASME II, because the amount of S-patterns in the training dataset is already large enough and the F1-score has already begun to decline. Comparing these two figures, the spotting method performs best when the proportion of S-patterns is around 0.3; beyond that, the data augmentation process also synthesizes more wrongly-labeled S-patterns.

Unlike the curve for the S-pattern amount, in 5-12b the curve of F1-score_fr increases at the beginning when there are more samples for training, then starts to decline as the filtering process conserves too many wrongly-labeled S-patterns.

Histogram of ζ and ω of the linear model for S-patterns.

C. Distance calculation for one ROI sequence ROI_j^m in video clip I_m.

Characteristics summary for micro-expression databases (part 1). Databases are sorted in alphabetical order. The following formatting distinguishes databases: normal for posed databases, bold for spontaneous databases, italic for in-the-wild databases; * means that the database is not available online. MaE: macro-expression, PSR: participants' self-report.

Characteristics summary for micro-expression databases (part 2). Databases are sorted in alphabetical order. The following formatting distinguishes databases: normal for posed databases, bold for spontaneous databases, italic for in-the-wild databases; * means that the database is not available online. NP: neutralization paradigm, PD: Polikovsky's database, GD: Grobova's database, PSR: participants' self-report.

Characteristics summary for micro-expression databases (part 3). The following formatting distinguishes databases: normal for posed databases, bold for spontaneous databases, italic for in-the-wild databases. PSR: participants' self-report. HS: high-speed camera, VIS: normal visual camera, NIR: near-infrared camera.

Categories and characteristics of ME databases. The characteristics are coded to simplify further representation.

Databases are sorted in alphabetical order. The following formatting distinguishes databases: normal for posed databases, bold for spontaneous databases, italic for in-the-wild databases; * means that the database is not available online. 2D V: 2D video. SMIC and SMIC-E both have three sub-classes: NIR, VIS and HS. The HS sub-class of SMIC/SMIC-E is separated from the other two because of the different number of ME video samples.

Number of frames for a micro-expression of average duration (300 ms) depending on different FPS.
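The arithmetic behind this table is simply duration × frame rate; a minimal sketch (the function name is illustrative):

```python
def frames_for_duration(fps, duration_ms=300):
    """Number of frames covering a micro-expression of average
    duration (300 ms by default) at a given frame rate."""
    return round(fps * duration_ms / 1000)
```

For instance, a 300 ms micro-expression spans only 9 frames at 30 FPS but 60 frames at 200 FPS, which is why high-speed cameras are preferred for ME databases.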

The number and frequency of ME recognition articles, according to the metrics used. CM means the confusion matrix; time includes the training time, recognition time and computation/run time.

Summary of the number of published articles, with their corresponding metrics and databases. CAS I: CASME I; CAS II: CASME II; SMIC includes SMIC and SMIC-E. ACC: accuracy, CM: confusion matrix.

Summary of emotion classes for micro-expression recognition. The two most commonly used emotion classes are highlighted in bold. P: positive, N: negative, H: happiness, D: disgust, SU: surprise, R: repression, T: tense, F: fear, C: contempt, SA: sadness.

Published spontaneous per-frame micro-expression spotting methods and their corresponding general metrics. The method highlighted in bold is the most commonly used method for comparison. DF.1: apex/onset spotting methods, DF.2: feature difference methods, DF.3: machine learning methods.

Impact of T_CN on the fusion process. CN is the average value of the normalization coefficient for each ROI sequence.

Result evaluation of LTP-SpFS and comparison with the state-of-the-art method. The (LBP-χ²)+ method represents LBP-χ² with a supplementary merge process. F1-score_fr means the F1-score of an evaluation per frame; F1-score_I means the metric proposed by MEGC (F1-score of an evaluation per interval).

S-patterns synthesized by the Hammerstein model outperform those generated by a GAN for micro-expression spotting.

ME spotting results in terms of F1-score_fr with data augmentation by the Hammerstein model (HM). LTP-ML represents our method without the Hammerstein model; LTP-SpF is our method with HM but only LTP filtering; LTP-SpS is our method with HM but only S-pattern synthesizing; LTP-SpFS represents the whole process with the Hammerstein model (LTP filtering + S-pattern synthesizing). The spotting results for SpF and SpS are better than LTP-ML on CASME I because the size of the S-pattern dataset for the training stage is increased. In addition, the combination of these two steps further improves the spotting performance due to the larger volume of S-pattern data.

Spotting performance comparison between S-pattern synthesizing with a normal distribution and with a Poisson distribution.

Parameter configuration for SAMM and CAS(ME)² depending on FPS and facial resolution. L_window is the length of the sliding window W_video, L_overlap is the overlap size between sliding windows, and L_interval is the interval length of W_ROI. The facial resolution given in the table is the average value over the entire database.