List of figures

Light-field segmentation results on real datasets

Example of angular patches

The super-ray update step

Average displacement of the super-ray centroids with respect to the number of iterations

Different super-ray evaluation metrics across different parameters for a synthetic scene

Comparison of super-rays versus independently merged super-pixels

Super-rays for a sparsely sampled light field

Super-rays for a densely sampled light field

Graph-cut segmentation using our super-rays

Angular aliasing comparison

Super-ray neighborhood

Dynamic super-rays for a few frames and views

Over-segmentation comparison with the state of the art

M. Hog, N. Sabater, and C. Guillemot, Light Field Segmentation Using a Ray-Based Graph Structure, European Conference on Computer Vision (ECCV), pp.35-50, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01644651

M. Hog, N. Sabater, and C. Guillemot, Dynamic Super-Rays for Efficient Light Field Video Processing, British Machine Vision Conference (BMVC), pp.1-12, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01649342

M. Hog, N. Sabater, and C. Guillemot, Long Short Term Memory Networks for Light Field View Synthesis, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02202120

Journal papers

M. Hog, N. Sabater, B. Vandame, and V. Drazic, An Image Rendering Pipeline for Focused Plenoptic Cameras, IEEE Transactions on Computational Imaging (TCI), vol.3, issue.4, pp.811-821, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01401501

M. Hog, N. Sabater, and C. Guillemot, Super-rays for Efficient Light Field Processing, IEEE Journal of Selected Topics in Signal Processing (J-STSP), 2017.
URL : https://hal.archives-ouvertes.fr/hal-01407852

Motion Picture Association of America, 2017.

N. Sabater, G. Boisson, B. Vandame, P. Kerbiriou, F. Babon et al., Dataset and Pipeline for Multiview Light-Field Video, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp.1743-1753, 2017.

L. Guillo, X. Jiang, G. Lafruit, and C. Guillemot, Light field video dataset captured by a R8 Raytrix camera (with disparity maps), International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG1 & WG11, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01804578

A. Gershun, The light field, Studies in Applied Mathematics, vol.18, issue.1-4, pp.51-151, 1939.

E. H. Adelson and J. R. Bergen, The plenoptic function and the elements of early vision, 1991.

S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, The lumigraph, Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pp.43-54, 1996.

M. W. Halle, Holographic stereograms as discrete imaging systems, International Society for Optics and Photonics, vol.2176, pp.73-85, 1994.

M. Levoy and P. Hanrahan, Light field rendering, Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pp.31-42, 1996.

G. Lippmann, La photographie intégrale, Comptes-Rendus de l'Académie des Sciences, vol.146, pp.446-451, 1908.

B. Wilburn, N. Joshi, V. Vaish, M. Levoy, and M. Horowitz, High-speed videography using a dense camera array, CVPR, vol.2, p.294, 2004.

J. C. Yang, A light field camera for image based rendering, 2000.

C. Zhang and T. Chen, A self-reconfigurable camera array, SIGGRAPH Sketches, p.151, 2004.

Ł. Dąbała, M. Ziegler, P. Didyk, F. Zilly, J. Keinert et al., Efficient Multi-image Correspondences for On-line Light Field Video Processing, Comput. Graph. Forum, vol.35, issue.7, pp.401-410, 2016.

K. Venkataraman, D. Lelescu, J. Duparré, A. Mcmahon, G. Molina et al., Picam: An ultra-thin high performance monolithic camera array, ACM Transactions on Graphics (TOG), vol.32, issue.6, p.166, 2013.

C. Huang, J. Chin, H. Chen, Y. Wang, and L. Chen, Fast realistic refocusing for sparse light fields, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.1176-1180, 2015.

E. H. Adelson and J. Y. Wang, Single lens stereo with a plenoptic camera, IEEE Transactions on Pattern Analysis & Machine Intelligence, issue.2, pp.99-106, 1992.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz et al., Light field photography with a hand-held plenoptic camera, Computer Science Technical Report, vol.2, issue.11, pp.1-11, 2005.

R. C. Bolles, H. H. Baker, and D. H. Marimont, Epipolar-plane image analysis: An approach to determining structure from motion, International Journal of Computer Vision, vol.1, issue.1, pp.7-55, 1987.

A. Lumsdaine and T. Georgiev, The focused plenoptic camera, Computational Photography (ICCP), pp.1-8, 2009.

M. Kobayashi, M. Johnson, Y. Wada, H. Tsuboi, H. Takada et al., A low noise and high sensitivity image sensor with imaging and phase-difference detection af in all pixels, ITE Transactions on Media Technology and Applications, vol.4, issue.2, pp.123-128, 2016.

S. Krishnan, Capturing wide field of view light fields using spherical mirrors

D. Tsai, D. G. Dansereau, T. Peynot, and P. Corke, Image-based visual servoing with light field cameras, IEEE Robotics and Automation Letters, vol.2, issue.2, pp.912-919, 2017.

Y. Taguchi, A. Agrawal, S. Ramalingam, and A. Veeraraghavan, Axial light field for curved mirrors: Reflect your perspective, widen your view, Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp.499-506, 2010.

D. Scharstein and R. Szeliski, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms, International journal of computer vision, vol.47, issue.1-3, pp.7-42, 2002.

C. Liang, T. Lin, B. Wong, C. Liu, and H. H. Chen, Programmable aperture photography: multiplexed light field acquisition, ACM Transactions on Graphics (TOG), vol.27, p.55, 2008.

T. Wang, A. A. Efros, and R. Ramamoorthi, Occlusion-aware Depth Estimation Using Light-field Cameras, Proceedings of the IEEE International Conference on Computer Vision, pp.3487-3495, 2015.

E. Penner and L. Zhang, Soft 3d reconstruction for view synthesis, ACM Transactions on Graphics (TOG), vol.36, issue.6, p.235, 2017.

T. Basha, S. Avidan, A. Hornung, and W. Matusik, Structure and motion from scene registration, 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp.1426-1433, 2012.

C. Chen, H. Lin, Z. Yu, S. B. Kang, and J. Yu, Light field stereo matching using bilateral statistics of surface cameras, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.1518-1525, 2014.

V. Vaish, M. Levoy, R. Szeliski, C. L. Zitnick, and S. B. Kang, Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures, Computer Vision and Pattern Recognition, vol.2, pp.2331-2338, 2006.

M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, Depth from combining defocus and correspondence using light-field cameras, Computer Vision (ICCV), 2013 IEEE International Conference on, pp.673-680, 2013.

M. W. Tao, P. P. Srinivasan, J. Malik, S. Rusinkiewicz, and R. Ramamoorthi, Depth from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.1940-1948, 2015.

Williem and I. K. Park, Robust Light Field Depth Estimation for Noisy Scene With Occlusion, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.4396-4404, 2016.

S. Wanner and B. Goldluecke, Globally consistent depth labeling of 4D light fields, Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp.41-48, 2012.

C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-hornung, and M. Gross, Scene reconstruction from high spatio-angular resolution light fields, ACM Transactions on Graphics, vol.32, issue.4, pp.73-74, 2013.

A. Isaksen, L. Mcmillan, and S. J. Gortler, Dynamically reparameterized light fields, Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp.297-306, 2000.

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. Mcdowall et al., Synthetic aperture confocal imaging, ACM Transactions on Graphics (ToG), vol.23, pp.825-834, 2004.

Y. Xu, H. Nagahara, A. Shimada, and R. Taniguchi, Transcut: Transparent object segmentation from a light-field image, Proceedings of the IEEE International Conference on Computer Vision, pp.3442-3450, 2015.

K. Maeno, H. Nagahara, A. Shimada, and R. Taniguchi, Light field distortion feature for transparent object recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.2786-2793, 2013.

T. Wang, J. Zhu, E. Hiroaki, M. Chandraker, A. A. Efros et al., A 4d light-field dataset and cnn architectures for material recognition, European Conference on Computer Vision, pp.121-138, 2016.

N. Li, J. Ye, Y. Ji, H. Ling, and J. Yu, Saliency detection on light field, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.2806-2813, 2014.

Z. Ji, H. Zhu, and Q. Wang, Lfhog: a discriminative descriptor for live face detection from light field image, Image Processing (ICIP), 2016 IEEE International Conference on, pp.1474-1478, 2016.

R. Ng, Digital light field photography, 2006.

T. Georgiev and A. Lumsdaine, Focused plenoptic camera and rendering, Journal of Electronic Imaging, vol.19, issue.2, p.21106, 2010.

S. Wanner, J. Fehr, and B. Jaehne, Generating EPI Representations of 4D Light Fields with a Single Lens Focused Plenoptic Camera, International Symposium on Visual Computing (ISVC, oral presentation), 2011.

T. Georgiev and A. Lumsdaine, Full resolution lightfield rendering, 2008.

D. G. Dansereau, O. Pizarro, and S. B. Williams, Decoding, calibration and rectification for lenselet-based plenoptic cameras, Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp.1027-1034, 2013.

J. Fiss, B. Curless, and R. Szeliski, Refocusing plenoptic images using depth-adaptive splatting, Computational Photography (ICCP), pp.1-9, 2014.

D. Cho, M. Lee, S. Kim, and Y. Tai, Modeling the calibration pipeline of the lytro camera for high quality light-field image reconstruction, Computer Vision (ICCV), 2013 IEEE International Conference on, pp.3280-3287, 2013.

T. E. Bishop and P. Favaro, Plenoptic depth estimation from multiple aliased views, Computer Vision Workshops (ICCV Workshops), pp.1622-1629, 2009.

N. Sabater, M. Seifi, V. Drazic, G. Sandri, and P. Perez, Accurate disparity estimation for plenoptic images, ECCV Workshop on Light Fields for Computer Vision, 2014.

M. Kim, T. Oh, and I. S. Kweon, Cost-aware depth map estimation for Lytro camera, IEEE International Conference on, pp.36-40, 2014.

S. Heber, R. Ranftl, and T. Pock, Variational shape from light field, Energy Minimization Methods in Computer Vision and Pattern Recognition, pp.66-79, 2013.

H. Jeon, J. Park, G. Choe, J. Park, Y. Bok et al., Accurate Depth Map Estimation from a Lenslet Light Field Camera, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.1547-1555, 2015.

C. Perwass and L. Wietzke, Single lens 3D-camera with extended depth-of-field, Proc. SPIE, vol.8291, p.829108, 2012.

C. Chang, M. Chen, P. Hsu, and Y. Lu, A pixel-based depth estimation algorithm and its hardware implementation for 4-D light field data, IEEE International Symposium on, pp.786-789, 2014.

S. Tulyakov, T. H. Lee, and H. Han, Quadratic formulation of disparity estimation problem for light-field camera, 20th IEEE International Conference on, pp.2063-2067, 2013.

O. Fleischmann and R. Koch, Lens-Based Depth Estimation for Multi-focus Plenoptic Cameras, Pattern Recognition, pp.410-420, 2014.

M. Uliyar, G. Putraya, S. Ukil, S. V. Basavaraja, K. Govindarao et al., Pixel resolution plenoptic disparity using cost aggregation, IEEE International Conference on, pp.3847-3851, 2014.

S. Wanner, C. Straehle, and B. Goldluecke, Globally consistent multi-label assignment on the ray space of 4d light fields, Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, p.132, 2013.

M. Uliyar, G. Putraya, and S. V. Basavaraja, Fast EPI based depth for plenoptic cameras, 20th IEEE International Conference on, pp.1-4, 2013.

D. Dansereau and L. Bruton, Gradient-based depth estimation from 4d light fields, Circuits and Systems, 2004. ISCAS'04. Proceedings of the 2004 International Symposium on, vol.3, p.549, 2004.

Y. Kao, C. Liang, L. Chang, and H. H. Chen, Depth detection of light field, Acoustics, Speech and Signal Processing, vol.1, p.893, 2007.

A. Mousnier, E. Vural, and C. Guillemot, Partial light field tomographic reconstruction from a fixed-camera focal stack, 2015.

H. Lin, C. Chen, S. B. Kang, and J. Yu, Depth Recovery from Light Field Using Focal Stack Symmetry, Proceedings of the IEEE International Conference on Computer Vision, pp.3451-3459, 2015.

S. Heber and T. Pock, Convolutional Networks for Shape From Light Field, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.3746-3754, 2016.

T. Georgiev and A. Lumsdaine, Reducing plenoptic camera artifacts, Computer Graphics Forum, vol.29, pp.1955-1968, 2010.

C. Liang and R. Ramamoorthi, A light transport framework for lenslet light field cameras, ACM Transactions on Graphics (TOG), vol.34, issue.2, p.16, 2015.

M. Seifi, N. Sabater, V. Drazic, and P. Perez, Disparity guided demosaicking of light field images, IEEE International Conference on Image Processing (ICIP), 2014.

Z. Yu, J. Yu, A. Lumsdaine, and T. Georgiev, An analysis of color demosaicing in plenoptic cameras, Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp.901-908, 2012.

Y. Bok, H. Jeon, and I. S. Kweon, Geometric Calibration of Micro-Lens-Based Light-Field Cameras using Line Features, Proceedings of European Conference on Computer Vision (ECCV), 2014.

B. G. Quinn, Estimating frequency by interpolation using Fourier coefficients, IEEE Transactions on Signal Processing, vol.42, issue.5, pp.1264-1268, 1994.

V. Drazic and N. Sabater, A precise real-time stereo algorithm, Proceedings of the 27th Conference on Image and Vision Computing New Zealand, pp.138-143, 2012.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, Image and depth from a conventional camera with a coded aperture, TOG, vol.26, p.70, 2007.

A. Levin, Analyzing depth from coded aperture sets, ECCV, pp.214-227, 2010.

L. Mcmillan and G. Bishop, Plenoptic modeling: An image-based rendering system, Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pp.39-46, 1995.

C. Buehler, M. Bosse, L. Mcmillan, S. Gortler, and M. Cohen, Unstructured lumigraph rendering, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp.425-432, 2001.

G. Chaurasia, O. Sorkine, and G. Drettakis, Silhouette-aware warping for image-based rendering, Computer Graphics Forum, vol.30, pp.1223-1232, 2011.
URL : https://hal.archives-ouvertes.fr/inria-00607039

G. Chaurasia, S. Duchene, O. Sorkine-hornung, and G. Drettakis, Depth synthesis and local warps for plausible image-based navigation, ACM Transactions on Graphics (TOG), vol.32, issue.3, p.30, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00907793

S. Wanner and B. Goldluecke, Variational light field analysis for disparity estimation and super-resolution, PAMI, vol.36, issue.3, pp.606-619, 2014.

S. Pujades, F. Devernay, and B. Goldluecke, Bayesian view synthesis and image-based rendering principles, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.3906-3913, 2014.
URL : https://hal.archives-ouvertes.fr/hal-00983315

L. Shi, H. Hassanieh, A. Davis, D. Katabi, and F. Durand, Light field reconstruction using sparsity in the continuous fourier domain, ACM Transactions on Graphics (TOG), vol.34, issue.1, p.12, 2014.

F. Hawary, G. Boisson, C. Guillemot, and P. Guillotel, Compressive 4d light field reconstruction using orthogonal frequency selection, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01810353

Z. Liu, R. A. Yeh, X. Tang, Y. Liu, and A. Agarwala, Video frame synthesis using deep voxel flow, ICCV, pp.4473-4481, 2017.

C. Wu, N. Singhal, and P. Krähenbühl, Video compression through image interpolation, 2018.

M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert et al., Video (language) modeling: a baseline for generative models of natural videos, 2014.

M. Mathieu, C. Couprie, and Y. Lecun, Deep multi-scale video prediction beyond mean square error, 2015.

M. Jaderberg, K. Simonyan, and A. Zisserman, Spatial transformer networks, Advances in neural information processing systems, pp.2017-2025, 2015.

J. van Amersfoort, W. Shi, A. Acosta, F. Massa, J. Totz et al., Frame interpolation with multi-scale deep loss functions and generative adversarial networks, 2017.

J. Flynn, I. Neulander, J. Philbin, and N. Snavely, Deepstereo: Learning to predict new views from the world's imagery, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.5515-5524, 2016.

N. K. Kalantari, T. Wang, and R. Ramamoorthi, Learning-Based View Synthesis for Light Field Cameras, Proceedings of SIGGRAPH Asia, vol.35, 2016.

S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural computation, vol.9, issue.8, pp.1735-1780, 1997.

S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber, Gradient flow in recurrent nets: the difficulty of learning long-term dependencies

X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong et al., Convolutional LSTM network: A machine learning approach for precipitation nowcasting, Advances in neural information processing systems, pp.802-810, 2015.

W. Luo, A. G. Schwing, and R. Urtasun, Efficient deep learning for stereo matching, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.5695-5703, 2016.

M. Schuster and K. K. Paliwal, Bidirectional recurrent neural networks, IEEE Transactions on Signal Processing, vol.45, issue.11, pp.2673-2681, 1997.

M. Rerabek and T. Ebrahimi, New light field image dataset, 8th International Conference on Quality of Multimedia Experience (QoMEX), 2016.

D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, 2014.

D. S. Hochbaum and V. Singh, An efficient algorithm for co-segmentation, ICCV, pp.269-276, 2009.

A. Djelouah, J. Franco, E. Boyer, F. Clerc, and P. Pérez, Multi-view object segmentation in space and time, ICCV, pp.2640-2647, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00873544

Y. Boykov, O. Veksler, and R. Zabih, Fast approximate energy minimization via graph cuts, PAMI, vol.23, issue.11, pp.1222-1239, 2001.

Y. Boykov and G. Funka-lea, Graph cuts and efficient N-D image segmentation, International Journal of Computer Vision, vol.70, pp.109-131, 2006.

The (New) Stanford Light Field Archive.

S. M. Seitz and K. N. Kutulakos, Plenoptic image editing, IJCV, vol.48, issue.2, pp.115-129, 2002.

A. Jarabo, B. Masia, and D. Gutierrez, Efficient propagation of light field edits, SIACG, 2011.

X. An and F. Pellacini, AppProp: all-pairs appearance-space edit propagation, ACM Transactions on Graphics (TOG), vol.27, p.40, 2008.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, Joint bilateral upsampling, ACM Transactions on Graphics (TOG), vol.26, p.96, 2007.

A. Jarabo, B. Masia, A. Bousseau, F. Pellacini, and D. Gutierrez, How Do People Edit Light Fields?, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings), vol.33, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01060860

F. Zhang, J. Wang, E. Shechtman, Z. Zhou, J. Shi et al., PlenoPatch: Patch-based Plenoptic Image Manipulation, IEEE Transactions on Visualization and Computer Graphics, 2016.

J. Berent and P. L. Dragotti, Unsupervised Extraction of Coherent Regions for Image Based Rendering, BMVC, pp.1-10, 2007.

P. L. Dragotti and M. Brookes, Efficient Segmentation and Representation of Multi-View Images, SEAS-DTC Workshop, 2007.

J. Berent and P. L. Dragotti, Plenoptic Manifolds-Exploiting structure and coherence in multiview images, 2007.

H. Mihara, T. Funatomi, K. Tanaka, H. Kubo, H. Nagahara et al., 4D Light-field Segmentation with Spatial and Angular Consistencies, ICCP, 2016.

C. Rother, T. Minka, A. Blake, and V. Kolmogorov, Cosegmentation of image pairs by histogram matching-incorporating a global constraint into mrfs, CVPR, vol.1, pp.993-1000, 2006.

L. Mukherjee, V. Singh, and J. Peng, Scale invariant cosegmentation for image groups, CVPR, pp.1881-1888, 2011.

C. Reinbacher, M. Rüther, and H. Bischof, Fast variational multi-view segmentation through backprojection of spatial constraints, Image and Vision Computing, vol.30, issue.11, pp.797-807, 2012.

N. D. Campbell, G. Vogiatzis, C. Hernández, and R. Cipolla, Automatic object segmentation from calibrated images, CVMP, pp.126-137, 2011.

M. Sormann, C. Zach, and K. Karner, Graph cut based multiple view segmentation for 3d reconstruction, vol.3, pp.1085-1092, 2006.

B. Goldlucke and M. A. Magnor, Joint 3d-reconstruction and background separation in multiple views using graph cuts, 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol.1, 2003.

V. Kolmogorov and R. Zabin, What energy functions can be minimized via graph cuts?, PAMI, vol.26, issue.2, pp.147-159, 2004.

Y. Boykov and V. Kolmogorov, An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision, PAMI, vol.26, issue.9, pp.1124-1137, 2004.

C. Mutto, P. Zanuttigh, and G. M. Cortelazzo, Scene Segmentation by Color and Depth Information and its Applications, 2010.

C. D. Mutto, P. Zanuttigh, and G. M. Cortelazzo, Fusion of geometry and color information for scene segmentation, J-STSP, vol.6, issue.5, pp.505-521, 2012.

C. Rother, V. Kolmogorov, and A. Blake, Grabcut: Interactive foreground extraction using iterated graph cuts, TOG, vol.23, pp.309-314, 2004.

J. A. Bilmes, A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models, ICSI Technical Report, 1998.

M. Harville, G. Gordon, and J. Woodfill, Foreground segmentation using adaptive mixture models in color and depth, Workshop on Detection and Recognition of Events in Video, pp.3-11, 2001.

M. A. Hasnat, O. Alata, and A. Trémeau, Unsupervised RGB-D image segmentation using joint clustering and region merging, J-STSP, vol.6, issue.5, pp.505-521, 2012.
URL : https://hal.archives-ouvertes.fr/ujm-01020565

T. Yang, Y. Zhang, J. Yu, J. Li, W. Ma et al., All-In-Focus Synthetic Aperture Imaging, ECCV, pp.1-15, 2014.

V. Vineet and P. J. Narayanan, CUDA cuts: Fast graph cuts on the GPU, CVPR, pp.1-8, 2008.

H. Ao, Y. Zhang, A. Jarabo, B. Masia, Y. Liu et al., Light Field Editing Based on Reparameterization, Pacific Rim Conference on Multimedia, pp.601-610, 2015.

K. W. Shon and I. K. Park, Spatio-angular consistent editing framework for 4D light field images, Multimedia Tools and Applications, pp.1-17, 2016.

X. Ren and J. Malik, Learning a classification model for segmentation, ICCV, pp.10-17, 2003.

R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua et al., SLIC superpixels compared to state-of-the-art superpixel methods, IEEE transactions on pattern analysis and machine intelligence, vol.34, pp.2274-2282, 2012.

M. Van den Bergh, X. Boix, G. Roig, and L. Van Gool, SEEDS: Superpixels extracted via energy-driven sampling, International Journal of Computer Vision, vol.111, issue.3, pp.298-314, 2015.

J. Shi and J. Malik, Normalized cuts and image segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.22, issue.8, pp.888-905, 2000.

P. F. Felzenszwalb and D. P. Huttenlocher, Efficient graph-based image segmentation, International Journal of Computer Vision, vol.59, issue.2, pp.167-181, 2004.

A. P. Moore, S. J. Prince, J. Warrell, U. Mohammed, and G. Jones, Superpixel lattices, Computer Vision and Pattern Recognition, pp.1-8, 2008.

O. Veksler, Y. Boykov, and P. Mehrani, Superpixels and supervoxels in an energy optimization framework, European conference on Computer vision, pp.211-224, 2010.

Y. Zhang, R. Hartley, J. Mashford, and S. Burn, Superpixels via pseudo-boolean optimization, 2011 International Conference on Computer Vision, pp.1387-1394, 2011.

F. Meyer and P. Maragos, Multiscale morphological segmentations based on watershed, flooding, and eikonal PDE, International Conference on Scale-Space Theories in Computer Vision, pp.351-362, 1999.

A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson et al., Turbopixels: Fast superpixels using geometric flows, IEEE transactions on pattern analysis and machine intelligence, vol.31, pp.2290-2297, 2009.

A. Vedaldi and S. Soatto, Quick shift and kernel methods for mode seeking, European Conference on Computer Vision, pp.705-718, 2008.

P. Wang, G. Zeng, R. Gan, J. Wang, and H. Zha, Structure-sensitive superpixels via geodesic distance, International journal of computer vision, vol.103, issue.1, pp.1-21, 2013.

R. Birkus, Accelerated gSLIC for Superpixel Generation used in Object Segmentation, Proc. of CESCG, vol.15, 2015.

M. Bleyer, C. Rother, P. Kohli, D. Scharstein, and S. Sinha, Object stereo-joint stereo matching and object segmentation, Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp.3081-3088, 2011.

Y. Taguchi, B. Wilburn, and C. L. Zitnick, Stereo reconstruction with mixed pixels using adaptive over-segmentation, Computer Vision and Pattern Recognition, pp.1-8, 2008.

B. Mičušík and J. Košecká, Multi-view superpixel stereo in urban environments, International Journal of Computer Vision, vol.89, issue.1, pp.106-119, 2010.

C. Xu and J. J. Corso, Evaluation of super-voxel methods for early video processing, Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp.1202-1209, 2012.

A. Levinshtein, C. Sminchisescu, and S. Dickinson, Spatiotemporal closure, Asian Conference on Computer Vision, pp.369-382, 2010.

J. Chang, D. Wei, and J. W. Fisher, A video representation using temporal superpixels, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.2051-2058, 2013.

M. Reso, J. Jachalsky, B. Rosenhahn, and J. Ostermann, Temporally consistent superpixels, Proceedings of the IEEE International Conference on Computer Vision, pp.385-392, 2013.

M. Van den Bergh, G. Roig, X. Boix, S. Manen, and L. Van Gool, Online video seeds for temporal window objectness, Proceedings of the IEEE International Conference on Computer Vision, pp.377-384, 2013.

M. Reso, J. Jachalsky, B. Rosenhahn, and J. Ostermann, Fast label propagation for real-time superpixels for video content, Image Processing (ICIP), 2015 IEEE International Conference on, pp.902-906, 2015.

J. Yang, Z. Gan, K. Li, and C. Hou, Graph-Based Segmentation for RGB-D Data Using 3-D Geometry Enhanced Superpixels, IEEE transactions on cybernetics, vol.45, issue.5, pp.927-940, 2015.

P. Neubert and P. Protzel, Superpixel benchmark and comparison, Proc. Forum Bildverarbeitung, pp.1-12, 2012.

D. G. Dansereau, O. Pizarro, and S. B. Williams, Linear Volumetric Focus for Light Field Cameras, ACM Transactions on Graphics (TOG), vol.34, issue.2, 2015.

E. Garces, J. I. Echevarria, W. Zhang, H. Wu, K. Zhou et al., Intrinsic Light Field Images, Computer Graphics Forum, 2017.

F. Galasso, R. Cipolla, and B. Schiele, Video Segmentation with Superpixels, Proceedings of the 11th Asian Conference on Computer Vision -Volume Part I, ACCV'12, pp.760-774, 2013.

M. Van den Bergh, X. Boix, G. Roig, B. De Capitani, and L. Van Gool, SEEDS: Superpixels extracted via energy-driven sampling, European conference on computer vision, pp.13-26, 2012.

P. P. Srinivasan, M. W. Tao, R. Ng, and R. Ramamoorthi, Oriented light-field windows for scene flow, Proceedings of the IEEE International Conference on Computer Vision, pp.3496-3504, 2015.

S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black et al., A database and evaluation methodology for optical flow, International Journal of Computer Vision, vol.92, issue.1, pp.1-31, 2011.

P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid, DeepFlow: Large displacement optical flow with deep matching, IEEE International Conference on Computer Vision (ICCV), 2013.
URL : https://hal.archives-ouvertes.fr/hal-00873592

J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid, Deepmatching: Hierarchical deformable dense matching, International Journal of Computer Vision, vol.120, issue.3, pp.300-323, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01148432

G. Fracastoro, F. Verdoja, M. Grangetto, and E. Magli, Superpixel-driven graph transform for image compression, Image Processing (ICIP), 2015 IEEE International Conference on, pp.2631-2635, 2015.

R. Giraud, V. Ta, and N. Papadakis, Superpixel-based Color Transfer, IEEE International Conference on Image Processing, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01519644

Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein et al., Dynamic graph cnn for learning on point clouds, 2018.

R. Giraud, V. Ta, and N. Papadakis, SCALP: Superpixels with contour adherence using linear path, 23rd International Conference on Pattern Recognition (ICPR 2016), 2016.
URL : https://hal.archives-ouvertes.fr/hal-01349569