Summary: the adaptation of social criteria and of a model of human vision in the search for a robot configuration and placement for interaction.

A way to evaluate the accomplishment of the task: a scheme for reasoning about space through the "eyes of the human", to help the robot understand and be understood.

Perspective taking: the central concept of this work, referring to the ability to understand and perceive what other people say, do, or look at from their own point of view. We focus on "visual perspective taking".

Shared (joint) attention: the fact that two or more individuals attend to the same object or place with a common communicative intention.

At the second level, a person is able to form a mental image of the partner's perception; in other words, to put herself in the partner's head. [Tversky 99, Lee 01] conducted several studies on how humans switch perspectives, and conclude that the development of perspective taking is a necessary tool for one person to understand another. Perspective taking is also used as a basis for algorithms and systems that enable more reliable interaction between machines and people, for example in 3D computer-graphics systems, where the human's perspective must be simulated to allow immersion in the virtual environment [Taylor 96].

[Kaplan 06] returns to Tomasello's classification and notes that to reach better shared attention between human and robot, at least four preconditions must hold: attention detection, to track the…

[Scassellati 99] divides the achievement of shared attention into four tasks: mutual gaze, gaze following, imperative pointing at an object, and declarative pointing at a distant object.

Complementarily, some works measure how people direct a robot's attention [Imai 03, Ogata 09]. Perspective taking is also used to obtain the objects of attention, these objects being defined either manually…

The technique used consists, first, in delimiting the configuration space to a zone close to the object of interaction (whether a human or an object). Once this space is defined, the next step is to discretize the zone into points on the floor in polar coordinates, and finally to evaluate each point obtained by the discretization in order to choose the best one. The evaluation is expressed as a utility based on the assignment of qualities and costs. These two properties depend on the task, and the aim is to maximize quality while minimizing cost. Our base task is perception. For this task, quality relies on the robot's implementation of egocentric perspective taking and on how well the desired object is perceived by the robot from each position: the robot must avoid as much as possible the visual obstacles that could hinder its perception and compromise the task. Figure 7.1 shows the values obtained for perception quality.
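The placement search described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the polar discretization and the utility = quality − cost structure follow the text, while the visibility model (a line-of-sight test against circular obstacles) and the distance-proportional cost are simplified stand-ins.

```python
import math

def _seg_dist(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment (ax, ay)-(bx, by)."""
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = ((px - ax) * abx + (py - ay) * aby) / denom if denom else 0.0
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def visibility_quality(x, y, target, obstacles):
    """Toy perception quality: 0 if any circular obstacle (ox, oy, radius)
    blocks the line of sight to the target, otherwise higher for closer
    placements."""
    tx, ty = target
    for ox, oy, radius in obstacles:
        if _seg_dist(ox, oy, x, y, tx, ty) < radius:
            return 0.0  # line of sight blocked
    return 1.0 / (1.0 + math.hypot(tx - x, ty - y))

def best_placement(target, obstacles, r_min=0.5, r_max=2.0,
                   n_r=8, n_theta=36, motion_cost=0.1):
    """Discretize a polar ring around the target into candidate floor
    points and return the one maximizing utility = quality - cost."""
    best, best_u = None, float("-inf")
    for i in range(n_r):
        r = r_min + i * (r_max - r_min) / (n_r - 1)
        for j in range(n_theta):
            th = 2.0 * math.pi * j / n_theta
            x = target[0] + r * math.cos(th)
            y = target[1] + r * math.sin(th)
            u = visibility_quality(x, y, target, obstacles) - motion_cost * r
            if u > best_u:
                best, best_u = (x, y), u
    return best, best_u
```

In the real system the quality term comes from egocentric perspective taking (what the robot actually perceives of the desired object from each candidate), and the costs encode task-dependent criteria rather than a simple distance penalty.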

With the system presented in the last part of this work, the robot is able to obtain the human's perspective and to reason about both perspectives, which lets it establish a communication channel with the human and reach a first stage of shared attention. This system mainly taught us that it is not possible to obtain level-two perspective taking through gaze following alone.

K. Abdel-Malek, Z. Mi, J. Yang, and K. Nebel, Optimization-based layout design, ABBI, pp.187-196, 2005.

R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand, An Architecture for Autonomy, The International Journal of Robotics Research, vol.17, issue.4, pp.315-337, 1998.
DOI : 10.1177/027836499801700402

URL : https://hal.archives-ouvertes.fr/hal-00123273

R. Alami, R. Chatila, A. Clodic, S. Fleury, and M. Herrb, Towards Human-Aware Cognitive Robots, The Fifth International Cognitive Robotics Workshop (The AAAI-06 Workshop on Cognitive Robotics), 2006.

S. Ali, J. Ye, A. Razdan, and P. Wonka, Compressed Facade Displacement Mapping, IEEE Transactions on Visualization and Computer Graphics, issue.2, 2009.

P. Baerlocher and R. Boulic, An inverse kinematics architecture enforcing an arbitrary number of strict priority levels, The Visual Computer, pp.402-417, 2004.

Abidi et al., The Next-Best-View System for Autonomous 3-D Object Reconstruction, IEEE Transactions on Systems, Man and Cybernetics, vol.30, pp.589-598.

Simon Baron-Cohen, The Eye Direction Detector (EDD) and The Shared Attention Mechanism (SAM): Two cases for evolutionary psychology, in Joint Attention: Its Origins and Role in Development, 1995.

Simon Baron-Cohen, Mindblindness: An essay on autism and theory of mind, 1995.

M. Berlin, J. Gray, A. L. Thomaz, and C. Breazeal, Perspective Taking: An Organizing Principle for Learning in Human-Robot Interaction, National Conference on Artificial Intelligence (AAAI), 2006.

A. Bottino and A. Laurentini, What's NEXT? An interactive next best view approach, Pattern Recognition, vol.39, issue.1, pp.126-132, 2006.
DOI : 10.1016/j.patcog.2005.06.008

C. Breazeal, M. Berlin, A. Brooks, J. Gray, and A. L. Thomaz, Using perspective taking to learn from ambiguous demonstrations, Robotics and Autonomous Systems, vol.54, issue.5, pp.385-393, 2006.
DOI : 10.1016/j.robot.2006.02.004

L. Brèthes, F. Lerasle, and P. Danès, Data Fusion for Visual Tracking dedicated to Human-Robot Interaction, Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp.2087-2092, 2005.
DOI : 10.1109/ROBOT.2005.1570419

A. G. Brooks, J. Gray, G. Hoffman, and A. Lockerd, Robot's play, Computers in Entertainment, vol.2, issue.3, pp.10-10, 2004.
DOI : 10.1145/1027154.1027171

X. Broquère, D. Sidobre, and I. Herrera-Aguilar, Soft motion trajectory planner for service manipulator robot, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2008.

G. Butterworth, Origins of mind in perception and action, in Joint Attention: Its Origins and Role in Development, 1995.

J. Chhugani, B. Purnomo, S. Krishnan, J. Cohen, S. Venkatasubramanian et al., vLOD: High-Fidelity Walkthrough of Large Virtual Environments, IEEE Transactions on Visualization and Computer Graphics, vol.11, issue.01, 2005.
DOI : 10.1109/TVCG.2005.17

John P. Costella, A Beginner's Guide to the Human Field of View, Technical report, School of Physics, 1995.

K. Dautenhahn, M. Walters, S. Woods, K. L. Koay, C. Nehaniv et al., How may I serve you? A Robot Companion Approaching a Seated Person in a Helping Context, Conference on Human-Robot Interaction, 2006.

D. Davidson, Inquiries into truth and interpretation, 1984.
DOI : 10.1093/0199246297.001.0001

J. Faust, C. Simon, and W. D. Smart, A Video Game-Based Mobile Robot Simulation Environment, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
DOI : 10.1109/IROS.2006.281757

A. Fedrizzi, L. Moesenlechner, F. Stulp, and M. Beetz, Transformational Planning for Mobile Manipulation based on Action-related Places, Proceedings of the International Conference on Advanced Robotics, 2009.

D. J. Feil-Seifer and M. J. Matarić, A Multi-Modal Approach to Selective Interaction in Assistive Domains, IEEE International Workshop on Robot and Human Interactive Communication, pp.416-421, 2005.

J. H. Flavell, Perspectives On Perspective-Taking

S. Fleury, M. Herrb, and R. Chatila, GenoM: a Tool for the Specification and the Implementation of Operating Modules in a Distributed Robot Architecture, IEEE/RSJ International Conference on Intelligent Robots and Systems, 1997.

T. Foissotte, O. Stasse, A. Escande, P.-B. Wieber, and A. Kheddar, A two-steps next-best-view algorithm for autonomous 3D object modeling by a humanoid robot, 2009 IEEE International Conference on Robotics and Automation, 2009.
DOI : 10.1109/ROBOT.2009.5152350

M. Fontmarty, T. Germa, B. Burger, L. F. Marin, and S. Knoop, Implementation of Human Perception Algorithms on a Mobile Robot, 6th IFAC Symposium on Intelligent Autonomous Vehicles, 2007.
DOI : 10.3182/20070903-3-FR-2921.00062

U. Frith, Mind Blindness and the Brain in Autism, Neuron, vol.32, issue.6, p.969979, 2001.
DOI : 10.1016/S0896-6273(01)00552-9

A. Green, Characterising Dimensions of Use for Designing Adaptive Dialogues for Human-Robot Communication, RO-MAN 2007, The 16th IEEE International Symposium on Robot and Human Interactive Communication, pp.1078-1083, 2007.
DOI : 10.1109/ROMAN.2007.4415241

A. Green, The Need for a Model of Contact and Perception to Support Natural Interactivity in Human-Robot Communication, RO-MAN 2007, The 16th IEEE International Symposium on Robot and Human Interactive Communication, pp.552-557, 2007.
DOI : 10.1109/ROMAN.2007.4415147

E. T. Hall, The hidden dimension, Doubleday, Garden City, 1966.

H. Hicheur, S. Glasauer, S. Vieilledent, and &. A. Berthoz, Head direction control during active locomotion in humans, Head Direction Cells and the Neural Mechanisms of Spatial Orientation, pp.383-408, 2005.

F. Hoeller, D. Schulz, M. Moors, and F. E. Schneider, Accompanying persons with a mobile robot using motion prediction and probabilistic roadmaps, 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1260-1265, 2007.
DOI : 10.1109/IROS.2007.4399194

G. Hoffman and C. Breazeal, Collaboration in Human-Robot Teams, AIAA 1st Intelligent Systems Technical Conference, 2004.
DOI : 10.2514/6.2004-6434

S. W. Hsu and T. Y. Li, Third-Person Interactive Control of Humanoid with Real-Time Motion Planning Algorithm, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
DOI : 10.1109/IROS.2006.282361

H. Huang, J. Chen, and &. Jian, Development of the joint attention with a new face tracking method for multiple people, 2008 IEEE Workshop on Advanced robotics and Its Social Impacts, 2008.
DOI : 10.1109/ARSO.2008.4653580

Topp et al., Investigating Spatial Relationships in Human-Robot Interaction, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.

J. Ido, Y. Matsumoto, T. Ogasawara, and R. Nisimura, Humanoid with Interaction Ability Using Vision and Speech Information, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1316-1321, 2006.

M. Imai, T. Ono, and H. Ishiguro, Physical relation and expression: joint attention for human-robot interaction, IEEE Transactions on Industrial Electronics, vol.50, issue.4, 2003.

M. Isard and A. Blake, CONDENSATION - Conditional Density Propagation For Visual Tracking, International Journal of Computer Vision, vol.29, issue.1, pp.5-28, 1998.
DOI : 10.1023/A:1008078328650

T. Kanda, M. Kamasima, M. Imai, T. Ono, D. Sakamoto et al., A humanoid robot that pretends to listen to route guidance from a human, Autonomous Robots, vol.35, issue.4, pp.87-100, 2007.
DOI : 10.1007/s10514-006-9007-6

F. Kaplan and V. Hafner, The challenges of joint attention, Interaction Studies, vol.7, issue.2, pp.135-169, 2006.

M. Katayama and H. Hasuura, Optimization principle determines human arm postures and "comfort", SICE 2003 Annual Conference, pp.1000-1005, 2003.

M. Kleinehagenbrock, S. Lang, J. Fritsch, F. Lomker, G. A. Fink et al., Person Tracking with a Mobile Robot based on Multi-Modal Anchoring, Int. Workshop on Robot and Human Interactive Communication, 2002.

S. Lambrey, M. Amorim, S. Samson, M. Noulhiane, D. Hasboun et al., Distinct visual perspective-taking strategies involve the left and right medial temporal lobe structures differently, Brain, vol.131, issue.2, pp.523-534, 2008.
DOI : 10.1093/brain/awm317

URL : https://hal.archives-ouvertes.fr/hal-00579444

J. Latombe, Robot motion planning, 1991.
DOI : 10.1007/978-1-4615-4022-9

J. Laumond, Robot motion planning and control. Telos, 1997.

S. M. LaValle, Planning Algorithms.

P. Lee and B. Tversky, Costs of Switching Perspectives in Route and Survey Description, Proceedings of the Twenty-Third Annual Conference of the Cognitive Science Society, 2001.

Physical Distance based Human Robot Interaction in Intelligent Environments, IEEE 28th Annual Conference of the Industrial Electronics Society, 2002.

Y. F. Li, B. He, and P. Bao, Automatic View Planning with Self-termination in 3D Object Reconstructions, Sensors and Actuators, pp.335-344.

Setha M. Low, The Anthropology of Space and Place: Locating Culture, 2003.

R. Marler, J. Yang, J. S. Arora, and K. Abdel-Malek, Study of Bi-Criterion Upper Body Posture Prediction using Pareto-Optimal Sets, IASTED International Conference on Modeling, 2005.

A. M. Liu, C. M. Oman, and A. Natapoff, Influence of Perspective-taking and Mental Rotation Abilities in Space Teleoperation, HRI '07:Proceeding of the ACM/IEEE international conference on Human-robot interaction, 2007.

P. Menezes, F. Lerasle, J. Dias, and R. Chatila, A single camera motion capture system dedicated to gestures imitation, 5th IEEE-RAS International Conference on Humanoid Robots, pp.430-435, 2005.
DOI : 10.1109/ICHR.2005.1573605

N. Mitsunaga, T. Miyashita, H. Ishiguro, K. Kogure, and N. Hagita, Robovie-IV: A Communication Robot Interacting with People Daily in an Office, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.5066-5072, 2006.
DOI : 10.1109/IROS.2006.282594

H. Mizuhara, J. Wu, and Nishikawa, The degree of human visual attention in the visual search, Fourth International Symposium on Artificial Life and Robotics, 1999.
DOI : 10.1007/BF02480857

H. Moll and M. Tomasello, Level 1 perspective-taking at 24 months of age, British Journal of Developmental Psychology, vol.23, issue.1, pp.603-613, 2006.
DOI : 10.1348/026151005X55370

Notger G. Müller, A. Mollenhauer, and A. Kleinschmidt, The attentional field has a Mexican hat distribution, Vision Research, vol.45, issue.9, pp.1129-1137, 2005.

Y. Nagai, K. Hosoda, and M. Asada, How does an infant acquire the ability of joint attention?: A Constructive Approach, 2003.

Y. Nakauchi and R. Simmons, A social robot that stands in line, Proceedings 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), pp.323-324, 2002.
DOI : 10.1109/IROS.2000.894631

T. Seya, N. Miyoshi, and N. Keren, Measurement of Visual Attention and Useful Field of View during Driving Tasks Using a Driving Simulator, The 2007 Mid-Continent Transportation Research Symposium, 2007.

B. D. Null and E. D. Sinzinger, Next Best View Algorithms for Interior and Exterior Model Acquisition, LNCS Advances in Visual Computing, vol.4292, pp.668-677, 2006.

R. Yokoya, J. Tani, K. Komatani, and H. G. Okuno, Prediction and imitation of other's motions by reusing own forward-inverse model in robots, IEEE International Conference on Robotics and Automation, 2009.

M. Okamoto, Y. I. Nakano, K. Okamoto, K. Matsumura, and T. Nishida, Producing Effective Shot Transitions in CG Contents Based on a Cognitive Model of User Involvement, IEICE Transactions on Information and Systems, issue.11, pp.2523-2532, 2005.

E. Pacchierotti, H. Christensen, and &. P. Jensfelt, Embodied social interaction for service robots in hallway environments. Field and Service Robotics, pp.476-487, 2005.

J. Perez, C. Germain-renaud, and C. Loomis, Utility-Based Reinforcement Learning for Reactive Grids, 2008 International Conference on Autonomic Computing, pp.205-206, 2008.
DOI : 10.1109/ICAC.2008.18

URL : https://hal.archives-ouvertes.fr/inria-00287354

C. Peters, S. Asteriadis, K. Karpouzis, and &. E. De-sevin, Towards a Real-time Gaze-based Shared Attention for a Virtual Agent, Workshop in Affective Interaction in Natural Environments, AFFINE, Satellite Workshop of the ACM International Conference on Multimodal Interfaces (ICMI), 2008.

T. Pfeiffer, M. E. Latoschik, and I. Wachsmuth, Conversational Pointing Gestures for Virtual Reality Interaction: Implications from an Empirical Study, 2008 IEEE Virtual Reality Conference, 2008.
DOI : 10.1109/VR.2008.4480801

J. Piaget, The origins of intelligence in children, 1952.
DOI : 10.1037/11494-000

J. Piaget and &. B. Inhelder, The child's conception of space, 1956.

J. Richarz, C. Martin, A. Scheidig, and H.-M. Gross, There You Go! - Estimating Pointing Gestures In Monocular Images For Mobile Robot Instruction, RO-MAN 2006, The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006.
DOI : 10.1109/ROMAN.2006.314446

J. Rix and A. Stork, Combining ergonomic and field-of-view analysis using virtual humans, SME Computer Technology Solutions Conference, 1999.

J. M. Sanchiz and R. Fisher, A Next-Best-View Algorithm for 3D Scene Recovery with 5 Degrees of Freedom, Proc. 10th British Machine Vision Conference, 1999.

S. Satake, T. Kanda, D. F. Glas, M. Imai, H. Ishiguro et al., How to approach humans?, Proceedings of the 4th ACM/IEEE international conference on Human robot interaction, HRI '09, pp.109-116, 2009.
DOI : 10.1145/1514095.1514117

B. Scassellati, Imitation and Mechanisms of Joint Attention: A Developmental Structure for Building Social Skills on a Humanoid Robot Computation for Metaphors, Analogy and Agents, 1999.

D. Schmalstieg, A Survey of Advanced Interactive 3-D Graphics Techniques, 1997.

R. N. Shepard and &. J. Metzler, Mental Rotation of Three-Dimensional Objects, Science, vol.171, issue.3972, pp.701-703, 1971.
DOI : 10.1126/science.171.3972.701

R. N. Shepard and &. L. Cooper, Mental images and their transformations, 1996.

D. Schulz, W. Burgard, D. Fox, and A. B. Cremers, Tracking multiple moving objects with a mobile robot, Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2001.

C. L. Sidner, C. D. Kidd, C. Lee, and N. Lesh, Where to look, Proceedings of the 9th International Conference on Intelligent User Interfaces, IUI '04, pp.78-84, 2004.
DOI : 10.1145/964442.964458

C. L. Sidner, Humanoid Agents as Hosts, Advisors, Companions , and Jesters, Proceedings of the Twenty-First International Florida Artificial Intelligence Research Society Conference, pp.11-15, 2008.

T. Siméon, J. Laumond, and &. F. Lamiraux, Move3D: a Generic Platform for Motion Planning, 4th International Symposium on Assembly and Task Planning, 2001.

M. Staudte and M. W. Crocker, Visual attention in spoken human-robot interaction, Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI '09, pp.77-84, 2009.
DOI : 10.1145/1514095.1514111

H. Sumioka, Y. Yoshikawa, and M. Asada, Causality detected by transfer entropy leads acquisition of joint attention, IEEE 6th International Conference on Development and Learning, 2007.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 1998.

H. Takemura, K. Ito, and H. Mizoguchi, Person following mobile robot under varying illumination based on distance and color information, 2007 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2007.
DOI : 10.1109/ROBIO.2007.4522386

H. A. Taylor and B. Tversky, Perspective in Spatial Descriptions, Journal of Memory and Language, vol.35, issue.3, pp.371-391, 1996.
DOI : 10.1006/jmla.1996.0021

A. Thayananthan, B. Stenger, P. H. Torr, and R. Cipolla, Learning a Kinematic Prior for Tree-Based Filtering, Proceedings of the British Machine Vision Conference 2003, pp.589-598, 2003.
DOI : 10.5244/C.17.60

D. Tolani, A. Goswami, and N. Badler, Real-Time Inverse Kinematics Techniques for Anthropomorphic Limbs, Graphical Models, vol.62, issue.5, pp.353-388, 2000.
DOI : 10.1006/gmod.2000.0528

J. G. Trafton, N. L. Cassimatis, M. D. Bugajska, D. P. Brock, F. E. Mintz et al., Enabling Effective Human-Robot Interaction Using Perspective-Taking in Robots, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol.35, issue.4, pp.460-470, 2005.
DOI : 10.1109/TSMCA.2005.850592

J. G. Trafton, A. C. Schultz, M. Bugajska, and F. Mintz, Perspective-taking with Robots: Experiments and Models, Robot and Human Interactive Communication (RO-MAN), pp.580-584, 2005.

M. Turk and A. Pentland, Face recognition using eigenfaces, Proceedings 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp.586-591, 1991.
DOI : 10.1109/CVPR.1991.139758

B. Tversky, P. Lee, and S. Mainwaring, Why Do Speakers Mix Perspectives?, Spatial Cognition and Computation, vol.1, pp.312-399, 1999.

P. Viola and M. Jones, Rapid Object Detection using a Boosted Cascade of Simple Features, Int. Conf. on Computer Vision and Pattern Recognition (CVPR'01), 2001.

M. L. Walters, K. Dautenhahn, R. Te-boekhorst, K. L. Koay, C. Kaouri et al., The influence of subjects' personality traits on personal spatial zones in a human-robot interaction experiment, ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication, 2005., 2005.
DOI : 10.1109/ROMAN.2005.1513803

P. Wang and K. Gupta, A Configuration Space View of View Planning, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
DOI : 10.1109/IROS.2006.281892

J. Wang, T. Korhonen, and &. Zhao, Weighted Network utility Maximization Aided by Combined Queueing Priority in OFDMA Systems, 2008 IEEE International Conference on Communications, pp.3323-3327, 2008.
DOI : 10.1109/ICC.2008.625

P. Warreyn, H. Roeyers, T. Oelbrandt, and I. De Groote, What Are You Looking at? Joint Attention and Visual Perspective Taking in Young Children With Autism Spectrum Disorder, Journal of Developmental and Physical Disabilities, vol.6, issue.1, pp.55-73, 2005.
DOI : 10.1007/s10882-005-2201-1

M. Dumont and M. A. Abidi, Next Best View System in a 3-D Object Modeling Task, Proc. International Symposium on Computational Intelligence in Robotics and Automation (CIRA), pp.306-311, 1999.

J. Xavier, M. Pacheco, D. Castro, and A. Ruano, Fast line, arc/circle and leg detection from laser scan data in a Player driver, IEEE International Conference on Robotics and Automation, 2005.

K. Yamane and Y. Nakamura, Natural motion animation through constraining and deconstraining at will, IEEE Transactions on Visualization and Computer Graphics, vol.9, issue.3, pp.352-360, 2003.

F. Yamaoka, T. Kanda, H. Ishiguro, and N. Hagita, How Close? A Model of Proximity Control for Information-presenting Robots, Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, 2008.

F. Yamaoka, T. Kanda, H. Ishiguro, and N. Hagita, Developing a model of robot behavior to identify and appropriately respond to implicit attention-shifting, Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, pp.133-140, 2009.

T. Yoshimi, M. Nishiyama, T. Sonoura, H. Nakamoto, S. Tokura et al., Development of a Person Following Robot with Vision Based Target Detection, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.

F. Zacharias, C. Borst, and G. Hirzinger, Capturing robot workspace structure: representing robot capabilities, 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.3229-3236, 2007.
DOI : 10.1109/IROS.2007.4399105

J. M. Zacks, J. Mires, B. Tversky, and E. Hazeltine, Mental spatial transformations of objects and perspective, Spatial Cognition and Computation, vol.2, issue.4, pp.315-332, 2001.
DOI : 10.1023/A:1015584100204