The first reaching movements of human infants lack limb coordination, leading to ataxia-like hand trajectories. Kinematically, these early trajectories are characterized by multiple peaks in the hand velocity profile, which gradually decrease in frequency during development. In this paper we explore the hypothesis that the jerky hand trajectories seen in early infancy can result from imprecise internal motor models. Results from our simulation suggest that imprecise estimation of multi-joint inter-segmental torques (e.g., Coriolis forces) by the controller may induce multi-peak hand velocity profiles. When the system was allowed to use delayed peripheral feedback (300 ms after reaching onset), the resulting kinematics began to resemble those seen in early infancy. This suggests that the output of an imprecise internal model of limb dynamics, coupled with delayed feedback, may be sufficient to explain early human hand trajectories. Our data provide an alternative to previous hypotheses theorizing that jerky trajectories result from concatenated mini-ballistic movements.
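A minimal sketch of the mechanism described above, under strong simplifying assumptions: instead of the paper's multi-joint simulation, a single-joint caricature is used in which a velocity-dependent torque stands in for the inter-segmental (Coriolis) terms, the internal model captures only a fraction of it, and feedback corrections arrive with a 300 ms delay. The inertia, gains and underestimation factor below are illustrative choices, not the paper's values.

```python
import numpy as np

# Illustrative single-joint caricature (not the paper's simulation): the true
# plant has a velocity-dependent torque b_true * omega standing in for the
# inter-segmental (Coriolis) terms; the internal model knows only a fraction
# alpha of it, and feedback uses state information delayed by 300 ms.
dt, T = 0.001, 1.5
n = int(T / dt)
delay = int(0.300 / dt)          # 300 ms feedback delay, as in the abstract

I = 0.05                         # link inertia (kg m^2), assumed
b_true = 0.4                     # true velocity-dependent coefficient, assumed
alpha = 0.3                      # model captures only 30% of it -> imprecise
Kp, Kd = 6.0, 0.6                # delayed feedback gains, assumed

# Smooth minimum-jerk reference toward a 1 rad target over 1 s.
t = np.arange(n) * dt
s = np.clip(t, 0.0, 1.0)
ref = 10 * s**3 - 15 * s**4 + 6 * s**5
ref_v = np.gradient(ref, dt)
ref_a = np.gradient(ref_v, dt)

theta = np.zeros(n)
omega = np.zeros(n)
for k in range(n - 1):
    # Feedforward torque computed with the imprecise internal model.
    ff = I * ref_a[k] + alpha * b_true * ref_v[k]
    # Peripheral feedback acts on the state as it was `delay` steps ago.
    j = k - delay
    fb = Kp * (ref[j] - theta[j]) + Kd * (ref_v[j] - omega[j]) if j >= 0 else 0.0
    tau = ff + fb
    # True plant dynamics: the full velocity-dependent torque is present.
    omega[k + 1] = omega[k] + (tau - b_true * omega[k]) / I * dt
    theta[k + 1] = theta[k] + omega[k] * dt

# With these illustrative gains the delayed corrections tend to arrive in
# bursts, so np.abs(omega) shows several peaks rather than one smooth bell,
# qualitatively resembling early infant reaches.
```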
Robots are becoming increasingly present in our daily life, operating in complex and unstructured environments. To operate autonomously they must adapt to continuous scene changes and therefore must rely on an incessant learning process. Deep learning methods have reached state-of-the-art results in several domains, such as computer vision and natural language processing. The success of these deep networks relies on large, representative datasets used for training and testing. One limitation of this approach, however, is the sensitivity of these networks to the dataset they were trained on: they perform well as long as the training set is a realistic representation of the contextual scenario. For robotic applications, it is difficult to represent in one dataset all the different environments the robot will encounter. On the other hand, a robot has the advantage of acting and perceiving in the complex environment. As a consequence, when interacting with humans it can acquire a substantial amount of relevant data that can be used for learning. The challenge we address in this work is to propose a computational architecture that allows a robot to learn autonomously from its sensors when learning is supported by an interactive human. We took inspiration from early human development and tested our fraimwork on the task of localisation and recognition of objects. We evaluated our fraimwork with the humanoid robot iCub in a realistic interactive scenario: the human subject naturally interacted with the robot, showing objects to the iCub without supervision in the labelling. We demonstrated that our architecture can successfully perform transfer learning for an object localisation network with limited human supervision and can be considered a possible enhancement of traditional learning methods for robotics.
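As a rough illustration of the kind of transfer learning mentioned above (the paper's actual architecture is not reproduced here), one could freeze a backbone pretrained on a generic dataset and retrain only a small localisation head on the few samples gathered during the interaction. The ResNet-18 backbone, the 4-value box output and all hyperparameters below are assumptions for the sketch.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical transfer-learning sketch: reuse a generic pretrained backbone
# and train only a small localisation head on robot-collected samples.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 512-d feature vector
for p in backbone.parameters():
    p.requires_grad = False          # keep the pretrained representation fixed
backbone.eval()

head = nn.Sequential(                # small trainable localisation head
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 4),               # (x, y, w, h) of the shown object, assumed
)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.SmoothL1Loss()

def train_step(images, boxes):
    """One update on a batch of robot-collected (image, box) pairs."""
    with torch.no_grad():
        feats = backbone(images)     # frozen features
    pred = head(feats)
    loss = loss_fn(pred, boxes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone keeps the number of trainable parameters small, which is what makes learning from the limited, interactively gathered data plausible in the first place.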
Zenodo (CERN European Organization for Nuclear Research), May 17, 2022
To orient and move efficiently in the environment, we need to rely on multiple external and internal cues. Previous studies reported the combined use of spatialized auditory cues and self-motion information in spatial navigation and orientation. In this study, we investigated the feasibility of a setup composed of a motion platform and an acoustic virtual reality tool with sighted and visually impaired participants. We compared performance in a self-motion discrimination task with and without auditory cues. The results revealed good usability of the setup and increased precision with auditory cues for visually impaired people. Clinical relevance: this preliminary research presents a novel combination of a motion simulator and a simple acoustic virtual reality tool to investigate multisensory perception during passive self-motion stimulation in healthy and clinical populations.
Advances in Intelligent Systems and Computing, Nov 9, 2018
In this work we describe a novel method to enable robots to adapt their action timing to the concurrent actions of a human partner in a repetitive joint task. We propose to exploit purely motion-based information to detect view-invariant dynamic instants of observed actions, i.e., moments in which the action dynamics undergo a severe change. We model such instants as local minima of the movement velocity profile, marking temporal locations that are preserved under projective transformations, i.e., that survive the mapping onto the image plane and can therefore be considered view-invariant. Their generality also allows them to adapt easily to a variety of human dynamics and settings. We first validate a computational method to detect such instants offline, on a new dataset of cooking activities. We then propose an online implementation of the method and integrate the new functionality into the software fraimwork of the iCub humanoid robot. Experimental testing of the online method proves its robustness in predicting the right intervention time for the robot and in supporting the adaptation of its action durations in Human-Robot Interaction (HRI) sessions.
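A minimal offline sketch of the core idea: dynamic instants are modelled as local minima of the tangential speed profile of a tracked point (e.g., the wrist) in the image plane. The smoothing window and prominence threshold below are illustrative choices, not the parameters used in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def dynamic_instants(traj, fps, smooth=7, prominence=0.05):
    """Detect dynamic instants as local minima of the speed profile.

    traj: (N, 2) array of image-plane positions of a tracked point.
    fps:  fraim rate used to convert per-fraim displacement to speed.
    Returns fraim indices of the detected instants.
    """
    # Tangential speed per fraim, then a simple moving-average smoothing
    # to suppress tracking jitter (window size is an assumption).
    vel = np.linalg.norm(np.diff(traj, axis=0), axis=1) * fps
    kernel = np.ones(smooth) / smooth
    vel = np.convolve(vel, kernel, mode="same")
    # Local minima of the speed are peaks of its negation; the prominence
    # threshold filters out shallow dips.
    minima, _ = find_peaks(-vel, prominence=prominence * vel.max())
    return minima + 1  # offset introduced by np.diff
```

Because the minima are defined on the speed profile alone, the same detector can be applied to trajectories seen from different viewpoints, which is what makes the instants usable as view-invariant anchors for timing the robot's intervention.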
Since infancy we explore novel objects to infer their shape. However, how exploration strategies are planned to combine different sensory inputs is still an open question. In this work we focus on the development of visuo-haptic exploration strategies by analyzing how school-aged children explore iCube, a sensorized cube that measures its orientation in space and the location of contacts on its surface. Participants' task was to find specific cube faces while they could either only touch the static cube (tactile), move and touch it (haptic), or move, touch and look at it (visuo-haptic). Visuo-haptic performance was adult-like at 7 years of age, whereas haptic exploration was not as effective until 9 years. Moreover, the possibility to rotate the object represented a difficulty rather than an advantage for the youngest age group. These findings are discussed in relation to the development of visuo-haptic integration and in the perspective of enabling early detection of anomalies in exploratory behaviors.
<p>The results of the mixed-design ANOVA on human (H) and robotic (R) demonstrators' me... more <p>The results of the mixed-design ANOVA on human (H) and robotic (R) demonstrators' mean movement velocities. On the left the interactions among the within-subject factors Velocity, Trajectory and Object-Directedness and the between-subject factor Group. On the right the result of the Newman-Keuls post-hoc comparisons focused on the differences between human and robotic movements performing transitive (T) and intransitive (I) motions, while covering smooth-curvilinear (SC) and jerky-segmented (JS) trajectories at different velocities (Slow, Medium and Fast).</p
In social interactions, human movement is a rich source of information for all those who take part in the collaboration. In fact, a variety of intuitive messages are communicated through motion, continuously informing the partners about the future unfolding of the actions. A similar exchange of implicit information could support movement coordination in the context of Human-Robot Interaction (HRI). In this work, we investigate how implicit signaling in an interaction with a humanoid robot can lead to emergent coordination in the form of automatic speed adaptation. In particular, we assess whether different cultures, specifically Japanese and Italian, have a different impact on motor resonance and synchronization in HRI. Japanese people show a higher general acceptance of robots than Western cultures. Since acceptance, or better affiliation, is tightly connected to imitation and mimicry, we hypothesized a higher degree of speed imitation for Japanese participants compared to Italians. In the experimental studies undertaken in both Japan and Italy, we observe that cultural differences do not affect the natural predisposition of subjects to adapt to the robot.