M2 Room

Human-Machine Interface

M2 focuses on the dynamics of remote human–machine interaction and on robust interface design, aiming to improve anticipatory capabilities and situational awareness for more effective interaction.

Key Questions

  • How can multimodal models improve intent prediction even under visual occlusion?
  • Which multimodal visualizations promote transparency and trust, and how can they enable smooth transitions between human and autonomous control?
  • How can digital twins and feedback strategies improve physical interaction at a distance?
  • How can semantic representations be used to synthesize haptic feedback?
  • How do social cues affect trust across different interaction contexts?

Approach

  • Human Activity Understanding and Intent Prediction: Using models from M1, we will extract contextual and gestural information, with applications in assistive systems (U2).
  • Shared Autonomy: We will develop a shared autonomy framework for robotics and teleoperation, applied in surgery (U1); see the first sketch after this list.
  • HMI Design for Remote Physical Interaction: We will define design requirements using digital twins and sensors.
  • Semantic Communication of Haptic Signals: We will develop models that reduce bandwidth consumption and enhance haptic synthesis; see the second sketch after this list.
  • Social Touch and Trust: We will investigate the impact of social cues on trust.
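
The shared autonomy framework itself is still to be developed; as a minimal, hypothetical sketch of the kind of arbitration such a framework typically involves, the snippet below linearly blends the operator's command with an autonomous policy's command according to a confidence weight. The blend_command helper, the command vectors, and the confidence value are illustrative assumptions, not part of M2's design.

```python
import numpy as np

def blend_command(u_human: np.ndarray,
                  u_auto: np.ndarray,
                  confidence: float) -> np.ndarray:
    """Linear arbitration: blend the operator's command with the
    autonomous policy's command according to intent confidence."""
    # 0 = full human control, 1 = full autonomy
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * u_human + alpha * u_auto

# Example: 3-DoF end-effector velocity commands (m/s).
u_h = np.array([0.10, 0.00, -0.05])  # operator input
u_a = np.array([0.08, 0.02, -0.04])  # autonomous suggestion toward the predicted goal
print(blend_command(u_h, u_a, confidence=0.6))
```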

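For the semantic communication item, the models are likewise yet to be developed; the sketch below only illustrates the general idea of event-based (semantic) coding of vibrotactile signals: transmit a handful of parameters per contact event rather than the raw waveform, and resynthesize a decaying sinusoid at the receiver. The encoder, threshold, and synthesis parameters are hypothetical.

```python
import numpy as np

FS = 1000  # sample rate in Hz (assumed)

def encode_contact_events(signal: np.ndarray, threshold: float = 0.5):
    """Rough semantic encoder: keep only (time, amplitude) of transient
    peaks above a threshold instead of the full sampled waveform."""
    events = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            events.append((i / FS, float(signal[i])))
    return events  # a few numbers per contact instead of FS samples per second

def synthesize(events, duration: float, freq: float = 250.0, decay: float = 60.0):
    """Resynthesize each event as a decaying sinusoid (a common
    vibrotactile rendering model), summed into one output signal."""
    t = np.arange(int(duration * FS)) / FS
    out = np.zeros_like(t)
    for t0, amp in events:
        active = t >= t0
        out[active] += amp * np.exp(-decay * (t[active] - t0)) * np.sin(2 * np.pi * freq * (t[active] - t0))
    return out
```

With such a representation the transmitted data rate scales with the contact-event rate rather than the sampling rate, which is one way a data-rate reduction of the targeted magnitude could be obtained.
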
Expected Results

  • 10% improvement in intent prediction accuracy.
  • 30% reduction in data rate with semantic models.
  • Shared autonomy framework tested with U5.