
M2: Human–Machine Interface
M2 investigates the dynamics of remote human–machine interaction and robust interface design, aiming to improve anticipatory capability and situational awareness for more effective interaction.
Approach
- Human Activity Understanding and Intent Prediction: Using models from M1, we will extract contextual and gestural information to predict operator intent, with applications in assistive systems (U2); a minimal intent-inference sketch follows this list.
- Shared Autonomy: We will develop a shared autonomy framework for robotics and teleoperation, applied in surgery (U1); see the command-blending sketch after this list.
- HMI Design for Remote Physical Interaction: We will define design requirements for remote physical interaction using digital twins and sensor data.
- Semantic Communication of Haptic Signals: We will develop semantic models of haptic signals that reduce bandwidth consumption and improve haptic synthesis; a coding sketch appears below.
- Social Touch and Trust: We will investigate how social touch cues affect user trust in remote interaction.
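
To make the intent-prediction step concrete, below is a minimal sketch of directional intent inference over a set of candidate goals. The goal-scoring heuristic, the `beta` sharpness parameter, and all names are illustrative assumptions, not the models developed in M1/M2.

```python
import numpy as np

def goal_posterior(hand_pos: np.ndarray,
                   hand_vel: np.ndarray,
                   goals: np.ndarray,
                   beta: float = 5.0) -> np.ndarray:
    """Score each candidate goal by how well the current motion
    direction points toward it, then normalize with a softmax."""
    to_goals = goals - hand_pos                      # (n_goals, 3)
    to_goals /= np.linalg.norm(to_goals, axis=1, keepdims=True)
    v = hand_vel / (np.linalg.norm(hand_vel) + 1e-9)
    scores = beta * (to_goals @ v)                   # cosine alignment
    p = np.exp(scores - scores.max())
    return p / p.sum()

# Hypothetical example: two candidate grasp targets, with the hand
# moving roughly toward the first one.
goals = np.array([[0.5, 0.0, 0.2], [0.0, 0.5, 0.2]])
p = goal_posterior(np.zeros(3), np.array([0.9, 0.1, 0.0]), goals)
```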
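For the shared autonomy item, a common formalization is linear arbitration between the operator's command and an autonomous policy, weighted by intent confidence. The following is a minimal sketch under that assumption; the blending rule and names are illustrative, not the framework to be developed.

```python
import numpy as np

def blend_commands(u_human: np.ndarray,
                   u_robot: np.ndarray,
                   confidence: float) -> np.ndarray:
    """Linear arbitration: more confident intent estimates shift
    control authority toward the autonomous policy."""
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return alpha * u_robot + (1.0 - alpha) * u_human

# Hypothetical teleoperation step: operator and policy each propose
# a Cartesian velocity; confidence comes from the intent predictor.
u = blend_commands(u_human=np.array([0.10, 0.00, -0.05]),
                   u_robot=np.array([0.08, 0.02, -0.04]),
                   confidence=0.7)
```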
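For the semantic communication item, the sketch below illustrates the general idea of transmitting compact per-frame features of a vibrotactile signal instead of raw samples, then resynthesizing at the receiver. The sample rate, frame size, and two-feature representation (RMS amplitude and dominant frequency) are assumptions for illustration, not the project's models.

```python
import numpy as np

FS = 1000    # assumed vibrotactile sample rate (Hz)
FRAME = 50   # 50 ms frames

def encode(signal: np.ndarray) -> np.ndarray:
    """Send 2 floats per frame (RMS amplitude, dominant frequency)
    instead of FRAME raw samples."""
    frames = signal[:len(signal) // FRAME * FRAME].reshape(-1, FRAME)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    dom = np.fft.rfftfreq(FRAME, d=1.0 / FS)[spectra.argmax(axis=1)]
    return np.stack([rms, dom], axis=1)

def decode(features: np.ndarray) -> np.ndarray:
    """Resynthesize each frame as a sinusoid with the transmitted
    amplitude and frequency."""
    t = np.arange(FRAME) / FS
    return np.concatenate(
        [np.sqrt(2) * a * np.sin(2 * np.pi * f * t) for a, f in features])
```

In this toy codec, two floats replace 50 raw samples per frame; the 30% data-rate target below refers to the project's learned semantic models, not to this sketch.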
Expected Results
- A 10% improvement in intent prediction accuracy.
- A 30% reduction in haptic data rate using semantic models.
- A shared autonomy framework validated in use case U5.