In modern times, the demands on production processes are becoming increasingly complex. To meet them, robots are increasingly used to repeat industrial production steps countless times without failure. The U2 use case addresses the interaction of humans and machines in the industrial context. The focus is on showing how one machine can learn directly from a human being and pass the learned behaviour on to other machines, even if they operate in different contexts. Here, learning directly from humans will replace today's manual programming of machines, which is time-consuming and costly.

We advocate establishing the Centre for Tactile Internet with Human-in-the-Loop (CeTI) at Technische Universität Dresden (TUD) to achieve significant breakthroughs for enhancing collaborations between humans and machines or, more generally, cyber-physical systems (CPS) in real, virtual, and remote environments. CeTI’s vision is to enable humans to interact with co-operating CPS over intelligent wide-area-communication networks to promote equitable access to remote work and learning environments for people of different genders, ages, cultural backgrounds, or physical limitations. Thus, going far beyond the current state of the art, CeTI democratises the access to skills and expertise the same way as the current Internet has democratised the access to information.

Industry

The emergence of safe and collaborative robots is currently changing the way our workplaces are designed and structured. State-of-the-art robotic systems are capable of complex manipulation tasks and intuitive human–robot interaction, making them potential coworkers (cobots) in many manufacturing scenarios (human–robot cohabitation in industry). This new trend of collaborative industrial workspaces (cobotic cells) is motivated by various factors, such as the increasing durability and precision of robots, the cost-reduction potential for assembly and production, and the discovery of new application areas, such as remote work in dangerous environments.

Human-to-machine: Learn from the expert

Capitalising on outstanding expertise within TU Dresden and associated institutions in the fields of communication, robotics, electrical engineering, computer science, psychology, neuroscience, and medicine, the innovations of CeTI are reflected in its structural and research objectives. CeTI conducts multidisciplinary research to (i) advance the understanding of the complexities and dynamics of human goal-directed perceptions and actions from the psychological and medical perspectives, (ii) develop novel sensor and actuator technologies that augment the human mind and body, (iii) develop fast, bendable, adaptive, and reconfigurable electronics, (iv) create intelligent communication networks that connect humans and CPS by continuously adapting and learning to provide low latency, as well as high levels of resilience and security, (v) design new haptic coding schemes to cope with the deluge of information from massive numbers of body sensors, (vi) design online learning mechanisms as well as interface solutions for machines and humans to predict and augment each other’s actions, and (vii) evaluate the above solutions and engage the general public on the societal and ethical changes and new opportunities the new technologies will bring, by means of use cases in medicine (context-aware robotic assistance systems in medical environments), industry (co-working industrial space), and the Internet of Skills (education and skill acquisition for the general public).

Although there has already been much research on the design of collaborative workcells and robots, there are still major challenges to overcome. One such challenge is to give robots all the skills required to be useful in a given workplace (skill learning). The way state-of-the-art robots are programmed is very different from the tedious and time-consuming processes of earlier systems. For example, kinesthetic teaching is an increasingly adopted approach for programming collaborative robots, enabling much more intuitive and efficient programming by directly guiding the robot. The robot can then build on the demonstrated task and optimise it autonomously using suitable learning methods. Examples of such tasks are screwing/drilling, stamping, grinding, hammering, or welding. The tasks can have multiple variations depending on the application scenario, e.g. in electronics assembly or in the automobile industry.
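
As a rough illustration of such programming by demonstration, the following minimal Python sketch records a hand-guided joint-space trajectory, smooths it, and replays it. The robot object and its methods (set_gravity_compensation, read_joint_positions, move_to_joint_positions) are hypothetical placeholders, not the API of any particular system used at CeTI.

```python
# Minimal sketch of kinesthetic teaching: record a hand-guided joint-space
# trajectory, smooth it, and replay it. All robot methods are hypothetical.
import time
import numpy as np

def record_demonstration(robot, duration_s=10.0, rate_hz=100):
    """Sample joint positions while the operator hand-guides the compliant robot."""
    robot.set_gravity_compensation(True)   # hypothetical: let the human move the arm freely
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append(robot.read_joint_positions())  # e.g. 7 joint angles in rad
        time.sleep(1.0 / rate_hz)
    robot.set_gravity_compensation(False)
    return np.asarray(samples)             # shape: (timesteps, n_joints)

def smooth_trajectory(trajectory, window=15):
    """Moving-average filter to remove jitter from the human demonstration."""
    kernel = np.ones(window) / window
    return np.column_stack(
        [np.convolve(trajectory[:, j], kernel, mode="same")
         for j in range(trajectory.shape[1])]
    )

def replay(robot, trajectory, rate_hz=100):
    """Stream the smoothed trajectory back to the robot controller."""
    for q in trajectory:
        robot.move_to_joint_positions(q)   # hypothetical motion command
        time.sleep(1.0 / rate_hz)
```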

Cocktail Robot Demonstrator

The robot assists in the cocktail-making process and is controlled by gestures and by touching it. A digital glove with multiple positional sensors, together with the torque sensors in the robot's joints, is used to interpret intuitive human inputs such as gestures and touch. The demonstrator shows that devices can be controlled reliably via hand gestures. This eases the adoption of new technologies, which is important in the industrial context.
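
The following minimal sketch illustrates, under stated assumptions, how normalised finger-flexion values from a glove and measured external joint torques might be turned into simple commands. The thresholds, gesture names, and sensor format are illustrative, not those of the actual demonstrator.

```python
# Minimal sketch: coarse gesture classification from glove flexion values and
# touch detection from external joint torques. Thresholds are assumed.
import numpy as np

FLEX_BENT = 0.7          # flexion above this counts as a curled finger (assumed)
TOUCH_TORQUE_NM = 2.0    # external torque above this counts as a touch (assumed)

def classify_gesture(flexion):
    """Map five normalised flexion values (thumb..pinky, 0=straight, 1=bent) to a gesture."""
    curled = np.asarray(flexion) > FLEX_BENT
    if curled.all():
        return "fist"            # could mean "stop"
    if not curled.any():
        return "open_hand"       # could mean "continue"
    if curled[2:].all() and not curled[:2].any():
        return "point"           # thumb and index extended, e.g. "select"
    return "unknown"

def touched_joint(external_torques):
    """Return the index of the robot joint with the strongest touch, or None."""
    torques = np.abs(np.asarray(external_torques, dtype=float))
    j = int(np.argmax(torques))
    return j if torques[j] > TOUCH_TORQUE_NM else None

# Example: an open hand and a light push on joint 3.
print(classify_gesture([0.1, 0.2, 0.1, 0.15, 0.1]))          # -> open_hand
print(touched_joint([0.1, 0.3, 0.2, 2.5, 0.1, 0.2, 0.1]))    # -> 3
```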

CleaningUp Demonstrator

The CleaningUp demonstrator shows the capabilities of our cobotic framework. The robots pick up building blocks and put them into bins while the user wears a VR headset. With this headset the user can control multiple robots working in different environments and settings. Controlling robots remotely enables higher productivity, a key metric in modern industrial production.
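
A minimal sketch of the underlying idea of routing one user's VR controller input to a selected robot among several is given below. The RobotClient interface and its send_pose and set_gripper methods are assumptions for illustration, not the demonstrator's actual networking stack.

```python
# Minimal sketch of teleoperation routing: one VR user, several remote robot
# cells, commands forwarded only to the currently selected cell.
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple       # (x, y, z) in metres, in the robot's base frame
    orientation: tuple    # quaternion (x, y, z, w)

class TeleoperationRouter:
    """Forward VR controller input to exactly one of several remote robot cells."""

    def __init__(self, robots):
        # robots: e.g. {"cell_a": RobotClient(...), "cell_b": RobotClient(...)}
        self.robots = robots
        self.active = next(iter(robots))

    def switch_robot(self, name):
        """Called when the user selects another cell in the VR menu."""
        if name in self.robots:
            self.active = name

    def on_controller_update(self, pose: Pose, grip_closed: bool):
        """Forward the current controller pose to the selected robot only."""
        robot = self.robots[self.active]
        robot.send_pose(pose.position, pose.orientation)  # hypothetical calls
        robot.set_gripper(closed=grip_closed)
```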

Rock, Paper, Scissors Demonstrator

The Rock, Paper, Scissors demonstrator allows users to interact naturally with a robot. Users wear the CeTI Glove, which captures finger movements, detects the gestures (rock, paper, and scissors), and mirrors them to the robotic hand. The playful setting serves as a low-barrier entry point for visitors at public events. The game was deliberately designed without anyone winning or losing: visitors should experience controlling the robot's movements and perceive the robot not as an opponent but as a friendly playmate.
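
As a small illustration of the mirroring step, the sketch below maps normalised finger-flexion values onto per-finger joint targets of a robotic hand. The joint limits and the set_finger_angle call are hypothetical, not the CeTI Glove's actual interface.

```python
# Minimal sketch of mirroring glove finger flexion onto a robotic hand.
import numpy as np

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
JOINT_LIMITS_RAD = {f: (0.0, 1.5) for f in FINGERS}   # open .. fully curled (assumed)

def mirror_to_hand(flexion, hand):
    """Scale each normalised flexion value (0..1) into the finger's joint range."""
    for name, value in zip(FINGERS, np.clip(flexion, 0.0, 1.0)):
        lo, hi = JOINT_LIMITS_RAD[name]
        hand.set_finger_angle(name, lo + value * (hi - lo))  # hypothetical hand API
```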

TracePen Demonstrator

The TracePen demonstrator shows a robot that can mimic movements performed by humans. For this purpose, the robot uses the software of the start-up Wandelbots, which enables the robot arm to repeat movements learned from humans by means of the TracePen. The user can show the robot simple drawings with the aid of the TracePen; the robot then performs the same drawing movement with a real pen. This shows that programming by demonstration greatly simplifies robot programming, an approach that is very well received in the production industry.
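
The following sketch illustrates one plausible way to turn a demonstrated pen path into robot waypoints by resampling the recorded pen-tip positions at a fixed spacing. The input format and the robot.move_linear call are assumptions for illustration, not Wandelbots' actual API.

```python
# Minimal sketch: resample a demonstrated pen path into equidistant waypoints
# and retrace it with the robot-held pen.
import numpy as np

def resample_path(points, spacing_m=0.005):
    """Resample a polyline of 3D pen-tip positions into equidistant waypoints."""
    points = np.asarray(points, dtype=float)
    seg_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    targets = np.arange(0.0, arc[-1], spacing_m)
    return np.column_stack(
        [np.interp(targets, arc, points[:, d]) for d in range(3)]
    )

def draw(robot, pen_path):
    """Retrace the demonstrated drawing with the robot-held pen."""
    for x, y, z in resample_path(pen_path):
        robot.move_linear((x, y, z))   # hypothetical Cartesian motion command
```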

Machine-to-human: Evaluation by the expert

The tactile robot serves as a multimodal monitoring device, providing full audio-visual and tactile feedback to the human via smart wearables. As an observer in avatar mode, the human can directly experience the contact situation of the connected robot. From this feedback, programmed or even learned tasks can be evaluated. Furthermore, connecting via the Tactile Internet allows complex tasks to be learned by robots and later experienced by humans.

Fingertac Demonstrator

The Fingertac demonstrator shows how intuitive robot control with tactile feedback works. The user controls a robotic arm through a textile glove incorporating the Fingertac (vibrotactile feedback). By experiencing the feedback on the fingertips, the user is intuitively guided through defined tasks. The advantages of vibrotactile feedback in remote robot operation are easier robot control and increased workplace safety. These will be shown in the context of a chemistry-lab application that is still in development.
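
A minimal sketch of distance-based vibrotactile guidance is given below, assuming fingertip positions from the glove and per-finger targets. The glove.set_vibration call and the 10 cm guidance range are illustrative assumptions.

```python
# Minimal sketch: the closer a fingertip gets to its target, the stronger the
# vibration on that finger. The glove interface is hypothetical.
import numpy as np

GUIDANCE_RANGE_M = 0.10    # start vibrating within 10 cm of the target (assumed)

def vibration_amplitude(distance_m):
    """Linear ramp: 0 outside the guidance range, 1 at the target."""
    return float(np.clip(1.0 - distance_m / GUIDANCE_RANGE_M, 0.0, 1.0))

def update_feedback(glove, fingertip_positions, targets):
    """Send one amplitude per fingertip, based on its distance to that finger's target."""
    for finger, tip in fingertip_positions.items():
        d = np.linalg.norm(np.asarray(tip) - np.asarray(targets[finger]))
        glove.set_vibration(finger, vibration_amplitude(d))  # hypothetical glove API
```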

Haptic Ultrasound Demonstrator

The goal of the Haptic Ultrasound demonstrator is to show touchless haptic control in a virtual environment, such as tactile cues on physical objects or force feedback under the palm of a hand in dynamic conditions. These immersive sensations can improve the haptic experience, for example when controlling a robot via virtual reality.

Collaborative Research Projects

Robot and human cohabitation, as well as teaching and learning skills over long distances, requires a range of technologies that are being tested in research projects that complement and build on each other.

World Capturing and Modelling

With the popularity of RGB-D cameras, a variety of on-the-fly 3D reconstruction systems have been proposed. Nevertheless, for safe handover situations between humans and robots, the precision and latency of current 3D acquisition and tracking approaches are still insufficient. Immersive modelling is a promising way to label, segment, and extract semantic properties of 3D objects in VR/AR. We therefore propose a multi-layer immersive 3D scanning framework that provides an AR/VR environment for different applications. The framework is based on structuring point clouds into meaningful parts. Bridging the physical and the virtual world leads to the design and implementation of natural interaction between humans and machines.
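
To make the idea of structuring a point cloud into meaningful parts concrete, the following minimal sketch groups points into voxel cells as a simple stand-in for the clustering and semantic labelling described above; a real pipeline would use proper segmentation methods.

```python
# Minimal sketch: split a 3D point cloud into voxel-sized parts that could then
# be labelled in VR/AR. This is an illustration, not the actual framework.
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Group 3D points (N, 3) by the voxel cell they fall into."""
    indices = np.floor(np.asarray(points) / voxel_size).astype(int)
    parts = {}
    for point, cell in zip(points, map(tuple, indices)):
        parts.setdefault(cell, []).append(point)
    return {cell: np.asarray(pts) for cell, pts in parts.items()}

# Example: a random cloud split into ~5 cm cells.
cloud = np.random.rand(10_000, 3)    # placeholder for RGB-D reconstruction output
parts = voxelize(cloud)
print(f"{len(parts)} voxel parts from {len(cloud)} points")
```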

Textile Sensors and Actuators

Textile technology at CeTI deals, for example, with the functionalisation of textiles and with knitting technology, which offer a wide and constantly expanding range of possibilities to enhance textiles for different applications. Research also focuses on textile-based sensors and actuators and their conductive behaviour. These sensors and actuators bring many advantageous properties for wearables. Wearables, smart textiles, and devices are the hardware basis for software development and psychological examinations. The CeTI Glove and the Smart Kinesiotape are the result of research where textile-physical and electromechanical investigations come together.
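
As an illustration of how a resistive textile sensor might be read out, the sketch below converts a raw ADC reading of a voltage-divider midpoint into a sensor resistance and a normalised strain value. The supply voltage, reference resistor, and calibration values are assumptions, not measured properties of the CeTI Glove.

```python
# Minimal sketch of a resistive textile stretch-sensor readout via a voltage
# divider. All electrical and calibration values are assumed for illustration.
V_SUPPLY_V = 3.3             # divider supply voltage (assumed)
R_REFERENCE_OHM = 10_000.0   # fixed divider resistor (assumed)
R_RELAXED_OHM = 8_000.0      # sensor resistance when unstretched (calibration value)
R_STRETCHED_OHM = 20_000.0   # sensor resistance at maximum stretch (calibration value)

def adc_to_resistance(adc_value, adc_max=4095):
    """Convert a raw ADC reading of the divider midpoint to the sensor resistance,
    assuming the textile sensor sits between the midpoint and ground."""
    v_out = V_SUPPLY_V * adc_value / adc_max
    return R_REFERENCE_OHM * v_out / (V_SUPPLY_V - v_out)

def resistance_to_strain(r_sensor):
    """Linearly map the calibrated resistance range to a 0..1 strain estimate."""
    strain = (r_sensor - R_RELAXED_OHM) / (R_STRETCHED_OHM - R_RELAXED_OHM)
    return min(max(strain, 0.0), 1.0)

# Example usage with one raw 12-bit reading.
print(resistance_to_strain(adc_to_resistance(2048)))
```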

Human Hand Interaction

The human hand is an enormously complex system, and it is important to understand properly how it interacts with objects. This information is used to derive the requirements for building robotic hands able to perform the analysed scenarios, initially grasping postures. The accuracy of the human interaction representation framework is assessed by observing the recorded contact surfaces and comparing them to the data from the motion tracking system.
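
One simple way such a comparison could be scored is sketched below: for each measured contact point, check whether the tracked hand model predicts a contact point within a small tolerance. The data sources and the tolerance value are assumptions for illustration, not the framework's actual evaluation metric.

```python
# Minimal sketch: agreement between measured contact points and contacts
# predicted from the tracked hand model.
import numpy as np

def contact_agreement(measured_pts, predicted_pts, tolerance_m=0.005):
    """Fraction of measured contact points that have a predicted point within tolerance."""
    measured = np.asarray(measured_pts, dtype=float)    # (M, 3) from contact sensing
    predicted = np.asarray(predicted_pts, dtype=float)  # (P, 3) from the tracked hand model
    dists = np.linalg.norm(measured[:, None, :] - predicted[None, :, :], axis=2)
    return float(np.mean(dists.min(axis=1) < tolerance_m))

# Example with synthetic data: identical point sets give a score of 1.0.
pts = np.random.rand(50, 3) * 0.02
print(contact_agreement(pts, pts))
```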

Room Leaders

Uwe Aßmann, Prof. Dr.
Chair of Software Engineering; TU Dresden

Diana Göhringer, Prof. Dr.
Chair of Adaptive Dynamic Systems; TU Dresden

Sami Haddadin, Prof. Dr.
Chair of Robotics Science and Systems Intelligence; TU Munich