Industry

How can we advance work routines?

In modern times, the demands on production processes are becoming increasingly complex. To meet them, robots are increasingly used to repeat industrial production steps countless times without failure. The U2 use case addresses interactions between humans and machines in the industrial context. The focus is to show how one machine can learn directly from a human being and pass on the learned behaviour to other machines, even if they operate in different contexts. Here, direct learning from humans will replace the state-of-the-art programming of machines, which is time-consuming and costly.

Purple and magenta graphic illustration depicting a man next to a table with two robotic arms

Human-to-machine: Learn from the expert

Although there has already been much research on the design of collaborative workcells and robots, major challenges remain. One such challenge is to give robots all the skills required to be useful in a given workplace (skill learning). The way state-of-the-art robots are programmed is very different from the tedious and time-consuming processes of earlier systems. For example, kinesthetic teaching is an increasingly adopted approach for programming collaborative robots, enabling much more intuitive and efficient programming by directly guiding the robot. The robot can then build on the demonstrated task and optimise it autonomously using suitable learning methods.

A woman standing next to a robotic arm while wearing a connected glove and explaining the functionality of the robot to a man next to her.
A man wearing a virtual reality visor and holding two controllers. Next to the man we see a robotic arm.
Three white containers on a table with a robotic arm over them grasping a wooden cube
Photograph of a child wearing a virtual reality visor and holding a controller. A robotic arm can be seen in the background.

Cocktail Robot Demonstrator

The robot assists in the cocktail-making process and is controlled by gestures and by touching it. A digital glove with multiple positional sensors, together with the torque sensors in the robot's joints, is used to interpret intuitive human input such as gestures and touch. The demonstrator shows the feasibility of controlling devices reliably via hand gestures. This eases the adoption of new technologies, which is important in the industrial context.
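As a rough illustration of how such input could be interpreted, the sketch below thresholds the robot's estimated external joint torques to detect a touch and matches a glove pose against stored gesture templates. The thresholds, template values, and nearest-template matching are illustrative assumptions, not the demonstrator's actual implementation.

```python
import numpy as np

# Hypothetical gesture templates: one 5x3 array of fingertip positions per
# gesture, expressed in the glove's wrist frame. The values are placeholders.
GESTURE_TEMPLATES = {
    "start_pour": np.zeros((5, 3)),        # fingers extended (placeholder)
    "stop":       np.full((5, 3), 0.05),   # fist-like pose (placeholder)
}

TOUCH_TORQUE_THRESHOLD = 2.0  # Nm of estimated external torque counted as a touch

def detect_touch(external_joint_torques):
    """Return True if any joint senses an external torque above the threshold."""
    return bool(np.max(np.abs(external_joint_torques)) > TOUCH_TORQUE_THRESHOLD)

def classify_gesture(fingertip_positions):
    """Match the current glove pose to the nearest stored gesture template."""
    distances = {name: np.linalg.norm(fingertip_positions - template)
                 for name, template in GESTURE_TEMPLATES.items()}
    return min(distances, key=distances.get)

# One control step: a touch pauses the robot, otherwise the gesture drives it.
torques = np.array([0.1, 0.3, 2.5, 0.2, 0.1, 0.0, 0.1])
command = "pause" if detect_touch(torques) else classify_gesture(np.zeros((5, 3)))
print(command)  # -> pause
```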

CleaningUp Demonstrator

The CleaningUp demonstrator shows the abilities of our cobotic framework. The robots pick up building blocks and put them into bins while the user wears a VR headset. With this headset, the user is able to control multiple robots working in different environments and settings. Controlling robots remotely enables higher productivity, which is a key performance indicator in modern industrial production.
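A minimal sketch of such a remote control loop is given below; the RemoteRobot class, its send_velocity() command, and the button-based switching between cells are hypothetical stand-ins, not the actual cobotic framework.

```python
# Dispatch VR controller input to one of several remote robots, assuming each
# robot exposes a simple velocity command over the network (placeholder class).

class RemoteRobot:
    def __init__(self, name):
        self.name = name

    def send_velocity(self, vx, vy, vz):
        print(f"{self.name}: move at ({vx:.2f}, {vy:.2f}, {vz:.2f}) m/s")

robots = [RemoteRobot("cell_A"), RemoteRobot("cell_B")]
active = 0  # index of the robot currently controlled from the headset

def on_controller_update(stick_x, stick_y, trigger, switch_pressed):
    """Forward the controller state to the active robot; switch cells on demand."""
    global active
    if switch_pressed:
        active = (active + 1) % len(robots)
    robots[active].send_velocity(stick_x, stick_y, trigger)

on_controller_update(0.2, -0.1, 0.0, False)   # drives cell_A
on_controller_update(0.0, 0.0, 0.0, True)     # switches to cell_B
```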

Rock, Paper, Scissors Demonstrator

The Rock, Paper, Scissors demonstrator allows users to interact naturally with a robot. Users wear the CeTI Glove, which captures finger movements, detects the gestures (rock, paper, and scissors), and mirrors them to a robotic hand. Playing Rock, Paper, Scissors serves as a low-barrier entry point for visitors at public events to interact with the robot. The game was deliberately designed without anyone winning or losing: visitors should experience controlling the robot's movements and perceive the robot not as an opponent but as a friendly playmate.
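The following sketch illustrates one way such a pipeline could look, assuming the glove delivers one normalised bend value per finger (0 = extended, 1 = fully bent). The thresholds and per-finger joint targets are illustrative; the actual CeTI Glove interface is not reproduced here.

```python
def classify_rps(bend):
    """bend: [thumb, index, middle, ring, pinky] bend values in the range 0..1."""
    bent = [b > 0.5 for b in bend]
    if all(bent):
        return "rock"
    if not any(bent):
        return "paper"
    if not bent[1] and not bent[2] and bent[3] and bent[4]:
        return "scissors"          # index and middle extended, rest bent
    return "unknown"

def mirror_to_hand(gesture):
    """Map the detected gesture to per-finger joint targets for the robotic hand."""
    targets = {
        "rock":     [1.0, 1.0, 1.0, 1.0, 1.0],
        "paper":    [0.0, 0.0, 0.0, 0.0, 0.0],
        "scissors": [1.0, 0.0, 0.0, 1.0, 1.0],
    }
    return targets.get(gesture, [0.0] * 5)

print(mirror_to_hand(classify_rps([0.9, 0.1, 0.2, 0.8, 0.85])))  # -> scissors pose
```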

TracePen Demonstrator

The TracePen demonstrator is a robot that can mimic movements performed by humans. For this purpose, the robot uses software from the start-up Wandelbots, which enables the robot arm to repeat movements learned from humans by means of the TracePen. The user can show the robot simple drawings with the TracePen, and the robot then performs the same drawing movement with a real pen. This shows that programming by demonstration greatly simplifies robot programming, which is very well received in the manufacturing industry.
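A minimal record-and-replay sketch of this programming-by-demonstration idea is shown below; pen_pose() and move_robot_to() are hypothetical placeholders for the tracked pen pose and the robot motion command, and the sketch does not use the Wandelbots software itself.

```python
import time
import numpy as np

def record_demonstration(pen_pose, duration_s=5.0, rate_hz=50.0):
    """Sample the pen's Cartesian position while the human draws."""
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append(pen_pose())          # (x, y, z) in the robot base frame
        time.sleep(1.0 / rate_hz)
    return np.asarray(samples)

def smooth_trajectory(points, window=5):
    """Moving-average filter that removes hand tremor before replay."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(points[:, i], kernel, mode="valid")
                            for i in range(points.shape[1])])

def replay(points, move_robot_to):
    """Send the recorded (smoothed) waypoints back to the robot holding a real pen."""
    for waypoint in points:
        move_robot_to(waypoint)
```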

  • Photograph of a boy holding a smart pen and two girls looking at him
  • Photograph of two men next to a robotic arm, one of them is holding a smart pen
  • Photograph of a child next to a robotic arm while using a smart pen. Next to the child is a researcher explaining the functionality of the robot.
A hand wearing a smart glove and a man reaching towards the glove
An outstretched hand that is graphically reproduced on a laptop

Machine-to-human: Evaluation by the expert

The tactile robot serves as a multimodal monitoring device, providing full audio-visual and tactile feedback to the human via smart wearables. As an observer in avatar mode, the human can directly experience the contact situation of the connected robot. From this feedback, programmed or even learned tasks can be evaluated. Furthermore, connecting via the Tactile Internet allows complex tasks to be learned by robots and later experienced by humans.

  • A woman standing next to a window wearing a smart glove with sensors attached to her arm
  • A person wearing a smart glove with sensors attached to the arm

Fingertac Demonstrator

The Fingertac demonstrator shows how intuitive robot control with tactile feedback works. The user controls a robotic arm through a textile glove incorporating the Fingertac, which provides vibrotactile feedback. By experiencing the feedback on the fingertips, the user is intuitively guided through defined tasks. The advantages of vibrotactile feedback in remote robot operation are easier robot control and increased workplace safety. These will be shown in the context of a chemistry-lab application that is still in development.
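As a simple illustration of how such guidance could be driven, the sketch below scales the vibration amplitude with the distance between the fingertip and a target position; set_vibration() and the scaling constant are assumptions, not the Fingertac's actual interface.

```python
def guidance_amplitude(fingertip_pos, target_pos, max_distance=0.2):
    """Vibrate more strongly the further the fingertip is from the target (metres)."""
    dx = [a - b for a, b in zip(fingertip_pos, target_pos)]
    distance = sum(d * d for d in dx) ** 0.5
    return min(distance / max_distance, 1.0)   # amplitude clamped to [0, 1]

def update_feedback(fingertip_pos, target_pos, set_vibration):
    """One feedback cycle: compute the amplitude and send it to the actuator."""
    set_vibration(guidance_amplitude(fingertip_pos, target_pos))

update_feedback((0.10, 0.00, 0.05), (0.12, 0.01, 0.05), print)  # prints ~0.11
```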

  • An outstretched hand with red painted nails and a screen virtually playing a piano
  • Photograph of four people with one of them pointing to a piano on a TV and the others watching

Haptic Ultrasound Demonstrator

The goal of the Haptic Ultrasound demonstrator is to show touchless haptic interaction in a virtual environment, such as tactile cues over physical objects or force feedback under the palm of a hand in dynamic conditions. These immersive sensations can improve the haptic experience, for example when controlling a robot via virtual reality.

Collaborative Research Projects

The cohabitation of robots and humans requires a range of technologies, which are being tested in research projects that complement and build on each other.

World Capturing and Modelling

With the popularity of RGB-D cameras, a variety of on-the-fly 3D reconstruction systems have been proposed. Nevertheless, for safe handover situations between humans and robots, the precision and latency of current 3D acquisition and tracking approaches are still insufficient. Immersive modelling is a promising way to label, segment, and extract semantic properties of 3D objects in VR/AR. We therefore propose a multi-layer immersive 3D scanning framework that provides an AR/VR environment for different applications. The framework is based on structuring point clouds into meaningful parts. Bridging the physical and the virtual world leads to the design and implementation of natural interaction between humans and machines.
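A minimal sketch of structuring a captured point cloud into meaningful parts is shown below, assuming the Open3D library is available: the cloud is downsampled, the dominant plane (e.g. the work surface) is removed, and the remaining points are clustered into candidate objects. The parameter values are illustrative, not those of the framework.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D is used for point-cloud handling

def structure_point_cloud(points_xyz):
    """Split a raw Nx3 point cloud into clustered object candidates."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    pcd = pcd.voxel_down_sample(voxel_size=0.01)            # 1 cm grid

    # Remove the dominant plane found by RANSAC (usually the table surface).
    _, plane_idx = pcd.segment_plane(distance_threshold=0.01,
                                     ransac_n=3, num_iterations=1000)
    objects = pcd.select_by_index(plane_idx, invert=True)

    # Cluster the remaining points into separate objects for labelling in VR/AR.
    labels = np.asarray(objects.cluster_dbscan(eps=0.03, min_points=20))
    return objects, labels
```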

Textile Sensors and Actuators

Textile technology at CeTI deals, for example, with the functionalisation of textiles and knitting technology, which offers a wide and constantly expanding range of possibilities to enhance textiles for different applications. Research also focuses on textile-based sensors and actuators and their conductive behaviour. These sensors and actuators bring many advantageous properties to wearables. Wearables, smart textiles, and devices are the hardware basis for software development and psychological examinations. The CeTI Glove and the Smart Kinesiotape are the result of research where textile-physical and electromechanical investigations come together.
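As a simple illustration of such conductive behaviour, the sketch below maps the measured resistance of a resistive textile bend sensor to a normalised bend value, assuming an approximately linear response between two calibration points; the resistance values are placeholders, not measured CeTI Glove data.

```python
R_FLAT = 25_000.0   # ohms measured with the finger fully extended (assumed)
R_BENT = 70_000.0   # ohms measured with the finger fully bent (assumed)

def resistance_to_bend(resistance_ohms):
    """Map a measured resistance to a normalised bend value in [0, 1]."""
    bend = (resistance_ohms - R_FLAT) / (R_BENT - R_FLAT)
    return min(max(bend, 0.0), 1.0)

print(resistance_to_bend(47_500.0))  # -> 0.5, i.e. a half-bent finger
```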

Human Hand Interaction

The human hand is an enormously complex system, and it is important to properly understand how it interacts with objects. This information is used to derive the requirements for building robotic hands able to perform the analysed scenarios, initially grasping postures. The accuracy of the human interaction representation framework is assessed by observing the recorded contact surfaces and comparing them to the data from the motion tracking system.

Room Leaders

Portrait of Uwe Assmann

Prof. Dr.

Uwe Aßmann

TU Dresden

Chair of Software Technology

Portrait of Diana Göhringer

Prof. Dr.

Diana Göhringer

TU Dresden

Chair of Adaptive Dynamic Systems

Portrait of Sami Haddadin

Prof. Dr.

Sami Haddadin

TU München

Chair of Robotics Science and Systems Intelligence