Curriculum Vitae
Dominik studied Computer Science (Diplom) at TU Dresden with a focus on Machine Learning as well as Knowledge Representation and Reasoning. For his studies, he received the Lohrmann Medal as the best graduate of the year at the Faculty of Computer Science. In 2019, he joined CeTI as a PhD student at the National Center for Tumor Diseases.
Within CeTI, Dominik’s research revolves around robot-assisted surgical assistance systems. He develops Deep-Learning methods for analyzing surgical video data and extracting information that is relevant for providing automated, context-aware assistance to the surgical team.
Projects/Cooperation within CeTI you are involved in:
- Generating simulated surgical video data: A Deep-Learning model which can generate detailed, realistic, and temporally consistent video sequences from simulated surgical scenes. Such data potentially provides a powerful environment for training and evaluating components of assistance systems.
- Video-based anticipation of surgical events for context-aware assistance (with Surgical Department VTG @ U1): A probabilistic neural network which can predict the usage of surgical instruments before they appear in a video while taking into account the uncertainty associated with future events. This information can be used to proactively recognize the need for assistance during surgery.
What do you value most about your work at CeTI?
Working in a diverse and interdisciplinary environment while also having access to expertise and resources that are closely related to my field.
What was your best moment at CeTI so far?
Participating in the CeTI truck demos during my first week at CeTI. I immediately got a great insight into all the projects happening within CeTI.
What else would you like to research?
At the moment, I am focusing only on image and video applications of Deep Learning. However, exciting advancements are also happening in other areas such as natural language processing or robotic applications (reinforcement learning).
How do you spend your spare time?
In my free time, I like to play the guitar, cook, and play baseball. I also spend a lot of my time reading books and watching movies.
Publications
1. On the pitfalls of Batch Normalization for end-to-end video learning: A study on surgical workflow analysis (Journal Article). In: Medical Image Analysis, vol. 94, pp. 103126:1–15, 2024.
2. AIxSuture: Vision-based assessment of open suturing skills (Journal Article). In: International Journal of Computer Assisted Radiology and Surgery, pp. 1–8, 2024.
3. Exploring semantic consistency in unpaired image translation to generate data for surgical applications (Journal Article). In: International Journal of Computer Assisted Radiology and Surgery, pp. 1–9, 2024, (early access).
4. Long-term temporally consistent unpaired video translation from simulated surgical 3D data (Proceedings Article). In: Proceedings of the International Conference on Computer Vision (ICCV), 2021.
5. Surgical assistance and training (Book Chapter). In: Fitzek, Frank H. P.; Li, Shu-Chen; Speidel, Stefanie; Strufe, Thorsten; Şimşek, Meryem; Reisslein, Martin (Ed.): Tactile Internet with Human-in-the-Loop, Chapter 2, pp. 23–40, Academic Press, 2021.
6. Rethinking anticipation tasks: Uncertainty-aware anticipation of sparse surgical instrument usage for context-aware assistance (Book Section). In: Martel, Anne L.; Abolmaesumi, Purang; Stoyanov, Danail; Mateus, Diana; Zuluaga, Maria A.; Zhou, Kevin S.; Racoceanu, Daniel; Joskowicz, Leo (Ed.): Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, vol. 12263, pp. 752–762, Springer, 2020.
7. Unsupervised temporal video segmentation as an auxiliary task for predicting the remaining surgery duration (Book Section). In: Zhou, Luping; Sarikaya, Duygu; Kia, Seyed M.; Speidel, Stefanie; Malpani, Anand; Hashimoto, Daniel A.; Habes, Mohamad; Löfstedt, Tommy; Ritter, Kerstin; Wang, Hongzhi (Ed.): OR 2.0 Context-Aware Operating Theaters and Machine Learning in Clinical Neuroimaging, vol. 11796, pp. 29–37, Springer, 2019, (Best Paper Award).