Myeongho Jeon - Tutorial on learning under distribution shifts
AI systems have recently achieved state-of-the-art performance on a wide range of tasks and are now used across many aspects of our daily lives. However, these models often struggle when encountering data that differs significantly from what they were trained on—a phenomenon known as distribution shift. For example, a self-driving car trained in urban environments may perform poorly when deployed in rural settings. In this talk, I will cover (1) how AI systems are trained and make predictions, (2) different types of distribution shifts and the challenges they pose, and (3) recent approaches developed to address these issues.
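As a minimal illustrative sketch of the core problem (assuming a Python/scikit-learn setup; the synthetic environments and all names are hypothetical and not taken from the talk), the snippet below trains a simple classifier in one environment and evaluates it under a covariate shift, where accuracy drops sharply.

```python
# Minimal sketch of covariate shift: a classifier trained in one environment
# is evaluated on test data whose feature distribution has shifted.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_environment(mean, n=1000):
    """Two Gaussian classes centred around `mean`; the label shifts x[0]."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=mean, scale=1.0, size=(n, 2))
    X[:, 0] += np.where(y == 1, 2.0, -2.0)  # class-dependent offset
    return X, y

# Training environment vs. a shifted deployment environment
X_train, y_train = make_environment(mean=0.0)
X_shift, y_shift = make_environment(mean=3.0)   # covariate shift at test time

clf = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", accuracy_score(y_train, clf.predict(X_train)))
print("shifted-test accuracy   :", accuracy_score(y_shift, clf.predict(X_shift)))
```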
Myeongho Jeon is a postdoctoral researcher in Computer Science at École Polytechnique Fédérale de Lausanne (EPFL), working in the lab of Prof. Maria Brbić. His main research interests are in developing AI systems capable of solving complex, superhuman-level problems and in improving their generalization to handle distribution shifts across diverse environments. He completed his PhD in Computational Science and Technology at Seoul National University in 2024 under the supervision of Myungjoo Kang and received the university's best thesis award.
Ivan Kralj - Semi-decentralized Training of Spatio-Temporal Graph Neural Networks for Traffic Prediction
In smart mobility, vast networks of geographically distributed sensors generate high-frequency spatio-temporal data that must be processed in real time to avoid disruptions. Centralized approaches struggle to scale with growing sensor networks and are prone to reliability issues. To address these challenges, we adapt semi-decentralized training techniques for Spatio-Temporal Graph Neural Networks (ST-GNNs). Sensors are grouped by proximity into cloudlets, each managing a subgraph, exchanging node features, and sharing model updates to ensure consistency, eliminating reliance on a central aggregator. We evaluate centralized, traditional federated learning (FL), server-free FL, and Gossip Learning setups on the METR-LA and PeMS-BAY datasets for short-, mid-, and long-term predictions. Results show that semi-decentralized setups achieve performance comparable to centralized methods while improving scalability and reliability, and we explicitly account for the communication and computational costs that are often overlooked in the existing literature.
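A minimal sketch of aggregator-free model-update sharing in the spirit described above, assuming gossip-style averaging over a ring of cloudlets; the NumPy parameter vectors and the topology are illustrative stand-ins, not the evaluated ST-GNN setups.

```python
# Minimal sketch of gossip-style parameter averaging between cloudlets,
# i.e. sharing model updates without a central aggregator. Cloudlet models
# are represented as flat NumPy parameter vectors for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_cloudlets, n_params = 4, 8

# Each cloudlet starts from locally trained (here: random) parameters.
params = [rng.normal(size=n_params) for _ in range(n_cloudlets)]

# Ring topology: cloudlet i can talk to its two geographic neighbours.
neighbours = {i: [(i - 1) % n_cloudlets, (i + 1) % n_cloudlets]
              for i in range(n_cloudlets)}

def gossip_round(params, neighbours):
    """One round: every cloudlet averages its parameters with a random neighbour."""
    new_params = [p.copy() for p in params]
    for i, nbrs in neighbours.items():
        j = rng.choice(nbrs)
        new_params[i] = 0.5 * (params[i] + params[j])
    return new_params

for r in range(10):
    params = gossip_round(params, neighbours)
    spread = max(np.linalg.norm(p - np.mean(params, axis=0)) for p in params)
    print(f"round {r}: max distance to consensus = {spread:.4f}")
```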
Ivan Kralj is a researcher at the Department of Telecommunications of UNIZG-FER and a member of the IoTLab since November 2020. He received his MS degree in Information and Communication Technology from UNIZG-FER in 2020. His professional interests include the Internet of Things (IoT), distributed systems, and Graph Neural Networks (GNNs). His current research focuses on the distributed training of GNNs and on reducing resource consumption when running GNNs in distributed, resource-constrained IoT environments. He is currently pursuing a PhD in Computing at UNIZG-FER and is working on his dissertation, entitled “Graph neural networks for spatio-temporal data in distributed resource-constrained IoT environment”, under the supervision of Prof. Gordan Ježić.
Anna Lackinger - Time series prediction
This tutorial provides a step-by-step guide to time series forecasting. The first part, aimed at beginners, demystifies the core components of time series data, such as trend, seasonality, and noise, and walks through the essential steps of preparing data for predictive modeling.
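A minimal sketch of the decomposition idea, assuming a synthetic monthly series and the statsmodels seasonal_decompose utility; the data and period are illustrative, not the tutorial's material.

```python
# Minimal sketch: a synthetic monthly series built from trend + seasonality + noise,
# then decomposed back into those components.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
t = np.arange(120)                                  # 10 years of monthly data
trend = 0.5 * t                                     # linear upward trend
seasonality = 10 * np.sin(2 * np.pi * t / 12)       # yearly cycle
noise = rng.normal(scale=2.0, size=t.size)
series = pd.Series(trend + seasonality + noise,
                   index=pd.date_range("2015-01-01", periods=t.size, freq="MS"))

decomposition = seasonal_decompose(series, model="additive", period=12)
print(decomposition.trend.dropna().head())
print(decomposition.seasonal.head())
```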
The second part analyses the most common forecasting techniques, ranging from classical statistical methods to modern machine learning and deep learning approaches. Each method is illustrated with examples to help participants understand how and when it can be used effectively.
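A minimal sketch contrasting a classical baseline with a simple machine learning model on the same kind of synthetic series, assuming pandas and scikit-learn; the lag features and hold-out split are illustrative choices.

```python
# Minimal sketch comparing a seasonal-naive forecast with a linear regression on
# lagged values; the ML model makes one-step-ahead predictions using true history.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.arange(120)
series = pd.Series(0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
                   + rng.normal(scale=2.0, size=t.size))

train, test = series.iloc[:-12], series.iloc[-12:]   # hold out the final year

# Classical baseline: repeat the last observed seasonal cycle.
seasonal_naive = train.iloc[-12:].to_numpy()

# ML approach: predict y_t from its previous 12 values.
lags = pd.concat({f"lag_{k}": series.shift(k) for k in range(1, 13)}, axis=1).dropna()
targets = series.loc[lags.index]
model = LinearRegression().fit(lags.iloc[:-12], targets.iloc[:-12])
ml_forecast = model.predict(lags.iloc[-12:])

mae = lambda pred: float(np.mean(np.abs(test.to_numpy() - pred)))
print("seasonal-naive MAE   :", round(mae(seasonal_naive), 2))
print("linear-regression MAE:", round(mae(ml_forecast), 2))
```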
Anna Lackinger is a project assistant and a PhD student in the Distributed Systems Group at the Technical University of Vienna in Austria. Before completing her master’s degree in 2023, she contributed to the AIoTwin project. Besides her work on AIoTwin, she has also been working on the Horizon Europe project Intend since January 2024. Her current research interests include time-series prediction, federated learning, and reinforcement learning.
Ali Ganbarov - Workload Balancing for Distributed Multi-Object Tracking on Multi-Camera
This presentation introduces a distributed architecture for real-time multi-object tracking across multiple camera streams, optimized for deployment on resource-constrained Jetson devices. We present a fully asynchronous, multi-threaded pipeline that balances computational workloads across capture, preprocessing, TensorRT-based YOLOv8 inference, feature embedding extraction, and DeepSORT tracking. Emphasis is placed on low-level system optimization, including thread- and pipeline-level parallelism and memory management, to maximize performance on heterogeneous edge hardware.
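A minimal sketch of such a queue-based, multi-threaded pipeline in Python, with placeholder functions standing in for TensorRT YOLOv8 inference and DeepSORT tracking; the stage layout and queue sizes are illustrative assumptions, not the presented system.

```python
# Minimal sketch of a multi-threaded, queue-based pipeline:
# capture -> preprocess -> inference -> tracking, with bounded queues for back-pressure.
import queue
import threading
import time

SENTINEL = None          # signals end-of-stream to downstream stages

def stage(in_q, out_q, work):
    """Generic pipeline stage: pull an item, process it, push the result."""
    while True:
        item = in_q.get()
        if item is SENTINEL:
            if out_q is not None:
                out_q.put(SENTINEL)
            break
        result = work(item)
        if out_q is not None:
            out_q.put(result)

q_cap, q_pre, q_det = (queue.Queue(maxsize=8) for _ in range(3))

preprocess = lambda frame: {"frame": frame, "resized": True}
detect     = lambda item: {**item, "boxes": [(0, 0, 10, 10)]}      # stand-in for YOLOv8
track      = lambda item: print("tracked frame", item["frame"])    # stand-in for DeepSORT

threads = [
    threading.Thread(target=stage, args=(q_cap, q_pre, preprocess)),
    threading.Thread(target=stage, args=(q_pre, q_det, detect)),
    threading.Thread(target=stage, args=(q_det, None, track)),
]
for t in threads:
    t.start()

for frame_id in range(5):        # stand-in for the camera capture loop
    q_cap.put(frame_id)
    time.sleep(0.01)
q_cap.put(SENTINEL)

for t in threads:
    t.join()
```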
To support scalable cross-camera Re-Identification (Re-ID), we decouple identity resolution from edge inference by transmitting compact appearance embeddings to a central Kafka broker. These embeddings are indexed using a high-performance vector database (e.g., FAISS), enabling efficient global identity assignment while minimizing network bandwidth. We conclude with performance evaluations, design trade-offs, and insights into building scalable edge-to-cloud tracking systems under strict hardware constraints.
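A minimal sketch of embedding-based global identity assignment, assuming a flat FAISS inner-product index over L2-normalized embeddings; the Kafka transport is omitted, and the dimensionality and matching threshold are illustrative assumptions.

```python
# Minimal sketch of global Re-ID: match incoming appearance embeddings against a
# gallery held in a flat FAISS index (cosine similarity via normalized inner product).
import numpy as np
import faiss

DIM, MATCH_THRESHOLD = 128, 0.7
index = faiss.IndexFlatIP(DIM)          # inner product on L2-normalized vectors
gallery_ids = []                        # global IDs aligned with index rows
next_id = 0

def assign_identity(embedding):
    """Match an embedding to a known identity or register a new one."""
    global next_id
    emb = np.ascontiguousarray(embedding.reshape(1, -1).astype("float32"))
    faiss.normalize_L2(emb)
    if index.ntotal > 0:
        scores, rows = index.search(emb, 1)
        if scores[0, 0] >= MATCH_THRESHOLD:
            return gallery_ids[rows[0, 0]]
    index.add(emb)                      # unseen appearance: new global identity
    gallery_ids.append(next_id)
    next_id += 1
    return gallery_ids[-1]

# Two similar embeddings map to the same global ID; a dissimilar one does not.
rng = np.random.default_rng(0)
a = rng.normal(size=DIM)
print(assign_identity(a),
      assign_identity(a + 0.01 * rng.normal(size=DIM)),
      assign_identity(rng.normal(size=DIM)))
```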
Ali Ganbarov is a scientific employee at TU Berlin, specializing in real-time, distributed multi-camera object tracking on edge devices. With a background in data engineering at Zalando and HPC research at HSU Hamburg, he has led projects optimizing deep learning pipelines using CUDA, PyTorch, and parallel programming. He holds an M.Sc. in Informatics from TU Munich and has extensive experience in building scalable, high-performance systems across industry and academia.
Presentation PDF file - TBA
Fatemeh Rahimian - Model compression and pruning techniques for AI on IoT devices
This tutorial introduces participants to the principles and practices of efficient deep learning, with a focus on core model compression techniques such as pruning, quantization, and the design of efficient neural architectures. Through a hands-on walkthrough, participants will learn how to apply fine-grained pruning and perform model quantization. The tutorial is designed to provide a solid introduction and practical starting point for developing efficient, low-footprint AI systems suitable for real-world use.
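A minimal sketch of the two core techniques, assuming PyTorch's pruning and dynamic quantization utilities applied to a tiny stand-in MLP; the layer sizes and pruning ratio are illustrative, not the tutorial's worked example.

```python
# Minimal sketch of fine-grained (unstructured) magnitude pruning followed by
# dynamic quantization in PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# 1) Fine-grained pruning: zero out the 50% smallest-magnitude weights per layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")          # make the sparsity permanent

zeros = sum((m.weight == 0).sum().item() for m in model if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model if isinstance(m, nn.Linear))
print(f"global weight sparsity: {zeros / total:.0%}")

# 2) Dynamic quantization: store Linear weights in int8 and use int8 kernels.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)     # the compressed model still runs
```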
Fatemeh Rahimian is a senior researcher at RISE Research Institutes of Sweden with a background in machine learning, data analytics, and graph algorithms. Her work explores topics such as resource-efficient learning, decentralized systems, and applications of AI in healthcare, finance, and public services. She has previously held roles at the University of Oxford, Swedbank, and SICS Swedish ICT, contributing to projects involving electronic health records, fraud detection, and large-scale data mining. Fatemeh has been involved in several funded research initiatives and has supervised PhD students at KTH and Uppsala University.
Presentation PDF file - TBA