Thesis Topics

Here are some proposed thesis ideas. If you are interested, contact felix.krause@lmu.de with your CV and a short note on your prior experience. If you have your own proposal for a thesis, you are also welcome to write to us and we can figure something out.

Bachelor Thesis Topics

How Much Do Vision Encoders Help Segmentation?
Pretrained vision encoders (e.g., CLIP, MAE, DINOv2) capture rich, general-purpose image features that can accelerate and improve downstream tasks. This thesis systematically measures how different encoders influence training speed, data efficiency, and final accuracy of semantic/instance segmentation models.
Image Retrieval in the Latent Space of Pretrained Models
Many pretrained models yield embeddings that can serve as powerful image descriptors. This thesis benchmarks image retrieval quality across different embedding spaces and pooling strategies, identifying which models (and which practices) work best for instance- and category-level retrieval.
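The core retrieval step can be sketched in a few lines: rank gallery images by cosine similarity to a query in the embedding space. This is a minimal illustration, not the thesis setup; the toy 2-D vectors stand in for real encoder embeddings (e.g., pooled CLIP or DINOv2 features).

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query, gallery, k=2):
    # Rank gallery embeddings by similarity to the query; return top-k indices.
    scores = [(cosine(query, emb), idx) for idx, emb in enumerate(gallery)]
    scores.sort(reverse=True)
    return [idx for _, idx in scores[:k]]

# Toy stand-ins for encoder embeddings.
gallery = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(retrieve([1.0, 0.05], gallery, k=2))  # -> [0, 1]: the two near-horizontal vectors
```

In the thesis, the interesting variables are exactly what this sketch hides: which encoder produces the embeddings, and how spatial features are pooled into a single vector before the similarity is computed.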
What Do Common Image Classifiers Actually Look At?
Where do classifiers focus? This project probes attention and attribution patterns in CNNs and ViTs to understand which regions drive predictions.
Stress Testing Image Metrics
Automatic image quality and preference metrics (FID, sFID, aesthetic predictors, HPSv3) can be surprisingly brittle. This thesis designs targeted, manual “attacks” (simple augmentations or edits) to stress-test these metrics, mapping which manipulations most strongly degrade reliability and when metrics disagree.
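The attack harness itself is simple and can be sketched as follows. The toy metric below (mean absolute pixel difference on a flat pixel list) is a placeholder for a real metric such as FID or an aesthetic predictor, and the two edits are stand-ins for the augmentations the thesis would study; the point is the loop that records how much each edit moves the score.

```python
def toy_metric(ref, img):
    # Placeholder for a real metric (FID, aesthetic score, ...):
    # here, mean absolute pixel difference against a reference image.
    return sum(abs(a - b) for a, b in zip(ref, img)) / len(img)

def shift_brightness(img, delta):
    # A perceptually mild edit: shift all pixels, clamped to [0, 255].
    return [min(255, max(0, p + delta)) for p in img]

def invert(img):
    # A drastic edit for comparison.
    return [255 - p for p in img]

reference = [10, 50, 200, 120]
attacks = {
    "brightness+20": lambda im: shift_brightness(im, 20),
    "invert": invert,
}

# Apply each edit and record how far the metric moves; a metric that shifts
# strongly under mild edits is a candidate for brittleness.
for name, attack in attacks.items():
    print(name, toy_metric(reference, attack(reference)))
```

In the thesis, the same loop would run over image datasets and real metrics, and the output table (edit vs. metric shift) is the map of where each metric breaks down.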
Can The New Generation Of Diffusion Models Fool Classifiers More Easily?
Train a lightweight binary classifier to distinguish real vs. generated images, then test whether newer diffusion models produce outputs that are harder to detect.
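The detector side can be as light as logistic regression on image features. Below is a from-scratch sketch on toy 2-D feature vectors; in the thesis these would be features extracted from real images and from diffusion-model outputs, and the question is how the accuracy drops as the generator gets newer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(xs, ys, lr=0.5, epochs=200):
    # Plain gradient-descent logistic regression (binary cross-entropy).
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5 else 0

# Toy "features": real images cluster low, generated images cluster high.
real = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]
fake = [[0.9, 0.8], [0.8, 1.0], [1.0, 0.9]]
xs, ys = real + fake, [0] * 3 + [1] * 3
w, b = train_logreg(xs, ys)
print([predict(w, b, x) for x in xs])  # -> [0, 0, 0, 1, 1, 1] on this separable toy set
```

For real images, one would replace the hand-made clusters with extracted features and compare detection accuracy across generator generations under a fixed training budget.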
Sequential Learning on Moving MNIST: Evaluating Orthogonal Gradient Updates for Streaming Data
This thesis examines how a neural network learns when data arrive one class at a time, for example Moving MNIST digits presented in sequence. You will first train a baseline classifier and then add the orthogonal gradient update technique from the paper “Learning from Streaming Video with Orthogonal Gradients.” You will compare both approaches to see whether orthogonal updates help the model remember earlier digits and reduce catastrophic forgetting.
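The key operation behind this idea is a gradient projection: before applying a new gradient, remove its components along stored directions from earlier data, so the update interferes less with what was already learned. A minimal sketch on toy vectors (this illustrates the projection step only, not the paper's full method):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v] if n > 1e-12 else v

def project_out(g, basis):
    # Subtract the components of g along each stored orthonormal direction,
    # leaving only the part of the update orthogonal to earlier gradients.
    for u in basis:
        c = dot(g, u)
        g = [gi - c * ui for gi, ui in zip(g, u)]
    return g

# Direction kept from an earlier class (already unit-norm).
basis = [normalize([1.0, 0.0, 0.0])]
g_new = [0.7, 0.4, 0.2]
print(project_out(g_new, basis))  # -> [0.0, 0.4, 0.2]: first component removed
```

In the thesis, the stored basis would be built from gradients observed on earlier digits, and the comparison is accuracy on old classes with and without this projection applied to the optimizer's updates.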