Author Topic: Diverse Thesis Topics  (Read 685 times)

Offline Fraunhofer SIT

  • Posts: 102
    • Link to the SIT website
Diverse Thesis Topics
« on: 02.08.2022, 11:00:29 AM »
The following theses can be written in English or in German.

1. Measuring the effects of environmental influences on object detection
Neural networks achieve near-human performance, or even outperform humans, in many tasks. State-of-the-art object detectors reach up to 80% average precision (AP50) on MS COCO. However, datasets like MS COCO do not allow models to be evaluated in specific environments or under different weather conditions. In this thesis, a framework for evaluation in a 3D simulation should therefore be implemented. 3D scenes are to be created on the basis of which object detectors can be evaluated under various conditions. From these, a measure relating an environmental influence (rain, snow, etc.) and its parameters, such as strength or angle, to the accuracy of the object detectors can be derived. Which environmental influences matter most? This primarily serves to evaluate the benign robustness of object detectors; in a further step, it will be used for the robustness evaluation of attacks on object detectors.
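The evaluation loop described above can be sketched as follows. This is a minimal 2D stand-in: `apply_fog` and the brightness-based `toy_detector` are hypothetical placeholders, while the thesis itself would render parameterized weather in a 3D engine and evaluate a real detector.

```python
import numpy as np

def apply_fog(image: np.ndarray, strength: float) -> np.ndarray:
    """Blend the image toward a uniform gray 'fog' color.

    A crude 2D stand-in for atmospheric effects; the actual thesis
    would render rain/snow with controllable parameters in 3D.
    """
    fog_color = np.full_like(image, 200.0)
    return (1.0 - strength) * image + strength * fog_color

def evaluate(detector, images, labels, strength: float) -> float:
    """Mean detector accuracy on perturbed copies of `images`."""
    hits = [detector(apply_fog(img, strength)) == y
            for img, y in zip(images, labels)]
    return float(np.mean(hits))

# Toy "detector": classifies by mean brightness (placeholder for a model).
toy_detector = lambda img: int(img.mean() > 127)

images = [np.zeros((8, 8)), np.full((8, 8), 255.0)]
labels = [0, 1]

# Sweep the environmental-influence strength and record the accuracy curve;
# the relation of strength to accuracy is exactly the measure the topic asks for.
curve = {s: evaluate(toy_detector, images, labels, s) for s in (0.0, 0.5, 0.9)}
```

Even in this toy setting the curve degrades as the fog strength grows, illustrating the strength-vs-accuracy relation the framework should measure systematically.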

2. Text-To-Image models for edge case training
To deploy machine learning models in real-world applications, the models need to be robust. One of the main problems for robust machine learning is edge cases: inputs that occur rarely and are therefore missing from, or underrepresented in, the training data. In this thesis, text-to-image models (e.g. LAION-trained models, DALL-E) are to be used to create images that do not occur, or occur only rarely, in the domain, in order to improve robustness on edge cases. The questions are: Are text-to-image models suitable for closing this gap? Can they increase the robustness of the models? And how do we measure the increase in robustness? A dataset for the task or domain should be created; the task and the domain can be chosen freely. In a next step, the possibility of domain- and task-independent generation of edge cases can be explored: with image-to-text (captioning) models we can analyze the dataset and create edge-case descriptions using language models.
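The last step, finding underrepresented cases and turning them into generation prompts, can be sketched with plain Python. The caption corpus and the attribute lists here are hypothetical; a real pipeline would obtain captions from an image-to-text model and feed the resulting prompts to a text-to-image model.

```python
import itertools
from collections import Counter

# Hypothetical captions standing in for image-to-text output on the
# training set; in practice these come from a captioning model.
captions = [
    "a car on a sunny road",
    "a car on a sunny road",
    "a truck on a sunny road",
    "a car on a snowy road",
]

objects = ["car", "truck"]
conditions = ["sunny", "snowy", "foggy"]

# Count how often each (object, condition) combination is covered.
coverage = Counter()
for obj, cond in itertools.product(objects, conditions):
    coverage[(obj, cond)] = sum(obj in c and cond in c for c in captions)

# Combinations below the coverage threshold become edge-case prompts
# for the text-to-image model.
edge_prompts = [f"a photo of a {obj} in {cond} weather"
                for (obj, cond), n in coverage.items() if n < 1]
```

Here the missing combinations (e.g. a truck in snow or fog) surface automatically; the open research question is whether images generated from such prompts actually close the robustness gap.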

3. Adversarial attack transferability metric
Neural networks can be attacked with so-called adversarial attacks in order to evade the correct prediction. These attacks are crafted for a specific network and can then be transferred to unknown networks with high probability. Can we estimate attack transferability from the similarity of two neural networks? Can we find a metric that is independent of the training data and of the specific attack? In this thesis, a metric for attack transferability should be created.
In particular, model representation similarity (CCA, CKA) should be considered in relation to attack transferability; other factors such as model "size" (ResNet18 vs. ResNet50), hyperparameters (learning rate, weight decay, etc.), and weights can be examined as well. The transferability metric is to be empirically evaluated.
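As a starting point, linear CKA between two networks' activations on the same inputs can be computed in a few lines of NumPy. The random matrices below merely stand in for activation matrices (examples × features) extracted from two real models.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two activation matrices.

    X, Y: (n_examples, n_features) activations of two networks on the
    same inputs. Returns a similarity in [0, 1]; identical representations
    (up to orthogonal transform and isotropic scaling) give 1.
    """
    X = X - X.mean(axis=0)  # center features
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(hsic / (norm_x * norm_y))

# Placeholder activations; in the thesis these would come from, e.g.,
# a ResNet18 and a ResNet50 evaluated on the same batch.
rng = np.random.default_rng(0)
acts_a = rng.standard_normal((100, 16))
acts_b = rng.standard_normal((100, 16))
similarity = linear_cka(acts_a, acts_b)
```

The empirical question is then whether such a similarity score correlates with the measured transfer success rate of attacks between the two models.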

4. Detection of transferred adversarial attacks
Adversarial attacks can evade the correct prediction of a neural network, and these attacks can be transferred from one network to another. To defend against adversarial attacks, we want to detect them before the input reaches the neural network. Attacks transfer better from lower-complexity models, so a realistic assumption is that attackers craft their adversarial examples on a surrogate network such as VGG16 and transfer them to production systems. In this thesis, a new strategy to accurately detect adversarial attacks by leveraging knowledge about the surrogate model will be implemented. The new detection scheme is to be empirically evaluated and compared to previous state-of-the-art detectors.
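One established baseline detector that any new scheme would be compared against is feature squeezing: flag an input if the model's prediction changes strongly between the raw input and a bit-depth-reduced copy. The sketch below uses a toy 16-pixel linear "model" and a hand-crafted perturbation; it is an illustration of the detection principle, not the surrogate-aware strategy the thesis would develop.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def squeeze(x: np.ndarray, bits: int = 2) -> np.ndarray:
    """Reduce bit depth; low-amplitude perturbations snap back to the grid."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# Toy binary "production model": logits [s, -s] from a linear score.
w = np.ones(16)
def model_logits(x: np.ndarray) -> np.ndarray:
    s = w @ x - 8.0  # decision boundary at mean pixel value 0.5
    return np.array([s, -s])

def detect(x: np.ndarray, threshold: float = 0.2) -> bool:
    """Flag x as adversarial if the prediction moves under squeezing."""
    p_raw = softmax(model_logits(x))
    p_sq = softmax(model_logits(squeeze(x)))
    return bool(np.abs(p_raw - p_sq).sum() > threshold)

clean = np.full(16, 2 / 3)  # pixels sit exactly on the squeeze grid
adv = clean - 0.15          # small per-pixel shift toward the boundary,
                            # mimicking a transferred perturbation
```

Squeezing removes the low-amplitude shift, so the prediction on `adv` jumps back toward the clean one and the input is flagged, while `clean` passes unchanged. A surrogate-aware detector would go further and exploit, e.g., the known VGG16 feature space when deciding.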

Contact:
Niklas Bunzel
Fraunhofer SIT / ATHENE, Darmstadt, Germany
niklas.bunzel@sit.fraunhofer.de