Student Challenge: Object detection from degraded images - Can your AI predict the performance of Facebook's algorithms?
Object detection in images has found its way into everyday life: smartphone cameras can differentiate hundreds of scene types in real time, industrial image processing (e.g. on production lines) has matured significantly, and camera feeds are part of virtually all modern driving assistants in today's cars. In social media, face recognition has reached a high level as well, which in turn raises a range of privacy issues.
In summary, reliably detecting objects and their positions relative to the camera is a major branch of AI today. On top of the inherently complex task of detecting objects in arbitrary scenes and cluttered backgrounds, object detection in real-world applications can additionally be hampered by image quality degradation. Such degradation can stem from a variety of sources, e.g.
- lighting conditions (shadows, reflections, over-/under-exposure)
- focus problems
- color degradation due to mismatch in white balance (e.g. Sahara dust)
- stains on the optical lens
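Degradations like the ones listed above can also be simulated synthetically, which is a common way to build training data for robustness experiments. A minimal sketch with NumPy for grayscale images (the function names and parameters are illustrative assumptions, not part of the provided code template):

```python
import numpy as np

def overexpose(img, gain=2.0):
    # Simulate over-exposure: amplify intensities, then clip to the valid [0, 255] range.
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def box_blur(img, k=3):
    # Simulate focus problems with a simple k x k box filter; edges are handled
    # by replicating border pixels ("edge" padding).
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

# Example: degrade a synthetic 8x8 test image.
img = np.full((8, 8), 200, dtype=np.uint8)
bright = overexpose(img)   # saturates to 255
soft = box_blur(img, k=3)  # constant image stays constant under blur
```

Color channels, noise, white-balance shifts, and lens stains can be added analogously; the point is only that degradation severity becomes a controllable variable.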
Current state-of-the-art object detection models (Source) are often unable to yield a perfect prediction for arbitrary images. At inference time, 'blind trust' has to be placed in the machine learning algorithm. The a priori or a posteriori Quality of Detection (QoD) is the topic of this challenge: we seek models that can decide whether the object detection model achieves a good QoD for a given image, especially in the presence of degraded images, which could be caused by e.g. faulty cameras.
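One way to turn QoD into a learnable target is to label an image as "good" when the detector's predictions overlap the ground-truth boxes well, e.g. via intersection-over-union (IoU). A sketch in plain Python (the 0.5 threshold, the function names, and the all-boxes-matched rule are assumptions for illustration; the challenge template may define QoD differently):

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2). Compute the intersection rectangle first.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def qod_label(pred_boxes, gt_boxes, iou_thresh=0.5):
    # Label the image "good QoD" if every ground-truth box is matched by
    # at least one prediction with IoU above the threshold.
    return all(any(iou(g, p) >= iou_thresh for p in pred_boxes)
               for g in gt_boxes)
```

Such labels, computed on images with and without synthetic degradation, could serve as training targets for a model that predicts QoD directly from the input image.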
This challenge is realized in collaboration with our industrial partner:
Dr. Andreas Hutter, Dr. Sanjukta Ghosh
CT RDA SDT IPV-DE
81739 München, Germany
The challenge will start with the kick-off event on Friday, March 5, 2021, at 11:00. The kick-off meeting will be held via Webex.
- download the code template including the data (download link)
- view the code template tutorial (recommended, download link)
- view the provided pytorch introductory videos (optional, download link)
- download additional Coco data and place it in folder data/annotations (optional, download link)
- take your own pictures and label them (optional)
- prepare a short presentation (PDF files only) describing your model and the results obtained with it
- your Python code, including minimal instructions for running it
The above should be submitted by e-mail before 00:00 on April 21, 2021.