
Object Detection Benchmark | Object detection Model evaluation on different Image domains



In this video, I will talk about the performance of object detection models on different image domains, such as aerial images, underwater images, and medical images.

************************************************************************************
For queries: You can comment in the comment section or mail me at aarohisingla1987@gmail.com
************************************************************************************

Microsoft's COCO dataset is currently the most popular object detection dataset. It is widely used to benchmark the performance of any new object detection algorithm.

If you look at the COCO dataset carefully, you will notice that it doesn't contain any medical images such as MRI scans or X-rays.

Today, I will show you the performance of YOLOv5 and YOLOv7 on different types of image domains (aerial, underwater, healthcare).

To evaluate an object detection model's performance, we calculate mAP (mean average precision). mAP compares the ground-truth boxes with the predicted boxes and returns a score: the higher the score, the more accurate the model.
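The core of that comparison is Intersection over Union (IoU): a predicted box only counts as a match for a ground-truth box if their overlap is high enough (COCO sweeps thresholds from 0.5 to 0.95). Here is a minimal sketch of the IoU calculation; the `iou` function and the box format `(x1, y1, x2, y2)` are illustrative choices for this example, not part of any specific library.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format.

    Illustrative helper: real evaluators (e.g. COCO's) do the same
    overlap computation, vectorized over many boxes at once.
    """
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two identical boxes give an IoU of 1.0, while a prediction shifted halfway off the ground truth gives roughly 0.33, which would count as a miss at the common 0.5 threshold.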
Now let's talk about the dataset I am using today: Roboflow 100 (RF100), "A Multi-Domain Open-Source Object Detection Benchmark". It consists of 100 datasets, which let us evaluate object detection model performance across a wider array of imagery domains. You can download RF100 from Roboflow Universe by clicking the export button on each dataset.
