Full inference (without slicing) gives different results than YOLOv8 #945
-
I am using the same weights, confidence threshold, IoU threshold and size, and also choosing 'NMS' as the post-process method in SAHI. But the results are different, and fewer objects are detected with SAHI. Any clue? EDIT: looking at it closely, the predictions are actually the same; the problem is that SAHI doesn't actually use the …
Replies: 4 comments
-
Hi, have you solved this problem? I am facing the same situation as you.
-
It appears certain configuration parameters for the detector are only applied when sliced prediction is enabled (no_sliced_prediction = False).
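To make the pitfall concrete, here is a toy illustration (not SAHI's actual code; the `predict` function and its parameters are hypothetical): if a threshold is only applied inside the sliced code path, the full-inference path silently returns unfiltered boxes, so the two modes disagree even with identical settings.

```python
def predict(detections, confidence_threshold=0.25, sliced=True):
    """Toy predictor. `detections` is a list of (label, score) pairs
    produced by the raw model; this is NOT SAHI's implementation."""
    if sliced:
        # Threshold applied in the sliced path...
        return [d for d in detections if d[1] >= confidence_threshold]
    # ...but forgotten in the full-inference path: every box is returned,
    # regardless of the threshold the caller passed in.
    return list(detections)

raw = [("car", 0.9), ("car", 0.1), ("person", 0.3)]
print(len(predict(raw, 0.25, sliced=True)))   # 2 boxes survive filtering
print(len(predict(raw, 0.25, sliced=False)))  # all 3 returned unchanged
```

A bug of exactly this shape (a threshold honored in one code path but not another) would explain seeing "the same predictions, plus extra low-confidence ones" in one mode only.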
-
I also ran into a puzzling problem using SAHI with a YOLOv8 model. When I compute metrics on the val dataset using standard inference with the YOLOv8 model, I get 0.862 mAP50. However, I only get 0.607 mAP50 with SAHI and the same model, whether I set "no_sliced_prediction=True" or set "no_sliced_prediction=False" with an oversized slice_size (e.g. 8000x8000), which effectively yields a single slice per image. I kept all other inference parameters the same. What a surprising result! I then made another attempt, using sliced inference with an appropriate slice size (4032x4032), and mAP50 rose to 0.931. So I can confirm that SAHI is effective at improving recognition accuracy, especially for tiny objects. However, I really wonder why the values in the last two experiments differ so markedly.
-
There was a bug in the YOLOv8 prediction thresholds. This PR fixed the issue: #988
@andressrodrl @wuyifan2233 @WaderLaken @GCGeo