Challenges Entered

- Predicting smell of molecular compounds
- Find all the aircraft!

Latest submissions

Status | Submission ID
---|---
graded | 67712
failed | 67711
graded | 67699
ImageCLEF 2020 DrawnUI
Exploit like score - 0.998!?
Over 4 years ago

Hi @picekl, if you look at our first submission, we got a good overall_precision score but a lower overall_recall score according to the evaluation metric. We therefore improved our model to get a better overall_recall score, which we achieved with our subsequent submissions.
But by the time we submitted, the "overall_recall" column had been removed. After getting confirmation from @dimitri.fichou via mail that the overall_precision score is going to be the sole evaluation metric for the competition, we re-trained our model to improve the precision scores. This is reflected in our last two submissions.
We believe that even if the evaluation metric is modified to consider either the F1 score or mAP (at IoU > 0.5), two of our submissions would excel, as they were trained specifically to maximise those.
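For reference, the IoU > 0.5 criterion mentioned above can be sketched as follows. This is a minimal illustration, assuming boxes are given as (x, y, width, height); it is not the organisers' actual evaluation code:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle (clamped to zero when the boxes don't overlap)
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Two 100x100 boxes overlapping by half their width:
print(iou((0, 0, 100, 100), (50, 0, 100, 100)))  # prints 0.3333..., i.e. below the 0.5 threshold
```

Under an mAP-style metric, a predicted box would typically count as a true positive only when its IoU with a ground-truth box of the same class exceeds 0.5.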
@dimitri.fichou, it would be great if you could clarify what exactly the final evaluation metric will be. We'll make another submission and tag it as the "Primary Run".
No overall_recall in evaluation metric?
Over 4 years ago

It seems the overall_recall score has been removed from the evaluation metric. Is this an error, or is overall_accuracy going to be the sole metric used to determine the results?
With just a few days remaining before the challenge ends, I would request you to please confirm exactly what the metric will be.
Apart from that, it would be great if we could have a leaderboard of sorts, so that we know exactly where we stand and can adjust our approach accordingly.
Getting incomplete error message after submission
Over 4 years ago

Thank you.

I had a few other queries related to the submission process.
- In the Submission Instructions section, it's mentioned that we should upload a plain ASCII text file. But when I looked at the evaluation script, it seems to read a CSV file. Can you please confirm whether we need to upload a .txt file or a .csv file?
- In the sample submission row, the format given is: 1c3e1163fa864f9c.jpg;paragraph 0.8:190x135+410+474;button 0.95:99x60+265+745,0.6:85x50+434+819,0.6:89x50+614+739;image 0.7:259x135+379+305;container 0.8:614x925+179+95,0.8:549x229+219+689;
Can you confirm whether a ';' needs to be added at the end of every row or not? The error we're getting seems to be related to that.
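To sanity-check our rows locally, we parse them along the lines of the sketch below. This is our own assumption about the format, not the official parser: it reads each row as filename, then per-class fields of the form "class conf:WxH+X+Y[,conf:WxH+X+Y...]", separated by semicolons, with a trailing semicolon allowed:

```python
import re

# conf:WIDTHxHEIGHT+X+Y, e.g. 0.8:190x135+410+474 (geometry meaning is an assumption)
BOX_RE = re.compile(r"(?P<conf>[\d.]+):(?P<w>\d+)x(?P<h>\d+)\+(?P<x>\d+)\+(?P<y>\d+)")

def parse_row(row):
    """Parse one run-file row into (filename, {class: [(conf, w, h, x, y), ...]})."""
    # Dropping empty fields makes a trailing ';' harmless either way
    fields = [f for f in row.strip().split(";") if f]
    filename, detections = fields[0], {}
    for field in fields[1:]:
        cls, boxes = field.split(" ", 1)
        detections[cls] = [
            (float(m["conf"]), int(m["w"]), int(m["h"]), int(m["x"]), int(m["y"]))
            for m in BOX_RE.finditer(boxes)
        ]
    return filename, detections

row = "1c3e1163fa864f9c.jpg;paragraph 0.8:190x135+410+474;button 0.95:99x60+265+745,0.6:85x50+434+819;"
name, dets = parse_row(row)
print(name, len(dets["button"]))
```

If the official checker rejects rows that this sketch accepts, the difference (for example, the trailing semicolon) would point to the exact formatting rule we are missing.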
Getting incomplete error message after submission
Over 4 years ago

Made the submission in plain ASCII text format and got this error: "Error : Incorrect localisation format (Line nbr 1). The format should be …".
Exploit like score - 0.998!?
Over 4 years ago

Thanks @dimitri.fichou for the clarification. Since we have not tagged any of our submissions as a primary run, can you please confirm which one will be considered for the leaderboard (according to the new evaluation script)?
Also, it would be great if we could see the scores of other participants' submissions, because we're currently only able to see our own. @picekl, can you help us out with this?
Thanks.