📢 Updates
🚀 Round 2 Launched!
💻 Starter Kit | 💪 Quick Submission (detectron2) | 🍕 Food Recognition Baseline (detectron2)
⏮️ Previous Editions (2019, 2020, 2021)
👥 Find Teammates Here
🕵️ Overview
For almost all of human history, the main concern about food centered on one goal: getting enough of it. Only in the past few decades has food ceased to be a limited resource for many. Today, food is abundant for most - but not all - inhabitants of high- and middle-income countries, and its role has changed correspondingly. Whereas the primary goal of food used to be to provide sufficient energy, today the main public health challenges are the avoidance of excessive calories and the nutritional composition of diets.
Recognizing food from images is extremely useful in a variety of scenarios. In particular, it would allow people to track their food intake simply by taking a picture of what they consume. Food tracking can be of personal interest, and it is often of medical relevance as well. Medical studies have long been interested in the food intake of study participants but have had to rely on food frequency questionnaires, which are known to be imprecise.
Image-based food recognition has made substantial progress thanks to advances in deep learning in the past few years, but it remains a difficult problem for a variety of reasons. This is the 3rd consecutive year we are hosting this benchmark on AIcrowd, building upon the success of the 2019/2020/2021 Food Recognition Challenges.
PROBLEM STATEMENT
The goal of this benchmark is to train models that can look at images of food and detect the individual food items present in them. We use a novel dataset of food images collected through the MyFoodRepo app, where numerous volunteer Swiss users provide images of their daily food intake in the context of a digital cohort called Food & You.
This growing dataset has been annotated - or automatic annotations have been verified - with respect to segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight/volume estimation. We will continue to release more data as the dataset grows over time.
💾 Datasets
Finding annotated food images is difficult. There are some databases with some annotations, but they tend to be limited in important ways.
To put it bluntly: most food images on the internet are a lie. Search for any dish, and you'll find beautiful stock photography of that particular dish. Same on social media: we share photos of dishes with our friends when the image is exceptionally beautiful. But algorithms need to work on real-world images. In addition, annotations are generally missing - ideally, food images would be annotated with proper segmentation, classification, and volume/weight estimates.
With this 2022 iteration of the Food Recognition Benchmark, we release the following versions of the dataset:

- v2.0, containing a training set of 39,962 images of food items, with 76,491 annotations spread over 498 food classes.
- v2.1, containing a training set of 54,392 images of food items, with 100,256 annotations spread over 323 food classes.
The datasets for the AIcrowd Food Recognition Benchmark are available at https://www.aicrowd.com/challenges/food-recognition-benchmark-2022/dataset_files
Round 2 of the competition focuses on v2.1 of the MyFoodRepo dataset and contains:

- `public_training_set_release_2.1.tar.gz`: the training set of 54,392 food images (RGB), along with their corresponding 100,256 annotations from 323 food classes in MS-COCO format
- `public_validation_set_2.1.tar.gz`: the suggested validation set of 946 food images (RGB), along with their corresponding 1,708 annotations from 323 food classes in MS-COCO format
- `public_test_release_2.1.tar.gz`: the public test set for Food Recognition Benchmark 2022: Round 2
To get started, we advise you to download all the files and untar them inside the data/ folder of this repository.
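As a minimal sketch of that setup (assuming the three archives have been downloaded into the current directory and that `pycocotools` is installed; the exact paths inside the extracted archives may differ):

```python
import tarfile
from pathlib import Path

from pycocotools.coco import COCO  # pip install pycocotools

# Extract all three released archives into data/.
data_dir = Path("data")
data_dir.mkdir(exist_ok=True)
for archive in [
    "public_training_set_release_2.1.tar.gz",
    "public_validation_set_2.1.tar.gz",
    "public_test_release_2.1.tar.gz",
]:
    with tarfile.open(archive) as tar:
        tar.extractall(data_dir)

# Sanity-check the training annotations (the annotations.json path is an
# assumption; adjust it to wherever the archive actually places the file).
coco = COCO(str(data_dir / "train" / "annotations.json"))
print(len(coco.getImgIds()), "images,",
      len(coco.getAnnIds()), "annotations,",
      len(coco.getCatIds()), "food classes")
# For v2.1 this should report 54392 images, 100256 annotations, 323 classes.
```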
💪 An open benchmark
For all the reasons mentioned above, food recognition is a difficult but important problem. Algorithms that could tackle this problem would be extremely useful for everyone. That is why we are establishing this open benchmark for food recognition. The goal is simple: provide high-quality data, and get developers around the world excited about addressing this problem in an open way. Because of the complexity of the problem, a one-shot approach won't work. This is a benchmark for the long run. If you are interested in providing more annotated data, please contact us.
📅 Timeline
This is an ongoing, multi-round benchmark. The specific tasks and/or datasets will be updated at each round, and each round will have its own prizes. You can participate in a single round or in all of them.
- Round 1: December 20th, 2021 - ~~February 20th, 2022~~ February 28th, 2022
- Round 2: March 3rd, 2022 - May 3rd, 2022 (Ongoing)
🚥 Participation Routes
There are two routes for participating in the challenge: you can make a quick submission with just your prediction files, or you can go the classic code-based route.
- Quick Participation 🏃
  - You need to upload prediction JSON files
  - Scores are computed on 40% of the publicly released test set
  - You are not eligible for the final leaderboard (and prizes)
- Active Participation 👨‍💻
  - You need to submit code (and AIcrowd evaluators run the code to generate predictions)
  - Scores are computed on 100% of the publicly released test set + 40% of the (unreleased) extended test set
  - You are eligible for the final leaderboard and prizes
The flow for Active Participation looks as follows: you submit your code, AIcrowd evaluators run it to generate predictions, and those predictions are scored against the test sets.
🏆 Prizes
🚁 First-to-cross prize
The first participant or team to reach an AP of 0.44 on the leaderboard will receive a DJI Mavic Mini 2 as a prize! 🚁
(This prize is awarded to the first such participant/team in the active participation track, and is valid across both rounds of the challenge.)
💪 Round 2 prizes (Active participation track)
- 🥇 1st Prize: DJI FPV Drone Combo
- 🥈 2nd Prize: DJI Mavic Air 2
- 🥉 3rd Prize: Oculus Quest 2
The prizes will be awarded based on the final leaderboard for Round 2.
Note: Round 2 prizes are separate from the first-to-cross prize, and there is no minimum score threshold for Round 2 prizes.
📝 Paper Authorships
Top participants from Round 1 and Round 2 of the Benchmark will be invited to be co-authors of the dataset release paper and the challenge solution paper. If you have any questions, please let us know on the challenge forum.
📤 Submission
You can find more details on making a submission to the benchmark in the official starter kit here.
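For the quick-participation route, the upload is a predictions file in the standard MS-COCO results format. The sketch below shows the general shape of such a file; all concrete values are made up, and the starter kit remains the authoritative reference for the exact schema and mask encoding:

```python
import json

# One entry per detected food item, following the standard COCO results schema.
predictions = [
    {
        "image_id": 12345,       # id of the test image (hypothetical value)
        "category_id": 2053,     # one of the food class ids (hypothetical value)
        "score": 0.87,           # model confidence for this detection
        "segmentation": {        # RLE-encoded mask, as produced by
            "size": [480, 640],  # pycocotools.mask.encode (height, width)
            "counts": "...",     # placeholder for the RLE counts string
        },
    },
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```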
📊 Evaluation Criteria
The benchmark uses the official detection evaluation metrics used by COCO. The primary evaluation metric is `AP @ IoU=0.50:0.05:0.95`. The secondary evaluation metric is `AR @ IoU=0.50:0.05:0.95`. A further discussion about the evaluation metrics can be found here.
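For local evaluation, both metrics can be reproduced with `pycocotools`. A minimal sketch, assuming ground truth and predictions in MS-COCO format (the paths are placeholders, and this mirrors the standard COCO evaluation rather than the exact evaluator AIcrowd runs):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("data/val/annotations.json")    # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")  # your predictions (COCO results format)

# "segm" evaluates segmentation masks; use "bbox" for bounding boxes.
coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

ap = coco_eval.stats[0]  # AP @ IoU=0.50:0.05:0.95 (primary metric)
ar = coco_eval.stats[8]  # AR @ IoU=0.50:0.05:0.95, maxDets=100 (secondary metric)
print(f"AP: {ap:.4f}  AR: {ar:.4f}")
```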
✨ Inspiration
The AIcrowd team talked to previous winners of the benchmark about their experience participating, their approaches, and their tips for fellow participants. There are many interesting snippets in their stories that you may want to check out!
- Gaurav Singhal on Winning Round 4 of the previous edition of Food Recognition Challenge
- Rohit and Shraddha on their journey from Chennai, India, to Switzerland
- Read Mark Potanin's advice on how he improves his scores!
- Also, check out this post by lorepieri listing some great resources for the challenge!
🙋 Frequently Asked Questions
- Who can participate in this benchmark?
- Anyone. This benchmark is open to everyone.
- Do I need to take part in all the rounds?
- No. Each round has separate prizes. You can participate in any one of the rounds or all of them.
- I am a beginner. Where do I start?
- There is a starter kit available here explaining how to make a submission. You can also use notebooks in the Starter Notebooks section, which give details on using MaskRCNN and MMDetection.
- What is the maximum team size?
- Each team can have a maximum of 5 members.
- Do I have to pay to participate?
- No. Participation is free and open to all.
- Is there a private test set?
- Yes. The test set given in the Resources section is only for local evaluation. You are required to submit a repository that is run against a private test set. Please read the starter kit for more information.
- How do I upload my model to GitLab?
- To upload your models, please use Git Large File Storage.
- What timeouts and resources are available for Active (GitLab) submissions?
- AWS `g4dn.xlarge` instances are used for inference, with a timeout of 1.5 seconds per image.
- Other questions?
- Head over to the Discussions Forum and feel free to ask!
📱 Contact