🚨 The Global Wheat Challenge 2021 has ended.

📕 Check out the winners' submissions over here!


📹 Winner Presentations

 

🚀 Get started with the Baseline solution!

πŸ•΅οΈ Introduction

Wheat is the basis of the diet of a large part of humanity. Therefore, this cereal is widely studied by scientists to ensure food security. A tedious, yet important part of this research is the measurement of different characteristics of the plants, also known as Plant Phenotyping. Monitoring plant architectural characteristics allows breeders to grow better varieties and farmers to make better decisions, but this critical step is still done manually. The emergence of UAVs, cameras and smartphones makes in-field RGB images more widely available and could offer an alternative to manual measurement. For instance, wheat heads can be counted with deep learning. However, this task can be visually challenging: dense wheat plants often overlap, and wind can blur the photographs, making it difficult to identify single heads. Additionally, appearances vary with maturity, colour, genotype, and head orientation. Finally, because wheat is grown worldwide, different varieties, planting densities, patterns, and field conditions must be considered. To end manual counting, a robust algorithm must be created to address all of these issues.

 

"

Figure 1: Image from summer 2020 in France with field workers counting wheat head in numerous β€œmicro plots".

👾 Your mission, padawan

Current detection methods involve one-stage and two-stage detectors (YOLOv3 and Faster R-CNN), but even when trained with a large dataset, a bias toward the training region remains. The goal of the competition is to understand such bias and build a robust solution. This is to be done using train and test datasets that cover different regions, such as the Global Wheat Dataset. If successful, researchers will be able to accurately estimate the density and size of wheat heads in different varieties. With improved detection, farmers can better assess their crops, ultimately bringing cereal, toast, and other favourite dishes to your table.

💾 Dataset

The dataset is composed of more than 6000 images of 1024x1024 pixels containing 300k+ unique wheat heads, with the corresponding bounding boxes. The images come from 11 countries and cover 44 unique measurement sessions. A measurement session is a set of images acquired at the same location, within a coherent time window (usually a few hours), with a specific sensor. In comparison to the 2020 competition on Kaggle, this represents 4 new countries, 22 new measurement sessions, 1200 new images and 120k new wheat heads. These new situations will help reinforce the quality of the test dataset. The 2020 dataset was labelled by researchers and students from 9 institutions across 7 countries. The additional data have been labelled by Human in the Loop, an ethical AI labelling company. We hope these changes will help in finding the most robust algorithms possible!

The task is to localize the wheat heads contained in each image. The goal is to obtain a model that is robust to variations in shape, illumination, sensor and location. A set of box coordinates is provided for each image.

The training dataset consists of the images acquired in Europe and Canada, covering approximately 4000 images, while the test dataset is composed of the images from North America (except Canada), Asia, Oceania and Africa, covering approximately 2000 images. This represents 7 new measurement sessions available for training but 17 new measurement sessions for the test!

🖊 Evaluation Criteria

The metric used for the evaluation of the task is the Average Domain Accuracy.

Accuracy for one image

Accuracy is calculated for each image as Accuracy = TP / (TP + FP + FN), where:

  • TP (True Positive): a ground truth box matched with exactly one predicted box
  • FP (False Positive): a predicted box that matches no ground truth box
  • FN (False Negative): a ground truth box that matches no predicted box

Matching method

Two boxes are matched if their Intersection over Union (IoU) is higher than a threshold of 0.5.
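
As a rough illustration of the matching rule, the sketch below computes the IoU of two boxes in the [x_min, y_min, x_max, y_max] pixel format used throughout this challenge. The function is illustrative only, not the official evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over Union of two [x_min, y_min, x_max, y_max] boxes."""
    # Coordinates of the intersection rectangle.
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])

    # Width/height are clamped at 0 for non-overlapping boxes.
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two boxes are considered matched when iou(pred_box, gt_box) > 0.5.
```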

Average Domain Accuracy

The accuracies of all images from one domain are averaged to give the domain accuracy.

The final score, called Average Domain Accuracy, is the average of all domain accuracies.

Special cases

If there is no bounding box in the ground truth and at least one box is predicted, the accuracy is equal to 0; otherwise it is equal to 1.
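
Putting the pieces together, here is a hedged sketch of the scoring described above: one-to-one matching at IoU > 0.5 (implemented greedily here, which is one possible reading), per-image accuracy TP / (TP + FP + FN) with the empty-ground-truth special case, then averaging per domain and across domains. The official evaluation script may differ in its matching details.

```python
from collections import defaultdict

def iou(a, b):
    """IoU of two [x_min, y_min, x_max, y_max] boxes (same as the sketch above)."""
    inter = max(0.0, min(a[2], b[2]) - max(a[0], b[0])) * \
            max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def image_accuracy(pred_boxes, gt_boxes, iou_threshold=0.5):
    """Accuracy = TP / (TP + FP + FN) for a single image."""
    # Special case: no ground-truth boxes in the image.
    if not gt_boxes:
        return 1.0 if not pred_boxes else 0.0
    unmatched_gt = list(gt_boxes)
    tp = 0
    for pred in pred_boxes:
        # Greedily match each prediction to the best remaining ground-truth box.
        best_i, best_iou = -1, iou_threshold
        for i, gt in enumerate(unmatched_gt):
            score = iou(pred, gt)
            if score > best_iou:
                best_i, best_iou = i, score
        if best_i >= 0:
            unmatched_gt.pop(best_i)
            tp += 1
    fp = len(pred_boxes) - tp  # predictions that matched no ground-truth box
    fn = len(unmatched_gt)     # ground-truth boxes left unmatched
    return tp / (tp + fp + fn)

def average_domain_accuracy(per_image_results):
    """per_image_results: iterable of (domain, image_accuracy) pairs."""
    by_domain = defaultdict(list)
    for domain, acc in per_image_results:
        by_domain[domain].append(acc)
    # Average within each domain, then average the domain accuracies.
    domain_accuracies = [sum(v) / len(v) for v in by_domain.values()]
    return sum(domain_accuracies) / len(domain_accuracies)
```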

πŸ“ Files

The following files are available in the Resources section:

  • train.zip - This zip contains the training images, along with a CSV file containing the bounding boxes of the training images.

  • test.zip - This zip is used for the leaderboard evaluation; it contains the images for which bounding boxes need to be predicted.

💻 Labels

  • All boxes are contained in a CSV with three columns: image_name, BoxesString and domain
  • image_name is the name of the image, without the extension. All images have a .png extension
  • BoxesString is a string containing all boxes of an image in the format x_min y_min x_max y_max. Coordinates within a box are separated by a single space (" ") and boxes are separated by a semicolon (";"). If there is no box, BoxesString is equal to "no_box" (see the parsing sketch after this list).
  • domain gives the domain for each image
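
As a small illustration of this label format, the snippet below decodes a BoxesString into a list of [x_min, y_min, x_max, y_max] boxes. The CSV file name (train.csv) is an assumption about the contents of train.zip, not a confirmed path, and the helper is not part of the official tooling.

```python
import pandas as pd

def parse_boxes(boxes_string):
    """Decode a BoxesString such as "101 74 212 164;34 56 78 90" into box lists."""
    if boxes_string == "no_box":
        return []
    # Boxes are separated by ";" and coordinates within a box by a space.
    return [[float(v) for v in box.split()] for box in boxes_string.split(";")]

# Assumed file name for the CSV shipped inside train.zip.
labels = pd.read_csv("train.csv")  # columns: image_name, BoxesString, domain
labels["boxes"] = labels["BoxesString"].map(parse_boxes)
print(labels[["image_name", "domain", "boxes"]].head())
```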

🚀 Submission

  • Prepare a CSV file with the header image_name, PredString, domain and a comma (",") as separator

  • image_name is the name of the image

  • PredString is a string containing all predicted boxes of an image in the format x_min y_min x_max y_max. Coordinates within a box are separated by a single space (" ") and boxes are separated by a semicolon (";") (see the encoding sketch after this list).

  • domain is provided in submission.csv

  • If there is no box, please put "no_box"

  • A sample submission format is available as submission.csv in the Resources section. The sample submission also contains the domain information of the test images.
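
Conversely, a minimal sketch of building a submission file is shown below. Only the column names, separators and the sample submission.csv come from the description above; the predictions dictionary, its contents and the output file name are hypothetical.

```python
import pandas as pd

def boxes_to_pred_string(boxes):
    """Encode [[x_min, y_min, x_max, y_max], ...] as a PredString."""
    if not boxes:
        return "no_box"
    return ";".join(" ".join(str(v) for v in box) for box in boxes)

# Hypothetical model output: image_name -> list of predicted boxes.
predictions = {
    "image_001": [[10, 20, 120, 140], [300, 310, 420, 430]],
    "image_002": [],
}

# The sample submission from the Resources section provides image_name and domain.
submission = pd.read_csv("submission.csv")
submission["PredString"] = submission["image_name"].map(
    lambda name: boxes_to_pred_string(predictions.get(name, []))
)
submission[["image_name", "PredString", "domain"]].to_csv("my_submission.csv", index=False)
```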

📅 Timeline

  • Start date: 4th May, 2021 (23:00 Pacific Time)
  • End date: 4th July, 2021 (23:00 Pacific Time)
  • The private leaderboard will be revealed after the 4th of July.

πŸ† Prizes

  • 1st position: $2000 USD
  • 2nd position: $1000 USD
  • 3rd position: $1000 USD

Winning solutions are required to be open-source (more details in the Rules section).

πŸ† Check out the winner solutions over here!

🔗 Links

📱 Contact

📚 Acknowledgement

The Global Wheat Challenge is led by three research institutes: the University of Saskatchewan, the University of Tokyo, and CAPTE (INRAe/Arvalis/Hiphen), with data coming from 16 institutions from Europe (ETHZ, INRAe, Arvalis, Rothamsted Research, NMBU, University of Liège), Africa (Agriculture Research Corporation), North America (University of Saskatchewan, Kansas University, Terraref, CIMMYT), Oceania (University of Queensland) and Asia (Nanjing Agricultural University, University of Tokyo, NARO, University of Kyoto). These institutions are joined by many others in their pursuit of accurate wheat head detection, including the Global Institute for Food Security, DigitAg, Kubota, Hiphen and GRDC.

 
