
SnakeCLEF2021 - Snake Species Identification Challenge

Download dataset in Colab/Notebook via CLI

This notebook contains an example of how to download a dataset from a notebook/Colab using aicrowd-cli.

shivam

Download dataset example for SnakeCLEF 🛠

In [ ]:
!pip install -U aicrowd-cli==0.1 > /dev/null
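The pip output is redirected to /dev/null above, so here is an optional quick check that the CLI actually installed (this just prints the package metadata):

In [ ]:
# Confirm aicrowd-cli is installed; the install output above is suppressed.
!pip show aicrowd-cli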
In [ ]:
# Get your API key from https://www.aicrowd.com/participants/me
API_KEY = "add-your-api-key-here"
!aicrowd login --api-key $API_KEY
API Key valid
Saved API Key successfully!
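If you would rather not hardcode the key in a cell (for example before sharing the notebook), an optional alternative is to prompt for it at runtime with Python's getpass:

In [ ]:
# Optional: read the API key interactively instead of pasting it into the notebook.
import getpass

API_KEY = getpass.getpass("AIcrowd API key: ")
!aicrowd login --api-key $API_KEY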
In [ ]:
!aicrowd dataset list --challenge snakeclef2021-snake-species-identification-challenge
                           Datasets for challenge #3                            
┌────┬──────────────────────────────┬──────────────────────────────┬───────────┐
│ #  │ Title                        │ Description                  │      Size │
├────┼──────────────────────────────┼──────────────────────────────┼───────────┤
│ 0  │ Species to Country Map File  │ Species to Country Map File  │   1.56 MB │
│ 1  │ BaseLine Notebook - Training │ BaseLine - Training Script   │  30.02 KB │
│    │ Script                       │ with EfficientNet B0 and     │           │
│    │                              │ PyTorch                      │           │
│ 2  │ SnakeCLEF2021 - MinTrain     │ CSV file with metadata. Max  │  13.74 MB │
│    │ Metadata                     │ 300 images per class.        │           │
│ 3  │ SnakeCLEF-2021 -             │ SnakeCLEF-2021-TrainingData  │     60 GB │
│    │ TrainingData                 │                              │           │
│ 4  │ SnakeCLEF2021 - TrainVal     │ CSV file with Metadata.      │  76.80 MB │
│    │ Metadata                     │                              │           │
│ 5  │ archived_train.tar.gz        │ Training Set of 82601 images │     21 GB │
│    │                              │ of snakes spread across 45   │           │
│    │                              │ species.                     │           │
│ 6  │ round_1_and_2_train.tar.gz   │ (Permanent Link on           │     42 GB │
│    │                              │ datasets.aicrowd.com)        │           │
│ 7  │ test_ground_truth.csv        │ Round 4 - Ground Truth       │  44.83 MB │
│ 8  │ test_images.tar.gz           │ Round 4 - Testing dataset    │    4.8 GB │
│    │                              │ images                       │           │
│ 9  │ test_metadata.tar.gz         │ Round 4 - Metadata for       │ 232.22 KB │
│    │                              │ testing dataset              │           │
│ 10 │ train_labels.tar.gz          │ Round 4 - Metadata for       │     11 MB │
│    │                              │ training dataset             │           │
│ 11 │ train_images.tar.gz          │ Round 4 - Training dataset   │     41 GB │
│    │                              │ images                       │           │
│ 12 │ validate_labels_small.tar.gz │ Round 4 - Metadata for       │     10 KB │
│    │                              │ validation dataset (Small)   │           │
│ 13 │ validate_images_small.tar.gz │ Round 4 - Validation dataset │   17.8 MB │
│    │                              │ images (Small)               │           │
│ 14 │ validate_labels.tar.gz       │ Round 4 - Metadata for       │    680 KB │
│    │                              │ validation dataset           │           │
│ 15 │ validate_images.tar.gz       │ Round 4 - Validation dataset │    2.4 GB │
│    │                              │ images                       │           │
│ 16 │ train_labels.tar.gz          │ Round 3 - Metadata for       │    1.7 MB │
│    │                              │ training images              │           │
│ 17 │ train_images.tar.gz          │ Round 3 - Images for         │   24.3 GB │
│    │                              │ training the models          │           │
│ 18 │ test_metadata_small.tar.gz   │ Round 3 - Metadata for test  │    2.6 KB │
│    │                              │ images                       │           │
│ 19 │ test_images_small.tar.gz     │ Round 3 - Test Images for    │   56.1 MB │
│    │                              │ local debugging              │           │
│ 20 │ archived_round1_test.tar.gz  │ Test Set for Round-1         │    4.3 GB │
│    │                              │ Containing 17732 images of   │           │
│    │                              │ snakes (from 45 species)     │           │
│ 21 │ archived_sample_submission.… │ Sample submission file       │     17 MB │
│    │                              │ (random predictions)         │           │
│ 22 │ archived_class_idx_mapping.… │ mapping of class ids and     │   1.10 KB │
│    │                              │ class names                  │           │
└────┴──────────────────────────────┴──────────────────────────────┴───────────┘
In [ ]:
# Download the file at index 0 (or multiple indexes at once)
!aicrowd dataset download --challenge snakeclef2021-snake-species-identification-challenge 0
!aicrowd dataset download --challenge snakeclef2021-snake-species-identification-challenge 0 1
f508cf10-ac79-4750-bf36-5f0c447bef87_species_to_country_mapping.csv: 100% 1.56M/1.56M [00:01<00:00, 996kB/s]
f508cf10-ac79-4750-bf36-5f0c447bef87_species_to_country_mapping.csv: 100% 1.56M/1.56M [00:01<00:00, 989kB/s]
db0556f5-422c-4b87-9afe-e79bc1268026_BaseLine-EfficientNet-B0-224.ipynb: 100% 30.0k/30.0k [00:00<00:00, 115kB/s]
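As a quick sanity check, you can peek at the mapping file you just downloaded. This is a sketch that assumes the CLI saved the file into the current working directory; the file is matched by its suffix because the download prepends a generated ID to the name:

In [ ]:
# Locate the species-to-country mapping CSV in the working directory and preview it.
import glob
import pandas as pd

csv_path = glob.glob("*species_to_country_mapping.csv")[0]
mapping = pd.read_csv(csv_path)
print(csv_path, mapping.shape)
mapping.head()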
In [ ]:
# Download files by name (you can specify multiple at the same time)
!aicrowd dataset download --challenge snakeclef2021-snake-species-identification-challenge "Species to Country Map File"
!aicrowd dataset download --challenge snakeclef2021-snake-species-identification-challenge "Species to Country Map File" "BaseLine Notebook - Training Script"
f508cf10-ac79-4750-bf36-5f0c447bef87_species_to_country_mapping.csv: 100% 1.56M/1.56M [00:01<00:00, 942kB/s]
f508cf10-ac79-4750-bf36-5f0c447bef87_species_to_country_mapping.csv: 100% 1.56M/1.56M [00:01<00:00, 1.00MB/s]
db0556f5-422c-4b87-9afe-e79bc1268026_BaseLine-EfficientNet-B0-224.ipynb: 100% 30.0k/30.0k [00:00<00:00, 115kB/s]
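If you are working in Colab and want the downloads to persist across sessions (so you do not have to re-download every time the runtime resets), one option is to mount Google Drive and change the working directory before running the download commands. This is a minimal sketch, assuming the CLI writes into the current working directory; the SnakeCLEF2021 folder name is just an example:

In [ ]:
# Mount Google Drive and download into a Drive folder so the files survive runtime resets.
from google.colab import drive
drive.mount('/content/drive')

import os
data_dir = "/content/drive/MyDrive/SnakeCLEF2021"  # example folder; pick any path on your Drive
os.makedirs(data_dir, exist_ok=True)
os.chdir(data_dir)  # subsequent !-commands run in this directory

!aicrowd dataset download --challenge snakeclef2021-snake-species-identification-challenge 0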

Comments

poojamalagund
Over 3 years ago

@shivam How can I save the downloaded dataset to Google Drive? The next time I run the notebook, I don't want to download the dataset again; I just want to mount Google Drive and run the model on the dataset. I am working on the Global Wheat Challenge dataset.
