
Airborne Object Tracking Challenge

Sample interface for training with DarkNet YOLO

This notebook helps you navigate the dataset more easily and sets up training with YOLO models.

siddhartha

🤫 Dataset Setup

In [ ]:
import json
import os, sys
import random
from random import randrange

from IPython.display import display, clear_output, HTML
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

plt.rcParams["figure.figsize"] = 25, 25

# Because Life, Universe and Everything!
random.seed(42)

def mdprint(text):
    display({
        'text/markdown': text,
        'text/plain': text
    }, raw=True)

%cd /content
!git clone http://gitlab.aicrowd.com/amazon-prime-air/airborne-detection-starter-kit.git
os.chdir("airborne-detection-starter-kit/data")
/content
Cloning into 'airborne-detection-starter-kit'...
remote: Enumerating objects: 23, done.
remote: Counting objects: 100% (23/23), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 360 (delta 9), reused 8 (delta 3), pack-reused 337
Receiving objects: 100% (360/360), 21.91 MiB | 9.99 MiB/s, done.
Resolving deltas: 100% (148/148), done.

⏱ Loading the Dataset

In [ ]:
# Dataset helper for the Airborne Object Tracking dataset
sys.path.append(os.path.dirname(os.path.realpath(os.getcwd())))
sys.path.append(os.path.dirname(os.path.realpath(os.getcwd())) + "/core")
!pip install -r ../requirements.txt > /dev/null
from core.dataset import Dataset
notebook_path = os.path.dirname(os.path.realpath("__file__"))

local_path = notebook_path + '/part1'
s3_path = 's3://airborne-obj-detection-challenge-training/part1/'
dataset = Dataset(local_path, s3_path)
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
2021-08-26 07:17:07.591 | INFO     | core.dataset:load_gt:20 - Loading ground truth...
2021-08-26 07:17:07.593 | INFO     | core.file_handler:download_file_if_needed:33 - [download_from_s3] File not found locally, downloading: ImageSets/groundtruth.json

🌱 Ground Truth Structure

Ground truth (present in the ImageSets folder) contains all the relevant information regarding airborne objects: their locations, bounding boxes, and so on. The Images folder contains the accompanying images for your training code to work on.

Before we start, let's check the vocabulary we will need to understand the dataset:

  • flights (a.k.a. samples in ground truth):
    One flight is typically a 2-minute video at 10 fps, i.e. ~1200 images. Each frame is present in the Images/{flight_id}/ folder. These files are typically 3-4 MB each.
  • frame (a.k.a. entity in ground truth):
    This is the most granular unit on which the dataset can be sampled. Each frame has a timestamp, frame_id, and the label is_above_horizon. There can be multiple entries for the same frame in entity when multiple airborne objects are present.
    When an airborne object is present, the following information is available as well:
    • id -> unique ID of this object (kept consistent across the whole flight)
    • bbox -> 4 floats signifying [left, top, width, height]
    • blob['range_distance_m'] -> distance of the airborne object
    • labels['is_above_horizon'] -> details below
    • (derived) planned -> range_distance_m is only available for planned objects
  • is_above_horizon:
    It is marked as 1 when the airborne object is above the horizon and -1 when it is below the horizon. When unclear, it is marked as 0.

Example of frame-level data (there can be multiple such entries per frame):

{
    'time': 1550844897919368155,
    'blob': {
        'frame': 480,
        'range_distance_m': nan  # nan signifies an unplanned object
    },
    'id': 'Bird2',
    'bb': [1013.4, 515.8, 6.0, 6.0],
    'labels': {'is_above_horizon': 1},
    'flight_id': '280dc81adbb3420cab502fb88d6abf84',
    'img_name': '1550844897919368155280dc81adbb3420cab502fb88d6abf84.png'
}

You can read more about the dataset in the DATASET.md file in the starter kit.
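
To make this vocabulary concrete, here is a minimal sketch that walks one flight and prints the first airborne object it finds. It only relies on the Dataset accessors that are exercised in the cells below; treat it as illustrative rather than official API documentation.

In [ ]:
# Minimal sketch: peek at one flight, one object, one location.
# Uses only accessors that also appear in the cells below.
flight_id = next(iter(dataset.get_flight_ids()))
flight = dataset.get_flight(flight_id)
print("flight:", flight_id)
print("objects:", list(flight.detected_objects)[:5])

obj_key = next(iter(flight.detected_objects))
obj = flight.detected_objects[obj_key]
loc = next(iter(obj.location))
print("object:", obj_key)
print("frame id:", loc.frame.id)
print("bbox [left, top, width, height]:", loc.bb.get_bbox())
print("range_distance_m:", loc.range_distance_m)
print("image path:", loc.frame.image_path())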

In [ ]:
# Collect the unique object types by stripping the trailing digits from object keys
def remove_numbers(s):
  return ''.join([i for i in s if not i.isdigit()])

all_keys = []
for flight_id in dataset.get_flight_ids():
    flight = dataset.get_flight(flight_id)
    all_keys.extend([remove_numbers(k) for k in flight.detected_objects])

unique_keys = list(set(all_keys))
unique_keys
Out[ ]:
['Flock', 'Helicopter', 'Airborne', 'Drone', 'Airplane', 'Bird']

Loading the dataset as a pandas DataFrame

In [ ]:
rows = []
for flight_id in dataset.get_flight_ids():
    flight = dataset.get_flight(flight_id)
    for obj_key in flight.detected_objects:
        object_type = remove_numbers(obj_key)
        obj = flight.detected_objects[obj_key]
        for loc in obj.location:
            bbox = loc.bb.get_bbox()
            frame_id = loc.frame.id
            range_distance = loc.range_distance_m
            image_path = loc.frame.image_path()
            rows.append([flight_id, object_type, obj_key, frame_id, 
                         *bbox, bbox[-1]*bbox[-2], image_path, range_distance])
In [ ]:
import pandas as pd
df = pd.DataFrame(rows)
df.columns = ['flight_id', 'object_type', 'object', 'frame_id', 
              'left', 'top', 'width', 'height', 'area', 'image_path','range_distance']
df.head()
Out[ ]:
flight_id object_type object frame_id left top width height area image_path range_distance
0 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane3 2 1703.2 939.2 6.0 6.0 36.0 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
1 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane3 3 1701.8 934.4 6.0 6.0 36.0 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
2 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane3 4 1694.0 935.0 6.0 6.0 36.0 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
3 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane3 5 1691.4 935.8 6.0 6.0 36.0 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
4 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane3 6 1686.0 933.6 6.0 6.0 36.0 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN

Download images for training

Filter the data points to the relevant airborne objects (Airplane, Helicopter) and keep only objects larger than a certain area. This picks 5000 airplane and 5000 helicopter instances for training.

In [ ]:
import random

# Keep only airplanes and helicopters, and drop tiny boxes (area <= 400 px^2)
filtered_df = df.loc[(df['object_type'] == 'Helicopter') | (df['object_type'] == 'Airplane')]
filtered_df = filtered_df.loc[filtered_df['area'] > 400]

airplanes = filtered_df.index[filtered_df['object_type'] == 'Airplane']
helicopters = filtered_df.index[filtered_df['object_type'] == 'Helicopter']

# Frames that contain more than one object (not used below, kept for reference)
counts_dict = dict(filtered_df['image_path'].value_counts())
multi_object_frames = filtered_df[filtered_df.apply(lambda x: counts_dict[x['image_path']] > 1, axis=1)].index

train_indices = random.sample(list(airplanes), 5000) + random.sample(list(helicopters), 5000)
train_df = filtered_df.loc[list(set(train_indices))]
filtered_df.head()
Out[ ]:
flight_id object_type object frame_id left top width height area image_path range_distance
499 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane2 74 3.2 17.2 31.6 21.0 663.60 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
500 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane2 75 3.4 18.8 45.0 24.6 1107.00 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
501 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane2 76 3.2 6.2 69.2 40.4 2795.68 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
502 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane2 77 3.6 3.2 81.6 43.6 3557.76 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN
503 63c5597a57b04b448723f1f1844a2b78 Airplane Airplane2 78 3.8 3.6 102.4 45.6 4669.44 Images/63c5597a57b04b448723f1f1844a2b78/156084... NaN

Downloading the selected images

In [ ]:
download_images = []
for idx in range(len(train_df)):
  flight = dataset.flights[train_df.iloc[idx]['flight_id']]
  frame = flight.frames[train_df.iloc[idx]['frame_id']]
  # frame.image_s3_path() gives the corresponding S3 path if needed
  image_path = frame.image_path()
  download_images.append([image_path, image_path])
random.shuffle(download_images)

# Download the first 100 selected images (increase the range to download more)
from tqdm import tqdm
for idx in tqdm(range(100)):
  dataset.file_handler.download_from_s3_parallel(download_images[idx])
100%|██████████| 100/100 [02:11<00:00,  1.31s/it]

Image visualisation

In [ ]:
import cv2
import sys
%matplotlib inline

def drawBoundingBoxes(imageInputPath, imageOutputPath, inferenceResults, color):
    """Draw bounding boxes on an image.
    imageInputPath: input image file path
    imageOutputPath: output image file path
    inferenceResults: list of dicts with keys left, top, width, height, label
    color: bounding box color as a BGR tuple
    """
    imageData = cv2.imread(imageInputPath)
    for res in inferenceResults:
        left = int(res['left'])
        top = int(res['top'])
        right = int(res['left']) + int(res['width'])
        bottom = int(res['top']) + int(res['height'])
        label = res['label']
        imgHeight, imgWidth, _ = imageData.shape
        thick = int((imgHeight + imgWidth) // 900)
        print(left, top, right, bottom)
        cv2.rectangle(imageData, (left, top), (right, bottom), color, thick)
        cv2.putText(imageData, label, (left, top - 12), 0, 1e-3 * imgHeight, color, max(1, thick // 3))
    cv2.imwrite(imageOutputPath, imageData)
    # OpenCV loads images as BGR; convert to RGB for matplotlib
    plt.imshow(cv2.cvtColor(imageData, cv2.COLOR_BGR2RGB))

target_image = download_images[0][0]
color = (0, 255, 0)
img_root = 'part1'
frame_df = train_df[train_df["image_path"] == target_image]
boxes = []
for idx,ob in frame_df.iterrows():
  label,left,top,width,height = ob['object_type'],ob['left'],ob['top'],ob['width'],ob['height']
  box = {}
  box['left'] = left
  box['top'] = top
  box['width'] = width
  box['height'] = height
  box['label'] = label
  boxes.append(box)

drawBoundingBoxes(os.path.join(img_root,target_image),'output.png',boxes,color)
1200 705 1335 771

Setup for training with DarkNet

In [ ]:
%cd /content/
!git clone https://github.com/AlexeyAB/darknet
/content
Cloning into 'darknet'...
remote: Enumerating objects: 15301, done.
remote: Total 15301 (delta 0), reused 0 (delta 0), pack-reused 15301
Receiving objects: 100% (15301/15301), 13.66 MiB | 25.90 MiB/s, done.
Resolving deltas: 100% (10399/10399), done.

Set up pretrained weights

In [ ]:
%cd darknet/
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
!cp yolov4.conv.137 build/darknet/x64
[Errno 2] No such file or directory: 'darknet/'
/content/darknet
--2021-08-25 13:26:12--  https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
Resolving github.com (github.com)... 192.30.255.112
Connecting to github.com (github.com)|192.30.255.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-releases.githubusercontent.com/75388965/48bfe500-889d-11ea-819e-c4d182fcf0db?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210825%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210825T132531Z&X-Amz-Expires=300&X-Amz-Signature=46fc81819a2a9d8209f467acdc2f1d693abf2e9629724b770238a24e329dfc7d&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=75388965&response-content-disposition=attachment%3B%20filename%3Dyolov4.conv.137&response-content-type=application%2Foctet-stream [following]
--2021-08-25 13:26:12--  https://github-releases.githubusercontent.com/75388965/48bfe500-889d-11ea-819e-c4d182fcf0db?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210825%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210825T132531Z&X-Amz-Expires=300&X-Amz-Signature=46fc81819a2a9d8209f467acdc2f1d693abf2e9629724b770238a24e329dfc7d&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=75388965&response-content-disposition=attachment%3B%20filename%3Dyolov4.conv.137&response-content-type=application%2Foctet-stream
Resolving github-releases.githubusercontent.com (github-releases.githubusercontent.com)... 185.199.108.154, 185.199.111.154, 185.199.110.154, ...
Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.108.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 170038676 (162M) [application/octet-stream]
Saving to: ‘yolov4.conv.137.1’

yolov4.conv.137.1   100%[===================>] 162.16M  69.6MB/s    in 2.3s    

2021-08-25 13:26:14 (69.6 MB/s) - ‘yolov4.conv.137.1’ saved [170038676/170038676]

Prepare training data

Copy the training data to the train folder

In [ ]:
!mkdir -p data/obj
!cp -r /content/airborne-detection-starter-kit/data/part1/Images data/obj/

Prepare labels for training data in the required format

In [ ]:
import os
import random

path = "data/obj/"
filelist = []

# Collect all downloaded image files
for root, dirs, files in os.walk(path):
  for file in files:
    filelist.append(os.path.join(root, file))

# 80/20 train/validation split
random.shuffle(filelist)
train_list = filelist[:int(0.8 * len(filelist))]
val_list = filelist[int(0.8 * len(filelist)):]

# Write the image lists that darknet expects (one image path per line)
train_file = open('data/obj/train.txt', 'w')
for image_file in train_list:
  train_file.write(image_file + "\n")
train_file.close()

val_file = open('data/obj/test.txt', 'w')
for image_file in val_list:
  val_file.write(image_file + "\n")
val_file.close()

object_label_map = {'Airplane': 0, 'Helicopter': 1}
labels_path = "data/obj"

# AOT images are 2448x2048; YOLO labels are normalised to [0, 1]
img_width = 2448
img_height = 2048

# Write one .txt label file per image:
# <object-class> <x_center> <y_center> <width> <height>
for image_file in filelist:
  image_key = image_file.replace(path, "")
  frame_df = train_df[train_df["image_path"] == image_key]
  label_file = os.path.join(labels_path, image_key.replace("png", "txt"))
  os.makedirs(os.path.dirname(label_file), exist_ok=True)
  label_fp = open(label_file, "w")
  for idx, ob in frame_df.iterrows():
    label, left, top, width, height = ob['object_type'], ob['left'], ob['top'], ob['width'], ob['height']
    x_center = left + width / 2
    y_center = top + height / 2
    label = object_label_map[label]
    label_fp.write(f"{label} {x_center/img_width} {y_center/img_height} {width/img_width} {height/img_height}")
    label_fp.write("\n")
  label_fp.close()
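
As a quick sanity check, the sketch below prints the label file generated for one image. With the paths used above, each label file sits next to its image with a .txt extension.

In [ ]:
# Sanity check: print the YOLO label generated for the first training image
sample_image = train_list[0]
sample_label = sample_image.replace("png", "txt")
print(sample_image)
with open(sample_label) as f:
  print(f.read())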

Set up train config

Check the Makefile to edit the build parameters:

  • set GPU=1 and CUDNN=1 to speed up training on a GPU (see the sketch below)
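
A minimal sketch for flipping these flags on Colab before building; this assumes the stock AlexeyAB Makefile, which exposes GPU and CUDNN switches near the top of the file:

In [ ]:
# Assumes the stock AlexeyAB darknet Makefile (GPU=0 / CUDNN=0 near the top)
%cd /content/darknet
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile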
In [ ]:
!make
chmod +x *.sh

Edit training config

Refer to https://github.com/AlexeyAB/darknet for full instructions on setting up training on custom data. Minimally, you will have to do the following:

  • Create data/obj.data pointing at your image lists and class names (the paths below match where the cells above wrote train.txt and test.txt):

classes = 2
train  = data/obj/train.txt
valid  = data/obj/test.txt
names = data/obj.names
backup = backup/

  • Create data/obj.names with one class name per line (Airplane, Helicopter), and a yolo-obj.cfg following the custom-training instructions in the darknet README; see the sketch after this list.
  • Make sure (running the above cells already helps you achieve this):

    • The image files of your objects are in the data/obj/ directory.
    • Each image file has a corresponding label file with a .txt extension containing <object-class> <x_center> <y_center> <width> <height>.
    • The files train.txt and test.txt contain the filenames of the training and validation images (one per line).
  • Start training from the command line:

    ./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137
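
The cell below is a minimal sketch for writing data/obj.names and data/obj.data. It assumes the image lists were written to data/obj/ as in this notebook and that the class order follows object_label_map defined above; adjust the paths if your layout differs.

In [ ]:
# Sketch: write obj.names and obj.data for darknet.
# Assumes object_label_map = {'Airplane': 0, 'Helicopter': 1} from the cells above
# and that train.txt / test.txt live in data/obj/.
names = sorted(object_label_map, key=object_label_map.get)
with open('data/obj.names', 'w') as f:
  f.write("\n".join(names) + "\n")

with open('data/obj.data', 'w') as f:
  f.write(f"classes = {len(names)}\n")
  f.write("train  = data/obj/train.txt\n")
  f.write("valid  = data/obj/test.txt\n")
  f.write("names = data/obj.names\n")
  f.write("backup = backup/\n")

# darknet saves weights to the backup folder during training
!mkdir -p backup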


Comments

bibek_koirala
Almost 3 years ago

This line gives an error despite following all the steps.

./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137

AlexVizgard
Over 1 year ago

In this example you train on 5k images of a couple of the available classes. Did you happen to try training with all the labelled classes and on the full dataset? I wonder how many epochs would be recommended in that case.
