Round 1: Completed | Round 2: Completed | Weight: 25.0


🪰 Fruit Fly Baseline | 🐁 Mouse Triplet Baseline

โš”๏ธ Problem statement

In this Task, you will be given a dataset of tracking data for groups of socially interacting flies. Rather than being asked to detect a specific behavior of interest, we ask you to submit a frame-by-frame representation of the dataset, for example a low-dimensional embedding of the animals' trajectories over time. (For inspiration, you can read about a few existing methods for embedding the behavior of individual animals here, here, here, and here.)

To evaluate the quality of your learned representations, we will take a practical approach: we'll use the representations as input to train small two-layer neural networks on many different "hidden" tasks (each task gets its own neural network), such as detecting the occurrence of experimenter-defined actions or distinguishing between two different genetic lines of flies. The goal is therefore to create a representation that captures behavior and generalizes well to any downstream task.

[Diagram of problem statement]
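For intuition only, here is a minimal sketch of what such a downstream probe could look like, using scikit-learn's MLPClassifier as a stand-in for the organizers' hidden-task networks. The actual evaluation code, tasks, and hyperparameters are not published here, and the embeddings and labels below are random placeholders:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Random placeholders standing in for your per-frame embeddings and one
    # hidden frame-level label (e.g. stimulation on/off).
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(10_000, 64))      # (total_frames, embedding_dim)
    labels = rng.integers(0, 2, size=10_000)

    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, test_size=0.2, random_state=0
    )

    # A small two-layer network trained on the frozen embeddings.
    probe = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
    probe.fit(X_train, y_train)
    print("held-out accuracy:", probe.score(X_test, y_test))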

Join our Computational Behavior Slack to discuss the challenge, ask questions, find teammates, or chat with the organizers!

๐Ÿ’พ Dataset

We provide frame-by-frame animal pose estimates extracted from top-view videos of 9-11 interacting flies filmed at 150 Hz; raw videos will not be provided. Flies come from a variety of genetically engineered lines and exhibit both naturally occurring and optogenetically or thermogenetically evoked behaviors.

Animal poses are characterized by the tracked locations of body parts on each animal, termed "keypoints." Keypoints are stored in an ndarray with the following properties:

  • Dimensions: (# frames) × (animal ID) × (body part) × (x, y coordinate).
  • Units: millimeters; coordinates are relative to the center of the circular arena (radius 26.689 mm) the flies are contained within. Original image dimensions are 1024 × 1024 pixels (18.8825 pixels / mm) for the fly dataset. (A pixel-conversion sketch follows this list.)
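Converting the arena-centred millimetre coordinates back to approximate image pixels is a single affine step. The sketch below assumes the arena centre coincides with the image centre at (512, 512), which is not stated above, so treat it as an assumption:

    import numpy as np

    PX_PER_MM = 18.8825                      # fly-dataset scale factor
    IMG_CENTER = np.array([512.0, 512.0])    # assumed arena centre in the 1024 x 1024 image

    def mm_to_pixels(xy_mm):
        """Convert arena-centred (x, y) in mm to approximate image pixel coordinates."""
        return np.asarray(xy_mm) * PX_PER_MM + IMG_CENTER

    print(mm_to_pixels([0.0, 0.0]))          # arena centre -> [512. 512.]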

Body parts are ordered: 1) left wing tip, 2) right wing tip, 3) antennae midpoint, 4) right eye, 5) left eye, 6) left front of thorax, 7) right front of thorax, 8) base of thorax, 9) tip of abdomen, 10) right middle femur base, 11) right middle femur-tibia joint, 12) left middle femur base, 13) left middle femur-tibia joint, 14) right front leg tip, 15) right middle leg tip, 16) right rear leg tip, 17) left front leg tip, 18) left middle leg tip, 19) left rear leg tip.

The placement of these keypoints is illustrated below.

[Diagram of keypoint locations]

In addition, 10 other features are included in the keypoints array, increasing its dimension from 19 × 2 to 24 × 2. These features are: 20) Ellipse-fit center (x and y coordinates), 21) Ellipse-fit orientation (cosine and sine), 22) Ellipse-fit axis lengths (major and minor), 23) Area (body, foreground), and 24) Appearance (foreground/background contrast, minimum neighbor distance).

[Diagram of pose features]
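To make this layout concrete, the sketch below splits one sequence's keypoints array (shape (4500, 11, 24, 2), as described in the Files section) into the 19 body-part keypoints and the five appended feature pairs; the index assignments simply follow the ordering listed above:

    import numpy as np

    # Illustrative array with the documented per-sequence shape: (frames, flies, 24, 2).
    keypoints = np.full((4500, 11, 24, 2), np.nan)

    body_parts     = keypoints[:, :, :19, :]   # the 19 tracked keypoints (x, y), in mm
    ellipse_center = keypoints[:, :, 19, :]    # ellipse-fit centre (x, y)
    ellipse_orient = keypoints[:, :, 20, :]    # ellipse-fit orientation (cos, sin)
    ellipse_axes   = keypoints[:, :, 21, :]    # ellipse-fit axis lengths (major, minor)
    area           = keypoints[:, :, 22, :]    # area (body, foreground)
    appearance     = keypoints[:, :, 23, :]    # appearance (fg/bg contrast, min neighbour distance)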

๐Ÿ“ Files

The following files are available in the resources section (a loading sketch follows the list). A "sequence" is a continuous recording of social interactions between animals; in the fly dataset, sequences are 30 seconds long (4500 frames at 150 Hz). The sequence_id is a random hash used to anonymize experiment details. NaNs indicate missing data: videos contain between 9 and 11 flies, and data are padded with NaNs so that all sequences have the same shape.

  • user_train.npy - Training set for the task, with the following schema:
    • {
          "keypoint_vocabulary": a list of names describing the tracked keypoints
          "vocabulary" : A list of public task names
          "sequences" : {
              "<sequence_id> : {
                  "keypoints" : a ndarray of shape (4500, 11, 24, 2)
                  "annotations" : a ndarray of shape (3, 4500)
              }
          }
      }
      
  • submission_clips.npy - Test set for the task, with the following schema:
    • {
          "<sequence_id> : {
              "keypoints" : a ndarray of shape (4500, 11, 24, 2)
          }
      }
      
  • sample_submission.npy - Template for a sample submission for this task, with the following schema:
    • {
          "frame_number_map": 
              {"<sequence_id-1>": (start_frame_index, end_frame_index),
              "<sequence_id-1>": (start_frame_index, end_frame_index),
              ...
              "<sequence_id-n>": (start_frame_index, end_frame_index),
              }
          "<sequence_id-1>" : [
                  [0.321, 0.234, 0.186, 0.857, 0.482, 0.185], .....]
                  [0.184, 0.583, 0.475], 0.485, 0.275, 0.958], .....]
              ]
      }
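
All three files are NumPy pickles of Python dictionaries; a minimal loading sketch is shown below. File and key names follow the schemas above, and the NaN check identifies padded fly slots:

    import numpy as np

    # allow_pickle=True is required because the .npy files store Python dictionaries.
    train = np.load("user_train.npy", allow_pickle=True).item()
    test = np.load("submission_clips.npy", allow_pickle=True).item()

    print(train["vocabulary"])                     # names of the public tasks
    seq_id, seq = next(iter(train["sequences"].items()))
    keypoints = seq["keypoints"]                   # (4500, 11, 24, 2), mm, NaN-padded
    annotations = seq["annotations"]               # (3, 4500) public-task labels

    # Sequences are padded to 11 flies; slots whose values are all NaN are padding.
    valid_flies = ~np.isnan(keypoints).all(axis=(0, 2, 3))
    print(seq_id, keypoints.shape, "real flies:", int(valid_flies.sum()))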
      

In sample_submission, each key in the frame_number_map dictionary is the unique sequence id of a video in the test set. The value for each key is the pair of start and end indices used to slice the embeddings array and recover that sequence's embeddings. The embeddings array is a 2D ndarray of floats of shape total_frames × X, where X is the dimension of your learned embedding (6 in the example above; the maximum permitted embedding dimension is 256), and each row is the embedded value of one frame. total_frames is the sum of the frame counts of all sequences, so the array is the concatenation of the embeddings of all the clips.
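Putting this together, the sketch below assembles a validly shaped submission dictionary, assuming the "embeddings" key shown in the schema above. The zero embedding is only a placeholder to illustrate the format, not a suggested method:

    import numpy as np

    test = np.load("submission_clips.npy", allow_pickle=True).item()

    EMBED_DIM = 6                  # your embedding dimension; must not exceed 256
    frame_number_map = {}
    chunks = []
    start = 0

    for seq_id, seq in test.items():
        n_frames = seq["keypoints"].shape[0]
        # Placeholder: replace with your learned per-frame embedding for this clip.
        emb = np.zeros((n_frames, EMBED_DIM), dtype=np.float32)
        frame_number_map[seq_id] = (start, start + n_frames)
        chunks.append(emb)
        start += n_frames

    submission = {
        "frame_number_map": frame_number_map,
        "embeddings": np.concatenate(chunks, axis=0),   # (total_frames, EMBED_DIM)
    }
    np.save("submission.npy", submission)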

Public Tasks

To give you a hint of how embeddings will be evaluated, we provide labels for three sample evaluation tasks. Note that not all frames/sequences are labeled for a given task: NaNs indicate missing labels that we will not evaluate on.

  1. Genotype is a "sequence-level" task, meaning its value is the same for all frames in a sequence. Here, sequences are labeled by whether flies are from the genetic control line pBDPGAL4U.
  2. Stimulation is a "frame-level" task, meaning each frame in a sequence will be 1 when stimulation is on and 0 otherwise. In labeled sequences, we optogenetically activate pC1d neurons, a manipulation that evokes aggression in females while stimulation is on.
  3. Aggressive female behavior is a "frame-level" task, meaning each frame in a sequence will be 1 when behavior is present and 0 otherwise. A subset of sequences have been manually annotated by an expert biologist for aggressive behaviors, which in flies include actions like lunging and tussling with another fly.

The sample annotations for these three tasks are provided in a matrix that is (# frames) × (animal ID) × 3. For all tasks, annotations are provided for each fly and frame of a sequence. For task 1, all flies and frames in a sequence will have the same category. For task 2, all flies will have the same category, but it can change over frames. For task 3, annotations apply to a specific fly in a specific frame.
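As a small illustration of handling these missing labels, the sketch below counts the labelled (non-NaN) frames per public task in each training sequence. It assumes the per-sequence (3, 4500) annotation layout from the user_train.npy schema above, so verify the actual array shape in your download:

    import numpy as np

    train = np.load("user_train.npy", allow_pickle=True).item()
    task_names = train["vocabulary"]

    for seq_id, seq in train["sequences"].items():
        ann = np.asarray(seq["annotations"], dtype=float)    # assumed (task, frame) = (3, 4500)
        labelled_per_task = (~np.isnan(ann)).sum(axis=-1)    # labelled frames for each task
        print(seq_id, dict(zip(task_names, labelled_per_task.tolist())))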

🚀 Submission

The sample submission format is described in the Files section above.

To test out the system, you can start by uploading the provided sample_submission.npy. When you make your own submissions, they should follow the same format.

๐Ÿ† Prizes

The cash prize pool for this task is $3,000 USD total:

  • 🥇 1st on the leaderboard: $1,500 USD
  • 🥈 2nd on the leaderboard: $1,000 USD
  • 🥉 3rd on the leaderboard: $500 USD

📫 Contact

mabe.workshop@gmail.com
