Round 1: Completed
Round 2: Completed
Round 3: Completed
Round 4: Completed

ISWC 2019 Columns-Property Annotation (CPA) Challenge

Prize Money: 2000
Travel Grants: 0
Misc Prizes: SIRIUS and IBM Research sponsor the prizes for the best system and the best student system, respectively.

NEWS: Deadlines updated. Please join our discussion group.

This is a task of the ISWC 2019 "Semantic Web Challenge on Tabular Data to Knowledge Graph Matching". The task is to annotate a column pair within a table with a property from the DBpedia ontology. See the official challenge website for further details.

Task Description

Each submission should be one CSV file. Each line should contain one property annotation for one column pair, which is identified by a table ID, a head column ID and a tail column ID. Note that the order of the head column and the tail column matters. The annotation properties should come from DBpedia, with the prefix http://dbpedia.org/ontology/. Each column pair should be annotated with one property that is as fine-grained as possible while still correct. Annotations are NOT case sensitive.

Briefly, each line of the submission file should include "table ID", "head column ID", "tail column ID", and "DBpedia property". The header should be excluded from the submission file. Here is an example line: "50245608_0_871275842592178099","0","1","http://dbpedia.org/ontology/releaseDate". A minimal script for producing such a file is sketched after the notes below.

Notes:

1) The table ID does not include the filename extension; make sure you remove the .csv extension from the filename.

2) The column ID is the position of the column in the table file, starting from 0, i.e., the first column's ID is 0.

3) At most one property should be annotated for each column pair.

4) The submission file should have NO duplicate lines (annotations) for one column pair.

5) Annotations for column pairs outside the targets are ignored.
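
Below is a minimal Python sketch of writing a submission file that follows the format and notes above. The annotations list and the output filename are hypothetical placeholders; only the structure (no header row, quoted fields, at most one annotation per column pair, .csv extension stripped from table IDs) comes from the task description.

import csv

# Hypothetical output of an annotation system:
# (table_id, head_column_id, tail_column_id, property_uri) tuples.
annotations = [
    ("50245608_0_871275842592178099", 0, 1,
     "http://dbpedia.org/ontology/releaseDate"),
]

seen = set()
with open("cpa_submission.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # quote all fields, as in the example line
    for table_id, head, tail, prop in annotations:
        key = (table_id, head, tail)
        if key in seen:   # at most one annotation per column pair
            continue
        seen.add(key)
        if prop:          # skip empty annotations rather than submitting them
            writer.writerow([table_id, head, tail, prop])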

Datasets

Table set for Round #1: CPA_Round1.tar.gz.

Table set for Round #2: Tables, Target Column Pairs

Table set for Round #3: Tables, Target Column Pairs

Data Description: One table is stored in one CSV file. Each line corresponds to a table row. Note that the first row may be either the table header or content. The column pairs targeted for annotation are saved in a separate CSV file.
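
For orientation, here is a minimal Python sketch for loading one table file and the target column pairs. The exact layout of the targets file is not specified above, so the assumed "table ID, head column ID, tail column ID" triple per row, as well as the file paths, are assumptions to check against the downloaded data.

import csv

def load_targets(path):
    # Assumed layout: one "table ID, head column ID, tail column ID" triple per line.
    with open(path, newline="", encoding="utf-8") as f:
        return [(row[0], int(row[1]), int(row[2])) for row in csv.reader(f) if row]

def load_table(path):
    # The first row may be a header or ordinary content, so keep all rows as plain strings.
    with open(path, newline="", encoding="utf-8") as f:
        return [row for row in csv.reader(f)]

targets = load_targets("CPA_Round1_Targets.csv")   # hypothetical filename
table = load_table("50245608_0_871275842592178099.csv")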

Evaluation Criteria

Precision, Recall and F1 Score will be calculated:

Precision = (# correct annotations) / (# annotations)

Recall = (# correct annotations) / (# target column pairs)

F1 Score = (2 * Precision * Recall) / (Precision + Recall)

Notes:

1) # denotes the number.

2) F1 Score is used as the primary score; Precision is used as the secondary score.

3) An empty annotation of a column pair will still count as an annotation; we suggest excluding column pairs with empty annotations from the submission file.
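
As a minimal sketch of the scoring formulas above (assuming gt maps each target column pair to its correct property and pred maps submitted pairs to their annotated property; exact string matching up to case is an assumption, since the task only states that annotations are not case sensitive):

def score(gt, pred):
    # gt:   {(table_id, head, tail): correct_property_uri}
    # pred: {(table_id, head, tail): annotated_property_uri}
    correct = sum(
        1 for key, prop in pred.items()
        if key in gt and prop.lower() == gt[key].lower()
    )
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gt) if gt else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1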

Prizes

SIRIUS sponsors the prize for the best system; IBM Research sponsors the prize for the best student system.

Rules

  1. Selected systems with the best results in Round 1 and 2 will be invited to present their results during the ISWC conference and the Ontology Matching workshop.

  2. The prize winners will be announced during the ISWC conference (on October 30, 2019). We will take into account all evaluation rounds, especially the ones running until the conference dates.

  3. Participants are encouraged to submit a system paper describing their tool and the obtained results. Papers will be published online as a volume of CEUR-WS as well as indexed on DBLP. By submitting a paper, the authors accept the CEUR-WS and DBLP publishing rules.

  4. Please see the additional information on our official website.


Leaderboard

Rank  Participant   Score
01    phuc_nguyen   0.832
02    Gillesvdw     0.830
03    tabularisi    0.823
04    Vanezio       0.787
05    ADOG          0.750