
ISWC 2019 Cell-Entity Annotation (CEA) Challenge

Prize money: 2000
Travel grants: 0
Misc prizes: SIRIUS and IBM Research sponsor the prizes for the best systems and the best student systems, respectively.

NEWS: Deadlines updated. Please join our discussion group.

This is a task of the ISWC 2019 “Semantic Web Challenge on Tabular Data to Knowledge Graph Matching”. The task is to annotate each of the specified cells (entity mentions) of a given table set with a DBpedia entity. Click here for the official challenge website.

Task Description

The task is to annotate a cell with a DBpedia entity, i.e., an entity whose IRI starts with the prefix http://dbpedia.org/resource/.

Each submission should contain annotations for the target cells. A cell can be annotated with at most one entity. Any entity connected to the ground-truth entity by a wiki page redirect (defined by dbo:wikiPageRedirects) is also regarded as correct. Matching is case-insensitive.
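For reference, the redirect entities accepted for a given ground-truth entity can be inspected through DBpedia's public SPARQL endpoint. The following Python sketch is our own illustration (the endpoint URL and query shape are assumptions, not part of the challenge tooling):

```python
# Sketch: list DBpedia resources that redirect to a given entity, i.e. the
# alternative IRIs the evaluation also accepts as correct.
# Assumes the public endpoint http://dbpedia.org/sparql is reachable.
import requests

ENDPOINT = "http://dbpedia.org/sparql"

def redirects_to(entity_iri: str) -> list[str]:
    query = f"""
    SELECT ?r WHERE {{
      ?r <http://dbpedia.org/ontology/wikiPageRedirects> <{entity_iri}> .
    }}"""
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return [b["r"]["value"] for b in resp.json()["results"]["bindings"]]

if __name__ == "__main__":
    for iri in redirects_to("http://dbpedia.org/resource/Norway"):
        print(iri)
```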

The submission file should be in CSV format. Each line should contain the annotation of one cell, identified by a table ID, a column ID and a row ID. That is, each line has four fields: "Table ID", "Column ID", "Row ID" and "DBpedia entity". Headers should be excluded from the submission file. Here is an example: "9206866_1_8114610355671172497","0","121","http://dbpedia.org/resource/Norway"

Notes:

1) Table ID does not include filename extension; make sure you remove the .csv extension from the filename.

2) Column ID is the position of the column in the table file, starting from 0, i.e., first column’s ID is 0.

3) Row ID is the position of the row in the table file, starting from 0, i.e., first row’s ID is 0.

4) At most one entity should be annotated for one cell.

5) A submission file must contain NO duplicate lines for the same cell.

6) Annotations for cells outside the target cells are ignored.
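Putting the format and the notes above together, a submission file can be produced with Python's standard csv module. The sketch below uses a placeholder annotations list and output file name; how the entity for each cell is chosen is up to the annotation system:

```python
# Sketch: write a CEA submission file in the required four-field CSV format.
import csv

annotations = [
    # (table_id, column_id, row_id, DBpedia entity IRI) -- placeholder data
    ("9206866_1_8114610355671172497", 0, 121, "http://dbpedia.org/resource/Norway"),
]

seen = set()
with open("submission.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # quote all fields, write no header
    for table_id, col_id, row_id, entity in annotations:
        key = (table_id, col_id, row_id)
        if key in seen or not entity:   # notes 4-5: one entity per cell, no duplicates;
            continue                    # empty annotations are better left out entirely
        seen.add(key)
        # note 1: the table ID must not carry the .csv extension
        writer.writerow([table_id.removesuffix(".csv"), col_id, row_id, entity])
```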

Datasets

Table set for Round #1: Tables, Target Cells

Table set for Round #2: Tables, Target Cells

Table set for Round #3: Tables, Target Cells

Data Description: Each table is stored in one CSV file, with one line per table row. Note that the first row may be either the table header or content. The target cells for annotation are listed in a separate CSV file.
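A minimal way to pair the two inputs is sketched below, assuming the tables sit in a tables/ directory and the target-cell file has three columns (table ID, column ID, row ID); the directory and file names are assumptions and should be checked against the downloaded round data.

```python
# Sketch: load the tables and look up the cell text for each target cell.
import csv
from pathlib import Path

def load_table(path: Path) -> list[list[str]]:
    with path.open(newline="", encoding="utf-8") as f:
        return list(csv.reader(f))

def iter_target_mentions(tables_dir: str, targets_csv: str):
    tables: dict[str, list[list[str]]] = {}
    with open(targets_csv, newline="", encoding="utf-8") as f:
        for table_id, col_id, row_id in csv.reader(f):
            if table_id not in tables:
                tables[table_id] = load_table(Path(tables_dir) / f"{table_id}.csv")
            # column and row IDs are 0-based positions in the table file
            cell = tables[table_id][int(row_id)][int(col_id)]
            yield table_id, int(col_id), int(row_id), cell

if __name__ == "__main__":
    for table_id, col, row, mention in iter_target_mentions("tables", "targets.csv"):
        print(table_id, col, row, mention)
```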

Evaluation Criteria

Precision, Recall and F1 Score are calculated:

Precision = (# correctly annotated cells) / (# annotated cells)

Recall = (# correctly annotated cells) / (# target cells)

F1 Score = (2 * Precision * Recall) / (Precision + Recall)

Notes:

1) # denotes “the number of”.

2) F1 Score is used as the primary score; Precision is used as the secondary score.

3) A line with an empty annotation still counts as an annotated cell (and thus lowers Precision); we suggest excluding cells with empty annotations from the submission file.
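For local testing against a private ground truth, the three scores can be computed directly from the definitions above. The helper below is our own sketch, not the official evaluator; it keys both submission and ground truth by (table ID, column ID, row ID) and compares annotations case-insensitively:

```python
# Sketch: compute Precision, Recall and F1 Score as defined above.
# `submitted` maps (table_id, col_id, row_id) -> entity IRI;
# `ground_truth` maps the same keys to a set of acceptable IRIs (to cover redirects).
def cea_scores(submitted: dict, ground_truth: dict) -> tuple[float, float, float]:
    # Every submitted line counts as an annotated cell, even if the entity string
    # is empty (note 3), which is why empty annotations are best left out upstream.
    correct = sum(
        1
        for key, iri in submitted.items()
        if iri
        and key in ground_truth
        and iri.lower() in {g.lower() for g in ground_truth[key]}
    )
    precision = correct / len(submitted) if submitted else 0.0
    recall = correct / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall) / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```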

Prizes

SIRIUS and IBM Research sponsor the prizes for the best systems.

Rules

  1. Selected systems with the best results in Rounds 1 and 2 will be invited to present their results during the ISWC conference and the Ontology Matching workshop.

  2. The prize winners will be announced during the ISWC conference (on October 30, 2019). We will take into account all evaluation rounds, especially those running until the conference dates.

  3. Participants are encouraged to submit a system paper describing their tool and the obtained results. Papers will be published online as a volume of CEUR-WS as well as indexed on DBLP. By submitting a paper, the authors accept the CEUR-WS and DBLP publishing rules.

  4. Please see additional information at our official website.


Leaderboard

Rank  Participant  Score
01    NII          0.983
02    Vanezio      0.973
03    Gillesvdw    0.907
04    ADOG         0.835
05    saggu        0.804