We had a total of 17 systems participating in Round 1. The top teams across the three tasks in Round 1 (Team_sti, MTab, Team_DAGOBAH and Tabularisi) will present their systems at the ISWC conference on October 30. We will also have a session devoted to the challenge at the Ontology Matching workshop on October 26.
Round 2 saw a reduction in participating systems (from 17 to 11), which helped us identify the core systems and groups actively working on tabular data to KG matching. There were also three new participants, including a pure ontology alignment system (LogMap). Rounds 3 and 4 retained a core of 7 participants across rounds and tasks.
MTab and IDLab were the clear dominant systems in all three tasks. Tabularisi held a clear overall 3rd position in CTA and CPA, while the overall 3rd position in CEA was shared between Tabularisi and ADOG. Team_sti deserves a special mention for its outstanding performance in Round 4 of the CEA task.
Results and evaluation details can also be accessed from AIcrowd.
| | Round 1 | Round 2 | Round 3 | Round 4 |
| --- | --- | --- | --- | --- |
| Participants | 17 | 11 | 9 | 8 |
| CEA | 11 | 10 | 8 | 8 |
| CTA | 13 | 9 | 8 | 7 |
| CPA | 5 | 7 | 7 | 7 |
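Across all rounds, the CEA and CPA tasks are scored with micro-averaged Precision and F1-Score (F1 being the primary score), as reported in the tables below. As a reminder, assuming the standard definitions from the evaluation criteria published on AIcrowd:

$$
\mathrm{Precision} = \frac{\#\text{correct annotations}}{\#\text{submitted annotations}}, \qquad
\mathrm{Recall} = \frac{\#\text{correct annotations}}{\#\text{target annotations}}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
$$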
In Round 4, 8 systems produced results in the CEA task.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 0.983 | 0.983 |
| Team_sti | 0.973 | 0.983 |
| IDLab | 0.907 | 0.912 |
| ADOG | 0.835 | 0.838 |
| saggu | 0.804 | 0.814 |
| Tabularisi | 0.803 | 0.813 |
| LOD4ALL | 0.648 | 0.654 |
| Team_DAGOBAH | 0.578 | 0.599 |
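As an illustration of how such CEA scores are computed, below is a minimal scorer sketch in Python. The four-column CSV layout (table id, column id, row id, entity IRI), the absence of a header row, and the case-insensitive IRI comparison are assumptions made for the example; the authoritative evaluator is the one running on AIcrowd.

```python
import csv

def load_annotations(path):
    """Map (table_id, column_id, row_id) -> annotated entity IRI.

    Assumes a header-less, four-column CSV layout
    (table id, column id, row id, entity IRI); this mirrors the
    CEA submission files but is an assumption of this sketch.
    """
    annotations = {}
    with open(path, newline="", encoding="utf-8") as f:
        for table_id, col_id, row_id, iri in csv.reader(f):
            # Case-insensitive IRI comparison is an assumption of this sketch.
            annotations[(table_id, col_id, row_id)] = iri.strip().lower()
    return annotations

def cea_scores(submission_path, ground_truth_path):
    """Micro-averaged Precision, Recall and F1 for a CEA submission."""
    gt = load_annotations(ground_truth_path)
    sub = load_annotations(submission_path)
    correct = sum(1 for cell, iri in sub.items() if gt.get(cell) == iri)
    precision = correct / len(sub) if sub else 0.0
    recall = correct / len(gt) if gt else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```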
In Round 4, 7 systems produced results in the CTA task.
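In Rounds 2 to 4, CTA is scored with the AH-Score and AP-Score instead of F1. A column annotation counts as perfect when it is the most fine-grained correct class, okay when it is, roughly, an ancestor or descendant of that class, and wrong otherwise; since systems may submit several annotations per column, the AH-Score can exceed 1 (as MTab's 2.012 below). Assuming the weighting from the published evaluation criteria (perfect = 1, okay = 0.5, wrong = -1):

$$
\mathrm{AH} = \frac{1 \cdot \#\text{perfect} + 0.5 \cdot \#\text{okay} - 1 \cdot \#\text{wrong}}{\#\text{target columns}}
$$

The AP-Score is a precision-style counterpart that relates the same weighted counts to the number of submitted annotations, which is why it stays below 1 even for the top systems; the authoritative definition is in the evaluation criteria on AIcrowd.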
| Team | AH-Score | AP-Score |
| --- | --- | --- |
| MTab | 2.012 | 0.300 |
| IDLab | 1.846 | 0.274 |
| Tabularisi | 1.716 | 0.325 |
| Team_sti | 1.682 | 0.322 |
| ADOG | 1.538 | 0.296 |
| LOD4ALL | 1.071 | 0.386 |
| Team_DAGOBAH | 0.684 | 0.206 |
In Round 4, 7 systems produced results in the CPA task.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 0.832 | 0.832 |
| IDLab | 0.830 | 0.835 |
| Tabularisi | 0.823 | 0.825 |
| Team_sti | 0.787 | 0.841 |
| ADOG | 0.750 | 0.767 |
| LOD4ALL | 0.439 | 0.904 |
| Team_DAGOBAH | 0.398 | 0.874 |
In Round 3, 8 systems produced results in the CEA task.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 0.970 | 0.970 |
| IDLab | 0.962 | 0.964 |
| ADOG | 0.912 | 0.913 |
| Tabularisi | 0.857 | 0.866 |
| saggu | 0.830 | 0.832 |
| LOD4ALL | 0.828 | 0.833 |
| Team_DAGOBAH | 0.725 | 0.745 |
| Team_sti | 0.633 | 0.679 |
In Round 3, 8 systems produced results in the CTA task.
| Team | AH-Score | AP-Score |
| --- | --- | --- |
| MTab | 1.956 | 0.261 |
| IDLab | 1.864 | 0.247 |
| Tabularisi | 1.702 | 0.277 |
| Team_sti | 1.648 | 0.269 |
| LOD4ALL | 1.442 | 0.260 |
| ADOG | 1.409 | 0.238 |
| Team_DAGOBAH | 0.745 | 0.161 |
| MangoPedia | 0.723 | 0.256 |
In Round 3, 7 systems produced results in the CPA task.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 0.844 | 0.845 |
| IDLab | 0.841 | 0.843 |
| Tabularisi | 0.827 | 0.830 |
| ADOG | 0.558 | 0.763 |
| LOD4ALL | 0.545 | 0.853 |
| Team_DAGOBAH | 0.519 | 0.826 |
| Team_sti | 0.518 | 0.595 |
In Round 2, 10 systems produced results in the CEA task.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 0.911 | 0.911 |
| IDLab | 0.883 | 0.893 |
| Tabularisi 2 | 0.826 | 0.852 |
| Tabularisi | 0.808 | 0.856 |
| saggu | 0.806 | 0.858 |
| LOD4ALL | 0.757 | 0.767 |
| ADOG | 0.742 | 0.745 |
| Team_DAGOBAH | 0.713 | 0.816 |
| Team_sti | 0.614 | 0.673 |
| LogMap | 0.432 | 0.806 |
In Round 2, 9 systems produced results in the CTA task.
| Team | AH-Score | AP-Score |
| --- | --- | --- |
| MTab | 1.414 | 0.276 |
| IDLab | 1.376 | 0.257 |
| Tabularisi | 1.099 | 0.261 |
| Team_sti | 1.049 | 0.247 |
| LOD4ALL | 0.893 | 0.234 |
| ADOG | 0.713 | 0.208 |
| Team_DAGOBAH | 0.641 | 0.247 |
| mgol19 | 0.438 | 0.220 |
| Tabularisi 2 | 0.228 | 0.379 |
In Round 2, 7 systems produced results in the CPA task.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 0.881 | 0.929 |
| IDLab | 0.877 | 0.926 |
| Tabularisi | 0.790 | 0.792 |
| LOD4ALL | 0.555 | 0.941 |
| Team_DAGOBAH | 0.533 | 0.919 |
| Team_sti | 0.460 | 0.544 |
| ADOG | 0.459 | 0.708 |
In Round 1, 11 systems produced results in the CEA task. The results are very positive, with 7 of the systems achieving an F1-score greater than 0.80.
| Team | F1-Score | Precision |
| --- | --- | --- |
| Team_sti | 1.0 | 1.0 |
| MTab | 1.0 | 1.0 |
| Team_DAGOBAH | 0.897 | 0.941 |
| Tabularisi | 0.884 | 0.908 |
| bbk | 0.854 | 0.845 |
| LOD4ALL | 0.852 | 0.874 |
| Tabularisi 2 | 0.816 | 0.851 |
| PAT-SEU/zhangyi | 0.794 | 0.804 |
| VectorUP | 0.732 | 0.733 |
| ADOG | 0.657 | 0.673 |
| IDLab | 0.448 | 0.627 |
In Round 1, 13 systems produced results in the CTA task. As in the CEA task, 7 systems achieved an F1-score greater than 0.80.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 1.0 | 1.0 |
| VectorUP | 1.0 | 1.0 |
| Team_sti | 0.929 | 0.933 |
| LOD4ALL | 0.850 | 0.850 |
| IDLab | 0.833 | 0.833 |
| ADOG | 0.829 | 0.851 |
| Tabularisi | 0.825 | 0.825 |
| Siemens Munich | 0.754 | 0.70 |
| Team_DAGOBAH | 0.644 | 0.580 |
| f-ym | 0.527 | 0.358 |
| ahmad88me | 0.485 | 0.581 |
| It's all semantics | 0.192 | 0.192 |
| kzafeiroudi | 0.033 | 1.0 |
The CPA task in Round 1 had lower participation, with only 5 systems producing results. This is somewhat expected, as this task represents a slightly different challenge. The teams MTab and Team_sti produced very promising results.
| Team | F1-Score | Precision |
| --- | --- | --- |
| MTab | 0.987 | 0.975 |
| Team_sti | 0.965 | 0.991 |
| Tabularisi | 0.606 | 0.638 |
| Team_DAGOBAH | 0.415 | 0.347 |
| Tuanbiu | 0.355 | 1.0 |
The challenge is currently supported by the AIDA project, the SIRIUS Centre for Research-driven Innovation, and IBM Research.