CHOLAN: A Modular Approach for Neural Entity Linking on Wikipedia and Wikidata

ManojPrabhakar, updated 2022-01-21 13:48:49

CHOLAN - Q105079136

CHOLAN : A Modular Approach for Neural Entity Linking on Wikipedia and Wikidata (paper)

Wikidata

  • Dataset - We extracted an EL dataset from the (T-Rex dataset). Please refer to this (link) to download the dataset used in our experiments.

Wikipedia

  • Dataset - AIDA-CoNLL; we used the dataset from the DCA paper. Please refer to this (repository).

Candidate Generation

  • FALCON 2.0 - The locally indexed KG items have been used. Please refer to this (repository) for the setup using the Wikidata dump.
  • (DCA) - A predefined candidate set has been used. (Wikipedia)
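The predefined candidate-set approach amounts to a dictionary lookup from surface mentions to candidate entities. Below is a minimal sketch of that idea, not the repository's actual loader: the file name `candidates.tsv`, its two-column pipe-separated format, and both function names are hypothetical.

```python
import csv
from collections import defaultdict

def load_candidate_sets(path):
    """Load mention -> candidate-entity lists from a two-column TSV.

    Each row holds a surface mention and a pipe-separated candidate list,
    e.g. "Paris\tParis|Paris_Hilton" (hypothetical format).
    """
    candidates = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for mention, cands in csv.reader(f, delimiter="\t"):
            candidates[mention.lower()] = cands.split("|")
    return candidates

def get_candidates(candidates, mention, top_k=30):
    """Return up to top_k candidates for a mention (empty list if unseen)."""
    return candidates[mention.lower()][:top_k]
```

The lowercased key makes the lookup case-insensitive; a real candidate generator would also need alias handling and a ranking signal such as prior link probability.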

Setup

Requirements: Python 3.6 or 3.7, torch>=1.2.0

Running

python cholan.py  
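A typical setup could look like the following. This is a sketch assuming a standard virtual-environment workflow; the environment name is arbitrary, and the full dependency list beyond torch is not documented here.

```shell
# Create an isolated environment (Python 3.6 or 3.7, per the requirements above)
python3.7 -m venv cholan-env
source cholan-env/bin/activate

# Install the documented requirement
pip install "torch>=1.2.0"

# Run the pipeline from the relevant End2End directory
python cholan.py
```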

Citation

@inproceedings{kannan-ravi-etal-2021-cholan,
  title     = {CHOLAN: A Modular Approach for Neural Entity Linking on Wikipedia and Wikidata},
  author    = {Kannan Ravi, Manoj Prabhakar and Singh, Kuldeep and Mulang, Isaiah Onando and Shekarpour, Saeedeh and Hoffart, Johannes and Lehmann, Jens},
  booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume},
  year      = {2021}
}

Issues

Use CHOLAN for inference

opened on 2022-05-20 14:49:53 by vasari-kg

I find your project very interesting, but a strong drawback is that you don't provide enough documentation on how to use your model for inference. Do you plan to add this information in the future? Looking forward to your reply :)

Files missing

opened on 2022-04-29 07:38:13 by flackbash

First of all, thank you for making your code publicly available. We are working on an evaluation tool for entity linking systems and would love to include your system and reproduce your results. However, I did not succeed in running your code, and the provided instructions are a bit sparse.

More specifically, when calling python cholan.py as instructed here in the directory CHOLAN/Cholan_T-REx/End2End, I get the error message

Traceback (most recent call last):
  File "cholan.py", line 60, in <module>
    df_target = pd.read_csv(predict_data_dir + "ned_target_data.tsv", sep="\t", encoding='utf-8')
  ...
FileNotFoundError: [Errno 2] No such file or directory: '/data/prabhakar/CG/prediction_data/data_10000/ned_target_data.tsv'

When running python cholan.py in the directory Cholan_CoNLL_AIDA/End2End, I get the error message

Traceback (most recent call last):
  File "cholan.py", line 65, in <module>
    df_ned = pd.read_csv(predict_data_dir + "ned_data.tsv", sep='\t', encoding='utf-8', usecols=['sequence1', 'sequence2', 'label'])
  ...
FileNotFoundError: [Errno 2] No such file or directory: '/data/prabhakar/CG/WNED/msnbc/prediction_data/data_full/Zeroshot/ned_data.tsv'

Neither of these files is included in any of the linked data packages or the linked repositories.

Could you please provide the necessary data and provide some more instructions on how to use your code and reproduce your results?
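For anyone hitting the same errors before the missing files are published: the hard-coded absolute path from the tracebacks above can be made overridable with a small change. This is a hypothetical workaround sketch, not code from the repository; the variable name `predict_data_dir` comes from the tracebacks, while the `CHOLAN_DATA_DIR` environment variable is my own invention.

```python
import os

# Fall back to the hard-coded location only when no override is given.
# CHOLAN_DATA_DIR is a hypothetical environment variable, not part of the repo.
predict_data_dir = os.environ.get(
    "CHOLAN_DATA_DIR", "/data/prabhakar/CG/prediction_data/data_10000/"
)

# os.path.join avoids fragile string concatenation of path + filename.
target_path = os.path.join(predict_data_dir, "ned_target_data.tsv")
```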

How to get the pretrained_bert_ner model?

opened on 2022-04-15 11:30:15 by hezongfeng

I tried to run this code, but I don't know how to get the bert_ner_nocll pre-trained model. Could you give me a detailed description?

What are the train/test splits of T-Rex used in the paper?

opened on 2021-08-26 21:18:47 by laituan245

Hi. Thank you for the great work.

I was wondering if you could provide the train/test splits of T-Rex used in the paper? In your README file, there is a link to download the file CHOLAN-EL-TREX.tsv. But there is no indication of which line in the file belongs to the train set or the test set.

Furthermore, I counted the number of data lines in that file. There were 1,089,661 data lines (except the header line). However, your paper mentions that "the dataset has 983,257 sentences". So was the file the same data you used in your paper?
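The line count mentioned above can be reproduced with a short script. This is a sketch assuming the downloaded file is a plain TSV with a single header row; the function name is illustrative.

```python
def count_data_lines(path):
    """Count data lines in a TSV file, excluding the single header line."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f) - 1  # subtract the header line
```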

Thank you.

Could you please give more detailed instructions on how to run it or adapt it to a server?

opened on 2021-04-03 10:11:56 by AQA6666

I have read your paper; it's an amazing achievement. Now I'm trying to use it in my research experiments with high expectations. However, it is difficult for me to run it with the current brief instructions. I believe many other people would want this as well.

Add LICENSE

opened on 2021-01-28 21:18:19 by Daniel-Mietchen

to clarify reusability
