In recent years, transformer-based coreference resolution systems have achieved remarkable improvements on the CoNLL dataset. However, how coreference resolvers can benefit from discourse coherence is still an open question. In this paper, we propose to incorporate centering transitions derived from centering theory, in the form of a graph, into a neural coreference model. Our method improves performance over state-of-the-art baselines, especially on pronoun resolution in long documents, formal well-structured text, and clusters with scattered mentions.
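For context, the centering transitions the abstract refers to are the standard four transition types from centering theory (Continue, Retain, Smooth-Shift, Rough-Shift), determined by comparing the backward-looking center (Cb) and preferred center (Cp) of adjacent utterances. The following is a minimal illustrative sketch of that classification rule, not the paper's implementation; the function name and string labels are hypothetical.

```python
def centering_transition(cb_prev, cb_curr, cp_curr):
    """Classify the centering-theory transition between two adjacent
    utterances, given:
      cb_prev -- backward-looking center of the previous utterance
      cb_curr -- backward-looking center of the current utterance
      cp_curr -- preferred (most salient forward-looking) center of
                 the current utterance
    This is an illustrative sketch of the standard definitions, not
    the model described in the paper.
    """
    same_cb = cb_curr == cb_prev      # discourse topic is maintained
    cb_is_cp = cb_curr == cp_curr     # topic is also the most salient entity
    if same_cb:
        return "CONTINUE" if cb_is_cp else "RETAIN"
    return "SMOOTH-SHIFT" if cb_is_cp else "ROUGH-SHIFT"


# Example: "John went home. He fed his cat." -- the topic (John) is
# maintained and remains most salient, so the transition is CONTINUE.
print(centering_transition("John", "John", "John"))  # → CONTINUE
```

In a graph-based encoding such as the one the paper proposes, each utterance can become a node and edges between consecutive utterances can be labeled with these transition types.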
SEEK ID: https://publications.h-its.org/publications/1496
DOI: 10.18653/v1/2022.naacl-main.218
Research Groups: Natural Language Processing
Publication type: InProceedings
Journal: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Book Title: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Publisher: Association for Computational Linguistics
Citation: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Seattle, Washington, July 2022
Date Published: 10th Jul 2022
URL: https://aclanthology.org/2022.naacl-main.218.pdf
Registered Mode: manually
Created: 19th Jul 2022 at 14:33
Last updated: 5th Mar 2024 at 21:24