Publications

4 publications

Abstract

Coreference resolution is a key step in natural language understanding. Developments in coreference resolution have mainly focused on improving performance on standard datasets annotated for coreference resolution. However, coreference resolution is an intermediate step for text understanding, and it is not clear how these improvements translate into downstream task performance. In this paper, we perform a thorough investigation of the impact of coreference resolvers in multiple settings of the community-based question answering task, i.e., answer selection with long answers. Our settings cover multiple text domains and encompass several answer selection methods. We first perform an extrinsic evaluation of coreference resolvers on answer selection by using coreference relations to decontextualize individual sentences of candidate answers, and then annotate a subset of answers with coreference information for intrinsic evaluation. The results of our extrinsic evaluation show that while there is a significant difference between the performance of the rule-based system and the state-of-the-art neural model on coreference resolution datasets, we do not observe a considerable difference in their impact on downstream models. Our intrinsic evaluation shows that (i) resolving coreference relations in less-formal text genres is more difficult even for trained annotators, and (ii) the values of linguistically agnostic coreference evaluation metrics do not correlate with the impact on downstream data.

Authors: Haixia Chai, Nafise Sadat Moosavi, Iryna Gurevych, Michael Strube

Date Published: 16th Oct 2022

Publication Type: InProceedings

Abstract

In recent years, transformer-based coreference resolution systems have achieved remarkable improvements on the CoNLL dataset. However, how coreference resolvers can benefit from discourse coherence is still an open question. In this paper, we propose to incorporate centering transitions derived from centering theory, in the form of a graph, into a neural coreference model. Our method improves performance over the SOTA baselines, especially on pronoun resolution in long documents, formal well-structured text, and clusters with scattered mentions.

Authors: Haixia Chai, Michael Strube

Date Published: 10th Jul 2022

Publication Type: InProceedings

Abstract

Not specified

Authors: Wei Zhao, Kevin Mathews, Haixia Chai

Date Published: 5th May 2022

Publication Type: InProceedings

Abstract

Not specified

Authors: Haixia Chai, Wei Zhao, Steffen Eger, Michael Strube

Date Published: 2020

Publication Type: InProceedings

Copyright © 2008 - 2024 The University of Manchester and HITS gGmbH