Publications

70 publications in total

Abstract

In recent years, transformer-based coreference resolution systems have achieved remarkable improvements on the CoNLL dataset. However, how coreference resolvers can benefit from discourse coherence is still an open question. In this paper, we propose to incorporate centering transitions, derived from centering theory, into a neural coreference model in the form of a graph. Our method improves performance over SOTA baselines, especially on pronoun resolution in long documents, formal well-structured text, and clusters with scattered mentions.

Authors: Haixia Chai, Michael Strube

Date Published: 10th Jul 2022

Publication Type: InProceedings

Abstract

In this paper, we propose an entity-based neural local coherence model which is linguistically more sound than previously proposed neural coherence models. Recent neural coherence models encode the input document using large-scale pretrained language models, so their basis for computing local coherence is words and even sub-words. The analysis of their output shows that these models frequently compute coherence on the basis of connections between (sub-)words which, from a linguistic perspective, should not play a role. Still, these models achieve state-of-the-art performance in several end applications. In contrast, we compute coherence on the basis of entities by constraining the input to noun phrases and proper names. This provides an explicit representation of the most important items in sentences, leading to the notion of focus, and brings our model linguistically in line with pre-neural models of coherence. It also gives us better insight into the model's behaviour and thus better explainability. Our approach is also in accord with a recent study (O'Connor and Andreas, 2021), which shows that in transformer-based language models most usable information is captured by nouns and verbs. We evaluate our model on three downstream tasks, showing that it is not only linguistically more sound than previous models but that it also outperforms them in end applications.

Authors: Sungho Jeon, Michael Strube

Date Published: 22nd May 2022

Publication Type: InProceedings

Abstract

Not specified

Authors: Wei Zhao, Kevin Mathews, Haixia Chai

Date Published: 5th May 2022

Publication Type: InProceedings

Abstract

Not specified

Author: Mark-Christoph Müller

Date Published: 5th May 2022

Publication Type: InProceedings

Abstract

Not specified

Authors: Federico López, Beatrice Pozzetti, Steve Trettel, Michael Strube, Anna Wienhard

Date Published: 6th Dec 2021

Publication Type: InProceedings

Abstract

Not specified

Authors: Mehwish Fatima, Michael Strube

Date Published: 10th Nov 2021

Publication Type: InProceedings

Abstract

Not specified

Authors: Sungho Jeon, Michael Strube

Date Published: 10th Nov 2021

Publication Type: InProceedings

Copyright © 2008 - 2023 The University of Manchester and HITS gGmbH