Publications

81 Publications

Abstract

In this paper, we provide an overview of the CODI-CRAC 2021 Shared Task: Anaphora Resolution in Dialogue. The shared task focuses on detecting anaphoric relations in different genres of conversation. Using five conversational datasets, four of which were newly annotated with a wide range of anaphoric relations (identity, bridging references, and discourse deixis), we defined multiple subtasks, each focusing on one of these key relations. We discuss the evaluation scripts used to assess system performance on these subtasks and provide a brief summary of the participating systems and the results obtained across ?? runs from 5 teams, with most submissions achieving significantly better results than our baseline methods.
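
The abstract mentions evaluation scripts for the anaphora subtasks. As a hedged illustration of the kind of metric such scripts compute, the sketch below implements MUC, one standard link-based coreference measure. It is not the shared task's official scorer, and the cluster representation (sets of mention IDs) is an assumption for the example.

```python
def muc_score(key_clusters, response_clusters):
    """MUC link-based coreference score over gold (key) and predicted
    (response) clusters, each given as a list of sets of mention IDs."""

    def num_pieces(cluster, other_clusters):
        # Count how many pieces `cluster` splits into under the other
        # partition; mentions missing from the other side form singletons.
        pieces = set()
        unaligned = 0
        for mention in cluster:
            for i, other in enumerate(other_clusters):
                if mention in other:
                    pieces.add(i)
                    break
            else:
                unaligned += 1
        return len(pieces) + unaligned

    def link_score(keys, responses):
        # Fraction of coreference links in `keys` preserved by `responses`.
        numerator = sum(len(k) - num_pieces(k, responses) for k in keys)
        denominator = sum(len(k) - 1 for k in keys)
        return numerator / denominator if denominator else 0.0

    recall = link_score(key_clusters, response_clusters)
    precision = link_score(response_clusters, key_clusters)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return recall, precision, f1


# Hypothetical example: one gold link is broken in the prediction.
gold = [{"m1", "m2", "m3"}, {"m4", "m5"}]
pred = [{"m1", "m2"}, {"m3", "m4", "m5"}]
print(muc_score(gold, pred))  # recall, precision, F1
```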

Authors: Sopan Khosla, Juntao Yu, Ramesh Manuvinakurike, Vincent Ng, Massimo Poesio, Michael Strube, Carolyn Rosé

Date Published: 10th Nov 2021

Publication Type: InProceedings

Abstract

Not specified

Authors: Chloé Braud, Christian Hardmeier, Junyi Jessy Li, Annie Louis, Michael Strube, Amir Zeldes

Date Published: 10th Nov 2021

Publication Type: Proceedings

Abstract

Not specified

Authors: Mark-Christoph Müller, Sucheta Ghosh, Ulrike Wittig, Maja Rey

Date Published: 11th Jun 2021

Publication Type: InProceedings

Abstract

Not specified

Author: Michael Strube

Date Published: 2021

Publication Type: InBook

Abstract

Not specified

Authors: Federico López, Beatrice Pozzetti, Steve Trettel, Michael Strube, Anna Wienhard

Date Published: 2021

Publication Type: InProceedings

Abstract

Pretrained language models, neural models pretrained on massive amounts of data, have established the state of the art in a range of NLP tasks. They are based on a modern machine-learning technique, the Transformer, which relates all items in a sequence simultaneously to capture their semantic relations. However, this differs from what humans do: humans read sentences one by one, incrementally. Can neural models benefit from interpreting texts incrementally, as humans do? We investigate this question in coherence modeling. We propose a coherence model which interprets sentences incrementally to capture lexical relations between them. In two downstream tasks, we compare our model against the state of the art for each task and against simple neural models relying on a pretrained language model. Our findings suggest that interpreting texts incrementally, as humans do, could be useful for designing more advanced models.
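
The idea of incremental interpretation can be loosely illustrated in code. The sketch below is not the authors' model: it simply encodes sentences one at a time with a pretrained sentence encoder and scores each new sentence against a running summary of the text read so far. The model name ("all-MiniLM-L6-v2") and the running-mean scheme are assumptions made for the example.

```python
# Minimal sketch of incremental coherence scoring (not the paper's model).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def incremental_coherence(sentences):
    """Read sentences one by one; score each against the running mean
    embedding of all preceding sentences, then return the mean score."""
    scores = []
    history = None  # running mean embedding of sentences read so far
    for i, sentence in enumerate(sentences):
        emb = model.encode(sentence)
        emb = emb / np.linalg.norm(emb)
        if history is not None:
            context = history / np.linalg.norm(history)
            scores.append(float(np.dot(emb, context)))
        history = emb if history is None else (history * i + emb) / (i + 1)
    return float(np.mean(scores)) if scores else 0.0

print(incremental_coherence([
    "The committee met on Monday.",
    "It approved the new budget.",
    "Funding will start next quarter.",
]))
```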

Authors: Sungho Jeon, Michael Strube

Date Published: 1st Dec 2020

Publication Type: InProceedings

Abstract

Not specified

Authors: Chloé Braud, Christian Hardmeier, Junyi Jessy Li, Annie Louis, Michael Strube

Date Published: 20th Nov 2020

Publication Type: Proceedings
