Towards Explainable Evaluation Metrics for Machine Translation

Abstract:

Unlike classical lexical overlap metrics such as BLEU, most current evaluation metrics for machine translation (for example, COMET or BERTScore) are based on black-box large language models. They often achieve strong correlations with human judgments, but recent research indicates that the lower-quality classical metrics remain dominant, one potential reason being that their decision processes are more transparent. To foster more widespread acceptance of novel high-quality metrics, explainability thus becomes crucial. In this concept paper, we identify key properties and key goals of explainable machine translation metrics and provide a comprehensive synthesis of recent techniques, relating them to our established goals and properties. In this context, we also discuss the latest state-of-the-art approaches to explainable metrics based on generative models such as ChatGPT and GPT-4. Finally, we contribute a vision of next-generation approaches, including natural language explanations. We hope that our work can help catalyze and guide future research on explainable evaluation metrics and, indirectly, also contribute to better and more transparent machine translation systems.
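The abstract contrasts transparent lexical overlap metrics with black-box neural metrics. As an illustration of that contrast (not part of the publication itself), the following minimal Python sketch scores the same hypothesis-reference pair with both kinds of metric; it assumes the sacrebleu and bert-score packages as one common choice of implementations (pip install sacrebleu bert-score).

    # Illustrative sketch only: compare a lexical overlap metric (BLEU)
    # with a neural black-box metric (BERTScore) on one sentence pair.
    import sacrebleu
    from bert_score import score

    reference = ["The cat sat quietly on the mat."]
    hypothesis = ["A cat was sitting silently on the rug."]

    # BLEU counts n-gram overlap, so the score is directly traceable to
    # which words match the reference: its decision process is transparent.
    bleu = sacrebleu.sentence_bleu(hypothesis[0], reference)
    print(f"BLEU: {bleu.score:.1f}")

    # BERTScore compares contextual embeddings from a pretrained language
    # model; the resulting F1 is harder to attribute to individual tokens,
    # which is the transparency gap the paper addresses.
    P, R, F1 = score(hypothesis, reference, lang="en", verbose=False)
    print(f"BERTScore F1: {F1.item():.3f}")

On paraphrased pairs like this one, BLEU tends to be low despite adequate meaning preservation, while BERTScore is typically higher but offers no inspectable rationale, which is exactly the trade-off motivating explainable metrics.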

SEEK ID: https://publications.h-its.org/publications/1835

Research Groups: Natural Language Processing

Publication type: Journal

Journal: Journal of Machine Learning Research

Citation: Journal of Machine Learning Research, 25(75), pp. 1-49

Date Published: 1st Mar 2024

URL: https://jmlr.org/papers/volume25/22-0416/22-0416.pdf

Registered Mode: manually

Authors: Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger


Created: 24th Apr 2024 at 12:07

