New Correspondence in Nature

27 Jun 2024

Helene Tenzer, Stefan Feuerriegel and Rebecca Piekkari argue that AI machine translation tools must be taught intercultural competence.

In their Correspondence to Nature, Helene Tenzer, Stefan Feuerriegel and Rebecca Piekkari discuss how machine translation was recently scaled to 200 of the world's roughly 7,000 languages, with the aim of preserving or even revitalizing minority languages and cultures. To achieve this, they argue, large language models (LLMs) must be trained in cultural competence. LLMs are predominantly trained on English-language datasets, which restricts their ability to understand how speakers from different cultures use language to create meaning. Moreover, LLMs have been developed with a focus on text rather than context: they excel at mapping words, sentences and grammatical structures, but fail to consider how cultural context shapes language use. To address this, Tenzer and colleagues propose training LLMs with context-specific annotations and incorporating cross-cultural feedback. This approach would enable machine translation to better serve and reflect the world's diverse linguistic and cultural landscapes.
Read the Correspondence here.