MELT Workshop @ COLM 2025

Welcome to The 1st Workshop on Multilingual and Equitable Language Technologies (MELT) 👋

📢 Announcements


Recent advances in large language models (LLMs) have led to a paradigm shift in the economy, society, and daily life, demonstrating remarkable capabilities on complex tasks such as reasoning and multimodal understanding. However, these innovations remain largely centered on English and are unevenly distributed across languages, cultures, and values.

That is, language models lack linguistic and cultural inclusion, mainly due to an imbalance in the language sources used for training. While there are more than 7,000 languages in the world, most NLP research focuses on a handful of high-resource languages (Blasi et al., 2022; Qin et al., 2024; Zhu et al., 2024). For instance, non-English tokens account for only 7% and 17% of the pre-training data of GPT-3 (Brown et al., 2020) and BLOOM (Scao et al., 2022), respectively. Further, because multilingual benchmarks are often built by translating English-centric datasets (Jin et al., 2024) that predominantly reflect Western cultures (Ahuja et al., 2023), these models naturally uphold social values close to North American and European cultures while neglecting a wide array of languages and cultures (Held et al., 2023; AlKhamissi et al., 2024; Tao et al., 2024; Xu et al., 2024; Masoud et al., 2025).

As LLMs are increasingly deployed in diverse, region- and culture-sensitive domains worldwide, the limitations of this linguistic and cultural imbalance become more pronounced, and their implications more consequential (Yang et al., 2025; Qadri et al., 2025a; Qadri et al., 2025b). These models often exhibit significant biases, reduced performance, and a lack of cultural sensitivity when applied to underrepresented languages and diverse sociocultural settings (Foroutan et al., 2023; Liu et al., 2024; Kwok et al., 2024; Romanou et al., 2025).

Addressing these limitations is essential to ensure that LLMs provide equitable access to information, resources, and services to all populations, regardless of language or cultural background. Our workshop aims to advance diversity and inclusion in language models. We will disseminate state-of-the-art research on building multilingual and multicultural LLMs, foster international cross-disciplinary collaborations, and explore the fundamental theoretical and methodological challenges in this field.


Venue

The MELT workshop will be co-located with COLM 2025, in Montreal, Canada. The exact room information will be announced closer to the event date.

  • When: October 10, 2025
  • Where: Palais des Congrès, Montreal, Canada

Important Dates

Please refer to our Call for Papers for more details on paper submissions. All deadlines are 11:59 PM Anywhere on Earth (AoE).

  • Workshop Track Submission Deadline: June 23, 2025
  • Conference Track Submission Deadline: July 14, 2025
  • Notification of Acceptance: July 21, 2025
  • Workshop Date: October 10, 2025

Keynotes

We will have 5 keynotes, 1 panel discussion, and a poster session. Click here for more details on keynotes!

Shamsuddeen Hassan Muhammad

Imperial College London

Advanced Research Fellow and Google DeepMind Academic Fellow

Julia Kreutzer

Cohere Labs

Senior Research Scientist

Pedro Ortiz Suarez

Common Crawl Foundation

Senior Research Scientist

Monojit Choudhury

MBZUAI

Professor

Shixiang Shane Gu

Google DeepMind

Research Scientist


Organizers

Amir Hossein Kargaran

LMU Munich & MCML

Haneul Yoo

KAIST

Hwaran Lee

Sogang University

Marzieh Fadaee

Cohere Labs

Nedjma Djouhra Ousidhoum

Cardiff University

Rifki Afina Putri

Universitas Gadjah Mada

Sachin Kumar

Ohio State University

Seong Joon Oh

University of Tübingen


Advisory Boards

Alice Oh

KAIST

David Ifeoluwa Adelani

MILA & McGill University

Jung-woo Ha

NAVER AI Lab

Scott A. Hale

University of Oxford & Meedan