TaCo🌮: Translation-Assisted Cross Linguality method for Efficient Multilingual LLMs

Overview

Motivated by parameter-efficient fine-tuning with LoRA and the Chain-of-Thought process (Wei et al., 2022), we propose a new method called TaCo. TaCo uses translation within a chain-of-thought process to create a multilingual model: the language model is taught to first translate the instruction into English, generate the required response in English, and then translate that response back into the low-resource language. For training, we employ a curriculum-learning strategy that starts from the fine-tuned Guanaco-33B model and then applies instruction tuning with the TaCo method.
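To make the response format concrete, here is a minimal sketch of how a TaCo-style chain-of-thought training example could be assembled from a translated instruction pair. The field labels and prompt wording below are illustrative assumptions, not the exact template used in the paper.

```python
# Minimal sketch (assumptions, not the released template) of a TaCo-style
# training example: the prompt carries the instruction in the target
# language, and the target response walks through translating to English,
# answering in English, and translating the answer back.

def build_taco_example(instruction_xx: str,
                       instruction_en: str,
                       response_en: str,
                       response_xx: str) -> dict:
    prompt = f"### Instruction:\n{instruction_xx}\n\n### Response:\n"
    target = (
        f"Instruction in English: {instruction_en}\n\n"
        f"Response in English: {response_en}\n\n"
        f"Response in the original language: {response_xx}"
    )
    return {"prompt": prompt, "target": target}

example = build_taco_example(
    instruction_xx="(instruction translated into Nepali)",
    instruction_en="Explain photosynthesis in one sentence.",
    response_en="Photosynthesis converts sunlight, water, and CO2 into glucose and oxygen.",
    response_xx="(English response translated back into Nepali)",
)
print(example["prompt"] + example["target"])
```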

Link to the paper: arXiv

Datasets: We translated the Alpaca-52K-GPT4 and Dolly-15K datasets into 132 languages using Google Cloud Translation. Due to compute limits, our experiments focus on three low-resource languages, Sanskrit, Nepali, and Maithili, and one high-resource language, Persian. A sketch of the translation step is shown after the dataset links below.

Multilingual Instruction-Tuning Dataset (MITS)

🌮 Taco-datasets
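As a rough illustration of the translation step described above, the following sketch uses the Google Cloud Translation (v2) Python client to translate one Alpaca-style record; the record fields and language code are assumptions for illustration.

```python
# Hedged sketch of translating an Alpaca-style record with the Google Cloud
# Translation (v2) Python client. Requires GOOGLE_APPLICATION_CREDENTIALS to
# be set; the field names and target code are illustrative.
from google.cloud import translate_v2 as translate

client = translate.Client()

def translate_record(record: dict, target: str) -> dict:
    """Translate the instruction/input/output fields of one record from
    English into the target language (e.g. 'ne' for Nepali)."""
    out = {}
    for key in ("instruction", "input", "output"):
        text = record.get(key, "")
        if text:
            result = client.translate(text, target_language=target, source_language="en")
            out[key] = result["translatedText"]
        else:
            out[key] = ""
    return out

sample = {"instruction": "Give three tips for staying healthy.", "input": "", "output": "..."}
print(translate_record(sample, target="ne"))
```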

As machine translations are prone to translationese, we evaluated the translation quality. We sampled 1,000 sentences from each language and computed BLEU (via sacreBLEU), chrF, and the Translation Error Rate (TER) through a round-trip process between English and the target language (en-xx-en).
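The round-trip scoring can be reproduced along these lines with the sacrebleu package, assuming the original English sentences and their back-translations are available as parallel lists; the toy inputs below are placeholders.

```python
# Sketch of the round-trip quality check (en -> xx -> en): score the
# back-translated English against the original English with sacreBLEU.
import sacrebleu

def round_trip_scores(originals: list[str], back_translations: list[str]) -> dict:
    """Compute BLEU, chrF, and TER for back-translations vs. originals."""
    refs = [originals]  # sacrebleu expects a list of reference streams
    bleu = sacrebleu.corpus_bleu(back_translations, refs)
    chrf = sacrebleu.corpus_chrf(back_translations, refs)
    ter = sacrebleu.corpus_ter(back_translations, refs)
    return {"BLEU": bleu.score, "chrF": chrf.score, "TER": ter.score}

# Toy example with a single pair; the paper samples 1,000 sentences per language.
print(round_trip_scores(
    ["The cat sat on the mat."],
    ["The cat was sitting on the mat."],
))
```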

Models and Code:

Nepali 33B Model: https://huggingface.co/saillab/Nepali_33B/

Persian 33B Model: https://huggingface.co/saillab/g33b_persian

Code: TaCo

Publications: N/A

Model Responses: You can find the responses generated by the trained models here

Fig: Instruction and the response generated using the TaCo method.

Model Weights: We have released all of our model adapters on Hugging Face.
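If a released repository ships LoRA adapter weights (check the individual model cards), they could be loaded on top of the Guanaco-33B base roughly as follows; the adapter repo used here and the prompt format are assumptions.

```python
# Hedged example of loading a TaCo adapter onto the Guanaco-33B base with
# the PEFT library; verify on the model card whether a repo contains merged
# weights or a LoRA adapter before using it this way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "timdettmers/guanaco-33b-merged"
ADAPTER = "saillab/Nepali_33B"  # replace with the adapter repo you want

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER)

# Prompt format is an assumption; consult the released examples for the
# exact template used during training.
prompt = "### Instruction:\n(instruction in Nepali)\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```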

Results: The performance of different categories across languages is shown in the table below.

Citation: Please use the citation below to cite our work.

@article{upadhayay2023taco,
  title={TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes},
  author={Upadhayay, Bibek and Behzadan, Vahid},
  journal={arXiv preprint arXiv:2311.10797},
  year={2023}
}

License and Intended Use: The TaCo adapter weights are trained on top of the Guanaco-33B model (timdettmers/guanaco-33b-merged), which is itself based on the LLaMA model. We used the Alpaca-52K and Dolly-15K datasets and translated them with Google Cloud Translation. Please review the licenses of Guanaco-33B and LLaMA, as well as the terms of use for Google Cloud Translation, before using these models.