TaCo🌮: Translation-Assisted Cross Linguality method for Efficient Multilingual LLMs

Motivated by parameter-efficient fine-tuning with LoRA and by the Chain-of-Thought prompting process (Wei et al., 2022), we propose a new method called TaCo. TaCo uses translation within the chain of thought to build a multilingual model: the language model is taught to first translate the instruction into English, generate the required response in English, and then translate that response back into the low-resource language. For training, we employ a curriculum-learning strategy: we start from the already fine-tuned Guanaco-33B model and then apply further instruction tuning with the TaCo method.
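
To make the translate-then-answer-then-translate-back chain concrete, here is a minimal sketch of how a TaCo-style training example could be packed. The template wording and field names are our own illustrative assumptions, not the exact format used in the paper.

```python
# Sketch of a translation-assisted chain-of-thought (TaCo-style) training example.
# The prompt template and field names are illustrative assumptions, not the
# paper's verbatim format.

def build_taco_example(instruction_xx: str,
                       instruction_en: str,
                       response_en: str,
                       response_xx: str,
                       language: str) -> dict:
    """Pack one instruction-tuning example whose target walks through the
    translate -> answer in English -> translate back chain of thought."""
    prompt = (
        f"### Instruction ({language}):\n{instruction_xx}\n\n"
        "### Response:\n"
    )
    target = (
        f"Instruction in English: {instruction_en}\n"
        f"Response in English: {response_en}\n"
        f"Response in {language}: {response_xx}"
    )
    return {"prompt": prompt, "completion": target}


example = build_taco_example(
    instruction_xx="नेपालको राजधानी कुन हो?",          # "What is the capital of Nepal?"
    instruction_en="What is the capital of Nepal?",
    response_en="The capital of Nepal is Kathmandu.",
    response_xx="नेपालको राजधानी काठमाडौं हो।",
    language="Nepali",
)
print(example["prompt"] + example["completion"])
```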

Link to the Paper: arXiv (arXiv:2311.10797)

Datasets: We first translated the Alpaca-52K-GPT4 and Dolly-15K datasets into 132 languages using Google Cloud Translation. However, due to computational limits, our experiments continued with three low-resource languages, Sanskrit, Nepali, and Maithili, and one high-resource language, Persian.
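
As a rough sketch, a single Alpaca-style record can be machine-translated with the Google Cloud Translation Python client (google-cloud-translate). The record field names and target language codes here are illustrative, and valid credentials (GOOGLE_APPLICATION_CREDENTIALS) are assumed.

```python
# Minimal sketch: translating instruction-tuning records with the Google Cloud
# Translation API (google-cloud-translate, v2 client). Assumes the
# GOOGLE_APPLICATION_CREDENTIALS environment variable points to a valid key.
from google.cloud import translate_v2 as translate

client = translate.Client()

def translate_record(record: dict, target_lang: str) -> dict:
    """Translate the instruction/input/output fields of one Alpaca-style record."""
    translated = {}
    for field in ("instruction", "input", "output"):
        text = record.get(field, "")
        if not text:
            translated[field] = ""
            continue
        result = client.translate(text, source_language="en",
                                  target_language=target_lang)
        translated[field] = result["translatedText"]
    return translated

# Example: translate one record into Nepali ("ne").
record = {"instruction": "Explain photosynthesis.", "input": "", "output": "..."}
nepali_record = translate_record(record, target_lang="ne")
```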

Multilingual Instruction-Tuning Dataset (MITS)

🌮 Taco-datasets

As machine translations are prone to translationese, we evaluated the translation quality. We sampled 1,000 sentences from each language and computed BLEU (using sacreBLEU), chrF++, and the Translation Error Rate (TER) through a round-trip process from English to the target language (en-xx) and back from the target language to English (xx-en). The scores for the languages used in our experiments are given below:

| Language | BLEU  | chrF++ | TER   |
|----------|-------|--------|-------|
| Sanskrit | 65.23 | 84.62  | 19.43 |
| Nepali   | 69.68 | 87.37  | 15.02 |
| Persian  | 62.42 | 80.72  | 20.61 |
| Maithili | 63.65 | 84.88  | 19.58 |
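
For reference, the round-trip scoring can be reproduced with the sacrebleu package roughly as sketched below. The helper name is our own, producing the back-translations is assumed to happen elsewhere (e.g. with the translation client above), and we use `CHRF(word_order=2)` to obtain chrF++.

```python
# Sketch of round-trip translation quality scoring with the sacrebleu package.
# `originals` are the sampled English sentences; `round_trips` are the same
# sentences translated en -> xx -> en.
from sacrebleu.metrics import BLEU, CHRF, TER

def round_trip_scores(originals: list[str], round_trips: list[str]) -> dict:
    refs = [originals]  # sacrebleu expects a list of reference streams
    return {
        "BLEU": BLEU().corpus_score(round_trips, refs).score,
        "chrF++": CHRF(word_order=2).corpus_score(round_trips, refs).score,
        "TER": TER().corpus_score(round_trips, refs).score,
    }

scores = round_trip_scores(
    originals=["The capital of Nepal is Kathmandu."],
    round_trips=["Kathmandu is the capital of Nepal."],
)
print(scores)
```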

Model Response: You can find the response generated by the trained model here

Fig: Instruction and the response generated using the TaCo method.

Model Weights: We have released all of our model adapters on Hugging Face.
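
A minimal sketch of loading a released adapter on top of the Guanaco-33B base model with transformers + peft is shown below. The adapter repo id and the prompt template are placeholders/assumptions; substitute the actual adapter name from our Hugging Face page.

```python
# Sketch: loading a TaCo adapter on top of the Guanaco-33B base model with
# Hugging Face transformers + peft. ADAPTER_ID is a placeholder, not a real repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "timdettmers/guanaco-33b-merged"   # base model named in this README
ADAPTER_ID = "path/to/taco-adapter"             # placeholder: replace with the released adapter

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

prompt = "### Instruction (Nepali):\nनेपालको राजधानी कुन हो?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```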

Results: The performance across categories and languages is shown in the table below.

| Category        | Nepali | Sanskrit | Persian | Maithili |
|-----------------|--------|----------|---------|----------|
| coding          | 3.14   | 6.86     | 7.90    | 7.86     |
| common sense    | 8.55   | 8.30     | 8.60    | 7.80     |
| counterfactual  | 8.90   | 8.30     | 8.46    | 8.30     |
| fermi           | 7.70   | 7.70     | 8.20    | 7.50     |
| generic         | 9.00   | 9.13     | 9.00    | 9.30     |
| knowledge       | 9.55   | 9.42     | 9.25    | 8.80     |
| math            | 9.67   | 3.33     | 6.33    | 7.00     |
| roleplay        | 8.35   | 8.90     | 8.60    | 7.80     |
| writing         | 8.80   | 8.95     | 8.65    | 9.30     |
| Overall average | 8.24   | 8.31     | 8.52    | 8.30     |

Citation: Please use the citation below to cite our work.

@article{upadhayay2023taco,
  title={TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes},
  author={Upadhayay, Bibek and Behzadan, Vahid},
  journal={arXiv preprint arXiv:2311.10797},
  year={2023}
}

License and Intended Use: The TaCo adapter weights are trained on top of the Guanaco-33B (timdettmers/guanaco-33b-merged) model, which is based on the LLaMA model. We used the Alpaca-52K and Dolly-15K datasets and translated them using Google Cloud Translation. We advise you to review the licenses of Guanaco-33B and the LLaMA model, as well as the terms of use of Google Cloud Translation, before using this model.