Llama-SEA-LION-v3-70B
Our Llama-SEA-LION-v3-70B model has been continued pre-trained on top of Llama 3.1 70B Instruct and is 70 billion parameters in size. Like our Llama-SEA-LION-v3-8B model, Llama-SEA-LION-v3-70B has a context length of 128K tokens, making these our SEA-LION models with the longest context length to date.
Llama-SEA-LION-v3-70B was continued pre-trained on approximately 200B tokens across 11 SEA languages: Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai and Vietnamese.
Llama-SEA-LION-v3-70B-IT was fine-tuned in two stages on approximately 12.3M English instruction-completion pairs alongside a pool of 4.5M instruction-completion pairs in SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai and Vietnamese.
At a glance:
Model type: Decoder
Tokenizer: Default tokenizer used in Llama 3.1 70B Instruct
Available Formats:
Base (Llama-SEA-LION-v3-70B)
Instruct (Llama-SEA-LION-v3-70B-IT)
GGUF (Llama-SEA-LION-v3-70B-IT-GGUF)
Languages supported:
Burmese
Chinese
English
Filipino
Indonesian
Javanese (Instruct/GGUF only)
Khmer
Lao
Malay
Sundanese (Instruct/GGUF only)
Tamil
Thai
Vietnamese
License: Llama 3.1 Community License
Llama-SEA-LION-v3-70B was trained in two stages on the following hardware:

First Stage
Infrastructure: AWS p5e.48xlarge (8 instances)
GPUs: 64 x Nvidia H200 140GB
Training Duration: 200 hrs (steps 0 - 9000)

Second Stage
Infrastructure: SingTel HGX-100 (16 instances)
GPUs: 128 x Nvidia H100 80GB
Training Duration: 495 hrs (steps 9000 - 47684)

Training hyperparameters:
Precision: bfloat16
Optimizer: decoupled_adamw
Scheduler: weight_stable_decay
Learning Rate: 1.0e-5
Global Batch Size: 512
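The scheduler name above is library-specific and not standardised. Assuming it denotes a warmup/stable/decay-style schedule (warm up to the peak learning rate, hold it flat, then decay), a minimal sketch looks like this; the breakpoints are illustrative assumptions, not the actual training configuration:

```python
# Minimal sketch of a warmup/stable/decay learning-rate schedule.
# PEAK_LR and TOTAL_STEPS come from the hyperparameters above;
# WARMUP_STEPS and DECAY_START are assumed for illustration.

PEAK_LR = 1.0e-5
TOTAL_STEPS = 47684   # final step of the second stage
WARMUP_STEPS = 2000   # assumption
DECAY_START = 40000   # assumption

def lr_at(step: int) -> float:
    if step < WARMUP_STEPS:            # linear warmup
        return PEAK_LR * step / WARMUP_STEPS
    if step < DECAY_START:             # stable plateau at the peak rate
        return PEAK_LR
    # linear decay to zero over the remaining steps
    remaining = (TOTAL_STEPS - step) / (TOTAL_STEPS - DECAY_START)
    return PEAK_LR * max(remaining, 0.0)

for s in (0, 1000, 20000, 45000, 47684):
    print(s, f"{lr_at(s):.2e}")
```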
For tokenisation, the model employs the default tokenizer used in Llama 3.1 70B Instruct.
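Since all token counts below are measured with this tokenizer, a quick way to reproduce a count is to load it via 🤗 Transformers. A minimal sketch; the Hugging Face model id is assumed from the model name, and the SEA-LION repository ships the same tokenizer as Llama 3.1 70B Instruct:

```python
from transformers import AutoTokenizer

# Assumed model id for illustration.
tok = AutoTokenizer.from_pretrained("aisingapore/Llama-SEA-LION-v3-70B-IT")

text = "Selamat pagi, apa khabar?"
ids = tok(text)["input_ids"]
print(len(ids), ids)  # number of tokens and their ids
```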
The Llama-SEA-LION-v3-70B base model was continued pre-trained on 200B tokens drawn from the following data mix:
Code (20% of total):
Stackv2: 40B tokens (20%)

English (25% of total):
Dolma: 37.5B tokens (18.75%)
Fineweb-Edu: 7.5B tokens (3.75%)
Others: 5B tokens (2.5%)

Chinese (13% of total):
SEA-LION Pile v1: 12B tokens (6%)
Others: 14B tokens (7%)

Vietnamese (13% of total):
SEA-LION Pile v1: 8.4B tokens (4.2%)
VinBigData: 16B tokens (8%)
Others: 1.6B tokens (0.8%)

Indonesian (13% of total):
SEA-LION Pile v1: 7B tokens (3.5%)
SEA-LION Pile v2: 7B tokens (3.5%)
Others: 12B tokens (6%)

Thai (10% of total):
SEA-LION Pile v1: 10.7B tokens (5.35%)
WangChanBERTa: 8.5B tokens (4.25%)
Others: 0.8B tokens (0.4%)

Filipino, Malay, Tamil (3% of total):
SEA-LION Pile v1, AI4Bharat Sangraha: 4.28B tokens (2.14%)
Others: 1.72B tokens (0.86%)

Khmer, Lao, Burmese (3% of total):
SEA-LION Pile v1: 5.2B tokens (2.6%)
Others: 0.8B tokens (0.4%)
Notes:
All token counts are counted using the Llama 3.1 70B Instruct tokenizer.
SEA-LION Pile v1 is processed from Common Crawl WET, which is openly published. The cutoff date of this version is September 2020.
SEA-LION Pile v2 is processed from Common Crawl WARC from October 2020 to April 2024.
Tamil data from Sangraha is openly published, and the accompanying paper is publicly available.
Tamil news is sourced with permission from its publisher.
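The mix can be sanity-checked from the figures above: the per-source token counts sum to the stated 200B, and the per-language shares to 100%. A small script with the numbers copied from the table:

```python
# Token counts (in billions) copied from the data table above.
mix = {
    "Code":                 {"Stackv2": 40},
    "English":              {"Dolma": 37.5, "Fineweb-Edu": 7.5, "Others": 5},
    "Chinese":              {"SEA-LION Pile v1": 12, "Others": 14},
    "Vietnamese":           {"SEA-LION Pile v1": 8.4, "VinBigData": 16, "Others": 1.6},
    "Indonesian":           {"SEA-LION Pile v1": 7, "SEA-LION Pile v2": 7, "Others": 12},
    "Thai":                 {"SEA-LION Pile v1": 10.7, "WangChanBERTa": 8.5, "Others": 0.8},
    "Filipino-Malay-Tamil": {"SEA-LION Pile v1, Sangraha": 4.28, "Others": 1.72},
    "Khmer-Lao-Burmese":    {"SEA-LION Pile v1": 5.2, "Others": 0.8},
}

total = sum(sum(srcs.values()) for srcs in mix.values())
print(f"total: {total:.2f}B")  # 200.00B
for lang, srcs in mix.items():
    share = 100 * sum(srcs.values()) / total
    print(f"{lang}: {share:.2f}%")
```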
We evaluated the Llama-SEA-LION-v3-70B base model on general language capabilities and constraint-following behaviour.
For the evaluation of general language capabilities, we employed the SEA-HELM (also known as BHASA) evaluation benchmark across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarisation (Abssum), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: SEA-HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
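To illustrate the two mechanics described in this note, the sketch below extracts an answer from a tagged response and rescales a raw accuracy against the random-chance baseline. Both the tag format and the normalisation formula are assumptions for illustration, not the exact SEA-HELM implementation:

```python
import re

def extract_answer(response: str) -> str | None:
    # Assumed tag format; SEA-HELM's actual prompt templates may differ.
    m = re.search(r"<ANSWER>(.*?)</ANSWER>", response, re.DOTALL)
    return m.group(1).strip() if m else None

def normalise(raw_acc: float, n_options: int) -> float:
    # Rescale so random guessing scores 0 and a perfect model scores 1.
    chance = 1.0 / n_options
    return max((raw_acc - chance) / (1.0 - chance), 0.0)

print(extract_answer("The answer is <ANSWER>B</ANSWER>"))  # B
print(normalise(0.625, 4))  # 0.5: halfway between chance and perfect
```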
The evaluation was done five-shot with native prompts on a sample of 100-1000 instances for each dataset.
Following the implementation of IFEval in the OpenLLM leaderboard, we also implement SEA-IFEval to provide a comparison of the ability of the model to follow specific constraints in English and in SEA languages.
SEA-IFEval
Based on IFEval, the linguists and native speakers in the team worked together to filter, localise and translate the dataset into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
SEA-IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalised by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
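A minimal sketch of this language normalisation, under the assumption that it is a simple multiplicative penalty on the constraint-following accuracy:

```python
def sea_ifeval_score(follow_rate: float, correct_language_rate: float) -> float:
    # Accuracy is scaled by the proportion of responses in the correct
    # language: a correct answer in the wrong language counts as a failure.
    return follow_rate * correct_language_rate

# e.g. 80% of constraints followed, but only 90% of responses in-language
print(sea_ifeval_score(0.80, 0.90))  # 0.72
```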
For more details on Llama-SEA-LION-v3-70B base benchmark performance, please refer to the SEA-HELM leaderboard, https://leaderboard.sea-lion.ai/.
Llama-SEA-LION-v3-70B-IT is a multilingual instruction-following model that has been tuned using a combination of full-parameter fine-tuning, on-policy alignment, and model merges of the best-performing checkpoints. Fine-tuning took approximately 3,200 GPU hours on a single node of 8x H100-80GB GPUs.
Llama-SEA-LION-v3-70B-IT was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data sources.
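The model-merging step mentioned above can be pictured as weight-space averaging of checkpoints. A minimal sketch with uniform weights; the actual merge method and weights used for Llama-SEA-LION-v3-70B-IT are not specified on this page:

```python
import torch

def merge_checkpoints(state_dicts: list[dict]) -> dict:
    # Uniform weight-space average ("model soup") over checkpoints with
    # identical architectures; illustrative only.
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged
```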
We evaluated Llama-SEA-LION-v3-70B-IT on both general language capabilities and instruction-following capabilities.
For the evaluation of general language capabilities, we employed the SEA-HELM (also known as BHASA) evaluation benchmark across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarisation (Abssum), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: SEA-HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done zero-shot with native prompts on a sample of 100-1000 instances for each dataset.
Since Llama-SEA-LION-v3-70B-IT is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets: SEA-IFEval (based on IFEval) and SEA-MTBench (based on MT-Bench). As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localise and translate them into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
SEA-IFEval
SEA-IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalised by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
SEA-MTBench
SEA-MTBench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use gpt-4-1106-preview as the judge model and compare against gpt-3.5-turbo-0125 as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. the average win rate across each category: Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction). A tie is given a score of 0.5.
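For instance, with per-category win rates and ties scored at 0.5, the reported number is the plain average across the seven categories. A worked sketch with made-up counts:

```python
# Illustrative per-category results against the baseline model:
# (wins, ties, losses) out of the comparisons in each category.
categories = {
    "Math":       (6, 2, 2),
    "Reasoning":  (5, 1, 4),
    "STEM":       (7, 1, 2),
    "Humanities": (8, 0, 2),
    "Roleplay":   (6, 3, 1),
    "Writing":    (7, 2, 1),
    "Extraction": (5, 2, 3),
}

def win_rate(wins: int, ties: int, losses: int) -> float:
    return (wins + 0.5 * ties) / (wins + ties + losses)

rates = [win_rate(*r) for r in categories.values()]
print(f"weighted win rate: {sum(rates) / len(rates):.3f}")
```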
For more details on Llama-SEA-LION-v3-70B-IT benchmark performance, please refer to the SEA-HELM leaderboard, https://leaderboard.sea-lion.ai/.
The following quantized GGUF formats of our Llama-SEA-LION-v3-70B-IT model are available:
Llama-SEA-LION-v3-70B-IT-F16
Llama-SEA-LION-v3-70B-IT-Q2_K
Llama-SEA-LION-v3-70B-IT-Q3_K_M
Llama-SEA-LION-v3-70B-IT-Q4_0
Llama-SEA-LION-v3-70B-IT-Q4_K_M
Llama-SEA-LION-v3-70B-IT-Q5_0
Llama-SEA-LION-v3-70B-IT-Q5_K_M
Llama-SEA-LION-v3-70B-IT-Q6_K
Llama-SEA-LION-v3-70B-IT-Q8_0
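These GGUF files can be run with llama.cpp or its Python bindings. A sketch using llama-cpp-python; the file path and prompt are illustrative, and you should pick a quant that fits your hardware:

```python
from llama_cpp import Llama

# Q4_K_M is a common quality/size trade-off; adjust the path to the
# file you downloaded.
llm = Llama(
    model_path="Llama-SEA-LION-v3-70B-IT-Q4_K_M.gguf",
    n_ctx=8192,        # the model supports up to 128K if memory allows
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Apa itu SEA-LION?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```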
Llama-SEA-LION-v3-70B models are available for download via the following channels:
Llama-SEA-LION-v3-70B
Llama-SEA-LION-v3-70B-IT
Llama-SEA-LION-v3-70B-IT-GGUF
Please refer to our documentation for more details on how to access them.
Llama-SEA-LION-v3-70B-IT can be run using the 🤗 Transformers library.
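A minimal sketch follows; the Hugging Face model id is assumed from the model name, and since a 70B model in bfloat16 needs roughly 140 GB of accelerator memory, device_map="auto" is used to shard the weights across available GPUs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/Llama-SEA-LION-v3-70B-IT"  # assumed model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training precision of the model
    device_map="auto",           # shard the 70B weights across GPUs
)

messages = [{"role": "user", "content": "Terangkan SEA-LION secara ringkas."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```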
It is important for users to be aware that our models exhibit certain limitations that warrant consideration:
The model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.