SEA-LION
Southeast Asian Languages in One Network
Built for Southeast Asia, by Southeast Asia
Southeast Asian Languages in One Network (SEA-LION) is a family of open-source Large Language Models (LLMs) that better understands Southeast Asia’s (SEA) diverse contexts, languages, and cultures.
It is an open-source project anchored by the Products Pillar of AI Singapore. Our work on SEA-LION aims to create LLMs that cater to under-represented population groups and low-resource languages in the SEA region.
This site provides information and resources on SEA-LION, including how to access the models, hosting options, and how-to guides.
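For example, the instruction-tuned releases can typically be loaded straight from Hugging Face with the transformers library. The snippet below is a minimal sketch, assuming a hypothetical repository ID derived from the model names listed further down; always check the actual model card for the exact ID, prompt format, and hardware requirements.

```python
# Minimal sketch of loading and prompting a SEA-LION instruct model with Hugging Face transformers.
# The repository ID is an assumption for illustration -- verify it on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/Llama-SEA-LION-v3-8B-IT"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask a question in a Southeast Asian language (Indonesian here).
messages = [{"role": "user", "content": "Apa ibu kota Indonesia?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The table below summarises the SEA-LION models currently available.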
| Size | Context Length (tokens) | Training                           | Variants              |
|------|-------------------------|------------------------------------|-----------------------|
| 8B   | 128K                    | SFT¹ of Llama-SEA-LION-v3-8B-IT    | Reasoning, GGUF       |
| 70B  | 128K                    | SFT of Llama-SEA-LION-v3-70B-IT    | Reasoning, GGUF       |
| 9B   | 8192                    | CPT² of Gemma2                     | Base, Instruct, GGUF  |
| 8B   | 128K                    | CPT of Llama 3.1 8B                | Base, Instruct, GGUF  |
| 70B  | 128K                    | CPT of Llama 3.1 70B               | Base, Instruct, GGUF  |
| 8B   | 8192                    | CPT of Llama3                      | Base, Instruct, GGUF  |
| 3B   | 2048                    | Pre-training from scratch          | Base                  |
| 7B   | 2048                    | Pre-training from scratch          | Instruct              |
¹ Supervised Fine-Tuning
² Continued Pre-Training
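The GGUF variants are intended for local inference with llama.cpp-compatible runtimes. Below is a minimal sketch using the llama-cpp-python bindings; the file name is a placeholder for whichever GGUF quantisation you download from the corresponding model card, not an official artefact name.

```python
# Minimal sketch of running a GGUF variant locally with llama-cpp-python.
# The model_path is a placeholder -- download a GGUF file from the model card first.
from llama_cpp import Llama

llm = Llama(
    model_path="./sea-lion-8b-it-q4_k_m.gguf",  # placeholder file name
    n_ctx=8192,       # context window; see the table above for each model's maximum
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Terjemahkan ke Bahasa Inggris: Selamat pagi!"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```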
SEA-LION has seen:
In v1, the ability to outperform most models on SEA-HELM (Southeast Asian Holistic Evaluation of Language Models) at the time of its release
In v2, outperformance on SEA tasks, while retaining credible performance on standard (English) benchmarks
In v2.1, key improvements in conversational abilities across SEA languages, with more helpful and contextually appropriate responses to user prompts
In v3, outperformance of similarly sized open-source models, and even some larger models, in both general and SEA capabilities
In v3.5, the ability to handle reasoning tasks, with the versatility to handle general tasks as well, while maintaining performance on par with state-of-the-art models

We use a holistic approach to evaluation, including not just traditional Natural Language Processing (NLP) benchmarking tasks (such as sentiment analysis and question answering) but also evaluations tailored to Southeast Asian languages and cultures.

Visit our benchmarks page for a more detailed breakdown of:

How SEA-LION compares to other available models along different metrics
What SEA-HELM is and the four key capabilities it evaluates: English performance, proficiency in SEA chat, instruction-following, and linguistic tasks
What each of these globally recognized metrics means under SEA-HELM
Transparent and Open Source
We have benefited greatly from the open-source community and believe that efforts to better represent our region will similarly be well served by open-source efforts.
SEA-LION will also be open and transparent in the following areas throughout this guide:
Pre-training data
Model training code
Fine-tuning data
Evaluation benchmarks

All SEA-LION releases will therefore embrace an open-source ethos under the MIT license as much as possible; however, the exact licensing terms may vary depending on the underlying base model’s restrictions or requirements. For instance, if a model leverages Meta’s Llama 3, it may be bound by the Meta Llama 3 Community License, which places certain restrictions on commercial use. Similarly, the Gemma-based variants may carry different terms. Users should always refer to the Hugging Face model card of each specific SEA-LION model for the most accurate, up-to-date license information.
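As a practical aid, the license declared in a model card's metadata can also be read programmatically with the huggingface_hub client. This is only a convenience sketch (the repository ID is assumed); the model card page itself remains the authoritative source.

```python
# Sketch: read the license field declared in a SEA-LION model card's metadata.
# The repository ID is an assumption for illustration.
from huggingface_hub import model_info

info = model_info("aisingapore/Llama-SEA-LION-v3-8B-IT")  # assumed repo ID
license_tag = info.card_data.license if info.card_data else None
print(license_tag or "No license declared in card metadata -- check the model card page.")
```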
We welcome contributions to SEA-LION! Check out our contributing guide to get started. Some ways to contribute:
Report bugs and issues
Enhance the documentation
Add more model evaluation tasks and metrics
Train versions of the model in more SEA languages

We also welcome ideas for further enhancing and expanding the capabilities of SEA-LION together.
If you use SEA-LION in your work, please cite it as:
If you are using SEA-LION v3 for your work, please cite it as:
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, or the National University of Singapore.
We are also grateful for the support of the Infocomm Media Development Authority (IMDA) of Singapore.
SEA-LION would not be possible without a growing list of Singapore, regional, and international collaborators. Please see our website for more details.
If you have questions, comments, or issues, please open a GitHub issue or reach out to us directly.