SEA-LION-Embedding-E5
Last update: 2026-03-16
SEA-LION is a collection of Large Language Models (LLMs) and encoders which have been pretrained and fine-tuned for the Southeast Asia (SEA) region.
Introduction
SEA-LION stands for Southeast Asian Languages In One Network.
The SEA-LION-E5-Embedding-600M model is a Sentence Transformer fine-tuned from the multilingual-e5-large base, mapping sentences and paragraphs to a 1024-dimensional dense vector space. It is designed for high-accuracy semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and Retrieval-Augmented Generation (RAG) workflows. It leverages the robust XLM-RoBERTa architecture pretrained on 100 languages, optimised here for 11 Southeast Asian languages: Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai, and Vietnamese.
Model Details
Model Description
The SEA-LION-E5-Embedding-600M model is a Sentence Transformer built on Multilingual E5 Text Embeddings (multilingual-e5-large), which was in turn initialised from the xlm-roberta-large architecture.
Model Type: Sentence Transformer
Base Architecture: E5 (Transformer Encoder)
Developed by: AI Products Pillar, AI Singapore
Funded by: Singapore NRF
Shared by: AI Products Pillar, AI Singapore
Context length: 512
Languages: Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai, and Vietnamese
License: MIT
Finetuned from model: multilingual-e5-large
Model Sources
Documentation: Sentence Transformers Documentation
Repository: aisingapore/SEA-LION-E5-Embedding-600M
Uses
SEA-LION-E5-Embedding-600M details one of the variants available within this collection. If you are deploying our models for your specific use case, we would love to hear from you! Please feel free to contact us to share your experience or explore potential collaborations.
Fine-tuned Embedding Models
- Retrieval-Augmented Generation (RAG)
- Information retrieval and search
- Similarity comparisons
Pre-trained Encoder Models
- Fill mask
- Text classification
- Fine-tuning for downstream tasks (e.g., sentiment analysis, classification)
Pre-trained Model Checkpoints
- Continued Pre-Training (CPT)
- Fine-tuning for downstream tasks (e.g., sentiment analysis, classification)
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
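For example, with pip:

```shell
pip install -U sentence-transformers
```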
Then you can load this model and run inference.
Training Details
Training Data
This model was tuned using a multi-stage training pipeline with the following datasets:
Contrastive Pre-training: 245 million text pairs (EN-EN and EN-SEA) to enhance cross-lingual alignment.
Fine-tuning: 13 million diverse text pairs (spanning EN-EN, CN-CN, EN-SEA, and SEA-SEA) to create the final fine-tuned model.
| Language Pair | Percentage |
| --- | --- |
| EN-EN | 20% |
| CN-CN | 20% |
| EN-SEA | 10% |
| SEA-SEA | 50% |
Training Procedure
Preprocessing
Following the foundational training, the model's cross-lingual alignment was substantially enhanced through contrastive pre-training on 245 million text pairs, focusing on English-to-English and English-to-Southeast-Asian language mappings (EN-EN and EN-SEA). Finally, to ensure the model could effectively follow user instructions and handle complex interactions, it was instruction-tuned on a diverse dataset of 13 million text pairs spanning EN-EN, CN-CN, EN-SEA, and SEA-SEA, yielding the final model.
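The contrastive objective described above can be illustrated with an in-batch-negatives (InfoNCE-style) loss, the standard objective in E5-style contrastive pre-training. This is an illustrative sketch in plain PyTorch, not the project's actual training code:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  passage_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """In-batch-negatives contrastive loss over a batch of text pairs.

    query_emb, passage_emb: (batch, dim) embeddings of paired texts.
    Each passage is the positive for its own query and a negative for
    every other query in the batch.
    """
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    logits = q @ p.T / temperature       # (batch, batch) cosine similarities
    targets = torch.arange(q.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy batch: 4 pairs of 1024-dimensional embeddings.
torch.manual_seed(0)
loss = info_nce_loss(torch.randn(4, 1024), torch.randn(4, 1024))
print(float(loss))
```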
Evaluation
Testing Data, Factors & Metrics
Testing Data
The model is evaluated across three primary benchmark suites to provide a comprehensive assessment of embedding quality across Southeast Asian, Chinese, and English contexts:
SEA-BED (Southeast Asia Embedding Benchmark) (https://arxiv.org/pdf/2508.12243): The primary testing suite, consisting of 169 datasets across 10 Southeast Asian languages (Burmese, Filipino, Indonesian, Khmer, Malay, Lao, Tamil, Tetum, Thai, and Vietnamese). Notably, 71% of these datasets are native-authored or human-curated to preserve regional linguistic properties.
CMTEB (Chinese Massive Text Embedding Benchmark): A specialised subset of MTEB focused on Chinese language tasks, used to evaluate performance in one of the region's most prominent scripts.
MTEB (Massive Text Embedding Benchmark): The industry-standard global benchmark used to gauge general-purpose English embedding performance across a wide array of tasks.
Results
For a performance comparison of embedding models on SEA-BED, please refer to SEA-HELM.
Environmental Impact
Carbon emission was estimated using the fact sheet from TRG Datacenters.
Hardware Type: NVIDIA H200 141GB GPUs
Hours used: 896 hrs
Cloud Provider: SMC H200
Compute Region: Singapore
Carbon Emitted: approx. 252.13 kg CO₂e
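As a sanity check, the reported figure is consistent with a simple energy-based estimate. The per-GPU power draw (~0.7 kW) and grid carbon intensity (~0.402 kg CO₂e/kWh) below are illustrative assumptions, not values taken from the TRG fact sheet:

```python
# All inputs below except the GPU-hours are illustrative assumptions.
gpu_hours = 896            # from this model card
power_kw = 0.7             # assumed average draw per H200 (TDP ~700 W)
grid_kg_per_kwh = 0.402    # assumed Singapore grid carbon intensity

energy_kwh = gpu_hours * power_kw
kg_co2e = energy_kwh * grid_kg_per_kwh
print(round(kg_co2e, 2))   # ~252.13
```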
Technical Specifications
Model Architecture and Objective
SEA-LION-E5-Embedding-600M is an encoder-only model based on XLM-R Large with E5-style contrastive pre-training and mean pooling.
| Parameter | Value |
| --- | --- |
| d_model | 1024 |
| Attention heads | 16 |
| Vocabulary | 250,000 (SentencePiece) |
| Sequence length | 512 |
| Pooling mode | Mean tokens (with attention mask) |
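The masked mean pooling listed above can be sketched in a few lines of PyTorch (an illustrative implementation, not the model's actual pooling module):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor,
              attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (batch, seq_len, dim) encoder outputs
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)   # zero padding, then sum
    counts = mask.sum(dim=1).clamp(min=1e-9)        # number of real tokens
    return summed / counts

# Toy example: batch of 2 sequences, seq_len 4, dim 1024; the second
# sequence has two padding positions that must not affect the average.
emb = torch.randn(2, 4, 1024)
mask = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 0]])
pooled = mean_pool(emb, mask)
print(pooled.shape)  # torch.Size([2, 1024])
```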
Glossary
E5: "EmbEddings from bidirEctional Encoder rEpresentations" – a weakly-supervised contrastive pre-training method for text embeddings.
SEA-BED: Southeast Asia Embedding Benchmark – a comprehensive evaluation suite for embedding models on SEA languages.
Asymmetric Retrieval: Retrieval tasks where query and document formulations differ; E5 uses prefixes to handle this.
Mean Pooling: Aggregating token embeddings by averaging (weighted by attention mask) to produce a fixed-size sentence representation.
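The query/passage prefixing mentioned under Asymmetric Retrieval can be illustrated with plain string handling. The exact prefix strings follow the E5 convention of the base model; whether this checkpoint requires them is an assumption:

```python
def add_e5_prefixes(queries, passages):
    """Prepend E5-style role prefixes so the encoder can distinguish
    short queries from longer documents in asymmetric retrieval."""
    return (
        [f"query: {q}" for q in queries],
        [f"passage: {p}" for p in passages],
    )

queries, passages = add_e5_prefixes(
    ["best street food in Bangkok"],
    ["Bangkok is famous for its street food scene."],
)
print(queries[0])   # query: best street food in Bangkok
```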
More Information
While the underlying encoder supports masked language modeling and can be fine-tuned for downstream tasks such as sequence classification, token classification, or question answering, this model is primarily optimised via contrastive fine-tuning for producing sentence embeddings. Please note that these weights have not been specifically aligned for safety; developers should implement their own safety evaluations and security measures. The authors disclaim all liability for any claims, damages, or other liabilities arising from the use of the released code or weights.
For more info, please contact us at [email protected]