We’re a small, agile team on a mission to empower the rest of the world with AI, the most rapid and transformative technology in history. To accomplish this audacious goal, we’re building a world-class, multidisciplinary group of laser-focused, motivated individuals who are eager to make a dent in the world. Our first step is to build the most powerful foundation model (LLM) in Korea. We are strongly backed by leading VCs in Korea and the US, including Strong Ventures, Kakao Ventures, and Bass Investment.
Introducing Trillion-7B-preview: A Game-Changing 7B LLM for Multilingual AI
We are thrilled to introduce Trillion-7B-preview, a 7-billion-parameter large language model (LLM) built to advance the scalability and performance of multilingual AI. Designed with efficiency, accuracy, and adaptability in mind, Trillion-7B-preview delivers state-of-the-art results at a significantly lower computational footprint.
Key Features of Trillion-7B-preview:
• Architecture: Transformer decoder with RoPE, SwiGLU, and RMSNorm (see the sketch after this list).
• Scale: 7.76 billion parameters across 32 layers with 32 attention heads.
• Training Data: ~2 trillion tokens.
• Context Length: up to 4,096 tokens.
• Vocabulary Size: 128,128 tokens.
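For readers curious what those architectural ingredients look like, below is a minimal PyTorch sketch of the standard formulations of RMSNorm, SwiGLU, and rotary position embeddings (RoPE). It is an illustrative sketch only, not Trillion Labs’ actual implementation; the names, signatures, and defaults are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """RMSNorm: rescale by the root-mean-square of the features (no mean centering)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight

class SwiGLU(nn.Module):
    """SwiGLU feed-forward block: silu(x @ W_gate) * (x @ W_up), projected back down."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden_dim, bias=False)
        self.w_up = nn.Linear(dim, hidden_dim, bias=False)
        self.w_down = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Swap and negate the two halves of the feature dimension.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    """RoPE: rotate query/key features by a position-dependent angle.
    cos/sin are precomputed per position and broadcast over x."""
    return x * cos + rotate_half(x) * sin
```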
Performance & Benchmarking
Benchmark evaluations indicate that Trillion-7B-preview achieves an average score of around 66.5% while using significantly less training compute (~9.3×10²² FLOPs; see the sanity check below). It surpasses models of similar size and remains competitive with models trained on 3-8 times more compute, such as Qwen2.5-7B-Instruct and EXAONE-3.5-7.8B-Instruct.
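That compute figure is consistent with the standard C ≈ 6ND training-FLOPs approximation applied to the specs above (N = 7.76B parameters, D = 2T tokens). This is a rule of thumb, not an exact accounting:

```python
# Sanity check of the quoted training compute via the common C ~ 6*N*D rule of thumb.
N = 7.76e9   # parameters (from the spec list above)
D = 2e12     # training tokens
C = 6 * N * D
print(f"{C:.2e} FLOPs")  # 9.31e+22, consistent with the ~9.3x10^22 figure quoted
```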
Customer Insights
A leading on-device AI solutions company in Korea shared its experience with Trillion-7B-preview:
“When evaluating inference speed, response quality, and RAG (Retrieval-Augmented Generation) performance, Trillion-7B-preview has consistently delivered outstanding results, significantly surpassing models like Qwen and Llama in Korean language processing. Moreover, even after quantization, it maintains high performance with minimal degradation, making it an exceptionally reliable choice for real-world deployment.”
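For teams that want to try a quantized deployment like the one described above, a 4-bit load through the Hugging Face transformers + bitsandbytes integration might look like the following sketch. The repo id is assumed from the model page linked below, and the quantization settings are illustrative defaults, not the customer’s exact configuration:

```python
# pip install transformers bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "trillionlabs/Trillion-7B-preview"  # assumed repo id; see the link below

# 4-bit NF4 weight quantization via bitsandbytes, computing in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```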
Explore Trillion-7B-preview Today
For a comprehensive overview and access to the model, visit Trillion Labs’ official page on Hugging Face:
🔗 Trillion-7B-preview on Hugging Face
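As a starting point, a minimal generation quickstart with the transformers library might look like this; the repo id and chat-template usage are assumptions, so follow the model card above for the authoritative snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trillionlabs/Trillion-7B-preview"  # assumed repo id; see the link above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]  # "What is the capital of Korea?"
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```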