- Michael Willson
- May 25, 2025
Salesforce has launched xGen-Small, a new family of small but capable AI models designed for long-context language tasks. At just 4B and 9B parameters, these models perform surprisingly well on enterprise tasks like document summarization, code generation, and customer support. Unlike many compact models, xGen-Small handles contexts of up to 128,000 tokens, making it well suited to processing large documents and long chat histories.
If you’re looking for an efficient model with strong results and minimal compute cost, xGen-Small is worth your attention.
What is xGen-Small?
xGen-Small is a collection of lightweight language models built by Salesforce AI. These models are optimized for long-context tasks, combining strong performance with low resource usage. They're based on the Transformer architecture and trained using Salesforce's vertically integrated pipeline.
This includes a mix of domain-balanced pretraining, smart data curation, and post-training techniques like Direct Preference Optimization (DPO). The models are available in two sizes: 4B and 9B parameters.
What makes xGen-Small different?
Most small models struggle with long contexts. xGen-Small supports up to 128k tokens, far more than models like GPT-3.5 or Mistral 7B. That means it can work through entire PDFs, codebases, or multi-turn chats without losing track of the beginning.
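To put that 128k figure in perspective, here is a rough sketch of checking whether a document fits in the window. The ~4-characters-per-token ratio is a common rule of thumb for English prose, not an exact tokenizer count, and the helper below is illustrative rather than part of any Salesforce tooling.

```python
# Rough heuristic: does a document fit in xGen-Small's 128k context window?
# Real applications should count tokens with the model's own tokenizer.

CONTEXT_WINDOW = 128_000   # xGen-Small's maximum context length, in tokens
CHARS_PER_TOKEN = 4        # common rule of thumb for English text

def fits_in_context(text: str, reserve_for_output: int = 1_000) -> bool:
    """Return True if `text` likely fits, leaving room for the model's reply."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserve_for_output <= CONTEXT_WINDOW

# A 500,000-character document (~125k estimated tokens) still fits:
print(fits_in_context("x" * 500_000))  # True
```

By the same estimate, a whole book-length contract can be passed to the model in one shot instead of being split into chunks that each lose the surrounding context.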
It also shows above-average performance on math and code tasks, which is rare in smaller open models.
How xGen-Small Performs Against Other Models
xGen-Small outperforms most models in its size range, especially on tasks that demand long-range memory and structured reasoning.
Salesforce xGen-Small vs Other Small LLMs
Real-World Applications
Document understanding
The model can handle long legal or financial documents. It reads and summarizes reports, contracts, or case studies end-to-end.
Software development
xGen-Small does well in code reasoning and generation. Its 9B version performs competitively on HumanEval.
Customer support automation
It can power AI assistants that remember full customer conversations—perfect for long issue threads.
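As an illustrative sketch (not Salesforce code), keeping an entire support thread in the prompt becomes practical with a 128k window; the message format below follows the common chat-template convention used by open models.

```python
# Sketch: a support assistant that keeps the full conversation in context.
# With a 128k-token window, even very long issue threads fit as-is.

def build_prompt(history: list[dict], new_message: str) -> list[dict]:
    """Append the customer's latest message to the full thread."""
    return history + [{"role": "user", "content": new_message}]

history = [
    {"role": "system", "content": "You are a support assistant."},
    {"role": "user", "content": "My order #1042 arrived damaged."},
    {"role": "assistant", "content": "Sorry to hear that! I can help."},
]
messages = build_prompt(history, "Any update on the replacement?")
print(len(messages))  # 4
```

The full `messages` list would then be fed to the model on every turn, so the assistant never "forgets" the start of a long issue thread.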
Salesforce has also tuned xGen-Small to be useful across enterprise departments, making it a great fit for professionals looking to integrate AI into operations. For deeper insight into how AI models like this operate, a foundational AI Certification is a helpful resource.
Performance Benchmarks for Salesforce xGen-Small
How to Try xGen-Small
You can try xGen-Small models on Hugging Face, where both 4B and 9B versions are available under a research license. These models are compatible with the standard Hugging Face Transformers library and work well in both cloud and local GPU setups.
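A minimal loading sketch with the Transformers library might look like the following. The model ID `Salesforce/xgen-small-9B-instruct-r` is an assumption; check the Salesforce organization page on Hugging Face for the exact repository names and license terms.

```python
# Sketch: summarizing a long document with xGen-Small via Transformers.
# The repository name below is assumed, not confirmed.

MODEL_ID = "Salesforce/xgen-small-9B-instruct-r"  # assumed repo name

def build_summary_prompt(document: str) -> str:
    """Prompt template for a summarization request."""
    return f"Summarize the following document:\n\n{document}\n\nSummary:"

def summarize(document: str, max_new_tokens: int = 256) -> str:
    # Imported here so the heavyweight dependency loads only when used.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_summary_prompt(document),
                       return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

Because of the long context window, the `document` argument here can be an entire report or contract rather than a pre-chunked excerpt.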
If you’re working with internal business data or customer content, having a basic understanding of data workflows is essential. A Data Science Certification can help you get familiar with how to manage, feed, and evaluate these models effectively.
And if you’re building products or services around AI solutions, or want to use these models in sales, support, or growth strategies, getting a Marketing and Business Certification will give you the practical edge to tie AI to outcomes.
Final Thoughts
xGen-Small fills an important gap in the AI model landscape. It offers long-context understanding, competitive benchmarks, and efficient performance in small packages. Whether you’re optimizing enterprise workflows or developing AI-powered tools, xGen-Small is a reliable option without the usual compute cost.
It’s a smart move from Salesforce, and one that could set a trend in how companies think about scaling with smaller, specialized AI.