
The Jamba 1.5 Open Model Family: The Most Powerful and Efficient Long-Context Models

AI21 Labs has introduced the Jamba 1.5 family of models, designed to revolutionize enterprise-level AI with unmatched speed, efficiency, and quality. The models, Jamba 1.5 Mini and Jamba 1.5 Large, are built on the novel SSM-Transformer architecture, providing a massive 256K-token context window, the longest among open models, along with superior long-context handling and rapid processing speeds. The Jamba models are particularly suited for enterprise applications like document analysis and Retrieval-Augmented Generation (RAG), excelling in both quality and cost efficiency.

Key Highlights:

  • Unrivaled Context Handling: Jamba 1.5 models can manage up to 256K tokens, ensuring consistent performance across long contexts. This capability is vital for complex tasks like document summarization, reducing the need for frequent data chunking and retrievals.
  • Speed and Efficiency: Both models are up to 2.5X faster in long-context processing compared to competitors. This speed is crucial for high-demand enterprise applications, ensuring that the models can scale efficiently with business needs.
  • Quality Performance: Jamba 1.5 Mini outperforms models in its size class on benchmarks like Arena Hard, while Jamba 1.5 Large surpasses even the most advanced models, including Llama 3.1 70B and 405B. These models deliver top-tier quality and speed, making them a cost-effective solution for enterprises.
  • Multilingual Capabilities: Beyond English, Jamba 1.5 models support several languages, including Spanish, French, and Arabic, among others, making them versatile for global applications.
  • Developer-Ready: The models natively support advanced features such as structured JSON output and function calling, and are available for download on platforms like Hugging Face.
  • Advanced Architecture: The SSM-Transformer design combines the quality of Transformer models with the efficiency of the Mamba state space model (SSM). This design allows Jamba 1.5 models to handle extensive contexts with a lower memory footprint, even on a single GPU.
  • Quantization Breakthrough: AI21 introduces ExpertsInt8, a novel quantization technique that reduces model size and enhances performance without sacrificing quality. This innovation enables the Jamba 1.5 Large model to fit on an 8-GPU node while maintaining its full 256K context capacity.
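
To illustrate the "Developer-Ready" point above, here is a minimal sketch of what a function-calling request to a Jamba 1.5 model might look like. It assumes an OpenAI-style chat/tools schema, which many providers follow; the exact endpoint, model identifier (`jamba-1.5-mini` here), and schema are assumptions for illustration, not AI21's confirmed API, so consult the official documentation before use.

```python
import json

# Hypothetical tool definition, modeled on the widely used OpenAI-style
# function-calling schema. The function name and fields are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",  # hypothetical example function
            "description": "Look up the shipping status of an order.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "Order identifier",
                    }
                },
                "required": ["order_id"],
            },
        },
    }
]

# A short chat transcript asking the model something the tool can answer.
messages = [
    {"role": "system", "content": "You are a support assistant."},
    {"role": "user", "content": "Where is order 8912?"},
]

# Serialize the request body that would be POSTed to a chat-completions
# endpoint; "jamba-1.5-mini" is an assumed model identifier.
request_body = json.dumps(
    {"model": "jamba-1.5-mini", "messages": messages, "tools": tools}
)
```

In a real deployment the same `tools` list could also be passed to a locally downloaded checkpoint (the models are on Hugging Face) through a serving framework that supports tool use.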

AI21 Labs' Jamba 1.5 models set a new standard in AI performance, particularly for enterprise applications where speed, efficiency, and accuracy are paramount. These models are readily available on various cloud platforms, with more integrations on the horizon, ensuring broad accessibility for developers and businesses alike.

You can read the full report in the official announcement here.