Announcing Mistral Small 3.1 24B
We're excited to announce that Mistral Small 3.1 24B is now available on Co! Building on Mistral Small 3, the new model brings improved text performance, multimodal understanding, and an expanded context window of up to 128k tokens. According to Mistral, it outperforms comparable models such as Gemma 3 and GPT-4o Mini while delivering inference speeds of 150 tokens per second.
Key Highlights
- Superior Performance: Mistral reports strong benchmark results across coding, reasoning, and general language understanding tasks
- Multimodal Capabilities: Seamlessly analyze images along with text in your prompts
- 128k Context Window: Process and reference large documents or conversations in a single prompt
- Fast Inference: Experience responsive 150 tokens/second generation speeds
- Apache 2.0 License: Flexible usage terms for commercial applications
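The throughput and context figures above translate directly into wall-clock expectations. A minimal sketch of that arithmetic (the 150 tokens/second and 128k-token figures come from the announcement; the function name and example token counts are illustrative):

```python
def generation_time_seconds(num_tokens: int, tokens_per_second: float = 150.0) -> float:
    """Estimate wall-clock time to generate num_tokens at a steady throughput."""
    return num_tokens / tokens_per_second

# A typical 500-token reply arrives in a few seconds at the advertised speed.
print(round(generation_time_seconds(500), 1))            # 3.3 (seconds)

# Filling the entire 128k context with generated output would take minutes.
print(round(generation_time_seconds(128_000) / 60, 1))   # 14.2 (minutes)
```

In practice, generation speed also depends on hardware, batching, and prompt length, so treat these as rough upper-bound estimates rather than guarantees.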
How to Use It
Mistral Small 3.1 is now available on Co's Discover page. To get started:
- Navigate to the Discover page
- Search for "Mistral Small 3.1"
- Add the model to any of your spaces, or create a new AI bot with it by clicking the "Create AI Bot" button
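Co's bot interface handles multimodal prompts for you, but if you're curious what a combined text-and-image request to a model like this conventionally looks like, here is an illustrative OpenAI-style chat payload. The model identifier, field names, and image URL are assumptions for illustration, not a documented Co API:

```python
# Illustrative only: a conventional OpenAI-style multimodal chat payload.
# The model name, message schema, and URL below are hypothetical examples.
payload = {
    "model": "mistral-small-3.1-24b",  # hypothetical model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                # Text and image parts travel together in one user message.
                {"type": "text", "text": "Summarize the chart in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}
```

The key idea is that the user message's `content` is a list of typed parts, letting you interleave text and images in a single prompt.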
Use Cases
This versatile model excels at:
- Complex code generation and debugging
- Document analysis and summarization
- Creative content creation
- Detailed reasoning tasks
- Image understanding and description