Mistral Small 4


Open-source multimodal model unifying chat, reasoning, and agentic coding in a 119B/6B-active MoE under Apache 2.0.

EN · FR · AI Assistant · Code Generation · Open Source

📘 Overview of Mistral Small 4

👉 Summary

With Mistral Small 4, French AI leader Mistral AI marks a new milestone in its open-model strategy. This generation strategically merges capabilities that were previously split across specialized models: Magistral for reasoning, Pixtral for multimodal, and Devstral for agentic coding. The result is a unified model where users no longer have to choose among conversational speed, reasoning depth, and visual understanding. Released under Apache 2.0, Mistral Small 4 fits the open philosophy that built the company's reputation. The release confirms Mistral AI's standing in the race for sovereign European models that match the technical level of US giants while granting the organizations that deploy them a measure of strategic independence.

💡 What is Mistral Small 4?

Mistral Small 4 is a hybrid open-source language model from Mistral AI. Its architecture is built on a Mixture of Experts with 128 experts and 4 active per token, totalling 119 billion parameters with only 6 billion active at inference. This approach delivers strong energy and cost efficiency without sacrificing depth. The model natively accepts text and image inputs, supports a configurable reasoning effort, and excels at agentic and coding tasks. The Apache 2.0 license allows download, modification, and deployment, including for commercial purposes.
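The parameter arithmetic above follows from top-k expert routing: each token is dispatched to only 4 of the 128 experts per MoE layer, so only a fraction of the 119B total parameters is exercised per forward pass. A minimal sketch of the gating step (the 128-expert / top-4 layout is from the description above; the routing code itself is a generic MoE illustration, not Mistral's implementation):

```python
import math
import random

NUM_EXPERTS = 128   # experts per MoE layer
TOP_K = 4           # experts activated per token

def route_token(gate_logits: list[float], top_k: int = TOP_K) -> list[tuple[int, float]]:
    """Pick the top-k experts for one token and softmax-normalize their weights."""
    top = sorted(range(len(gate_logits)), key=lambda i: gate_logits[i], reverse=True)[:top_k]
    peak = max(gate_logits[i] for i in top)  # subtract max for numerical stability
    exps = [math.exp(gate_logits[i] - peak) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
chosen = route_token(logits)  # four (expert_index, weight) pairs; weights sum to 1
```

The token's output is then the weighted sum of those four experts' outputs, which is why inference cost tracks the 6B active parameters rather than the 119B total.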

🧩 Key features

Mistral Small 4 brings together several technical advances. Its Mixture of Experts architecture scales the parameter count without inflating inference cost, making the model affordable to deploy on reasonable infrastructure. Native multimodality removes the need for a separate vision model when processing text and images together. Configurable reasoning effort lets teams trade off speed against depth depending on the task. The agentic coding specialization inherited from Devstral makes it an excellent engine for developer copilots and autonomous agents. The model ships via Le Chat, La Plateforme, and the official API, but is also downloadable for on-premise deployments. Mistral AI joined the NVIDIA Nemotron Coalition as a founding member, securing the GPU optimization ecosystem around the model.
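The configurable reasoning effort would typically surface as a request parameter on the API. A minimal sketch of assembling a chat request payload, assuming an OpenAI-style chat-completions schema; the model identifier and the `reasoning_effort` field name are illustrative assumptions to verify against the official API reference, not confirmed values:

```python
def build_chat_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble a chat-completions payload. The model name and the
    `reasoning_effort` field are illustrative assumptions, not
    confirmed API fields -- check the official documentation."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "mistral-small-4",           # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,           # assumed parameter name
        "max_tokens": 1024,
    }

payload = build_chat_request("Summarize this changelog.", effort="high")
```

Keeping effort at "low" for routine chat and raising it for hard problems is the speed-versus-depth trade-off the section describes.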

🚀 Use cases

Mistral Small 4 covers a broad range of scenarios. Developers integrate it in code copilots and autonomous agents capable of executing complex technical tasks. Data science teams use it for structured reasoning, document analysis, and information extraction across large corpora. Customer support teams deploy multimodal assistants able to read screenshots or scanned documents thanks to native vision. Sovereignty-focused organizations in public, finance, and healthcare adopt it to run AI on-premise under an open license. Startups build products on the Mistral API to combine cost control with model quality. Researchers benefit from the model's accessibility to evaluate new alignment, fine-tuning, and quantization techniques.
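Native vision means a support screenshot can travel in the same chat request as the text around it. A hedged sketch of building a mixed text-plus-image user message, using the content-parts shape common to multimodal chat APIs; the exact field names for Mistral's API are an assumption to check against its documentation:

```python
import base64

def image_message(text: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build one user message mixing text and an inline base64 image.
    The `image_url` content-part shape mirrors common multimodal chat
    APIs; confirm the exact schema in the official docs."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": f"data:{mime};base64,{b64}"},
        ],
    }

msg = image_message("What error does this screenshot show?", b"\x89PNG...")
```

The same message list then goes through the ordinary chat endpoint, with no separate vision model to route to.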

🤝 Benefits

Mistral Small 4's main benefit is the convergence of three model families into one. This unification simplifies AI application architecture by eliminating routing between specialized models. The Apache 2.0 license unlocks strategic independence for organizations that don't want to depend on proprietary APIs. The MoE efficiency keeps inference costs reasonable while delivering high quality output. The maturity of the Mistral ecosystem, with La Plateforme, the API, and Le Chat, offers multiple onboarding paths depending on team expertise. Finally, the European origin remains a decisive argument for actors focused on GDPR compliance and digital sovereignty.

💰 Pricing

Mistral Small 4 is freely available for download under Apache 2.0, allowing on-premise deployment without licensing fees. API access via the Mistral platform is billed by token, with competitive pricing aligned with the model's efficiency. Le Chat offers a free tier with quotas for evaluation, plus paid plans for heavy usage. La Plateforme bundles monitoring and fine-tuning tools for professional teams. Overall, Mistral Small 4 offers excellent pricing flexibility, from free on-premise deployment to serverless API usage and enterprise plans for high-volume operators.
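Per-token billing makes cost forecasting a simple multiplication. A toy estimator with placeholder per-million-token rates; the figures below are illustrative only, not Mistral's published prices:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate: float = 0.10, out_rate: float = 0.30) -> float:
    """Cost in USD given per-million-token rates. The default rates are
    placeholders, NOT actual Mistral pricing -- check the pricing page."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. a 50k-token document summarized into 2k output tokens
cost = estimate_cost(50_000, 2_000)
```

The same arithmetic, run against on-premise GPU amortization instead of API rates, is how teams compare the free-download path with serverless usage.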

📌 Conclusion

Mistral Small 4 confirms Mistral AI's strategy: ship open, performant, and technically differentiated models. Its multimodal versatility and configurable reasoning make it a strong fit for many use cases, from developer copilots to multimodal assistants. The Apache 2.0 license and on-premise deployability appeal to European actors and sovereignty-focused organizations. For technical teams seeking a flexible, high-performance, and vendor-independent model, Mistral Small 4 stands as a major reference in today's AI landscape.

⚠️ Disclosure: some links are affiliate links (no impact on your price).