French AI startup Mistral has released its first generative AI models designed to be run on edge devices, like laptops and phones.
The new family of models, which Mistral is calling “Les Ministraux,” can be used or tuned for a variety of applications, from text generation to working in conjunction with more capable models to complete tasks.
There are two Les Ministraux models available — Ministral 3B and Ministral 8B — both of which have a context window of 128,000 tokens, meaning they can ingest roughly the length of a 50-page book.
“Our most innovative customers and partners have increasingly been asking for local, privacy-first inference for critical applications such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics,” Mistral writes in a blog post. “Les Ministraux were built to provide a compute-efficient and low-latency solution for these scenarios.”
The Les Ministraux models are available for download — but only for research purposes. More precisely, Mistral is only allowing developers to use Ministral 8B for research; Ministral 3B requires a commercial license.
Moreover, Mistral is requiring developers and companies that want to self-deploy Les Ministraux to contact it for a commercial license.
Otherwise, developers can use Ministral 3B and Ministral 8B on Mistral's cloud platform, and "shortly" on other clouds with which the company has partnered, it says. Ministral 8B costs 10 cents per million input/output tokens (~750,000 words), while Ministral 3B costs 4 cents per million input/output tokens.
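As a back-of-the-envelope illustration of those published rates, a quick sketch of what a request might cost (the per-million rates come from the article; the token counts and function below are hypothetical examples, not Mistral figures or Mistral's API):

```python
# Rough cost estimate based on the per-million-token rates quoted above.
# Rates: Ministral 8B at $0.10 per million tokens, Ministral 3B at $0.04.
# The token counts below are hypothetical, chosen only for illustration.

RATES_USD_PER_MILLION = {
    "ministral-8b": 0.10,
    "ministral-3b": 0.04,
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request, counting input and output tokens together."""
    rate = RATES_USD_PER_MILLION[model]
    return (input_tokens + output_tokens) / 1_000_000 * rate

# Example: a 100,000-token prompt plus a 20,000-token reply on each model.
print(f"{estimate_cost('ministral-8b', 100_000, 20_000):.4f}")  # 0.0120
print(f"{estimate_cost('ministral-3b', 100_000, 20_000):.4f}")  # 0.0048
```

Even a prompt that nearly fills the 128,000-token context window comes out to about a penny on the larger model at these rates.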