Published: Apr 03, 2024
Assess

Mixtral is part of the family of open-weight large language models released by Mistral that utilize the sparse mixture-of-experts architecture. The models are available in both raw pretrained and fine-tuned forms, in 7B and 8x7B parameter sizes. Their sizes, open-weight nature, benchmark performance and context length of 32,000 tokens make them a compelling option for self-hosted LLMs. Note that these open-weight models are not tuned for safety out of the box, so users need to refine moderation based on their own use cases. We have experience with this family of models in developing Aalap, a fine-tuned Mistral 7B model trained on data related to specific Indian legal tasks, which has performed reasonably well at an affordable cost.
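As an illustration of what self-hosting might look like, below is a minimal sketch using the Hugging Face transformers library with the publicly available mistralai/Mixtral-8x7B-Instruct-v0.1 checkpoint. The library, model ID, prompt and generation parameters are assumptions for illustration only and are not part of this blip.

```python
# Minimal sketch of self-hosting Mixtral with Hugging Face transformers.
# Model ID and parameters are assumptions for illustration, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # fine-tuned (instruct) variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduces memory; the 8x7B model still needs substantial GPU capacity
    device_map="auto",           # shard layers across available GPUs
)

# Example prompt; the open-weight model has no safety tuning out of the box,
# so moderation and guardrails must be added around this call in production.
messages = [{"role": "user", "content": "Summarise the key terms of this contract."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```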
