Published: Apr 03, 2024
Assess: Worth exploring with the goal of understanding how it will affect your enterprise.

We continue to caution against rushing to fine-tune large language models (LLMs) unless it's absolutely critical, since it comes with significant overhead in cost and expertise. However, we think LLaMA-Factory can be useful when fine-tuning is needed. It's an open-source, easy-to-use fine-tuning and training framework for LLMs. With support for LLaMA, BLOOM, Mistral, Baichuan, Qwen and ChatGLM, it makes a complex concept like fine-tuning relatively accessible. Our teams successfully used LLaMA-Factory's LoRA tuning for a LLaMA 7B model, so if you need fine-tuning, this framework is worth assessing.
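
For context on what LoRA tuning involves under the hood, here is a minimal sketch using Hugging Face PEFT, the parameter-efficient fine-tuning library LLaMA-Factory builds on. The checkpoint name, rank and target modules below are illustrative assumptions rather than LLaMA-Factory defaults; in practice the framework drives this kind of setup through its own CLI and configuration files.

```python
# Minimal LoRA sketch with Hugging Face PEFT; checkpoint and hyperparameters
# are assumptions for illustration, not LLaMA-Factory defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed 7B base checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices on selected projection layers
# instead of updating all 7B base weights.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

The adapter weights produced this way are small enough to store and deploy separately from the base model, which is part of what keeps LoRA-based fine-tuning comparatively cheap.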
