Ep 35. Don't Fine Tune Your LLMs

You're building a solution with large-language models and want to optimize performance on your data. So you need to fine-tune the model, right? Probably not.

Welcome to Episode 35 of Prolego's Generative AI Series.

🔍 What’s Inside This Episode?

  • We discuss fine-tuning in the context of LLM optimization and explain why you should avoid rushing into it.
  • We also cover the limitations and potential pitfalls of fine-tuning and recommend alternative optimization techniques from our LLM Optimization Playbook.


Let’s Future Proof Your Business.