LinuxDays 2025

Fine-tuning LLMs on a budget
04.10.2025, 345
Language: English

In this workshop, attendees will learn how to fine-tune LLMs on minimal hardware, even without a GPU.

With a simple script built around Hugging Face libraries and a connection to an existing LLM server, it is possible to bootstrap the fine-tuning process and achieve interesting results.
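
As a hedged illustration of that connection, the sketch below asks an existing OpenAI-compatible LLM server to turn a document into question/answer pairs that can later serve as fine-tuning data; the base URL, model name, prompt, and file names are placeholders, not the workshop's actual code.

```python
# Minimal sketch: ask an existing OpenAI-compatible LLM server (e.g. a local
# vLLM or llama.cpp instance) to produce question/answer pairs from a document.
# base_url, model name, and file paths are placeholder assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def make_qa_pairs(document: str, n_pairs: int = 5) -> list[dict]:
    """Ask the server for n_pairs question/answer pairs as a JSON list."""
    prompt = (
        f"Read the following text and write {n_pairs} question/answer pairs about it.\n"
        'Answer only with a JSON list of objects like {"question": ..., "answer": ...}.\n\n'
        f"{document}"
    )
    response = client.chat.completions.create(
        model="your-model-name",  # whatever model the server exposes
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    with open("docs/example.txt") as f:
        pairs = make_qa_pairs(f.read())
    with open("dataset.jsonl", "w") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")
```

In practice the model's reply may need light cleanup before it parses as JSON, but the shape of the loop stays the same.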


Since the release of ChatGPT, the use of LLMs has exploded across all sectors of society.

However, the process of creating an LLM, or even adjusting one to a specific use case, has remained obscure, even to experts in other areas of our field.

This workshop will guide attendees through using open source software to take an existing model and fine-tune it with domain-specific knowledge.

Attendees will have the opportunity to:

  1. (Optional) Build a question-and-answer dataset from documents using the API of an existing LLM service (a sketch of this step appears above).
  2. Use parameter-efficient fine-tuning to instill new knowledge into the model (see the sketch after this list).
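
As a hedged illustration of the second step, the sketch below uses the Hugging Face transformers, peft, and datasets libraries to attach LoRA adapters to a small base model and train them on the generated question/answer pairs; the model name, hyperparameters, and dataset.jsonl path are assumptions chosen to fit a CPU-only machine, not the workshop's actual script.

```python
# Minimal LoRA fine-tuning sketch (not the workshop's exact script).
# Assumptions: a small Llama-style base model and a dataset.jsonl file of
# {"question": ..., "answer": ...} pairs produced in the previous step.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "HuggingFaceTB/SmolLM2-135M"  # placeholder: any small causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters; only these small matrices train.
# target_modules depends on the architecture (these names fit Llama-style models).
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

def to_text(example):
    # Turn a question/answer pair into one plain training string.
    return {"text": f"Question: {example['question']}\nAnswer: {example['answer']}"}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = load_dataset("json", data_files="dataset.jsonl", split="train")
dataset = dataset.map(to_text)
dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           num_train_epochs=3, learning_rate=2e-4, use_cpu=True),
    train_dataset=dataset,
    # mlm=False => standard next-token (causal) language modelling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

Because only the low-rank adapter matrices are trained while the base model stays frozen, memory use and compute stay modest, which is what makes fine-tuning without a GPU practical for small models.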

Difficulty:

Intermediate

I have been interested in AI for a long time and have worked on language model applications before, although not the "large" ones.

I have been working as an engineer at Red Hat on the OpenStack project for the last four years, and I have recently joined Log Detective. I also collaborate with Mendel University on projects in agricultural automation and plant diagnostics using acoustic emission.

In my free time I am a fan of museums, obscure literature and strategy games.

