
I like to think of this like fine-tuning LLMs. When you fine-tune an LLM, it doesn't pick up and memorise all of the training data verbatim. Rather, it adjusts its weights (its "perspectives") based on the content it was trained on/read.
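A minimal sketch of that idea, using a hypothetical one-parameter model in pure Python (not a real LLM, just the gradient-descent mechanic): fine-tuning nudges an existing weight toward the new data; the training examples themselves are never stored in the model.

```python
def fine_tune(weight, data, lr=0.1, epochs=5):
    """SGD on a one-parameter linear model y = weight * x, squared-error loss."""
    for _ in range(epochs):
        for x, y in data:
            pred = weight * x
            grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
            weight -= lr * grad
    return weight

pretrained = 1.0                       # the "base model": a single weight
data = [(1.0, 2.0), (2.0, 4.0)]        # new task: y = 2x

tuned = fine_tune(pretrained, data)

# The fine-tuned model is still just one number -- a shifted weight,
# not a copy of the training pairs.
print(tuned)
```

After a few epochs the weight drifts from 1.0 toward 2.0; nothing about the individual (x, y) pairs survives except their influence on that one parameter, which is the point of the analogy.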


