
Since building this I figured out a way to calculate embeddings using a model that runs directly on my own hardware - notes here: https://til.simonwillison.net/python/gtr-t5-large
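A minimal sketch of what computing embeddings locally with sentence-transformers looks like (the model name comes from the linked TIL; the helper names here are my own, not the author's code — running `embed` downloads the model weights on first use):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(texts):
    """Compute embeddings on local hardware with sentence-transformers."""
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("sentence-transformers/gtr-t5-large")
    return model.encode(texts)  # one fixed-length vector per input string

# Example usage (requires the model download):
#   vecs = embed(["my happy dog", "my cheerful hound"])
#   print(cosine_similarity(vecs[0], vecs[1]))
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
```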



There’s a good leaderboard for embedding-focused models here: https://huggingface.co/spaces/mteb/leaderboard. Some of the sentence-transformers models are even smaller than gtr-t5 while still having very close performance for some use cases.

I’m working on making them 2-4x smaller with SmoothQuant int8 quantization, and hoping to release standalone C++ and Rust implementations optimized for CPU inference.
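For intuition, here is a simplified sketch of plain symmetric int8 weight quantization — the basic mechanism behind the 2-4x size reduction. SmoothQuant additionally smooths activation outliers before quantizing; everything below is an illustrative assumption, not the commenter's implementation:

```python
import numpy as np

def quantize_int8(weights):
    """Per-row symmetric quantization: float32 -> int8 plus one scale per row."""
    scales = np.abs(weights).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(weights / scales), -127, 127).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Recover approximate float32 weights from int8 values and scales."""
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scales = quantize_int8(w)
print(w.nbytes, q.nbytes)                   # int8 storage is 4x smaller than float32
print(np.abs(dequantize(q, scales) - w).max())  # small round-trip error
```

The per-row scales keep one large outlier from crushing the precision of every other row, which is the same problem SmoothQuant attacks at the activation level.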



