There’s a good leaderboard for embedding-focused models here: https://huggingface.co/spaces/mteb/leaderboard. Some of the sentence-transformers models are even smaller than gtr-t5 while coming very close in performance for some use cases.
I’m working on making them 2-4x smaller with SmoothQuant int8 quantization, and I’m hoping to release a standalone C++ or Rust implementation optimized for CPU inference.
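To give a feel for what SmoothQuant does, here’s a minimal numpy sketch (not the actual release, just an illustration of the idea): activation outlier channels make naive per-tensor int8 quantization lossy, so SmoothQuant migrates those outliers into the weights with a per-channel scale s_j = max|X_j|^a / max|W_j|^(1-a) before quantizing. The toy matrices and alpha=0.5 below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations with one outlier channel -- the case SmoothQuant targets.
X = rng.normal(size=(64, 8))
X[:, 0] *= 50.0                      # exaggerated activation outliers
W = rng.normal(size=(8, 4))

def quantize_int8(t):
    """Symmetric per-tensor int8 fake-quantization (quantize + dequantize)."""
    scale = np.max(np.abs(t)) / 127.0
    q = np.clip(np.round(t / scale), -127, 127)
    return q * scale

# SmoothQuant smoothing: move activation outliers into the weights.
# s_j = max|X_j|^alpha / max|W_j|^(1 - alpha), alpha = 0.5 here.
alpha = 0.5
s = (np.max(np.abs(X), axis=0) ** alpha) / (np.max(np.abs(W), axis=1) ** (1 - alpha))
X_smooth = X / s                     # X @ diag(1/s)
W_smooth = W * s[:, None]            # diag(s) @ W  -> product X @ W is unchanged

ref = X @ W
naive = quantize_int8(X) @ quantize_int8(W)
smooth = quantize_int8(X_smooth) @ quantize_int8(W_smooth)

err_naive = np.abs(naive - ref).mean()
err_smooth = np.abs(smooth - ref).mean()
print(f"naive int8 error:  {err_naive:.4f}")
print(f"smooth int8 error: {err_smooth:.4f}")
```

The smoothed version keeps the exact same matmul result in full precision, but quantizes with much lower error because neither X_smooth nor W_smooth has extreme per-channel ranges.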