Interestingly enough, much simpler models can write an accurate function to give you the answer.
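For example (the original question isn't quoted in this thread, so take exact big-integer multiplication, a classic failure mode for LLM mental arithmetic, as a hypothetical stand-in), even a small model can reliably emit something like:

```python
# Hypothetical example of the kind of function a small model can write:
# Python ints are arbitrary precision, so the result is digit-exact,
# unlike a model predicting the digits token by token.
def multiply(a: int, b: int) -> int:
    return a * b

print(multiply(123456789, 987654321))  # 121932631112635269
```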
I think it will be a while before we get there. An LLM can look up knowledge, but it can't actually perform calculations itself without some external processor.
Why do we have to "get there"? Humans use calculators all the time, so why not have every LLM hooked up to a calculator or code interpreter as a tool to use in exactly these situations?
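For concreteness, here's a minimal sketch of that pattern. Everything in it is hypothetical: call_model stands in for whatever chat/tool-calling API a provider exposes (scripted below so the example runs on its own), and calculator is a small tool that evaluates arithmetic safely outside the model.

```python
import ast
import operator

# A small, safe calculator tool: parses arithmetic into an AST and
# evaluates only number literals and basic operators (no eval()).
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
    ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculator(expression: str):
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

def call_model(messages):
    # Placeholder for a real model call. Scripted here: on the user turn
    # it requests the tool; once it sees the tool result, it answers.
    if messages[-1]["role"] == "user":
        return {"tool": "calculator", "input": "123456789 * 987654321"}
    return {"answer": f"The product is {messages[-1]['content']}."}

def run(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = call_model(messages)
        if "tool" in reply:                      # model asked for the tool
            result = calculator(reply["input"])  # computed outside the model
            messages.append({"role": "tool", "content": str(result)})
        else:
            return reply["answer"]

print(run("What is 123456789 * 987654321?"))
# -> The product is 121932631112635269.
```

In a real deployment the model decides when to emit the tool call, but the key point is the same: the arithmetic happens in ordinary code, not in the model's weights.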