
> do they all eventually come to value precision over accuracy?

There are some issues to be aware of here. It isn't possible to do the reverse - to value accuracy over precision - because your accuracy is limited by your precision.

This is a theorem in psychometrics, where the phrasing is that the validity of an assessment (the degree to which it accurately measures some quantity of interest to you) is bounded by the reliability (the degree to which, if you assess the same thing twice, you get similar estimates).
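A quick simulation makes the bound concrete (a sketch, not from the thread; the setup assumes the classical test-theory model where an observed score is a true score plus independent noise, and "reliability" is the correlation between two parallel administrations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_score = rng.normal(size=n)

# Two parallel administrations of the same noisy test.
noise_sd = 1.0
test_a = true_score + rng.normal(scale=noise_sd, size=n)
test_b = true_score + rng.normal(scale=noise_sd, size=n)

reliability = np.corrcoef(test_a, test_b)[0, 1]   # test-retest correlation
validity = np.corrcoef(test_a, true_score)[0, 1]  # correlation with the truth

# Classical test theory bounds validity by sqrt(reliability),
# so an unreliable test cannot be a valid one.
print(f"reliability={reliability:.3f}, "
      f"validity={validity:.3f}, "
      f"bound={np.sqrt(reliability):.3f}")
```

With equal true-score and noise variance, the simulated reliability comes out near 0.5 and the validity near sqrt(0.5), sitting right at the bound.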

If you find that your assessments aren't accurate enough, you can only fix that by increasing your precision, so the concept of valuing precision over accuracy is a little bit weird.




You can definitely value precision over accuracy; I see it all the time in my place of work.

As an accountant, the timeliness of my work is scrutinized far more than its accuracy. I'm often not given enough time to do the work properly, and if I took an extra day or two to get it right I would be penalized, whereas if I do it wrong but on time, usually no one cares or notices.


How is that relevant to the discussion? Where does precision figure in your example?


> It isn't possible to do the reverse - to value accuracy over precision - because your accuracy is limited by your precision.

Here's what I believe to be a counter-example: always use "Eastern" when communicating a time if you're not absolutely certain whether it's EDT or EST.

Being precise but wrong is often worse than being accurate but imprecise.


Isn't this an issue of only having one way to measure things? It's like the mathematical difference between numeric methods that get you closer to something (e.g. 10/3 = 3.3333...) vs analytical/symbolic methods that represent values in different ways (e.g. 10/3 = 3⅓).
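The numeric-vs-symbolic distinction above can be shown in a few lines of Python (an illustrative sketch using the standard-library `fractions` module, not something from the thread):

```python
from fractions import Fraction

numeric = 10 / 3            # floating-point approximation, finite precision
symbolic = Fraction(10, 3)  # exact rational representation

# The float is only an approximation of 10/3; converting it back
# to a Fraction recovers the nearest representable binary value,
# not the original rational number.
print(numeric)                        # a long decimal near 3.3333
print(symbolic)                       # 10/3, exactly
print(Fraction(numeric) == symbolic)  # the float lost information
```

The symbolic value survives round trips exactly (`Fraction(10, 3) * 3 == 10`), while the float accumulates representation error.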


I can't really tell what you're saying. But the idea is very straightforward:

The validity of an assessment ("accuracy" in the parent comment's terminology) measures how closely the result of the assessment corresponds to reality. For your assessment to be useful, you want this correspondence to be tight, so that when you make your measurement, you get a result that is close to the truth.

Reliability ("precision") measures how closely the results of one measurement correspond to those of another measurement. It's possible to have a reliable test with low validity. All results from that test would be tightly clustered, similar to each other, but not indicative of whatever you're actually interested in.

It isn't possible to have an unreliable test with high validity. The unreliability of this impossible test would mean that its results were spread out, dissimilar to each other. But the high validity would mean that the test results clustered around the true value of whatever it is you're measuring.

Of course, if a set of values are all constrained to be close to a particular fixed value ("the truth"), they are also constrained to be close to each other. This is why reliability is necessary for validity.
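The clustering argument above is just the bias-variance decomposition of mean squared error: the average squared distance from the truth splits into spread-around-the-mean plus squared bias, so the disagreement among results can never exceed their total error. A small sketch (illustrative numbers only):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0

# A reliable but invalid test: tightly clustered, far from the truth.
results = rng.normal(loc=110.0, scale=1.0, size=10_000)

spread = np.mean((results - results.mean()) ** 2)  # disagreement among results
error = np.mean((results - truth) ** 2)            # distance from the truth

# error = spread + bias^2, so spread <= error always holds:
# a test can be precise without being accurate, but a test whose
# results all land near the truth is necessarily precise.
assert spread <= error
print(f"spread={spread:.2f}, error={error:.2f}")
```

Here the spread is about 1 while the error is about 101: high reliability, terrible validity. The reverse configuration, error smaller than spread, is arithmetically impossible.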



