Showing posts with label Bayesianism.

Sunday, September 26, 2010

Varieties of Evidence Redux

About a year ago, I posted three blog posts here, arguing that scientific evidence serves a more complex and dynamic set of functions in scientific inquiry than simply supporting hypotheses.  I've finally managed to work the idea out in a form that I'm satisfied with:

The Functional Complexity of Scientific Evidence (Draft)

I'm especially indebted to the commenters on this blog for the content of section 6, including Thomas Basbøll, Greg Frost-Arnold, Gabriele Contessa, and Eric Winsberg.  (I hope I've given appropriate credit where credit is due there.  I was a bit stymied about how exactly to refer to a conversation we had on the blog, and so made the acknowledgments there fairly general.  Advice on that point is welcome.)

I hope I've managed to present it compellingly and answer the objections satisfactorily, even though I'm sure many traditionalists won't be convinced.  The goal of this paper is to motivate the need for a more complex, functionalist, dynamic model of evidence in contrast with the oversimplified traditional model, to set out such a model in detail, to illustrate it with an example, and to reply to some basic objections.  I've got a second paper in progress which applies the basic framework to a variety of problems of evidence, from theory-ladenness and the experimenter's regress to "evidence for use" and evidence-based public policy.  My central claim there is that this apparently diverse set of problems shares a set of assumptions, and that the strongest way to solve them all is to adopt the dynamic evidential functionalism that I've laid out in this first paper.

One reason that I needed to whip this paper into shape is that I'm presenting on the topic of the sequel at the Pitt workshop on scientific experimentation.  Getting this in final form is part of finishing up that paper.  The working title there is "From the Experimenter’s Regress to Evidence-Based Policy: The Functional Complexity of Scientific Evidence."

If anyone gets a chance to look at the paper, I'd appreciate any comments, here or via email. 

Monday, June 28, 2010

Cool interview with TiLPS

Jan Willem Romeyn (Groningen) has a very nice interview with the folks at TiLPS (Tilburg) in the latest issue of The Reasoner:
http://www.thereasoner.org/
The interview very nicely showcases one of the leading groups in philosophy of science in the Low Countries.
My only (extremely minor) kvetch is that I would have loved to see a bit more critical examination by Jan Willem of the choice of (and justification for) Bayesianism as the preferred "general [or "theoretical"] framework" at TiLPS.

Tuesday, October 6, 2009

The Varieties of Evidence

One of the things that seems to me to distort a lot of the discussions of evidence in philosophy of science (and related areas of epistemology) is an over-simplification of the role of evidence in science and inquiry. In particular, many accounts tend to treat evidence as mono-functional, where the only important function evidence plays is support. The nature of that support relation might vary from account to account (Bayesianism, falsificationism, inductivism, etc.), but many accounts agree that this is how evidence should be understood.

In contrast, I think we can point to a number of equally essential roles that evidence plays in inquiry. When first setting out to investigate the problem, early observational evidence can help locate or specify the problem. When you have an outbreak of disease, or an unexpected astronomical event, you first have to gather as much evidence about the nature of the problem as possible, before you can pose hypotheses or explanations for testing / support. Gathering evidence can also actually suggest hypotheses. A first look at the evidence suggests that this problem might be best analyzed by Fourier analysis, or a simple retrospective study design, or spectral analysis, etc., or by hypothesizing that the disease is malaria, that the new object in our telescope is a type of quasar, etc. Gathering evidence can also help with the elaboration of a hypothesis, specifying, clarifying, or improving it. And not only can it provide support for a hypothesis by testing or confirming its predictions, but experimental testing can be understood as a type of testing by application. If we understand experiment as a kind of intervention on the basis of a theory or hypothesis, then it is really a type of application of the hypothesis to some situation (often a highly controlled one).

So, to sum up, a (probably partial) list of the various functions of evidence: locating the problem, suggesting a hypothesis, elaborating the hypothesis, supporting the hypothesis, and testing it by application. Probably I should say something about helping specify initial conditions, too, though I'm not sure everyone would be willing to call that "evidence."

One way that idealizing evidence as mono-functional (focusing exclusively on the support relation) might go wrong is the tendency to worry overmuch about the independent status of evidence; i.e., if evidence is to be judged solely by its suitability for providing firm support, then all of the problems of the "empirical basis" start to rear their ugly heads. I suspect that once we have a more complex picture of the functions of evidence, we can use it to develop a multi-scale analysis of the functional fitness of that evidence, which gives us a way of assessing its adequacy to stand as evidence.

Friday, July 17, 2009

Does Bayes' Theorem Have Any Special Epistemological Significance?

Bayesian epistemologists seem to think that Bayes' theorem (BT) has some special epistemological significance. Let's assume that BT provides us with a synchronic constraint on the coherence of one's degrees of belief (it tells us that whatever one's degrees of belief in H, E, H given E, and E given H are at a time t, they have to be related so that Pr(H|E)t = (Pr(H)t Pr(E|H)t)/Pr(E)t), and that synchronic coherence is a necessary but not sufficient condition for epistemic rationality. So far nothing epistemologically special about BT--every other theorem or axiom of probability theory also provides us with such a synchronic constraint.
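To make the point concrete, here is a minimal numerical sketch (the joint probabilities are arbitrary, chosen only for illustration): given any single-time assignment of credences derived from a joint distribution over H and E, Bayes' theorem holds automatically, as an identity of that assignment, just like any other theorem of the probability calculus.

```python
# A joint credence distribution over (H, E) at a single time t.
# The values are arbitrary illustrative numbers summing to 1.
joint = {
    (True, True): 0.30,   # Pr(H & E)
    (True, False): 0.20,  # Pr(H & ~E)
    (False, True): 0.10,  # Pr(~H & E)
    (False, False): 0.40, # Pr(~H & ~E)
}

# Marginal and conditional credences induced by the joint distribution:
p_H = joint[(True, True)] + joint[(True, False)]   # Pr(H)   = 0.5
p_E = joint[(True, True)] + joint[(False, True)]   # Pr(E)   = 0.4
p_H_given_E = joint[(True, True)] / p_E            # Pr(H|E) = 0.75
p_E_given_H = joint[(True, True)] / p_H            # Pr(E|H) = 0.6

# Bayes' theorem holds as an identity of this single-time assignment;
# there is nothing extra to "obey" beyond coherence itself.
bt_rhs = p_H * p_E_given_H / p_E
assert abs(p_H_given_E - bt_rhs) < 1e-12
print(round(p_H_given_E, 12), round(bt_rhs, 12))
```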

Supposedly, however, BT does more than just that--it also tells us how to "conditionalize on new evidence". What I don't understand is how it is supposed to do so. As far as I can see, the theorem only tells us that the conditional probability of H given E, Pr(H|E)t, is equal to (Pr(H)t Pr(E|H)t)/Pr(E)t, but this is only the old synchronic constraint again. It is only if we assume that, in observing that E, our degree of belief in H (Pr(H)t+1) becomes identical to our previous degree of belief in H given E (Pr(H|E)t) (i.e., if we assume that Pr(E)t+1 = 1 and Pr(H|E)t+1 = Pr(H|E)t) that we can use BT to find out what that degree of belief was equal to. But if this is the case, BT in and of itself does not tell us what our degree of belief in H should be after the evidence is in. It only tells us what our conditional degree of belief Pr(H|E)t had to be before the new evidence was in.
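A small sketch of the distinction being drawn here (again with arbitrary illustrative numbers): BT, applied wholly at time t, fixes the conditional credence Pr(H|E)t, while the step from there to the new unconditional credence Pr(H)t+1 is a separate diachronic rule (conditionalization) that has to be assumed on top of BT.

```python
# Time-t credences (arbitrary but coherent illustrative numbers):
p_H_t = 0.5          # Pr(H)t
p_E_given_H_t = 0.6  # Pr(E|H)t
p_E_t = 0.4          # Pr(E)t

# Bayes' theorem, applied entirely at time t, yields the conditional
# credence -- this is still just the synchronic constraint:
p_H_given_E_t = p_H_t * p_E_given_H_t / p_E_t  # Pr(H|E)t = 0.75

# The diachronic step is NOT part of BT: it is the separate rule of
# conditionalization, setting the new unconditional credence equal to
# the old conditional one upon learning E (and nothing else):
p_H_t1 = p_H_given_E_t  # Pr(H)t+1 := Pr(H|E)t
print(round(p_H_t1, 12))
```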

Can someone please show me the error of my ways? Why do Bayesian epistemologists assume that BT plays any different role from that of the other axioms of probability? In what sense is it providing us with anything other than a synchronic constraint on our degrees of belief?

Wednesday, April 8, 2009

Bayesians and Probability One

It is often said that a Bayesian agent should not assign probability one to a proposition unless it is a logical truth. However, this principle is often appealed to without any reference or argument. I guess this is because people take the principle to be so self-evident that it doesn't need any support, but can anyone point me to any "standard" references or discussions of the principle?
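Not a reference, but for what it's worth, one motivation often given informally for the principle is a two-line piece of arithmetic: a credence of one is immune to revision by conditionalization, since if Pr(H) = 1 then Pr(H & E) = Pr(E) for any E with Pr(E) > 0, so Pr(H|E) = 1 no matter what E is.

```python
# Sketch of the informal motivation (illustrative numbers, not a citation):
# a probability-one credence can never be lowered by conditionalizing.
p_H = 1.0
p_E = 0.2        # any evidence proposition with positive probability
p_H_and_E = p_E  # forced: Pr(H & E) = Pr(E) whenever Pr(H) = 1

p_H_given_E = p_H_and_E / p_E
print(p_H_given_E)  # 1.0 -- no possible evidence can dislodge the credence
```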