
Thursday, September 30, 2010

Hacking and Franklin on the Functional Complexity of Evidence

Since posting my paper here a few days ago, I've happened to come across two fabulous statements related to my position. Of course, just when you start to think you're doing something a little bit original, you come across all kinds of people saying basically the same thing.

Ian Hacking, on the first page of the monumental "Experimentation and Scientific Realism":
Experiments, the philosophers say, are of value only when they test theory. . . So we lack even a terminology to describe the many varied roles of experiment.  (Hacking 1982, p. 71)
And Allan Franklin, on the first page of his Selectivity and Discord:
Experiment plays many roles in science.  One of its important roles is to test theories and provide the basis for scientific knowledge.  It can also call for a new theory. . . Experiment can provide hints about the structure or mathematical form of a theory, and it can provide evidence for the existence of the entities involved in our theory. . . it may also have a life of its own, independent of theory: Scientists may investigate a phenomenon just because it looks interesting. Such experiments may provide evidence for future theories to explain. (Franklin 2002, p. 1)
It is a nice surprise to find myself in such good company.  The aim of my paper, of course, is to try to provide a coherent picture of and some terminology for the various roles of evidence.  One of the points that I make in the paper, which I'm not sure Hacking or Franklin would accept, is that there is a useful (functional) distinction to be drawn between observational and experimental evidence.  I suspect they might even say that I leave some roles out of my picture.

Sunday, September 26, 2010

Varieties of Evidence Redux

About a year ago, I posted three blog posts here, arguing that scientific evidence serves a more complex and dynamic set of functions in scientific inquiry than simply supporting hypotheses.  I've finally managed to work the idea out in a form that I'm satisfied with:

The Functional Complexity of Scientific Evidence (Draft)

I'm especially indebted to the commenters on this blog for the content of section 6, including Thomas Basbøll, Greg Frost-Arnold, Gabriele Contessa, and Eric Winsberg.  (I hope I've given appropriate credit where credit is due there.  I was a bit stymied about how exactly to refer to a conversation we had on the blog, and so made the acknowledgments there fairly general.  Advice on that point is welcome.)

I hope I've managed to present it in a compelling way and answer the objections satisfactorily, even though I'm sure many traditionalists won't be convinced.  The goal of this paper is to motivate the need for a more complex, functionalist, dynamic model of evidence in contrast with the oversimplification of the traditional model, to set out such a model in detail, to illustrate it with an example, and to reply to some basic objections.  I've got a second paper in progress that applies the basic framework to a variety of problems of evidence, from theory-ladenness and the experimenter's regress to "evidence for use" and evidence-based public policy.  My central claim there is that this apparently diverse set of problems shares a common set of assumptions, and that the strongest way to solve them all is to adopt the dynamic evidential functionalism that I've laid out in this first paper.

One reason that I needed to whip this paper into shape is that I'm presenting on the topic of the sequel at the Pitt workshop on scientific experimentation.  Getting this in final form is part of finishing up that paper.  The working title there is "From the Experimenter’s Regress to Evidence-Based Policy: The Functional Complexity of Scientific Evidence."

If anyone gets a chance to look at the paper, I'd appreciate any comments, here or via email. 

Tuesday, August 18, 2009

Should Scientific Methods and Data be Public?

At the last Eastern APA meeting in Philly, I attended an excellent session on The Epistemology of Experimental Practices, with Allan Franklin and Marcel Weber. During the discussion, I asked whether scientific methods and data should be public – that is, whether different investigators applying the same methods to the same questions should get the same data.

Franklin argued that publicity is not necessary, because some experiments might be too difficult or expensive to replicate, and different data analyses by different groups count as different experiments. This seems pretty wrong to me.

For one thing, I got the impression that Franklin didn't fully understand what method publicity amounts to. Publicity does not require that all experiments actually be replicated; it requires only that it be possible for different investigators to apply the same methods and that, if they did, they would get the same results. (Of course, much hinges on what we mean by "possible" and who counts as an investigator; for some more details, see here.)

For another thing, it’s better to say that actual replication of experiments is often unnecessary, as Marcel Weber said. Weber pointed out that experimentalists are part of a scientific network that shares techniques and materials, so they often feel they already know what was done. Nevertheless, Weber maintained that publicity is essential to science (and is implemented in the network itself, by the sharing of techniques etc.).

In fact, in his own talk, Allan Franklin listed a number of arguments/reasons for believing the results of experiments, along the lines of those listed in his Stanford Encyclopedia of Philosophy article on Experiments in Physics. All of Franklin’s reasons seem to have to do with publicity and the public validation of data.

Does anyone else have opinions on this? Should scientific methods and data be public or is this methodological principle obsolete?

I care about this because there are philosophers who have argued that introspection is a private yet legitimate method of observation, and this shows that method publicity is not necessary for science. I think this view is a disaster. If we reject method publicity, it’s not clear why we should reject all kinds of pseudo-scientific methods.

(And incidentally, I’ve also argued elsewhere that introspection is not a private method of scientific observation; rather, it’s a process of self-measurement by which public data are generated.)

(cross-posted at Brains.)