Hacker News

One of my majors was Philosophy, and I'm often disappointed by the attitudes of Feynman, Hawking, and other scientists towards philosophy. Their ___domain is science, and I don't think it's their place to judge the value of other disciplines when they apparently don't have a deep understanding of the subject.

Questions like "what is life" have serious implications for biology, and arguing over definitions is important because it ensures that when we say something like "a virus is not alive" we have a well-grounded and justified basis for that claim. If we didn't take the time to argue over definitions, scientists would argue past one another over trivial misunderstandings about the meaning of words rather than over the genuinely significant points, such as which conclusions can be drawn from the evidence. The role of philosophy is often described as providing us a framework for reasoning.

Philosophy typically intersects with science at the "edges" where we are making novel discoveries in both disciplines and don't really know what to make of them yet so we have to reason about the evidence and concepts to make sense of it all so we can proceed.

You mention physics, and the big debate in philosophy there is whether the Standard Model describes reality as it is, or whether it is simply a convenient tool whose equations manage to encompass most of the empirical results we have so far. I'm not sure exactly where you are going with Bayesian methods, but probability is definitely a focus of some philosophers, and our department head wrote a somewhat influential paper about different concepts of probability (a "taxonomy" of probability, if you will, though that might be a controversial way to put it in an academic setting, just like the rest of this post). Edit: I might also add that this professor had a PhD in philosophy as well as one in theoretical physics, so a really smart guy, and the TA had one in physics and was pursuing one in philosophy, so also a really smart guy.

It's also important to note that philosophy of science by and large does not aim to judge the merits or validity of science, scientists, or scientific theories. The point is more to understand "what is science" and provide a descriptive definition rather than a prescriptive one, if you will.

Philosophy of Mind is a "hot" branch right now as well and it remains to be seen if any significant progress will be made in the next several decades. We still don't have a consistent way to discuss "consciousness" and if you read literature coming from people with a computer science background, then from a psychological background, then a neurological one, you will find that they describe it in very different (and likely incompatible) ways.

I've forgotten the specific context, but there is a degree of controversy over the diagnosis of bipolar disorder and schizophrenia, since diagnosis rates vary wildly with geography even when patients display the same symptoms. The implication is that neither disease is "real" and that the diagnoses are instead artifacts of how psychiatrists are trained. Yet we still don't want to abandon the idea that these people are mentally ill, so how do we provide a coherent definition of a mental illness? It seems to pertain to consciousness in some way, and if so, how? We can't answer this without a coherent and consistent way to discuss consciousness, which we still don't have.

I hope that gives you a bit of insight, even if this comment comes across as disjointed. I have been willfully vague and partially inaccurate at points, because you could spend a lifetime reading on some of these subjects and still not consume all of the relevant literature. This probably also reflects the difficulty you had with that book. Philosophy of science is already a somewhat specialized branch that is difficult to approach without some background in philosophy. In a way, that book is an introduction to a specific branch of philosophy that assumes a large amount of background knowledge.




Thank you for your reply.

Who cares if we say "a virus is [not] alive"? Obviously that's predicated on the definition of alive. It doesn't change what a virus is, and certainly doesn't change our understanding of viruses. It seems like the only time that kind of thing matters is if you've got some existing issue with "alive" (like, say, a law granting rights to anything "alive") and now need to fight it out to retrofit something.

I'm thinking of "Dissolving the question"[1] and the infamous "If a tree falls in a forest but nothing hears it, does it make a sound?" The fact that this generated any serious question/answer is absurd. Everyone agrees on what happens. They just like to argue over what "sound" really means. Again, this only matters if you're, say, in court resolving a poorly written noise-violation law. Yet "if a tree falls" apparently isn't used as a quick lesson in definitions; it's treated like it holds something interesting.

It might be fun to argue stuff. Like, is vim truly an IDE if I load it up right? Doesn't really change anything, but it might be fun. I get the feeling a lot of philosophy consists of people chasing this. If it's not to judge merits or validity, and is just passive, then perhaps that explains it.

On physics and using Bayes versus the scientific method, I was thinking more about string theory (and I know nothing of this at all), where, since we can't test it at all, it's not proper science. Yet using a Bayesian model, we're free to add evidence apart from verifying experiments. If, for example, I come up with some simple rules for how the universe works, and a huge number of known equations arise out of them, there may be no way to test whether my rules are true and worth studying further. It'd violate the scientific method, since there's no testing. Yet the mere fact that that "one little trick" explains a bunch of known physics is a huge amount of positive evidence by itself.

Again, thanks for taking the time to write out your reply; I did appreciate it.

1: http://lesswrong.com/lw/of/dissolving_the_question/


I want to push on the "alive" issue a bit more. You're absolutely correct in that it doesn't tell us anything more about the virus - but it does tell us more about our definition of "life".

This is in line with the "tree falls in a forest" example as well - the "paradox" is not in whether the tree falls, but whether we would consider it sound. The solution is to be more precise about defining "sound", as either "vibration of air molecules" or "vibration of someone's ear drum". Similarly, the "are viruses alive" question is pushing us for a more precise definition of "alive", which we can then apply to (for example) how we might classify self-sustaining chains of chemical reactions on alien planets.


But there is no precise definition of sound that encompasses both (or more?) uses. It could refer to either, and you just need to make it clear and the whole fuss disappears.


Well, the "virus is alive" thing is more of a toy example. A real example (which actually happened) that pertains to biology is that of cigarettes causing lung cancer. Extensive studies were done to prove the statistical link between smoking and lung cancer, but in the end we do not find a one-to-one correlation: some smokers go their entire lives without getting cancer, and other people never smoke yet get lung cancer (although that number is very small). It can obviously be established that smoking is statistically linked to getting cancer, but the burden for claiming that smoking "causes" lung cancer is somewhat higher. Since scientists at the time could not account for why some smokers develop lung cancer and others do not, it becomes a question of epistemology, the branch of philosophy that deals with knowledge. How can we really know that smoking "causes" cancer, rather than cancer being a side effect of something intermediate caused by smoking or closely associated with it? The other alternative is that smoking is like playing roulette; in that case it also seems inappropriate to say that smoking "causes" cancer, rather than that it causes one to have an increased risk. (I don't know if more research has been done on why some people develop cancer and others don't.) Causation is an extremely thorny subject, but in this case (iirc) it was one of the first times a phenomenon was accepted as a cause without a direct one-to-one correlation with its effect.

As for the tree in the woods, again, this is a toy example. It's a vastly simplified example that philosophers use to discuss epistemology because it provides a simple basis on which to argue about something. In math and engineering they use the spherical cow in a frictionless world. Obviously no one cares about a spherical cow, but it's useful as an isolation tool so that you can work on just the problem at hand.

I don't know a lot about string theory but if it cannot be tested and is still considered science this is indeed very troublesome for many theories of science. Perhaps this ties into the realism / anti-realism debate where philosophers debate whether there are actually strings or they are just convenient mathematical constructs with wide explanatory power. I personally am somewhat of a nihilist on this point and I don't think it's appropriate to bend the concept of reality to apply to things like strings and that this is basically just an incoherent exercise to begin with.

A non-string example of this debate that I know a little bit more about (but not much) is the debate over observability. I can observe the wall in front of me unaided with my own eyes and sense of touch.

We can also "observe" radioactive decay in a gas chamber by examining the condensation trails of particles traveling through the gas, but are we really observing them? This seems more indirect than the first example, so can we really know that there are particles traveling through the gas? If indirect evidence is not acceptable in science, at what point does it become unacceptable? I could hear from a friend of a friend of a friend that they saw Bigfoot, and while everyone might trust everyone else, this most certainly is not a scientific observation. This latter example is extreme, but it demonstrates why it is important that scientists understand the scientific method and the philosophy behind it.

If string theory can only be proven indirectly, at what level of indirection does it become inappropriate to say that the observations are evidence for the theory? It sounds as if we only have very indirect evidence for string theory, which is probably why it is so controversial.


A claim of someone seeing Bigfoot is most certainly Bayesian evidence. If your friend has been known to be accurate (and his friend, etc.) then it is positive evidence for Bigfoot. (After all, if he claimed to have NOT seen it, it'd be evidence against Bigfoot.) It's just not so strong compared to all the other observations where no positive evidence was found. Ideally you have some perfect way to load up all these pieces of evidence and calculate how probable Bigfoot is. A fantastic example of this kind of work is Gwern's "Who wrote the 'Death Note' script?"[1] Without an authoritative way to experimentally test, it's not following the scientific method. Yet it certainly seems to improve our knowledge.
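To make the "weak but positive evidence" point concrete, here is a toy Bayesian update in Python. All the numbers are invented for illustration; the only real content is Bayes' rule itself.

```python
# Toy Bayesian update for a single piece of testimony (all numbers invented).
prior = 0.001                  # P(Bigfoot exists) before the report
p_report_if_true = 0.6         # chance the friend reports a sighting if Bigfoot exists
p_report_if_false = 0.01       # chance of such a report anyway (hoax, misidentification)

# Bayes' rule: P(H | report) = P(report | H) * P(H) / P(report)
p_report = p_report_if_true * prior + p_report_if_false * (1 - prior)
posterior = p_report_if_true * prior / p_report

print(round(posterior, 4))  # higher than the prior, but still small
```

The report moves the probability up, just not by much: exactly the "positive but weak evidence" situation described above.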

The lung cancer thing, I'm not sure I follow. Isn't this simply a statistics issue combined with a lack of knowledge about the human body?

1: http://www.gwern.net/Death%20Note%20script


I think the "tree falls" riddle is precisely about making one realize the exact thing you said, rather than an actual attempt at finding a yes/no answer to it.

I think an interesting distinction can be made between concepts that are tied to the actual things they cover, where the definition text gets updated as we find out more about those things, vs. concepts where entities can move in and out of coverage as we learn more about them.

In the case of life, you may say at the outset that dogs, humans, birds, trees, etc. are alive, but rocks, clouds, etc. aren't. Then your task essentially becomes like a machine learning algorithm's: you get a training set and have to build a model, a decision surface in some feature space that separates the yes from the no examples well enough, while still fulfilling certain smoothness criteria. Of course we never actually do such things explicitly, but it's related.

When you get a new example, like a virus or prion, you have to decide on which side it falls.

The interesting question is: what happens if it turns out that one of your training examples was represented with some erroneous features? Or what if we discover some new features that could be relevant?

Do we still label those examples as animate/inanimate and update our decision surface accordingly, even if that makes the surface quite complicated? Or do we relabel them so that we can keep the decision surface simple? Or do we keep an example's label and instead relabel some of its neighbors, to make the decision surface simpler?

These are rhetorical questions; I'm trying to show how arbitrary the whole thing is. There is no "One True" label until we decide it, and it's really an engineering trade-off between matching tradition/intuition and the simplicity of the definition. The former is like the training-set error in machine learning, the latter is like regularization.

I think learning about CS and programming would be beneficial to philosophers, as well as the other way around. Because computer programs must be unambiguous, programmers and CS people have had to tackle issues like this many times, while philosophy is largely written in natural language, which leaves a lot of ambiguity.

For example, read the source code of a simple quine (a program that prints out its own source code). You'll see code, code inside quote marks, things in quote marks escaped and nested inside other quote marks, and so on.

CS has also developed concepts like currying and variable scoping, which are very much related to philosophical issues. When you create an event handler and say onMouseClick = function(){print x;}, what do you mean by x: its current value, or its value when the event happens? Do you evaluate it now, or postpone it until the event? In natural language both sound the same: "you print x". But do you mean x as a symbol, or the thing it points at? Like asking "Do you think the president of the US will die in 2050?", which can mean whether Obama will die in 2050, or whether whoever is president in 2050 will.
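The event-handler question above ("which x do you mean?") is easy to demonstrate in Python, where both readings can be written down explicitly. The function names are my own invention; the two binding behaviors are real Python semantics.

```python
# Two readings of "print x" in a handler: the symbol vs. its frozen value.
x = 1

def late():
    return x        # "x as a symbol": looked up when the handler runs

def early(x=x):
    return x        # default argument freezes the value x had at definition time

x = 2
print(late(), early())  # 2 1
```

In natural language both handlers are described identically, yet they answer the "Obama vs. whoever is president in 2050" question differently.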

The similarity of this to the "life" issue is whether we freeze the current meaning of "life", or we allow it to change.

So I think a lot of this confusion is just the result of not having to face these distinctions in natural language, while CS people have sorted out many of them conceptually, in things like reification, reflection, virtualization, etc. Of course it's not new; the "use-mention distinction" was well known earlier, but programming makes it really straightforward and obvious.
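Since the quine is mentioned above as an illustration of use vs. mention, here is a minimal one in Python (this is one standard construction; many others exist). The string s is both used (formatted) and mentioned (embedded in the output via %r, which adds the quotes and escapes).

```python
# A minimal Python quine: running these two lines prints these two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The %r conversion is what handles the quoting and escaping that the comment above describes: the program's text appears once as code and once as data.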


It's important to choose a single coherent definition of "alive" so that we can accurately communicate with each other. But it's actually not a very important question whether a virus is alive.

If I define "alive" as {X, Y, Z}, then I might make scientific claims like "alive -> Q". This is a shorthand for "{X, Y, Z} -> Q".

If you define "alive" as {X, Y}, then according to your definition "alive -> Q" might be false.

But as long as we can clearly disambiguate and substitute definition for term, there is no problem. I am claiming "{X,Y,Z} -> Q" while you are claiming "it is unproven that {X,Y} -> Q".
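The substitution move can be written out directly. X, Y, and Z are placeholder properties, as in the comment above; "alive" is just shorthand for a set of required properties, and the two definitions never actually conflict.

```python
# Two definitions of "alive" as sets of required properties (X, Y, Z invented).
defn_a = {"X", "Y", "Z"}
defn_b = {"X", "Y"}

virus = {"X", "Y"}  # the properties a virus has, say

def alive(thing, definition):
    # "alive" abbreviates its definition: check the required properties hold.
    return definition <= thing

print(alive(virus, defn_a))  # False under {X, Y, Z}
print(alive(virus, defn_b))  # True under {X, Y}
```

Both answers are correct once the term is expanded into its definition; the disagreement was only ever about which shorthand to use.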

See also: http://lesswrong.com/lw/np/disputing_definitions/


Enumeration like {X, Y, Z} is just one of the two main ways to define concepts. The other one is with a property, like {the Turing machines that halt on empty input}.

In the first case a meaningful question could be "what is the thing that makes these objects similar, what is a description that connects them?" For example if you see that certain animals die from some poison but others are unaffected. Then you may make up the concept of "resistastrong" animals that don't get killed.

You can do it in two very distinct ways: either by enumeration, or by declaring the set as {the animals that don't die from the poison}. In the first case the definition is fixed, and animals later discovered to survive the poison are not accepted into the set. In the second case, the set may grow as more animals are discovered to survive the poison.
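The two kinds of definition map directly onto two constructs in Python: a literal set vs. a predicate that is re-evaluated. The animals and the poison data are invented for illustration.

```python
# Extensional vs. intensional definitions of "resistant" (data invented).
dies_from_poison = {"rat": True, "crow": True, "mongoose": False}

# By enumeration: the set is fixed at the moment you write it down.
resistant_extensional = {"mongoose"}

# By property: the set is recomputed from current knowledge each time.
def resistant_intensional():
    return {a for a, dies in dies_from_poison.items() if not dies}

dies_from_poison["hedgehog"] = False  # a new discovery

print(resistant_extensional)    # unchanged: {'mongoose'}
print(resistant_intensional())  # grew: {'mongoose', 'hedgehog'}
```

The hybrid behavior of natural language described below is roughly what you get when people alternate between these two constructs without noticing.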

The way natural language works is often a hybrid of the two. We may call something a name, but then gradually the meaning can drift. We first have a fixed set. Then we realize some simple description that unites those things. Then we discover more things that fit this description and incorporate them in the set. Then we iterate again to find a better, more compact description of this new set. Maybe this will actually even throw out some of the elements that were previously included, because the description can be made much simpler if you throw out some edge case.


If you did philosophy, you know that arguing over word definitions is a complete and utter waste of time and effort. What is altruism? Go. Be bored. It makes a mockery of the entire philosophy of morality.

While Philosophy is useful in teaching you to understand the crux of an argument and how to structure an argument, the discipline itself is of little merit and consequence in this day and age.

We had this argument here recently; philosophy of science is a post-facto justification of methodology that successful scientists already employed.

Having done it myself, I'd say it's not really a discipline; it's a side note of historical interest. Saying that intelligent people like Feynman couldn't understand philosophy of science without studying it is laughable, especially because it's one of the simplest branches if you skip the tiresome arguments about word definitions in empiricism.


It's certainly true that definitions are often arbitrary and aren't the "meat of the issue". For example if a field is too obsessed with how it labels things, then it's usually a bad sign.

At university, when courses focused heavily on definitions, lists of things, and which part of the field covers what, I could tell there was some pretentious bullshit going on. Mathematics seems like an exception to this, but mathematicians don't actually argue about definitions in this sense: if you define your terms slightly differently, a mathematician may be annoyed, but he will still recognize whether your overall work is valid.

It's also true on an individual level. I noticed that people who like to argue whether they are programmers or software developers tend to be less concerned about actually getting something done, vs. people who'd say "call me whatever; you can come and watch what I do and decide what you call it".

It comes across as overcompensation for having little to say otherwise. Good scientific papers also don't dwell too much on how to categorize and break up the related fields. But apparently there are people who enjoy defining terms precisely, like whether they do Data Mining, Data Science, Machine Learning or AI or Statistics or Probability. They are all fluid categories and have significant overlaps. There is just no reason to work towards sharp separation.



