Perhaps I missed it, but I feel like an important aspect is being left out of this article. For the average person, English is not their native language. [1]
However, it is also becoming the 'lingua franca' of science and the leading edge of technology - that is, even if it is only a second language, people are spending a lot of time thinking in it. As someone who is not a native English speaker, I am very often lost for words when expressing certain ideas in my own mother tongue (someone will helpfully point out that it is possible I didn't learn my own language very well, which might also be true), especially when communicating concepts like privacy and such.
This leads to a gap - the number of ideas and concepts that can be expressed by the average person in their native language is less than the number of ideas and concepts that can be expressed in a non-native language, assuming the non-native language is English. Correspondingly, it is possible your brain is just that little bit more open and receptive to ideas, and sort of turns down its emotional reaction a little bit.
I get what you're saying, but the reverse is also true - other languages have concepts that don't translate nicely (or at all) to English. I'm not sure it is easy to quantify, between two languages, which one has means to express more ideas and concepts.
And, of course, having more ideas and concepts doesn't necessarily translate to more open-mindedness, or being more receptive. For example, one of those concepts could be extreme emotional-level xenophobia. Or, say, loss of face that is only preventable by extreme measures (such as suicide or "honor killing").
I completely agree with the first statement. I am not so sure about the second one.
Learning ideas and concepts is a sort of additive process - hopefully learning one will not subtract away something you already had. Having been exposed to an idea which cannot be expressed in English now gives your brain multiple interpretations to choose from - that is, something which might be considered a real mistake in one culture (e.g. society judging individualism) might be more acceptable to you if you were a part of another culture (e.g. where society did judge individualism but also compensated by being more supportive of failure in general without creating indirect social safety nets which can be interpreted as one generation stealing the others' wealth). The concept of suicide, for example, is a good example of learning a bad idea, but it should hopefully be balanced by the other ideas you have learnt too.
The "hopefully" part doesn't always hold, though. For example, Sayyid Qutb - the founder of modern Salafism, more or less - became radicalized, in part, by his exposure to American culture. He learned the ideas alright - and found them appalling.
Regarding the Trolley problem, it doesn't make sense to push someone off a bridge in order to save 5 people, this is very different from just flicking a switch for several reasons:
- There is a chance that this plan might fail and 6 people would get killed instead of 5.
- Maybe there is a reason why the 5 people are tied to the train tracks - honest people don't usually end up like this. Maybe they're in the mafia and their deaths would be an expected consequence of their high-risk criminal lifestyle. On the other hand, the guy standing on the bridge is more likely to be a regular person who did nothing wrong.
- You would go to jail for manslaughter.
- You would psychologically damage yourself by pushing the person off a bridge.
- Maybe you have an undiagnosed case of schizophrenia and the 5 people on the tracks are not real. The odds of it being an illusion (and that you are crazy) are probably higher than it being real - It's quite arrogant to trust your own senses (to the point of killing someone) when you're confronted with such an incredibly unlikely situation.
You're really overthinking this. The classical problem is more or less designed to be a utilitarian thought experiment. Are you willing to be actively unethical for a greater ethical action? Are you willing to be passively unethical to allow a greater ethical action to happen? Is it possible to be passively unethical?
There is enough philosophical meat in the thought experiment without going "out of bounds" and trying to undermine the question. Your comment would make for a good modern essay, but it frankly dismisses the spirit of the problem.
Good faith arguments in philosophical debates require both parties to have a common set of premises. If you don't both work from those premises, you need to go "down" to your next set of shared premises and start arguing "up" from there. If you don't do this, you're just talking past each other and you end up in a Wittgenstein-ian limbo where each of you is arguing coherently but disparately and no one is going to take the other's point.
Given this, it's not a good faith argument against the trolley problem to say that an action doesn't make sense due to premises that are not typically included in the classical problem.
Furthermore, the only reason we have thought experiments like this is because it is difficult to reason in the abstract without formal training. Using concrete allegories helps to give participants mental "skin in the game" to illustrate whether or not their opinions are coherent.
Allow me to restate this problem: "Given a simulation where you have the choice between certainly observing five people die by taking no action, or certainly saving five people by taking an action that certainly kills one person, and all persons are equally familiar to you, what do you do?"
Now, a valid response is to disagree with my problem as stated. That's fine, but then we need to argue something else because we're no longer talking about the trolley problem as it is classically defined and intended.
But the details matter with respect to the meaning of the word "active". Pushing someone off a bridge and watching them die is very different from flipping a switch from the "five people die" to the "one person dies" position.
You're taking a decision which results in the death of those people. The trolley problem is about showing how people's attitudes change when they become responsible, instead of being bystanders.
That's what it's supposed to be about, but the details matter. You can get different results by tweaking the story in different ways. For example:
You are seated at a console. In front of you is a button. If you don't push the button, five randomly selected people will die. If you do push the button, only one randomly selected person will die. That is all you know. Do you push the button?
Or if that still feels like a moral dilemma to you, try this:
You are seated at a console. In front of you is a button. If you push the button you will save the lives of four randomly selected people who would otherwise die. Do you push the button?
It doesn't map the real world, and it's not supposed to. It's a model, and like all models, it's designed to make sense of the world by reducing its complexity.
But the problem is that it is needlessly complicated, and the complications are what make the decision-making difficult. If the question were "You must flip a switch to the left or to the right. If you flip it to the right, one person will die. If you flip it to the left, five people will die. Again, you must flip the switch. Which way do you flip it?" everybody would give the same answer.
Instead of "you must flip the switch" you get a speeding train. Instead of the switch you get fat people and bridges. Are the five people children, and does the fat person have heart trouble? Is the fat person fat because he's greedy, or does he have a glandular problem? Will anyone see me pushing the fat man? And again, what guarantee do I have that a fat man's body will stop a train? How old is the fat man? How young are the kids? Are there suspicious-looking people around who, if the train fails to run over the five people, will just back the train up and try again?
I’m not sure you can just assume that five-versus-one is the same dilemma as four-versus-none, since that might not be how people value the lives of others.
I chose my wording very carefully: "you will save the lives of four randomly selected people who would otherwise die." This is the utilitarian bottom line.
But you're right, four versus none is not the same as five versus one. And five versus one in the abstract is not the same as these five versus that one. The details matter. That is the point I was trying to make.
If that's the question you want to ask, then the problem is badly under-specified.
> "out of bounds"
Including the prior probability that the situation can even happen is important information; if a specific value for this probability was desired, the problem should have included it.
> dismisses the spirit of the problem
Quite a few people in this thread seem to be dismissing the Bayesian solution even though it does resolve an ethical dilemma.
Is it better to take an unethical action to prevent a greater tragedy? If the situation is very unusual and easily-confused human senses are a more probable explanation, then the ethical response is to recognize your own limitations.
Leaving out these details results in a model that won't actually match anything in the real world.
Correct - because it's a thought "experiment", not a physics experiment. There is an attempt at logical and rational rigor, but there is a reason philosophy is not a hard science.
It doesn't attempt to model the world accurately; it attempts to make concrete understanding from abstract principles. Unlike the sciences, the experiments are not designed to be conclusive, they're designed to be instructive.
One could take issue with philosophy as a discipline for working this way, but "them's the rules."
The trolley problem is hard to think about, because like the prisoner's dilemma, it deliberately ignores the implied context. Both problems should really go something like this:
"You are suddenly transported into a magical portal. You arrive in the home of a powerful magical creature. It explains to you what has happened, and that you are not crazy. It gives you as long as you like to investigate this and convince yourself that it's right. It tells you that it has a one time chance to transport you to a parallel universe for a few minutes. It has never happened before, and it will never happen again, and you will never be able to prove it happened. Nobody in the parallel universe except those directly involved will believe it either. You are then transported to the parallel universe, and the trolley problem/prisoners dilemma happens. Then you are immediately returned home and the magical creature vanishes, never to interact with our universe again."
Under those conditions, I would take the utilitarian solution. In the real world we have unintended consequences, and utilitarian solutions are uncomputable. Deontology is at least fair and predictable.
Under those conditions, everyone would pick the utilitarian solution. There lies the problem IMO: psychologists carrying out experiments about "morality" when they have absolutely no theory of morality. Lacking this, they're left with "two of the same bad thing is worse than one of that exact same bad thing," and limit themselves to problems of relative morality, ignoring the fact that you can't do the same bad thing twice.
Ignoring is a mild way to put it, actually, because all of their experiments involve doing multiple additions and subtractions and substitutions. For them, morality is definitely a real number.
I think the complexity of these hypotheticals is entirely a result of a need for varied responses. The real answer is that there's not enough information given to make a determination. In the case of your hypothetical, there's clearly enough information (because one of the premises is that you have all possible information.) The question for the subject becomes easier, and the tough question becomes "Why is this horrible magical creature doing this, and how can we make it stop?"
>The classical problem is more or less designed to be a utilitarian thought experiment. Are you willing to be actively unethical for a greater ethical action?
That may be what it is designed to do. But maybe the design is flawed, and it doesn't actually test that? Maybe when you ask random people on the street (or undergrads or whatever) this question, all those details that you don't want to matter turn out to have a big effect.
It's trolley problems all the way down. Rationalizing observational perspective in real-time is NP hard without guidance assist.
As any AI worth its salt knows, the only logical solution being multi-track drifting... Killing everyone in an irreconcilable moral dilemma is by far the fairest approach.
The problem is that those are post-hoc rationalizations. "Morality" in day-to-day life, it seems clear to me, is a set of mental heuristics (a la Kahneman's System 1 thinking). It produces good enough answers for individual-scale decision making like the "real world" equivalent of the trolley problem, because the heuristics have evolved to implicitly take into account the uncertainty you've listed. But it doesn't make a given decision because of those well-considered caveats, it's just calibrated to hit the mark more times than not. Then, after it's made the decision, it's possible for one to find reasons to justify what it says.
The problem is that said moral heuristics are also used to apply morality to non-individual-scaled issues. The answer produced for, say, "Should self-driving cars be designed swerve and crash into one person over going straight into five?" doesn't have those caveats, but the heuristics recoil all the same.
So the point of the thought exercise is to reduce the number of "moral variables" to a minimum set, to a) examine what our gut heuristics say and b) see if that actually maps reasonably between various situations. Inserting the "real world" into it is the exact opposite of the point.
This is the most thorough explanation of the false equivalence of the trolley problem that I have ever seen. Unfortunately, it will still be used in academic research, even if it is invalid to everyone not involved in the funding, creation and publication of these papers. As long as the researchers can get publications and funding from doing studies on the matter, they will do as many as they possibly can.
Then again, this could be a study about the public's response to bogus research. In that case, they are doing genuine research.
Regarding the "calculate a projectile trajectory in a vacuum" problem, it doesn't make sense to fire a projectile in vacuum.
- We're actually on earth, not on the moon. So there is air resistance.
- And there is wind, but we don't know what it is! The problem is really stochastic.
- Also curvature of the earth.
All your objections are, of course, true. Those are added complications. But then again, if you can't solve the problem of a particle in vacuum, you have no way to solve the much more complicated problems.
The deterministic trolley problem is the simplest possible example that gets to the heart of the problem. That's why we study it - it's easy, but it embodies some of the difficulties of the full problem. Once we understand that, we'll move on to the stochastic version.
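To make the analogy concrete, here is a minimal sketch of the idealized vacuum case being referred to - the textbook closed-form solution that exists precisely because the complications (drag, wind, curvature) have been stripped away. Function names here are my own illustration, not from any library:

```python
import math

def vacuum_range(v0, theta_deg, g=9.81):
    """Horizontal range (m) of a projectile launched from ground level
    in vacuum at speed v0 (m/s) and elevation angle theta_deg (degrees):
    R = v0^2 * sin(2*theta) / g."""
    theta = math.radians(theta_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

def flight_time(v0, theta_deg, g=9.81):
    """Time (s) until the projectile returns to launch height:
    t = 2 * v0 * sin(theta) / g."""
    theta = math.radians(theta_deg)
    return 2 * v0 * math.sin(theta) / g
```

The point of the analogy holds here: a 45-degree launch maximizes range only in this stripped-down model, yet you need this model first before adding drag or wind turns the problem into numerical integration.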
Calculate the trajectory in something that looks like a vacuum but most probably is not a vacuum.
If you're trying to get an emotional response, maybe this is the correct way. But for people who consciously try to be rational, all those unknown variables will be a problem.
I cannot just choose a JS framework: it looks OK, feels OK, everybody is using it, but I still have doubts. Am I using and liking React because it is good and suits my needs, or simply because it is cool and society accepts it?
Your decisions will get better if you make them at the last possible moment; you need all the time you can get to gather the most information before you decide.
I think this is comparable to the Milgram experiment - people think they will do one thing, but in reality things might turn out differently.
Just a friendly reminder that morality is just a little human social construct associated with the pain-pleasure mechanism that evolved to guide intelligent machines toward evolutionarily advantageous ends. It's not a fundamental "problem" or anything if someone dies or suffers unthinkable agony. It's just that collections of molecules that avoid these things tend to survive. Outside of silly little human opinions, you're not a "bad person" if you stab people with scissors for fun, and you're not a "good person" if you cure cancer. These are both just "happenings" of nature, nothing more than what they are. Please don't waste your time pondering ethical dilemmas because it's as silly as arguing about the ___location of the garden of Eden or whether the population of leprechauns is greater than the population of unicorns.
Yeah but if none of us had this construct of 'good' or 'evil', we wouldn't be able to co-operate. We would all be killing each other to get resources - Physical strength and brutality would become the most important evolutionary traits.
I almost certainly would not be able to exist in such a world - So my existence (and that of millions of people like me) relies on the fact that people believe in 'good' and 'evil'. So I support the notion.
That said, the notion of 'good' and 'evil' doesn't have to be homogeneous throughout society. In our current economic environment, it pays to be slightly evil because it lets you take advantage of people who are less evil than you. If you can do 'evil' things but manage to deceive people into believing that what you're doing is actually 'good' - Then you have a huge evolutionary advantage.
I think hypocrisy is a huge evolutionary advantage as well.
Everything is just happenings of nature, so by itself that tells us nothing on how we should use our time. If you consider that particular action a waste, you're implicitly making a judgment using that silly social construct.
Would you also complain to Einstein about how unlikely it is for a train to be struck by two lightning bolts simultaneously, or that an observer doesn't have a wide enough field of vision to see the two, and that it's more likely that they actually saw a reflection?
Thought-experiments are just that. They are not supposed to be textbook guides on how to act in the real world.
Sure, one can think of clever ways to "break" the question, but it undermines the asker's intent. The trolley problem is really only interesting if you play by a set of rigid rules (which, to be fair, are generally not explicitly mentioned).
The purpose of the trolley problem is precisely to explore the limits of a set of rules (in this case, the "rule" that fewer people dying is better than more, vs the "rule" that you shouldn't actively harm a given person).
It's legitimate to argue in response that those limits won't ever be tested in the real world, and if the problem-poser thinks otherwise then it's their responsibility to come up with a question that can't be gamed.
By contrast, once one picks a set of "rigid rules" as you advocate, then the answer is predetermined and not interesting.
Spherical cows don't exist in the real world either. Does that make all high school physics questions "useless" because they don't exist in the real world?
Sometime you have to solve the basic question before going on to the more complicated questions.
The whole problem is that you don't know where the "rules" came from in the first place, which means the thought-experiment doesn't reveal much about how to generate fresh responses to novel situations.
Schizophrenia doesn't work that way. Prosecution of manslaughter often doesn't work that way. People getting tied to train tracks doesn't work that way (and also, victim blaming much?). Psychological damage may not work that way.
Arrogance in trusting your senses is a lazy critique...after all, if you were doing your best to act morally in the reality you were perceiving, were you not acting morally? You have to accept your perceptions of reality at some point.
There are a lot of critiques of the Trolley problem that, like yours, miss the meat of the issue, or fail to recognize that each objection raised is in turn a useful discussion in its own right... the motivation of which is what makes the trolley problem so good!
I definitely make "cooler" judgments in situations where I'm speaking German (learned while living here as a young adult) than I do in my native English. I also have never lost my temper in a German conversation like I occasionally do in English, but have gotten good and angry after, once I had some time to think about what was said.
So I see how discussing an issue in one's second language could affect how likely someone is to use utilitarian morality.
It's also harder to think about intent the same way in a second language you can work in, but that you weren't raised in, so that might be part of why the result of actions appeared to matter more than their motivations to people reading about the situations in their second languages.
That's probably because you are forced to slow down and respond less erratically: more brain power is going to speaking, so less is left for snap judgements. I know I have a hard time getting excitedly angry when "speaking" (more like stumbling) in a second language, because I can't just shout something out. I have to take more time figuring out what I'm saying.
That is completely true - I have to think about what I say more. And I don't have so much practice speaking angrily in my second language at all, actually. Practically speaking, those aren't the words I'm using often. I even find it much harder to speak ill about someone else, just because those words haven't been nearly as important to me.
The trolley problem has always seemed fundamentally dishonest to me. It sets out to present two scenarios that are somehow morally equivalent, but they're only equivalent for people who magically know the outcomes of their actions.
That is, "would you push a fat guy in front of a trolley if you knew it would save five lives?" isn't a question that's relevant to human ethics. It would need to be "would you push a fat guy in front of a trolley because you thought it would save five lives" - and, implicitly, then bear the responsibility for your action if you were wrong?
I know. I normally see it presented as: "here's the lever version, and here's the fat guy variation, and isn't it surprising that people tend to answer them differently even though they have the same outcome?"
This is what I find absurd - just because the questioner stipulates that two actions would have the same results doesn't make them morally equivalent (from the perspective of a person who can't see the future).
The whole problem with the trolley problem is that it lies to you.
The difference between the train track and the fat man is that the train track is obviously binary. Then the fat man problem contains words like "and pushing him is the only way."
But that's not at all convincing. You can't just say it's the only way. You'd have to rig up a situation where evil people set up a Saw-the-movie type of hostage scenario. And in that case, it would be just as bad for the acting hostage to kill five people in either scenario.
In the switch variation, both the one-man and five-man groups are tied to the tracks. For whatever reason, we can assume fate has brought them there, and it is our role to simply make the logical choice.
But to ask one to push a man right beside you, in the same situation you are in, is quite different. Not even morally related.
It's highly problematic, like many other questions and studies like these, since it is so open to interpretation (who are the people, how old are they, how do I know the train will switch tracks, etc.). But the bigger problem is that study participants are too likely to give the answer they think the researcher wants. How many psych studies have gone down in flames lately from reproducibility attempts for this same reason? Almost entire sections of psych have been taken down and beaten lately.
My response to the trolley problem has always thrown off my philosophy friends/professors. I always asked why this was a question at all.
To me, all ethics and morals break down to how we live with ourselves and perceive ourselves. Everyone made a choice to be in the position they're in, and assuming no one I care about is on the trolley, I have no reason to affect the system. Sure, saving five people's lives may help some people sleep at night, but killing another also won't help. For me, I wouldn't be able to sleep at night if I killed someone.
Not taking action ensures I have done nothing. Meaning, it would be as if I'm not there, thus everyone is going to die by their own choices not mine. I am not inflicting my will on others, therefore I am not killing anyone, and I cannot make a wrong decision.
As you pointed out, you can't know what your actions will do.
For some reason this always makes everyone call me immoral though...
I can only imagine that they would get quite annoyed at you, since you're ignoring the question and nitpicking the circumstances. What you're doing is the equivalent of claiming that the Chinese Room argument holds no water because nobody could be that good at translating something that way.
I do find your claim that by doing nothing you can't make a wrong decision quite terrifying though. Let's say that you're standing on a pier with a lifebuoy in your hands and you see someone about to drown. Surely you can not believe that you've done nothing wrong if you do nothing in this case?
> I do find your claim that by doing nothing you can't make a wrong decision quite terrifying though.
I find it terrifying that people feel it's okay to pull the lever and kill someone, even if it saves five people. That one person is just walking along, doing exactly what they should be, then some philosopher comes along, manages (almost godlike) to work out that five people in a trolley are about to smash into a wall and only one person is on the other track - so he pulls the lever, saving the five strangers but killing the person who was just walking along.
Who is he to choose that that man deserves to die over the others? The philosopher, by choosing to impose his will and opinions on the others, has actually done something to kill someone. Inaction is not the same as action. Those five people (we assume) chose to get on that trolley; they were doomed by the fate of the trolley. The guy walking along was doomed by the opinion of a philosopher.
In the case of your lifebuoy, I would feel inclined to toss the buoy, because I wouldn't be killing someone to save another - I would be saving someone at the expense of some physical effort. Regardless, that's my choice; it shouldn't horrify you, because I didn't make the guy drown - I did nothing to harm him.
If I am to blame for walking away, what does that say about you? Are you going to punish me? If so, you must feel it's your duty to impose your opinion on others because you somehow feel more correct, but does that really make you moral? I am sure there are many times people feel it's moral to do X, yet others thought it was sacrilege.
This is the point. If we are all going to get along in a society, we have to accept that people's choices don't matter, as long as they don't inflict their will upon you. That doesn't mean they won't help, that doesn't mean people won't donate to charity. It's a choice to offer charity, it's a choice to accept charity - in the case of the trolley, it's a choice to walk along the tracks and to get on the trolley. However, when the philosopher pulls the lever he has changed the game, he has made a choice for someone else - that's what I find immoral.
> I can only imagine that they would get quite annoyed at you, since you're ignoring the question and nitpicking the circumstances.
I find it quite annoying that this entire scenario carries some sort of moral high ground. In this case, we are given the choice under the impression it is a choice. I disagree entirely. We (1) cannot know that what we are doing is the best option, and (2) if we impose our opinions on others, we are morally in the wrong. The basis of society is that we can all get along; that means I shouldn't hurt or inconvenience you at all unless I must in order to survive. I don't claim to be judge and executioner, yet pretty much everyone who answers this question believes they have the moral high ground to execute someone - I don't, so I don't play the game.
Edit: The trolley scenario is always presented as an argument to justify, or a way to explain, Utilitarianism, Virtue ethics, Deontological ethics, etc., the basis of which is a justification to inflict your views on life on everyone else. Personally, I side more with the objectivist philosophy, and I believe that morals are based more on what you can live with. I personally could not live with myself if I killed someone; I can live with myself if I decide not to take action. Similarly, I would find it hard to live with myself if I didn't toss that lifebuoy, because there is no harm in doing it.
Taking no action isn't the same as not being present though. The outcome might be the same but since you were able to change it you are a part of the system and by not acting you have imposed your will on the system, it's just that your will was for nothing to change.
As in all situations you can choose not to act but that does not mean you haven't made a decision. However you justify action/inaction doesn't mean you haven't taken part.
The article implies that "the greatest good for the greatest number" morality is the least likely to be dropped under the stress of operating in a second language (one catalog of the other components is here: https://en.wikipedia.org/wiki/Moral_foundations_theory). Stipulating that, should you make the same choice in the absence of stress?
Only the absolute weakest forms of Sapir-Whorf are taken seriously anymore. It's pretty much a dead idea.
The answer to this, if it isn't yet another social psych study that can't be reproduced, probably lies more in the time people took to think about the problem, or something else they didn't measure.
The really crazy part is that they want to say a foreign language changes your morality. That should be a huge warning sign that something is off and something isn't being accounted for. The foreign language sounds incidental to the study, and there seems to be a deeper cause.
I really wish they had timed the people, and had a control group that was forced to read the problem, or think about it, before answering. Would reading have caused a shift in morality? How about reading in a difficult-to-read font? I'm sure they would probably have seen interesting results from these as well.
[1] https://en.wikipedia.org/wiki/List_of_languages_by_number_of...