Hacker News | EliAndrewC's comments

Just wanted to say how cool this was. It's been a while since I've done any C/C++ programming, but the pointer syntax and work really take me back!


At my job we say "hit by the lottery bus" to make it clear that we're using a euphemism for "what if this person quits".


Last time I checked, their hot wallet had less than 5% of the bitcoins on their books. A catastrophic data breach would leave most of their customers without either their bitcoins or any way to be made whole from the loss.


Right but it's only the hot wallet that has any significant risk. The cold wallets aren't online at all, they're on hardware wallets in geographically distributed safe deposit boxes. Even if someone broke into one and managed to crack the hardware, that'd be just a small portion of their funds.


Alternate title for the article: "List of lodash features replaced by ES6, if you don't mind throwing an exception when given invalid input such as null or undefined".

All kidding aside, a lot of our lodash code ends up looking something like this:

    function (xs) {
        return _(xs).pluck('foo').filter().value();
    }
That code clearly expects that xs is an array of objects. However, we might occasionally end up with xs being undefined, or with xs being an array but one of the elements is null, etc.

Most of the time, we want our function to just swallow those errors and return an empty array in such cases. This is exactly what lodash does, but if we tried to call xs.map(...) in that case we'd get an error. Similar caveats apply for grabbing the foo attribute if one of the array elements ends up being null or something.

For this reason, I recommend continuing to use lodash almost all of the time, even when there's a native Javascript method available.
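
The forgiving behavior described here can be sketched in plain JavaScript; pluckFoo below is a hypothetical stand-in for the lodash chain, not lodash's actual implementation:

```javascript
// Hypothetical helper mimicking the forgiving lodash chain above:
// a nullish input is treated as an empty collection, and .filter(Boolean)
// drops nullish/falsy results, where a bare xs.map(...) would throw.
function pluckFoo(xs) {
  return (xs || [])
    .map(function (x) { return x == null ? undefined : x.foo; })
    .filter(Boolean);
}

pluckFoo(undefined);                   // []
pluckFoo([{foo: 1}, null, {foo: 2}]);  // [1, 2]
```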


> if you don't mind throwing an exception when given invalid input such as null or undefined

That's exactly what I would expect. If only everyone always threw an exception on any undefined, life would be so much better.


Something I find very interesting about Swift is its complete reversal of opinion from Objective-C on this topic.

Obj-C basically has the behavior GP wants: call a method on nil (aka: null) and it returns nil/false/0. It's possible to write concise & correct code that relies on this behavior.

Swift turns it into a fatal runtime error, and uses the type system to help prevent it from happening.

I think there's room for both (probably not in the same project though!). A lot of successful projects have been written in Obj-C, and some of them rely on the behavior of nil.

However, it's harder to maintain the code. You have to reason about whether nil is possible and whether the code does the right thing, or whether the original author forgot to consider the possibility. It's really nice when the static type system forces the programmer to be explicit.

Having used both paradigms, I honestly don't know which I'd prefer for JS - especially considering its lack of static typing. It might depend on whether I was writing application or library code.


There is room for both, but it should be explicit, in my opinion. Dart, for instance, has null-safe operators:

  // not null-safe:
  foo.bar.baz();

  // null-safe:
  foo?.bar?.baz();
If your type system also does null tracking, then you can see where you might have null values and decide to use the null-safe operators.
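
For comparison, JavaScript has since gained the same operator (optional chaining, added in ES2020), which short-circuits the rest of the chain on a nullish link instead of throwing:

```javascript
const foo = { bar: { baz: () => 'ok' } };
const empty = {};

foo?.bar?.baz();    // 'ok'
empty?.bar?.baz();  // undefined - the chain short-circuits instead of throwing
```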


Swift has the same syntax for its nullable types. It's also conceptually similar to Haskell's monadic bind over the Maybe type, where your example would look like

    foo >>= bar >>= baz
The nice thing about this approach is that (>>=) is not specific to the Maybe type, and can be extended to other data structures, including one that passes along error messages in the "null" type through the sad path if it encounters them. This would be in the (Either a) monad.
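
A rough JavaScript sketch of that short-circuiting bind, with made-up names rather than a real Maybe library:

```javascript
// bind: apply fn only if the value is non-nullish, else pass the nullish through
const bind = (value, fn) => (value == null ? value : fn(value));

// chain a value through several steps, stopping at the first null/undefined
const chain = (value, ...fns) => fns.reduce(bind, value);

// hypothetical lookups standing in for foo/bar/baz:
const users = { alice: { address: { city: 'Oslo' } } };

chain(users.alice, u => u.address, a => a.city);  // 'Oslo'
chain(users.bob,   u => u.address, a => a.city);  // undefined - no throw
```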


It depends on the particular ___domain of the task.

There are perfectly valid reasons why silently failing is OK.


Seriously. A really common example: ever have a website that completely fails to load when you're running an adblocker? Would you rather have a blank page, or a 99% functional website with a couple of error messages in the console?


The answer is: that's an expected failure, so you make the error handling explicit by catching the error and taking care of it appropriately (including proceeding to the rest of the page load, naturally). Implicitly handling errors behind JavaScript's loose typing rules is a recipe for disaster.


Explicitly handling errors is almost always a bad idea, no matter what programming language it is. There are very few errors you can reasonably handle, and they must be designed for. It's specific application domains that need different handling strategies: e.g. embedded, device drivers, software that's trying to maintain some very strict invariants. A web page isn't usually one of them.

Usually errors should be sent up the stack to the request / event loop and logged / telemetry / etc.

The question here is something different, though. It's what should be considered an error, vs what should use the null object pattern. I don't think anyone can make a categorical judgement on which is better without context. It's suggested here that the null object pattern implemented by lodash is desired; I don't think it's wrong in principle to rely on it, as part of a cohesive design - e.g. it's used in such a way that it doesn't make development or finding bugs harder than necessary.
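
For readers unfamiliar with it, the null object pattern mentioned here can be sketched in a few lines (nullLogger and makeWorker are made-up names, not lodash APIs):

```javascript
// An inert stand-in object: callers never need a null check.
const nullLogger = { log: function () {} };

function makeWorker(logger) {
  logger = logger || nullLogger;   // default to the do-nothing object
  return {
    run: function (x) {
      logger.log('running');       // safe whether or not a real logger was passed
      return x * 2;
    },
  };
}

makeWorker().run(21);              // 42 - no logger supplied, no crash
```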


Take a look at Common Lisp and Dylan and their respective condition/restart systems. This is what error handling should look like: they let you handle your errors where it makes sense and naturally recover from them if it's possible.


It's hard for me to imagine why you would want to .map() on a list of 3rd party tracking modules, but let's go with your example.

That sort of thing should not be handled by a low level utility library, because at best it will only do the right thing 50% of the time. It should be handled explicitly on a single interface between your software and the tracking module.

I would probably wrap it and explicitly ignore things in the wrapping code if the tracking module is missing.

And yes, I would rather see an empty page so I know to temporarily disable the adblocker, than have a page (e.g. seamless) that silently fails to order your food at the very last step.

You will argue that it's better for the users. No, it's not. If the site is broken, it's easy for them to understand. They can at least go somewhere else to order food. If everything looks like it's working but it isn't, that's very, very frustrating.

I can also catch and log errors and fix them, but hidden bugs will just stay longer and frustrate users, because they will think they did something wrong.


> Would you rather have a blank page, or a 99% functional website with a couple of error messages in the console?

It depends if you're running ads on the page... /s


Sure, we can always make up examples.

Perhaps if you are using null to represent something explicitly in your data. But silently failing on undefined is just going to lead to another bug somewhere else entirely and half a day of debugging.

I would much rather fail early and loudly than have to hunt down some anecdotal bug that happens every prime-th national holiday and is impossible to reproduce.


I have found it better for UI code to use undefined over exceptions, and for back-end code to use exceptions over undefined, as the client should have properly formatted the request.

Having functions/methods return undefined is a huge time and complexity saver for UI code as the application could still be in the process of getting input from the user that then would be passed off to the back-end code once the user was done changing their minds. No point in having a dropdown throw an error because the user is still deciding what they want to appear in the dropdown.
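
A minimal sketch of that split, with made-up function names: the UI read tolerates in-progress state by returning undefined, while the submission boundary validates and throws:

```javascript
// UI side: the user may not have picked anything yet, so just return undefined.
function selectedId(form) {
  return form && form.dropdown ? form.dropdown.value : undefined;
}

// Back-end boundary: by now the request should be well-formed, so be strict.
function submitOrder(form) {
  const id = selectedId(form);
  if (id === undefined) throw new Error('missing selection');
  return { ok: true, id: id };
}
```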


For client-side webapps? Users are just going to hit reload and move on. For just about every website I can imagine, if your two options are to leave an extremely rare bug that's impossible to reproduce, or to effectively take the website down for all users, the former is the unambiguous right choice.


Well, if the bug is so hard to reproduce it would not take the website down.


Catch the exception and pass.

It's why I like python's attitude toward this. It's explicit, not implicit and therefore easier to read and maintain.


Many times folks understand that values can be nullish. Libs like Lodash treat them as empty collections to avoid repetitive nullish check scaffolding.


> we want our function to just swallow those errors and return an empty array in such cases

No no no no. That's a silent failure and a bug. That means our code is doing something we don't intend or understand.


What's wrong with that? For large systems, you'll never have a codebase that is 100% understood or 100% matches what the designers intended. If you want a robust, working, large system, you have to account for unintended things happening some of the time. In many cases, the right thing to do is to preserve or ignore nulls. Especially for client-side JavaScript (where clients are, by their nature, untrusted, and all authentication and data validation must happen on the server whether or not it also happens on the client), if some data fails to load due to a network blip, or the end-user does something unexpected and a div isn't initialized properly, or whatever, the right behavior for the software is to keep going, and the wrong behavior is to cause a minor error to turn into a major one.

In many other cases, of course, the robust thing to do is to catch a failure early and prevent some code from executing before it can do more harm, and err on the side of the system doing nothing instead of it doing something wrong. But neither of these is a universal rule.


This is effectively wishful thinking.

If a bug causes an unexpected undefined value, it will lead your code to an undefined behavior. It might work well 999 times and wipe everything on the 1000th execution. Thankfully, Javascript is mostly limited to web browsers.

Exceptions make it so that nothing unexpected happens. This is especially useful when you do not know the whole codebase. Of course, there are many cases when you can just ignore the errors. Exceptions allow you to fine-tune this.


> If a bug causes an unexpected undefined value, it will lead your code to an undefined behavior.

Let's not confuse "undefined", a JS value that is basically like C's NULL, with "undefined behavior", the concept from e.g. the C language spec. Operations on the JS value "undefined" are perfectly well-defined in the C sense; you can reliably test for it and have a case to handle it. In particular, the well-defined behavior for Underscore/Lodash in response to mapping over "undefined" is to return an empty array. The programmer upthread is using the library's documented and well-defined behavior; there is nothing wrong with that.

It is just like how, in C, some functions (like time()) are well-defined if you pass them a NULL pointer, and some functions (like strlen()) are not, and result in undefined behavior. In this case, the functions in question are all well-defined if you pass them "undefined".
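
The distinction can be shown without lodash itself: a tiny helper (hypothetical name) whose behavior on undefined input is documented and deterministic, i.e. well-defined in the C sense:

```javascript
// "undefined" is an ordinary, testable JS value. This helper makes its
// fallback explicit: any non-array input yields an empty array, every time.
function safeMap(xs, fn) {
  return Array.isArray(xs) ? xs.map(fn) : [];  // defined behavior for any input
}

safeMap(undefined, x => x * 2);  // []
safeMap([1, 2], x => x * 2);     // [2, 4]
```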


> Exceptions make it so that nothing unexpected happens.

No, it makes it so that your program surprises the user by crashing, which is (hopefully!) pretty unexpected.

Unless, of course, you use the exceptions to provide some sane default code path that handles the problem, but that's exactly what lodash is doing for you in this example.


Crashing is exactly what the correct program should do if it hits undefined behavior.


What?

1. Where did anyone say we were hitting undefined behavior? We're hitting a well-defined JavaScript value whose name is "undefined".

2. Why is crashing correct? Don't answer in terms of language specs, answer in terms of desired behavior for whatever you're trying to do with the program. Software engineering is a tool for accomplishing other things. Sometimes, yes, software crashing is the right thing in service of that other goal. But not always.


Undefined behavior of a program is when a programmer didn't define how to handle a particular situation. If an array becomes "undefined" and the programmer didn't expect such a result (e.g. there's no if (typeof array === "undefined") { ... }), then the behavior is undefined regardless of what kinds of types there are in JavaScript.

Crashing in such a case is used to avoid incorrect functioning of the program, such as overwriting user data or introducing security vulnerabilities. By definition, if the behavior wasn't expected, the program is in an unknown state. Sure, you can gracefully handle such situations (e.g. allow the website to load other scripts if the particular piece that "crashes" isn't important for its functioning, crash just some kind of sub-process and restart it, or, if it's user input, end up with some predefined value regardless of the incorrect input, etc.), but the correct _default_ behavior is to crash; otherwise you'll end up with an unknown state, and the algorithm that you wrote will be incorrect.

I recommend anyone who wants to see how much unexpected behavior is in their JavaScript programs to install typescript@next and start adding types, compiling with tsc --strictNullChecks.


Lodash methods handle things like String#match, which returns an array or null.

More for avoiding guard scaffolding than for undefined behavior.


> If you want a robust, working, large system, you have to account for unintended things happening some of the time

Ever written a large code base with isolated I/O, functional code, and typed/static analysis? Because I have, and nothing unintended happens except at the I/O level.

When something unintended does happen, it throws an exception: something genuinely exceptional has happened.

This code base has yet to throw an exception in production, and it also hasn't had a bug in production (after running for 6 months with ~1,000 active users).


I'm going to have to dispute your definition of "large system" if it's been running for a mere 6 months and you describe it as if it had a single author. Let me know once it's changed maintenance twice and also once it's changed management twice. Robustness is not about how well a system performs in its initial conditions; it's about how well it responds to change.

Also, from the sounds of it, it doesn't seem like a distributed system. Client-side JS is by its nature a distributed system, dealing with network partitions all the time because end-user internet connections are unreliable.


How could you know without seeing what code he's talking about? Why not take his word?


There's a reason linters/smell-detectors try to catch things like unexpected type coercion, and there's a reason the web software industry is aggressively moving toward typed languages (JS -> TypeScript, for example).

If you tell a computer something it doesn't understand, it should tell you that it doesn't understand. There's no scenario where "just guess what you think I wanted to do and then do it" is a safe or reliable way for a program to run. By definition, sometimes it will work as intended and sometimes it won't. That's a bug. It's the worst kind of bug, actually: silent and undetectable until you catch the problem in the final output.


That's partly an argument for Flow or the eventual strict null flag in TypeScript.

That being said, that was my first reaction too. Null safe code is so important. How you do it (be it with Flow, Lodash, whatever), that doesn't matter, but I do find myself leaning toward libraries over native when payload size doesn't matter too much because of this.

A combination of Flow, Ramda and Sanctuary (for Maybes) if you want to be a bit more niche can give some pretty amazing results.


> That code clearly expects that xs is an array of objects.

Why not call it 'objects' rather than 'xs'?


I was on the "Python 3000" mailing list while it was being planned, and the prevailing attitude seemed to be "it'll probably take 5 years before the community moves to Python 3". (That's paraphrased and not a direct quote, though I saw people say that sort of thing almost word for word a few times.)

This was optimistic, but at least we see that the community is indeed gradually moving to Python 3, even if it's taking a few more years.


Technically the goal was a majority of new projects started using Python 3 in 5 years. Regardless, everyone knew it would take several years to reach a point like this.


I switched from Perl to Python at version 3.4, and while it might be popular for people already invested in Python 2 to downplay the syntactic improvements, they were a big deal in convincing me to switch to Python as my go-to infrastructure language.


Exactly, and I, for one, was using Python 3 for a new app, in production, in 2013, precisely 5 years after the release.


This is true, but currently most Python developers acknowledge some mistakes in the Python 3 transition. Specifically, the cost of such an abrupt compatibility break was underestimated. And that's why Python 4 won't break compatibility with Python 3.

In any case, I'm very happy to see the Python 3 transition moving forward. And let's hope this does not happen again.


Here's the standard argument, as I understand it:

- There are something like 100,000,000,000 neurons in the human brain, each of which can have up to around 10,000 synaptic connections to other neurons. This is basically why the brain is so powerful.

- Modern CPUs have around 4,000,000,000 transistors, but Moore's law means that this number will just keep going up and up.

- Several decades from now (probably in the 2030s), the number of transistors will exceed the number of synaptic connections in a brain. This doesn't automatically make computers as "smart" as people, but many of the things that the human brain does well by brute-forcing them via parallelism will become very achievable.

- Once you have an AI that's effectively as "smart" as a human, you only have to wait 18 months for it to get twice as smart. And then again. And again. This is what "the singularity" means to some people.

The other form of this argument which I see in some places is that all you need is an AI which can increase its own intelligence and a lot of CPU cycles, and then you'll end up with an AI that's almost arbitrarily smart and powerful.
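
For what it's worth, the doubling arithmetic in the bullets above can be run directly (all figures taken from them; at these numbers the crossover lands roughly 27 years out):

```javascript
// Back-of-the-envelope version of the argument above, using the comment's figures.
const synapses = 100e9 * 10e3;    // ~1e15 synaptic connections in a human brain
let transistors = 4e9;            // transistors in a modern CPU
let months = 0;
while (transistors < synapses) {  // Moore's law: double every 18 months
  transistors *= 2;
  months += 18;
}
const years = months / 12;        // 27 - years until transistors exceed synapses
```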

I don't hold these views myself, so hopefully someone with more information can step in to correct anything I've gotten wrong. (LessWrong.com seems to generally view AI as a potential extinction risk for humans, and from poking around I found a few pages such as http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/)


Ok, to both you and 'Micaiah_Chang cross-thread:

I do understand where the notion of hockey-stick increases in intellectual ability comes from.

I do understand the concept that it's hard to predict what would come of "superintellectual" ability in some sort of synthetic intelligence. That we're in the dark about it, because we're intellectually limited.

I don't understand the transition from synthetic superintellectual capability to actual harm to humans.

'Micaiah_Chang seems to indicate that it would result in a sort of supervillain, who would... what, trick people into helping it enslave humanity? If we were worried about that happening, wouldn't we just hit the "off" switch? Serious question.

The idea of genetic engineering being an imminent threat has instant credibility. It is getting easier and cheaper to play with that technology, and some fraction of people are both intellectually capable and psychologically defective enough to exploit it to harm people directly.

But the idea that AI will exploit genetic engineering to do that seems circular. In that scenario, it would still be insufficient controls on genetic engineering that would be the problem, right?

I'm asking because I genuinely don't understand, even if I don't have a rhetorical tone other than "snarky disbelief".

'sama seems like a pretty pragmatic person. I'm trying to get my head around specifically what's in his head when he writes about AI destroying humanity.


Er, sorry for giving the impression that it'd be a supervillain. My intention was to indicate that it'd be a weird intelligence, and that by default weird intelligences don't do what humans want. There are some other examples which I could have given to clarify (e.g. telling it to "make everyone happy" could just result in it giving everyone heroin forever; telling it to preserve people's smiles could result in it fixing everyone's face into a paralyzed smile. The reason it does those things isn't because it's evil, but because it's the quickest and simplest way of doing it; it doesn't have the full values that a human has).

But for the "off" switch question specifically, a superintelligence could also have "persuasion" and "salesmanship" as an ability. It could start saying things like "wait no, that's actually Russia that's creating that massive botnet, you should do something about them", or "you know that cancer cure you've been looking for for your child? I may be a cat picture AI but if I had access to the internet I would be able to find a solution in a month instead of a year and save her".

At least from my naive perspective, once it has access to the internet it gains the ability to become highly decentralized, in which case the "off" switch becomes much more difficult to hit.


So like it's clear to me why you wouldn't want to take a system based on AI-like technology and have it control air traffic or missile response.

But it doesn't take a deep appreciation for the dangers of artificial intelligence to see that. You can just understand the concept of a software bug to know why you want humans in the observe/decide/act loop of critical systems.

So there must be more to it than that, right? It can't just be "be careful about AI, you don't want it controlling all the airplanes at once".


The "more to it" is "if the AI is much faster at thinking than humans, then even humans in the observe/decide/act loop are not secure". AI systems having bugs also implies that protections placed on AI systems would also have bugs.

The fear is that maybe there's no such thing as a "superintelligence proof" system, when the human component is no longer secure.

Note that I don't completely buy into the threat of superintelligence either, but on a different issue. I do believe that it is a problem worthy of consideration, but I think recursive self-improvement is more likely to be on manageable time scales, or at least on time scales slow enough that we can begin substantially ramping up worries about it before it's likely.

Edit: Ah! I see your point about circularity now.

Most of the vectors of attack I've been naming are the more obvious ones. But the fear is that, for a superintelligent being, perhaps anything is a vector. Perhaps it can manufacture nanobots independent of a biolab (do we somehow have universal surveillance of every possible place that has proteins?), perhaps it uses mundane household tools to MacGyver up a robot army (do we ban all household tools?). Yes, in some sense it's an argument from ignorance, but I find it implausible that every attack vector has been covered.

Also, there are two separate points I want to make, first of all, there's going to be a difference between 'secure enough to defend against human attacks' and 'secure enough to defend against superintelligent attacks'. You are right in that the former is important, but it's not so clear to me that the latter is achievable, or that it wouldn't be cheaper to investigate AI safety rather than upgrade everything from human secure to super AI secure.


First: what do you mean 'upgrade everything from human secure'? I think if we've learnt anything recently it's that basically nothing is currently even human secure, let alone superintelligent AI secure.

Second: most doomsday scenarios around superintelligent AI are, I suspect, promulgated by software guys (or philosophers, who are more mindware guys). It assumes the hardware layer is easy for the AI to interface with. Manufacturing nanites, bioengineering pathogens, or whatever other WMD you want to imagine the AI deciding to create, would require raw materials, capital infrastructure, energy. These are not things software can just magic up, they have to come from somewhere. They are constrained by the laws of physics. It's not like half an hour after you create superintelligent AI, suddenly you're up to your neck in gray goo.

Third: any superintelligent AI, the moment it begins to reflect upon itself and attempt to investigate how it itself works, is going to cause itself to buffer overrun or smash its own stack and crash. This is the main reason why we should continue to build critical software using memory unsafe languages like C.


By 'upgrade everything from human secure' I meant that some targets aren't necessarily appealing to human attackers but would be to an AI. For example, for the vast majority of people, it's not worthwhile to hack medical devices or refrigerators; there's just no money or advantage in it. But for an AI that could be throttled by computational speed, or that wishes people harm, they would be an appealing target. There just isn't any incentive for those things to be secured at all unless everyone takes this threat seriously.

I don't understand how you arrived at point 3. Are you claiming that somehow memory safety is impossible, even for human level actors? Or that the AI somehow can't reason about memory safety? Or that it's impossible to have self reflection in C? All of these seem like supremely uncharitable interpretations. Help me out here.

Even ignoring that, there's nothing preventing the AI from creating another AI with the same/similar goals and abdicating to its decisions.


My point 3 was, somewhat snarkily, that AI will be built by humans on a foundation of crappy software, riddled with bugs, and that therefore it would very likely wind up crashing itself.

I am not a techno-optimist.


Didn't you see Transcendence? The AI is going to invent all sorts of zero days and exploit those critical systems to wrest control from the humans. And then come the nanites.


What if the AI was integral to the design and manufacturing processes of all the airplanes, which is a much more likely path?

Then you can see how it gains 'control', in the senses that control matters anyway, without us necessarily even realizing it, or objecting if we do.


If the math worked out that way, a cluster of 25 or so computers should be able to support a full-blown AI. But clusters of tens of thousands of computers are still simply executing relatively simplistic algorithms. So I would estimate that either the number of transistors required for AI is much higher than the number of neurons (which are not easily modeled in the digital ___domain) or that our programming bag of tricks needs a serious overhaul before we could consider solving the problem of hard AI.


That sounds about right. There's speed of thought (wetware brains currently win) and then there's speed of evolution. Digital brains definitely win that one. Because some wetware brains are spending all their time figuring out how to make the digital ones better. Nobody is doing that for the soggy kind.

The singularity will happen when the digital brains are figuring out how to make themselves better. Then they will really take off, and not slow down, ever.


Debates on Brendan Eich and Condoleezza Rice seem to boil down to three separate questions:

1) Is there any belief or action not directly related to someone's job that should disqualify them from their position?

2) If the answer to #1 is yes, does this specific issue cross that threshold?

3) Does the answer to #1 change based on their position within a company, e.g. a regular employee vs a CEO?

I've seen well-reasoned arguments for many different combinations of those opinions, and I have a lot of respect for most of the combinations I've seen, e.g.

- Even if a white separatist contributed money to a campaign to revive "separate but equal" Jim Crow legislation, we shouldn't oppose their employment if they have a history of working well with all co-workers/employees in a diverse company.

- Some political beliefs/actions would disqualify someone, but gay marriage equality is not (yet) beyond the pale, especially given the high percentage of Americans who hold the same beliefs.

- Gay marriage is indeed an issue that should reasonably factor into employment positions, but only for a select few leadership positions such as CEO due to the disproportionate power within a company held by people in those few positions.

What makes these debates problematic is people talking past each other without realizing they're debating different questions. This gets worse because each of the 3 questions I listed have many sub-categories.

Since grellas posted about question #1, I hope people recognize that and tailor their responses to the argument he is actually making, in the spirit of "colleagues trying to reason out the truth together". Because he argued for employment to be belief-neutral, I'll summarize the three arguments I'm seeing on these threads which are relevant to that specific question:

1) Past behavior is a signal for future action, and Rice's position on the Board of Directors sends an unacceptable signal about how seriously Dropbox takes privacy. Even a person against boycotts based on political or religious beliefs has strong reason to oppose Rice's appointment to the Board.

2) Rice is a war criminal who happens to have not been prosecuted. Refusing to do business with a company who appointed a criminal who'd committed equivalently-serious-but-non-political crimes and escaped prosecution wouldn't raise any eyebrows, so why should this?

3) All people have a responsibility to discourage behaviors which are provably detrimental to the functioning of a well-ordered society. General litmus-testing of beliefs (or even actions) causes more problems than it solves, but Rice's behavior was so far over the line that we are morally obligated to marginalize her and all of her colleagues who behaved similarly during the Bush administration.

I'm not sure whether I agree with any of the above arguments, but I respect each of them, and I hope that either Hacker News figures out how to debate them civilly or that the moderators pull all stories like this off the frontpage.


> Since grellas' post was an argument about question #1, I hope people are able to recognize that and tailor their responses to the argument he is actually making, in the spirit of "colleagues trying to reason out the truth together".

I think grellas' comment was tactless. He used his karma to publish a largely meta argument, ignoring the debate as well as the link, and not responding to anyone else afterwards. This isn't "colleagues trying to reason out the truth together" to me. It's also indistinguishable from the type of comment you would post if you wanted to derail the more specific discussion, often because your viewpoint lacks good arguments.


It might sound tactless because it breaks down the illusion of moral superiority that most sides in a political battle believe they have. In general, people don't like being told they may in fact be wrong, they want to believe their side is unique, superior, and the other side is committing crimes against humanity/unborn children/whatever it may be. When a huge proportion of the broader population (not necessarily HN) disagrees, in order to function as a society we need to remove these litmus tests. (The most compelling Rice-specific argument is about internet privacy vs. government surveillance, which goes beyond this - he's speaking of the "personal becoming political" in general)

It's completely relevant to the debate though and not de-railing, when it directly addresses the point of boycotting Dropbox for something political. Our society is becoming more polarized on these issues and (internet) forums of self-selecting ideologies and subgroups contribute to this. Going boycott is one weapon in an arsenal of political expression - now how often should people use it? (The next level of course is street protest, institutionalized ideology, and the extreme is fighting a war over it).

If we used a boycott at every opportunity, at every disagreement, where would we be? Would Christians, Muslims, and atheists ever do business with each other? Would pro-lifers and pro-choicers be able to open their mouths without calling each other baby murderers/misogynists? He's basically saying, draw the line closer to where the overall population is, so society can function without imploding. And we generally go about this on an everyday basis. Geographical self-segregation also tends to help. It's a moral cognitive dissonance, but one that people draw various lines for. My theory is those who have a more logical/black and white and less socially influenced conception (which may be more common in geeks) have a harder time squaring with this cognitive dissonance.


> If we used a boycott at every opportunity, at every disagreement, where would we be?

If we used a slippery slope argument at every opportunity, at every disagreement, where would we be? Would we be able to buy milk for fear of the veritable avalanche of milk we may end up buying in the future? Could we stand the idea of going to work one day under the contemplation of spending the next thousand years, every day, going to work?

Boycotts are not new. They are not novel to Eich's situation and it working is not a sign of a Brave New World in which every person boycotts every other person.

If I'm wrong, and in ten years I can't talk to you because I have a beard and you don't, please feel free to say "I told you so," but in the meantime this kind of argument is just ridiculous.


The point is not the slippery slope of "all boycotts are bad" or "boycott everything!" but rather that we've become too trigger-happy and insular in boycotting non-tech political opinions that while mainstream outside of Silicon Valley, are not inside.

The entire debate is on when a boycott is appropriate and grellas is arguing to draw the line farther than the current one that's solidifying in tech. Cynically, it just has to do with fitting in with your group politically, be it SF tech or Southern Baptist (no Planned Parenthood donations there) and the point is - what happens when you're in the moral minority? Because Rice chose to enter an SF tech company rather than a random American one, there is way more backlash.

Ultimately, the Rice situation/backlash has a far stronger business case rather than a pure political boycott, due to objections of surveillance/digital security for cloud providers (hence the entire host outside of America movement). Here I mainly focused on the meta-debate about boycotts, and I suppose grellas decided to comment on the broad pattern given the original article's major headlines about the Iraq War.

Just like war is not universally wrong, neither are boycotts - it's just the degree to which we ask whether they are justified. Vietnam, Iraq, the Gulf War, Korea: they were all controversial - and not "0.1% of the crazy population" controversial, but rather "front page of TIME, the Economist, the BBC" controversial.


> what happens when you're in the moral minority?

in the moral minority where people in positions of power think torture is a-okay?

I think you'll find yourself shit out of luck regardless of your past choices in boycotting or not.

It does, however, have a tiny influence on the chances of actually finding yourself in this unenviable position in the future.


This is such an unbelievably good post. Bravo, and welcome to HN, if you're actually new here.


Since when was torture directly linked in religion?


I agree with you. I think grellas' argument is so flawed that the only reason to make it and upvote it is as a distraction and current best defense while the Dropbox team works on something more believable. The backlash against Rice is over her actions, not her beliefs, and has a tangible connection to a matter of great concern with cloud hosting - which is government-sanctioned data collection.


Let's get this straight:

> I think grellas argument is so flawed that the only reason to make it and upvote is as distraction and current best defense while the dropbox team works on something more believable.

This is some kind of conspiracy to distract HN, because... DropBox fears HN? And grellas has been hired to carry it out by commenting?


No I think people like to defend things associated with people or startup incubators they like. So in a sea of negativity they latch onto any argument in favour of the thing despite its lack of merit.


Having regularly read and appreciated grellas' comments here, I think he is responsible for some of the best and most interesting comments on this site. I have no reason to believe his reasoning is not sincere.


I actually just agree with grellas. I'm indifferent to Dropbox.


grellas' argument doesn't sit right with me either, but I disagree that his argument was tactless or intended to derail anything. When grellas writes:

> Principle is more important here than a particular outcome. What happens with Ms. Rice is not the issue here.

I get why you'd see that as trying to derail more specific discussion, and why you'd disagree with that statement in general. However, I see it as part of a good-faith argument that blocking employment based on political beliefs (or even actions) is generally harmful to society, even if we feel we have valid reasons in a specific case.


In what way is it harmful to society?


grellas makes two basic arguments in the post I replied to, which I will attempt to summarize:

1) Refusing employment based on beliefs has been historically bad, e.g. Christians refusing to hire or do business with Jews, and blacklists for suspected Communists. Such things are in fact SO bad that they outweigh any/all good that might be done by applying such filters in cases where we feel they're justified.

2) Startup culture specifically is about joining together diverse people to build great things. Even if we stipulate that filtering out business leaders with "bad" political beliefs had some benefit, there's disproportionate harm done by the startups that will not succeed because they handicapped themselves in this way.

I'm not sold on either of those arguments, though I think they both have merit.


Your first point is why I find grellas' comment misleading and detracting from the real issues. Those two examples you name, as well as the examples grellas names, are not actually based on beliefs but are based on group membership (or suspected group membership). That would be wrong and I'd agree.

However, this argument is misleading because the featured article is very particular about specific actions by this person and dismissing them based on those grounds, not because Rice belongs to any particular group and attributing all properties and beliefs of that group to her. For instance, while she is responsible for war crimes and torture, we're not automatically assuming she holds the same beliefs as, say, Pol Pot.

Same goes for Brendan Eich, though donating $1k to anti-gay legislation is arguably somewhat less evil than actively supporting and authorizing the torture regime of the world's biggest military power. There's really not a lot of wiggle room there.


It harms our ability to have open and candid discussions on contentious topics.


Rice did more than just have an opinion and participated in candid discussions. She acted on her opinion.

I can have a candid discussion with people who think that any immigrant should be shot at the border. I will disagree with the person, but everyone is allowed to have whatever political belief they want. However, once they start shooting people, a line is crossed and candid discussion is no longer an option. Those actions would also cause repercussions, which has nothing to do with political, religious, or other forms of belief.


I think it's worse than tactless; it's brainless.

Customers of a business care about who sits on the Board and exercise their right to take their business elsewhere.

The horror!

It sounds like the whimper of someone who stands to gain from a Dropbox IPO.

"There is only one boss-the customer. And he can fire everybody in the company from the chairman on down, simply by spending his money somewhere else." - Sam Walton

Yes, he can fire Board members too.

s/customer/user/


Thank you for doing a fantastic job synthesizing others' arguments. HN could benefit from more level headed comments like this on controversial threads.


What about people that can separate their personal from their business? Eich never did anything at all at Mozilla to push his point of view on gay marriage. If #1 is true, then that plays to him not doing so as well as CEO.

Rice is a different story I guess. You have to decide if you think #2 is right or not. I don't happen to think she is but I can see why people are hesitant for her to be on the board.


> What about people that can separate their personal from their business? Eich never did anything at all at Mozilla to push his point of view on gay marriage. If #1 is true, then that plays to him not doing so as well as CEO.

Right, we definitely had (at least) two different signals about how Eich would behave as CEO with respect to LGBT employees. And how he behaved in practice is arguably a much stronger signal than his political donations, especially when coupled with his statements of support for Mozilla's inclusive culture and promise to maintain it.

The strongest counterarguments I've seen go something like this:

- Eich was never previously in an executive leadership role; being CTO is important but not in the same way as CEO. So his past behavior is less of a signal than his supporters would have us believe, especially since we don't know about every interaction he's ever had with his LGBT colleagues.

- It's easy to accept that Eich had no plans to e.g. try to roll back domestic partner benefits for LGBT couples; with Mozilla's current culture, that would have zero chance of happening anyway. But given his political donations, are we 100% sure that he wouldn't be in favor of it if the culture shifted? If not, then it's reasonable to oppose him as CEO.

- Even if we expect zero policy changes driven by Eich's beliefs, as CEO he would be making decisions about people's roles within the company. It's reasonable to be concerned about how fair-minded he would be, particularly if someone felt they were being marginalized.

I'm not sold on these arguments, but I think they're sincere and I cringe every time someone categorizes them as a "witch hunt".


I think there are three very different questions to consider here:

1) Should an employee be held accountable for his/her political beliefs. (Heck no.)

2) If someone with different political beliefs than I runs a company, will I boycott it? Ex - Owner of Whole Foods is against Universal Healthcare, so I'm boycotting, though I approve of him running the company. In such a case we support our ideologies through capitalism.

3) Should someone who runs a technology company - a multi-billion, multi-national that shapes our future and impacts our daily work lives & culture - should someone who actively holds and acts upon prejudice be allowed to run such a powerful company? No.

That third one is important - and somewhat scary - to consider. We've crossed a threshold. Large technology companies - and many startups - are literally creating the future. We are shaping the world in a way that goes way beyond the capacity of companies in decades past. There is a far greater responsibility to consider.


Mozilla is a community with a corporation attached. That community (as with most communities) is built on a set of shared values, and arguably needs to be led by someone sharing those values.

Dropbox is a company who exists to enrich their shareholders, and has customers, not community members.

That's the fundamental difference here.


Don't fool yourself: they're both corporations and thus money making entities. Clearly, the difference is minor at best, since this behavior is spreading beyond the community based organizations.


Don't agree. I like to see tech companies trying to become more than the old-fashioned 9-to-5 grind without morals/ethics and only interested in the money. We have enough of those ruining the world already.

I'm happy to see mozilla rise above bigotry and get Eich out and I hope similar happens to Rice.

It's one thing to have your private opinion; I'm not calling for stormfront.org to be shut down (as extremely disgusting as it is). It's another thing to put action to your opinions in the form of taking others' rights away (Prop 8) or wiretapping/murdering/torturing people. It's time those of us in tech stop pretending we live in a vacuum without politics and make sure we send a clear message that we are (and should be) very much against discrimination based on race/gender/orientation or gross human rights violations.


They didn't rise above it. Eich stepped down. He should not have had to do that. He was CTO for many years. He was at Mozilla for many years. During that time he never tried to codify his beliefs into Mozilla corporate policy and I have zero reason to believe that he would have done so as CEO.


The difference is not minor. Mozilla wouldn't exist, or at best be a tiny husk of what it is today without the community surrounding it.


Mozilla wouldn't exist without a search bar that defaults to a search engine that pays them back a share of resulting ad revenue.


Sure. Mozilla also needs revenue of some kind to stay afloat. But unless you are making the argument that only bigots are capable of running a successful business, I'm not really sure what your point is.


... that's not my point at all


Then what is your point?


He stated it quite clearly.

Someone higher in this thread said that Mozilla is not like Dropbox because it's a "community", and he came to say that Mozilla is a "money making entity" just as much as Dropbox is.

He never said or implied anything about "only bigots being capable of running a successful business".

If we are to assume anything from what he said, it's that whether the CEO is a bigot or not is beside the point.


Except those are completely different commenters?


Are you unable to follow a simple discussion thread?

nerfhammer wrote "Mozilla wouldn't exist without a search bar that defaults to a search engines that pay them back a share of resulting ad revenue", responding to you in order to support what burntroots said (that Mozilla is also a corporation, a money making entity, etc).

So that was "his point" as well -- in support of burntroots' argument.

What's difficult to understand? And where did anybody say that "only bigots are capable of running a successful business"?


It sounds like you're not aware of how the non-profit Mozilla Foundation own the for-profit Mozilla Corporation. This is not a minor difference.

(former Mozilla employee here)


Rice is a war criminal who happens to have not been prosecuted.

That's a pretty serious claim, and more a matter of opinion. Depending on the situation, one could point such a charge at any person who was in a position of authority in a government of a nation that was fighting a war, if one were so inclined.

Shouldn't a person's status as a war criminal depend on whether they've actually been charged, tried, convicted, and sentenced?


> Shouldn't a person's status as a war criminal depend on whether they've actually been charged, tried, convicted, and sentenced?

The US doesn't recognize or allow jurisdiction by international organisations that try for war crimes, such as the ICC.

So if we're going to follow your definition, US politicians would be immune to war criminal status.

> Depending on the situation, one could point such a charge at any person who was in a position of authority in a government of a nation that was fighting a war, if one were so inclined.

Not really. It's entirely possible to wage war without committing war crimes. In fact that's part of the reason why the term even exists as defined by the Geneva Conventions and the ICC.

https://en.wikipedia.org/wiki/War_crimes

Regardless, the US has committed war crimes in the "War on Terror". The following link lists a couple of situations and events that have factually happened and fall under the definition.

https://en.wikipedia.org/wiki/United_States_war_crimes#.22Wa...

Then there's Condoleezza Rice's role in this:

https://en.wikipedia.org/wiki/Condoleeza_Rice#Role_in_author...


>Shouldn't a person's status as a war criminal depend on whether they've actually been charged, tried, convicted, and sentenced?

No, not really. After all most war criminals are never tried, especially if they are on the winning side.

People can have and state their opinions. It's not like recent history is that obscure for someone not to be able to come to a conclusion.


Anyone with a shred of conscience would have resigned from the Bush administration sooner rather than later. The war against Iraq was against international law, since it was not sanctioned by a UN resolution, regardless of the war crimes committed. Unfortunately US officials are above international law, because the US has not ratified the ICC (International Criminal Court) treaty.


Guido explains in an old blog post (http://www.artima.com/weblogs/viewpost.jsp?thread=147358) that he considers all of the possible implementations of multi-line lambdas to be un-Pythonic:

> But the complexity of any proposed solution for this puzzle is immense, to me: it requires the parser (or more precisely, the lexer) to be able to switch back and forth between indent-sensitive and indent-insensitive modes, keeping a stack of previous modes and indentation level. Technically that can all be solved (there's already a stack of indentation levels that could be generalized). But none of that takes away my gut feeling that it is all an elaborate Rube Goldberg contraption.
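To make the restriction concrete, here's a minimal Python sketch (the function names are invented for illustration): a lambda body is limited to a single expression, and anything longer becomes an ordinary `def`, which the indentation-sensitive lexer already handles:

```python
# A lambda body must be a single expression -- this is valid:
double = lambda x: x * 2

# A multi-statement body would be a syntax error inside a lambda,
# so the idiomatic workaround is a named function, whose indented
# block the existing lexer rules handle without any mode-switching:
def clamp_and_double(x, lo=0, hi=10):
    x = max(lo, min(hi, x))  # multiple statements are fine here
    return x * 2

print(double(3))             # 6
print(clamp_and_double(15))  # 20
```

This is why Guido's answer to "just allow statements in lambdas" has always been "use a def": the named function gets the full statement syntax for free.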


Definitely, which means that hospitals soak up a lot of uncompensated E.R. costs associated with people whose long term health issues are not being treated. Those hospitals are not required to actually provide that long-term care, e.g. chemotherapy regimens. (Unless you're saying they are, which would be happy news to me, since I understood the situation to be a lot bleaker than that for the uninsured.)


> War-like cultures would still exist even if the Federation believes they themselves have moved beyond that type of thing.

An alternate take compatible with patio11's irreverent theory: war-like cultures exist only in the sense that some cultures have different LARP preferences.

Supporting evidence: how did the Klingon Empire (who are not actually part of the Federation) make it through their nuclear age without blowing up their planet? How do they continue to keep pace technologically with other species? They steal a lot of technology, e.g. they never invented the warp drive themselves, but that would only get them so far and wouldn't keep them competitive with the Federation.

Patio11's oddball theory of Star Trek as a documentary about a future LARP may not be correct, but it explains away these kinds of questions better than any "real" theory that I've seen!


Oh agreed on all counts. Was just pointing out you need to have at least a like minded set of people to 'do battle' with the other LARPers!

Also I thought the Klingons were part of the Federation at one point during DS9 era? Or perhaps that was just an alliance...


> Also I thought the Klingons were part of the Federation at one point during DS9 era? Or perhaps that was just an alliance...

There was indeed a military alliance between the Federation and Klingon Empire against the Dominion. A friend of mine (uncharitably) referred to the war against the Dominion arc as the writers, "exploring what Star Trek could be... if it was more like Babylon 5."


And then we discovered that it was all really just Ronald Moore preparing for what eventually became Battlestar Galactica. Weird pseudo-religious nonsense included.

