Ephemeral is definitely not EBS, as its I/O does not contribute to saturating an instance's network connection (I've tested that). EBS has a ceiling for throughput (the instance's network throughput) and competes with your other network traffic, which is interesting for database workloads for obvious reasons. In addition, every single benchmark I've seen run reports ephemeral behaving very differently from EBS.
As ephemeral storage disappears when you stop an instance (presumably when the system is given the opportunity to relocate your instance to another machine), I've always suspected that it lives on local disks in the virtual host's chassis -- as opposed to a SAN or other kind of network-attached storage. As such, you probably end up with the same unpredictable performance you find in all virtualized resources, since it is very unlikely that Amazon is giving you dedicated storage.
Before benchmarking ephemeral storage you have to pre-warm it (write every block once, since the first touch of each block is slower), which might have contributed to your findings. Ephemeral is worlds better than EBS, particularly in outage scenarios; if I could convince everybody on planet Earth to stop using EBS, it would be a noble cause.
It's not a throttle; it's the "physical" capacity of the "interface". If you're on an instance with gigabit connectivity and you're doing 1 Gb/s of EBS I/O, other network chatter will suffer, probably fairly dramatically. That's why the high-I/O instances have 10-gigabit connectivity, as I understand it.
Happy to be proven wrong, but this is based on a year or so of experience dealing with EBS. You can't see the EBS traffic in your tools (at least, not that I've been able to find), which complicates things.
It's all kinds of different on VPC instances, so I suspect the network interface model -- and possibly EBS connectivity -- is different on those. So, who knows? I'd kill for Amazon to be more forthcoming here so I could understand the infrastructure running my fleet, but I don't, and they aren't.
> Well Objective C is a separate language entirely.
Not quite; Objective-C is a superset of C. You can write pure C and compile it as Objective-C all day long.
This is an important point, because people assume their C knowledge doesn't translate to Objective-C, mainly because of claims like "it's entirely different". At its core, it really isn't -- the superset syntax becomes straight C underneath.
Beyond that, you can even write Objective-C in pure C99. The brackets, class definitions, and all the rest are a sort of syntactic sugar that converts directly to a handful of plain C runtime functions (you can find them all here: https://developer.apple.com/library/mac/#documentation/Cocoa...)
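To make that concrete, here's a minimal sketch (assuming a Mac with the Objective-C runtime headers; the file name is hypothetical, compile with something like "cc msg.c -framework Foundation") of roughly what a message send lowers to:

    // Roughly what the compiler emits for [NSString string] and
    // [str length], written in plain C against the runtime. A sketch
    // of the lowering, not the literal codegen.
    #include <objc/message.h>
    #include <objc/runtime.h>
    #include <stdio.h>

    int main(void) {
        // [NSString string] -- a class message
        Class cls = objc_getClass("NSString");
        id str = ((id (*)(Class, SEL))objc_msgSend)(cls, sel_registerName("string"));

        // [str length] -- an instance message
        unsigned long len = ((unsigned long (*)(id, SEL))objc_msgSend)(str, sel_registerName("length"));

        printf("length = %lu\n", len);  // prints 0 for the empty string
        return 0;
    }

Every bracketed message send becomes an objc_msgSend call like those; the casts are just there because modern SDKs declare objc_msgSend with no fixed prototype.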
Diesel, on-highway in New England, is averaging $4.205/gal[1] (probably more in the city, and especially more in a disaster situation). Applying your information to the above question and assuming they can get just shy of five gallons per bucket, it's probably a fair estimate that each bucket carries more than $20 of diesel fuel and accounts for seven minutes of generator time. So, about $3 per minute, or roughly 5 cents every second.
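For anyone checking my math, a quick back-of-the-envelope in C (the five-gallons-per-bucket and seven-minutes figures are the assumptions stated above, not measurements):

    // Rough fuel-cost arithmetic; all inputs are estimates.
    #include <stdio.h>

    int main(void) {
        double dollars_per_gallon = 4.205; // New England on-highway diesel
        double gallons_per_bucket = 5.0;   // assumed: just shy of five gallons
        double minutes_per_bucket = 7.0;   // assumed generator runtime per bucket

        double per_bucket = dollars_per_gallon * gallons_per_bucket; // ~$21.03
        double per_minute = per_bucket / minutes_per_bucket;         // ~$3.00
        double cents_per_second = per_minute * 100.0 / 60.0;         // ~5.0

        printf("$%.2f/bucket, $%.2f/min, %.1f cents/sec\n",
               per_bucket, per_minute, cents_per_second);
        return 0;
    }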
I got a similar feeling at my last^2 job when we had a client run a Super Bowl ad, for which they were paying a million dollars. That works out to something like $30/millisecond.
FYI: #2 heating oil and diesel are pretty much the same thing, the only differences being dyes in the heating oil (since it's often taxed differently) and additives in diesel (for example, in the winter, to stop it waxing up in the cold). Not sure about sulfur content differences when comparing #2 to the low-sulfur diesels you see, but I doubt a few hours of burning higher-sulfur fuel will do anything.
Also, #2 heating oil is cheaper than diesel in New England, it's about $3.70/gal.
I, like pg and hopefully other Hacker News readers, am pretty sick of every story about something cool being immediately derailed by an industry know-it-all dismissing the innovation as "not cool enough". pg calls it middlebrow dismissal[1]. Rather than being treated as an imperfect jumping-off point for further innovation, everything has to be world-changingly perfect to have a reasonably sane comments section.
I'm actually glad this thread got pushed down, so maybe some actually informative discussion can rise up rather than an analysis of how minutely the people behind this screwed up.
Another negative trend that's clearly happened in this thread is drive-by voting: all of your interlocutor's posts have negative points, whereas the only posts in the thread that deserved any downvotes were the initial post (the only legitimate instance of the poorly-named 'middlebrow dismissal' that I can see) and the one where you put words in his mouth at the end ("Oh, so they're liars now?"). Postulate: very few people read anything but the first post and its reply, and upon doing so most upvoted all of your posts in the thread and downvoted all of your interlocutor's. Intelligent discussion indeed.
I didn't put words in his mouth, he wrote that they were lying to managers and possibly being deceptive. You also misquoted me, which is ironic given your criticism.
Your postulation is unwarranted, by the way, since my posts have been voted up and down according to their depth in the thread. The top comments have been upvoted more than the lower ones, and the lower ones have actually been muted somewhat. I would actually suggest that people are reading the thread, voting appropriately, and probably not making it all the way to the deepest parts of the thread.
Your sweeping characterization of mindless Hacker News voters acting by whim is disconcerting, to say the least. That we're so fixated on the voting here when karma is absolutely worthless is equally annoying. I am, however, intrigued by the shift from very light gray back to legible that has taken place on his comments since the beginning of this meta discussion (starting with ww520's remark); there might be something to what you're saying, which is that people are acting based upon what others are saying -- that's troubling.
When I see dismissals I look for a particular kind of fallacy, which this is an example of: Reasoning backwards to support the opinion. "This tool is unknown to me and their claimed improvements are dramatic" - "I would not trust an unknown tool and I do not trust their marketing efforts" - "Here's some mix of facts and expert's assumptions that supports the idea that you can't trust it."
When you're reasoning backwards you're on thin ice, because it's easier to let a logical fallacy through as you assemble your "facts and assumptions" in the rush to make your point known. You can "win the battle" (by being quick) but "lose the war" (by being wrong); when I catch myself doing it, I have to either cancel the post or put extra effort into making sure I have a valid argument (and often, after doing enough research, I find I don't, or that I've gone too far outside my ___domain to know for sure).
Forward reasoning usually results in very straightforward critiques like "I used this but it was not appropriate for these situations..." or "it completely failed in this case..." or "it turned out to be unnecessary for the project I was on." Kevin's posts (both the initial one and the follow-ups) get muddled because he has to weave together several minor points; each successive rebuttal shifts his target, so that the final opinion stays the same even though by the end he's reduced to "they're lying."
(A good example of a game developer who has put some serious effort and notetaking into finding ROI on a new tool is John Carmack and his forays into static analysis.)
The only way you know if your product is imperfect is by other people constructively criticising it. But instead you would rather all of us stand around in a circle cheering them on about how wonderful it is and politely ignore any problems.
Dismissal is not constructive criticism. There's a sizable difference between "meh, nothing to see here" (which is the original comment in this thread) and "here's how this could be better".
Ruling out a product and dismissing it based on projected ROI is not criticism. That would be called "making a business decision," not providing constructive feedback.
Well, there are a lot of Apple fans here, and a core part of the Apple aesthetic is "if it isn't perfect and doesn't have that 'wow' factor, better to leave it out".
glibc's malloc() is known to be lackluster under parallel workloads. Alternatives do perform better in certain circumstances, but profile before making the switch -- your situation might not be one of them.
It's possible to inject tcmalloc into an executable via the dynamic linker's LD_PRELOAD mechanism on Linux, too, without recompiling[1].
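As a sketch of how the preload trick works (the file and program names here are made up for illustration), a shared object that interposes malloc() looks something like this; injecting tcmalloc is the same mechanism, just with a full allocator behind the symbols:

    // count.c -- a toy malloc() interposer. Build and inject:
    //   cc -shared -fPIC -o libcount.so count.c -ldl
    //   LD_PRELOAD=./libcount.so ./your_program
    // (For tcmalloc itself, point LD_PRELOAD at libtcmalloc.so instead.)
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>
    #include <stdio.h>

    static size_t calls; // not atomic; fine for a sketch

    void *malloc(size_t size) {
        // Find the next malloc in link order (glibc's, normally). Real
        // interposers also guard against dlsym allocating during lookup.
        static void *(*real_malloc)(size_t);
        if (!real_malloc)
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        calls++;
        return real_malloc(size);
    }

    // Report at exit; avoid printing inside malloc() itself, since
    // stdio may allocate and recurse.
    __attribute__((destructor))
    static void report(void) {
        fprintf(stderr, "malloc() called %zu times\n", calls);
    }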
I'm quite surprised that idTech4 was using _any_ malloc implementation for anything beyond the initial allocation of the memory pool used by its actual runtime allocator.
It actually does use its own heap allocator, with some additional allocators built on top of it. I don't know if it also uses the system / libc allocator for anything, but I'd be surprised if it did.
Aside from glQuake (which used a custom allocator for most of the game, but used malloc quite often in the GL renderer), all of id's other engines used custom allocators for everything.
Edit: just skimmed the white paper. They were talking about the custom allocator, which was, of course, not thread-safe, because it was designed for a single-threaded game engine. Replacing it with an allocator designed for parallel usage (tcmalloc, for example) would have helped, but so would having separate allocation pools for different threads (which you might do for a game engine designed for multithreading). Instead, they made the allocator thread-safe by using their tool to analyze it and then sprinkling it with mutexes. That would probably be why it was a bottleneck.
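To illustrate the difference with a toy bump allocator (a sketch only -- it ignores alignment and free()): wrapping one shared pool in a mutex serializes every thread, while per-thread pools sidestep the contention entirely.

    #include <pthread.h>
    #include <stddef.h>

    enum { POOL_SIZE = 1 << 16 };

    typedef struct {
        char   buf[POOL_SIZE];
        size_t used;
    } Pool;

    // Approach 1: one shared pool behind a mutex -- what "sprinkling
    // with mutexes" amounts to. Correct, but every allocation from
    // every thread queues on the same lock.
    static Pool shared_pool;
    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

    void *alloc_shared(size_t n) {
        void *p = NULL;
        pthread_mutex_lock(&pool_lock);
        if (shared_pool.used + n <= POOL_SIZE) {
            p = shared_pool.buf + shared_pool.used;
            shared_pool.used += n;
        }
        pthread_mutex_unlock(&pool_lock);
        return p;
    }

    // Approach 2: one pool per thread -- no lock, no contention.
    static __thread Pool thread_pool;

    void *alloc_per_thread(size_t n) {
        if (thread_pool.used + n > POOL_SIZE)
            return NULL;
        void *p = thread_pool.buf + thread_pool.used;
        thread_pool.used += n;
        return p;
    }

That's roughly what per-thread pools buy you; a real engine also has to handle objects freed on a different thread than the one that allocated them, which is where it gets hairy.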
Even in single-threaded applications, a custom allocator that suits your workload can give significant speed boosts over the system malloc. One of the assignments I enjoyed back in college was writing a memory allocator. By the time I was done optimizing (including replacing all the structs with raw pointer arithmetic via #defines), IIRC I was getting over 10,000 single-threaded allocations per second vs. ~3,000 from malloc().
Came for the dismissive hand-waving that I fully expected to be the top comment and I wasn't disappointed.
What's the improvement factor at which you'd be impressed? 50%? 60%? In my opinion, it's remarkable how little work it took to improve an already heavily optimized engine, under the constraints they set, by a developer who has never worked on game engines in his career.
Edit: No longer the top comment; it was when I commented.
Framerate isn't a comparative measurement; you have to measure elapsed frame time.
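To put numbers on that: because frame time is the reciprocal of framerate, a 15% framerate increase is only a ~13% frame-time reduction, and the absolute milliseconds saved shrink as the baseline rises. A quick sketch (the baseline framerates are arbitrary examples):

    #include <stdio.h>

    static double ms_per_frame(double fps) { return 1000.0 / fps; }

    int main(void) {
        double baselines[] = { 30.0, 60.0, 120.0 };
        for (int i = 0; i < 3; i++) {
            double before = baselines[i];
            double after = before * 1.15; // "15% more fps"
            printf("%5.1f -> %5.1f fps: %6.2f -> %6.2f ms/frame (%.2f ms saved)\n",
                   before, after, ms_per_frame(before), ms_per_frame(after),
                   ms_per_frame(before) - ms_per_frame(after));
        }
        return 0;
    }

At 30 fps that's 4.35 ms saved per frame; at 120 fps, barely 1 ms -- which is why quoting a fps percentage alone tells you little.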
Instead of spending 3 weeks wrapping data structures in mutexes for a 15% framerate increase, you could probably get an equivalent speedup by reducing shader detail, texture size, or any number of other options. This kind of measurement is not meaningful without additional information: Is this improved performance for low-spec GPUs? High-spec CPUs? A universal improvement for all users? Low-end machines typically do not have lots of cores; will they actually benefit tremendously from parallelism? As I said before, a high-spec machine already runs Doom 3 fine, so I question whether the gains here would actually be significant at all.
EDIT: Also, gains of this size can easily be produced by changes as simple as turning on Profile-Guided Optimization in your compiler. We'd also need to know what settings they used to compile the game to know whether the results are actually realistic.
> Instead of spending 3 weeks wrapping data structures in mutexes for a 15% framerate increase, you could probably get an equivalent speedup by reducing shader detail, texture size, or any number of other options.
I see why we differ in opinion here: you feel that sacrificing visual quality is a better approach than parallelizing the exact same engine at the same quality. I don't think we're going to see eye-to-eye on this one.
PC game development is about delivering a good experience for users across a wide range of hardware configurations. A given game is not going to automatically run perfectly on every one of them.
When considering performance issues for a particular customer, you have to weigh the cost of improvement. 3 weeks of an engineer's time for 15% speedup (even if it were in elapsed frame time, which it's not) is NOT an easy decision. Those 3 weeks could be spent doing far more valuable things, like improving the tools used by the rest of the development team, fixing crash bugs, or working on downloadable content that will bring in more money for the studio. They could also be spent making architectural changes that provide much larger performance wins through design improvements that require actual knowledge about the design of the application and the significance of the choices made.
When you're doing performance optimization in game development, you go for the biggest, cheapest wins first, based on actual measurements and an understanding of what's wrong. Given that the optimized version of the game is not running at 60fps (and the demo video contains obvious rendering glitches) I don't think it's unfair to question whether CPU parallelism was the most obvious bottleneck here.
Lastly, dropping texture size or shader detail when running on a low spec machine is not a regression. If the machine is not capable of running at max rendering detail due to GPU constraints, no degree of CPU parallelism will overcome this.
All of what you say is wise, but the engineer doesn't work for the studio and isn't on Doom 3's release timeline. He made the game's frame rate 15% faster of his own volition simply because he could. So it's wrapped in a sales pitch, oh well.
"I see something I can improve, but I had better not improve it; were I to work at id, my time would probably be more valuably spent on other things."
I wish you'd just back off and laud the improvement rather than fitting it into your model of game development and continuing to double down on a snarky, dismissive comment that basically shits on somebody else's work. I certainly appreciate that you've worked in professional game development, but come on, someone did something cool. I'm sorry that it isn't cool enough for you or doesn't use the measurements you prefer.
Attitudes like yours hurt our industry as a whole.
Should people who have never used the software be critical of "here's what our software can do in inexperienced hands; maybe you'll do better"? I'd much rather default to giving the benefit of the doubt than assume everything sucks before I've even touched it. It reminds me of people who say they don't like exotic food but have never eaten it.
I think the lowest opinion on the totem pole is one formed from the hip, built on assumptions drawn from personal experience in an area: "I'm pretty versed in game development, and what these people are doing is stupid based on my world view."
This article is clearly trying to sell licenses for a commercial product based on unsupported, possibly deceptive claims. I don't laud that. If the goal here is to prove the value of this analysis software, they should be proving it in terms of the value it will produce for an actual company developing software - if they're proving value for a game studio, it should probably be in the context I provided above. For other kinds of developers, maybe they have infinite time and their constrained resource is engineer knowledge - in that case this software could be valuable. But you don't demonstrate that using video games.
Hacker News isn't about selling things to managers who make purchasing decisions by lying to them, last time I checked.
Oh, now they're lying? Care to back that one up? That's a steep accusation that you should probably reconsider attaching to your name.
They wrote a tool to find things to parallelize. They parallelized them, and the code got faster. I realize that it isn't the kind of faster you would prefer, but they picked an accessible target that everybody could understand. It isn't as cool to say "we made Excel render graphs 15% faster" or "we made MP3 transcoding 15% faster". They just made a code base faster using their tool, that's all.
If they had made a video editor faster and you'd worked on video editors all your life, I'm sure you'd be here shitting on the achievement as well. They're not a game development shop. They picked a game to screw around with. They made it quicker. It just happens to be your pet area, so you absolutely cannot stand that someone figured out how to do something in your little world that you wouldn't think of.
I'm impressed you've upgraded to outright calling them liars rather than just admitting that, maybe, you might be wrong.
> Neither the name “Markdown” nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
He won't enforce it. How about this elevator pitch:
> Makeup is a standardized version of John Gruber's Markdown. It has a formal specification, a reference implementation, and an exhaustive test suite. It also handles edge cases in a more sensible way than the original Markdown.
I think "Markdown" as a name can no longer be killed. The same holds for Markdown as a format. Alternate implementations will exist for a long time, and they will be referred to as "Markdown implementations", at least informally.
And so will whatever Markdown-based "standard" comes out of this.
I probably crossed the line by suggesting "<qualifier> Markdown" as an official name. But as an informal name, it will be used, and John Gruber won't be able to do a thing about it.
By the way, I can't fathom why he wouldn't want Markdown spin-offs to be called "Markdown" as well. That name is his legacy, and a standard would make sure to preserve it. As for his Perl implementation, it belongs in a museum, to be glorified as pioneering. But for actual use, Perl is now obsolete.
Does this mean all the current "Markdown" implementations out there are in violation of this licence term[1]?
How would one reasonably convey that their formatter is Markdown-capable?
It would seem to me that there's a significant difference between "Bloopdown - a Markdown compliant formatter for blah" and "Markdown v3.Ice-cream-pony"; the latter clearly trades on the name of the original, while the former is descriptive.
I'm not sure where "$X Flavored Markdown" falls on that scale, although I'd be surprised if something like GitHub hadn't sought permission or clarified the naming provisions.
[1] Having checked a couple of different projects, none of them mention having sought or received that permission, but I suppose they might have.
I don't know about the USA, but here in France one can have rights in a trademark even when it isn't officially registered. The main requirement is to show that you have established a de facto brand.
> What both Jeff and Gruber bring to the party is profile. As the article points out, there are already variants of Markdown that fix many of the problems but it would seem none of them have gained traction.
Actually, he lost credibility by not doing the decent thing and e-mailing John Gruber first; StackOverflow is really irrelevant here.
"Say, so-and-so, you wrote a really nice book but there are several errors in the first edition. We think it's great and we love it, and we see the potential in your work, so we're going to rewrite a second edition by committee then sell it under your title and byline without your permission; that cool with you?"
If Markdown had continued to be robustly maintained by Gruber, I would give that thought more credence. As it is now, it's barely one notch shy of abandonware, and I think that moves Gruber's feelings from the realm of reasonable criticism to the land of butthurt.
Yep. Nuances of etiquette are one thing but I'll always side with the person that is trying to move things forward over the person that is throwing up hurdles.
To my way of thinking, Gruber comes out of this looking churlish and unhelpful at best.
The more pertinent question is why Jeff didn't make more effort to base this off one of the existing standardisation efforts.
Regardless of how much I dislike Gruber's opinions on occasion, Jeff Atwood is completely steamrolling him here and I really feel for him; it isn't an accident that Jeff blogged about it instead of asking him privately. You do that when you don't care either way what the person says, and it's a complete dick move. As another commenter pointed out, there is no winning move for John Gruber here. That's so obvious that I can't even give Jeff the benefit of the doubt that he overlooked it.
This entire experience has left me with a bad taste in my mouth about Jeff Atwood -- I sure hope he doesn't set his sights on something useful that I've created for the world. You can make whatever argument you like about Gruber not "paying enough attention" to his creation, but Markdown doesn't really need to be "fixed" or "standardized". I look forward to the dozens of vendor-specific extensions that will undoubtedly be necessary in the final product, at which point we'll end up right back at square one.
Really, really presumptuous and poor judgment. Invent your own stuff and let it stand on its merits rather than trading on Markdown's well-established name by force.
Jeff Atwood is a blowhard, sure, but you're ignoring the history here. Gruber has refused for years to acknowledge that there are any problems with Markdown at all. Some argue that he has the right to do with his creation as he wishes, but that misses the larger point: Markdown has communal value far outstripping Gruber's original contribution of aggregating a bunch of ASCII typographical conventions and compiling them in the spirit of Textile.
By sitting on his high horse, proclaiming that Markdown "works for him," that his unspecified version with tons of nasty edge cases is canonical, and that he won't weigh in on any formal specification, he deserves to be cast aside and have the project taken forward without him. He doesn't have a moral claim on something as simple and widespread as Markdown, and the fact that he finally has to deal with someone with similar blog reach going on a crusade is nothing worse than he deserves for sticking his head in the sand.
Your comment reeks so strongly of entitlement that I don't even know where to begin.
We could say Twitter was instrumental in organizing human rights causes in the Middle East and is now important to the human race as a communication tool; that said, if Twitter doesn't implement a feature that you want, you don't get to redesign Twitter at your whim and call it "New Twitter". You don't have a "moral claim" to write an open letter to @jack telling him why you're moving on without him.
You instead, like all rational people, design a competing service and let your work stand on its own.
The fact that you would bring up a hosted service that is extremely complex and costs tons of money shows that you don't understand where I'm coming from at all.
Markdown is not some amazing patented invention over which John Gruber is entitled to perpetual dictatorial rights. It's a simple derivative idea. It's out there in the wild and people make tremendous use of it, but none of this use costs John anything, and it's successful on the backs of many implementors, not just Gruber. It's all fine and good to say "design a competing service," but people do do that, and all it does is lead to yet another variant, which further exacerbates the problem.
To be clear, I'm not saying John owes anyone anything. He's free to do or not do whatever he wants, but so are other people. Precisely what courtesy do you think he's due if he refuses to act in any reasonable capacity as a steward?
> Markdown is not some amazing patented invention over which John Gruber is entitled to perpetual dictatorial rights.
Completely wrong. Markdown is John Gruber's creation, and he is quite entitled to perpetual dictatorial rights forever (although copyright is limited, his estate is perfectly capable of renewing if he wishes). John Gruber is the copyright holder on Markdown (the idea and implementation, which most of this thread is overlooking). It says so right here:
> Neither the name “Markdown” nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
Think about that while I repeat: You do not get to take over someone's creation and trade on its name simply because you are not satisfied with the stewardship of its creator.
He may have selected the particular syntactic conventions it uses, but the idea of parsing ASCII markup and converting it to a formatter's input syntax is much older. I'm aware of prior art dating back to 1986 or so -- and wouldn't be surprised if that wasn't the first either.
Right, so, for the sake of argument, you support perpetual copyright and software patents? Just trying to gauge how much exclusive ownership you think people should have over ideas.
Some people want to make a Markdown spin-off: clean, standardized, tested, etc. Nothing ever stopped them, but they could really use the prestige conferred by the original name.
The question is, does John Gruber have the right to that name? I think he should give it away, but if he won't, we probably shouldn't force him. I'd change my mind if someone makes a compelling case against trademarks.
A better analogy: If Twitter had an API, and then had a spec for that API, and they didn't match, would it be out of line to call on Twitter to fix the API spec? And to put out a third-party spec if they didn't?
> it isn't an accident that Jeff blogged about it instead of asking him privately. You do that when you don't care either way what the person says, and it's a complete dick move. As another commenter pointed out, there is no winning move for John Gruber here. That's so obvious that I can't even give Jeff the benefit of the doubt that he overlooked it.
What if he did ask Gruber privately, and the private answer then was the same as the public one now, giving Atwood the choice between disclosing a private conversation, giving up on this, or doing what he has done?
I'd say you've discovered a hefty dose of speculation that lacks any evidence.
You've also invented a false trichotomy. Those aren't Jeff's only three choices. The fourth one you left out is "make something new without shitting all over Markdown".
> You've also invented a false trichotomy. Those aren't Jeff's only three choices. The fourth one you left out is "make something new without shitting all over Markdown".
And then everyone would criticize him for NIH syndrome and needlessly fragmenting the markup landscape further. And they would be right.
That's precisely what we should assume, because those are the facts as presented. Inferring things that don't exist because they benefit the parties in question is the subtle undercurrent that drives the entirety of our fact-unfriendly blogosphere, tabloids, and so on. You sound a little like an apologetic Gawker reporter here.
We were given the story at face value, we interpret it at face value. We don't make up what could have been nor consider it acceptable to do so.
You're making something up as well if you assume Atwood intentionally did not contact Gruber beforehand. That is most definitely NOT interpreting the story at face value.
The dilemma is that it's basically impossible to contact one of these big blog/twitter folks privately. Send email? DM? I'm sure they get way too much volume (and hatemail) to even read it.