I mean, I get it, but I think performance is overrated in this particular case; unless it’s a significant and/or very noticeable difference, stick to object literals, please. I’d probably fire someone if I started to see `JSON.parse(…)` everywhere in a codebase just for “performance reasons” … remember, code readability and maintainability are just as important (if not more).
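For anyone skimming the thread, the idiom being debated looks roughly like this (names and values are illustrative; the claim from the article is that V8 parses the JSON string faster than the equivalent JS object literal syntax for large objects):

```typescript
// Object literal: parsed as part of the JS/TS source at load time.
const configLiteral = { retries: 3, timeoutMs: 5000, endpoints: ["a", "b"] };

// Same data as a JSON string: the JS parser only sees a single string
// token, and JSON.parse handles the (much simpler) JSON grammar at runtime.
const configParsed = JSON.parse(
  '{"retries":3,"timeoutMs":5000,"endpoints":["a","b"]}'
);

// Both produce structurally identical objects.
console.log(JSON.stringify(configLiteral) === JSON.stringify(configParsed)); // true
```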
> I'd probably fire someone if I started to see `JSON.parse(...)`
I've had the privilege of working in organizations that consider mistakes to be the cornerstone of resilient systems. Because of that, comments like this scare me, even when they're intentionally hyperbolic. Moreover, if the product works well and is easy to maintain, why micromanage like that? It sounds like a minor conversation only worth having if the technical decision is having a real impact.
Thomas J. Watson:
> Recently, I was asked if I was going to fire an employee who made a mistake that cost the company $600,000. No, I replied, I just spent $600,000 training him. Why would I want somebody to hire his experience?
You probably wouldn't want to work for somebody who fired people so easily anyway. This is one reason I find it stupid when people defend companies or are super loyal to their employers: companies don't care about you, especially companies that fire on a whim without concern that they're fucking with somebody's life. Best to work somewhere that treats you like a human instead of as a cog.
To be honest, I understand the bit of backlash that I’ve received here and I think it’s well-deserved since I should’ve worded my statement better. Thank you for your comments.
You all are correct re firing someone over mistakes and seemingly trivial matters. I was mostly referring to software engineers who make impactful decisions without good reason and/or without properly assessing the trade-offs.
I think it’s fair to say that we all want performant software, but at the same time, if I have a software engineer on my team who can’t back their decisions with some form of data and/or understanding of the trade-offs, unless they’re at the junior level, they’re not the type of software engineer who I want on my team.
I said “performance reasons” precisely because, over and over and over again in my career, I’ve watched software engineers commit unreadable messes of code that were clearly premature optimizations and/or optimizations where the performance gains weren’t significant enough to justify the costs of the unreadable and hard-to-maintain code enabling them.
I once had a software engineer unexpectedly spend almost a week rewriting a critical part of a Java codebase using the JNI because he thought it’d “make it faster” — and it did — but then all types of new native code-related issues ensued that cost the company, including a major security vulnerability that was just impossible before. On top of that, it turned out that the performance gains that we noticed were mostly significant during the startup period of the JVM, so it really wasn’t worth it. And this was a very brilliant software engineer, but he was consistently making poor decisions like this. To be clear though, he wasn’t fired! I just use that story as a realistic example. (Part of me still thinks that he just wanted to learn/use the JNI and that project seemed like the perfect target. Lol.)
But yes, it’s more complex than simply firing individual contributors for sure and I regret wording my statement that way, but I hope you all can understand the real point that I’m making.
Edit: I’d like to point out that, in my anecdote above, in hindsight, if anything, I was probably the one who looked incompetent when the suits started asking the expected questions re the sudden set of new issues, because I did my best to shield that software engineer from them (or at least I’d like to think that I did). I know the feeling of messing up at that level and I knew that he was most likely already beating himself up, so I couldn’t just let him take the fall, or worse, throw him under the bus. These tend to be complex situations in real life!
> Part of me still thinks that he just wanted to learn/use the JNI and that project seemed like the perfect target. Lol.
As a dev who sometimes goes off chasing windmills, that's 99% of the reason why I do it. I find something nice to tinker with, and when my brain goes "ooh, shiny" I stop giving a shit about anyone's bottom line.
To be fair, it usually turns out for the better for the project and its code base! But sometimes it doesn't, and I figure that's just the cost of doing business. Companies should be willing to take these kinds of informed risks in order to improve their employees' ability, and therefore the quality of their product. However, a lot of management only sees the short term gain, because long term gain isn't incentivized for them. They just wanna do well and get a promotion.
Well, guess what, it's the same for me. Except for me to do well, I have to be learning new things constantly. So tough poop, management, I'll be chasing my white whale every once in a while. Deal with it.
> Companies should be willing to take these kinds of informed risks in order to improve their employees' ability, and therefore the quality of their product.
Perhaps they should be willing, but your description of this distraction does not include informing the company and allowing them to determine whether it's a risk they're willing to accept. You decided for them because you didn't want to receive the answer "no" in return. This isn't right.
I'm afraid the morality of this situation isn't so black-and-white.
In industry, there is always a tension between production and research: cranking out widgets vs. getting better at cranking out widgets.
A dev who spends 100% of their time cranking out widgets is stagnating. That's actually not what your employer wants, despite the fact that their agile process seems to imply that ticket cranking shall be the whole of your focus.
If you ask employers if they expect you to improve your skills over time, they would absolutely say "yes". But if you ask for permission to chase a specific white whale, you will hear "no". Everyone agrees they should be saving for the future, but "not this paycheck".
Taking the naive moral approach here and spending 100% of your time on tickets is not "what's right". If anything, that's you being taken advantage of by your employer -- sacrificing the advancement of your career in the name of short-term sprint velocity gains. On top of that, stagnation is not what your employer really wants anyway.
(edit: the above excludes companies which have explicit "20% time").
I think I work in a similar manner to the GP. It's transparent to the org. It's not a matter of "I'll head down this path or investigate this _or_ get my work done"; it's an _and_ situation. Sometimes the rabbit trail is the best thing; sometimes you just have to get the thing done. Either way, it still gets done.
But it is an informed risk. I was hired to do work in 4 different languages, on the frontend and the backend, plus CSS and HTML. I get to weigh in on UX decisions, I get to design service infrastructure.
Was I born this way? No. I need this overhead, that's just part of being a dev (within reason).
If you require me to do lots of things, there's overhead. If you want a ticket drone for your Scrumfall projects, get a ticket drone.
> And this was a very brilliant software engineer, but he was consistently making poor decisions like this.
That is something I've noticed. Brilliance doesn't go hand in hand with making prudent and wise decisions.
> I did my best to shield that software engineer from them
I've found rather painfully that you shouldn't shield guys like that when they go off on their own to make mistakes.
Another thing: you have a team of people who are familiar with how a codebase is put together, how it does things, and what sort of things go wrong. It's a bad idea to disrupt that just because Goofus rewrites a module to use fad X. Great! Before, there were five programmers who knew how that module worked, and now there is one.
Assuming you are managing a dev (through a lead role, seniority, or as a manager), you absolutely should shield team members from direct demands from up that chain; that's what most of your job is. Assuming the employee was acting within the rules you've laid out, then you should shield them and consider adjusting your rules to prevent a repeat. By contrast, if your company has CI tooling set up with automatic deploys, reviews, and whatnot, but then someone edits a file directly on production... that might be a fireable offense.
To contrast further: if you're a co-worker and not a manager, then you may need to examine your relationship (are you a mentor, and thus quietly leading them, or just a colleague?). If a pure colleague makes a mistake, you shouldn't stick your neck out too much, except to push your common manager to properly defend them.
Everyone who is fired should be fired by their manager and not anyone else in the org - that's how a team is strong and healthy.
And
Managers, in a healthy company, own the mistakes their subordinates make.
> you absolutely should shield team members from direct demands from up that chain - that's what most of your job is
The other part of your job is keeping your manager informed about subordinates that are being problematic. Up and rewriting a critical piece of infrastructure 'because' is problematic.
(I'm assuming that you mean that the other person is another subordinate to the same manager, rather than being someone subordinate to you)
It's a bit of a delicate balance. The golden rule is that Snitches get Stitches, but if someone is being unproductive with their time and your manager isn't aware of that fact then letting them know isn't a terrible idea. But it isn't your place to measure how your co-workers are accomplishing their tasks - assuming management isn't out to lunch then performance reviews should fall on their shoulders. Maybe your coworker cleared a rewrite with your manager and your manager was satisfied with the justification and decided that explaining the full reasoning would be a waste of time until the experimental phase was completed.
In theory good management should prevent you from feeling like you need to look over other people's shoulders, because that is their job. So if you are feeling that way you might want to talk to your manager about it, maybe they are bad at managing and are letting things slip through the cracks, maybe they find that allowing someone to experiment with a rewrite is worth the training time - it may be possible that you just need to talk it through with them and find more confidence in their management ability.
> That is something I've noticed. Brilliance doesn't go hand in hand with making prudent and wise decisions.
Reminds me that John Carmack's wife told him she wouldn't allow him to bankrupt the family with his space hobby company (Armadillo Aerospace) :)
My comment wasn’t meant against you personally (for all I know it was just for emphasis and not serious, and from your comment it seems like it), just against the attitude of firing people for small things, rather than, for example, teaching them to do better.
I agree with everything you said, except that I'm not sure JSON.parse all over the place would add any significant unreadability. Most likely it would always look the same and be just as readable as object literals once the initial getting-used-to-it period is over. Hell, I think it's a lot more readable than `!!`, which I consider an abomination, but everyone keeps doing that for developer speed = productivity purposes.
I've found that the readability of fast code vs. slow code is often negligible - certainly it is in the specific example under discussion. I prefer to make a habit of using faster idioms in that case, so that when speed does matter I'm already covered. I don't consider that premature optimization.
Except it isn't, because most of your developers will be using an environment that includes syntax highlighting and probably some linting. Except within string literals.
Code inside string literals is less readable and more inclined to be wrong/buggy.
Eh, depends on the reasons for firing people easily.
Firing easily for honest errors is moronic, fully agreed, especially if the person is learning from them. My code changes have caused more than one sev0, but I was never personally blamed for them: there was always some bigger underlying system issue, and more robust systems wouldn't have allowed me to make those mistakes (and a little more wisdom would have kept me from pushing "seemingly safe" changes outside of business hours). I learned a lot from those mistakes.
Firing easily for a long history of non-improvement and not meshing well with the team (underperforming, causing a lack of cohesion within the team, etc.) is good for the team, but in principle it is similar to the "good king" kind of approach, so it all relies on the "king" having a straight head.
P.S. My last paragraph does not imply "culture fit" or any superficial stuff like that as a good reason for firing, I meant more fundamental sort of issues, like refusing to listen to people, never even attempting to improve (given you have some hiccups, just like most of us), etc.
I’m not against firing people if they are a bad fit, are incompetent or are toxic to the work environment. I’m against firing people for small things, not giving a chance to improve, firing on a whim or simply discarding people. Treat people like humans, but that doesn’t mean you can’t fire people who are a negative force on your business.
I usually find the opposite. Firing someone for cause is interminable or outright impossible, unless they're breaking the law or embarrassing you in front of customers.
I have only once fired for cause. Every other time, I've gone through the process, the PIP, and then terminated them according to their contract, usually with "more generous than legally or contractually required" severance. Lets me sleep at night, and it has the nice side effect of keeping the peace with the remaining team.
counterpoint: you are just as free to take the same liberty with your employer. You can drop them like a bad date, and take a job somewhere else.
additional counterpoint: part of your job as a grown-up, responsible adult is managing and enduring risk and loss, especially the risk of your job disappearing overnight. Outside of circumstances of extreme poverty or extreme disability, for which our government has safety nets in place (let's save the debate about their sufficiency for another time; the fact remains they are in place), losing your job should not "fuck up your life" so much as be a temporary setback. This is especially true in this industry.
Thankfully, most other developed countries' healthcare systems don't penalise individuals quite so significantly as the broken system you Americans keep voting for.
I'm not saying the UK or other European countries have perfect healthcare systems either, but at least we aren't tied to a job we don't like because losing our company's health scheme is too scary to consider.
If you're not paying for it with your money, then you pay with your time: public healthcare systems, like those in Europe, are known for long wait times for patients requiring surgery or other costly procedures.
Also, traveling to the US for treatment is still a thing, because new, advanced treatments are developed and first implemented in the US, so all that money spent gives you something in return.
Unless you're rich and regularly wipe your ass with $xx,xxx bills, you end up spending your time in the US system too: getting your insurance and care provider to agree with what is covered, what isn't, and how much you have to pay.
Sometimes it takes almost a year to resolve.
BTW, even basic surgeries in the US can have a price tag of close to $100k. I've had to fight off more than one ridiculous bill like this in the last 5 years. If you're talking about medical tourism coming into the US, I can't imagine you're talking about anything but very well off people.
> Also, traveling to the US for treatment is still a thing, because new, advanced treatments are developed and first implemented in the US, so all that money spent give you something in return.
It sounds like you’re saying America is the only country in the world developing new and advanced treatments and the only country people travel to for such surgery. Clearly that’s not even remotely true (and even if it were, which it isn’t, it still doesn’t justify just how badly broken your healthcare system is for domestic users).
Whatever time I spend in the waiting room in Canada waiting for treatment, which is honestly less time than I wait for the cable company to show up to fix things, more than makes up for the fact that I spent literally zero time dealing with hospital bills.
Considering US hospital bills can easily be tens of thousands, a couple of hours wait at even $1000/hr. billable lost opportunity is still cheaper than the US alternative.
In the US, you generally need to pay with both your money and your time. I've waited three months for an appointment with a specialist, had them only tell me to go to another specialist, and paid for the privilege.
From the moment my GP refers me to a hospital for whatever reason they need to look at me, they have 8 days to respond, and must have a diagnosis within 30 days. Treatment is usually not long after and almost always proportional to the situation.
If a potentially life-threatening disease is suspected, diagnosis and treatment must have begun after no more than 2 weeks. Most of the time it's a matter of days. If the public hospitals cannot do that, I'm free to go to a private hospital without paying anything.
You don't lose it on a whim. In the US, you can file for COBRA to extend your benefits, which allots you plenty of time to apply for Medicaid if the circumstances were extraordinary. (Why do I get this weird feeling most people on this forum have just never been poor or in this situation?)
That, plus your emergency savings fund, should be more than enough to hold you over for 6 months while you find your next role. I'll save my survivorship-bias story of how I coped with this exact situation 5 years ago, because I know everyone's situation is unique, but the lessons of growing up with 2 unemployed parents, living month to month, not knowing if the bank was going to repossess our house, have stayed with me I guess.
Love the quote. Though I have some people working with me to whom I'd still struggle to apply it. There's an assumption in it that the employee grows from the experience... yet I face people who seemingly make an effort not to grow.
I would also micromanage this way. Developers leave; code remains. If your codebase is full of `JSON.parse(...)` in a few years because of some developer who thought "how clever to do this instead of object literals", it's not the author who has to live with their decision, it's the next code maintainer.
I see too many programmers being too clever and then leaving their clever code to become someone else's issue. My advice is be simple and make readable code. No one wants to maintain the clever code of another person.
Firing instead of teaching when the person can learn isn't management.
The problem isn't that maintainable code isn't worth the effort; the problem is that firing people until someone matches your demands is not the most effective way to GET maintainable code.
Tough life that maintainer is going to have, seeing `JSON.parse(...)` being wrapped around object literals in code. This truly is going to cost them many man hours and lots of hair pulled out in stress.
Seriously though, there's clever code and then there's just nitpicking. Micro-optimizations with JSON.parse() look ugly and nullify some editor conveniences, but they're IMO very far from being a fireable offense.
They are not a fireable offense in my book either, but they sure as hell wouldn't pass my code review. I've had to deal with too much crap like this in the past. Self-proclaimed senior devs that micro-optimize everything and leave a mess, then leave. Love them.
One should always optimize for easy maintenance. Performance is always a secondary goal, because it doesn't matter how fast (you think) your code is if you can't understand it.
Well I agree, you don't fire the person because they put JSON.parse(...) even if they put it in 1000 times. That would be silly.
The question is WHY did they do that? I'd probably get them to learn about performance tuning and do some profiling, make something faster. When they find out that it's slow because of something they didn't predict, hopefully they'll decide for themselves that they can't predict what will be slow, so no point complicating the code. If they don't get that, maybe explain it to them.
Basically the person who put JSON.parse all over the code was learning.
If they come back and arrogantly say "I'm right, and I'll carry on doing it you won't stop me", then that could be an attitude problem that might lead to question if they should be working there.
There are more nuances like if the person is claiming to be a senior developer/architect then the trigger for firing them might be more likely to be pulled. But still it is worth thinking about it first.
> I’d probably fire someone if I started to see `JSON.parse(…)` everywhere in a codebase just for “performance reasons” …
Yep, and I'd fire you for doing that! There are better ways to manage than showing off your authority. Oh, and by the way, would some JSON.parse statements for performance really be the worst thing in your codebase(s)? I cannot believe that would be the worst thing in there. Also, if it really helps to use JSON.parse for creating big objects for performance reasons, who cares? Instead of firing 'someone', maybe you can add some annotation to it for readability (or, if that is below your imaginary level, ask the developer if he/she can add that).
Sorry, but I hate people who misuse their authority by imposing their subjective opinions.
You're extrapolating quite a bit from a simple comment, which tells me that you'd probably be a poor manager as well. Then again, I'm extrapolating quite a bit as well.
Seeing something like JSON.parse throughout the code is definitely a code smell and could decrease the maintainability of the codebase, and that's a very tangible problem. Obviously you shouldn't fire someone over something like this if it's the first offense, but it definitely raises red flags and should make you monitor things a little more closely. If they show a pattern of dogmatism and poor judgement, you're probably better off finding someone else with better judgement. You're not going to find a perfect employee, but some employees are just better at making decisions for a larger project than others.
Interesting how typescript plays into this - I mean back in the wild old days of plain old JS I would be totally fine with putting a JSON.parse here and there, especially on the hot path.
But now with static types - this would totally wreck static type checking. And you would need to spend additional cycles to validate that the data is actually correct.
Definitely a change request in the PR.
There would have to be a really big parse-time performance advantage to warrant the loss of static checks.
Typescript gives you type checking if you import from a JSON file. (Node handles JSON imports and webpack will happily build that for you into a JSON.parse in a bundle.)
That would just obscure the lie. I'd rather see an explicit `as T` cast at the call site to make the "trust me, typechecker, I know what shape this is" claim be in-your-face instead of hidden behind a type parameter.
(This reply assumes you're not asking for TypeScript to make a major philosophical shift and start generating runtime code to validate types. If you are, that's a discussion worth having but goes way deeper than `JSON.parse`.)
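A minimal sketch of the explicit-cast style described above (the `Config` interface name is hypothetical); the cast makes the unverified claim visible at the call site:

```typescript
interface Config {
  retries: number;
  verbose: boolean;
}

// The `as Config` cast is a compile-time claim only: nothing checks the
// string's actual shape at runtime, which is exactly why it should be
// in-your-face rather than hidden behind a type parameter.
const config = JSON.parse('{"retries":3,"verbose":true}') as Config;

console.log(config.retries, config.verbose); // 3 true
```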
> Oh, and by the way, would some JSON.parse statements for performance be the worst thing in your codebase(s) you guess?
One thing I have seen from managers who don’t work regularly in the codebase — they tend to over-focus on things like whitespace and function names more than correct abstractions, separation of concerns, etc.
That’s the compiler/minifier’s role anyway, to use the best construct when appropriate.
See Java's whole "abc" + "def" vs StringBuilder performance saga. When programmers have to sacrifice readability for performance, it doesn't necessarily mean they shouldn't do it, but it means the precompiler is not advanced enough.
Readability is crucial in code. If you have to go through and change the JSON that's being parsed and it takes a nontrivial amount of time, that's a big setback. Sure, it's 1.7x faster (in V8) to parse JSON, but how long does it take to parse a 10kb object literal in the first place? Given that these static, large objects are not commonplace in a codebase, is it worth the tradeoff?
A precompiler such as Babel could introduce a plugin for this sort of optimization. We only write ASM when it's going to significantly change the performance characteristics, typically when a particular code path is run many, many times throughout an application. If an object literal like this is getting parsed that frequently, there are better ways to optimize so that it doesn't need to happen at all.
I could see this being very useful in a variety of applications, such as server-side rendering. However, it would be best done in an optimization phase, since you're already bundling at that point.
That was a weird era in Java history. They changed the compiler in the subsequent major version to perform that transformation automatically, but by then people had had 2 years to stare at perf graphs looking for bottlenecks.
It never was clear to me why they didn't do both of those in the same release. Backward compatibility wasn't the problem (they were already breaking that left and right).
Even better, I think Eclipse’s Java compiler introduced the optimization in a given version, but Maven hadn’t yet. So it wasn’t optimized in production, but was optimized on the developer’s machine. What a time to be alive.
Java devs saw that "abc" + "def" involved expensive String concatenation, so as a performance improvement they pro-actively, and effectively manually, changed to use explicit StringBuffer concatenations.
When the compiler switched to generate StringBuilder (unsynchronized) concatenations for "abc" + "def" nobody benefited, because they had already changed to use StringBuffer (synchronized).
Now they had to go and undo all of their hard, manual optimization work.
They say in the linked article that this should only be used for objects about 10kb and larger.
I'd argue that if you have 10kb or larger object literals in your codebase, you are already missing the mark on readability and maintainability in some ways.
If it's exclusively going to be used for heavyweight operations like these, it's probably better to benchmark against protobuf decoding. I guess using JSON has a "works out of the box" appeal, and doesn't require defining any protobuf schema. But personally I don't see defining proto files as too prohibitive in terms of development cost.
Protobuf isn't built into the browser, so it can't bypass the JS parse & execute time. Instead you'd be parsing protobuf's JS, executing it, parsing proto, and producing objects. It'd be worth doing, sure, but it'd almost certainly be the slowest option by far since it's doing way more stuff in JS than either of the other two options and the JS syntax parse is the slow part.
These benchmarks indicate better protobuf performance [1]. Compute time these days is often dominated by memory transfer rates. The "slowness" of JavaScript seems to be offset by there being less data to begin with. Collapsing a 100KB resource down to 50 or 25KB is usually worth it even if you have to do more operations in JavaScript. Not to mention end-to-end load time (which is probably what people are usually trying to optimize for) can be lower by reducing how much data needs to travel over the wire or radio.
At the end of the day, who knows if the use case hits edge cases or stresses parts of the implementation that is not optimized for JSON decode or protobuf. Getting meaningful performance data ultimately needs to be experimental, and resists categorical answers about whether X is faster than Y.
> These benchmarks indicate better protobuf performance [1].
We're exclusively talking about cold-start performance here: single, one-time object creation. That's why JS syntax parsing is the dominant factor, not execution performance. Those benchmarks are not that; they measure hot performance. That's a completely different thing.
> Not to mention end to end load time (which is probably what people are usually trying to optimize for) can be lower by reducing how much data needs to travel over the wire or radio.
Wire transfer size would need to be looked at differently. The JS code & JSON string are both also going to be compressed unless you're not using a compressed Content-Type for some reason.
What is the "completely different thing" you're referring to here? Between:
1. Having a static JSON string, and decoding that string.
and
2. having a static blob, and using protobufs to decode that blob.
these two things accomplish the same goal. I'm not sure why you think one is "cold start" and the other is "hot": they're both single, one-time object creation. The former parses ints and floats as ASCII and reads in "true" and "false". Regardless of compression, the memory-inefficient JSON encoding is going to be used (whether over the wire or just as an intermediate representation during parsing). I've used protobuf decoding for things like localizations and configurations before, the "cold start" use case you're talking about, and in many circumstances it does result in faster loading. My back-of-the-napkin reasoning is that such data is much more heavily weighted toward booleans and integers, which are encoded far more efficiently in protobufs than in JSON; if you had a use case that almost entirely decoded strings, the performance difference might not hold.
Are you including the cost of loading protobuf itself? You seem to be basing your argument on an assumed already present & loaded protobuf library.
You need to benchmark starting from nothing at all. Your link that you seem to be basing this off of has a loaded and fully JIT'd protobuf. That's not the start state.
You can measure the impact on loading time, and the size of the protobuf implementation you're using probably has an impact on the threshold at which it becomes more efficient. I don't doubt that parsing a 500 character long JSON string is probably faster than loading a protobuf to do it instead. In fact, apparently this JSON parsing trick is only effective beyond 10K or so. But past a certain threshold memory bandwidth is more crucial than loading code. If your data consists mostly of booleans and integers then JSON can often be an order of magnitude larger in size than protobufs. If it's compressed, then decompressing it takes clock cycles and the parsing code is still parsing the larger uncompressed JSON text. A protobuf library can often skip compression altogether by virtue of using normal ints and bits for numbers and booleans. So while the protobuf library does have some additional overhead it's often higher throughput for many types of data.
You’re repeatedly missing the point. This is about optimizing startup time.
The comparison should be:
cost of downloading payload + runtime cost of parsing JSON
Vs
cost to download protobuf lib + parse and execute JS protobuf lib + download payload + runtime cost of parsing
Specifically, the article talks about how parsing JS is more costly than parsing JSON - this cost will apply to the protobuf library, which certainly far exceeds 10KB. There is no way the math will work in your favor until you get to MBs of data.
> Specifically, the article talks about how parsing JS is more costly than JSON - this cost will apply to the protobuf library which certainly far exceeds 10KB.
I would suggest reading the links I posted. The minimal protobuf library, which is suitable for working with static decoding, is 6.5KB [1]. Again, you're right that the size of the protobuf library will be an important factor in dictating the scale at which it's more effective than JSON parsing but your sense of the factors is off - a light protobuf library doesn't reach 10kB let alone "far exceeds 10KB".
Furthermore, if your pages use the protobuf library already for other uses like decoding and encoding RPC messages then loading and parsing the protobuf library is basically free - you're going to be doing this anyway.
I currently work on a project using protobufjs. Our generated static classes are ~500KB and ~1.5MB, or around 140KB gzipped. The schemas are not that large, and this does not even include any network code (not part of protobufjs).
Either way it's really more data than object at that point so it's appropriate to store it as JSON. Normally I'd place such data in a different file, but I can imagine that that might not be best for webpages.
Like a lot of people who have given interviews, I have my own set of very odd stories.
I interviewed a developer and asked him to explain, on the whiteboard, how the system he was currently working on worked. As he talked he drew two boxes. He drew a line between those boxes. Then as he kept talking he just kept drawing over the line between the boxes. (Now, he was junior to mid-career, so I didn't expect a magnum opus, but we value people who can explain themselves, because at least if they're wrong we find out before the mess gets too big. But I digress.)
Your analysis reminded me of that interaction. What kind of information architecture do you have if you're building objects that big?
I mean, as others have said, if this is the main payload being transferred from client to server, it's probably going to arrive as JSON and you're going to turn it into Objects.
If it's not that data (they're talking about cold loads) how many other categories do you have that can approach 10k?
Configuration? We have libraries for that and they often read a JSON file.
Lookup tables for fixed relationships of data in the system? Maybe, but that complicates your testing situation.
How many of those categories get loaded more than once per session? Are these really such large startup bottlenecks that we tackle this instead of other problems? GP implied incompetence but I get more of a whiff of desperation here.
> remember, code readability and maintainability are just as important (if not more).
I don't know about that. Prioritising making your own job easier over the experience of all your end users feels like a much more fireable offense to me.
In this particular case I'm still a little wary of it because it feels like it's optimising for a current implementation with no idea what the future performance implications might be (or current implications in non V8 engines?) but this trend of prioritising developer experience over everything feels like a very bad one to me. It's the same reason given to justify making every web site a React app with no thought toward the extra JS payload you're sending when it's not needed.
This hack is supposed to be for huge data: 10 kB or more, thus comfortably more than a page. If the >10 kB wall o' code was wrapped in a parse-as-JSON-at-runtime function call, preceded by a three-line comment describing a quick and dirty benchmark showing that it saves a useful number of milliseconds on page load in a fairly typical use case, and if the web resource was intended to be loaded many millions of times, I would nod and approve when reviewing the code. The way the original objector writes, it sounds as though nothing would suffice to justify this hack, and certainly not a mere benchmark and three lines of comments preceding it. That attitude seems like unreasonable blinkered zealotry, or some other kind of tunnel vision - e.g. someone who has just never thought seriously about the appropriate tradeoffs in maintaining a web resource which gets loaded millions of times a month.
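A sketch of what such a quick and dirty benchmark might look like (sizes and iteration counts are illustrative, and indirect `eval` is used only as a rough proxy for the cost of parsing an object literal as JS source):

```javascript
// Rough benchmark sketch: JSON.parse vs parsing the same data as JS.
// The object below is a stand-in for a real >10 kB configuration payload.
const obj = {};
for (let i = 0; i < 5000; i++) obj['key' + i] = { n: i, ok: i % 2 === 0 };
const json = JSON.stringify(obj); // comfortably past the ~10 kB threshold

let t0 = performance.now();
const viaParse = JSON.parse(json);
const tParse = performance.now() - t0;

// (0, eval) forces indirect eval; a crude proxy for literal parse cost.
t0 = performance.now();
const viaEval = (0, eval)('(' + json + ')');
const tEval = performance.now() - t0;

console.log(`JSON.parse: ${tParse.toFixed(2)} ms, JS parse: ${tEval.toFixed(2)} ms`);
```

A real measurement should of course happen in the target browsers on a cold load, not in a warmed-up loop, which is exactly why the three-line comment matters.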
Users like code with fewer bugs and rapid response time for new feature requests, right? If you start firing people for taking the time to write readable and maintainable code, you'll be doing a greater disservice to the users than those developers were.
It depends. Say you spend 8 hours of dev work to save 1 second of processing time per call. It will take 28,800 calls until your time investment pays off.
This assumes the cost of dev time is equal to the cost of CPU time. In some cases the additional speed is going to return more value than the cost of the dev working. Other times, the additional value of getting the product to market sooner is going to win out.
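Written out, the break-even arithmetic from the comment above (all figures are the commenter's illustrative numbers):

```javascript
// Illustrative break-even: 8 hours of dev time vs 1 second saved per call.
const devTimeSeconds = 8 * 60 * 60;  // 28,800 seconds of developer time
const savedPerCallSeconds = 1;       // the optimization saves 1 s per call

const breakEvenCalls = devTimeSeconds / savedPerCallSeconds;
console.log(breakEvenCalls); // 28800 calls before the time investment pays off
```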
It'd certainly be a good idea to understand exactly what the alternative is when you see JSON.parse() before deciding it's bad or firing anyone, right? There are definitely some legit cases for JSON.parse(). Not to mention that a full round of you setting clear expectations, giving examples of what's recommended and what's not, giving people a chance to learn & grow, and documenting repeat offenses, should all be done before booting someone...?
Deep-copying JSON objects using stringify+parse is not just faster, but less problematic and less code than writing a recursive object copy routine.
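For illustration, the round-trip copy and its standard caveats (the caveats are general JSON behavior, not something the comment spells out):

```javascript
// Deep copy via JSON round-trip: short and simple, but lossy for values
// with no JSON representation (Dates, undefined, functions, Maps, ...).
const original = { a: 1, nested: { b: [2, 3] }, when: new Date(0) };
const copy = JSON.parse(JSON.stringify(original));

copy.nested.b.push(4);
console.log(original.nested.b); // [2, 3] - the original is untouched

console.log(typeof copy.when); // 'string' - the Date did not survive
```

So it works cleanly for plain JSON-shaped data; anything richer needs a real deep-clone routine (or `structuredClone` where available).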
> This knowledge can be applied to improve start-up performance for web apps that ship large JSON-like configuration object literals
Third paragraph...
> A good rule of thumb is to apply this technique for objects of 10 kB or larger — but as always with performance advice, measure the actual impact before making any changes.
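For anyone skimming, the technique the article describes looks like this (the config below is a tiny stand-in for the >10 kB payloads it actually targets):

```javascript
// Instead of shipping a large object literal...
// const config = { theme: 'dark', locale: 'en', flags: { beta: true } };

// ...ship the same data as a JSON string and parse it once at startup.
// JSON's grammar is much simpler than JavaScript's, so large payloads
// can parse faster this way - but, as the article says, measure first.
const config = JSON.parse('{"theme":"dark","locale":"en","flags":{"beta":true}}');
```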
I wouldn’t mind having this in my build step, as it’s all minified and unreadable anyway, so what do I care, but I agree with you fully.
Not only would you be missing out on readability, none of your linters will catch errors within that string any more and if you use something like prettier, well, god help you. You’re almost guaranteed to introduce more wasted time than you’ll save with this doing it manually.
Well, they are suggesting it for literals that are 10 kB or larger. That means they aren't really talking about code that's in your normal codebase - it's quite rare to have a literal that large. It is more likely this is relevant for backend tools that autogenerate JavaScript code to be sent to a client.
For the main two apps I work on, there are some configurations that differ between client deployments: i18n strings, configuration settings/options, theme options, and a couple of images (base64 encoded) for theming. Switching to JSON.parse had a pretty significant impact: from over 200ms to under 100ms for my specific use case (IIRC). Memory usage was also reduced.
I don't remember the specific numbers... it was an easy change in the server handler for the base.js file that injects a __BASE__ variable.
var clientConfig = JSON.Stringify(base.Env.Settings.ToClient(null)).Replace("\\", "\\\\").Replace("\"", "\\\""); // escape backslashes first, then quotes
// NOTE: JSON.parse is faster than direct JS object injection.
ClientBase = $"{clientTest}\nwindow.__BASE__ = JSON.parse(\"{clientConfig}\")";
...
return Content($"{ClientBase}\n__BASE__.acceptLanguage=\"{lang}\";", "application/javascript");
The top part is actually a static variable that gets reused for each request, the bottom is the response with the request language being set for localization in the browser app.
I totally agree that inlining `JSON.parse` of string literals in source is a bad idea and I would reject it in a code review except under the most extreme circumstances (and even then try to identify a better solution).
On the other hand, knowing the performance characteristics, this is something that compilers could do as an optimization. Who knows if that's worth the effort, but this kind of research is part of determining that.
The JSON.parse approach might also be useful if the same data needs to be used in non-JavaScript code too.
You could then use the same string in JSON.parse(...) in your JavaScript, json_decode(...) in your PHP, JSON::Parse's parse_json(...) in your Perl, json.loads(...) in Python, and so on.
If you do have constant data that needs to match across multiple programs, it will probably be better in many or even most applications to store the constant data in one place and have everything load it from there at run time, but for those cases where it really is best to hard code the data in each program, doing so as identical JSON strings might reduce mistakes.
> I’d probably fire someone if I started to see `JSON.parse(…)`
Guys - I think he was being hyperbolic. Ya know, like everyone does on the Internet. If he had said "if I had to look at JSON.parse(...) lines constantly, I'd jump off a building!" I doubt you all would be calling 911 over an attempted suicide.
If I used this one weird trick, I'd want it to be compile time checked.
I'd stick that JSON in a separate file, get typescript to compile it "just to check it's OK" then get the compiled code and include it as a string using something like https://webpack.js.org/loaders/raw-loader/, I guess (not used it before).
There might be a leaner way to do this (maybe the whole thing can be done as a webpack loader in one step), but something like this.
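As a sketch of that two-step idea (the loader choice and rule shape are my assumptions, not something the commenter specified), a webpack config might look like:

```javascript
// webpack.config.js (sketch) - inline a separately type-checked JSON file
// as a raw string so runtime code can feed it to JSON.parse. raw-loader is
// one option; webpack 5 asset modules (type: 'asset/source') work similarly.
module.exports = {
  module: {
    rules: [
      {
        test: /\.json$/,
        type: 'javascript/auto', // disable webpack's built-in JSON handling
        use: 'raw-loader',       // import the file contents as a string
      },
    ],
  },
};
```

Application code would then do something like `import raw from './big-config.json'; const config = JSON.parse(raw);`, with TypeScript's `resolveJsonModule` (or a separate `tsc` pass) validating the file beforehand.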
They mentioned that it should only be used for very large objects (say, 10 kB or more), so if you're seeing ~10 kB hard-coded objects throughout your code, you should probably fire someone. If it's in just a few places, there should be a comment describing it (e.g. "large object constructed from DB query, use JSON to make page load faster").
I believe you can use "interceptors" or the Adapter pattern on the front-end to call JSON.parse once for all your HTTP calls instead of littering it throughout the code base.
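A minimal sketch of that idea (the names are illustrative; Axios response interceptors or a shared fetch wrapper would play the same role):

```javascript
// A tiny response-interceptor chain: JSON.parse happens exactly once,
// in the shared HTTP layer, rather than at every call site.
const interceptors = [];
const addInterceptor = (fn) => interceptors.push(fn);

// Each interceptor transforms the value produced by the previous one.
const handleResponse = (rawBody) =>
  interceptors.reduce((value, fn) => fn(value), rawBody);

addInterceptor((body) => JSON.parse(body));     // parse once, centrally
addInterceptor((data) => data.payload ?? data); // e.g. unwrap an envelope

console.log(handleResponse('{"payload":{"id":7}}')); // { id: 7 }
```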
Most development time is going to be spent on reading code that's already written, so yes, they do matter. With the speeds mentioned it's not going to be appreciable until you hit a massive scale, which, let's face it, most of us aren't working with.
const injectedValue = JSON.parse("$SERVER_JSON_VALUE"); // value escaped server-side via .Replace("\"", "\\\"")
// vs
const injectedValue = $SERVER_JSON_VALUE;
generally, for a single value in the codebase, is emphatically NOT a huge issue... and if it saves 80-120ms or so on load, that's a significant impact. Not to mention the lower memory overhead while doing so.