It's a race to the bottom. You could pay more for better work that gets done more slowly (fewer hours as people have families, etc.), or you could pay less for worse work that gets done more quickly.
Well, iOS11 needs to ship on its annual schedule, non-negotiable. And the SVP will really complain if I ask for more money this quarter. So, cheap it is.
If a bridge falls down, it's really obvious and people die. If billions of computers become vulnerable to malicious actors and a few hundred or thousand people suffer dramatic personal damages, well that's a nice and quiet problem which will be quickly forgotten in the 24-hour news cycle.
This. I do development coaching and training. I’m totally burned out because my job doesn’t matter. These companies don’t care. They hire me to “increase code quality” but that ends up being a lie. They just use me to get their teams to crank out shit faster, and I do mean “shit”. They don’t care if the systems blow up a few times a year or once a week. Banks, oil companies, airlines, all of them. They lose millions because of it too; still it doesn’t matter. No one (in leadership) gives a fuck. “Just make this quarter’s goal” is all that matters.
Fucking Equifax, for crying out loud: NO CONSEQUENCES. If this isn’t ironic enough, do some googling on why Equifax is named such. tl;dr This isn’t their first rodeo. They once fucked up so big they had to change their name.
Sucking at software only has consequences in a few industries, usually ones where the product is software, the product is discretionary, and the customer is fickle. Even then it usually sucks.
These same companies are audit-conscious (about money), and they're obsessive about regulatory compliance and workplace harassment. Why? These things have real consequences.
The sooner some company gets fined one year’s profit or an exec goes to jail the sooner this shit will actually matter to them.
The topic of your rant is something that it took me forever to accept as a software engineer. Nobody cares about quality.
Your method is refactored to be super-fast and have no branches? Nobody cares.
Elegant, re-usable architecture? Nobody cares.
You've optimized an inner loop in assembly and increased speed by 30%? Nobody cares.
Made it to zero compiler warnings? Nobody cares.
No known crashers left in the bug tracker? Nobody cares.
Fixed that bug that's been in the code for two years? Nobody cares.
Patched a security hole that could allow a crazy bad attack? Nobody cares.
The vast majority of software shops out there care only that something--anything--ships as fast as remotely possible, and as long as it does not kill the customer, it's a success. Nobody in leadership gives a single fuck about anything else. There are a few places where this is not true, and good software people tend to collect there, but it's true for the vast majority of places where you will work. The sooner you accept this, the less likely you'll burn out.
I agree with you, and this is a hard lesson to learn. But I don't think it's necessarily a bad thing.
Software should really only be measured by the value it provides. If a terribly buggy piece of poorly written code still saves hundreds of man hours a week, it's a win. (Unit tests be damned)
If that big ball of mud that's using 20 year old technology still prints a billion dollars a year - that's a win. (Microservices be damned)
If some slapdash jquery-and-duct-tape web app still solves a specific problem I have, that's a win. (React SPAs be damned)
Does crappy software flourish while beautiful software dies? Sometimes. But usually code hygiene, security, bug counts and crashes are not what makes or breaks the success of a piece of software. Developers often think their work is the most important, when in actuality it is usually not. Sometimes you really just are a cog in the machine.
This is what I call the Mediocre Mercenary Mentality, this idea that "as long as we can make money off of it, it's good enough".
This mentality is the reason why sometimes -- but not always! -- my phone won't play music in my car until I restart the _car_.
This mentality is why game development companies get away with charging full price for unfinished, bug-ridden games with crippled features and missing content.
This mentality is the reason why I couldn't see my credit card in the list of my accounts on my credit union's online banking site for several days.
This mentality is why Experian and Equifax don't report the same credit score for me.
This mentality is why I feel ashamed every time a receptionist apologizes for the wait, because they "have been having problems with the system."
Most importantly, this mentality is why I have to worry about whether someone will steal my identity because people keep writing shitty code on top of shitty code and my personal information keeps getting leaked.
When I'm so disgusted by my own profession, no wonder I ended up burned out.
We have an economic system where we reward maximizing short term financial gain. As long as that doesn't change companies will try to sell the crappiest shit they can sell you for the most money.
I think this is more than just the economic system. If you look at people in general, we live in the age of instant gratification.
We want everything fast! Nobody is willing to put in the work and get the reward after a longer period of time... No! We want results now! We don't want to wait for the food to be properly cooked, we want it now, so we go for fast food. We don't want to put in 1 year of work to learn something new so we can feel that we achieved something, we want it now, so we play a game where we can 'win' in 30 minutes. We don't want to build a relationship and eventually end up being intimate, we want it now, so we go for 'one night stands'.
Then you ask yourself... is it weird that the economic system works the same? That the software industry works the same? :/
I don't think the phenomenon you described is real, at least not as some new thing specific to this age. If anything, the things you mention only reflect the new choices we have. Going over your individual examples:
> We don't want to wait for the food to be properly cooked, we want it now, so we go for fast food.
We go for fast food because it saves time. When I opt to order in instead of going out for lunch, or grab a burger from McDonald's en route, I do that because eating is mostly instrumental. I don't want to eat at that point, I have to, and the sooner I can fill myself up, the sooner I can get back to doing the things I actually care about.
Fast food, and food delivery services, allow people to choose to spend more time on other things, when those other things are more important to them. The same people will enjoy a finely cooked meal on a different occasion, when they prioritize it.
> We don't want to put in 1 year of work to learn something new so we can feel that we achieved something, we want it now, so we play a game where we can 'win' in 30 minutes.
Spending a year on learning something to just get a quick feeling of achievement is a very stupid way to go about it. Videogames are better for that. Learning is better for getting long-term feelings of achievement, and to actually gain knowledge/skills that you can use for something. And again, it's not something new - our generation wastes time on videogames, previous generations wasted time playing soccer, cards, darts, and doing tons of other quick-reward activities.
> We don't want to build a relationship and eventually end up being intimate, we want it now, so we go for 'one night stands'.
One-night stands have existed for as long as humans have been humans. Nothing fundamental has changed; only the hookup methods evolved with population density and available communication tools.
--
> Then you ask yourself... is it weird that the economic system works the same? That the software industry works the same?
Nah, they don't work the same. In them, the actors are not driven internally; they're driven externally. People write shitty software, or sell shitty products, not because of their need for instant gratification, but because of market pressures. A fast hack sold by your marketing team can be the difference between getting a $100M contract and not getting it, or getting your product on the market first versus a week after your competitor. The market economy, for all its efficiency, is what creates the culture of suck.
You say it like it's a bad thing. If I compare two services/products/relationships/experiences which are equal in all but one attribute, delivery time, guess which I will choose.
Delivery time is an important attribute and reduces the (subjective) risk of no delivery.
And delivery time is an attribute which is measurable. In comparison, it is hard to measure how bug-free a piece of software is, or how healthy food is, or how successful your relationship will be, or how well you can learn that new skill in a year.
I have this saying that most software is tent software. Nothing wrong with tents; music festivals are built on tents, for example. They are cheap, quick to put up and take down, and can do a lot!
You just have to realize that most software buildings are tents, and there are only a few real buildings out there that you can use.
> I don't know if the world is getting a shittier place. But it surely is getting less reliable.
It is, because everything is being optimized into literal minimum viable products. That's how competitive markets work - whatever aspect of your product or business you can cut out to save money and thus get ahead of your competitors, you will cut out, and then your competitors will cut it out too, in order not to get outcompeted by you. This race-to-the-bottom phenomenon is fundamental to how competition works, and the result is that products gradually lose quality.
It's the reason why your grandfather's washing machine probably works to this day, while you have to fix your new one every year, and will probably replace it in five years. "Building to last" is a good example of a quality that the market economy optimized out over time across pretty much all products in all sectors.
> It's the reason why your grandfather's washing machine probably works to this day, while you have to fix your new one every year, and will probably replace it in five years.
Adjusting the cost of each product for inflation, things really are built like they used to be. Your grandfather's washing machine cost far more than a current model* (again, adjusted for inflation), and all of the poorly built old washing machines broke, so no one sees/thinks about them anymore.
* I am having trouble finding a source on washing machines specifically, however I'm assuming they followed the same trends as most commodities tracked by BLS: https://data.bls.gov/cgi-bin/surveymost?ap
It sounds like you'd burn out in any profession. Perfect is the enemy of the good. If you're holding your work, and that of others, to a perfectionist's standard, then you're just causing yourself unnecessary stress and anxiety.
Most of those bugs and failures you listed are really inconsequential in their contexts.
Car won't play your phone's music? Use the radio.
Game has bugs? Play another game until it's patched, or find inventive ways to use the bug.
Can't see your CC? You can still charge on it and pay its balance.
Experian and Equifax don't report same credit score? Your creditors aren't reporting to every credit bureau.
Ashamed that someone else's system doesn't work? I can't help you there. That's some deep psychological issue.
About 10 years ago I came to understand that mediocre runs the world. All these people, who are your bosses, who are getting raises and promotions, they're the B/C students from college. They don't care about perfect. They really only care about finishing what they're assigned and going on with their outside life.
> Most of those bugs and failures you listed are really inconsequential in their contexts.
Yes, I expect that for most things we could find a context that could render them inconsequential.
> Car won't play your phone's music? Use the radio. Game has bugs? Play another game until it's patched, or find inventive ways to use the bug. Can't see your CC? You can still charge on it and pay its balance. Experian and Equifax don't report same credit score? Your creditors aren't reporting to every credit bureau.
Elevator doesn't work? Use the stairs. Public restroom is filthy? Hold your breath and don't touch anything. Lost a tooth on the right side of your mouth because of an incompetent endodontist? Chew on the left side.
I'm not being facetious here. I really do agree that almost anything is tolerable if we decide to. That last example is from personal experience: I have lost three teeth on the right side of my mouth and I have to chew on the left. Most of the time, I don't even think about it anymore. And it really isn't that big of a deal -- it doesn't have a very significant impact on the quality of my life.
> Ashamed that someone else's system doesn't work? I can't help you there. That's some deep psychological issue.
It could be. Like I commented elsewhere, I would expect a doctor to be ashamed of Andrew Wakefield, but maybe most of them aren't. And even if they are, it could be a deep psychological issue that I choose to compare the state of our industry to what Wakefield did in his.
But here's the thing: I know that we can do better. Not perfect, better. But we don't have to, because we lack accountability.
Sure, perfect is the enemy of the good. But complacency is the last refuge of the mediocre.
> Ashamed that someone else's system doesn't work? I can't help you there. That's some deep psychological issue.
Personally, I get angry in those situations... the larger the company, and the more safeguard crap in place that's supposed to prevent those issues, the more pissed off I get. But that's just me. The more visible the evidence of poor/buggy quality in business software, the angrier I get.
> Software should really only be measured by the value it provides.
The problem is when the "negative value" is externalized leading to false evaluations. "Oops, so sorry we leaked all your social security numbers. Our bad."
Yes, the "software should really only be measured by the value it provides" attitude is kind of the key problem we're facing in a whole lot of fields.
Because it's not really the overall value the software provides that's being talked about; it's the instantaneous, immediately visible payoff to a single consumer that the software (and the people who make it) gets judged by. If the software is going to cause trouble over time, if it's going to have security holes that will cost a lot over time, if it's going to commit you to garbage that's updated less and less frequently, etc., none of this is calculated. Just as the health costs of sugary drinks don't get calculated, the social costs of poor education don't get calculated, etc.
Also the externalities that are never mentioned: bloat and poor performance unnecessarily take up users' resources, frustrating them and preventing them from doing other things on their machine simultaneously, and also unnecessarily wasting electricity. Multiply that by the number of users of the software, and suddenly all that talk about "optimizing for developer time" starts to sound like "I'll save $100 for myself by inflicting $100k in externalities".
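To make the aggregation concrete, here's a back-of-the-envelope sketch; every number in it is invented for illustration, not measured:

```python
# Back-of-the-envelope sketch of how per-user waste aggregates.
# All numbers below are made-up placeholders.

users = 1_000_000             # installed base
extra_cpu_seconds = 5         # extra CPU time per user per day from bloat
user_time_value = 0.01        # assumed $ value of one second of a user's time
kwh_per_cpu_second = 0.00002  # assumed energy draw of a busy core
electricity_price = 0.15      # $/kWh

dev_hours_saved = 10          # developer hours saved by not optimizing
dev_rate = 100                # $/hour

dev_savings = dev_hours_saved * dev_rate
daily_externality = users * (
    extra_cpu_seconds * user_time_value
    + extra_cpu_seconds * kwh_per_cpu_second * electricity_price
)

print(f"one-time dev savings: ${dev_savings:,.0f}")      # $1,000
print(f"externality per day:  ${daily_externality:,.0f}")  # ~$50,015
```

Even with fairly conservative per-user numbers, the aggregate dwarfs the one-time development savings within a day.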
Somehow I never consider solving a problem by delegating it to government or society. My first thought is always: there must be a technical solution for this. Even government and society are going to be solved by technology: crypto/blockchain, or something which will evolve out of it.
There's no tech that specifically gets people to do the right thing. There's tech that might make people do things but that tech can be harnessed to get people to do either right or wrong things.
Even on a larger scale, the world hasn't burnt down with all the private information that's been stolen lately, has it? Have there even been any significant consequences?
Is it possible that the value assigned to keeping that information secret was, in the end, actually appropriate?
I'm a web developer in my spare time. I write little things that solve problems in my two day jobs; they haven't tended to be pretty, because often I'm learning as I go, and when it works then it's truly time to move on to the next problem, even though I now know enough to solve the old problem again more elegantly. I'm trying to use better practices and modern JS frameworks in new projects just for experience, even though they are actually overkill in many cases. Certainly the people I work for would rather have two useful utilities written in plain es5 and php than one useful utility plus a story about the code quality. I'm gradually finding the middle ground where somebody can look at my code and not instantly want to take a shower.
You're forgetting about ethics. Even if your boss doesn't care about security, if you want to be a decent person, you need to at least try to prevent your system from becoming a tool to harm others.
The vast majority of people making egregious security errors know no better. You could argue that whoever hired them shares or shoulders the blame, but to suggest they're not decent people is a stretch. Most, I'm sure, are. They're also likely infosec idiots.
Now if you know something is terribly insecure, understand what's involved in fixing it, and go out of your way not to, then yes, ethics come into play. I see that the same as an engineer (in the true sense of the word) staying silent on an issue involving automobile brakes that could lead to casualties. The consequences are not 1:1, but the ethical question is in the same category.
I already have issues when I see something but I know no-one is going to pay to fix it, so what can I do? I report it, no-one cares; I might be able to fix it but then I do this for free.
If your bugs can hurt people (financially, physically), I think you need to work a bit harder to make sure they don't sneak into your work, using whatever you believe helps there (unit tests, formal verification, ...). But if you didn't do all you could because you are not paid for it (your boss tells you to add features, not to waste time on things that didn't break yet, for instance), what should you do? It seems like a real issue for professional coders, as there are not many options; not many companies will pay to prevent these kinds of things unless they already had a big issue (and someone(s) got fired / sued for it).
I'm normally on board with Hanlon, but "infosec idiots" who fail to educate themselves on basic security principles are guilty of negligence at best. There's no reasonable way they don't know that security is a real concern, so they should be looking into it. If they still fail, then we can go back to Hanlon's razor.
But as one HNer to another, I was really talking about your second case. To claim here that security can ever be discounted, as the person I was replying to implied, is pretty much inexcusable. Everyone here knows that security affects more than just your business, so no one here should be solely applying business logic to it.
I have a feeling you don't have much experience working on other people's code. You have just bundled all kinds of issues into a bucket of "buggy piece of poorly written code". Some of these are outright security vulnerabilities, and some make you wonder how the bug was not found sooner. Sometimes fixing these issues requires a heavy rewrite which cannot be justified as a bug fix. And how do you justify playing with your customers' data by "slapdash jquery-and-duct-tape" techniques?
This, ironic enough, is my main motivation to improve my software development skills, from algorithms to system design to development patterns. Because, like first impressions, the first write is the most important. If it's done well the first time around, the true risk of missing an improvement upon the next iteration is significantly lowered.
Edit: also the reason I'd like an ongoing education in basic security for app devs
"Software should really only be measured by the value it provides. If a terribly buggy piece of poorly written code still saves hundreds of man hours a week, it's a win. (Unit tests be damned)"
But those bugs could easily lead to mistakes in the output.
I had the fortune to work at an agency that cared a whole lot, and tended to take on clients who had learned hard lessons from bad code written for the platform we worked in.
Code quality matters and has tangible consequences in that niche, and it was awesome to work in that environment.
I haven't worked anywhere like that previously, and I think the key factors that brought code quality to the forefront were:
* Bugs cost sales
* Poor code caused significant friction when building features (reduced ROI)
* Small teams and an insanely tight customer relationship to those teams. The client needed to have a stakeholder in our team or as close to that as possible.
But most importantly it was just the company culture from top to bottom. The director cared, he made the project managers care, and we already cared because it's more fun to write good code.
It really was! I believe they were doing well, growing at a steady pace. Another part of the puzzle was that client relationships were long, over years, so if we wrote bad code it was us that would have to deal with it later. The clients were typically larger, which helped to that end.
We did take on a rushed project at one point, so there is a good comparison in that project. It shipped, but everyone was unhappy, some key people left and had to be re-hired for. It triggered a very transparent discussion roughly titled "let's figure out how to never do that again" and to that end they curated the clients more carefully, and put in some checks to see if projects are a good fit.
But I understand that is probably unique, and at previous jobs the "good enough to make money" mantra has worked out okay.
I think when your software ___domain is complex and rapid, it pays to have sound architecture. When it is simple, or slow to change, it doesn't matter quite as much.
Whoa now. The Toyota 'unintended acceleration' case showed us that it's fine to kill the customer. The court declared themselves incapable of penalizing even the most stupendously egregious example of criminal negligence we're likely to see (of 90+ 'required' and 'suggested' practices in automotive industry firmware coding standards, they followed 4 in the code in question; the developers didn't have a bug tracker or static analysis tools; typical of all organizations, the software engineers had no real power to demand more time or testing, etc). They lost a civil wrongful death case and paid out $4 million, then settled with the plaintiffs out of court before the jury could award punitive damages. So killing is OK.
The issue is that, I think, software people need to stand up for themselves. They need to tell executives no, and refuse to pump out slapdash work. They need to take responsibility for the successes and failures of the company. The practical truth is that in most organizations, the software is the most valuable part of the entire business and the thing with the power to enable the company to excel or fall flat. We've been waiting for executives to recognize this since the 1980s and they still just see the software people as people who do typing.
After a 10-month search, NASA and NHTSA scientists found no electronic defect in Toyota vehicles. Driver error or pedal misapplication was found responsible for most of the incidents. The report ended stating, "Our conclusion is Toyota's problems were mechanical, not electrical."
Totally agree. But the only way for software people to take any power or control (really, for any worker to have any say in the business at all) is to unionize, and us tech folks tend to treat that like a curse word.
My universal advice to graduating CS students is, "no matter what your current code quality/dev speed balance is, your boss will think you're polishing too much."
Second advice. If you want to pretend that quality matters, move to aerospace, finance, or any critical systems.
Third advice. It's not enough to fix a bug; everyone MUST know that you fixed that weird bug that customers were complaining about for years and no one could understand the cause of.
> If you want to pretend that quality matters, move to aerospace, finance, or any critical systems.
The important word here is pretend. If the system is safety critical you'd think that quality matters. It doesn't. What matters is following the process laid out in ISO 26262 or whatever standard applies to your field. Whether or not that improves quality is not important (it does, but only a little). The important part is that you have reduced/no liability for accidents if you can prove that you did everything by the book.
Yep, part of the quality standard for medical devices is to have processes, and also processes to improve and change the processes. So theoretically there shouldn't be any doubt as what to do in a given circumstance when something fails, and that standards would rise as development matures. But in reality it's just a bunch of paperwork, and as far as the software is concerned, standard practices of unit and integration testing, code reviews, and strict compiler settings are more than adequate already.
Edit: It's all about documenting that those things have been done and signed off on, so keeping track of paperwork.
...where he talks about his role and questions regarding the Challenger shuttle disaster; I don't know if things have changed much since then at NASA, but I suspect they haven't, given Columbia and such.
Mmmyes, but the process implies a heavy chain of verifications. So even when there has been an idiot and a lazy guy, at some later stage in the process it has to go through someone who cares, and it gets fixed at that point.
After all, that is the point of those heavy processes: manage to produce something fairly reliable despite the acknowledged presence of morons and shitty managers. The process does not try to eradicate those people; it works around them with some kind of redundancy.
Having been one of those 'concerned' people, I must admit it is exhausting. You know that if some work comes from this guy, or that company, it will be 90% shit, and you have to explain to them again and again what is wrong and how to fix it, or do it yourself when you are really fed up. So basically you do your job + the slackers' job. That is how the system works. As a system, the result is not too bad. As far as individuals are concerned, however...
And make sure to record it so that at the end of your yearly review cycle, and before the raises are decided, you can remind your upper management of your notable accomplishments.
I disagree. A year is a long time, and a valuable, difficult achievement in February may raise your prestige for a while, but by December that's faded to background levels. I even forget my own achievements from nine months ago, never mind those of a dozen other people on my team.
Depending on the attention span of your particular managers (and their managers as well, most likely), this can indeed be a viable strategy.
You might also carefully consider your company's culture when deciding how to split your efforts between new (user-visible) features, preventive maintenance, and bugs affecting end users (especially any VIP end users that have your managers' ears).
For example, a 'Hero' culture rewards last-minute, late night, death march efforts, whether to fix an urgent problem, or to add that one feature that is (supposedly) needed to land a specific high-value customer/sale.
Really? Ship an app to the store that crashes all the time and see your metrics go WAY down. Losing user data is another great way to lose users permanently. Sure, if you are Google or FB it probably does not matter, but for smaller companies NOT CRASH, NOT LOSE USER DATA seems to matter quite a bit in my experience.
B2C is a tiny, tiny fraction of the overall software industry. The vast majority of developers are working on godawful enterprise software that end-users have no choice but to tolerate. The absolute worst apps on the iOS store are remarkably high quality by the standards of enterprise software.
> The sooner you accept this, the less likely you'll burn out.
I learned this at my first software engineering job (though we didn't call it that) back when I was 18; I'm closing in on 45 years of age, and I'm still here at it, and don't see that changing soon. I have thought about management, but I really don't have such experience, and I'm not a real people person anyhow - plus I like to code.
Another thing I have learned - at least here in the United States: Not only do they not care about quality, they usually don't have any loyalty to you as an employee. They'll let you go without any warning at all, and if you're lucky, you'll get some severance pay.
So save your money, build a savings account with FU cash in it (6 months to a year of salary - or more), and don't be afraid to drop an employer like a hot potato if things aren't working out, or a better offer is in the works.
And above all else, don't let your skills stagnate.
Do they care if a year down the road your maintenance costs increase fivefold? Have they seen the analysis which shows that initial development cost makes up just a small fraction of the total cost, and that fraction gets smaller the longer software is in use?
Do they care if a year down the road your TTM times increase tenfold?
We know that bad code slows us down. Why do we write it? Why do we write the code that slows us down? How much does it slow us down? It slows us down a lot. And every time we touch the modules we get slowed down again. Every time we touch bad code it slows us down. Why do we write this stuff that slows us down?
And the answer to that is: well, we have to go fast.
I'll let you deal with the logical inconsistency there. My message to you today is: you don't go fast by writing crap.
No. When they have seen all the bugs, they will tell you to fix them, and when you make a new feature they will tell you the same thing: just make it work, just ship it.
Spot on in my experience. Curious if anyone on Google/Facebook/Amazon/Apple core teams could comment on whether these issues are also the case there? I would imagine solving issues like these does make business sense at big firms with scalability problems.
The industry does itself no favours by regularly rejecting techniques to improve software quality tho'. We could all be writing rock solid Ada, but programmers said no, we prefer JavaScript, hell we'll even try to use it on the server...
Lack of profit focus is definitely something I see in junior devs of the higher quality kind. It's important to sell any prospective improvement to management in a way that makes money. Quality people can't see won't make money, and even quality they can see will only make money if there's a competitive market.
> The vast majority of software shops out there care only that something--anything--ships as fast as remotely possible
When nobody cares, you get to the point sooner or later where you can't ship. They've probably launched a product, but there's still a lot more to ship in future. Eventually they become so mired in technical debt that they care.
When they realize they've invested a hundred million dollars in something that now has no value? They care. (especially if it's a turd before it gets out the door)
When a start up comes along and eats their lunch because they aren't weighed down be legacy code? They care.
When they lose revenue because they can't keep customers happy? They care.
The problem is that the professionalism of software developers doesn't allow them to learn these lessons early and hard enough.
I disagree. They will hire a consulting company to do a root cause analysis. The Ha'vad grads from McKinsey will swoop in, determine that the past doesn't matter, that you need to get to black, and that to get to black, the St. Louis team needs to go, along with 20% of the following 7 teams....
Shops that use strongly-typed functional languages are, in my experience, a good bet. They usually made that choice because they have an unusually compelling need to write correct code. Aerospace also has a few sub-industries where correctness is priority #1 (like commercial autopilot software).
This article bothers me because it implies that the path to correct code is simply to have code written by white-collar salarymen (as opposed to unkempt youth), and not improved formal methods.
It doesn't imply there is a correct path to code. It implies that the best path to the goal for this program is to minimize the opportunity for exhaustion to be a variable.
> That’s the culture: the on-board shuttle group produces grown-up software, and the way they do it is by being grown-ups. It may not be sexy, it may not be a coding ego-trip — but it is the future of software. When you’re ready to take the next step — when you have to write perfect software instead of software that’s just good enough — then it’s time to grow up.
It might not be the future of software, but it's the software of the future, in the sense that it's what enabled us to get to the moon, and will enable us to get to Mars.
I think you've nailed it on the head but haven't realized why you've nailed it on the head.
`Why? These things have real consequences.`
To a business, software/engineers are an expense. The whole focus of a business is to provide a service that people are willing to pay more for than it costs to provide said service.
Everything else doesn't matter; what does matter for a business is $$$$$ and making sure the business doesn't get sued because Bill touched Jill with his #####.
For what my words are worth, I do think you need to take a step back and view it from a business person's perspective and attempt to align your goals with theirs. Get out of your cube and go meet CEOs and CIOs and talk to them about their problems and what their goals are. Then see if you two can work together to achieve their goals and bring value to the table.
Even while writing this I could think of 10 or 20 different business discussions you could have.
>To a business, software/engineers are an expense.
Anecdotally, this is one of the first things I try to gauge at a prospective employer. "Are their development teams seen as cost centers, or profit centers?" If the company works with technology a lot and doesn't see technology as being particularly important to its bottom line, that's usually a bad sign.
Also, discussions with C-levels tend to be pretty one-dimensional. It doesn't matter how much their infrastructure is falling apart, they'll still stonewall with something along the lines of, "our goal this quarter is new user acquisition - how does this improve those numbers?"
And they won't accept, "users cannot sign up if everything is broken" as an answer.
Like I said before, you're viewing it from the perspective of a developer but not from the perspective of a business person.
Instead of saying `Users cannot sign up if everything is broken`, you need to see it from a business perspective, and communicate it to them in the language they understand.
For example,
`From our quarterly earnings report we need to have 10,000 new clients signed up by the end of the month to reach the goal. We have on average 2,000 sign-ups per day. On the 2nd and 3rd of last month two of the critical systems fell over, and this resulted in an estimated 4,000 failed client acquisitions. From our quarterly earnings estimates, each new client brings in $300 net profit for the quarter. As a result of the system crash our quarterly earnings will be down $1.2 million.`
Your job is to communicate clearly, in the language they understand, how it will achieve their goals. You're not the only department/manager requesting resources from the business. As I stated before, you're there to serve the business and achieve their goals.
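A sketch of the arithmetic behind that pitch, using the invented numbers from the example above:

```python
# Sketch of the revenue-impact framing above; the numbers are the
# hypothetical ones from the example, not real figures.

signups_per_day = 2_000
days_down = 2                 # the 2nd and 3rd, when the critical systems fell over
net_profit_per_client = 300   # $ per client for the quarter

failed_acquisitions = signups_per_day * days_down          # 4,000
quarterly_impact = failed_acquisitions * net_profit_per_client

print(f"failed client acquisitions: {failed_acquisitions:,}")
print(f"quarterly earnings impact:  ${quarterly_impact:,}")   # $1,200,000
```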
Honestly, this is pretty straight-forward stuff, and I'm surprised you're getting any argument. If a technologist cannot explain how a proposed implementation is going to provide value to the business, that project is probably better off left alone. Prettier code and better tests don't provide value. But fewer customer-facing P1/P2 tickets provides value. That prettier code and better tests leads to fewer tickets is an implementation detail.
Let's put it this way: Netflix has one of the most robust microservice architectures I know of. I promise you will find zero mention of it anywhere on their customer-facing pages. Because that is not the problem that Netflix solves for their customer; they solve a media library and streaming problem.
The business is your customer. You need to understand their problems and how you're going to fix them. Everything else is an implementation detail.
>Honestly, this is pretty straight-forward stuff, and I'm surprised you're getting any argument.
I'm not. Our industry is full of academics who've been told "this is the right way to do it, everything else is stupid" for the entirety of their education. They're academics, trained by other academics, ignoring the broader reality of why we do these things at all.
The corollary here is that it is perfectly acceptable for business users to not understand significant parts of their own business operations. In fact, everybody should work around your shortcomings in understanding that "technical" stuff. As the guy with the MBA, you are the most important person in the room and everybody else is there to serve you.
You are a team. Not everyone has the same background and experiences. You would be amazed at how many people in an organization have incredible knowledge of their business ___domain and little to no knowledge of IT systems, their costs, and benefits. As an IT leader it is your responsibility to help bridge that gap, and the language of business is business not tech jargon, so I would caution against dismissing it.
Another way to think of it is as a user acquisition problem. You have to highlight the benefits in a way that is compelling to them. At the end of the day, all progress/fiascos in a company come down to consistent and effective communication.
Still doesn't work. I've done clandestine work over weekends to speed up an application, then as part of shipping it also added a split test, with half the audience getting a delay similar to what was in place before I fixed the problem, and brought that to the table with the "business" guys. Everyone nodded their head "wow", but every sprint after that was just full of new features.
I obviously didn't bring to the demonstration that I had volunteered the time, it suited my point to make it seem like it was something easy I could do as an aside... Problem is, they don't care about $ either, they care about what makes them look good, what makes their boss look good, what the sales guys are bugging them about, and it's features and visual polish and never quality or performance.
I had to break it down to one of them "Listen, we've seen that the site being faster gets us X% more conversions, if the site was down, or if the site was ugly X% of the time and cost us those conversions, you'd be breathing down my neck to fix it, why is this any different...?" reply after a few seconds of thought "You're right, but you'll never convince Leadership"
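For the curious, a minimal sketch of how such a holdback split test might be wired up; the function names, cohort split, and delay value are all invented for illustration:

```python
import hashlib
import time

# Hypothetical sketch of the split test described above: half the
# audience keeps the old latency so the speedup's effect on
# conversions can be measured.

ARTIFICIAL_DELAY_SECONDS = 1.5  # roughly the latency before the fix (assumed)

def in_slow_cohort(user_id: str) -> bool:
    """Deterministically assign ~50% of users to the 'old latency' arm."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return digest[0] % 2 == 0

def handle_request(user_id: str, render_page):
    if in_slow_cohort(user_id):
        time.sleep(ARTIFICIAL_DELAY_SECONDS)  # simulate the pre-fix latency
    return render_page()
```

Hashing the user id keeps cohort assignment stable across visits, so each user consistently sees either the fast or the slow experience.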
> I've done clandestine work over weekends to speed up an application
> I obviously didn't bring to the demonstration that I had volunteered the time, it suited my point to make it seem like it was something easy I could do as an aside
I understand the impulse, but in the long term this leads to inflated expectations from management - and the tech side has only themselves to blame. When we pretend things take less time than they really do, don't be surprised when they learn to expect our lie to become truth.
There is an old Google paper where they talk about discovering that users returned more often if the search results list was 10 instead of 20, since the shorter list took marginally less time to load/render for the user. If your co-workers don't believe you then I am sure they would at least love the Google finding that minor performance improvements have a positive impact. They would then support improving performance in the app and brag about how much like Google they are.
Google shows one series of ads per page. The study showed that having more legitimate results per page decreased their revenues. Of course, it leaves less room for ads and means fewer page views.
The work you did probably is great, and I do think it's above your average developer's. Somewhere in the top 10%. Though how you conducted yourself in the face of management was downright petty/folly.
Yes, you did the best job in the world. You improved the performance of the servers by some percentage, and it did make management of those servers easier. Though you need to think about what you've done in the eyes of management.
Firstly, you've modified the servers without permission. You've run tests outside of work hours that could have impacted the business's bottom line. For all I know (it's up for debate), your senior managers had more pressing issues on the table, and having a young code monkey running around outside office hours is the LAST thing they want to hear about.
`It's not as simple as you may think it is.`
Some of the things that come to mind; honestly, I would have pulled you into an office to explain yourself if you were on my team:
1) Did you document all the changes you've made to the server?
2) Did you back up or store any sensitive information off-site outside of office hours?
3) Did you modify critical system files that could cause another dependent system to fail?
4) What ETAs/service level agreements, if any, did you break when bringing down the server for maintenance?
5) Did we lose any security certification with you modifying the server or changing any of the certificates on the server?
We're software developers, and yes, when we see problems we want to fix them to the best of our ability. Though when you're in a business there are many things you may not be aware of. In this situation I would honestly say you're lucky to keep your job! I hope you take it into consideration next time you want to go outside of your duties.
I don't want to come down harsh on you, but there are times when you see a bad situation and there is nothing you can do about it. You notify the people who're higher than you of the situation and the dangers. It's then up to them to make the decision. After they've made the decision, that is it. If the server blows up during the weekend, and they want you to come in and fix it (even when it's their fuckup), you just take your hourly rate and multiply it by 10. So if your hourly rate is $50/hr and the server blows up, then your new rate is going to be $500 per hour to come in during the weekend to fix the mess.
Many of the points you've raised are moot outside of large corporate environments and projects.
It is likely that the commenter has root access to all of the things you mention and touches them on a daily basis. It is likely that they are not inundated with all of the heavy process you're bringing up. It is likely that all of the security concerns you raise are not even part of their current business. Working outside of set business hours is likely not a concern; overtime is likely uncompensated anyway and there probably aren't considerable office security protocols (they probably have keys and an access code for the whole office).
The points you've raised do not apply to most small development teams.
It was a simple performance bug (think N+1 query type) that was trivial to implement and trivial to be confident about its behavior vs the pathological implementation, and I used the normal channels for shipping it, it passed QA.
The only "clandestine" part is that I added the split test, and did some work outside of the set of features that was scheduled.
We had no policies around SLAs, developing off-site, security, or restricting sensitive information (think potential FERPA/HIPAA violations), which are yet other "non-features" that I tried to get them to adopt.
I implemented this entirely above-board, with more precautions and consideration than were taken by other developers on that team for the day-to-day.
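For readers unfamiliar with the N+1 pattern mentioned above, here's a minimal sketch in Python/SQLite of the general shape of such a bug and its fix; the schema is invented, and the original fix may have looked quite different:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1), (2);
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 5.00), (3, 2, 42.00);
""")

# The N+1 shape: one query for the users, then one more query per user.
def orders_by_user_n_plus_one():
    result = {}
    for (user_id,) in conn.execute("SELECT id FROM users"):
        result[user_id] = conn.execute(
            "SELECT id, total FROM orders WHERE user_id = ?", (user_id,)
        ).fetchall()
    return result

# The fix: one query for everything, grouped in memory.
def orders_by_user_single_query():
    result = {}
    for user_id, order_id, total in conn.execute(
        "SELECT user_id, id, total FROM orders ORDER BY user_id"
    ):
        result.setdefault(user_id, []).append((order_id, total))
    return result

assert orders_by_user_n_plus_one() == orders_by_user_single_query()
```

With N users, the first version issues N+1 round trips where one suffices, which is why these fixes tend to be both trivial and dramatic.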
The engineer is trying to optimize/normalize his free time by acknowledging quality issues, the management is trying to optimize their income by sweeping them under the rug.
The engineer is ultimately liable for the problem. Quality is insured with his reputation and free time.
The business guys almost always leave at 4:00pm and their bonuses are based on a metric not related to quality.
So the janitor should analyse and quantify what difference he will make to the company's bottom line before he wipes those skidmarks off?
Not everything is quantifiable like that. If I'm writing code and I take the time to make it 30% faster then it won't make a difference to our bottom line. If that's done everywhere then customers will like us more, be more likely to stay with us and be more likely to recommend us, but that can't be quantified in my day to day work.
Probably the biggest change looming over our industry is coding to - and deployment on - platforms that definitely are instrumented to capture exactly how much value a developer is adding (or subtracting) from the bottom line with every commit (it's a natural outgrowth of Function As A Service architectures).
It is already possible to more-or-less show how a code change (bug/fix, new feature, change in storage backend, etc.) directly affects your IAAS bill at the end of each month, and taking it a step further by tying in some instrumentation and formulas (of the sort often used to calculate the effectiveness of marketing funnels for optimization and A/B testing) to show the concomitant effect on revenue allows ROI to be calculated.
So, I am pretty sure that Finance-Oriented Programming is going to be a thing not so many years from now, tied into the whole toolchain, and very few - not developers nor their managers - seem prepared for that sea-change (some Operations folks are probably a bit better off in terms of mindset and processes).
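A toy sketch of what such per-change cost attribution could look like; the cost model and every number here are placeholders loosely shaped like public FaaS rate cards, not a real billing integration:

```python
# Hypothetical sketch: attributing an IaaS cost delta to one commit.
# Prices and usage figures are placeholders, not real rates or data.

def monthly_faas_bill(invocations: int, gb_seconds: float,
                      price_per_million: float = 0.20,
                      price_per_gb_second: float = 0.0000166667) -> float:
    """Toy FaaS cost model: per-invocation plus compute-duration charges."""
    return invocations / 1e6 * price_per_million + gb_seconds * price_per_gb_second

# Before vs. after a commit that cut average function duration by 40%:
before = monthly_faas_bill(invocations=2_000_000_000, gb_seconds=400_000_000)
after  = monthly_faas_bill(invocations=2_000_000_000, gb_seconds=240_000_000)
print(f"estimated monthly savings attributable to the commit: ${before - after:,.2f}")
```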
>"Are their development teams seen as cost centers, or profit centers?"
It's not a matter of opinion, but what the term means. They are cost centers unless they are closing sales (or being billed for about 3x what they cost, which is still way less than sales people will generate, being closer to 10x+ cost).
Best thing any employee can do is find out how their company actually makes money and understand how they align. Unfortunately, the highest business expense is salaries and one of the most important things for most businesses is remaining runway. So there is huge pressure to keep salaries down, particularly in cost centres, which aren't directly extending runway. If it's a service business it can get worse, as the money is typically made on billable hours - creating a perverse incentive to have cheaper and less efficient people to bill out rather than a 10xer (outside a senior business facing role). As long as they are capable and not so bad that they lose the client, it's often more efficient to have cheaper mediocre people from a business point of view.
How long you can continue operations if sales drops off a cliff is a very important number for every company I've worked at, from startups to big multinationals. Maybe it's less of a consideration for Amazon and Apple but I wouldn't even bet on it there.
Become a highly sought after specialist for a product offering (e.g. self-driving cars, blockchain, fintech), or get ownership (startup), or go customer facing (particularly if commission involved), or go into technical management on CTO route.
The better question to ask, in my opinion, is whether or not they capitalize their IP on the balance sheet. If they do, there is a very good chance that they care about quality. If they don't, then there is a very good chance they treat software as something disposable.
Yet my experience is that the issue comes from engineering a lot of times: fixing bugs is not fun, let’s rewrite everything because this time we learned from our errors, or just to use the new shiny framework. A buggy legacy code base that no one wants to touch or improve, and V2/New projects that of course completely fail to deliver in the end and the cycle repeats. Being in a customer team I try everyday to make Engineering care about non technical end users. My team and business might be responsible for some feature creep, but bad code quality comes from engineering practices.
Edit: to be more clear the discussion usually goes like this:
- Me: “where are we on fixing this bug? it’s been two years and customers are still complaining”
- Engineering: "we can't touch this component, it's a mess, but we are rewriting it; this bug will no longer exist and it'll be so much more powerful, just wait a few months"
We all know how it ends. And when business pressure comes because it failed to deliver it’s all “business is evil we need more time to do great software!”
What we really need is senior and experienced engineers IMHO
> Get out of your cube and go meet CEOs and CIOs and talk to them about their problems and what their goals are.
Implied in this statement is that the parent has a myopic perspective on the business world. Based on my read of his comment, I don't think that's fair - and moreover would question whether or not your perspective is as unique as you apparently think it is.
In other words, I think you're patting yourself on the back a little much for figuring out how to chew before swallowing.
> Get out of your cube and go meet CEOs and CIOs
Oh right.
Good luck even ATTEMPTING to arrange a meeting with a Fortune company's CEO. "Who from what department again?". Worse yet if it's not even your company's CEO.
Maybe, if you are lucky, you can get to skip a level or two. Software engineers are usually very far removed from the top leadership.
Regarding Equifax, when a company like that gets enough negative public reaction, they look for a bandaid. In the case of Equifax, they recently hired ReliaQuest https://www.reliaquest.com/
I've friends at ReliaQuest and I know it is a good company, but I am frustrated that the leadership at Equifax thinks that security is a minor issue that they can outsource to a 3rd party. If a company deals with sensitive financial data, shouldn't security be a core competence?
> It's a race to the bottom. You could pay more for better work that gets done more slowly (fewer hours as people have families, etc.), or you could pay less for worse work that gets done more quickly.
But in my experience, this is not true AT ALL. More experienced people might work fewer hours, but will produce the actual end results much quicker - both in hours and in calendar time.
Especially when considering that less experienced devs will (on the average) cause more bugs and worse code structure - in the long run this causes a ton of extra work. In my opinion that is the main difference that easily makes someone a "10x" or a "100x" developer.
But that part - bugs, structure - quickly becomes invisible to higher management, as they start to rake in millions the software issues become only minor hitches in comparison. When a company does well, software problems are easily forgiven - Twitter had huge scaling problems but still grew to one of the largest platforms; Coinbase is having regular problems now and is a multi-billion platform. Code quality doesn't matter if the business goes well. We purists like to think that high-quality software sells itself, but I'm more and more starting to think that you can't start off with that.
Of course, on the other hand over-engineering at an early phase of a project is very much a thing too - starting with microservices, using the latest framework or language, etc.
Please can you explain why using microservices is over-engineering? Is it because it's a relatively new approach? I'm getting involved in a new project that is using microservices and a new framework...
Because like any design pattern microservices should be used to address a specific problem. Martin Fowler describes that particular design pattern here in great detail: https://martinfowler.com/articles/microservices.html
Microservices however unfortunately have become a fashionable cure-all that's applied indiscriminately to software projects of all sizes, ages and purposes. Hardly anyone seems to be asking anymore why they're actually using microservices. Everyone uses them because everyone else does.
Microservices always incur overhead both in terms of engineering and network latency as well as maintenance / administration. If you have more than 1 microservice you need infrastructure for orchestrating and monitoring these services.
For all that to be worth it you need to have pretty solid arguments for using microservices in the first place. For a new project I'd usually advise against using microservices right from the start. Doing so is often rooted in the same kind of fallacy that has people worry about Facebook-scale scalability before they even have their first user.
If you start having problems that can be solved with microservices then good for you! This means your software is successful and has grown so much in terms of features and responsibilities it's starting to become intractable with standard approaches.
A traditional monolith gives you the opportunity to learn about the problem ___domain first without having to worry about implementation details such as microservices.
I second this. You can internally build your monolith in a way that makes a transition to microservices easy. Avoid global state and build single concern processes which have their dependencies passed to them on creation.
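A minimal sketch of that idea, with invented names: single-concern components receive their dependencies on creation, so extracting one into a service later mostly means changing the wiring, not the callers:

```python
# Sketch of "dependencies passed on creation" inside a monolith.
# All names here are invented for illustration.

from typing import Protocol

class OrderStore(Protocol):
    def save(self, order: dict) -> None: ...

class Mailer(Protocol):
    def send(self, to: str, body: str) -> None: ...

class CheckoutService:
    """Single concern: checkout. No global state, no hidden imports."""
    def __init__(self, store: OrderStore, mailer: Mailer):
        self._store = store
        self._mailer = mailer

    def checkout(self, order: dict) -> None:
        self._store.save(order)
        self._mailer.send(order["email"], "Thanks for your order!")

# In the monolith, wire in in-process implementations. If checkout is
# later extracted into a service, only this wiring changes; callers
# of CheckoutService don't.
```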
Also note that it is very, very difficult for anyone, inexperienced or experienced, to build a monolith that can later be pulled apart. Your two suggestions are a great start. You have to really design with the major blocks of functionality in mind, and carefully manage dependencies between these blocks AND third-party dependencies. So many monoliths are balls of mud that can't be pulled apart with any reasonable effort. The code may not be spaghetti, but the dependencies and interdependencies are.
Honestly, if one cannot create a monolith that isn't a giant ball of mud, one has no chance at all of creating a distributed application, microservices or not, even ignoring quality.
I think the strategy of "monolith first" isn't a poor one, but it is a little humorous to see someone who makes part of their living helping companies break apart their monoliths advocate it.
I guess. My partner is currently starting a mushroom farm. She hopes one day it will be big enough to make use of a very large facility. Right now it isn't, though. Do you find the same humor in her real estate broker guiding her towards smaller properties initially even though if her business successfully grows the broker will make part of their living helping her move into a bigger ___location?
I get what you mean by saying it's humorous, but I think it's important that nobody easily dismiss what he says because of who he is.
A lot of what he's arguing against is cargo cult programming, and I think it's fair for anyone to argue against that. If your only explanation for why you chose one architecture or design pattern over the others (especially if that option introduces a lot of additional drawbacks) is "because that's just what you do", you need to stop and objectively evaluate what you're doing.
What about flow-based programming (https://en.wikipedia.org/wiki/Flow-based_programming)? How would you argue (or not argue) that differs from microservices? When would you advocate for a monolith first vs building one tool that does one thing and does it well and then pipes its output into another tool?
It is a design technique that can be used for designing microservice-based applications, or any other kind of application that is distributed. So it's a different kind of thing entirely, and the comparison you asked for doesn't quite make sense.
There is a link on that page to the more generic dataflow-based programming. I'd advise anybody to follow it and simply ignore its specializations until they master the generic one. I'd also say that any program should be designed from multiple points of view, so do not stop at dataflow.
They work well when designed well - and for me that's more often designed organically. If they're designed badly, most bugs eventually reduce to trying to do distributed transactions across microservices.
That's my yardstick - if I see poor man's distributed transaction attempts happening across service boundaries I'll start to assume over-engineering / cargo culting. If I see services acting as nodes in a pipeline or a flowchart, preferably with multiple incoming lines, it smells better to me.
not OP, but microservices can cause you to spend a disproportionate amount of time writing glue code between services. Starting with microservices is often a case of implementing separation of concerns at the network layer rather than the functional layer, which is IMO too high in the stack. Microservices should be extracted for performance reasons, not built from the start because they are trendy.
In most companies upper management can change their mind faster than junior developers can write bugs, thus invalidating all the buggy code already written.
Literally laughed out loud. So many times, I've sat on requirements or even projects waiting for the inevitable "change of priorities" making the projects no longer relevant. "Forget X, Y is the new priority", it's very reliable in big dumb corporations.
>> less experienced devs will (on the average) cause more bugs and worse code structure - in the long run this causes a ton of extra work.
In the last five years or so, I've done contract work at several large corporations. In every single one, they didn't care about defects or code structure. They just wanted to ship an application in a very short period of time.
When I asked about the same thing you pointed out, the response was the same, "We don't care about defects once it's released, and we don't care what the code looks like. It needs to work well for our end users, nothing else. Besides, in 6 months, we're going to redesign and rebuild it anyways."
It just seems nobody cares about long term maintenance, and likewise, they don't care much about the code or the amount of defects that code is generating. Pretty maddening when you think about it. Nobody cares about quality anymore.
If they actually redesigned and rebuilt it every 6 months, that would be a reasonable approach. Often they just want to build more stuff on top of what they already have, however.
That may simply mean that only one of these things really is quality:
1) working well for the end user, fulfilling the function of the software
2) bugs and code structure
Code quality can be so bad that it matters. But beyond a basic level, it does not matter. Bad code that works beats great code that doesn't fulfill its function every single time.
I once had this great lesson. I was brought into a team as a TL/Manager. We were making a product that was doing pretty well, in terms of the attention it was getting and so on. 1 week in, I learn that about 70% of the code is tests and 30% is actual code that's used at runtime. 2 weeks in, I learn that everybody's always working on the tests, never on the program (there's even a reasonably good reason for this). 3 weeks in, I explore the program, and at some point I try to run it. Turns out the "main" function is broken: because of crashes (plural) during variable initialization, it never gets to the first line of main ... How long had these bugs existed (it wasn't just one), you ask? Well, over 3 months. None of it showed up in the (VERY extensive) test run.
Lessons learned in that project:
1) test driven development is a great place to start, and a VERY bad way to run projects once they're even 10% into their delivery schedule
2) even the best tests don't check everything
3) even the best tests don't guarantee that even the most basic simple parts of the program work. In fact, tests actually WORK AGAINST THIS.
4) the value of a system test (where the entire program runs, by actually going into main and doing what it's supposed to on a realistic example) is incredible. The criticism TDD developers raise is valid - if it fails you won't necessarily know where it failed - but that isn't as serious as it sounds. Firstly, you'll usually have some idea of where it failed, and secondly, and this is the big one, you'll know it fails. FAR preferable to the other situation. They're also very hard to write. Tough. You need them, far more than you need unit tests on 5-line functions. (A sketch of what I mean follows this list.)
5) you are MUCH better off with a "hero" programmer (bad name; "loose cannon" would be a more apt description) AND someone acting 90% anal about tests, and managing the inevitable conflict, than you are without the loose cannon. Managing that conflict will be a challenge though. And yes, that means that, to the utter dismay of the TDD developer, you'll have to tell him to let in code without tests, or with bad tests, from time to time.
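To make lesson 4 concrete, here is a minimal sketch in Python of such an end-to-end smoke test; `myapp.py` and `sample_input.txt` are hypothetical stand-ins for the real entry point and a realistic input:

    # Minimal end-to-end smoke test: actually run the program's entry
    # point on a realistic input. If initialization crashes before main,
    # this fails loudly -- the signal the unit suite above never produced.
    import subprocess
    import sys

    def test_program_runs_end_to_end():
        # "myapp.py" and "sample_input.txt" are hypothetical placeholders.
        result = subprocess.run(
            [sys.executable, "myapp.py", "sample_input.txt"],
            capture_output=True, text=True, timeout=60,
        )
        assert result.returncode == 0, result.stderr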
And Twitter became a (questionably) successful company with buggy code and the infamous "fail whale". You don't need good code, you sometimes just need shippable code to get by until the company executes on their exit strategy.
Hasn't Twitter racked up 2.5B in losses and has yet to turn a profit (though reportedly close to doing so for the first time ever)? 2.5B in losses for a platform you could build and scale for, what, $10M now. Very questionably successful - some people got rich, so yes, I suppose?
I've made the mistake at one company of thinking that my job was to architect solid software that was sustainable for the long term.
What they really wanted was to have software that was "good enough" and had all of the checkbox features to be attractive to potential acquirers so the investors would stop breathing down their necks.
Twitter "made 2.5B losses". Yeah ... that's bad ... but not from the perspectives of investors.
Jack Dorsey became a billionaire from those losses. The same is probably true for all other early investors. For an investment in Twitter to have done really badly, you'd have had to buy during a "bear" period of about 2 years.
And before you say it, remember that even Google stock dropped 60%, Facebook's 70%. Twitter's drop actually looks pretty reasonable in comparison.
So firstly, I don't think Twitter is doing badly; secondly, I don't think people have given up on Twitter; and thirdly, that seems pretty justified.
As a founder - yes, if I could convince VCs to give me enough money to fund a money-losing venture long enough to take it public and get the market to value my company at a few billion.
As an employee who has a job because the company exists? Yes
As a retail investor - I wouldn't be quite as happy.
People have been asking for years why Twitter is so bloated with employees. A lot of people think that the bloat is partially caused by a poor architecture that they have to maintain.
Contrast that to the small staff that Instagram had before getting acquired. It's still run on a relatively small staff now.
> a 24 year old from a top CS college with 3 years of solid javascript experience will be better than a 40 year old with 15+ years of C#.net experience for today's job market
What about a 36 year old from a top CS college with 6 years of Java experience, 6 years of C#, and 3 years of Javascript? And many more failures and paradigm shifts under their belt, by the way. The core field of undergraduate CS hasn't changed much in the past decade, mostly there are new options for specialization and the bleeding edge remains in graduate programs. If you still think the 24 year old is the superior choice, then even generously you're not looking for skill.
I'm one of the greyhairs. I won't work the insane hours, but I'll still get more done. How? By knowing what to write and how to write it, so I don't have to fumble around trying things for nearly as long. By not writing the bugs that the 20-somethings write, which means I don't have to take the time to try to fix them. By not creating the shoddy designs that others create, which then have to be fixed. By having an idea where to look when bugs do show up, so that they get fixed more quickly. And so on.
It's about producing working code, not about how many hours my butt is in a seat...
What you won't do, however, is work insane hours to get something done because management changed scope or requirements at the last second but kept the deadline. The flavor of agile development that most shops seem to devolve to is one where management gets the part where they can update requirements based on new information, but deadlines stay the same and costs need to constantly be cut.
My personal belief is that our industry tends towards less experienced workers because they put up with shit managers more and can be used to cover for their issues. It's anecdotal only, but I have never heard management decide that their deadlines were improbable, if not impossible, when bugs get through. It's always just explained away with platitudes about code quality that suddenly don't matter when fixing it requires more money or more time.
Call me cynical, but I am always surprised when people sincerely appeal to "agile/scrum values" of team autonomy or whatever to contest management diktats because, like... isn't it totally predictable that that part will be ignored whenever it is inconvenient?
Agile is cargo cult. Customers are not agile to your fuckups. Competitors are not agile to your "scope creep". You know what is currently the most agile company in software? Microsoft. You know why? Because no matter how much it fucked up, its business is so huge and so vast that it did not matter. It makes money. But even Microsoft is starting to feel the need to deliver, as otherwise it is going to become IBM or Xerox.
Agile is like dieting, simple but hard and requiring a lot of discipline. Very very few companies are any good at it but done well, it's a complete transformation.
I think it's more like a cult, where it's supposed to solve all your problems and any problem you identify with it just proves you aren't doing it right.
It is, at best, a different set of trade-offs. What it has become is, like you said, some sort of cult where management gets everything they want and doesn't have to pay any costs.
I've worked at two places where my tech lead at least recognized that if scope or requirements changed then deadlines changed. Now I am at a place where agile is just a word thrown at any problem, and we have things like being asked to integrate third party tools that we don't receive until a week before the deadline. My team stayed till 2 am and was back in at 9 am the next day, and _literally no one_ saw a problem, because it's "agile" and that's what you do when you're agile.
I've experienced "no methodology; just people kind of describing what they want and you make it up as you go along" and Scrum and I prefer the former, but I suppose it's possible that someone out there is doing Agile in a way that is better than the one I learned in a bunch of excruciatingly boring training sessions from consultants.
e: Actually I did Kanban too and that was OK, but that is a lot closer to the "no methodology" thing and anyway doesn't involve you making decisions like "oh, can't start working on that, because the sprint is almost over."
Kanban is very much Agile. Scrum can be implemented in a variously heavy to lightweight way, though FWIW I found having leftover time at the end of a sprint that could only be used for non-business-facing work (e.g. improving the build system, refactoring of code that wasn't immediately being worked on) to be very valuable.
I guess the cynic in me finds it predictable that most Agile shops (at least IME) would be doing Scrum and not something like Kanban, even though it largely works against the supposed core values of Agile.
Yep Kanban is the way to go, focus on flow, small batch sizes, reducing waste of wait times and handovers. Look at cumulative flow and rightsize stories instead of fretting over estimation, points, burndown, and all that mini waterfall that scrum seems to entail. Takes most of the drama and hysteria out of delivery.
Ultimately it makes no difference whether you're doing "story points" or estimates in units of time. All that's going to happen is they'll be converted to units of time in a way you don't quite understand. It's just so naive to believe any of the Scrum promises.
It's why we always do rightsizing and never numerical estimates. You can calculate delivery dates by counting stories * average story time - very fast, very simple, and amusingly at least as accurate as detailed time estimates. Has an interesting side effect of tying scope to delivery. So if you want to reduce delivery time, it's clearly understood as under control of stakeholders and done by dropping stories. If you can't rightsize to a reasonably narrow range (we use 1-5 days) and have to estimate (e.g. sifting stories for prioritisation purposes), t-shirt sizing usually works ok and gives the required info without opening it to abuse.
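For what it's worth, the arithmetic behind "counting stories * average story time" fits in a few lines; a toy sketch in Python with invented numbers:

    # Delivery estimate by counting rightsized stories, not estimating each one.
    remaining_stories = 34
    completed_stories = 20
    elapsed_working_days = 60

    avg_days_per_story = elapsed_working_days / completed_stories   # 3.0
    estimated_days_left = remaining_stories * avg_days_per_story    # 102.0
    print(f"~{estimated_days_left:.0f} working days to done")

    # Want an earlier date? The only knob is scope:
    dropped = 10
    print(f"drop {dropped} stories -> "
          f"~{(remaining_stories - dropped) * avg_days_per_story:.0f} days")

That last part is the "tying scope to delivery" side effect: the stakeholder-controlled variable falls straight out of the formula.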
Not many 20-somethings have the mental strength to insist on the deadline moving if such big change requests come in late.
As a contractor you'd insist that the client pay for that extra work and time.
I totally agree that measuring productivity by the amount of working hours is ridiculous.
However, it is important to realize that you are not competing with the 20-somethings or the ones that are fresh out of college. You are competing with the ones that have already a good amount of experience (let's say 7-10 years) but are still in the beginning of their careers.
If they are minimally talented, they will probably produce code just as good as yours, yet they will probably (a) work for less money and (b) still be single and (c) more dedicated to work and (d) more tolerant of bullshit asks from the employer. Those are the ones that eventually fill positions as "Sr. Engineer" and crowd out the older ones.
I'm not attacking you here (and in fact I agree with you), but from a manager's perspective, I need one guy like you to design it in an architecture modeling tool and 8 young code monkeys to do all the boring work of building and testing it. You can do the code reviews to make sure they don't do stupid shit.
The thought is that a team of greyhairs never ships anything, and a team of youngsters ships garbage. But it's better to ship garbage than nothing, hence the bias.
My experience is that programmers who become "architects" and are only responsible for design and code review tend to go downhill in their skills. It SEEMS like an efficient way to use experienced people, but it is actually an anti-pattern. Their tendency is to develop ideas that sound good but don't work well in practice, and there is no direct way to correct the mistake. (Any time it doesn't work, the tendency is to blame the implementer. And there is usually enough to blame that their own contribution to the problem gets missed. You're less likely to miss the problem when YOU are trying to make the implementation work.)
In fact the problem is sufficiently bad that in interviews it is important to have people actually write code to show that they still can. When you get an "architect" who takes offense at the exercise, that's a non-hire. They might have been good 10 years ago, but they aren't worth hiring now.
This was something I'd sort of noticed, but didn't become conscious of until I worked at Google. There they were very conscious of the phenomenon. Every programmer from the most junior to the most senior (for the record that would be Jeff Dean) writes code. If you're not willing to write code, you're not a hire.
That said, the exercise goes both ways. If I interview with an employer and I discover that they design things up front as UML diagrams, odds are that this won't be a workplace that I want much to do with. If I'm working in a job and they force me into an abstract architecture role like you describe, I'm going to quit and find a better job.
My title is "architect". I use scare quotes because I agree with what you wrote and I dislike being associated with the type of people and the jobs that make it true. I, also, only work at places where the interview is sufficiently technical and there are no separate design-only roles.
It's been surprising to me how much I've had to fight people on this issue in the past.
The thing is, software architecture is kind of the same job as coding. So what the "architects" you describe really do is equivalent to writing code on a piece of paper. That is, doing one of the most mentally demanding jobs imaginable, but without the tooling to protect them from their own confusion. No surprise, then, that it later turns out the architecture doesn't make sense. Without a tool like a compiler to call you on your bullshit, it's too easy to start engaging in fuzzy thinking, and the longer you go without that kind of practical verification, the fuzzier your thinking becomes.
I mean; that’s fine. I wouldn’t want that job either — it’s why I got an MBA, got out of the tech side and now actually have power over hires/fires and product direction. But unless you’re directly bringing revenue in the door, you are a cost item to be cut. As soon as they can find someone cheaper, they will.
Lots of companies are smart enough to figure out that it is cheaper to hire competent people than to accept the boneheaded mistakes that the cheapest warm body would make.
Besides, the ones that don't figure it out are no fun to work at. Who wants to be on the side that's bound to lose in the long run?
I just wish that they did less collateral damage on their way down.
> MBA jobs are all about personal brand and networking
No disagreement there. Just in my experience those jobs are either the first or second to go when things get tough - often because there's an MBA somewhere near the top who recognizes how replaceable all the rest are.
Yeah, but these jobs also often have contracts with severance packages. Executives (VP and above) are typically hired under fixed contracts that make it very expensive to get rid of them. You can demand these things if you have specialized knowledge, credentials, and relationships.
Waiting tables in Hollywood is never the reason people take up acting, and yet so it goes.
Anecdotally, I know several full time top 10 grads who aren't exactly on the fast track to the c-suite, and I assume this is even more true for the broader pool of MBAs.
Us older folks won't build things we know don't work against synthetic deadlines, then work overtime to fix what we knew wasn't going to work in the first place, while taking the blame politically.
Engineering has in my lifetime become "unwinnable." I'm either not "a team player" or "being negative" for planning for reasonable failure scenarios. Then politically I still take the heat when they happen.
This is not a quixotic aspect of engineering, it is a design feature at bad employers. "Heads I win, tails you lose"
Engineers aren't blameless, here. In fact they're at least as much the problem as "bad employers". Constantly seeking to use the latest shiny tech toy whether it's appropriate or not (e.g. "Big Data" tools to run stuff on a few GB of data) is a thing, especially in the Bay Area. It leads to a bunch of people with a small amount of shallow experience in a large number of soon-to-be-deprecated technologies. Few want to do the hard work of actual engineering.
Also, that the product is 10% better/more stable/whatever frequently does not translate into even a 10% increase in sales. So it's not worth it (from the business' perspective) to invest in that quality. Pick some other arbitrary point (e.g. 20% better/20% boost in sales, etc.), right down to some threshold whereby the company simply doesn't even have a product that can be demonstrated.
> Also, that the product is 10% better/more stable/whatever frequently does not translate into even a 10% increase in sales. So it's not worth it (from the business' perspective) to invest in that quality.
Here is the core of the problem, and it's more of an incompatibility of goals than errors of one of the parties. Businesses want to make money. They usually don't give a flying fuck about what their product is or does beyond the point it gets sold. Does it waste users' time and piss them off? They paid us, which means they value it, so everything is ok.
Engineers, on the other hand, tend to care about what value the product actually provides to their users. So they would rather invest effort in making the product better for the users, instead of making it better for sales team to sell.
I don't really see the way out of this conflict. The engineers are right, but the businesses are right too - it's the business that pays and suffers (some of) the consequences, so it's the business that gets to tell engineers what to do, and not the other way around. If the CEO is making a stupid decision, that's on the CEO.
The way I see it, actually useful software gets done outside or on the side of a business, not within it.
Good point about incompatible goals. One way out is to align the value as perceived by users with the value as perceived by engineers. If users aren't willing to pay for reliability/security/etc. then it's rational for management to devalue those properties.
I'm not optimistic that this will happen. Anecdotally, I expound to acquaintances the risks of unprotected PII or questionably-secured home security apps, but convenience seems to outweigh such concerns.
Regulation is one approach to solve asymmetric information transactions. I don't see how that could be applied in general to software quality, but it could target the cases that have severe or widespread effects.
Ok, but his specific complaint was that planning for reasonable failure scenarios gets you labeled as not "a team player" or as "being negative". And I have seen this happen; it is a real problem. Someone else refactoring badly does not cancel that systemic problem out.
You hired the wrong grey hairs; most new people have never shipped anything. A person with 10-20 years of experience has probably shipped a TON of stuff. This is the key thing to look for when hiring: people who have shipped things and like to ship things.
On the other hand, there are tons of grey hairs who never shipped anything and are as useless as new grads.
well I think this is certainly the problem--ie, now "garbage" looks justified
but those aren't really the two alternatives, are they--ie, crap code or nothing?
it might be in the very short term (ie, by this friday)
but over any other span of time, the choice is more like:
ship crap code in 30 days, followed by 50% of the team's resources spent bug fixing (which can often be cleverly disguised as new features) for the next 90 days
Sadly, it's hard to convince people that doing things right is better than doing them wrong, but quicker.
That customer needs this code on the 1st, that is a hard and fast deadline! So shit is churned out, and handed over on the 1st. Then, three or four 1sts later, the customer finally gets their shit together and deploys it, and, voila, it is shit. And the cycle continues, as the scramble ensues to patch the shit by the 15th with yet more shit. And you end up like the little Dutch boy at the dike, except instead of fingers you're using hotfixes made of excrement.
> You can do the code reviews to make sure they don't do stupid shit.
This does not work. Your code monkeys will get demotivated fast and won't be able to learn anyway. The one architect guy will become increasingly out of touch, and his code reviews will become pointless red tape fast.
> The thought is that a team of greyhairs never ships anything, and a team of youngsters ships garbage. But it's better to ship garbage than nothing, hence the bias.
Why would greyhairs never ship anything? I don't get it. My experience is that young people need more supervision to not get demotivated and to actually finish things. (on average)
I seem to recall an episode of the A16Z podcast where Adrian Cockcroft of Netflix explained they were top heavy on senior engineers when they began moving to microservices. Didn’t seem to slow them down but I could be misremembering.
I don't know what that really means though. People who are smart, hard working, and get the right jobs can get a senior title in 5-6 years at the big companies.
I know someone hired at Netflix who is belligerent, inexperienced, and lazy. He is a senior engineer there. Titles don’t really matter much at the end of the day.
Indeed! Over time I've become more and more aware (and proud) of the things I left out. As a young programmer, I would be far more proud of the things I included.
The shipped project is one branch of an extremely large tree of possibilities which a younger, less experienced programmer would have taken a very long time to explore and discard (which I know for certain because I was that programmer).
It's weird how that works... I try to avoid unneeded complexity as much as possible for as long as possible... and when I do need complexity, I try to make it as easy to use as possible, so that it's less in your face on reuse.
I'm consistently amazed at how much of a pain in the ass things tend to get when people try to build solutions in their early 20's, vs 30's and now in my 40's. Not to mention how much my viewpoint has changed in the past 20+ years in software.
I'm pretty happy when I can remove a bunch of dead/unused code trees, commented out swaths of crap, and refactor portions of a codebase into 1/5 the size.
I rewrote a 900 line class today by writing one generic function that encapsulated a pile of common logic that the monstrosity had copied and pasted with slight changes over, and over, and over, and over, ad nauseum, and wound up with less than 100 lines of code. What a glorious day.
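In miniature, that kind of refactor looks something like this sketch (the CSV-export ___domain is invented for illustration, not the actual codebase):

    # Before: N near-identical copy-pasted methods...
    # def export_users_csv(self): ...open file, write header, loop, close...
    # def export_orders_csv(self): ...same again, with two lines changed...

    # After: one generic function parameterized on what actually varied.
    import csv

    def export_csv(path, header, rows, row_fn):
        """Write rows to path; row_fn maps a record to a list of cells."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            for r in rows:
                writer.writerow(row_fn(r))

    # Each old 40-line method collapses to a one-line call:
    users = [{"name": "Ada", "email": "ada@example.com"}]
    export_csv("users.csv", ["name", "email"], users,
               lambda u: [u["name"], u["email"]])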
Indeed. When you see some of the waste that goes on - whole projects going in the wrong direction and having to be largely reworked. And the people in charge will never admit they made an error in how they set up teams, and that it's the 'greyhairs' getting involved and rescuing projects over and over again.
If we assume this is all 100% true...there's still no way to be sure during an interview.
There's plenty of shitty older devs. And if they're old enough they're a protected class, which will make things very awkward if you hit a shitty one. And with the newer generations, there are plenty of 18 year olds who will run circles around more experienced devs, never mind mid-20s ones. There are of course plenty of crappy ones too. So all things equal, but with a different salary, which one do you pick if you're a naive hiring manager?
Fortunately I now work for a company where this is a non-issue. While we definitely hire a ton of new grads, we'll never say no to older/more senior candidates (and sure could use more). Which is good, because I'm getting dangerously close to my 40s.
maybe they discriminate because you call the designs of other people shoddy.
I'm only being half snarky. Large chunks of these threads end up as people bashing young people for being idiots for various reasons.
I do believe that discrimination is an issue, but you also just don't need that many seniors. If you have a couple to make sure designs are reasonable, catch mistakes, and mentor the junior people then that seems pretty sufficient.
If you have 5 seniors and a junior you get stuff done, but the junior person quits because they don't see any point in sticking around on a team where they aren't being challenged because someone more senior always gets to do it. Someone else will give them the challenge they want. I'm moving on right now exactly because of this.
I think it depends on the timescale. In the short term, a 'bad' hacky solution is usually quicker than a 'good' one, but it will likely slow you down over the medium to long term.
If the timing of that bad hacky solution being shipped is critical, then it's actually the perfect solution and the right work was done quickly, which is great. If it's not, and is rather the first piece of a new feature intended to last for a while, then the work on that feature as a whole may be slow as a result, so then it looks more like bad and slow work.
I think with experience comes the ability to choose the appropriate approach for the problem at hand, whereas inexperience will usually lead to the short term quickest easiest route being chosen every time by default.
Another subtlety is that, typically, more experienced people will actually deliver the 'good' solution faster, which results in a double win if that is indeed the correct course of action.
I find writing a quick hacky solution and then fixing it over time (IF it's worth it; alternatively a rewrite might happen) is quicker, because I don't usually write what I want/need on the first try.
If I want to end up with a great end result, I find this is the way I have to work.
These discussions tend to assume in-house development though.
I've spent far too much time around organisations where "more time with bums on seats" = "direct correlation to billing more", which = "better work" if you ask the right people. That leads to an unfortunately different scenario.
And that's how you get code from company A that performs sanity checks on parts of a file that are outside the scope of the spec the data is supposed to meet, failing content that meets the spec but has had all superfluous information discarded by a custom compression algorithm designed by company B to improve performance on a network with insane latency.
I don't think they mean slow development. Rather, overall rate - all other things equal, someone who insists on no more than a 40 hour week will accomplish less than someone willing to work 60-80 based on whatever metric you use (KLOC, story points, etc)
I've only ever heard one person ever mention KLOCs, and they were in their 60s nearly a decade ago. They were explaining how people used it as a measurement of work done in the '80s, and what a terrible idea it was to measure throughput that way.
You're the second person. When was the last time you encountered anyone using KLOCs (thousands of lines of code) as a measurement of work done? Are they working on mainframe codebases?
> You're the second person. When was the last time you encountered anyone using KLOCs (thousands of lines of code) as a measurement of work done? Are they working on mainframe codebases?
The DOD and their "experts" love measuring software projects this way. They even take the cost/LOC ratio as a measure of value. I was once semi-seriously chastised for committing a change with net negative LOC because it broke the formula and implied negative value.
Me too (also DoD contracting). Cleaned up a module full of amateurish over-complicated code resulting in a negative LOC count on the boss's spreadsheet. Oh, I heard about it.
I like to use LOCs to see at a glance the size of a project. More complex business logic usually takes more LOCs to write down. And I bet there is a strong correlation between LOCs and hours worked on a project, which would make LOCs a good approximation of work done. Actually, I should research it. It shouldn't be too hard to find all the tickets for projects and compare the time logged to the LOCs.
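If anyone wants to run that experiment, the correlation itself is only a few lines; a sketch in Python with made-up numbers (you'd substitute data pulled from your ticket tracker):

    # Pearson correlation between project size (LOC) and hours logged.
    # Data invented for illustration -- substitute your tracker's export.
    from statistics import correlation  # Python 3.10+

    loc   = [1200, 4500, 800, 15000, 6200]
    hours = [ 150,  520,  90,  1900,  700]

    print(f"r = {correlation(loc, hours):.2f}")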
1) There is no fixed correlation between LOCs and business logic complexity.
2) There is actually an inverse correlation between LOCs and amount of work done. When I'm most productive I actually decrease the LOCs.
1) You're telling me that if you take some fixed business logic and add some exceptions to it (make it more complex), you won't need more LOCs to address that added complexity? I don't believe you.
2) You can't be productive having 0 LOCs. So 0 LOCs = 0 productivity, and n>0 LOCs = (presumably) some productivity. I would say we have a trend.
1) I said that there is no fixed correlation.
2) I'm most productive when my LOCs are negative, deleting duplicate code or finding a better approach for a problem.
1) so you say there is a (positive) correlation, but it is not "fixed". Sorry, I don't understand what this means.
2) I know what you mean. I do it often myself and I know it improves code quality. It's not my intention to measure "work done" in LOCs, which, as you say, could be contradictory. I propose to measure the size of a project in LOCs. Suppose, you have two projects. And you have all the time you want to reduce their LOCs (to improve code). After you are done, both projects will still have some LOCs. My point is that the project, which has more LOCs, after you had your fun with it, is the "bigger"/more complicated project.
1) Imagine some project written using CTRL+C, CTRL+V everywhere; another, much bigger project written with a good architecture, always following the DRY principle; and another one, bigger still, written in a much more compact language like F# or Haskell. The first project, by your criteria, is the biggest and most complex, but in reality it is just a bloated mess.
2) It never works like that in reality. No one is paid to spend time only improving the code; we are paid to provide value to the users. The project with the good architecture is easier to work with, and it will be easier to add features and continue to improve the code. The bloated mess will be much more difficult to work with, and you won't have the time to improve it apart from picking the low-hanging fruit.
I think LOC probably works OK as a very crude metric until you start to measure it. Once you start to measure it, all hell breaks loose. If you measure me on lines of code, well, I can get you lots of lines of code. No problem at all...
I seem to recall a quote from some computer scientist, possibly Knuth but I'm not sure, that went something like 'You optimize what you measure.' So if you measure LOC, you get LOC... and far more of them than is in any way 'necessary.' Really a horrendous metric.
There is no universal metric to gauge productivity of software engineers.
The added value a software engineer produces with his work in a single unit of time has a huge variance. Furthermore, it can range from negative to positive.
I.e., it does not matter how long one works. The only thing a software engineer's long work hours communicate is that they work long hours.
If a bridge falls down and people die, the suits go to prison for criminal negligence for not hiring experienced people and giving them the power in the organization to set the schedules. If a system that involves software fails and people die, the courts throw up their hands and say 'nobody knows how computers work, and there are no standards we could claim they violated or ignored' and the company usually doesn't even take much of a PR hit. It should be obvious to everyone that this situation will not stand forever and it will be very bad when it stops... but everyone has motivations to resist the establishment of standards so they'll continue to bicker about it and do nothing until legislators force the issue in the most hamfisted and horrendous way possible.
Mind you, people are still happy to let bridges fall, and infrastructure in general, crumble. “That’s for the next guy to fix” seems to be the order of the day.
Crudely put, for project X you have the following three constraints: cheap, on time, and good quality. You get to pick two of the three. It would seem to me most companies are opting for cheap, on-time delivery, and thus the quality is simply absent.
I am not sure it is cheaper in the end. You might pay a lower hourly rate, but because of bad work, rework, miscommunication, and delays in communication, you will pay more in the end. That has been my experience with, for example, Chinese and Indian development teams abroad. There is a reason why agile prefers co-located teams.
While I agree with you that it isn't cheaper in the end, people think short term. Lots of companies are trying to pinch pennies at every corner. A great example: why does extra sauce cost more money? It pisses most people off and a lot of people turn it down. But they still probably see slightly higher margins at the end of the year.
Nope. I can hire out-of-school programmers for $70k/year. I can hire grey beards for $200k/year. Not a single grey beard is worth 3 of those out-of-school programmers.
This is also why grey beards suck for P&L, so they'd better be willing to work long hours.