
In Germany it just means it's a bad place to have a career in. Thankfully, most HR departments will happily advertise it in the job description, making it easy to dodge.


No, what he says is "when we write software there are bugs, so we should write less software".


As opposed to "we should fix the bugs we find".


I'm sure there is plenty of movement visible on all kinds of bridges. We also know that all bridges will fail eventually, so the real value is having a more or less exact prediction of _when_ they will fail, not a "we totally saw that coming" after the fact.


> there is plenty of movement visible on all kinds of bridges

Sure, but "the movements in the cluster began to show an atypical pattern with diverging paths" does highlight that even though movements might be common for all bridges, the particular pattern they discovered for this bridge was uncommon.

Agreed that the maximum value one could get from a process like that would be "When will this bridge fail?", but getting a "This bridge looks like it will fail soon" is still better than nothing.

Besides, could manual, human-driven inspections even give a "more or less exact prediction of when a bridge fails"? If not, then an at least somewhat automated way of getting the same answer would be cheaper for local governments to run, and for the cases where it makes sense, they could send the human bridge inspectors there.


> a "we totally saw that coming" after the fact.

That's not the case, though. This isn't an "I told you so," like the 2022 Pittsburgh bridge collapse. (https://www.ntsb.gov/news/press-releases/Pages/NR20240221.as...)

The study ended 7 days before the collapse, and it only concluded that an on-site inspection was needed:

> Its findings revealed there were significant indicators present prior to the incident to suggest inspection/consultation with on-site engineers would have been necessary.


This is a random company selling everything from blockchain to big data analysis performing a retrospective analysis of satellite data over a certain period prior to the collapse. In other words it’s a puff piece to promote the capabilities of what sounds like a very dubious firm, not a report of a predictive study.


The SQL part as well, it should be pronounced "squeal".


I can see a lot of hate in the comments about the original Spring Project / Java language. You are absolutely right, it's the worst. Please go away, nothing to see there =)

On a different note though, when I was a Junior, building my first few projects, I didn't get it either. You'll get it when you make it to true Senior level :) Just last year I took over a project at work where 2-4 engineers (including "Senior"s, especially the one who came up with the infrastructure) had been very busy reinventing their own framework/platform instead of building the actual product, for about 3 years. They had their own protocols and their own transaction management and two webservers (because they were trying to use Vert.x websockets in addition to HTTP but, due to a skill issue, couldn't serve both from a single webserver) etc etc. What they did not have was a working product. I got buy-in from management up to C level to rewrite that burning garbage dump. Now, about a year later, after management moved those people off and me in, the product is actually live with all the missing features, has about 50% fewer LoC than at its peak, and no longer exists as a distributed stateful microservice agglomeration. Instead it's a Spring Boot monolith. Including Spring Security. It was the first time any of my software had a security bug found by external researchers, and only because I did not add Spring Security immediately alongside Spring Boot, thinking we could get along with the homegrown auth code for a while longer.

Anyway, serious people who have built some serious (web) products will appreciate battle-proven tech that has integrations with just about any other relevant software on the planet and implements production-ready patterns for you to use immediately.

If you don't, may I suggest you ask yourself: maybe you've just been building toy projects, or solving too many leetcode problems? Maybe you've only worked on projects meant for your resume or promotion dossier, rather than actually putting a product live in front of millions of users?

I applaud the Rust ecosystem taking one of the best pages from the Java ecosystem's book, although the focus on "lightweight" does not make me optimistic that the author has truly understood what value Spring delivers, and how.


Heh, my personal complaint about Spring comes from the other side of this. I came into a company where Spring Boot was already deeply entrenched, in a Senior DevOps/Sysadmin type of role. I spent a significant amount of time untangling all of the voodoo reflection magic that the teams who had created the services couldn't seem to debug on their own.

To be fair to Spring Boot, the developers in question didn't have any real operations mentality, which complicated things. A service would crash/500 and would have a 0-byte log file documenting everything that led to the crash. It was pretty surprising, though, to discover that a lot of the visibility things I'd come to expect from the large variety of other non-Java frameworks didn't seem to be included by default (e.g. request logging/exception logging) and had to be turned on explicitly.


> Heh, my personal complaint about Spring comes from the other side of this.

What did your example have to do with Spring? It sounds like it'd have been the same if replaced with something else. It was the developers...


Just like the original comment praising Spring, too. It's always the developers. Whether you use one framework or another or nothing.


Mostly that the whole annotation-based dependency injection part can be a mess to debug and that it seemed to (at the time, unsure about the present) have very odd defaults around logging.

The annotation-based DI seemed to do a really good job of turning what should have been a compile-time error into a runtime exception instead.
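
To make that concrete, here's a hypothetical sketch (the interface and class names are made up) of how a wiring mistake the compiler can't see only surfaces at runtime, and with @Lazy possibly not until the first live request:

    import org.springframework.context.annotation.Lazy;
    import org.springframework.stereotype.Service;

    // Nothing in the application implements PaymentGateway, yet this compiles fine.
    interface PaymentGateway {
        void charge(long orderId);
    }

    @Service
    class CheckoutService {
        private final PaymentGateway gateway;

        // Without @Lazy, the missing bean at least fails fast when the context starts
        // (UnsatisfiedDependencyException). With @Lazy, Spring injects a resolution
        // proxy and the problem only shows up on the first call below.
        CheckoutService(@Lazy PaymentGateway gateway) {
            this.gateway = gateway;
        }

        void checkout(long orderId) {
            gateway.charge(orderId); // NoSuchBeanDefinitionException here, at request time
        }
    }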


> Mostly that the whole annotation-based dependency injection part

Right, but DI isn't Spring specific, so point still holds.

Within Java there are plenty of other annotation frameworks...

In other languages annotations are used too, and sometimes even worse behavior exists...

And yet for reasons I'd like to know people aren't blaming those (at least not in the same capacity). That's the crux of the issue.


I'm explaining why I have had a bad experience with Spring. Spring is the only framework I've used that does weird stuff with annotation-based DI and can silently fail in production by default due to those annotations not being resolved correctly at runtime (heck, even if they can't be resolved at compile-time, please make them fail at startup instead of later!).

I've had other issues with other non-Java frameworks too, but I'm not giving a comprehensive write-up of the pros and cons of every web framework I've ever used here... just pointing out the pain points I had with Spring.

I also didn't mention Lombok because that doesn't seem like it's necessarily part of Spring by default, but wow did that also prove to be another source of painful bugs.


Spent a good day debugging, only to learn that some (most?) annotations don't work when the method is called from inside the same class (the call never goes through the proxy).
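
For anyone who hasn't hit this: with Spring's default proxy-based AOP, a call from one method to another inside the same bean never passes through the proxy, so the inner method's annotations are simply ignored. A minimal sketch (class name is made up; AspectJ weaving behaves differently):

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    class ReportService {

        @Transactional
        public void rebuildAll() {
            // Self-invocation: this call goes straight to "this", not through the
            // Spring proxy, so the REQUIRES_NEW below has no effect.
            rebuildOne(42L);
        }

        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void rebuildOne(long id) {
            // runs in the caller's transaction, not a new one
        }
    }

The usual workarounds are moving the inner method into a separate bean, or injecting the bean into itself so the call goes through the proxy.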


Great comment! I've been developing web applications for almost 20 years and have worked with several frameworks across PHP, Python, Ruby, Elixir and Java.

And after all these years I have settled on Spring Boot, because it is mature, stable, fast and well documented. There are other frameworks that claim they are faster, but I've spent 10x more time just getting my stuff to work. I really do not have time for that, as my focus is delivering business value. One Spring monolith scales really well on an Epyc server.


Nobody here was saying you should always write your own framework, which is what your engineers were trying to do. Vert.x is a low-level framework when compared to Spring, so it doesn't provide much organization for your code. This is betrayed by the fact that there are higher-level frameworks written on top of Vert.x, like Quarkus.

What you are describing above is a skill issue. If the developers are bad, they'll be bad with any framework. As proof, I can point to dozens of bloated, slow, buggy and hard-to-maintain Spring projects I've seen during my career. I can also list a bunch of successful projects we've built with Vert.x and a custom framework. At the same time, we also have a bunch of successful Spring projects. This has more to do with the skills of the team involved than with the framework selection.

But when it comes to selecting your framework, there are good reasons we avoid Spring. Spring shines when it comes to having a bunch of built-in modules for everything: authentication, configuration, dependency injection, monitoring, various template engines and databases - whatever you want. The advantage is that you don't need to spend time investigating and arguing about dependencies - they're all right there. It's also easy to structure your projects and to hire developers who claim to know Spring (whether they actually understand how Spring works is another story).

But Spring has a lot of issues too:

- Performance: This is the main reason we avoid it for most projects. Spring is slow. This appears to be the main reason OP has created a Rust version of the Spring framework. Of course, Rust has less overhead than Java, but there are many Java frameworks that are faster than Spring. Spring is just proudly underoptimized. Almost anything else I could say about Spring or you could say about other frameworks may be subjective or anecdotal, but speed is easy to quantify. If you look at the TechEmpower benchmarks, there is an order-of-magnitude difference between Spring and lightweight frameworks like Vert.x and Jooby, and even some optimized heavyweight frameworks like Quarkus[1]. If you care about performance you just cannot use Spring.

- Inscrutable Magic: Spring has a lot of annotation-based magic. A lot of Spring enthusiasts like it, since it gets stuff done and reduces boilerplate. But it also makes your framework behavior hard to understand and scrutinize. If you want to know what an annotation does, you can't just click "go to definition" in your editor and look at its source code. You need to find out where all the possible annotation processors are and then read all the relevant code until you find how that particular annotation is processed into generated code or wrapper classes or whatever.

- Security: I beg to differ here. Spring Security can save you from the bugs that you would have if you wrote your own authentication code, but the code that Spring itself brings to the table does not have a very good track record. The sheer amount of CVEs found in Spring[2] is staggering. A lot of it is due to popularity and exposure, but it is also due to Spring's desire to include everything under the sun and do as much as possible behind the scenes, with automagic. A great example of this approach is how Spring Actuator used to expose a lot of sensitive endpoints (including a full heapdump endpoint) by default, on a standard path. This required you to add the actuator module, but a lot of servers included it because it is the standard way to enable health checks in Spring, and almost every cloud infrastructure nowadays requires health checks. The end result is that if you wanted your Spring Boot 1.5 web server to be secure, you had to explicitly disable these endpoints[3]. Even with modern Spring versions, the sensitive "/actuator/info" endpoint is still exposed by default.

[1] https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...

[2] https://spring.io/security

[3] https://docs.stackhawk.com/vulnerabilities/40042/
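
On the Actuator point, for what it's worth: with current Spring Boot versions you can (and should) pin down exactly what gets exposed. A minimal sketch of the relevant application.properties keys (Spring Boot 2+; double-check the exact names against the docs for your version):

    # Only expose the health endpoint over HTTP; heapdump, env, mappings, etc. stay unexposed.
    management.endpoints.web.exposure.include=health

    # Serve the actuator on a separate management port that is not reachable from the internet.
    management.server.port=8081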


I think you're leaving something out.

- The affinity of 'traditional' Java guys for making code seem so complicated and wrapping it in so many layers.

The JVM is an excellent platform, bar none. But if you work on a Java codebase with the traditional guys, I feel sorry for you. There have to be so many "best practices": Impl, Factory, Wrapper, what have you. What was supposed to be a simple endpoint and then a simple function for business logic is now wrapped in 5 or so classes.

Then you add the magic of "Spring", and productivity and everything else slow down to molasses.

At the end of the day, if you're working on web services or internet stuff, you take in JSON, transform JSON, spit out JSON. That's been my experience, and that's not complicated to do. Maybe other people's experiences differ.


The performance argument I cannot subscribe to. We did extensive load testing in the past on different products, and the bottleneck in the end was always the DB (or more recently, getting screwed by OpenSSL v3). Sure Spring might not be the fastest in its class, but it's a cheap problem to solve, just fire up more or bigger VMs (as long as you've kept it stateless).

The security concerns about actuator I cannot subscribe to either. Why are your endpoints exposed to the outside by default? Why is the management port reachable from the outside? Why are devs not reading the docs and only enabling the endpoints they need?

The magic annotations part can definitely be a problem. I'd recommend staying away from it as much as possible and keeping things simple. Only use it as a last resort, but boy can it be powerful. Need your own request scope that is bound to transactional commit/rollback? You can have that, for example to only send out side effects when the transaction succeeds, or to build transparent caching solutions on request scope.
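
The "side effects only when the transaction succeeds" part doesn't even need a custom scope; stock Spring events can do it. A rough sketch (OrderPlaced and the service names are made up):

    import org.springframework.context.ApplicationEventPublisher;
    import org.springframework.stereotype.Component;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;
    import org.springframework.transaction.event.TransactionPhase;
    import org.springframework.transaction.event.TransactionalEventListener;

    record OrderPlaced(long orderId) {}

    @Service
    class OrderService {
        private final ApplicationEventPublisher events;

        OrderService(ApplicationEventPublisher events) {
            this.events = events;
        }

        @Transactional
        public void placeOrder(long orderId) {
            // ... persist the order within the transaction ...
            // The event is published inside the transaction, but the listener below
            // only runs if (and after) the transaction commits.
            events.publishEvent(new OrderPlaced(orderId));
        }
    }

    @Component
    class OrderNotifier {
        // AFTER_COMMIT is also the default phase; a rollback simply drops the event.
        @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
        public void on(OrderPlaced event) {
            // send the email / enqueue the message here
        }
    }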


Spring’s security record prior to 6 was pretty interesting. They had a bunch of RCE-causing vulns they wouldn't even recognise as vulns (e.g. due to Java deserialisation issues), which wasn't a great look for something in use at many large companies.

To be fair, Spring devs did say “don’t use it like that”, but devs of all skill levels use Spring, so that’s not great advice.

It is much better now, but of course the latest thing is that it’s now owned by Broadcom. So if you as a contractor are foisting Spring upon your clients, I hope you’re ready with a security/fixes strategy, because don’t expect Broadcom to support old versions of Spring forever. Or else you could just pay Broadcom $$$.

Good time to mention: users of open source Spring 5, it goes end of life this month. Hope you’re ready!


> Inscrutable Magic ... need to find out where all the possible annotation processors are

or you read the documentation.


I don't think this is a good take.

Magic doesn't belong in software, and a well-designed library or framework doesn't need it to succinctly express the program logic in a way that isn't boilerplaty and bloated, not even in Java.


I may be as old and cynical as House MD, but if there is anything decades in the business have taught me, it's that customers and documentation (esp. code comments) always lie.


Sounds like you ran into a bad "not invented here" culture, which can seriously slow down a product. Even worse, they are often so passionate about the tech that they can scare management into agreement with their priorities. Kudos to you for having the backbone to stand up and fix it!! Also: I am no fan of Spring ;) I just have PTSD from the early XML config days. If it were already in place though, I'd roll with it.


> You'll get it when you make it to true Senior level :)

This and the following lines have so many opinions presented as truisms, it’s difficult to take the poster seriously.

2-4 “senior” devs were spending time on tech-only features without delivering actual end-user / business features. Ookay. Seems like a failure of engineering management, but let’s press forward.

2-4 engineers replaced with 1 “true Senior” engineer, who rewrote the microservices nightmare into a monolith powered by Spring. And he had C Suite backing for this. All’s well now.

I mean, that’s a great result. But it feels like there’s also some great people/motivation backstory we’re not getting. Also, maybe lessons from a one-person monolith don’t apply everywhere, even if that monolith serves millions of users?


You don't have to take me seriously to derive value from the post. It's obviously anecdotal; it just shares a story. I skipped some of the details to focus on the topic of Spring, but I think I can add some, as you have rightly spotted that they are missing: The major problem was imho indeed engineering management of that specific team, which was also swapped out at the same time I was put on the team (which was before we got the buy-in to rewrite it). There were actually at least two dev generations of the team building the software before ours. The first one caused the most damage with its demented infrastructure decisions, while the second iteration was not able to successfully challenge that, even though they already had the right ideas. New engineering management was awesome in that they supported devs with most of our radical ideas, but also pulled in resources from other departments to help deal with the largest pain points, such as adding Spring Boot quite early in the mending journey, with the heavy lifting being done by an expert from another team temporarily joining us.

Forgive my judgy wording in the direction of seniority... our org has (had?) an issue where complexity is rewarded over simplicity. Some senior people here, all they could do was build something so clusterfucked no one can understand it. Funnily enough, this project was originally launched to replace a legacy system no one was able (or willing) to maintain.


That team is probably telling a story about how someone convinced management to ditch their best-in-class product that would have conquered market share, for a lame cookie-cutter version that didn't even validate JWT tokens or something at launch, and now they're one of the bottom also-rans in the market.


Hehe, the original pitch for this project contained lots of mentions of AI =)


Do the downgrade. And please report back how you like it.


There is plenty of money for bridges in this country if we would stop taking it away from productive citizens who create value and giving it to unproductive citizens, which just further reduces the willingness to pick up a job. Working full-time vs living on benefits roughly evens out here in Germany, so I fully understand fellow citizens who stop working. Not only that, it justifies giant government departments that deal with determining each citizen's needs and generally administering this redistribution, which adds absolutely zero value. We need to fix these incentives.


> Working full-time vs living on benefits roughly evens out here in Germany, so I fully understand fellow citizens who stop working.

Source?


Minimum wage is around 1300 EUR per month. If you are on benefits you get maybe 500 EUR and the government pays your rent. Plus you get bonuses for kids, so if you are a single mom with several kids (which increase your benefits) and no qualifications, working is not economical.

Not sure if that's a good or a bad thing; I imagine if you do not work, you can spend more time with the kids.


> Plus you get bonuses for kids

Kids famously come with costs attached as well.


You are as poor as if you were working on minimum wage, but you have more time since you don't need to leave for work.


What costs? Healthcare, schooling (including kindergarten) and transportation are free for kids. The only thing you need to do is feed them and buy them some clothes, and the government already pays you money for that.


I think Germany has a rather complicated relationship with the whole "Arbeit macht frei" thing.


Exactly. The ones benefiting most in German society from people in the economy are „full-time“ politicians and „Beamte“ enjoying the world's best healthcare, above-average pay with little to no performance review, and a whole separate, extremely nice pension system.

Source: https://en.m.wikipedia.org/wiki/Beamter


Idk, to me this looks like a data modeling issue. There should be a team table that contains team-specific data such as the skill level; then these two queries wouldn't run into any problems.


Generally the best way to surmount a hard problem is to solve a different one.


> given what constraint 2 says the level is actually a team level, not a player level.

Then bad, unnormalized data design is the problem here. If that is a team level, it should be in the team table, not the player table.
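
For illustration, a sketch of the normalization being suggested (column names are guesses based on the examples elsewhere in this thread), with the skill level stored on the team rather than on each player:

    -- Hypothetical normalized schema: level is a property of the team.
    CREATE TABLE team (
        name  TEXT PRIMARY KEY,
        level TEXT NOT NULL
    );

    CREATE TABLE player (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        team  TEXT REFERENCES team (name)  -- NULL for free agents
    );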


> Then it's bad unnormalized data design that is the problem here.

That the table is not normalised makes the example somewhat confusing but it does not actually affect the issue being demonstrated. And denormalisation is a fact of data modelling.


Why not simply use an example that isn't confusing? Many developers, and especially academics, love wasting effort and time on solving issues they made up but that have no real-life examples. When an example has a trivial alternative solution, it does not help me as a reader to distinguish whether this is a real problem or something made up.


Because every example you can think of is confusing, because understanding how concurrent transactions should operate is confusing.

Almost all problems like this can be solved by improving an application's data model, but here's the thing: lots of applications have dodgy data models, either due to time constraints, or because the application evolved over time and the data model didn't. So these are all real-world problems and examples, but creating a "simple" problem to demonstrate the issue almost certainly means also creating an example where other "obvious" solutions exist.

Just because you can’t imagine how this simple example might represent a much more complex “real world” problem, doesn’t mean it doesn’t exist.


> If that is a team level, it should be in the team table, not the player table.

It's all contrived, of course, but the reason I would consider skill level to be a player attribute rather than a team attribute is that there could be free agents with a skill level but no team:

INSERT INTO player VALUES (10, 'Pavlo', 'AAA', NULL);

Then with enough free agents, you could imagine building a new team out of free agents that are all at the same skill level:

UPDATE player SET team = 'Otters' WHERE level = 'AAA' AND team IS NULL ORDER BY id LIMIT 3;


Nature does not refactor. There's the laryngeal nerve somewhere in the neck of mammals that has to go around a blood vessel. In giraffes, this runs all the way down the neck, around said blood vessel, then back up, all that just to connect two places in the upper neck that are inches apart. Evolution writes the ultimate spaghetti code.

More info: https://berto-meister.blogspot.com/2011/08/unintelligent-des...


Early developmental biology (by “early” I mean early pregnancy) is the shit you Do Not Touch. Each step of development is highly dependent on the previous steps (assuming this bit’s here, that bit’s there, etc.). If you make a change it’s extremely likely to be fatal or have negative consequences like birth defects, and the likelihood of a bad outcome climbs steeply as you move the change earlier and earlier in the process. That’s why pretty much every animal - humans, tigers, whales, lizards - looks the same when they’re a teeny tiny little fetus.

My example is testicles! Testicles begin in the same place as ovaries, which made a lot of sense back when we were coldblooded! Having a pair of balls dangling in a soft pouch between your legs isn’t a strategy you adopt if you can help it. In the alternate timeline where lizard people rose to dominate the earth instead of humans they don’t have “Ouch, I’ve been hit in the balls!” humor.

But the testicles form next to the kidneys early on in the developmental process (and as we’ve established, you don’t fuck with those stages), so they have to migrate south VERY late in the development! Because they SCHLOOP on down after everything else is pretty much in place, it weakens the abdominal wall, which is why men are more prone to certain types of hernias than women are.


Nature does not refactor, except when it does, as in the case of going from 48 to 46 chromosomes (great apes -> humans). Nice write-up and slightly novel hypothesis at: https://molecularcytogenetics.biomedcentral.com/articles/10....


https://youtu.be/cO1a1Ek-HD0?si=H6X6ICZQLbq_s41C

The famous Richard Dawkins video about that (4min long)


"Octopus throat runs through brain" and the "blood supply in FRONT of the retina cones n rods" were the other 2 that are often mentioned.


Nature hires a million devs and then kills the project if they do not hit a deadline. Nature mixes it up with sex and creatures hunting one another. There are no chill moments.


Noodly its appendage, spaghetti its code, in its image we are made, amen.


*ramen


God must be a frontend developer.


Or hacked it all together in Perl: https://xkcd.com/224/


He's definitely full stack.


Nature rarely* refactors. Sometimes random mutations refactor, which leads to efficiencies, which lead to evolutionary advantages.

If nature doesn't refactor, then why are there so many crabs?


Clearly refactored for memory safety.


Hence C laws


There are many crabs because of convergent evolution, which in this analogy has nothing to do with refactoring. It is more similar to competing products having a lot of the same features.

