Companies fund things because they're useful or necessary. My guess is that some of the companies listed might use BSD — and perhaps wanted/needed an implementation of rsync that was not GPL3 licensed.
And/or they simply have an interest in funding Open Source projects / development.
Three out of four aren't even companies. SUNET is the Swedish NREN, NetNod is a non-profit that manages Internet infrastructure services (like DNS and NTP) in Sweden, IIS is the non-profit that manages the Swedish TLDs.
Feel free to substitute my use of the word "company" with "company / organisation / foundation". Plus others I'm surely forgetting.
I meant 'company' in the sense of a legal entity, probably paying some kind of tax, probably having to register/file their accounts every year. Here in the UK, all of these various different types of 'companies' all have to register with Companies House, and file tax returns to HMRC. 'Company' is the overarching legal term here.
— But sure, my bad: the post I was replying to actually used a term that is arguably better, 'organisations'. And I should have used that.
But my point still stands, whether a private limited company, or a non-profit of some kind, or an organisation, or a foundation, or a charity, or whatever — they're all legal entities of some kind — and they're all able to fund anything they please, if they see value in it.
- NetNod is actually a private limited company according to Wikipedia [1]. Corporate identity number: 556534-0014.
- Swedish Internet Foundation, formerly IIS, has corporate identity number: 802405-0190 (on their website [2])
- Sunet is a department of the Swedish Research Council, and uses the Swedish Research Council’s corporate identity number 2021005208, according to their website [3]
So they are all registered with the Swedish Companies Registration Office. Which I assume is their equivalent of Companies House here in the UK.
Maybe if you still think that they're not 'companies' — of some kind — then perhaps take it up with the Swedish Companies Registration Office! ;)
There is a significant amount of naivety here. I don't say that in any kind of negative way: it's purely an observation (as a dev who has built a lot of different software over the years, plenty of which is web-based)
Let me pick just one example: these guys seem quite upset that someone has tried attacking their vibe-coded app, e.g. submitting sign-ups with invalid emails...
They're like: "What's the point? Why would someone waste their time targeting us / our app? It's a waste of time."
Their fix was to enable a Captcha, while complaining that it's annoying for users, but also saying: never mind, everyone is used to Captchas already.
They don't realise at all that pretty much every single public-facing sign-up form will, given a very short amount of time, receive automated sign-ups from false addresses.
They're not specifically being targeted at all: it's not that some folk hate vibe-coded apps and are going after theirs, despite them believing exactly that (they say as much).
It's just what happens, to every site with a high enough profile — and that bar is in fact very low indeed. It's pretty much any website that has been around for a week or two.
These kinds of sign-ups are fully automated. Welcome to the internet. This has literally been par-for-the-course for years and years already.
And, of course, that's why large areas of the internet already have Captchas. An observation which they already made, but seemingly without joining the dots at all...?
And this is exactly the difference between someone who is new to the field, or a junior, or whatever, and someone who already has a load of experience behind them.
These guys are basically learning some of these facts the hard way. While believing something else is going on (that they are being targeted).
The whole of web development is actually like that though. Not just sign-up forms. There are lots of gotchas, many of them security related. Or DOS related. Or whatever (the list goes on).
And, with an approach that mostly boils down to 'kind of understanding why things are done a certain way by more experienced devs, but only after the fact', it's so very easy to shoot oneself in the foot.
Without such knowledge, it's all too easy to have one's database breached, and have personally identifiable information stolen (hello GDPR violations, and worse, which can destroy a company in many different ways, not only fines or reputation). It's all too easy to have a system that basically gets pwned by hackers.
It's a bit like believing one can just build a house, by just learning a few skills. Sure, I mean: that's possible. Kind of.
But at some point, particularly once one gets past a single-room house, or past a single-floor house, one will likely quickly realise why there are safety regulations, why we have structural engineers, why we have surveyors, why we certify various things in the building world, why various folk in the field undergo training, some of which is quite specialised, etc.
Anyhow. I'll leave my primary observations at that: receiving automated sign-ups with bad email addresses isn't a targeted attack at all. It's totally normal, expected even. And it's just the tip of the iceberg (rather: one of many icebergs).
No, existing developers are not deliberately gatekeeping, as these guys claim and believe.
We just don't have time to provide the 101/202 (and more) of web development, web security, best practices, appropriate algorithm selection (the list goes on and on; hello compsci degree, and more) to folk who just don't yet have a suitable foundation, who perhaps don't even understand why so much of this might even be needed, and who, in some cases, don't even care (admittedly: perhaps through complete lack of knowledge). It takes time to learn these things, and it takes time to teach them. It doesn't matter if anyone thinks there are shortcuts: that attitude will likely just provide very tough lessons, after the fact.
There's lots of stuff that AI doesn't know, and there's lots of apps that AI can't build. There's lots of things in security that can't just be "winged". And this isn't gonna change any time soon — despite what less technical folk might believe (including the folk in this video, it seems).
It's just not realistic to believe that AI can improve enough in the near future to be able to build fully secure apps. Sure, they might be more secure than previous versions. But how are these folk even going to be able to verify their app is secure? Security and pen-testing are a whole field in their own right. And nobody in their right mind is going to (also) trust all of that to AI.
I wish them luck in their endeavours. But it's a steep learning curve — particularly if basically believing everything is just like the early days of the internet — and with a mindset that they are specifically being targeted with bad-email-address signups (as just one single example).
Don't get me wrong: I'm not anti-generative-AI in the slightest. But being an experienced developer, I can plainly see that it is a giant bag of loaded foot-guns. Particularly for those who are new and inexperienced in this field. And AI doesn't solve that.
Ex-games-programmer here: I used to code on the C64/NES/etc. many moons ago...
From the article:
> The 6502 is a microprocessor that is used in the Nintendo Entertainment System (NES) and the Sega Genesis
This isn't correct: the Sega Genesis (aka Megadrive) did not have a 6502 in it at all.
It had a 16-bit 68000 as its primary CPU, with a Z80 alongside (commonly used for sound and music in Megadrive games, but also to provide backwards compatibility with the 8-bit Sega Master System)
> The 6502 is a very simple processor, it has only two registers, the accumulator and the program counter.
This is incorrect: the 6502 also has both X and Y index registers, as well as the accumulator.
And, if including the program counter as a register, then it's probably necessary to also include the status register and the stack pointer.
It's interesting that you describe yourself as a developer now.
Because just three months ago, in your first post [1] to HN, you said:
> I'm somewhat non-technical but I've been using Claude to hack MVPs together for months now.
Sure: you might feel as though you have now 10x'ed yourself. But, quite honestly, when the reality is that just a few months back you self-described as "somewhat non-technical", it's clear that (a) you're at such an early stage in your learning and understanding of tech, as a developer, that it's relatively easy to experience big gains, and (b) you can't actually have much of an objective measure on this, because you are in fact quite new to the field.
I read a lot of your other comments. To me, even before I had confirmation that you were actually "somewhat non-technical", and fairly new to the field — effectively a junior developer by any real measure — this was already quite apparent to me.
Based upon having been a developer for some decades myself already: I can generally spot those that talk-the-talk — and similarly: I can generally spot those who have non-trivial / deeper experience with various fields of tech.
Powering-up with AI tooling doesn't remedy that. Even if it might seem otherwise from your "somewhat non-technical"-but-newly-empowered position.
Good luck with your coding endeavours though, and with your evangelism.
I have no doubts at all that the world is changing — including how software is developed. But I see your posts for what they are.
Yeah I was a jr developer for a year before I became a PM. That's the definition of being "somewhat non-technical" as I put it.
you've been a developer for some decades which is why your reality is threatened that your craft is increasingly becoming irrelevant so you had to snoop my profile to find some confirmation that your reality doesn't get shattered
this is nothing new of course. obnoxious neckbeard engineers who don't understand where the world is going have existed since the unix debates on irc. you'll find plenty of people who agree with you on mastodon lol.
> you've been a developer for some decades which is why your reality is threatened that your craft is increasingly becoming irrelevant so you had to snoop my profile to find some confirmation that your reality doesn't get shattered
Hahaha - no, that's really not accurate at all. On lots of levels. The ability to read another user's comments is there so that anyone who chooses can actually get a better understanding of who they're talking with, and what that person is about. One doesn't have to feel threatened at all to want to use it, one simply has to be intellectually curious, and interested to find out more...
There's no need to try and portray it as a negative, and make out there's something afoot which isn't actually taking place.
Anyone who's been here on HN for any significant amount of time knows exactly what that feature is for — as well as when it might be best to use it. And people absolutely will use it.
It helps separate the wheat from the chaff.
— Please do try and take care that your wide-of-the-mark unnecessary put-downs and name calling don't violate the HN guidelines! (Just for your own good!)
Ah right, junior dev for a year. Wow, how amazing.
Plenty of room for you to 10x many times over then.
Over the years, I’ve met plenty of folk who have dabbled with software development, before deciding it wasn’t for them - then pivoting to something less technical.
Nah, I don’t feel threatened at all by AI. My job is secure. Tools change, sure. But there’s plenty of years left in software development for sufficiently skilled humans. No matter what a junior-level dev / AI evangelist might claim.
I’ll be cleaning up and properly re-implementing the MVPs that less knowledgeable folk are throwing together, slapdash. For a long while yet. And doing other stuff that AI simply can’t do properly - and quite honestly is quite far from doing.
Your rhetoric betrays your knowledge, and your bravado and insults can’t make up for that in any way.
It’s easy to get enchanted by current generative AI, and believe it far more capable than it is. Particularly if not overly skilled in whatever ___domain. Particularly if one doesn’t have much of a grasp on how generative AI actually works. Good luck with that.
Unfortunately there’s no bans for stuff like that.
But that’s why I call it out: yes, exactly, it degrades the conversation when someone is preaching about a new tech, and how it’s gonna change development, and claiming they’re a developer themselves - while not being upfront about the fact that they’ve not actually got much real-world experience as a developer at all.
And this kind of thing should always be called out when spotted. It’s just plain disingenuous at the end of the day.
I’ve probably been contracted to fix more broken projects (by devs who royally messed up), than the count of MVPs this person has made, or indeed the number of months they’ve been coding.
But at the end of the day, these kinds of folk simply make us more experienced folk more valuable to those that need a professional service in a bail-out scenario. I’ve got decades of real-world coding experience, and a healthy list of successfully published / deployed projects, including some fairly big clients over the years. My CV speaks volumes, particularly when contrasted against someone with little experience in the field of software development. I’ve seen languages and tooling come and go. I’ve headed teams and worked solo. I’ve witnessed plenty of folk like this in my time. It’s certainly not my first rodeo!
Unfortunate that someone chose to downvote me, as opposed to engaging me in conversation as to why my view might perhaps be incorrect or maybe shortsighted - as per the HN guidelines. But no real surprise - I guess that in itself is quite telling here.
Karma points might come and go sometimes, but whatever: I’ve been posting on HN (and other sites) for years, on and off. I’ve no need to try and portray myself as something I’m not, nor portray myself to have skills or experience that I don’t have. I generally post to share my knowledge and experience, because real-world experience adds up over time.
> The c64 could decompress the data faster than the Datasette could play it back, so there was no processing wait.
C64 fast loaders generally didn't use any compression whatsoever.
They would cut the load time in half by simply only writing/reading the file once (whereas normally the load process actually read the data twice), and then gain a whole heap of extra speed on top of that (turbo speed!) by implementing the load in custom code, basically working at a faster baud rate than the standard C64 KERNAL code did.
> Since tape was the medium of choice for ZX Spectrum and other rivals, C64 was on a level playing field.
Kinda. While the C64 had its own cassette player, it was very slow to load stuff compared to the others, until fast load came along.
Part of the reason behind this was that by default the C64 actually loaded the data twice during the process of loading from tape — once to actually read the data, then it read a second copy to verify the data.
> Why software crackers had to crack cassette games in the first place, given that they can be duplicated with any dual-bay tape deck.
It was actually quite rare to duplicate games with twin-tape systems — at least amongst all the folk I knew. It was easier to load a cracked game into memory, using some fast loader (or indeed: from disk), then write it out again.
> The extent of crack intros for cassette games.
I recall that a lot of cracked games showed an intro once loaded - the intro was often added onto the game, and often the tape and disk versions did this the same way (as opposed to being a separately loaded program). This was part of the reason why folk were trying to write such small intros.
> Is this... normal? I don't understand why they might want to serialize/access all of my env vars. Does anyone have a suggestion for that behaviour?
All processes get a copy of all environment variables [edit for clarity: all environment variables, from the global environment].
Unless one goes out of one's way to prevent this from happening.
> the process args included "JSON.stringify(process.env)" part
And this app chooses to receive the env vars in a JSON format. NBD really, in light of the above points.
Environment variables are not secret at all. Quite the opposite: all processes get a copy of them. They're just variables that are associated with / stored in the environment, instead of e.g. in the code itself. They absolutely should not be considered to be secure in any way.
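That inheritance is easy to see from a shell (the variable name here is just for illustration):

```shell
# Any variable exported into the environment is copied to every child process
export DEMO_SECRET=hunter2
# A child process (here, a fresh shell) receives it automatically:
child_view=$(sh -c 'echo "$DEMO_SECRET"')
echo "child sees: $child_view"
```

No special permissions are involved: the child gets the value simply by being launched from that environment.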
Managing secrets is always tricky. Even a naive attempt at trying to avoid using env vars generally leaks stuff in some way: shell command history will record secrets passed in at launch time, plus any running process (with sufficient permissions) can get a list of running processes, and can see the command line used to invoke each one.
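The process-list leak is easy to demonstrate; here `sleep 30` stands in for an app launched with something sensitive in its arguments:

```shell
# Launch a long-running child; its full command line is public knowledge
sleep 30 &
pid=$!
# Any process on the box (with ps available) can read that command line:
visible=$(ps -o args= -p "$pid")
echo "ps shows: $visible"
kill "$pid"
```

If that command line had contained `--password=hunter2`, every other user and process on the machine could have read it the same way.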
And once one gets past the naive solutions, it usually adds some friction somewhere along the line. There's no easy, transparent, way to do things, as far as I am aware. They all have some cost.
There are quite a few articles on the web about this topic as a whole. I don't think anything particularly new will come from HN users here; it'll mostly be repeating the same already known/discussed stuff. As I myself am doing here, really.
You might find it helpful to consider something like Hashicorp's Vault, or similar, for proper management of secrets.
Using env vars for secrets has become semi-normalised because of container-based development and deployment. It's okay-ish in the limited context and scope of a container, but it's not good at all in a host OS or VM context. Some dev practices have leaked through, possibly because it's an approach that works in all environments, even if it's not best practice.
I think it was actually normalised long before container-based development was even a thing. It's always just been standard common practice — both in development and for live deployment.
With the assumption being that it's safe, if the box itself is safe (is secure and is running trusted processes).
You have to store the secrets somewhere, and at point of usage they are no longer secret. So one has to assume that any truly determined adversary will undoubtedly get hold of all secrets anyhow.
Anything else is all about minimising risk. And, as with all security practices, there is always a cost/benefit analysis that has to be made, and there will be some kind of cost/benefit tradeoffs made throughout the system / system design, as a result.
But regarding your original point: I would actually think that container-based development makes it easier to provide secrets to only the containers that need them, because e.g. with Docker, environment variables can easily be specified in separate env files that are passed only to specific containers.
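A sketch of that pattern (the file name, values, and image names are all placeholders); the plain-shell part below shows the same scoping idea without needing a Docker daemon:

```shell
# A per-service env file, as you might pass to one container via:
#   docker run --env-file db.env --name db postgres:16
# (image and container names are placeholders)
cat > db.env <<'EOF'
DB_PASSWORD=swordfish
EOF
# The same scoping with plain processes: export the file's contents only
# inside a subshell, around the one launch that needs them
db_view=$( (set -a; . ./db.env; set +a; sh -c 'echo "$DB_PASSWORD"') )
other_view=$(sh -c 'echo "$DB_PASSWORD"')
echo "db process saw: $db_view"
echo "other process saw: [$other_view]"
rm -f db.env
```

Only the process launched inside the subshell ever sees the value; nothing lands in the global environment.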
I'm familiar with Vault and been using that at work — but we tend to fetch values from Vault and export them as env variables in the end anyway. Obviously we don't want to hardcode these values in the code either. So env vars are not good for secrets, hardcoding is terrible — what's good/secure then?
Env vars are fine for secrets, as long as you provide the right env vars to the right processes. You can unset them before launching a new process, or better still, not "export" the sensitive ones to all processes.
Just avoid putting secrets in the global environment if it is a concern, and instead just pass necessary secrets locally in the environment when launching a specific app.
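In shell terms, the difference looks like this (names are placeholders, and `sh -c` stands in for the app being launched):

```shell
# Scoped: the variable exists only in that one child's environment
scoped=$(DEMO_API_KEY=abc123 sh -c 'echo "$DEMO_API_KEY"')
echo "launched app saw: $scoped"
# It was never exported, so an unrelated child process sees nothing:
other=$(sh -c 'echo "$DEMO_API_KEY"')
echo "unrelated process saw: [$other]"
```

The `VAR=value command` form hands the secret to exactly one process, rather than to everything launched from that shell afterwards.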
Ideally you would fetch values directly from Vault, e.g. using the REST API, preferably over SSL (but that depends on the environment your app is running in, etc.), or using the vault command.
One can either access the Vault REST API directly inside the app itself, or one can pull data from it in a script file that launches the app, etc. and set any necessary environment vars dynamically before launching the app.
e.g. in a launch script you might do something like (sorry, no idea how to do preformatted text on HN) :
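Something like this, perhaps. This is only a sketch: the mount, secret path, and field name are hypothetical, it assumes Vault's KV v2 HTTP API, and the canned response stands in for the real curl call so the extraction step can be shown standalone:

```shell
# The real call would be (VAULT_ADDR and VAULT_TOKEN assumed to be set):
#   response=$(curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
#     "$VAULT_ADDR/v1/secret/data/myapp")
# Canned KV v2-shaped response, standing in for the curl call above:
response='{"data":{"data":{"api_key":"abc123"}}}'
# Extract the single field needed (jq is cleaner, if available):
SOME_KEY=$(printf '%s' "$response" | sed -n 's/.*"api_key":"\([^"]*\)".*/\1/p')
echo "extracted: $SOME_KEY"
# Then hand it to just the one process that needs it:
#   SOME_KEY="$SOME_KEY" exec ./myapp
```

Note that KV v2 nests the payload under `data.data`, which is why the extraction reaches two levels deep.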
Or, in wrapper launch scripts, instead of using the REST API directly with curl, use the vault command directly, if it's installed, e.g.
SOME_KEY=$(vault kv get -field=some_key foo/whatever)
Although you'd also need to do some calls upfront first, to authenticate and get an access token, before querying for data/secrets.
But doing these kinds of calls in the global environment gives those secrets to, well, everything in the global environment.
If you need to pass a vault secret to some specific app, then you want to read from the vault as close to that app's launch as possible, e.g. in a wrapper script that launches that app (instead of launching it 'naked', and leaving it to read from global environment) - or by actually accessing the vault directly from within the app (which isn't gonna be possible with third-party stuff, unless it already supports your vault natively)
https://news.ycombinator.com/item?id=43605846