
It would probably be a separate site like https://lishogi.org or https://lidraughts.org (both based on the Lichess engine). Or, it might be added to https://playstrategy.org/ (which is the Lichess engine with a bunch of other abstract games added)


Here is a list of free Git hosting services for open source software:

https://github.com/

https://gitlab.com/

https://bitbucket.org/

https://codeberg.org/ (As per the linked article)

https://sr.ht/ (Sourcehut)

Codeberg and Sourcehut appear to use open source code for their web page backend; the others seem to use proprietary software (in the case of GitLab, there is a free version, but gitlab.com also uses non-free software).

Sourcehut says they may some day charge people to host open source software on their server, but right now it’s a free beer service (but, yes, I have donated) using Free (libre) software.

Sourceforge also has a proprietary, free-to-use Git hosting service for open source, but their service is a little buggy, so I would use one (or, in my case, all) of the five I have listed.

If there are any others, please let us know.

In terms of continuous integration, in my particular use case the automated CI tests take about an hour to run, so I have a Raspberry Pi server the size of a deck of cards which runs Ubuntu 22.04. The server uses a crontab entry which checks once a day whether the Git repo has been updated, and runs the tests inside a Docker container if the repo has changed. Some problems, such as automated testing, don’t need to be solved by putting everything in a cloud.
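
A minimal sketch of that kind of setup, for the curious (the repo path, branch, image, and test script names are all placeholders for whatever your project uses):

  #!/bin/sh
  # ci-check.sh: run the test suite in Docker only if upstream has changed.
  # Invoked daily from cron with an entry like:
  #   15 3 * * * /home/pi/ci-check.sh
  cd /home/pi/myproject || exit 1
  git fetch origin
  if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
      git merge --ff-only origin/master
      docker run --rm -v "$PWD":/src -w /src ubuntu:22.04 ./run-tests.sh
  fi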


I think we should emphasize that Sourcehut is only free for now, while it is in alpha. Mr. DeVault has been very upfront about the fact that he's planning to start charging eventually. From his FAQ,

https://man.sr.ht/billing-faq.md#why-should-i-pay-when-githu...

The point is to set up a responsible financial relationship between himself and the users and avoid the bad incentive structures that "free" services have ("free" until the VC money runs out, at which point it is a roll of the dice whether they'll be monetized through acquisition or some annoying scheme).


I think that makes a whole lot of sense. The actual post explains it wonderfully:

>Most other companies are financially supported by venture capital, in the form of large investments from a small number of people. If you use their services for free, or even if you pay the modest fees for their paid plans, they are not incentivized to serve your needs first. They are beholden to their investors, and at their behest, they may implement changes you don't like: invasive tracking, selling your data, and so on.

>SourceHut has not accepted, and will never accept, any outside investments. Our sole source of revenue is from sr.ht users making their account payments. This incentivizes us to only consider your needs, and not to use you as a resource to be exploited. The financial relationship is much more responsible for both parties.

A very commendable stance.


Thank you for calling this out! It's great to see more products adopt this stance and this is one of the reasons we chose to not raise funds for Typesense [1].

From an article I wrote recently on this topic [2]:

> Selling Stock vs Selling Search

One key realization we had is that when we bring on investors, we are essentially bringing on a new group of customers - customers for whom the product is our company stock.

The value this group of customers (investors) gets from the product they are buying (stocks) is appreciating stock prices. And the way we can keep this group of customers happy is by regularly raising the next round of funding which is when stock prices appreciate, or having a liquidity event to make a return on investment.

We are concerned that “launching” a new “product line” (selling stocks) and bringing on a whole new group of customers (investors), would cause us to lose our precious bandwidth that could have otherwise been spent on our core search product that our primary group of users and customers expect from us. After all, the “company stock” product line would not exist without the core search product.

[1] https://typesense.org

[2] https://typesense.org/blog/why-we-are-not-raising-funds/


Well, Codeberg is also just a German association. Everyone can become a member, and the member dues finance the server costs.


Does SourceHut still have an aversion to Kubernetes and Docker? Last year I was trying to set up SourceHut on my home infra (which is Docker/systemd based, but was previously Kubernetes based) and I was told I'd be banned if I asked about it, and that they don't support that kind of software. Super weird interaction from someone I otherwise used to admire.


Some Kubernetes and Docker people get somewhat religious about it, and rather than taking “I/we don't support that and don't have any immediate plans to, but feel free to try it yourself” as a valid response will plead, nag, and otherwise try to cajole a project's maintainers to reconsider, sometimes being irritatingly persistent and calling into question a person's overall intelligence because they don't currently want to directly support that plumbing. Such discussions can become time-consuming, tiresome, and (if the other party is really persistent in trying to convert them to the cause) difficult to just ignore.

> be banned if I asked about it

I assume this means the maintainer(s) have experienced the above a few times, and have given up trying to be more polite about it!

Don't take it personally. If you want to use a tool with that plumbing then feel free to DIY. If you make it work well, perhaps publish your process and/or images and support it for others who need/want that support.


Bah, what a dumb argument. There's a whole ocean between "we don't really support Docker, you're on your own" and "if you dare ask this again, you'll get banned." That's not the Kubernetes guys being religious here.

Also, it doesn't even make sense from an engineering point of view. I understand not liking Docker the company or Kubernetes the product, but Linux namespaces are a kernel-level facility, and banning people because they dare ask how to integrate this product with a native subsystem seems like absolute zealotry from people with oversized egos.

Nothing wrong with that, but let's call a spade a spade. It's their project, so it's their right to be a prick about it, but that shouldn't stop other people from calling them out on it.


> There's a whole ocean between "we don't really support Docker" and "if you dare ask this again, you'll get banned."

There is. And I'm suggesting that the water filling that ocean has come from having the same discussion over and over again with people who have asked in the past, each thinking they might be the one to finally help you see the light. I've not personally dealt with it from the point of view of infrastructure choices, but I hear tales of those who have, and I've been subject to it with regard to people who didn't agree with a licence choice, so I can speak to how much it makes you not want to engage at all, just in case.

I'm not suggesting you'd be like that, but I understand not wanting to engage on the matter based on previous experiences.


Perhaps there is a difference between "not wanting to engage on the matter" and "banning someone on sight"? Maybe there's polite ways to say, "Sorry, but we're not interested in supporting docker, and we don't care about any arguments for it" that would take less effort than rudely telling someone to fuck off?


Of course the suggestion of “banning on sight” could have been an attempt at humour by way of hyperbole, something I should have noted in my previous reply.

> that would take less effort than

Unfortunately, not always. Sometimes the only way to convince people that it isn't worth their time continuing to try to change your mind, is to blatantly be a dick about it. I try not to jump straight to dickishness though: I'll state my position politely once, I may have time to repeat that once or twice more, then out come the big guns.

On a public mailing list or similar I might be more blunt: linking to past discussions in the first instance, then jumping to DickCon1. On a public group the discussion is taking more than just my time, and might encourage others to chime in and drag the matter out instead of letting it close.

And again: the suggestion of “banning on sight” could have been intended as an attempt at humour by way of hyperbole, rather than the direct “off you fuck” that was felt. Communication of sentiment online is very prone to errors like that.


> Communication of sentiment online is very prone to errors like that

This is exactly my point: given that online communication is low bandwidth, why are you attempting humour with someone new, who is therefore very unlikely to get it? (And, given Drew's track record, I find it extremely unlikely that he is just trying to be funny, but that's a different matter.)

Also, regarding your point about not wanting to spend more time on a discussion than is necessary: you can simply... not. Say you're not interested, post a link to a FAQ that covers your stance, and simply don't engage. You don't need to be offensive. You don't need to be worried about people dragging the matter out. It's just a discussion.


It was on Libera in a public channel. It wasn't humor. I'm very used to the IRC crowd and recognizing even the cringiest of humor tactics. There's also another person on this thread now to whom Drew reacted similarly on a mailing list.

I will be very surprised if Drew isn't called in by his communities at some point.


> Some Kubernetes and Docker people get somewhat religious about it

I asked once and didn't get a response for what I think was 3-5 days and asked again (it is a low frequency channel, but that gap seemed appropriate). I wasn't making a religious argument, literally just asking about the availability of images.

> Don't take it personally. If you want to use a tool with that plumbing then feel free to DIY. If you make it work well, perhaps publish your process and/or images and support it for others who need/want that support.

I mean Drew was pretty clear that I couldn't even ask questions related to either subject in that channel. I don't take it personally, but it certainly affected the way that I view the project and Drew as a human being.


Drew seemed irritated when I posted a Dockerfile on the mailing list for setting up the Hare language compiler for development, trying to save other folks the effort.

https://gist.github.com/GavinRay97/e3c166c5ba24c2c1bc4a09d7b...

He said "Docker isn't a supported installation mechanism."

I wasn't aware of Drew's anti-docker stance, whoops.


We're going to start looking into k8s and Docker soon as one of the candidates for infrastructure in our new datacenter rollout, but don't hold your breath. It will be a while before this bears any fruit and we may decide it's not worth it.


Orchestrators and runtimes should always be carefully chosen, if used at all, so that's great. Sounds like you have some smart people working on it. My post was less about the tech and more about the culture that's been established at SourceHut with respect to those technologies. If I can't even join #sr.ht and ask about images or strategies others are using, that's going to be an issue.


#sr.ht is an on-topic channel for end-user support and development discussion. Since Docker/k8s have been rejected upstream, they're off-topic for #sr.ht. The channel needs to stay quiet and keep a high signal-to-noise ratio to make sure that users get attention when they need support. That said, you could discuss it on the off-topic channel, #sr.ht.watercooler, if you like.


Speaking from the user side on CI usability: SourceHut, while not supporting a cache (which would be dope), has been pretty good with Nix and supports running NixOS unstable as an image for CI. This has been better for me than wasting steps building containers from Nix and then running the containers, as some CIs require. I believe the SourceHut setup is in nixpkgs too, so you can run it yourself easily.



Pretty feeble argument. Basically it boils down to "if you use Docker you risk not learning how it works and hurting yourself." As I mention in the sibling comment, it would feel less patronising to just say out loud "we do not like containers, roll your own."


I don't think so.

I was researching Keycloak integration with Kubernetes (a project needed it at the office).

I installed it on bare metal, set up its TLS certificates and enabled native HTTPS. While searching for integration I found a video. The person installed the Keycloak Docker image, and dropped an HTTPS proxy container in front of it to enable TLS/HTTPS.

If this is not both bad practice and being misinformed about Keycloak in one step, I don't know what it is.

The person didn't learn how to use and administer Keycloak, didn't make good judgement calls about security and published bad information while doing that.

Docker is easy to abuse, and it creates people who think they know stuff but do not in reality.

Docker. Pull Responsibly (TM).


Oh, so there's people that don't know how to use containers, so they're bad? What kind of patronising, nanny state kind of argument is that?

Listen, I understand not liking containers. That's fine. But just say so, or bring more concrete arguments to the table than "I saw a guy in a video creating an insecure container. Thus Docker creates ignorance."

As if configuring servers by hand isn't prone to misconfiguration or bad security practices. Perhaps we have namespaces today because people have been creating insecure, unmaintainable pet systems since the stone age, yet it doesn't save you from hurting yourself if you don't know what you're doing.


Docker != containers. You're arguing as if the srht crowd, or the DIY anti-Kubernetes crowd generally, were against containers. Or as if Docker were the only means of secure isolation. More likely than not you'll find them running bwrap, lxc, qemu, etc. You're misjudging your opponents here; they're very much the opposite of conservative sysadmins who have gone incompetent and don't understand the new tech. It's (imho) people who are fed up with the "click here to deploy a production server", the "infinite elastic scaling", and the fact that most of these false promises are hidden behind bloated software. Nobody is saying that Docker gets nothing right, just that most of the time others do more of the good and less of the bad.

> I saw a guy in a video creating an insecure container. Thus Docker creates ignorance

It's more than a guy. Once you see that Kubernetes is corporate bloat, you gotta ask about the responsibility of the people making it. Yes, it's creating people who don't understand how things are actually wired (and no, this is not a "natural" direction of history). It's making people dependent on complex stuff, and it's actually hiding from mainstream knowledge the simple ways to do simple stuff. I recently had to help someone deploy stuff on an OpenShift hosting and I must say the experience is infuriating (not even talking about the runtime efficiency of managing the bazillion moving parts).


> Oh, so there's people that don't know how to use containers, so they're bad? What kind of patronising, nanny state kind of argument is that?

You're building the rest of your comment on this misinterpretation of mine.

What I say is "Docker is easy, but it lowers the bar for making mistakes too much. Using Docker without enough information creates bigger problems, faster".

You can make a lot of mistakes, and fatal ones at that, while working on bare metal too. I've been managing systems for 15+ years, using Linux for close to 20, and have had my fair share at every abstraction level (metal, VM, container, etc.).

However, K8S and Docker are susceptible to creating a whole new set of problems, and more importantly without your knowing it, until it's too late. VMs and bare metal fail relatively early and with more noise. If you do something wrong with your K8S deployment, you can't mend it, and need to re-deploy it.

I do not prefer Docker and K8S, and avoid the latter if I can, but I'm not a hard-line extremist against either of them. I manage containers and a K8S cluster at work, too.

And no, I never patronize anyone. That is not my style. I just shared my experience, and said that "learning wrong things is easier with Docker", that's all.

I know excellent developers and sysadmins who do wonders with containers, too. Docker and K8S are sharp knives with no warnings on them. That's all.


> Docker and K8S are sharp knives with no warnings on them.

And there's nothing wrong with that. You might also "hurt" yourself by installing software by hand.

Let's agree to disagree, this "it could be dangerous!" argument feels like doubling down on a weak position, but I honestly do not care about fighting someone over the internet about it. Your computer, your rules :-)


> And there's nothing wrong with that.

Yes, except poor documentation and lots of open secrets.

There is no need to fight. We're just discussing. We can do things differently and obtain the same brilliant or disastrous results regardless of the underlying abstraction layer/platform.

All approaches have advantages and disadvantages, so all takes are equally weak IMHO.

Have a nice day,

Cheers.


> I installed it on bare metal, set-up its TLS certificates and enabled native HTTPS. While searching for integration I found a video. The person installed Keycloak Docker image, and dropped an HTTPS proxy container in front of it to enable TLS/HTTPS.

> If this is not both bad practice and being misinformed about Keycloak in one step, I don't know what it is.

Would you mind sharing your argumentation about this, both in the context of Keycloak and in general?

In my eyes, enabling TLS on whatever application you want to run directly is almost always a bad choice, because you now need to deal with its unique implementation, as well as any vulnerabilities that the implementation could have. For example, even if all of your services run Java, you might find that a Spring application, a Spring Boot application, a Quarkus and a Vert.X application all have different ways of enabling it:

  Spring example: https://www.baeldung.com/spring-channel-security-https
  Spring Boot example: https://www.baeldung.com/spring-boot-https-self-signed-certificate
  Quarkus example: https://quarkus.io/guides/http-reference#ssl
  Vert.X example: https://github.com/eclipse-vertx/vert.x/blob/master/src/main/java/examples/EventBusExamples.java#L106 (couldn't even find docs)
And that's before you have parts of your stack running Node, Python, .NET, Ruby, PHP, Go or whatever else you might have to deal with. You can basically forget about automatically provisioning ACME certificates through Let's Encrypt, as well as being able to (easily) have all of your certificates in one place (at least for whatever that node needs), for easier renewals. This is especially noticeable when you want to serve dozens of different sites/applications from the same node.

In contrast, when you have a web server as your point of ingress, that then acts as a transparent reverse proxy in front of your applications:

  - it takes care of all of the certificates, in a common format, they're easy to renew or can be provisioned thanks to ACME
  - it can take care of other concerns, like path rewriting, HTTP to HTTPS redirects, rate limiting, additional caching or headers
  - your applications can talk to one another in the internal network through HTTP, whereas encryption there can be added transparently
  - with containers, this might be an overlay network where the clustering/networking solution takes care of internal certificates and rotates them often
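
To make that concrete, here is a minimal sketch of the pattern with Caddy as the ingress (the domain and backend port are made-up placeholders; Caddy obtains and renews the ACME certificate on its own):

  # terminate TLS at the edge, proxy plain HTTP to the app behind it:
  caddy reverse-proxy --from example.com --to localhost:8080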
So, how is that a bad choice? Some reasonable arguments that I can come up with are:

  - it's easier to screw things up and expose stuff directly (e.g. port mappings, instead of overlay networks)
  - one could feasibly argue that sometimes the application itself might want to look at the certificates and do something with them (e.g. expiry alerts)
  - one could also argue that reverse proxies won't necessarily be perfect and some software might not correctly deal with headers
Perhaps I'm missing something that's staring me in the face.


I've been paying for years, because I agree with these principles. It's a good service.


Has ddevault ever commented on how much build minutes will cost out of alpha? From their pricing page it seems like a fixed monthly cost for unlimited minutes, which doesn't seem sustainable to me.


Right now it’s unlimited*, where the * means that you’re not allowed to abuse it by doing things like Bitcoin mining. So far the model seems to be financially sustainable (they release quarterly financial statements).


The guy behind SourceHut is very nice and dedicated to the project; his blog is also interesting for understanding the philosophy behind SourceHut.


Worth noting as well: of the various hosting services listed, Codeberg is the only one explicitly for FOSS and not for proprietary software. Sourcehut seems to be in the camp where everything used to run the service is FOSS but they would be OK with hosting non-FOSS software, whereas Codeberg isn't for hosting proprietary software at all.

From the FAQ (https://docs.codeberg.org/getting-started/faq/#is-it-allowed...):

> Is it allowed to host non-free software?

> Our mission is to support the creation and development of Free Software; therefore we only allow repos licensed under an OSI/FSF-approved license. For more details see Licensing article. However, we sometimes tolerate repositories that aren't perfectly licensed and focus on spreading awareness on the topic of improper FLOSS licensing and its issues.

> Can I host private (non-licensed) repositories?

> Codeberg is intended for free and open source content. [...]


There was a discussion on sourcehut about making this change to their ToS; not sure what happened with it, though.

https://lists.sr.ht/~sircmpwn/sr.ht-discuss/%3CC32T7UNXIK7O....


There is also Framagit (https://framagit.org/public/projects), a GitLab instance run by a French non-profit, Framasoft (https://framasoft.org/en/). I host my project there, no issues.


I have a lot of love for framasoft, but beware that they have limited resources and have in the past closed down or geofenced services I was relying on.

I think Codeberg's co-op style governance/funding makes a lot of sense w.r.t sustainability.

A reminder that Framasoft's a worthy cause to donate to if you are looking for causes to support!


They have been around for a while at this point. 20+ years of good work.


Codeberg source: https://codeberg.org/Codeberg/gitea

Codeberg is a fork of Gitea (https://github.com/go-gitea/gitea), which curiously uses GitHub for hosting.


Didn't even change the readme, it seems.

*EDIT* https://codeberg.org/

I'm guessing they're just maintaining their own copy and Codeberg is more about the service rather than being another git hosting project.


I no longer trust bitbucket after they shut down their mercurial support. That was the thing that differentiated them from everyone else. Why even use them now?


To be fair, hardly anyone ever used Bitbucket due to Mercurial support.

Some people probably used it because, for a while pre-GitLab, it was the GitHub alternative with the most generous free tier for private repos.

However, the overwhelming majority of users probably use it because they're already in the Atlassian ecosystem due to JIRA.


Bitbucket is like an ex girlfriend you don't quite recall why you broke up with, but afterwards every time you meet her and hear her voice it all comes back in a rush how disgusted she made you feel.


Heptapod archived 250,000 Bitbucket Mercurial repos and runs a fork of GitLab that supports Mercurial: https://octobus.net/blog/2020-04-23-heptapod-and-swh.html


I wonder if there even is public mercurial hosting available anymore. This git monoculture is starting to be a bit annoying.


Heptapod offers a Mercurial hosting service (free for OSS) that’s a fork of GitLab: https://heptapod.net/


Really appreciate the link. Much prefer mercurial to git on my own stuff.


Same.


SourceHut offers mercurial hosting, though it’s entirely community maintained.


For the longest time I just couldn’t see what the fuss (about Git) was about. But after finally switching I don’t think I’d ever go back.


What VCS were you coming from, and what makes Git so much nicer?

I believe Git and Mercurial are considered to be 'about as good' by most.


Sourcehut.


I tried out self hosting on my $5/mo VPS and found it much easier than expected. https://jeskin.net/blog/self-hosting-git-with-stagit/


Can't you just run GitLab or whatever else out of a container in 5 seconds nowadays? Map a single volume and you are likely done. It's not because it's hard to self-host that I choose to use third-party git services (it's clearly not!); it's just that the benefits of self-hosting a git repository are increasingly few unless you have strict security requirements, etc., and the free options are so good now. Running self-hosted git is trivial in 2022 if you really need to, with many one-line container deployment options to choose from.

> https://registry.hub.docker.com/r/gitlab/gitlab-ce/
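
For example, a single-node setup is roughly one command (an untested sketch; the hostname and host path are placeholders, and the official docs mount config and logs as additional volumes):

  docker run -d --hostname gitlab.example.com \
    -p 80:80 -p 443:443 -p 2222:22 \
    -v /srv/gitlab:/var/opt/gitlab \
    gitlab/gitlab-ce:latest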

While it might be fun to self-host, a 5-dollar-a-month fee to run it is not price competitive with the free or paid individual tiers at github.com for a single user. I'd probably only do this if the git repo was being hosted on my home LAN.

> https://github.com/pricing


Yeah I'm sure you could. I use that VPS for my blog and personal projects also so I wanted to keep resource utilization to a minimum and not administer a webapp.


No, it's too bloated to run for longer than the mentioned 5 seconds if you want, in good faith, to avoid depleting the biosphere with bloat like that.


But why? If you have an SSH server running, you can immediately set up a git server. Just do git init --bare; no gigantic web server overhead.

https://rgz.ee/git.html
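
Concretely, the whole “server” setup is something like this (user, host, and paths are examples):

  # on the server:
  ssh user@server 'git init --bare repos/myproject.git'
  # on your machine:
  git remote add origin user@server:repos/myproject.git
  git push -u origin master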


Gitlab has a nice GUI, team features, and CI/CD integration. It's a reasonable choice for a team.

I also remember there were tools that use Git as a backend for a change review system and for an issue-tracking system (much like the stuff which Fossil has integrated).

With that, you theoretically don't need a central server at all, as long as you can send patches to each other. In practice, a central server is an important convenience that helps keep the history synchronized between several developers.


Yeah, you could even use recent SSH clients if you leave Gitea, which is written in Go, out of the equation:

https://github.com/go-gitea/gitea/issues/17798


Presumably people value the UI (also, what is the "gigantic web server overhead" you're referring to?).


Ever tried to install GitLab? It's like a 1 GiB compressed package, lol.


Gitea is much more lightweight.


It is easy to give people read-only git access no SSH, if you want to share your code with the internet at large?


If you add a git-daemon-export-ok file to the repo, it's accessible read-only over the git protocol. That's how all my repos on https://git.jeskin.net are set up.

https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protoco...
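
In other words, roughly this (paths and hostname are examples):

  # mark the repo as exportable:
  touch /srv/git/myproject.git/git-daemon-export-ok
  # serve repos under the base path, read-only:
  git daemon --base-path=/srv/git
  # anyone can then clone with:
  git clone git://example.com/myproject.git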


note: Transposing "it" and "is" has made my question look sort of passive-aggressive or sarcastic, this was accidental.


The intent was clear from the context, no worries!


Stagit looks really nice and lightweight.

I like the idea of generating static webpages, it sounds great for read only content.

However, it doesn't sound very good for collaboration. Someone can clone your repo, but it doesn't seem easy for them to submit changes back upstream?


The best way to collaborate would be just sending patches, either with git send-email or as an attachment to a regular message. Then the author applies them and the repo/stagit will display the change.

https://git-send-email.io/
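
The contributor side of that workflow is roughly (the address and branch are placeholders):

  # turn your local commits into patch files:
  git format-patch origin/master
  # mail them to the maintainer or list:
  git send-email --to=maintainer@example.com *.patch
  # the maintainer then applies them with:
  git am 0001-some-change.patch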


Please keep in mind codeberg runs on donations (like mine :) ). If you join and have resources, consider donating (they're on LiberaPay). Their finances seem healthy and sustainable for now.

https://liberapay.com/codeberg/


$400/month hardly seems sustainable. That pays for hosting and not much else.


You can also become a supporting member and donate directly, which is what I did: https://join.codeberg.org/


> Sourceforge also has a proprietary, free-to-use Git hosting service for open source

It's not proprietary?

https://sourceforge.net/create/

> And, as if all of that wasn’t enough, the SourceForge platform runs on Apache Allura which itself is Open Source!

https://allura.apache.org/


Gitee (gitee.com) is also a free hosting service that can be used for OSS, afaik.


Which is a Chinese site hosted behind the GFW. I can't even access the site because they block my IP address. I really hope OSS projects don't consider them for their hosting.


I guess it's the same stuff developers from other countries like Iran, Syria, etc., face when trying to work with Github. It's a shame many OSS projects find providers like these OK when there should be plenty of alternatives that don't block users by country.


Fun fact, they're literally backed by the Chinese government who are trying to promote open source, I assume because it can't be embargoed.

On the bright side I suppose that means they shouldn't have any of those pesky commercial pressures to start charging people.

I don't think I need to say what the down sides are.


>"Fun fact, they're literally backed by the Chinese government who are trying to promote open source, I assume because it can't be embargoed."

And then the feds come for you for breaking some of those US sanctions you had no idea about.


Not everyone is in the US.


True. But their hairy paws may still reach.


Subject to mandatory review by a censor before your code gets published.

https://finance.yahoo.com/news/gitee-chinas-answer-github-re...


China-based


I know, but not all of the alternatives noted by the parent post are hosted in Europe, either. Some are US based, and it doesn't mean your project can't be subject to censorship.


Keybase also has support for hosting Git repos: https://book.keybase.io/git


I'd exercise caution building on top of it, since I don't think Zoom cares about Keybase, and I was unable to find any server among their repositories (https://github.com/orgs/keybase/repositories) such that one could self-host after Zoom tires of running the Keybase infra.


I used to use Keybase for my dotfiles repo but I thought Keybase was on their way out since being bought?


Which is a shame, assuming it's true. I thought keybase was really innovative in their use of modern cryptography.




Sourcehut was nice and minimal last time I used it. Worth checking out. Also: the creator has a great blog


Wait, most if not all are not noscript/basic (x)html friendly, which codeberg seems to be. Ok, github.com has most of its critical functions working with a noscript/basic (x)html browser (microsoft has managed not to f*ck it up... yet); with gitlab and bitbucket it's not even possible to create an account, and, if I recall properly, the same goes for sourcehut.

There are other git services which are noscript/basic (x)html friendly from the ground up, and EU based, but much less popular.


I... don't think that's true of Sourcehut? Drew has certainly argued that websites should be usable without JavaScript, so it would surprise me if Sourcehut didn't follow the same standards.


I should test, but isn't there a JavaScript-only captcha at account creation?

A few years ago, captchas took a while to get through, but at least they worked without that horrible JavaScript.


Also proprietary and relatively new on the scene:

https://www.jetbrains.com/space/


Has anyone tried this yet? It looks promising, especially with JetBrains behind it.


Let's not forget https://savannah.gnu.org/.


Didn't SourceForge open source their forge server software? IIRC it is an Apache project now.



GitLab and GitHub both provide a program that runs on your Raspberry Pi and executes the CI pipeline; search for ‘runner’. As a solo developer, you can even install the runner on your laptop. It will only be needed while you’re connected to the network anyway.


GitHub and Bitbucket will shut you down at the whim of the Twitter sharia police:

https://knowyourmeme.com/memes/events/c-plus-equality-c

I suggest using and supporting the alternative that explicitly promises not to engage in censorship: gitgud.io


You forgot the most important one for the future https://radicle.xyz/


> gitlab

Pretty sure GitLab is open source.


GitLab is a nuanced situation.

There is a FOSS GitLab https://gitlab.com/gitlab-org/gitlab-foss

The hosted gitlab.com instance uses EE features, so the parent is correct to say that the gitlab.com site is not open source.


> The hosted gitlab.com instance uses EE features so parent is correct to say that the gitlab.com site is not open source.

It’s OSS but with proprietary parts. That’s the issue with the term "open-source": their source code is indeed open [1] but it’s not 100% free (as in freedom).

Edit: mmh apparently I’m mistaken; according to another commenter this is called "open-core" and not "open-source" [2].

[1]: https://gitlab.com/gitlab-org/gitlab

[2]: https://news.ycombinator.com/item?id=33237641


Another term you might be looking for is "source available", meaning you can read the source code but not modify or distribute it without a license.


Open core


Everyone has their own story, and one’s person’s experience can be very different from another person’s experience. I used egrep a whole lot, dozens of times for the automated test setup I have for my open source project. I had to spend most of an hour this morning updating that code to no longer use egrep—a non-trivial task. Here’s the amount of hassle breaking egrep has given me:

https://github.com/samboy/MaraDNS/commit/afc9d1800f3a641bdf1...

This is just one open source project. I’ve seen fgrep in use for well over 25 years, back on the SunOS boxes we used at the time. egrep has apparently been around for a very long time too. Just because it didn’t get enshrined in a Posix document—OK, according to Paul Eggert it was made obsolete by Posix in 1992, but apparently no one got the telegram, and it’s been a part of Linux since the beginning and is also a part of busybox—doesn’t mean it’s something which should be removed.
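
For anyone else doing the same migration, each call is a mechanical one-for-one substitution (and fgrep likewise becomes grep -F):

  # before:
  egrep 'foo|bar' file.txt
  # after:
  grep -E 'foo|bar' file.txt
  # quick-and-dirty mass rewrite (GNU sed; review the diff before committing):
  grep -rl egrep . | xargs sed -i 's/\begrep\b/grep -E/g'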

I’m just glad I caught this thread and was able to “update” my code.


This is my general take on it. As I posted on Twitter:

As one example, chess.com thinks it’s suspicious that Hans became a grandmaster at 17 instead of a younger age (page 15 of the report), but keep in mind that the legendary Joseph Henry Blackburne didn’t even learn how to play chess until he was 17. More recently, world champion #15 Vishy Anand did not become a grandmaster until he was 18.

Point being, it argues like a polemic, not a fact-finding report. They themselves admit this. On page 19: “Chess.com is unaware of any concrete evidence proving that Hans is cheating over the board or has ever cheated over the board”.

My personal hurdle, which has not changed in the last month, is real evidence of online cheating on or after June 20, 2021 (Hans’s 18th birthday), or real evidence of over the board cheating. Understating the extent of his online cheating in his September 6 speech is not evidence of recent cheating. I don’t like it, but it might not even be deliberate lying, but a combination of a foggy memory and nerves.

Regardless, while apparently guilty of online cheating when he was 17 or younger, and possibly guilty of understating the extent of his juvenile delinquency, I see no need to ban him from playing over the board chess with the evidence we have on the table. To ban someone as an adult for something done while a juvenile is generally considered immoral, and to ban someone without solid evidence is also considered immoral. This is a line I draw in the sand and will hold to.


> To ban someone as an adult for something done while a juvenile is generally considered immoral

I don't think it's so clear, especially when the time elapsed is short.

If you cheated in a game we played yesterday and today is your 18th birthday, I'm still not going to trust you today.


> This is a line I draw in the sand and will hold to.

Would you play him for money at a game where you are both equally skilled and it's easy to cheat?


Dealing with calendars can be pretty difficult. Since I recently wrote a script in Lua to be my personal assistant, processing calendars, todo lists, mailing lists, etc., here’s a Lua form of the code to calculate the day of the week. This is accurate for any Gregorian date:

  -- Calculate the day of the week
  -- Input: year, month, day (e.g. 2022,9,16)
  -- Output: day of week (0 = Sunday, 6 = Saturday)
  function dayOfWeek(year, month, day)
    -- Tomohiko Sakamoto algorithm
    local monthX = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4}
    if month < 3 then year = year - 1 end
    local yearX = (year + math.floor(year / 4) - math.floor(year / 100) +
                   math.floor(year / 400))
    local out = yearX + monthX[month] + day
    out = out % 7
    return out
  end
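
As a quick sanity check: dayOfWeek(2022, 9, 16) computes yearX = 2022 + 505 - 20 + 5 = 2512, then 2512 + monthX[9] + 16 = 2512 + 4 + 16 = 2532, and 2532 % 7 = 5, i.e. Friday, which is correct for that date.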


In C, the algorithm is more mysterious:

  dow(m,d,y) { y-=m<3; return(y+y/4-y/100+y/400+"-bed=pen+mad."[m]+d)%7; }


When choosing a cryptographic algorithm, there are generally two schools of thought:

• Use something FIPS approved and tried and true (RSA, SHA-256, AES in GCM, maybe a NIST/FIPS curve, although Google “Dual_EC_DRBG” first, etc.)

• Use a cryptographic primitive developed by Daniel J. Bernstein (Curve25519—which, yes, is NIST if maybe not FIPS approved, [1] ChaCha20 and its derivative BLAKE2, etc.)

These are two camps, and there is some tension between the two camps (as can be seen elsewhere in this discussion) about which approach is better, with very heated emotions. While heated, as someone who remembers the BIND-vs-djbdns and Sendmail-vs-Qmail flame wars of 20 years ago all too well, [2] I find an old fashioned to-DJB-or-not-to-DJB heated discussion a refreshing blast from the past compared to the kind of toxic cancel culture, doxxing, and hideous political attacks dominating social media here in the 2020s.

For a brand new project using crypto, I would use libsodium, which is firmly in the DJB school of thought w.r.t. cryptographic algorithms used, simply because, while newer, it doesn’t have the security history OpenSSL has had, and looks to be quite secure and actively maintained by multiple developers.

[1] https://csrc.nist.gov/News/2017/Transition-Plans-for-Key-Est...

[2] I used Qmail and djbdns during that era before transitioning to my own MaraDNS. I only made MaraDNS because of the concerns about djbdns’ then-not-open-source license and the lack of open source diversity w.r.t. DNS servers back then. All moot, now that Qmail/djbdns are public ___domain, both NSD/Unbound and KnotDNS have come out, and Postfix has become the most common non-Sendmail email daemon. Actually, w.r.t. email, I use an online service here in the 2020s (mxroute), because the headaches of bypassing spam filters are now such it’s best left to someone devoted to just that one problem.

[3] My preferences tend to win the NIST standardization workshops. I liked Rijndael the most during the AES process because it, unlike the others, has a variable block size, so can be used for stuff like AEShash-256. I liked Keccak the most during the SHA-3 process because it was the only unbroken extensible output function (XOF) which ran decently on 32-bit hardware; the only other unbroken XOF, Skein, was pretty much 64-bit only, and BLAKE didn’t have an XOF mode at the time. I still wish BLAKE had a sponge-style “rehash to generate more random bits” XOF mode to make fast forwarding its XOF computationally infeasible.


If you want to minimize options, and yes, I know corporations with “you will use only these cryptographic algorithms” lists, I would go with SHA-2-256 for cryptographic hashing, [1] AES-192 for encryption, and RSA-2048 for public key. I would phase out RSA-2048 with whatever NIST chooses as their post-quantum algorithm once that process ends [2], and use SipHash for the special case of choosing which hash bucket to use in hash tables. [3] I would maybe have SHA-3 on standby just in case SHA-2 ever gets broken.

[1] SHA-512/256 protects us from key extension attacks, but has much less deployment than SHA-2. We can mandate using SHA-2-256 in HMAC mode when we use it with a secret the attacker does not know.
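
For example, such a keyed digest is a one-liner from the command line (the key and file name here are placeholders):

  openssl dgst -sha256 -hmac "my-secret-key" message.txt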

[2] https://csrc.nist.gov/projects/post-quantum-cryptography/wor...

[3] SHA-2 is called a “hash”, as is “SipHash”, but they serve different use cases. SHA-2 is generally for a long hash where both the user and the attacker can compute the hash of any given string. “SipHash” is to be used as the compression function for a hash table [4] where performance is critical and the hash is shorter, but it is protected by a secret key.

[4] https://en.wikipedia.org/wiki/Hash_function

[5] If you ask me what my favorite cryptographic hash and stream cipher is, I would say RadioGatún[32]. It has roughly BLAKE2’s speed with the added benefit of enhanced hardware performance. It was the only Extendable-Output Function (XOF) which existed back in 2007—the last time I revised which crypto primitives my own open-source software uses—and remains unbroken as we enter 2022. I know it’s hardly mainstream, but it is Keccak’s direct predecessor, and it has been analyzed by world-renowned cryptographers.


You posted what you would use but did not justify it. Why not ECC? Why not AES256? Why not SHA512? Is truncating it that hard? What mode of operation?

People are very opinionated about cryptographic algorithms, but more often than not they tend to be kinda clueless (not saying that this is the case here).


I chose, with the exception of SipHash, FIPS approved algorithms. As per your questions:

ECC: There was a lot of controversy because one of the FIPS (Edit: NIST) curves apparently has a back door (Edit, it was Dual_EC_DRBG), so just stick with RSA until we have standardized post-quantum public key cryptography.

AES192: If I had to choose one key size, AES192 comes off as the best compromise between speed and security; larger keys result in more AES rounds. Also: AES-256 is weaker with related key attacks than AES-192, and AES-192 is approved for Top Secret (not just Secret) information, while AES-128 isn’t.

SHA512: It uses 64-bit addition operations, which are less than optimal on certain low-end embedded processors. Same reason BLAKE3 dropped the 64-bit variant that BLAKE and BLAKE2 have; but I will let @oconnor663 speak on why they dropped the 64-bit version.

For a block cipher mode of operation, I would probably choose GCM, since it looks secure and is approved over at https://csrc.nist.gov/projects/block-cipher-techniques/bcm (ECB is still approved, strangely enough).


> I will let @oconnor663 speak on why they dropped the 64-bit version.

The most important thing to us was that there should be just one version. It's simpler for everyone. But we actually did go back and forth about which version it would be, 64-bit or 32-bit. "PCs and servers will all be 64-bit for the foreseeable future" was definitely a big point in favor of choosing 64-bit words. Section 7.2 of our paper goes into detail about why we went with 32-bit anyway.


Why go with FIPS approved algorithms though? Why not Curve25519, which is easier to implement safely than RSA or other curves?

"If I had to choose one key size, AES192 comes off as the best compromise between speed and security"

Did you test the speed differences? Especially when AES-NI is involved.


DJB’s algorithms are good. They haven’t been FIPS approved, mind you (EDIT: Curve25519, however, has been NIST approved [1]) but they are good.

If I were to make a short list of approved algorithms, I'd stick with FIPS, not because they are necessarily the best, but because they are standardized. Non-standard algorithms have their place—I use RadioGatún[32], of all things, for a cryptographically secure PRNG as well as for a cryptographically strong password generator—but not in a corporate context of making a “use only these algorithms” list so people don't make novice mistakes like using MD5 to encrypt passwords or a block counter in ECB mode.

I mean, yeah, DNSCurve would have been better than the DNS-over-HTTPS which won instead, but standards sometimes win simply because the standardization process for what won is perceived as being more cooperative.

As an aside, I actually like DJB’s software. I used Qmail and djbdns back in the day, and in fact I currently maintain a branch of djbdns which is, as far as I know, the only djbdns fork which is actively maintained here in the 2020s: https://github.com/samboy/ndjbdns

[1] https://csrc.nist.gov/News/2017/Transition-Plans-for-Key-Est...


So you are fine with curve25519 and chacha20, just not in "professional" tools like wireguard and tls, right?


I didn’t say that. Look, I really do have a lot of respect for DJB and his skills as both a coder and cryptographic algorithm designer. If I didn’t, I wouldn’t be using SipHash in one of my open source projects [1] nor would I be maintaining a fork of djbdns. [2]

Let’s just breathe in deeply and calm down. This is not a personal attack directed at anyone here, but merely a note that emotions seem to be flaring up, and I don’t think it serves anyone’s interests for this to become a Reddit- or Twitter-style flame war.

In cases where people know the risks, have read Applied Cryptography cover to cover, know why not to use MD5 or ECB in production code, etc., DJB algorithms are just fine. That’s the case with WireGuard and that’s the case with TLS.

What I am saying is this: In a corporate context, where you have programmers who would otherwise make novice mistakes like using simple MD5 to encrypt passwords—I’ve seen that in the real world, for the record—I would put mainly FIPS approved algorithms on a short list.

[1] https://github.com/samboy/lunacy

[2] https://github.com/samboy/ndjbdns

[3] Solar Designer did use MD5 to encrypt passwords for WordPress about 15 years ago, but he did it in a way that minimizes security risks (still secure today, though Argon2 or bcrypt are better); that was in an era when the only cryptographic primitive PHP had was the now-broken MD5.


You think "Have read Applied Cryptography cover to cover" is a qualification for cryptography engineering? You get that there are people that actually do cryptography engineering professionally, like, on this thread and stuff, right?

It's OK to not really know all of this stuff. Other people can know it for you. The question mark comes in really handy in situations like this. It's not a challenge where you start with the distinction between FIPS and NIST and then axiomatically derive all of modern cryptography.


> Let’s just breathe in deeply and calm down.

Pretty sure they were just asking a small clarifying question.


> Also: AES-256 is weaker with related key attacks than AES-192, and AES-192 is approved for Top Secret (not just Secret) information, while AES-128 isn’t.

NSA CNSA says to use 256:

* https://en.wikipedia.org/wiki/Commercial_National_Security_A...

* https://www.keylength.com/en/6/

> […] so just stick with RSA until we have standardized post-quantum public key cryptography.

The folks that will approve the post-quantum stuff were also the ones the approved the 'pre-quantum' stuff, including the supposedly worrisome ECC standards. If you're not going to trust them now, why should you trust them in the future?


>The folks that will approve the post-quantum stuff were also the ones the approved the 'pre-quantum' stuff, including the supposedly worrisome ECC standards.

There’s a difference between something being developed by a third party and rubber stamped and made a FIPS by NIST (e.g. Rijndael which became AES or Keccak which became SHA-3) and something developed internally by NIST. The post quantum stuff is being developed outside of NIST, so there isn’t really a place for NIST/NSA to put a backdoor like they did with Dual_EC_DRBG.

SHA-2 was in fact developed behind closed doors by NIST/NSA, but since the magic constants are square roots and cube roots of primes, and since it’s been around and analyzed for over two decades, it’s extremely unlikely to have a backdoor.


> There’s a difference between something being developed by a third party and rubber stamped and made a FIPS by NIST

DES was developed by a third party (IBM, IIRC) and it was altered by NIST/NSA before its final form was approved. Turns out it was altered to make it stronger against an attack no one publicly knew about at the time.

Things could potentially be approved that are mostly strong, except against attacks that are only known in classified circles.


Ok, well, they're NIST curves, not FIPS curves, and none of them are backdoored, and it's GCM, not GSM, but do go on.


I stand corrected. It was Dual_EC_DRBG which had the backdoor. Nevertheless, it was a backdoor in a NIST algorithm using elliptic curves, so I think being a bit reluctant to use a NIST ECC curve is understandable, especially with RSA being tried and true.

GSM instead of GCM was a typo, and I believe it’s against the Ycombinator guidelines to make a correction like that a personal attack, and I believe we’re getting really close to crossing that line, so let’s just calm down and chill out.

This isn’t in any way a personal attack—I’m familiar with and have a lot of respect for your security auditing skills—but just an observation things are getting kind of heated here and it serves no one’s interests to have an out of control flame war.


So, on the one hand, you won't use NIST-approved elliptic curves because a NIST RNG used elliptic curves; on the other hand, you'd use GCM because... NIST approves it.


It’s funny that you’re a proponent of NIST/FIPS approved crypto but also worry about NSA backdoors.


The conflict of interest inherent to NSA has been a thing since data security first was a thing.

It is very rational to follow their guidelines, while at the same time being wary of backdoored products. You could have done a lot worse than picking AES during the past two decades.


If we, back in 1982, encrypted a document using triple-key 3DES, proposed in 1981, using CTR mode (1979), with a unique random key, storing the key using RSA-1024 (1977), that document would still be secure today. Even though DES came from NIST and the NSA.

Even if it’s a large document. Sweet32 really only works against CBC mode; while there are Sweet32-based attacks against CTR mode, they are a lot more difficult and often not practical. I’m, of course, not advocating using 3DES, Blowfish, IDEA, or any other 64-bit block size cipher in a new system in 2022, but even ancient 3DES documents would still be secure today in many use cases.


CTR mode with an 8-byte block leaves so little room for a counter that, unless you use a fresh key for every encryption, you can plausibly wrap the counter.


Just for whatever it's worth, I think you're replying to one of the Blake3 authors?


Indeed. Not putting BLAKE on that list is not a criticism of BLAKE; it’s fast (especially in software), it doesn’t get faster in dedicated hardware (which, in some cases, is a good thing), and I believe it’s secure.

But, it hasn’t been standardized by NIST and consequently doesn’t have a FIPS standard. The only non-standard algorithm I put on that “use only this crypto” list is SipHash, simply because no one has ever bothered to make a FIPS standard for a fast and secure keyed hash bucket chooser.

I also like how BLAKE3 just said “we are only using 32-bit operations”, because there’s a lot of embedded places where 64-bit really isn’t an option, and because SIMD ISAs have fast 32-bit operation support on 64-bit architectures.


It's length extension attacks, not "key extension attacks". And SHA-512/256 (as you've named it here) is SHA-2.

The author of the comment you're replying to certainly knows what SipHash is.


> With parameters as specified by SHA3 it's a lot slower than BLAKE3

Keccak (SHA-3) is actually a good deal faster than BLAKE(1) in hardware. That’s the reason why they chose it: It has acceptable performance in software, and very good performance in hardware.

KangarooTwelve / MarsupilamiFourteen are Keccak variants with fewer rounds; they should smoke BLAKE2 and probably even BLAKE3 in dedicated hardware. Also, they have tree hashing modes of operation, much as the later BLAKE designs do.

The BLAKE family is best in situations where you want the best possible software performance; indeed, there are cases where you do not want hardware to outperform software (e.g. key derivation functions) where some Salsa20/ChaCha20/BLAKE variant makes the most sense. The Keccak family is when one already has dedicated hardware instructions (e.g. ARM already has a hardware level Keccak engine; Intel is dragging their feet but it is only a matter of time) or is willing to trade software performance for more hardware performance.

Keccak code is here: https://github.com/XKCP/XKCP


> they should smoke BLAKE2 and probably even BLAKE3 in dedicated hardware

That's certainly possible, but there are subtleties to watch out for. To really take advantage of the tree structure in K12, you need a vectorized implementation of the Keccak permutation. For comparison, there are vectorized implementations of the AES block cipher, and these are very useful for optimizing AES-CTR. This ends up being one of the strengths of CTR mode compared to some other modes like CBC, which can't process multiple blocks in parallel, at least not for a single input.

So one subtlety we have to think about, is that the sponge construction inside SHA-3 looks more like CBC mode than like CTR mode. The blocks form a chain. And that means that a vectorized implementation of Keccak can't benefit SHA-3 itself, again at least not for hashing a single input. So if this is going to be provided in hardware, it will have to be specifically with other constructions like K12 in mind. That could happen, but it might be harder to justify the cost in chip area. (At this point I'm out of my depth. I have no idea what Intel or ARM are planning. Or maybe vectorized hardware implementations of Keccak already exist and I'm just writing nonsense.)


Is the SHA-3 hardware support already available in existing and widely used hardware?


Last time I looked, it’s available for ARM but there aren’t Intel ISA SHA-3 instructions yet.


True, but both Python’s “int()” operation and the C99 and C11 standard for division operators truncate towards 0. [1]

[1] e.g. http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf PDF page 94/physical page 82


Python's int() truncates towards 0, but (at least in Python 3, I don't know if it was always the same in older versions) its division operator always floors, so -7 // 2 gives -4 even though int(-3.5) is -3. (The int() function is best thought of as just "make this float into an integer, I don't really care how"; if you want to be more specific, you have math.floor(), math.ceil() and math.trunc(), and you also have round(), which rounds towards the closest integer; if the float is evenly between the two integers, it goes towards 0 when the integer part is even and away from 0 when the integer part is odd.)


In my experience, when looking at how division should operate for a given well-established language, it doesn’t matter what you and I think should be the result of the operation; it matters what the standard says on it.

From PDF page 94 (physical page 82) of the draft C99 standard: [1] “The result of the / operator is the quotient from the division of the first operand by the second”

This is a little vague; it would be better if they called it the “quotient from Euclidean division” (EDIT: This was incorrect. Actually, it’s the quotient from algebraic division, with the fraction truncated, also called “truncation towards zero”. See discussion below.), which is well defined as being -3 in the above example; see https://en.wikipedia.org/wiki/Euclidean_division (EDIT: Euclidean is -4; truncation towards 0 is -3)

[1] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf


> it doesn’t matter what you and I think should be the result of the operation; it matters what the standard says on it

In practice, this offloads the problem to your programmer in at least every other case (i.e. when % doesn’t do what they need out of the box), who then miscalculates or mistests their solution, and now the problem is yours.

It doesn’t matter what the standard says; there should be many functions, one for each combination of signs (or roundings) you want in the results. Because your library doesn’t suck, and your programmer has things to do instead of fiddling with off-by-ones and other primitive but highly error-prone-under-stress bs.


Euclidean division actually gives -7 = -4*2 + 1


It’s -3 in GCC 11.2.0

  #include <stdio.h>
  int main(void) { printf("%d\n", -7/2); }
Let’s look at, oh, C11: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf

PDF page 110, physical page 92: “When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded”, with a footnote: “This is often called ‘truncation toward zero’”.

-7/2 is -3.5, so discarding the fractional part gives us -3


Yes, and this is not Euclidean division in the usual sense.


Indeed. The C99 and C11 standards define integer division as the “algebraic quotient with any fractional part discarded”, not as Euclidean division.


This whole thread is a nice review and exercise of the 2nd chapter of my Computers and Networks Security course.

