
I am still waiting for any of the alternative Rust frontends or backends to allow me to bootstrap Rust on alpha, hppa, m68k and sh4, which are still lacking Rust support.

Originally, the rustc_codegen_gcc project made this promise but never fulfilled it.




> to allow me to bootstrap Rust on alpha, hppa, m68k and sh4

Do you actually use all four of those platforms, or is this an arbitrary threshold for what you consider a complete set of platform support?


They're still common platforms (except for alpha) in some market-segment-specific corners of embedded development. So, maybe for those purposes?

Though, the trend I'm seeing a lot of is greenfield projects just migrating their MCUs to ARM.


> Though, the trend I'm seeing a lot of is greenfield projects just migrating their MCUs to ARM.

That’s what I would expect, too.

The Venn diagram of projects using an old architecture like alpha but also wanting to adopt a new programming language is nearly two separate circles.

The parent comment even included HPPA (PA-RISC) which almost makes me think they’re into either retro computing or they have some arbitrary completionist goal of covering all platforms.


Hi, retro computing person here. I've had a similar debate with Rust evangelists in the past.

Something the Rust community doesn't understand is that when they shout "REWRITE IT IN RUST!", at a certain point that's simply not possible.

Those mainframes your bank runs? I'm sure they'd love to see all that "awful" FORTRAN or C or whatever other language rewritten in Rust. But if Rust as a platform doesn't support the architecture? Well then that's a non-starter.

Worse still, Rust seems to basically leave anything that isn't i686/x86_64 or ARM64 as "Tier 2" or worse

This specific line in Tier 2 would send most project managers running for the hills: "Tier 2 target-specific code is not closely scrutinized by Rust team(s) when modifications are made. Bugs are possible in all code, but the level of quality control for these targets is likely to be lower."

Lower level of quality control when you're trying to upgrade or refactor a legacy code base? And the target is a nuclear power plant? Or an air traffic control system? Or a bank?

The usual response from the Rust evangelists is "well then they should sponsor it to run better!" but the economics simply don't stack up. Why hire 50 Rust programmers to whip rust-m68k into shape when you can just hire 10 senior C programmers for 20% of the cost?

EDIT: Architecture, not language. I need my morning coffee


>Those mainframes your bank runs? I'm sure they'd love to see all that "awful" FORTRAN or C or whatever other language rewritten in Rust. But if Rust as a platform doesn't support the architecture? Well then that's a non-starter.

But Rust does support S390x?

>Worse still, Rust seems to basically leave anything that isn't i686/x86_64 or ARM64 as "Tier 2" or worse

Rust has an explicit documented support tier list with guarantees laid out for each level of support. Point me to a document where GCC or Clang lists out their own explicit guarantees on a platform-by-platform basis.

Because I strongly suspect that the actual "guarantees" which GCC, Clang and so forth provide for most obscure architectures are not that much better than Rust's, if at all - just more ambiguous. And I find it very likely that the level of quality control for C compilers on m68k or alpha or s390x is, in practice, at least a bit lower than that provided for x86 and ARM.


We made s390x builds of all our tools (http://www.auxon.io) for an early customer that insisted on running their org on a leased machine from IBM.

It was actually a pretty good experience. It mostly just worked.


> But Rust does support S390x?

It's not a tier-1 target though.


And?

How extensive is GCC's testing on s390x, and do they hard-block merging of any patch unless s390x support is 100% working, verified by said test suite in a CI that runs on every submitted patchset? Or at least hard-block releases over failing tests on s390x? Do they guarantee this in a written document somewhere?

If they do, then that's great, they can legitimately claim to have something over Rust here. But if they don't, and I cannot find any reference to such a policy despite searching fairly extensively, then GCC isn't providing "tier 1"-equivalent support either.

I work for Red Hat so I'm well aware that there are people out there that care a lot about s390x support and are willing to pay for that support. But I suspect that the upstreams are much looser in what they promise, if they make any promises at all.


> This specific line in Tier 2 would send most project managers running for the hills: "Tier 2 target-specific code is not closely scrutinized by Rust team(s) when modifications are made. Bugs are possible in all code, but the level of quality control for these targets is likely to be lower."

Are you operating under the assumption that the largely implicit support tiers in other compilers are better? In other words: do you think GCC’s m68k backend (to pick an arbitrary one) has been as battle-tested as their AArch64 one?

(I think the comment about evangelists is a red herring here: what Rust does is offer precision in what it guarantees, while C as an ecosystem has historically been permissive of mystery meat compilers. This IMO doesn’t scale well in a world where project maintainers are trivially accessible, since they now have to field bug reports for platforms they can’t reproduce on and never intended to support to begin with.)


> do you think GCC’s m68k backend (to pick an arbitrary one) has been as battle-tested as their AArch64 one

m68k might be a bad example to pick. I was using gcc to target m68k on netbsd in the mid 1990s. It's very battle tested.

Also, don't forget that m68k used to be in all of the Macs that Apple sold at one point, before they switched to PowerPC (and later to x86 and the current ARM chips). You could use gcc (with MPW's libs and headers) on pre-OS X (e.g. System 7) m68k Macs.


> m68k might be a bad example to pick. I was using gcc to target m68k on netbsd in the mid 1990s. It's very battle tested.

That was 30 years ago! Having worked on LLVM: it's very easy for optimizing compilers to regress on smaller targets. I imagine the situation is similar in GCC.

(The underlying point is simpler: explicit is better than implicit, and all Rust is doing is front-loading the frustration from "this project was never tested on this platform but we pretend like it was" to "this platform is not well tested." That's a good thing.)


> The parent comment even included HPPA (PA-RISC) which almost makes me think they’re into either retro computing or they have some arbitrary completionist goal of covering all platforms.

Yes, retro-computing is a thing. Wouldn't it be nice to have Rust support on these platforms as well?


I am actively maintaining all of these within Debian.

Plus, there is a very vibrant community around the Motorola 68000 and SuperH CPUs thanks to the Amiga, Sega Dreamcast and many other classic computer systems and consoles.


"m68k-unknown-linux-gnu" was merged as a Tier-3 target for Rust, wasn't it? [0]

[0] https://github.com/rust-lang/compiler-team/issues/458


Yes, it was me that did the work on the Rust side. It doesn't work yet, though, as progress on the LLVM side is very slow.


Did they abandon that goal? Last I heard it was still under development.


Well, the promise was that rustc_codegen_gcc would reach its goals very quickly, which is why several people dismissed projects such as gccrs.

But it turns out that rustc_codegen_gcc still hasn't delivered, and the project seems to have gone dormant.


I am not affiliated with `cg_gcc`, but I have contributed some tiny things here and there.

Currently, `cg_gcc` is within spitting distance of being able to bootstrap the compiler. There really are only 3(!) bugs that currently stop a stage 2 build.

I know for sure because I found workarounds for them and have a working stage 2 build. A stage 3 build requires a bit more RAM than I have, but besides that it is definitely possible.

Those 3 bugs are:

1. Lack of support for 128-bit SwitchInt terminators (the Rust IR equivalent of switch). This is likely caused by an issue on the GCC side, since libgccjit rejects the 128-bit labels provided by `cg_gcc`.

2. A semantic difference between Rust's `#[inline(always)]` and `__attribute__((always_inline))`. In Rust, `#[inline(always)]` is a hint and works on recursive functions, but the GCC equivalent is not a hint but a guarantee, and does not work on recursive functions (see the sketch below).

3. `cg_gcc` miscompiles the Rust compiler's interner code if level 3 optimizations are enabled. The Rust compiler's interner is quite complex and does a lot of fiddly unsafe things, so it is the most likely to break. The exact cause of this issue is hard to pin down, but it can be worked around (by setting a lower opt level).
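To make bugs 1 and 2 concrete, here is a minimal illustrative sketch (my own code, not from the `cg_gcc` test suite, and the GCC error wording is paraphrased from memory). All of it compiles fine with the stock LLVM backend; the comments note where `cg_gcc` currently trips:

    // Bug 1: a match on u128 values lowers to a MIR SwitchInt terminator
    // with 128-bit labels, which libgccjit currently rejects.
    fn classify(x: u128) -> &'static str {
        match x {
            0 => "zero",
            u128::MAX => "max",
            _ => "other",
        }
    }

    // Bug 2: in Rust, #[inline(always)] is only a hint, so it is accepted
    // on a recursive function like this one. GCC's
    // __attribute__((always_inline)) is a guarantee instead, and a recursive
    // always_inline C function fails with an "inlining failed: recursive
    // inlining" error - a gap cg_gcc has to bridge.
    #[inline(always)]
    fn fact(n: u64) -> u64 {
        if n <= 1 { 1 } else { n * fact(n - 1) }
    }

    fn main() {
        assert_eq!(classify(0), "zero");
        assert_eq!(fact(5), 120);
    }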

If you work around those issues, `cg_gcc` is able to successfully build the Rust compiler, at least on `x86_64 Linux`. Going from that to other architectures will still take time, but it is not as far away as some might think.


Thanks for the update. I think one important issue that also needs to be resolved is adding the GCC git tree as a subproject in Rust's git tree so that, in the end, it will be possible to build a Rust compiler with the rustc_codegen_gcc backend without having to resort to external repositories.


Rust still doesn't even support OpenBSD on x86_64...


Do you mean x86 (as in 32-bit)? Because I'm fairly sure that there's a Rust package available on x86_64 (and aarch64, riscv64, sparc64 and powerpc64).


Rust has Tier 3 support for OpenBSD on x86_64.



