
Perhaps C is here to stay and that is the way Linux should live and naturally die. That's what's being proposed here by the guy who opposes multi-language projects.



That's my overall point of view too. Regardless of the endless technical discussions about one or the other, if Alice and Bob can't live together, then they just shouldn't get married.

Why spend all this energy on conflict and drama to no end? If one language/technology/group is so much better, then just fork the thing and follow your own path.

I'm actually not defending the C guys, I just want to leave them alone and let "Nature" take its course; if they die of obsolescence, then they die. Who cares...


I concur. Personally, I'd love to see all the Linux Rust effort being redirected at Redox OS.

If the Asahi team focused their efforts on Redox, with all the genius talent they have, we could see an actually practical, usable implementation of Redox on real hardware; a potential daily driver which would catapult the development of the whole ecosystem - and that can only be a good thing.


Redox has stayed drama free so far.

I am sure most people involved with the Linux Rust effort are also not problematic; they would be very welcome there.

OTOH, please don't let Redox be taken over by problematic people.


Redox is drama-free because Redox has very few developers working on it. In the last 12 months, Redox had 7 contributors that touched more than 100 lines of code.

Hard to have much drama when only a handful of developers make code changes, and there are no users to complain about any of your decisions.


In 2010, after twenty years of development, Stallman said that he was "not very optimistic about the GNU Hurd. It makes some progress, but to be really superior it would require solving a lot of deep problems." - https://en.wikipedia.org/wiki/GNU_Hurd


Most UNIX systems that were not implemented in C, and thus lacked the symbiotic relationship, never survived in the market, sadly.

There have been UNIX systems implemented in Pascal, Ada, Modula-2, and Modula-3, to name the most relevant ones.

All gone.

Also note that POSIX/UNIX certification requires a C compiler presence.


I don't think you can conclude anything from that since a ton of UNIX systems implemented in C are also dead.


C was created to rewrite UNIX from its original Assembly implementation, naturally it is a symbiotic relationship not shared by other languages.

Note that many folks even forget that C++ was developed in the same Bell Labs group, and it is probably one of the first examples of a guest language: doing its best to fit into the platform and taking advantage of the ecosystem with almost zero friction, but never able to control where the platform goes.


> C was created to rewrite UNIX from its original Assembly implementation

I don't think I can go with that one. C was created in the same period as people were trying to find a way to create a common platform, but C was more about solving the problem of having a higher-level language for the low-end hardware (i.e., the PDP-11, etc.) of the time, where none was available. Ritchie wasn't trying to reinvent the wheel.

He would have been happy to use other languages, but they were either designed for large platforms that needed more resources (Fortran) or liable to be locked behind companies (IBM's PL/I). Ritchie considered BCPL, which at the time had a design that made it pretty easy to port if your computer was word-based (the same bit width for all values regardless of purpose). But minicomputers were moving towards byte-based data and word or multi-word addressing. Plus, minicomputers used poorer hardware to keep costs down, so typing on them meant more physical work.

A lot of UNIX design came from trying to use less: less memory, less paper, less typing. Ritchie tried to simplify BCPL to be less wordy by making B, but ultimately decided to jump to the next thing by making a language that would require as few keystrokes as possible. That's why C is so symbolic: what is the least amount of typing needed to express the concept? That made it pretty easy to translate to a fixed set of assembly instructions; however, it hasn't had a symbiotic relationship with assembly.

If anything, it is the reverse. Just look at the compiler and all of the memory addressing it has to know about. Look at any reasonably complex program and all of the compiler directives covering platform exceptions. C++ really took failure modes to the next level. My favorite is "a = b/*c;". Is that "a equals b divided by the value pointed at by c", or "a equals b" followed by a comment? I left C++ a long time ago because I could take code that would compile on two different platforms and get totally different behavior.

I think all of this drama comes down to the simple fact that there are a bunch of people content to live in an environment dominated by one language, and the head of Linux doesn't want to decide whether or not that is the mandate; however, by not taking sides, he has effectively taken the one-language mandate. Rust needs to reimplement Linux.


> C++ really took failure modes to the next level. My favorite is "a = b/*c;". Is that "a equals b divided by the value pointed at by c", or "a equals b" followed by a comment?

That is a really bizarre complaint, especially since the example is perfectly valid K&R C and just as "ambiguous" there. The answer, of course, is that it is an assignment of the value b to a variable called a, followed by a comment. "/*" is always the start of a block comment.

Since C99 (and since forever in C++) there is also the new style comment, //, for commenting out the rest of the line, and this in fact broke certain older C programs (`a = b//* my comment*/ c;` used to mean a = b / c; in C89, and means `a = b` in C++ or C99).


Well, it is kinda weird to take what would otherwise be perfectly legitimate and meaningful syntax and make it a comment. E.g. Pascal uses (* *) instead, and there's no other construct in the language where those can legitimately appear in this order.


Sure, but it's still a choice that C made, long before C++, so it's bizarre to see it in reference to how much worse C++ is.

As for the actual syntax itself, I do wonder why they didn't use ## or #{ }# or something similar, since # was only being used for the preprocessor, whereas / and * were much more common.


/* */ is a PL/I thing that somehow ended up in B. I suspect that Ritchie just wanted multiline comments (which BCPL didn't have - it only had // line comments), and just grabbed the syntax from another language he was familiar with without much consideration.

Or maybe he just didn't care about having to use whitespace to disambiguate. The other piece of similarly ambiguous syntax in B is the compound assignment, which was =+ =- =* =/ rather than the more familiar C-style += etc. So a=+a and a= +a would have different meaning.


That is the usual cargo-cult story of C. Systems programming languages go back to JOVIAL in 1958 and NEWP in 1961, one of the first systems programming languages with intrinsics and unsafe code blocks.

You surely aren't arguing that hardware predating the PDP-11 by a decade was more powerful.

There is enough material showing that, had UNIX been a commercial product instead of free-beer source code, C would most likely have been just another systems language lost in the mists of time.


> You surely aren't arguing that hardware predating the PDP-11 by a decade was more powerful.

That's correct. The PDP-11 used for the first Unix system had 24KBytes of memory, and no virtual memory. The kernel and the current running process had to both fit in 24KB. This PDP-11 minicomputer was vastly underpowered compared to ten year old mainframes (but was also far less expensive). The ability of Unix to run on such underpowered (and cheap) machines was a factor in its early popularity.

BCPL was first implemented on an IBM 7094 running CTSS at Project MAC at MIT. This was one of the most powerful mainframes of its era. It had 12× the memory of the Unix group's PDP-11, plus memory protection to separate kernel memory from user memory. One of the historical papers about C noted that a BCPL compiler could not be made to run on the PDP-11 because it needed too much memory: it had to keep the entire parse tree of a function in memory while generating code for that function. C was designed so that machine code could be generated one statement at a time while parsing a function.


Market failures of the other Unices aren't necessarily related to the technical advantages or disadvantages or symbiosis with C or being implemented in C. However, making C programmers' life easier was crucial.

Linux was at the correct place at the correct time. It was the only free Unix-like OS that didn't have legal bullshit to deal with. IBM's and Intel's support also made the GNU/Linux ecosystem successful; without them it would have stayed an academic project. Being free meant it had an advantage wherever price sensitivity mattered, and the dotcom boom and VC explosion were very sensitive to cutting costs and preferred suffering with less-than-ideal software. So Linux stayed popular while the others died slowly.

C had a huge following and all OSes had to support it. Simplicity made it popular when average hardware at the hands of many academics and young professionals was very weak. Being written in C may have made things marginally easier but neglecting it for Ada or Pascal was a terminal mistake. Windows isn't Unix at all but it also had to support C well.


Free beer OS with source tapes and the Lions book made the huge following of academics and young professionals.

Had AT&T been able to sell UNIX, and naturally C, at the same price points as VMS, System/360, and many other contemporary OSes, none of us would be talking about them today other than as historical curiosities.

Instead we are left with the UNIX-Haters Handbook, and we are still trying to fix the security issues across the industry caused by C's adoption - the JavaScript and PHP of systems programming languages, in both adoption scale and code quality.


OSX would like to disagree with you.


First of all, I said most, not all.

Second, while OS X, and NeXTSTEP before it, are technically UNIX, they weren't seen as such by either NeXT or Apple.

The focus of the whole userspace experience is on Objective-C frameworks, nowadays also a mix of Swift and C++.

Steve Jobs was famously against UNIX culture; he even made a famous appearance at USENIX.

NeXTSTEP was based on UNIX because Steve Jobs wanted to win the workstation market against Sun, using UNIX compatibility as EEE: bringing folks into NeXTSTEP and keeping them there with the Objective-C development experience, Lotus Improv, RenderMan, and such.


> NeXTSTEP was based on UNIX because Steve Jobs wanted to win the workstation market against Sun, using UNIX compatibility as EEE: bringing folks into NeXTSTEP and keeping them there with the Objective-C development experience, Lotus Improv, RenderMan, and such.

So Embrace, Extend and Extinguish?


If macOS isn't seen as UNIX by Apple, why does Apple continue to submit it for certification?


Is the OSX kernel not written in C?


The modular parts (IOKit) are written in C++.


The Linux kernel is also one of the few that doesn't use the C ABI as the entry point for user programs at all.

As for the C compiler's presence in POSIX, only the existence of C-accessible APIs with a specific structure is mandated; the C compiler itself is optional, just like the Fortran runtime and compiler are.


Are you sure?

https://pubs.opengroup.org/onlinepubs/9799919799/nframe.html

https://pubs.opengroup.org/onlinepubs/9699919799.2018edition...

https://pubs.opengroup.org/onlinepubs/015967575/toc.htm

And copying this from UNIX 03, the most widespread certification,

"A single configuration of the system shall meet all of the conformance requirements defined in the following mandatory Product Standards:

    Internationalized System Calls and Libraries Extended V3

    Commands and Utilities V4

    C Language V2

    Internationalized Terminal Interfaces
The product must be registered as conformant to the Product Standards prior to, or concurrent with, the UNIX 03 Product Standard registration."


Depends on exact level of conformance and options chosen:

From POSIX 2017 edition https://pubs.opengroup.org/onlinepubs/9699919799.2018edition...

> On systems providing POSIX Conformance (see XBD Conformance), c99 is required only with the C-Language Development option; XSI-conformant systems always provide c99.

If XSI conformance is not asserted, the only requirement is that C APIs and runtime libraries for use by C programs exist on the system; the presence of a C compiler is optional.

The 2017 edition of POSIX did away with including Fortran 77 in the same category as C, only providing an option for Fortran runtime libraries but no longer specifying a Fortran development environment.

Also, I do not have the relevant systems on hand to check, but as far as I know multiple Unix systems, including behemoths like SunOS/Solaris, shipped as POSIX compliant without a C compiler.


Ok, but that is the split certification due to the way Sun introduced the SDK concept to the UNIX world, where customers had to additionally pay for the development tools - the UNIX version of Home and Pro editions.

Naturally, UNIX "Home" users are also using a UNIX, and UNIX "Pro" users have a C compiler anyway.

Also of note: exactly because nothing else is required, there used to be UNIX vendors, like Sun, that only included C and C++ in their base SDK. Fortran and Ada compilers were additional products to acquire on top of the SDK. Naturally, most folks didn't even bother.


Yes, perhaps a C-only Linux would do that and die. And perhaps it would continue customizing the C flavor and runtime it uses (e.g., the various static and dynamic memory limitations and checkers) and close the gap with Rust to the point where the incremental benefit of Rust is not significant enough to make a Rust-based competitor an inevitability. We may already be beyond that point, even.


The changes required to bring C to a Rust level of safety would make it an entirely different language, even when restricted to the kernel's ___domain. Also, if you're already doing codebase-specific patches to the language itself, many of the arguments around codebase portability that justify the use of C fall apart.

Aside from that, there are other benefits to Rust than safety: it's better at modelling data and states, as the now-infamous filesystem talk [0] outlined.

[0]: https://lwn.net/Articles/978738/


> The changes required to bring C to a Rust level of safety would make it an entirely different language, even when restricted to the kernel's ___domain.

Maybe. 1. It may not have to reach a "Rust level of safety" to be good enough to make Rust's benefit less compelling. 2. Linux C is already a different language from standard C; continued incremental changes might be a better way to get there than adding Rust, even if it does become very different in the end.

> Also, if you're already doing codebase-specific patches to the language itself, many of the arguments around codebase portability that justify the use of C fall apart.

Sure, but Linux never had a "codebase portability" argument. It always had to be GCC C. It eventually made some relatively small changes to allow clang to compile it, with the far bulk of that work being changes to clang to make it behave like GCC.

> Aside from that, there are other benefits to Rust than safety: it's better at modelling data and states, as the now-infamous filesystem talk [0] outlined.

Yeah, it's not only safety improvements that are in Linux-C.


> The changes required to bring C to a Rust level of safety

*cough* Cargo *cough* /s

You cannot have safety or security when you download your code from the internet and that code is a moving target.


The kernel does not use Cargo.


For the moment./s


Or maybe Zig might be more acceptable to the kernel team, once it becomes mature enough?


This is inevitable: Rust is proposed as a safe language, but there is no way to have a "half-secure" kernel. The only option for people who believe in Rust is to have their own kernel, and Linux should have no part in this.


> there is no way to have a "half-secure" kernel.

There is, and this is how Rust naturally works. If you look at its standard library, you will see a lot of unsafe code or libc calls hidden away under safe interfaces.

In fact, this is how all memory safe languages work, including Java, Python, etc: A small trusted base written in an unsafe language that exposes a safe interface (i.e. the interpreter, the JVM, etc), with the large majority of the code written over that safe interface (i.e. the Java/Python code).

Rust is used to make kernel drivers secure by providing a safe interface for them to use.


Keeping the project single-language doesn't mean that the project can't change. The C used today is not the same C used when the project was started. Changing language is also possible. Using two different languages long-term, however, means that everyone effectively needs to be proficient in both languages, and a lot of work is duplicated.



