> Yet projects inevitably get to the stage where a more native representation wins out.
I would be careful about extrapolating the performance gains achieved by the Go TypeScript port to non-compiler use cases. A compiler is perhaps the worst use case for a language like JS, because it is both (as Anders Hejlsberg calls it) an "embarrassingly parallel task" (each source file can be parsed independently) and a task that requires the results of the parsing step to be aggregated and shared across multiple threads (which requires shared-memory multithreading of AST objects). Over half of the performance gains can be attributed to being able to spin up a separate goroutine to parse each source file. Anders explains it perfectly here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=2027s
We might eventually get shared memory multithreading (beyond Array Buffers) in JS via the Structs proposal [1], but that remains to be seen.
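To make that limitation concrete, here is a minimal TypeScript sketch (not the actual TypeScript compiler's code; the file list and the "parse" step are hypothetical, and it assumes a TS-aware Node.js runtime such as tsx). Each worker has to post a copy of its result back to the main thread, because only raw byte buffers can be shared across JS threads today:

```ts
// Minimal sketch: one worker per source file, mirroring "one goroutine per file".
// The "AST" each worker produces is structured-cloned (copied) back to the main
// thread; plain JS objects cannot be shared across threads.
import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

if (isMainThread) {
  const files = ["a.ts", "b.ts", "c.ts"]; // hypothetical source files

  const asts = await Promise.all(
    files.map(
      (file) =>
        new Promise((resolve, reject) => {
          const worker = new Worker(new URL(import.meta.url), { workerData: file });
          worker.once("message", resolve); // receives a *copy* of the worker's AST
          worker.once("error", reject);
        })
    )
  );

  // Only raw bytes can be truly shared between JS threads today:
  const sharedBytes = new SharedArrayBuffer(1024); // fine for numbers, useless for AST objects
  console.log(`${asts.length} ASTs copied back; ${sharedBytes.byteLength} bytes actually shared`);
} else {
  // Hypothetical "parse" step running inside the worker.
  const ast = { file: workerData as string, kind: "SourceFile", children: [] };
  parentPort?.postMessage(ast); // serialized via structured clone on the way back
}
```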
I think the Prisma case is a bit of a red herring. First, they are using WASM, which is itself a low-level representation. Second, the performance gains appear to come primarily from avoiding the marshalling of data from JavaScript into Rust (and back again, I presume). Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
As for the "compilers are special" reasoning, I don't subscribe to it. I suppose because it implies the opposite: that something (other than a compiler) is especially suited to run well in a scripting language. But the former doesn't imply the latter in reality, so the case should be made independently. The Prisma case is one: you are already dealing with JavaScript objects, so it is wise to stay in JavaScript. The old cases where I would choose a scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
> First, they are using WASM, which is itself a low-level representation.
WASM is used to generate the query plan, but query execution now happens entirely within TypeScript, whereas under the previous architecture both steps were handled by Rust. So in a very literal sense some of the Rust code is being rewritten in TypeScript.
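As a rough illustration of that split (all names here are hypothetical and heavily simplified, not Prisma's actual API): the WASM module is consulted only to produce a query plan, and everything after that, including execution against the database driver, stays in TypeScript.

```ts
// Hypothetical sketch of the architecture described above -- not Prisma's real API.
// The planner lives in WASM; everything downstream of it is plain TypeScript.

interface QueryPlan {
  sql: string;       // statement to run
  params: unknown[]; // bound parameters
}

interface Driver {
  query(sql: string, params: unknown[]): Promise<unknown[]>;
}

// Hypothetical signature for the WASM planner export: it takes a serialized
// request and hands back a plan. This is the only step that crosses into WASM.
type WasmPlanner = (request: string) => QueryPlan;

// Query execution stays entirely in TypeScript: the plan is interpreted here and
// handed to a JS database driver, so result rows never cross a language boundary.
async function runQuery(plan: WasmPlanner, driver: Driver, request: string): Promise<unknown[]> {
  const queryPlan = plan(request);
  return driver.query(queryPlan.sql, queryPlan.params);
}
```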
> Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
My point was simply to refute the assertion that once software is written in a low level language, it will never be converted to a higher level language, as if low level languages are necessarily the terminal state for all software, which is what your original comment seemed to be suggesting. This feels like a bit of a "No true Scotsman" argument: https://en.wikipedia.org/wiki/No_true_Scotsman
> As for the "compilers are special" reasoning, I don't subscribe to it.
Compilers (and more specifically lexers and parsers) are special in the sense that they're incredibly well suited for languages with shared memory multithreading. Not every workload fits that profile.
> The old cases where I would choose a scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits still apply. In fact, one of the stated reasons for the Prisma rewrite was "skillset barriers". "Contributing to the query engine requires a combination of Rust and TypeScript proficiency, reducing the opportunity for community involvement." [1]
I'm not denying the facts of the matter; I am denying the conclusion. The circumstances of the situation are relevant. Marshalling costs across IPC boundaries come into play in every single possible situation regardless of language. It is why shared-memory architectures exist. It doesn't matter what language is on the other side of the IPC: if the performance gained by using a separate process is not greater than the cost of the communication, then you should avoid the IPC. One way to avoid that cost is to share the memory. In the case of code already running in a JavaScript VM, a very easy way to share the memory is to do the processing in JavaScript.
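As a toy illustration of that trade-off (not Prisma's code; the boundary is simulated with a JSON round-trip, which is only part of what real IPC or WASM marshalling costs): the in-process version does the same work without paying any marshalling cost at all.

```ts
// Toy comparison of "stay in process" vs "serialize and cross a boundary".
// Real IPC adds transport and context-switch costs on top of the serialization.

const rows = Array.from({ length: 100_000 }, (_, i) => ({ id: i, name: `user${i}` }));

function sumInProcess(data: { id: number }[]): number {
  return data.reduce((acc, r) => acc + r.id, 0);
}

function sumAcrossBoundary(data: { id: number }[]): number {
  const wire = JSON.stringify(data);                  // marshal out
  const copy = JSON.parse(wire) as { id: number }[];  // unmarshal on the other side
  return copy.reduce((acc, r) => acc + r.id, 0);
}

console.time("in-process");
sumInProcess(rows);
console.timeEnd("in-process");

console.time("across boundary");
sumAcrossBoundary(rows);
console.timeEnd("across boundary");
```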
That is why I am saying your evidence is a red herring. It is a case where a reasonable decision was made to rewrite in JavaScript/TypeScript but it has nothing to do with the merits of the language and everything to do with the environment that the entire system is running in. They even state the Rust code is fast (and undoubtedly faster than the JS version), just not fast enough to justify the IPC cost.
And it in no way applies to the point I am making, where I explicitly question "starting a new project", for example my "default assumption to use JS runtimes on the server". It's closer to a "Well, actually ..." than an attempt to clarify or provide a reasoned response.
The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions. And in the case of "reads data from an OS socket/file-descriptor and writes data to an OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
Rather than fixating on this single Prisma example, I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages.
First of all, I would argue that software rewrites are a bad proxy metric for language quality in general. Language rewrites don't measure languages purely on a qualitative scale, but rather on a scale of how likely they are to be misused in the wrong problem ___domain.
Low level languages tend to have a higher barrier to entry, which means they're less likely to be chosen on a whim during the first iteration of a project. This phenomenon is exhibited not just at the macroscopic level of language choice, but often when determining which data structures and techniques to use within a specific language. I've very seldom found myself accidentally reaching for a Uint8Array or a WeakRef in JS when a normal array or reference would suffice, and then having to rewrite my code, not because those solutions are superior, but because they're so much less ergonomic that I'm only likely to use them when I'm relatively certain they're required.
This results in obvious selection bias. If you were to survey JS developers and ask how often they've rewritten a normal reference in favor of a WeakRef vs the opposite migration, the results would be skewed because the cost of dereferencing WeakRefs is high enough that you're unlikely to use them hastily. The same is true to a certain extent in regards to language choice. Developers are less likely to spend time appeasing Rust's borrow checker when PHP/Ruby/JS would suffice, so if a scripting language is the best choice for the problem at hand, they're less likely to get it wrong during the first iteration and have to suffer through a massive rewrite (and then post about it on HN). I've seen plenty of examples of competent software developers saying they'd choose a scripting language in lieu of Go/Rust/Zig. Here's the founder of Hashicorp (who built his company on Go, and who's currently building a terminal in Zig), saying he'd choose PHP or Rails for a web server in 2025: https://www.youtube.com/watch?v=YQnz7L6x068&t=1821s
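To make the WeakRef ergonomics point above concrete, a minimal sketch (ES2021+): the plain reference is just used, while the WeakRef has to be dereferenced and checked on every use, which is why nobody reaches for one by accident.

```ts
// Plain reference vs WeakRef: the extra ceremony is the barrier to entry.
class Config {
  constructor(public ___domain: string) {}
}

// Plain reference: just use it.
const config = new Config("example.com");
console.log(config.___domain);

// WeakRef: must deref() and handle the case where the target was collected.
const weakConfig = new WeakRef(new Config("example.com"));
const maybeConfig = weakConfig.deref();
if (maybeConfig !== undefined) {
  console.log(maybeConfig.___domain);
} else {
  console.log("config was garbage collected");
}
```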
> your larger point which seems to be that all greenfield projects are necessarily best suited to low level language
That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements. When I say "it makes me think I should ..." I am not saying: "Everyone everywhere should always under any circumstances ...". It is a call to question the assumption, not to make emphatic universal decisions on any possible project that could ever be conceived. That would be a bad faith interpretation of my post. If that is what you are arguing against, consider if you really believe that is what I meant.
So my point stands: I am going to consider this more deeply rather than default assuming that an interpreted scripting language is suitable.
> Low level languages tend to have a higher barrier to entry,
I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
> That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements.
The first comment I wrote in this thread was a response to the following quote: "Yet projects inevitably get to the stage where a more native representation wins out." Inevitable means impossible to evade. That's about as close to a black and white statement as possible. You're also completely ignoring the substance of my argument and focusing on the wording. My point is that language rewrites (like the TS rewrite that sparked this discussion) are a faulty indicator of scripting language quality.
> I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
And I've already said that I disagree with this assertion. I'll just quote myself in case you haven't read through all my comments: "I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits [of scripting languages] still apply." I was under the impression that I didn't have to keep restating my position.
I don't believe that AI has eroded the barriers of entry to the point where the average Ruby or PHP developer will enjoy passing around memory allocators in Zig while writing API endpoints. Neither of us can be 100% certain about what the future holds for AI, but as someone else pointed out, making technical decisions in the present based on AI speculation is a gamble.
Ah, now we're at the dictionary definition level. So let's check Google:
Inevitably:
1. as is certain to happen; unavoidably.
2. (informal) as one would expect; predictably. "inevitably, the phone started to ring just as we sat down"
Which interpretation of the word is "good faith" considering the rest of my post? If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement? Would you argue with Google and say "I have sat down before and the phone didn't ring"?
It is Hacker News policy and just good internet etiquette to argue with good faith in mind. I find it hard to believe you could have read my entire post and come away with the belief of absolutism.
edit: Just to add to this, your interpretation assumes I think Django (the Python web application framework) will unavoidably be rewritten in a lower level language. And Ruby on Rails will unavoidably be rewritten. Do you believe that is what I was saying? Do you believe that I actually believe that?
I wrote 362 words on why language rewrites are a faulty indicator of language quality with multiple examples and anecdotes, and you hyper-fixated on the very first sentence of my comment, instead of addressing the substance of my claim. In what alternate universe is that a good faith argument? If you were truly arguing in good faith you'd restate your position in whichever way you'd like your argument represented, and then proceed to respond to something besides the first sentence. Regardless of how strongly or weakly you believe that "native representations win out", my argument about misusing language rewrite anecdata still stands, and it would have been far more productive to respond to that point.
> If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement?
If we were having a discussion about automobile safety and you wrote several hundred words about why a specific type of accident isn't indicative of a larger trend, I wouldn't respond by cherry picking the first sentence of your comment, and quoting Google definitions about a phone ringing.
You said: "Inevitable means impossible to evade. That's about as close to a black and white statement as possible."
I used Google to point out that your argument, which hinged on your definition of the word "inevitable", rests on the narrowest possible interpretation of my statement. An interpretation so narrow that it indicates you are arguing in bad faith, which I believe to be the case. You are accusing me of making an argument that I did not make by accusing me of not understanding what a word means. You are wrong on both counts, as demonstrated.
The only person thinking in black and white is the figment of me in your imagination. I've re-read the argument chain and I'm happy leaving my point where it is. I don't think your points, starting with your attempted counterexample with Prisma, nor your exceptional-compiler argument, nor any of the other points you have tried, support your case.
> which hinged on your definition of the word "inevitable", rests on the narrowest possible interpretation of my statement.
My argument does not hinge upon the definition of the word inevitable. You originally said "I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language."
I gave a relatively thorough accounting of why you've observed this, and why it doesn't indicate what you believe it to indicate here: https://news.ycombinator.com/item?id=43339297
Instead of addressing the substance of the argument you focused on this introductory sentence: "I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages."
Regardless of how narrowly or widely you want me to interpret your stance, my point is that the data you're using to form your opinion (rewrites from higher to lower level languages) does not support any variation of your argument. You "can't think of a time a high profile project written in a lower level representation got ported to a higher level language" because developers tend to be more hesitant about reaching for lower level languages (due to the higher barrier to entry), and therefore are less likely to misuse them in the wrong problem ___domain.
> The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions.
Making technical decisions based on hypothetical technologies that may solve your problems in "a year or so" is a gamble.
> And in the case of "reads data from an OS socket/file-descriptor and writes data to an OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
Arguably Go is a scripting language designed for exactly that purpose.
I wouldn't think choosing a native language over a scripting language is a "gamble" but I suppose that all depends on ability and risk tolerance. I think it would be relatively easy to develop using Rust, Go, Zig, etc.
I would not call Go a scripting language. Go programs are statically linked single binaries, not a textual representation that is loaded into an interpreter or VM. It has more in common with C than with Bash. But to make sure we are clear (in case you want to dig in on calling Go a scripting language), I am talking about dynamic programming languages like Python, Ruby, JavaScript, PHP, Perl, etc., which generally do not compile to static binaries and instead load text files into an interpreter/VM. These dynamic scripting languages tend to have performance below that of static binaries (like Go, Rust, C/C++) and usually below that of bytecode-compiled languages (like C# and Java).
The fact that many software products are moving to lower-level languages is not a general point in favour of lower-level languages being somehow better—rather, it simply aligns with general directions of software evolution.
1. As products mature, they may find useful scenarios involving runtime environments that don’t necessarily match the ones that were in mind back when the foundation was laid. If relevant parts are rewritten in a lower-level language like C or Rust, it becomes possible to reuse them across environments (in embedded land, in Web via WASM, etc.) without duplicate implementations while mostly preserving or even improving performance and unlocking new use cases and interesting integrations.
2. As products mature, they may find use cases that have drastically different performance requirements. TypeScript was not used for truly massive codebases, until it was, and then performance became a big issue.
Starting a product trying to get all of the above from the get go is rarely a good idea: a product that rots and has little adoption due to feature creep and lack of focus (with resulting bugs and/or slow progress) doesn’t stand a chance against a product that runs slower and in fewer environments but, crucially, 1) is released, 2) makes sound design decisions, and 3) functions sufficiently well for the purposes of its audience.
Whether LLMs are involved or not makes no meaningful difference: no matter how good your autocomplete is, other things being equal the second instance still wins over the first—it still takes less time to reach the usefulness threshold and start gaining adoption. (And if you are making a religious argument about omniscient entities for which there is no meaningful difference between those two cases, which can instantly develop a bug-free product with infinite flexibility and perfect performance at whatever level of abstraction is required, coming any year now, then you should double-check whether, if they do arrive, anyone would still be using them for this purpose. In a world where I, a hypothetical end user, can get X instantly conjured for me out of thin air by a genie, you, a hypothetical software developer, had better have that genie conjure you some money lest your family go hungry.)
I'm not here to predict the future, rather to reconsider old assumptions based on new evidence.
Of course, LLMs may stay as "autocomplete" forever. Or for decades. But my intuition is telling me that in the next 2-3 years they are going to increase in capability, especially for coding, at a pace greater than the last 2 years. The evidence that I have (by actually using them) seems to point in that direction.
I'm perfectly capable of writing programs in Perl, Python, JavaScript, C++, PHP, Java. Each of those languages (and more actually) I have used professionally in the past. I am confident I could write a perfectly good app in Go, Rust, Elixir, C, Ruby, Swift, Scala, etc.
If you asked me 6 months ago "what would you choose to write a basic CRUD web app" I probably would have said TypeScript. What I am questioning now is: why? What would lead me to choose TypeScript? Do the reasons I would have chosen TypeScript continue to make sense today?
There are no genies here, only questioning of assumptions. And my new assumptions include the assumption that any coding I would do will involve a code assisting LLM. That opens up new possibilities for me. Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?
Your assumptions about the present and near future will guide your own decisions. If you don't share the same intuitions you will come to different conclusions.
> Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?
Same reasons as with no LLM assistance. You would be choosing higher maintenance burden and slower development speed compared to your competitors, though. They will get it out faster, they will have fewer issues, and will be able to find people to support it more easily. Your product may run faster, but theirs will work and be out faster.
Let's imagine we are assembly programmers. You have a particular style of assembly that you believe gives you some advantage over your competitors. The way you structure your assembly gives you a lower maintenance burden and faster development speed compared to your competitors.
I show up and say "I have a C compiler". Does it matter at that point how good your assembly is? All of a sudden I can generate 10x the amount of assembly that you generate. And you are probably aghast at what crappy assembly my C compiler generates.
Now ask yourself: how often do you look at generated assembly?
Compilers don't care about writing maintainable assembly. They are a tool to generate assembly in high volumes. History has shown that people who used C compilers were able to get products to market faster than people who wrote assembly by hand.
So let's assume, for the sake of understanding my position, that LLMs will be like the compiler. I give it some high-level English description of the code I want it to run and it generates a high volume of [programming language] as its output. My argument is that the programming language it outputs is important, and it would be better for it to output a language that compiles to low-level native binaries. In the same way I don't care about "maintainable assembly" coming out of a C compiler, I don't care about maintainable Python coming out of my LLM.
Humans aren't deterministic. I've trusted junior engineers to ship code. I fail to see a significant difference here in the long term.
We have engineering practices that guard against humans making mistakes that break builds or production environments. It isn't like we are going to discard those practices. In fact, we'll double down on them. I would subject an LLM to a level of strict validation that any human engineer would find suffocating.
The reason we trust compilers as a black box is because we have created systems that allow us to do so. There is no reason I can see currently that we will be unable to do so for LLM output.
I might be wrong, time will tell. We're going to find out because some will try. And if it turns out to be as effective as C was compared to assembly then I want to be on that side of history as early as possible.
Exactly, which is why I would want humans and LLMs to write maintainable code, so that I can review and maintain it, which brings us back to the original question of which programming languages are the easiest to maintain...
Well, we're in a loop then because my response was "you don't care about maintainable assembly".
I want maintainable systems; you want maintainable code. We can just accept that difference. I believe maintainable systems can be achieved without focusing on code that humans find maintainable. In the future, I believe we will build systems on top of code primarily written by LLMs, and the rubric of what constitutes good code will change accordingly.
edit: I would also add that your position is exactly the position of assembly programmers when C came around. They lamented the assembly the C compiler generated. "I want assembly I can read, understand and maintain" they demanded. They didn't get it.
We're stuck in a loop because you're flip flopping between two positions.
You started off by comparing LLM output to compiler output, which I pointed out is a false equivalence because LLMs aren't as deterministic as compilers.
Then you switched to comparing LLMs to humans, which I'm fine with, but then LLMs must be expected to produce maintainable code just like humans.
Now you're going back to the original premise that LLM output is comparable to compiler output, thus completing the loop.
There are more elements to a compiler than determinism. That is, determinism isn't their sole defining property. I can compare other properties of compilers to LLMs. No "flip flop" there IMO, but your judgment may vary.
Perhaps it is impossible for you to imagine that LLMs can share some properties with compilers and other properties with humans? And that this specific blend of properties makes them unique? And that uniqueness means we have to take a nuanced approach to understanding their impact on designing and building systems?
So let's lay it out. LLMs are like compilers in that they take high-level instructions (in the form of English) and translate them into programming languages. Maybe "transpiler" would be a word you prefer? LLMs are like humans in that this translation of high-level instructions into programming languages is non-deterministic, and so it requires system-level controls to handle this imprecision.
I do not detect any conflict in these two ideas but perhaps you see things differently.
> There are more elements to a compiler than determinism.
Yes, but determinism is the factor that allows me to treat compilers as a black box without verifying their output. LLMs do not share this specific property, which is why I have to verify their output, and easily verifiable software is what I call "maintainable".
An interesting question you might want to ask yourself, related to this idea: what would you do if your compiler wasn't deterministic?
Would you go back to writing assembly? Would you diligently work to make the compiler "more" deterministic? Would you engineer your systems around potential failures?
How do industries like the medical or aviation deal with imperfect humans? Are there lessons we can learn from those domains that may apply to writing code with non-deterministic LLMs?
I also just want to point out an irony here. I'm arguing in favor of languages like Go, Rust and Zig over the more traditional dynamic scripting languages like Python, PHP, Ruby and JavaScript. I almost can't believe I'm fighting the "unmaintainable" angle here. Do people really think a web server written in Go or Rust is unmaintainable? I'm defending my position as if they are, but come on. This is all a bit ridiculous.
> How do industries like the medical or aviation deal with imperfect humans?
We have a system in science for verifying shoddy human output, it's called peer review. And it's easier for your peers to review your code when it's maintainable. We're back in the loop.
> Do people really think a web server written in Go or Rust is unmaintainable?
Things are not black and white. It will be less maintainable, relatively speaking; proper tool for the job and all that. That's why you will be left in the dust.
Again, your competitor will get there faster and with fewer bugs. LLMs are trained on human input, and humans do not do great at low-level languages. They churn out better Python than C, especially when it comes to refactoring it (I have observed that personally).
> The main driver behind this project is that while Rust is very quick, the cost of serializing data between Rust and TypeScript is very high.
This sounds more like a "we're kinda stuck with Javascript here" situation. The team is making a compromise, can't have your cake and eat it too I guess.
I don't think this speaks to the general reasons someone would rewrite a mid- or low-level project in a high-level language, so much as to the special treatment JS/TS get. Yes, your data model being the default supported one, and everything else in the world having to serialize/deserialize to accommodate that, slows performance. In other words, this is just a reason to use the natively supported JS/TS, still very much the favorite children of browser engines, over the still sort of hacked-in Rust.
Prisma is currently being rewritten from Rust to TypeScript: https://www.prisma.io/blog/rust-to-typescript-update-boostin...
[1] https://github.com/tc39/proposal-structs?tab=readme-ov-file