
In the beginning, when you read papers like this, it can be hard work. You can either give up or put some effort in to try to understand it. Maybe look at some of the other Jepsen reports, some may be easier. Or perhaps an introductory CS textbook. With practice and patience it will become easier to read and eventually write like this.

You may not be part of that world now, but you can be some day.

EDIT: forgot to say, I had to read 6 or 7 books on Bayesian statistics before I understood the most basic concepts. A few years later I wrote a compiler for a statistical programming language.


I’ll look to do so, and appreciate your pointers. Thank you for being kind!

The state of the art is always advancing, which greatly increases the burden of starting from first principles.

I somewhat feel that there was a generation that had it easier, because they were pioneers in a new field, allowing them to become experts quickly, while improving year-on-year, being paid well in the process, and having a great network and exposure.

Of course, it can be done, but we should at least acknowledge that sometimes the industry is unforgiving and simply doesn't have on-ramps except for the privileged few.


> I somewhat feel that there was a generation that had it easier

I don't think so. I've been doing this for nearly 35 years now, and there's always been a lot to learn. Each layer of abstraction developed makes it easier to iterate towards a new outcome faster or with more confidence, but hides away complexity that you might eventually need to know. In a lot of ways it's easier these days, because there's so much information available at your fingertips when you need it, presented in a multitude of different formats. I learned my first programming language by reading a QBasic textbook while trying to debug a text-based adventure game that crashed at a critical moment. I had no Internet, no BBS, nobody to help, except my Dad, who was a solo RPG programmer who had learned on the job after being promoted from sweeping floors in a warehouse.


> solo RPG programmer

The kids might not know this means "IBM mainframe" rather than "role playing game" :)


The Gettier examples disagree!


What is Batman? My Google skills are failing


Batman is the secret identity of Bruce Wayne


Thanks for the spoiler, I was just about to start


While I am at it, did not see it coming, but towards the end the twist is that Joker is the bad guy.

Once you know and rewatch it you see all these subtle clues along the way.


Careful now. That kind of provocative black-and-white claim will only alienate the 50.1% of American voters who appreciate Mr Joker’s anti-government and anti-woke agenda.


my guess would be batman.js


The CoffeeScript rails clone? Seems an unlikely inspiration for this


I think it means actual Batman. The web dev who uses this framework is encouraged to roleplay Batman.

https://robyn.tech/documentation/en/api_reference/advanced_f...


The comic book hero


Serious question: why Rust? Sounds like this is not exactly systems level programming or any user would suffer from or even notice garbage collection latency.

Is Rust building up a decent ecosystem now for application programming? When I tried developing in Rust I came to the conclusion that you pay a heavy price for not having a garbage collector. Was I doing it wrong?


I find Rust super ergonomic for any kind of app. The memory management aspect is trivial and second nature to anyone used to unmanaged languages, and the type system equally is not difficult to understand.

It's a general purpose language, but it does give you full control. Plus, of course, you can encode a large amount of program state in the type system, and borrow checking enforces rules that programmers usually have to check in their head.
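A minimal sketch of what encoding program states in the type system can look like (the `Disconnected`/`Connected` names are invented for illustration, not from any particular library):

```rust
// Typestate sketch: a message can only be sent on a connection the type
// system knows is connected; invalid transitions fail to compile.
struct Disconnected;

struct Connected {
    session_id: u32,
}

impl Disconnected {
    // Connecting consumes the Disconnected value, so it can't be reused.
    fn connect(self, session_id: u32) -> Connected {
        Connected { session_id }
    }
}

impl Connected {
    // Only a Connected value has a send method.
    fn send(&self, msg: &str) -> String {
        format!("[{}] {}", self.session_id, msg)
    }
}

fn main() {
    let conn = Disconnected.connect(7);
    println!("{}", conn.send("hello")); // prints "[7] hello"
    // Disconnected.send("hello"); // would not compile: no such method
}
```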

I find that when I write Rust, I have an order of magnitude less to worry about when it comes to silly things like lifetime bugs, reference bugs, and resource cleanup, which are 80% of my job when I write C# or other similar managed languages.

Plus Rust can generate a static executable, which is reasonably small, and doesn't require a third party runtime.


As someone used to unmanaged languages since 1986, and with a major focused on systems programming, it isn't that clear cut.

Rust executables are only 100% static on OSes that expose system libraries as static libraries, and there are not many of those around, outside embedded systems.


As a counterpoint to the parent commenter, I like using Rust for personal stuff (like the project being linked to) for almost everything besides the memory model; I prefer Cargo to any other build tool I've used, and I like how I can eliminate boilerplate with stuff like Serde (for serialization), clap (for argument parsing), and even for how I'm able to write one-off macros to generate code for myself. The documentation for the language and tooling is great, and most packages have fairly good documentation as well because any package published on crates.io will have it generated from the doc comments into a static site and hosted on docs.rs. The compiler error messages are better than any other language I've used, and at least for my personal use on Linux, I can easily compile with musl and get a fully static binary when libc is my only native dependency; I'm sure that Linux desktops are part of the "not many of those around" you refer to, but the same trick would work just as easily for any Linux server, and I do feel like there are plenty of those around!

If I could get all of those quality of life things in a language with a garbage collector, I'd probably use that for most things instead. Right now though, the closest options would be maybe OCaml or Swift, one of which doesn't really give me nearly as much in terms of quality of life stuff around documentation and tooling, and the other isn't nearly as streamlined to use on my platform of choice as I'd like, so I'm using Rust unless (or until!) something better for my personal projects comes along.


What issues do you run into with Rusts memory management?


After years navigating this issue, I think the common understanding of "static" binary is just "something that won't give me a dll / dylib error on startup when I copy it to my friend's computer"


Which is a possibility, given that ldd/Dependency Walker result won't be an empty list.

Even shipping the whole computer in a container might not do it, because most containers are leaky abstractions, given the way their Dockerfiles are written.


Yet in practice this works fine for all intents and purposes.

Sure, you might get a loading error if you try to run your Ubuntu 24.04-compiled executable on Ubuntu 18.04 if you use the default toolchain. That’s exactly the same as C or C++, only much easier to fix in Rust by rustup-installing the right toolchain.

Compiling on Ubuntu 18.04 and running on 24.04 is absolutely not an issue. Same for Windows.

In practice this is really not a problem, especially for cargo libraries that you anyway build from source.


I would say the same regarding dynamic linking, which is why most OSes have moved away from static-linking-only executables, except embedded, in the cases where it is a single blob uploaded into the device.

No modern 3D graphics API works with static linking; yet another area where static linking advocates have to accept that the industry has moved on.


> Rust executables are only 100% static on OSes that expose system libraries as static libraries

This seems to be a weird hair to split. GP clearly means “a single executable you can run on any install of the target OS without dependencies.” Whether it’s a truly honest-to-goodness static binary that doesn’t link to libc or libSystem or whatever is important to approximately zero people, outside of internet pedants.


I agree that this is what many people -- including me -- are caring about.

However, it is not achieved in practice with "static" languages like Rust and Go, in my experience.

I frequently run into dynlib issues with incompatible glibc references when sharing binaries between different OSes like Ubuntu and Fedora, or even just different versions of the same OS.


> I frequently run into dynlib issues with regards to incompatible glibc references when sharing binaries

Assuming you’re talking about Rust binaries, this would only happen if your binary is using glibc symbols that don’t exist on an older version of glibc, and you then try to run that binary on a system with the older glibc.

But glibc is a red herring here, because Rust is only using libc to call syscalls on the host, because that’s how you’re supposed to do it on every OS, and that’s how you have to do it on every OS but Linux. (Only Go seems to want to implement its own syscalls on Linux.)

It’s a red herring though, because even if Rust made its own syscalls and didn’t use glibc on Linux, you’d just fail with ENOSYS when these syscalls are used, instead of failing with undefined symbols at load time. If you try to run stuff that was developed against newer kernel features on a system without these syscalls, you’re going to run into an equivalent issue. (You might get lucky and not need those syscalls in your app, but that’s not always going to be the case.)


It is, because so many folks make such a big deal of using computers as they were until the early 1980s, with a single static linking model.

Ironically, the same folks don't seem to appreciate how object files and binary libraries work in static linking; let's make it even better by always compiling from source.


Nobody’s making a big deal about it in this thread but you.


I beg to differ,

> Good dependency management, a rich package ecosystem, defaults to static binaries which are easy to distribute and a tendency to be fast (even if it’s just the lack of startup overhead) make it a popular choice.

> .....

> Plus Rust can generate a static executable, which is reasonably small, and doesn't require a third party runtime.

Somewhere in this thread....

And guess what, you can also ignore my comments.


People are using a term in a different way than you want them to.

They use static binary to mean “the dependencies I specify in my package manager are all put in the one binary, and it doesn’t require a separately installed runtime to run”, which is totally reasonable, and it is opposed to so many languages that don’t work this way.

You’re using static binary to mean “does not link to anything at all”, which on some systems results in worse portability, since the syscall interface is unstable and linking to libc/libsystem/etc is the only supported way to make syscalls.

You come into the discussion assuming definition B, ignore the fact that they’re using definition A, say that they’re making a big deal about definition B, etc.

There’s no confusion from anyone here but you. We all are using definition A. You’re the only one using definition B. It’s probably better if you just recognize that and move on rather than insisting everyone here is an idiot but you.


Rust, like Go, is a popular choice for CLI tools, and I do think the ecosystem is picking up.

In fact there are a bunch of them for the terminal.

https://terminaltrove.com/language/rust/

https://terminaltrove.com/language/go/


I've used both, and they are comparable in library support. I happen to write _much_ faster in Go, so it's usually my default choice for CLIs, unless I specifically need to bridge with another Rust program.


Two things:

1. Historically a CLI like this would often be written in C, so Rust isn't that strange of a choice.

2. Rust is known for the borrow checker and being a great low-level language. However, it's also an excellent modern general-purpose language with a great ecosystem. People end up using it for all kinds of things because it's a joy to use.

"When I tried developing in Rust I came to the conclusion that you pay a heavy price for not having a garbage collector. Was I doing it wrong?"

Rust is tricky to grasp initially; the learning curve goes pretty much vertical immediately, but once you "get it" it's very nice. When I started out I overused borrowing and ran into heaps of lifetime problems. I think limiting yourself to only using references for function arguments and, rarely, return values is a good place to start. As soon as you start adding references to structs you should stop and think about who owns the data and what its lifetime is. Thinking properly about ownership is the big shift from GC languages. Once you've gotten into that habit, lifetimes are downstream from that.
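A minimal sketch of that advice (the `User` type is a made-up example): structs own their data, borrows appear only at function boundaries, and no lifetime annotations are needed anywhere:

```rust
// The struct owns its data (String rather than &str), so there are no
// lifetime annotations in this program at all.
struct User {
    name: String,
}

// Borrow for read-only access; the caller keeps ownership.
fn greeting(user: &User) -> String {
    format!("hello, {}", user.name)
}

// Take ownership only when the function actually consumes the value.
fn into_name(user: User) -> String {
    user.name
}

fn main() {
    let user = User { name: "ada".to_string() };
    println!("{}", greeting(&user)); // user is still usable after a borrow
    println!("{}", into_name(user)); // user is moved here and gone after
}
```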


>Historically a CLI like this would often be written in C

*Perl


Rust is my main programming language and Python the second. Rust is very much useful for application programming - especially these sort of applications. In fact, Rust is the language I sometimes reach for when my shell scripts cross a certain threshold of complexity. Rust even has some tools and an RFC to address the use case of using it like a scripting language (I believe that Go has something similar too).

I don't face much friction from the borrow checker to consider it a 'heavy price for not having a garbage collector'. There are even tools like bacon [1] that can give you real-time feedback on your code. It's even better with the default language server. I sometimes train other developers in Rust - mostly people who are not even into systems programming (JS, Python programmers). However, they don't seem to struggle too much with the borrow checker either. Could you elaborate a bit on what you consider as 'the heavy price'? What sort of issues were you facing frequently?

[1] https://github.com/Canop/bacon


It's BEEN built so the price has been paid. I'm not going to complain that someone over-engineered a useful tool and got away with it.

I will say that rust is pretty damn productive once you organize your brain around ownership. I haven't had to mark a lifetime in over a month.


Rust is a general purpose programming language, not just a systems programming language.


I am a huge Rust fan and I work with Rust at my job.

I also recently started some open source projects (mostly CLI tools) where I picked Go for one reason: Rust’s learning curve is super steep.

In order to make my projects approachable for other developers Go seemed to be a better choice over Rust.


Syntactically it's a pretty nice language, with a nice and sane ecosystem (crates, etc), and it's fun to write (which is a pro especially for unpaid hobby projects). If you get used to it, it's nice to write various things in it. I'd probably use it for CLI tools at this point.

You do pay a bit in syntactic overhead (lifetimes, borrow checking perplexities), though you get used to it. I'd still not use it for a standard product-y web app. For CLI tools though, it's pretty good.


I write a lot in Python, but I love to write small utilities in Rust as well. The tooling around command line stuff is just really good in Rust, and distribution (just a binary) is simpler. On top of that, Rust forces you to handle errors with IO, such as paths, correctly, to a degree that gives you much more confidence in the resulting tool.
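For instance (a hedged sketch; the `first_line` helper is invented for illustration), reading a file returns a `Result`, so ignoring the failure path is a deliberate act rather than an accident:

```rust
use std::fs;
use std::io;

// The signature advertises that IO can fail; callers must deal with it.
fn first_line(path: &str) -> Result<String, io::Error> {
    let contents = fs::read_to_string(path)?; // `?` propagates the error
    Ok(contents.lines().next().unwrap_or("").to_string())
}

fn main() {
    match first_line("/no/such/path") {
        Ok(line) => println!("first line: {}", line),
        Err(e) => eprintln!("could not read file: {}", e), // handled, not ignored
    }
}
```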

Give it a try.


Honestly, Rust is a breeze to make CLI/TUI apps in. Some crates worth looking into: tokio (mostly for stuff built on top of it), clap, promptly, colored, ratatui. These are just a few ones to get you started, but there's lots more depending on what you need to accomplish!


Good dependency management, a rich package ecosystem, defaults to static binaries which are easy to distribute and a tendency to be fast (even if it’s just the lack of startup overhead) make it a popular choice.

It also helps that rust attracts the kind of devs that make nice tools.


As a user, if I have the choice between 2 apps, I’ll strongly favor a Go, Rust or ANSI C app over a Python, Ruby, Shell or C++ app.

This is because, empirically, they usually work better, feel more polished, are faster, and I can easily contribute patches if I need to. If a tool is written in Python, I’ll go out of my way to find a rewrite in another language.


The primary purpose of Rust is to write Rust. It is only the secondary purpose to actually write useful things.


One thing your comment has in common with it then ;-)


Serious question: why not, if the author just knows Rust? Are you so allergic to anything written in Rust? Then get some new pills and ointments because there's going to be more and more of it.


As a Rust developer, let me point out that this is certainly the wrong way to respond when someone criticizes or makes an observation about Rust. Especially when they aren't hostile at all. It's clear from the commenter's question that they have some preconceptions about Rust for which they are seeking clarification from other Rust users. This sort of response will only make them give up on Rust due to the toxicity of the community, rather than address any real issues they may be facing.


I am not a Rust programmer and barely know it.

It's just wherever you go, here, Phoronix, Linux Kernel Mailing List for example, the forums are full of haters that just ask "not again, why?" anytime Rust is mentioned. Maybe I've become biased because of that.

Rust helps with performance and security; lots of examples are evidence of that. It's not 100% bulletproof, but nothing is. But it helps. And yet IT people just yell at it for some reason. Like we all should just get stuck forever with C and C++, praise Go, and hate everything else.


The antibodies are probably to the "written in rust" addendum, which seems unnecessary unless rust adds something to the product. Much of the answer above to "why Rust" concerns the developer, and we are potential customers, so who cares? Just drop the "in Rust" and don't worry going forward.

Honestly though I thought we were past the "written in Rust" phase.


It works. Including on me. I'm much more likely to pay attention to a tool that I know is written in Rust. This is both because I love the language and really enjoy using it, and because I've gotten conditioned to believing that tools written in it will be extremely robust, fast, useful, and stable. I can easily rattle off a dozen tools I use on a regular basis written in Rust that are significantly better than their non-Rust counterparts (starting with ripgrep, which is extremely fast). It's effective marketing still.


Didn't know Max Weber was lurking on HN.


It's true if you're ignoring the no-true-Scotsman fallacy.

Bureaucracy doesn't have to be to the detriment of society. As a matter of fact, it can potentially put brakes on the worst exploitative behavior.

But over time, it has the potential to grow too much with bad legislation, effectively turning the positive potential into a very real negative that stifles unnecessarily.


> Bureaucracy doesn't have to be to the detriment of society.

Bureaucracy is an organizational model that reflects human intentions and choices, just like every other organizational model in society.

Attributing specific moral inclinations to an organizational model is as absurd as attributing them to any other tool. Debating whether bureaucracies per se have good or bad intentions is as ridiculous as debating whether handwritten documents convey better or worse intentions than printed ones.


So far all of the bad things I've heard about our system, such as the economic unsustainability and now this, are effects that will happen in the perpetual future.


You have to think about who you’re listening to. The economic impact of the actions Trump has taken so far is a pittance:

* The bureaucracy today is about the size it was in 1980 on a per capita basis. It’s not the largest per capita it’s ever been.

> The federal government’s workforce has remained largely unchanged in size for over 50 years, even as the U.S. population has grown by 68% and federal spending has quintupled, highlighting the critical role of technology and contractors in filling the gap.

> Compensation for federal employees cost $291 billion in 2019, or 6.6% of that year’s total spending

So firing everyone is a 6% improvement to the federal budget while causing a complete government collapse, for a number of reasons, including that the government won’t have anyone to collect revenue or prosecute crimes.

[1]

* The largest discretionary spending area is the military at 800 billion in 2023. Of that, personnel accounted for 173 billion, or 20%. Personnel is a tiny fraction of the government’s spend each year. Even [2] which is a right wing think tank supporting this effort, claims that the liabilities improvement is 600B over 10 years which makes it a <1% dent seeing as how we spend >6T each year and just hand-waves the pension improvement as “significant”. But cuts aren’t focusing on the biggest employer within the government like the military.

* The people Trump & Musk are firing now are people who haven’t been on the job long enough to have protections. This drastically reduces the numbers above as a best case since that assumes a uniform 10% reduction across all salary bands whereas the current 10% reduction is almost certainly across the lowest bands since the government pays based on seniority.

This is what Trump does - he often identifies a real problem and then does a sleight of hand trick to make you think the actions he’s taking, because they’re highly visible, are solving the problem when in fact he’s not actually making any meaningful dent. That’s why he made a big show about the deportation flights but not talking about how the places he’s sending them to aren’t the places the people are from - he’s bullied Costa Rica into accepting whoever he send [3].

[1] https://www.brookings.edu/articles/is-government-too-big-ref...

[2] https://epicforamerica.org/education-workforce-retirement/fi...

[3] https://www.nbcnews.com/news/asian-america/us-deportation-fl...


> your Danish language setting in Croydon screams "I VPN to Copenhagen for overpriced pastries."

First time an LLM made me laugh


Now that I think about it, I don't think I had laughed at an LLM text yet. Non-ironically pretty cool!


There's a great YouTube channel that has a lot to say about this: carefree wanderings (https://www.youtube.com/channel/UCnEuIogVV2Mv6Q1a3nHIRsQ).

The idea is that after first sincerity, and then authenticity, we are moving into a new identity-generating technology (in the philosophical sense of the word technology) called "profilicity", which is focused on curating a profile across a variety of media channels. This profile is more multifaceted than an authentic identity and is created or evolved with deliberate intent.


Thanks, looking at the channel it wasn't immediately clear which video would explain that idea, so in case this is useful to someone else, that one goes further into it, and was interesting: https://www.youtube-nocookie.com/embed/Cu1lnTQM0Gw


> type hints are a thing even with plain JS

Can you clarify how you do this? The TC39 type annotations proposal is not approved, but I'm interested in hearing if you have a "userland" approach that works for you.


I've seen several ways of annotating Javascript that IDEs seem to understand. They usually involve using comments before fields, classes, or functions.

The most compliant one seems to be using [JSDoc](https://jsdoc.app/). JSDoc is mostly intended for generating documentation. However, the Typescript compiler can validate types (and can even interoperate with Typescript definitions), if you configure it as such.

In scenarios where you HAVE to write raw Javascript but still would like to do some type validation, this is probably the best solution.

It looks a bit like this:

    /**
     * Fiddle the widget
     * @param {WidgetThing} widget - The widget to fiddle
     */
    function fiddleWidget(widget) {
        // ...
    }
This is a type hint that asserts that `widget` is of type `WidgetThing`.

JSDoc also works in the middle of a method:

    /** @type {Foo} */
    const foo = widget.fetchFoo();
This asserts that `foo` is of type `Foo`. This could in theory also be derived from the return value of `widget.fetchFoo()` of course.

JSDoc also has arbitrary type definitions:

    /**
     * @typedef Foo
     * @type {object}
     * @property {WidgetThing} parent - the parent that created this Foo.
     * @property {string} name - the name of this Foo.
     */
I would stick to transpiling Typescript myself, but I've also seen use cases where that's simply not an option.


The author never actually says what is so wrong with JavaScript. I think it is quite a good dynamically typed language, and it certainly has some very high-performance implementations. If you don't like it, just use something else?

> Ultimately, JavaScript was the right thing at the right time. It ended up being folded, spindled, and mutilated to serve purposes that it isn’t well suited for

Counterpoint: many quirks in the language were addressed, e.g. introduction of === and with ESLint you get many of the practical advantages of a type checker. And when you need more safety, sprinkle TypeScript on top.

What I really want for Christmas is a TypeScript-to-native compiler.


> The author never actually says what is so wrong with JavaScript

The lack of explicit typing. The author argues against untyped code, and for using typescript instead of javascript to get types.


> The author argues against untyped code, and for using typescript instead of javascript to get types.

Considering how much of the article was about typing and not specific to JS itself, "Just Say No to Dynamic Typing" would've probably been a better title for readers. It would likely get less clicks, though. :')


Fully agree, "Type your JavaScript code" (pun initially not intended) would have been better :-)


> The lack of explicit typing.

The JIT seems to figure it out just fine. Otherwise, typeless programming is just a particular style; it's not particularly complicated, just different. I've never understood this rationale in a language that does not have pointers.


JS is not typeless. In fact, dynamic languages allow you to use very complex types that are not even possible to express in most static type systems. That is why TS needed such an advanced and complex type system to catch at least most of its power.

You are right that dynamic typing is a valid style, though, and offers some great advantages but also disadvantages. That is why most dynamic languages have added gradual typing support to get the best of both worlds. The real issue is the weak typing in JS, which is a design mistake in retrospect.


I was not arguing for explicit typing myself in the previous comment (I was just the messenger).

But now I'm going to. For context, I write code in both explicitly typed and not explicitly typed languages (mostly JS).

> The JIT seems to figure it out just fine.

And the CPU will figure out binary code just fine. But you won't. You can, but not easily. You are writing for the computer, but also in a way that you (and your peers, if applicable) will be able to work with the code: extend it, fix it, maintain it, make it evolve. "The runtime figures it out" is hardly an argument for the dev UX.

Explicit typing is one way among other things to document the code, and this is documentation that's automatically checked by the tooling, which is nice. Types are documentation that can't rot. I'd say, good to take.

"The JIT doesn't need this to figure out the types" also applies to documentation by the way. The runtime, however, doesn't need to understand the code, just to run it. But you need to understand the code when working on it.

Types can help you figure out what functions you can call on any particular variable, without having to have something similar to a complete stack trace in your mind to know which type is a variable.

Sometimes, you can't even have the complete stack trace, for example when you are working on a library called by user code, or when you are calling some black box library code from user code. It's nice having types that are automatically checked at the interfaces so you don't have to figure it out (which you may need to do by running the related code, noticing the return types and hoping for the best). Even if you are dealing only with code that you have access to, it's nice not to have to go read all the code of the other side each time.

Explicit types just make it easier and faster to understand code, and also to modify it. Without types, you need to "recompute" the type of anything you are dealing with, risking calling the wrong method if you make a mistake at this step. Of course, if you have solid testing in place, you will probably catch it, but with explicit typing, that's a mistake you can't make: it'll be in red in your editor, or an error at the type checking step, saving some time.

Explicit types let you avoid having to re-figure out the types of things again and again, and that's cognitive burden freed up to give more room to focus on what the code actually does. They also help your IDE help you, if you use one (or something like LSP).

If you are dealing with small pieces of code, it doesn't matter much: everything can be kept in your head anyway. For such code, explicit typing can be just noise. But for bigger code, types are a relief.

For me, a nice balance is explicit typing at the interfaces (function parameters and return types), and implicit typing inside the function body.

All in all, I think explicit types make me faster for anything more than a trivial amount of lines of code. So I have a strong case for explicit typing. But I don't have much for non explicit typing.

The only reason I can choose to write vanilla JS despite this is because Typescript more or less requires opening the NPM can of worms. Would browsers allow type annotation in JS, I would totally use them and have some type checking pass somewhere to enforce correctness.


>The author never actually says what is so wrong with JavaScript.

The author does but you have to extract it yourself by using some mental subtraction from the sentences he did write:

>[...], TypeScript has all of the benefits of JavaScript (such as they are) while _adding_ a type system that is expressive and powerful. [...] TypeScript leverages the ubiquity of JavaScript while _adding_ all the power of a modern typing system. And that is why you should be using TypeScript instead.

Translation... subtract out the "added modern type system" from Typescript... to reveal his Javascript criticism: "Javascript does not have a modern type system to help catch errors."


If you are asking "well we could build this in 2 hours with low-code or 8 hours with rails" - then you are not the target market for low-code. To be able to build a rails app in one day takes years of skill and education and maturity. A lot of people or organisations want to build apps but don't have that expertise.

Also, I don't see what is wrong with the outcome that at some point you have to rebuild with code. You have spent not very much time to develop a prototype that was able to get some feedback and generate new ideas. Maybe that very revolutionary idea about the reservations app the author describes would never have been imagined if they didn't have a prototype to play with?

And don't even get me started on the risks of traditional software engineering. How many projects never even got to a viable prototype because devs decided to rewrite everything in flavour-of-the-month every 3 weeks?


> If you are asking "well we could build this in 2 hours with low-code or 8 hours with rails" - then you are not the target market for low-code. To be able to build a rails app in one day takes years of skill and education and maturity. A lot of people or organisations want to build apps but don't have that expertise.

Exactly, low/no code solutions have their limitations, but I think the space they're useful for is — "they" need a simple CRUD app so they build it themselves in No Code solution, figure out what they really want & what the pain points are and _then_ bring on board a developer if it needs expansion, but with a real todo list in front of them.

Or (just as useful to the business user), realise it's not what they need and bin the project before engaging a dev at all.


Well that's the point of the article isn't it, Rails hasn't been the flavour-of-the-month in about 15 years, it's boringly good, so that's why you should stick with it


> How many projects never even got to a viable prototype because devs decided to rewrite everything in flavour-of-the-month every 3 weeks

This part sounds pretty fanciful; in my experience it's at worst six months for a new library in your chosen language, and that you can generally talk them out of; after 1-2 years, though, rewrite fever in some is almost overwhelming.

on edit: in short, I think even the least self-aware dev is not going to rewrite in cool new tech every 3 weeks, maybe once but not twice. So it sounds a bit hyperbolic.


I admit it was an exaggeration for comic relief, but many projects have definitely not gotten to a working prototype because devs wanted to use something fancy.


Although I have to say I always found the concept of low-code silly. All code should be low code, and I think Rails is actually a very good example of that.

