resonious's comments

On a more micro level, I find it very hard to write good documentation. I always forget something that once pointed out seems obvious. Or worse, the reader is missing some important context that many other readers are already privy to. Not to mention, some people don't even seek out docs before acting.

I imagine this gets amplified in a large org. The docs are lacking, people might not read them anyway, and you get an explosion of people who don't understand very much but still have a job to do.


So the government admits that all they can do is enable, but they don't get rid of the red tape?

It's always crazy to me that you can build crappy websites in the most flexible environment imaginable and make way more than those doing the actually deep and challenging work required for those websites to run in the first place.

Fundamentally, in programming, the programmers have the means to compete with their employers if they want to. That shifts power a lot.

Nitpicking but

> SSE enables microsecond updates, challenging the limitations of polling in HTMX.

How is this true? SSE is just the server sending a message to the client. If server and client are on opposite sides of the world, it will not be a matter of microseconds...


Reminds me of the joke "hey, check out the website I just made: localhost:8080"


You can have microsecond updates: once the connection is established, you can stream regardless of your latency.

Say your ping is 100 (units are irrelevant here). It will take 100 before you see your first byte, but if the server is sending updates down that connection, you will have data at whatever rate the server can send it. Say the server sends every 10.

Then you will have updates on the client at 100, 110, 120, 130, etc.
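
A minimal Go sketch of that shape, for what it's worth (the 10ms ticker is a stand-in for whatever rate your data actually changes at):

  package main

  import (
    "fmt"
    "net/http"
    "time"
  )

  func updates(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    flusher := w.(http.Flusher)

    // One round trip establishes the stream; after that the server
    // pushes at its own rate and the connection latency is paid only once.
    tick := time.NewTicker(10 * time.Millisecond)
    defer tick.Stop()
    for {
      select {
      case <-r.Context().Done():
        return
      case t := <-tick.C:
        fmt.Fprintf(w, "data: %d\n\n", t.UnixMicro())
        flusher.Flush()
      }
    }
  }

  func main() {
    http.HandleFunc("/updates", updates)
    http.ListenAndServe(":8080", nil)
  }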


That's still 100 irrelevant units later than the server sent the update. This is like saying the first byte of the packet takes 100ms to arrive but the subsequent bytes in the packet are instant!

It's not quite right. You'll never have updates in microseconds even if your ping is, say, 7ms.

At best you can be ~2-4x as fast as long polling on HTTP/1 -- an order of magnitude is a ridiculous statement.


You're right! 200-400% faster is so useless.


Well, obviously there's a difference between latency and throughput. Of course it's going to be microseconds plus your RTT/2. Sorry, we can't beat physics.


> Sorry, we can't beat physics.

In a way, you can with optimistic updates. That requires having a full front-end stack, though, and probably making the app local-first if you really want to hammer that nail.

There's always the cost of the round trip to verify, which means planning a solid roll-back user experience, but it can be done.
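
In Go-flavored pseudocode, the shape is roughly this (save is a hypothetical stand-in for the server round trip):

  package main

  import (
    "errors"
    "fmt"
  )

  // save stands in for the round trip that verifies the write.
  func save(v int) error { return errors.New("server rejected the write") }

  func main() {
    state := 1
    prev := state
    state = 2 // optimistic: the user sees the new value immediately
    if err := save(state); err != nil {
      state = prev // verification failed: roll back and surface the error
      fmt.Println("rolled back:", err)
    }
    fmt.Println("state:", state)
  }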


Everyone knows no one can beat physics. That doesn't excuse claiming you can beat physics.


Latency doesn't affect server update rate; it affects time to first data. I can have a ping of 500ms and still get an update from a stock ticker every 5 milliseconds. They will arrive at 500, 505, 510, etc.


I think most people would have a different understanding of what's entailed in an "update".


The original bit from the datastar website that was quoted was talking about polling.


I'll take your word for it, as I can't find the quote in context.

But I'm unfamiliar with any polling pattern where poll requests are expected to overlap. If updates take microseconds, does that mean I can comfortably run 10,000 of these in a second?

I even think datastar looks cool, but I just think that quote is misleading, and I still think that.

I'd like to see some realistic results, like the kind measured by this[1] type of benchmark. "Microsecond" updates sounds like microbenchmarks with very carefully crafted definitions.

[1]: https://krausest.github.io/js-framework-benchmark/2025/table...


Can't beat physics but can write better copy.


The author seems aware of this given their "Run your code 1ms slower" remark in the use cases section.


It's not clear to me how these are dark. You aren't being tricked, and all of these traits might be in the game client itself as well (the game could phone home, auto-update, fail to auto-update, be slow, etc.)


Maybe not this game but it always ends that way.

Example: EA Origin. Atrocious launcher.

No one can seem to answer: why does the game need a launcher?


Personally I consider the programming language used for a piece of software to be similar to the materials used for a handbag.


I'll somewhat echo this. I think simply switching puts the new OS at a huge disadvantage, because not being used to something can make it seem bad.


This is what I was thinking. WASM is a good replacement for containers because it doesn't have these things.


So basically virtual machines, like those we can spin up with lxd or Firecracker. Not that they don't have file access; it's just that it's finicky compared to containers (I'm thinking docker/podman).


One can use something like https://github.com/google/gvisor as a container runtime for podman or docker. It's a good hybrid between VMs and containers. The container is put into a sort of VM via KVM, but it does not supply a kernel and talks to a fake one. This means the security boundary is almost as strong as a VM's, but mostly everything will work like in a normal container.

E.g. here I can read the host filesystem even though uname says weird things about the kernel the container is running in:

  $ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm  /bin/bash
  root@7862d7c432b4:/# ls /app
  bin   home            lib32       mnt   run   tmp      vmlinuz.old
  boot  initrd.img      lib64       opt   sbin  usr
  dev   initrd.img.old  lost+found  proc  srv   var
  etc   lib             media       root  sys   vmlinuz
  root@7862d7c432b4:/# uname -a
  Linux 7862d7c432b4 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
gVisor lets one have a strong sandbox without resorting to WASM.


Meanwhile, Google moved away from gVisor, because they had too much trouble trying to make it look like actual Linux :-(

https://cloud.google.com/blog/products/serverless/cloud-run-...

Between this and WSL1, trying to reimplement all Linux syscalls might not lead to a good experience for running preexisting software.


Yes, but note the difficulty of building specialized I/O or drivers for controlling access in a virtual machine versus the WASI model.

Also, startup times are generally better, with availability of general metering (fuel/epochs), for example. The features of Wasm versus a virtual machine are similar, but there are definitely unique benefits to Wasm.

The closer comparison is probably the JVM -- but with support for many more languages (the list is growing, with upstream support commonplace).



Except developers have consistently chosen not to embed the JVM, CLR, or IBMi.

wasmtime (the current reference runtime implementation) is much more embeddable than these other options were/are, and is trivially embeddable in many languages today, with good performance. On top of being an option, it is being used, and WebAssembly is spreading cross-language, farther than the alternatives ever reached.
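
As a sketch of what that embedding looks like from Go (going off the wasmtime-go README; the exact module path/version is an assumption):

  package main

  import (
    "fmt"

    "github.com/bytecodealliance/wasmtime-go/v14"
  )

  func main() {
    engine := wasmtime.NewEngine()
    store := wasmtime.NewStore(engine)

    // Compile a tiny module from WAT; normally you'd load a .wasm file.
    wasm, err := wasmtime.Wat2Wasm(`(module
      (func (export "add") (param i32 i32) (result i32)
        local.get 0
        local.get 1
        i32.add))`)
    if err != nil {
      panic(err)
    }
    module, err := wasmtime.NewModule(engine, wasm)
    if err != nil {
      panic(err)
    }
    instance, err := wasmtime.NewInstance(store, module, nil)
    if err != nil {
      panic(err)
    }

    // Call an export as if it were a local function.
    add := instance.GetFunc(store, "add")
    sum, err := add.Call(store, 2, 3)
    if err != nil {
      panic(err)
    }
    fmt.Println(sum) // 5
  }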

These things may look the same, but just like ssh/scp and dropbox, they're not the same once you zoom/dig in to what's different this time.


As if developers are consistently choosing to embed WASM; just wait until the hype cycle dies.

What we have now is lots of hype, mostly by folks clueless about their history, in a venture to sell their cool startup idea based on WASM.


> As if developers are consistently choosing to embed WASM; just wait until the hype cycle dies.
>
> What we have now is lots of hype, mostly by folks clueless about their history, in a venture to sell their cool startup idea based on WASM.

I don't think there's much of a hype cycle -- most of the air has been sucked out of the room by AI.

There aren't actually that many Wasm startups, but there are companies leveraging it to great success, and some of these cases are known. There is also the usefulness of Wasm as a target, and that is growing -- languages are choosing to build in the ability to generate wasm bytecode, just as they might support a new architecture. That's the most important part that other solutions seemingly never achieved.

The ecosystem is aiming for a least-changes-necessary approach -- integrating in a way that workflows and existing code do not have to change. This is a recipe for success.

I think it's a docker-shaped adoption curve -- most people may not think it is useful now, but it will silently and usefully be everywhere later. At some point, it will be trivial to ship a small WASM binary inside (or independent of) a container, and that will be much more desirable than building a container. The artifact will be smaller, more self-describing, work with language tooling (i.e. a world without Dockerfiles), etc.


I believe that the two most likely futures for the wasm-as-plugin-engine are mods in games and applications with a generic extension interface.

IMO, in games, developers would prefer something with a reasonable REPL like Lua or JavaScript (a game is already assumed to be heavy, and if the mods are not performance critical, running a V8 should not be a problem). For extensions in generic complex applications (things like VSCode, Blender, Excel, etc.), I would posit that the wasm sandbox could be a really good way to enable granular-permission secure extensions.


I don't understand WASM, but I read that a big draw of WASM is its ability to provide portability to any language. This would mean Python libraries that depend on an unpopular C library (which could be lost to time) could instead be a single WASM blob.

Assuming equivalent performance, which I understand might not be the case, is there merit to this idea? Or is there nothing new WASM provides?


Application plugins could also be wasm. That lets plugin authors write in any language they want and have their plugin work.

That's the idea behind the Extism framework:

https://extism.org/
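
Host-side usage with the Extism Go SDK looks roughly like this, as I understand its README (plugin.wasm and count_vowels are placeholders from their examples):

  package main

  import (
    "context"
    "fmt"

    extism "github.com/extism/go-sdk"
  )

  func main() {
    // The manifest points at a plugin compiled from any guest language.
    manifest := extism.Manifest{
      Wasm: []extism.Wasm{extism.WasmFile{Path: "plugin.wasm"}},
    }
    plugin, err := extism.NewPlugin(context.Background(), manifest,
      extism.PluginConfig{EnableWasi: true}, []extism.HostFunction{})
    if err != nil {
      panic(err)
    }

    // Call a named export with bytes in, bytes out.
    _, out, err := plugin.Call("count_vowels", []byte("hello world"))
    if err != nil {
      panic(err)
    }
    fmt.Println(string(out))
  }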

uBlock Origin uses WebAssembly in Firefox for better performance:

https://github.com/gorhill/uBlock/wiki/uBlock-Origin-works-b...


The closest way to visualize it is probably how Cloudflare offers something kind of like a container (if you squint) via its Workers product.

https://blog.cloudflare.com/announcing-wasi-on-workers/

I assume the merit for cloudflare is lower overhead cost per worker than if they had done something more like AWS lambda. Explained better than I can, here: https://developers.cloudflare.com/workers/reference/how-work...

It does currently present a lot of restrictions as compared to what you could do in a container. But it's good enough to run lots of real world stuff today.


I run my golang on Cloudflare Workers.

It does not work with wasi.

I just use a simple driver:

https://github.com/syumai/workers

Wasi is so painful that I just write all my golang using stdio and have a shim per runtime: web browser, Cloudflare, server (with wazero).
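
The portable core ends up being not much more than this (the uppercasing is a hypothetical stand-in for real logic):

  package main

  import (
    "bufio"
    "fmt"
    "os"
    "strings"
  )

  // All real logic lives behind stdin/stdout. Each runtime (browser,
  // Cloudflare, wazero on a server) gets a thin shim that pipes into this.
  func main() {
    in := bufio.NewScanner(os.Stdin)
    for in.Scan() {
      fmt.Println(strings.ToUpper(in.Text()))
    }
  }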

The new go:wasmexport might be useful in Go 1.24, but I highly doubt it.


Even closer: WebLogic, WebSphere, JBoss, GlassFish in 2002, but now instead of an EAR file, it is a WASM one.


> I don't understand WASM, but I read that a big draw of WASM is its ability to provide portability to any language. This would mean Python libraries that depend on an unpopular C library (which could be lost to time) could instead be a single WASM blob.

Yes, this is a key value of WebAssembly compared to other approaches: it is a relatively lightweight way (compared to a container or a full-blown VM) to package and distribute functionality from other languages, with high performance and fast startup. The artifact is minimal (like a static/dynamic library, depending on how much you've included), and if your language has a way to run WASM, you have a way to tap into that specialized computation.


IronPython alongside C++/CLI on the CLR, everything compiled down to MSIL bytecodes.


I could be wrong, but I can't find anything about how to include your C dependencies with IronPython when you compile. Instead I see that IronPython has limited compatibility with the Python ecosystem because of Python libraries using C.

Contrasted with WASM where you can write in any language and bring the ecosystem with you, since it all compiles down.


Fully agree with your point here, but wanted to point out that including C dependencies is actually one of the biggest reasons why Python support is hard for WebAssembly too.

Bolstering your point -- smart-and-hardworking people are working on this, which results in:

https://github.com/bytecodealliance/componentize-py/

which inspired

https://github.com/WebAssembly/component-model/blob/main/des...

with some hard work done to make things work:

https://github.com/dicej/wasi-wheels

It's a fun ecosystem -- the challenge is huge but the work being done is really fundamentally clean/high quality, and the solutions are novel and useful/powerful.


You compile them with C++/CLI, which is why I referred to it; no different from using Emscripten.


I think that looking at it in terms of embeddability is more useful than portability.

In the sense that compiling C to any language is easily done without too many problems, what wasm allows is a secure and performant interface with that language.

For example, IIRC one of the first uses of wasm was to sandbox many of the various codecs that had regular security vulnerabilities. In this, Wasm is neither the first nor the only approach, but with a combination of hype and simplicity it is having good success.

as in https://arxiv.org/abs/1912.02285


Basically yet another bytecode-based runtime, but being sold as if it is the very first of its kind, despite prior history.


The idea with wasi containers is that you could spin up a container with only the interfaces it needs.
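
With wasmtime-go, for example, granting the guest only stdout and a single directory looks roughly like this (the module path/version and the PreopenDir signature are assumptions from the README, and guest.wasm is a placeholder):

  package main

  import "github.com/bytecodealliance/wasmtime-go/v14"

  func main() {
    engine := wasmtime.NewEngine()
    linker := wasmtime.NewLinker(engine)
    if err := linker.DefineWasi(); err != nil {
      panic(err)
    }

    // Grant only what the guest needs: stdout and one directory.
    wasi := wasmtime.NewWasiConfig()
    wasi.InheritStdout()
    if err := wasi.PreopenDir("./data", "/data"); err != nil {
      panic(err)
    }

    store := wasmtime.NewStore(engine)
    store.SetWasi(wasi)

    module, err := wasmtime.NewModuleFromFile(engine, "guest.wasm")
    if err != nil {
      panic(err)
    }
    instance, err := linker.Instantiate(store, module)
    if err != nil {
      panic(err)
    }

    // Run the WASI entry point, if the guest has one.
    if start := instance.GetFunc(store, "_start"); start != nil {
      if _, err := start.Call(store); err != nil {
        panic(err)
      }
    }
  }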


Pretty weird UX to say "no papers found" before suddenly showing me a paper.


I'm rolling a new version out now; it'll be live in 2 minutes.


I'll try to improve that in a moment.

