I don't mean just vacuum tubes or even electronics at all. Mechanical analog computing is insane when you get down to it. You have special shapes that move against each other and do calculus.
We make these mechanical models as analogs of more complex physical systems. We can turn huge calculations into relatively simple machines. That we can roll two weirdly shaped gears together and get an integral out says to me something very profound about the universe. I find it to be one of the most beautiful concepts in all of the sciences.
What's even more wild is that we can take those mechanical analogs of physical systems and build an electronic analog out of vacuum tubes. That vacuum tubes work at all is just completely insane, but it's some absolutely beautiful physics.
And yes, there are equally beautiful problems that can only be solved in the digital ___domain, but it just doesn't speak to me in the same way. The closest thing is bitwise black magic like the fast inverse square root, conjured from a special constant and a little arithmetic. Besides, that's more a property of number systems than it is of digital computation.
I understand how and why digital took over, but I can't help but feel like we lost something profound in abandoning analog.
The tide height is a function of the earth/sun/moon system. The Earth and Moon aren't at a fixed distance from each other, and neither is the Sun, so every day is a unique tide, but you can predict the range.
The analog way to do it is to make a gear for each component of the system and synchronize them all. Then you use them all to drive one final gear, which shows you the prediction for the time you've chosen.
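To make the "sum of gears" idea concrete, here's a toy digital sketch of the same computation; the constituent amplitudes, periods, and phases below are made up for illustration:

    import math

    # Each "gear" is one harmonic constituent (amplitude, period, phase);
    # the predicted tide is just the sum of all of them at time t.
    constituents = [
        # (amplitude_m, period_hours, phase_radians)
        (1.00, 12.42, 0.0),   # principal lunar semidiurnal (M2-like)
        (0.46, 12.00, 1.1),   # principal solar semidiurnal (S2-like)
        (0.20, 25.82, 2.3),   # lunar diurnal (O1-like)
    ]

    def tide_height(t_hours):
        """Sum the sinusoids, the way the machine sums gear motions."""
        return sum(a * math.cos(2 * math.pi * t_hours / period + phase)
                   for a, period, phase in constituents)

    for hour in range(0, 25, 6):
        print(f"t={hour:2d}h  height={tide_height(hour):+.2f} m")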
I used to know nothing about Lord Kelvin except that he said things like "It seems, therefore, on the whole most probable that the sun has not illuminated the earth for 100,000,000 years, and almost certain that he has not done so for 500,000,000 years"[1] and allegedly "everything which can be discovered, has been discovered". Then I watched last year's Veritasium video on YouTube[2] about analog computers and learned he invented tide-predicting analog computers to "substitute brass for brains" and add sinusoidal curves, plus a mechanical integrator to separate out individual sinusoidal frequencies from the sum.
I know, I've had my eye on this topic for a while.
Honestly it seems like a perfect application. Neural networks are analog systems. An analog computer can represent neurons very accurately and the entire network is inherently parallel, for free!
I can't wait to see what comes out of this research
My research was in this direction. We already know that these analog neural chips could be orders of magnitude faster than digital equivalents. There has also been a lot of military research going on in this area for a few decades. However, architecture innovation moves much faster at the software level, and dedicated hardware approaches have not been able to catch up. Once things slow down at the software level, hardware LLMs could slowly become the norm.
I didn't make that connection until my late 20s and when I finally did, it radically changed how I look at and understand analog systems.
In today's world, we still build analogs, we just coerce them into strictly numerical, digital models. I don't know if you can call it better or worse, but digital is definitely less magical and wondrous than mechanical analog systems.
Nature, almost completely analog, has been around a thousand times longer than humans. How many times has 'evolution' used digital methods to accomplish something? Perhaps we've chosen to switch to digital because we're in a hurry and it's easier ... in hopes of, some day, asymptotically approaching the advantages of analog.
The main reason is that digital computers are so incredibly, overwhelmingly more flexible than analog. Analog computers are (generally) bespoke single-purpose devices. It really isn't too far off to imagine analog computers as code made physical, with all that entails.
Imagine writing a program if every time you wanted to change something you had to cut a new gear, or design a new mechanism, or build a new circuit. Imagine the sheer complexity of debugging a system if instead of inspecting memory, you have to disassemble the machine and inspect the exact rotation of hundreds of gears.
Analog computing truthfully doesn't have enough advantages to outweigh the advantage of digital: you have one truly universal machine that can perform any conceivable computation with nothing but pure information as input. Your application is a bunch of binary information instead of a delicate machine weighing tens to hundreds of pounds.
Analog computing is just too impractical for too little benefit. The extra precision and speed is almost never enough to be worth the exorbitant cost and complexity.
Yeah but ChatGPT is using $700k per day in compute (or was in April). Someone's going to make an analog machine that uses mirrors and light interference or something to do self attention and become very, very wealthy.
Neural networks are a very good application for analog computing (imo). You have a ton of floating point operations that all need to happen more or less simultaneously. And what are floating point numbers if not digital approximations of analog values? :)
This can be implemented as a network of transistors on a chip, but driven in the linear region instead of trying to switch them on as hard as possible as fast as possible. Which is, I believe, what researchers are trying to do.
There are also some interesting ideas about photonic computing, but I'm not sure if that's going anywhere.
A few months back, someone on YouTube attempted to design a mechanical neural network as a single 3D printed mechanism. It ended up not working, but the concept was pretty solid.
Perhaps that's only because we haven't begun to understand analog yet. And our crude original perceptions have long suffered for being ignored. For example, I have yet to actually hear any digital music ... that didn't have to pass through a DtoA converter. Hell, we may even learn that braining is not really the product of individual neurons at all, but a coordinated ballet oscillating like seaweed. I'll go bigger: is consciousness analog?
I'm not sure you understand what we're talking about. You seem to be talking about analog electronics, where I'm talking about computation with mechanical or electrical analogs of physical systems.
Both domains are extremely well understood. Analog electronics is an incredibly deep field, and forms the foundations of basically all of our electronic infrastructure. For instance, the transceivers that power cell stations are all analog and are incredibly complex. This stuff would seem like alien magic to anyone from even 30 years ago. The sheer magnitude of complexity in modern analog circuits cannot be overstated.
As for analog computing, well, it's just math. We can design analog computers as complex as our understanding of the physics of the system we want to model. There's not really any magic here. If we can express a physical system in equations, we can "simply" build a machine that computes that equation.
> I have yet to actually hear any digital music ... that didn't have to pass through a DtoA converter.
This is simply not true. There are plenty of ways to turn a digital signal into sound without an intermediate analog stage. See PC speakers, piezo buzzers, the floppotron. You can also just pump a square wave directly into a speaker and get different tones by modulating the pulse width.
The reason we use an intermediate analog stage for audio is because direct digital drive sounds like total trash. I won't go too much into it, but physics means that you can't reproduce a large range of frequencies, and you will always get high frequency noise that sounds like static.
Edit: I didn't notice your username before. All 8 bit systems make heavy use of the square wave voice, which is a digital signal. But it's typically passed through an analog filter to make it sound less bad. Music on e.g., the first IBM PCs was purely digital, played through a piezo beeper on the motherboard.
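For anyone curious, a minimal sketch in Python of the "pump a square wave into a speaker" idea; the frequency, duty cycle, and file name are arbitrary:

    import wave
    import struct

    # One second of a 440 Hz square wave, written as raw 16-bit samples.
    # Played back unfiltered it has exactly the harsh, buzzy character
    # described above; changing DUTY (the pulse width) changes the timbre.
    RATE, FREQ, DUTY = 44100, 440, 0.5

    with wave.open("square.wav", "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 16-bit
        w.setframerate(RATE)
        for n in range(RATE):
            phase = (n * FREQ / RATE) % 1.0
            sample = 32000 if phase < DUTY else -32000   # hard on/off, no filtering
            w.writeframes(struct.pack("<h", sample))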
No, that's not really the problem. You can implement branching of a sort in analog, but branching isn't a very useful concept here.
The strength of digital is that your logic is implemented as information instead of physical pieces. Your CPU contains all the hardware to perform any operation, and your code is what directs the flow of information. When you get down to bare basics, the CPU is a pretty simple machine without much more complexity than a clockwork mechanism. It's an extremely fascinating subject and I very highly recommend Ben Eater's breadboard CPU videos on YouTube. But I digress.
The real trick is that digital computers are general purpose. They can compute any problem that is computable, with no physical changes. It's purely information that drives the system. An analog computer is a single-purpose device[0] designed to compute a very specific set of equations which directly model a physical system. Any changes to that set of equations requires physical changes to the machine.
[0] general purpose analog computers do exist, but generally they're actually performing digital logic. There have only been a few general purpose true-analog computers ever designed AFAIK. See Babbage's difference engine.
DNA is digital. I think the crucial digital feature is the ability to get an exact result from imperfect components, which is especially important for self-replicating systems. Instead of having a calculation that is always off by 1%, you can have a perfect result 99% of the time. And you can improve MTBF by stacking error correction on top of it, without necessarily having to improve manufacturing tolerances.
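A quick sketch of that point: take a component that's wrong 1% of the time, replicate it, and take a majority vote. The numbers are illustrative only:

    from math import comb

    def majority_error(p_single, n):
        """Probability that a majority of n independent copies are wrong."""
        k = n // 2 + 1
        return sum(comb(n, i) * p_single**i * (1 - p_single)**(n - i)
                   for i in range(k, n + 1))

    # Reliability improves by stacking redundancy, not by improving the part.
    for n in (1, 3, 5, 9):
        print(f"{n} copies: error probability {majority_error(0.01, n):.2e}")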
DNA copying always introduces errors. Organisms have quite a few error correcting mechanisms to mitigate damage from bad copies.
Most DNA errors turn out to be inconsequential to the individual. If a cell suffers catastrophic errors during reproduction, it typically just dies. Same for embryos, they fail to develop and get reabsorbed. Errors during normal RNA transcription tend to encode an impossible or useless protein that usually does nothing. Malformed RNA can also get permanently stuck in the cellular machinery meant to decode it, but this also has no real effect. That transcriptase floats around uselessly until it's broken down and replaced. You've got a nearly infinite number of them.
DNA and all the machinery around it is surprisingly messy and imprecise. But it all keeps working anyway because organisms have billions or trillions of redundant copies of their DNA.
*take with a grain of salt, I last studied this stuff many years ago.
You may conceptualize it as digital, as most of our modern mythology does with appearances these days. But does that really correspond with the ding-an-sich? Or, again, how much analog development happened before our rush to commoditize everything as quickly as possible?
'Imperfect components' is a value judgement. Apparently an analog world was a necessary part of self-replicating 'mechanisms' arising while floating in the analog seas.
I agree. I remember climbing into the turret of the USS Massachusetts and playing with the ranging computer. It was just impressive that a geared device could do pretty complicated math in real time.
Electronic analog computing is also still being researched, eg https://arxiv.org/abs/2006.13177 ("Analog multiplication is carried out in the synapse circuits, while the results are accumulated on the neurons' membrane capacitors. Designed as an analog, in-memory computing device, it promises high energy efficiency")
As for something you can easily get your hands on, micrometers are incredible. A simple screw and graduated markings on the shaft and nut give you incredibly precise measurements. You can also find mechanical calculators (adding machines) on eBay. But those really aren't very sexy examples of the concepts.
Analog computers aren't very common anymore. Your best bet is visiting one of the computer museums that house antique machines. Or watching YouTube videos of people showing them off. There's plenty of mechanical flight computers in particular on YouTube.
If you have access to a 3D printer, there's plenty of mechanisms one can print. The antikythera mechanism is a very interesting celestial computer from ancient times, and 3D models exist online.
Look around on YouTube. There's some fascinating videos from the 1950s on US Navy mechanical firing computers.
These machines can calculate ballistic trajectories with incredible accuracy, accounting for the relative motion of the ships, wind speed, and even the curvature of the earth. Those calculations are not at all trivial!
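To give a flavor of why it's non-trivial, here is a drastically simplified toy version of just one piece of the problem (constant-velocity target, constant shell speed, flat earth, no wind or drag); all the numbers are invented:

    import math

    def intercept_time(px, py, vx, vy, shell_speed):
        """Smallest positive t with |target position at t| = shell_speed * t."""
        a = vx * vx + vy * vy - shell_speed ** 2
        b = 2 * (px * vx + py * vy)
        c = px * px + py * py
        if abs(a) < 1e-12:
            return -c / b if b < 0 else None
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        roots = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
        times = [t for t in roots if t > 0]
        return min(times) if times else None

    # Target 9 km out and moving; shell at 760 m/s. Aim where it *will* be.
    t = intercept_time(px=9000, py=2000, vx=-8, vy=3, shell_speed=760)
    if t is not None:
        print(f"lead time {t:.1f} s, aim point ({9000 - 8 * t:.0f}, {2000 + 3 * t:.0f}) m")

The real machines solved a far richer version of this continuously, with own-ship motion, wind, and the other corrections mentioned above folded in mechanically.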
Hah, this is the exact video I was referring to in the sibling comment. This is what really captured my imagination with regard to mechanical computers
I guess I'm looking for a hint as to why this selection of items is particularly interesting to you. These cover a pretty wide spread of topics, and for folks who aren't well versed in each topic, they might be better served by evaluating the standard option in that field (Redis, Qt, etc) before they dive into the weird alternatives
I appreciate your comment. I have interests in so many things, especially RAD systems like Lazarus, Delphi, and Gambas (I was from that era), and I tinkered with Rebol many years ago; the experience was quite "unique". Tarantool I had to work with in previous projects (it offers a lot, but it's very little known).
It's a protocol/tool for async file transfer, built for disconnected/intermittent connectivity amongst known parties (trusted friends as p2p), allowing even for sneakernet-based file transfer.
It started as a modern take on Usenet, but it boggles my mind how cool it is:
Want to send a TV Series to your friend? send it via nncp, and it will make it through either via line-based file transfer (when connection allows, pull or push, cronjob, etc), or even via sneakernet if there is "someone going that way".
The comms priority system lets you do high-priority message checking over an expensive network link while leaving bulk file transfer for trunk lines later.
It can even be configured to run arbitrary commands on message receive, to allow indexing/processing of files (like a ZFS-receive hook, mail/matrix ingestion...)
Most people know about MediaWiki even if they don't realize they do, because it powers Wikipedia, but I wish more people used it for documentation.
You can create highly specialized templates in Lua, and there's an RDBMS extension called Cargo that gives you some limited SQL ability too. With these tools you can build basically an entirely custom CMS on top of the base MW software, while retaining everything that's great about MW (easy page history, anyone can start editing including with a WYSIWYG editor, really fine-grained permissions control across user groups, a fantastic API for automated edits).
It doesn't have the range of plugins to external services the way something like Confluence has, but you can host it yourself and have a great platform for documentation.
Fossil, the bespoke VCS used by SQLite, includes a wiki & web server out of the box. It's not normally what people think of in this ___domain, but I've used it for that purpose and it works great. https://fossil-scm.org
Yes, while technically speaking it's not a wiki in the traditional sense, it's based on Git, so collaborative editing is feasible. Combined with the friendly interface of TinaCMS, which Tinasaurus is based on, it can be a modern wiki on steroids, i.e. a lean and fast wiki.
It is a PITA from an ops point of view unless you use vanilla with no extensions. Each upgrade tends to break a bunch of extensions and you have to hunt around for solutions.
Isn't that only a problem if the extensions you use are third-party? If you use 100 different extensions, but they're all ones Wikipedia uses too, won't you be fine?
> As an administrator, I wish MediaWiki had a built-in updater (bonus points if it could be automated).
I get that by using the container distributions. I just mount my LocalSettings.php and storage volumes in the appropriate places and I get a new version.
And since I run on ZFS and take a snapshot before updating, if something goes wrong I can roll back the snapshot and go back to when stuff just worked (and retry later).
Nix package manager's `nix-shell` is something I wish more people knew about. Nix is gaining some popularity, but people often think using it has to be a really big commitment, like changing your Linux distro to NixOS or replacing your dotfiles with a Nix-based setup (using the Nix package manager).
What I wish more people knew is that you don't need to do those things to get value from Nix. Creating project-specific dev shells that install the packages (at the correct versions) needed to work with a project can replace almost 90% of the docs for getting set up to work on it.
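As a rough example, a project-level shell.nix along those lines might look like this (the package names are illustrative, and exact attribute names depend on your nixpkgs pin):

    # shell.nix -- run `nix-shell` in the project root to get this environment
    { pkgs ? import <nixpkgs> {} }:

    pkgs.mkShell {
      buildInputs = [
        pkgs.python311
        pkgs.poetry
        pkgs.redis        # e.g. a service needed for the test suite
        pkgs.postgresql
      ];

      shellHook = ''
        echo "dev shell ready: $(python --version)"
      '';
    }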
Conceptually a game changer for me. In practice it's far from a silver bullet (because every language prefers its own package management, so you still have to manage those), but when it works it's quite magical.
You can use patchelf to link to the host system libraries instead, or some projects can be statically compiled (incl. musl) with less drama than usual, since your cross-compilation toolchain can be part of your nix-shell.
It was just surprising, is all. When I use <x> application from a nix shell, it pretty much always works the way I think. The compiler experience was very jarring, but yes, I understand why it works the way it does.
I was more or less pointing out the UX issues with Nix that end up turning many people away.
There is definitely a learning investment in order to write good Nix expressions. But, if you write a good nix shell expression for your project, other devs should be able to jump in without really needing to understand those Nix expressions and still get a working environment.
Oh God miniconda is a horrible piece of software on Nix.
I fell down the Nix rabbit hole, and miniconda was one of the worst things to get working. My first pass used an FHS environment, but eventually I just got the environment.yml file working in micromamba and used that instead. Except micromamba ships its own linker that I had to override with SHAREDLD, or some random Python C++ dependencies wouldn't compile correctly.
I love Nix, but my list of complaints is a mile long. If you want to do anything with opengl in nix, but not on nixos, just give up. NixGl just doesn't really work.
Good luck getting something like Poky (the reference project for Yocto) running in Nix. The only working example puts it in an FHS environment, which uses bubblewrap under the hood. But then, because you're in a container with permissions dropped, you can't use a VM. The solution I see in the support forums is to roll your own alternative FHS environment based on something else.
Yes, this is where I am at. Used it for over a year in a DevOps role and have developed a huge distaste for it. Despite the language itself being one of the most complained about things, I didn't mind it so much. It was the mile-long stack traces, which were often wrong, and constantly fiddling with things I didn't want to fiddle with to get something working. Just ended up costing me way too much time.
I've played around with it a little bit, but not enough to make any judgements on it. Something like devbox could be the sort of thing to make nix-shell accessible enough to see wider adoption.
It's good for a c or c++ project where libraries are very environment specific. But most modern languages have their own package/environment managers which makes Nix redundant.
Not really. I introduced it to our Python projects at work and it's been great. Partially because of poetry2nix, and partially because it makes it easy to include other stuff like a specific version of Redis for testing purposes. Everybody gets the exact same dev environment, reducing a ton of "works on my machine".
Presumably it also can fill the role of conda/mamba i.e. also managing C/C++ libraries in the same way in the nix environment, isolated from the system libraries?
Yep, it can lock down exact versions of those libraries as well, which is great for not mucking about with lib versions between even different Ubuntu versions, not to mention distros or macOS.
Sure, that works. Or I can have it all in a single shell.nix file that covers everything and is super simple to use. It's great for handing off to coworkers that don't usually use Python.
It's not simple. The Nix programming language is like untyped ML. Most people aren't used to it, and even if you are familiar with it, it gets hella hard to read. The learning curve is huge.
One Dockerfile and a poetry file work just as well. And it's simpler. It's literally the same thing, but using OS primitives to manage the environment rather than shell tricks. Makes more sense to me to use a dedicated OS primitive for the task it was designed for.
Additionally docker-compose allows you to manage a constellation of environments simultaneously. This is nowhere near as straightforward with nix.
I love nix but being honest here. It's not definitively the best.
The biggest reason right now to avoid it is adoption. Most people won't know what to do with a shell.nix
> One Dockerfile and a poetry file work just as well. And it's simpler. It's literally the same thing, but using OS primitives to manage the environment rather than shell tricks. Makes more sense to me to use a dedicated OS primitive for the task it was designed for.
1) not just as well because docker is repeatable, not reproducible
2) not if you need GPU acceleration which is a headache in docker, but not Nix shells
> Additionally docker-compose allows you to manage a constellation of environments simultaneously. This is nowhere near as straightforward with nix.
>1) not just as well because docker is repeatable, not reproducible
Not sure what you're saying here but most likely you're referring to some obscure pedantic difference. Effectively speaking docker and nix shell achieve similar objectives.
>2) not if you need GPU acceleration which is a headache in docker, but not Nix shells
This is true. But this is the only clear benefit I see.
right. So? I said nowhere near as straightforward. This isn't straightforward. It's an obscure solution.
>The same was once true for Dockerfile
False. Dockerfiles are much more intuitive because it's just a declarative config. With a Nix shell it's mostly people who like Haskell or OCaml who are into that style of syntax. I like it, but clearly that syntax has not caught on for years and years and years. Quite likely Nix will never catch on to that level either.
I assume what they're getting at is that when you download a Docker image it'll always be the same (repeatable), but the image which is built from a Dockerfile may change even if the file does not (not reproducible).
nix is far simpler for consumption. My coworkers don't like fancy new things, and they haven't had any complaints. They don't have to dick around with half a dozen different commands to get everything set up, or bother with docker volumes/port mapping/etc. They just run nix-shell and it all works. That's all you have to do with a shell.nix file, it's very simple.
It is harder to write on average atm, but it's very much worth it to me when it comes to sharing code for development. Also, LLMs help quite a bit when writing nix.
It's the same thing for Docker, just one command. Nix is much harder to deal with, mainly because shell.nix is harder to read and write than a Dockerfile.
Additionally nix uses shell hacks to get everything working. Docker uses an os primitive DESIGNED for this very use case.
And additionally, because docker uses os primitives you can use docker-compose to manage multiple processes on multiple different environments simultaneously. Something that's much harder to do with nix shell.
You're seriously overestimating how hard this is, especially with poetry2nix. I like docker just fine and have used it in a development workflow and it's a pain in the ass and should never be used for that. It's great for packaging stuff up for prod, though.
Also, one man's "DESIGNED" is another man's hacks. I don't see anything wrong with how nix works. Potato/potato, I guess.
I'm not overestimating anything. It's not hard once the shell.nix is there, but everything to get to that point is waaay harder than Docker. In the end, once you're done, you have two ways of doing the same thing with one command.
I think I know what you're getting at. nix-shell provides a fast way to get access to that specific shell environment which is a bit more annoying to do with docker. All docker needs to do is provide this interface by default and the only surface level differences between the two techniques is really just the configuration.
>Also, one man's "DESIGNED" is another man's hacks. I don't see anything wrong with how nix works. Potato/potato, I guess.
By any colloquial usage of the term "designed" in this context by any unbiased party, it's obvious Nix is more hacky under any charitable interpretation. NixOS is a layer on top of Linux; containers are a Linux feature. Thus creating a layer on top of Linux to use existing features is the more hacky, less elegant solution.
It can actually go in the other direction. Rather than use shell tricks, Nix can also use containers under the hood. Overall, though, the API for Docker is superior in terms of editing config files, but not for switching shells. Additionally, the underlying implementation of Docker is also superior.
Your main problem is with the API which is just opinionated.
a) Unless you literally write everything in one language, you will have to deal with learning, supporting and fixing bugs in N different package/environment managers instead of just one.
b) If you have a project that uses several languages (say, a Python webapp with C++ extensions and frontend templates in Typescript), then Nix is the only solution that will integrate this mess under one umbrella.
a. Using Nix in place of a package manager means dealing with libraries specific to that language. It's still managing different APIs. And there's more potential for bugs and unforeseen problems in custom third-party APIs as opposed to the official one. Admit it, you hit tons of problems getting everything to work fine with Python.
b. C++ is the only one that would benefit from nix here because C++ dependencies are usually installed externally. There's no folder with all the external sources in the project. Even so this can be achieved with docker. If you want you can have docker call some other scripting language to install everything if you want "one language" which is essentially what you're doing with nix.
a. "You hit tons of problems getting everything to work fine with Python" - of course, but the maintenance burden is an order of magnitude lower than integrating all this manually.
b. No, docker is not a solution. Docker is another problem and a separate maintenance nightmare.
(Nix solves maintenance problems at scale, Docker explodes them exponentially. I would not ever recommend using Docker for anything except personal computing devices you don't care about.)
There are many python packages that have other dependencies not managed by Python package management. The pain of figuring out what those implicit dependencies are is effectively removed for users when configured as a nix shell.
I had to use it for a C++ project and it was one of the biggest wastes of time and most frustrating moments of my computing career. There were constant breakages due to glibc mismatches, Nvidia drivers, and whatnot, and getting a host IDE to have semantic understanding of the paths, etc. necessary for normal completions and such was nigh impossible.
Yeah but other than conan it's one of the few things where you can get a sort of "project package manager" experience like npm with C++. It's not nearly as user friendly as what they have for python or nodejs.
No way. Language specific managers are terrible at managing external dependencies. Trying to get python packages to link to system libraries is terrible. Nix makes it infinitely better.
GnuPG/PGP and the web of trust[0]. A lot of things I see blockchain being used for today (e.g. NFTs) seems like it would be better solved using standard OpenPGP signatures with no backing chain.
Additionally, as machine-generated content proliferates, I think having services use something like the web of trust concept for membership would be super powerful. The problem is, of course, the terrible UX of cryptographic signatures. But I think there's a lot of opportunity for the group that makes it easy to use.
There's a problem though: either you have to ban transferring NFTs (or other tokens), which makes those a lot less useful, or you need something to prevent double spend attacks (something that blockchain solves).
GPG is great. It also makes it really easy to encrypt environment dotfiles that safely reside in your source code repository. This is my favorite way of storing sensitive app configs. You don't even need a PGP private key in your keychain to do it. You can use a passphrase.
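For reference, the passphrase-based flow is roughly this (the file names are just examples):

    # encrypt with a passphrase only; commit .env.gpg, never .env
    gpg --symmetric --cipher-algo AES256 --output .env.gpg .env

    # decrypt again on a fresh checkout
    gpg --decrypt --output .env .env.gpg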
As a follow-up to the web of trust, I was pretty excited about Keybase and the breadth of applications they enabled, with a slick UX for web-of-trust. Pity they didn't quite succeed (got acquired/acquihired by Zoom), but it would be wonderful if something like that got another life.
Just curious, which would be most reliable? One entity confirms it who confirmed 1000 previous results, 2 who confirmed 500, 10 who confirmed 100 or 1000 who confirmed 1 previously?
Would many thousands of entities, who confirmed hundreds of thousands of previous results be preferable over hundreds of thousands of entities, who confirmed many thousands of previous results?
The Arcan display server is a really cool idea. Even if it doesn't manage to get popular, I think there are ideas here that we could mine to use them in popular programs.
This. A hundred times this. The Cat9 stuff alone is so far ahead of the cut-and-paste, cookie-cutter things like Warp that some have thrown millions at, and yet that is not even close to what was just presented as a fun thing.
The latest EU-funded 'a12' things are also soooo high concept, but not a fever dream.
Not sure if you're looking for things as "trifling" as programming languages, but I do wish more people knew about Nim. It's fast, statically typed, reads more or less like Python, has a great effect system, etc. It's a joy to use. I've been working through writing an interpreter in it:
https://youtu.be/48CsjEFzyXQ
Thanks! I plan to record many more videos. Had some unplanned construction going on in my house so my recording setup is unavailable for a bit. As soon as it's done in a few weeks, I'll put out more videos.
Nim should be more popular, but it seemed to take some time to get started properly. It's now far more ready for serious use. Python also took some time before it took off, so there's hope.
I have a handful of Nimble packages. Lovely language, though I haven't done much with it recently. I wish it were easier to sell people on style agnostic syntax.
I was using Nim for some of last year's Advent of Code problems. I mostly liked the syntax. I was a bit bothered by the standard library having a snake_case and a camelCase reference for each function (if I'm remembering that correctly).
At the time, installing the Nim package manager, Nimble, also required me to have NPM. This was not ideal, but looking at [the nimble project install docs](https://github.com/nim-lang/nimble#installation) it seems like it is now packaged with the language.
Might try dusting it off for some AoC puzzles this year :)
Yes. It’s so you can maintain a consistent style in your code base even if your dependencies use different styles. Nim has excellent C/C++ interop and it’s relatively common to interact with C or C++ symbols directly, and being able to do this without needing to adopt the dependency’s style or wrap everything is nice.
In python, for historical reasons the logging module uses camelCase while most other modules use snake_case, so it isn’t really possible to use the logging module and maintain a consistent style. This is a non-issue in Nim.
The downsides of this approach are unfortunately that it makes wrapping certain low-level libraries an absolute pain in the ass (especially anything to do with keyboards). But overall it's a non-issue, tooling recognizes both styles and you don't notice it.
Sphinx [1] gets my vote. It's the docs system that powers most sites in the Python ecosystem so it probably looks familiar to you.
I call it a docs system rather than static site generator because the web is just one of many output targets it supports.
To tap into its full power you need to author in a markup that predates Markdown called reStructuredText (reST). It's very similar to Markdown (MD) so it's never bothered me, but I know some people get very annoyed at the "uncanny valley" between reST and MD. reST has some very powerful yet simple features; it perplexes me that these aren't adopted in other docs systems. For example, to cross-link you just do :ref:`target` where `target` is an ID for a section. At "compile-time" the ref is replaced with the section title text. If you remove that ID then the build fails. Always accurate internal links, in other words.
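Concretely, that cross-linking looks like this (the label name here is just an example):

    .. _install-guide:

    Installation
    ============

    Elsewhere in the docs, :ref:`install-guide` renders as the section
    title ("Installation"), and the build fails if the label is removed.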
The extension system really works and there is quite a large ecosystem of extensions on PyPI for common tasks, such as generating a sitemap.
The documentation for Sphinx is ironically not great; not terrible but not great either. I eventually accomplish whatever I need to do but the sub-optimal docs make the research take a bit longer than it probably has to.
I have been a technical writer for 11 years and have used many SSGs over the years. There's no perfect SSG but Sphinx strikes the best balance between the common tradeoffs.
I can't recommend this enough! It's such a quality of life improvement to get the powerful dynamic documentation features of rST and Sphinx (and its many extensions), but in the more pleasant and familiar syntax of Markdown. I use MyST + Sphinx for all my project docs now.
A Sphinx plugin[0] allows for writing in markdown, and I'd heavily encourage using it if you're looking to get widespread adoption of sphinx on a project or at a workplace. Rst is fine once you learn it but removing barriers to entry is useful.
edit: my understanding of feature parity in reST/Markdown seems outdated - comment below might be incorrect
The value prop of Sphinx goes down a lot if you're not using reST because you can't use the extensive catalog of directives, such as the ref directive that I mentioned in my first comment. If you must use Markdown then there's not much difference between Sphinx and all the other Markdown-powered SSGs out there. In other words there's not a compelling reason to use Sphinx if you've got to use Markdown.
From Sphinx's Getting Started page:
> Much of Sphinx’s power comes from the richness of its default plain-text markup format, reStructuredText, along with its significant extensibility capabilities.
It works with all docutils and Sphinx roles, and almost all directives, including extensions.
A notable exception is autodoc (automodule, autoclass, etc.), and any other directives that generate more rST. The current workaround is to use eval-rst:
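In a MyST Markdown file, that workaround looks roughly like this (the module name is hypothetical):

    ```{eval-rst}
    .. automodule:: mypackage.mymodule
       :members:
    ```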
I have always found Sphinx challenging, in both usability and syntax :(
It could probably do much more, but I went for pdoc3 for API docs and mdbook for documentation in general.
What I really hope exists is a system where I can reuse documentation (sections) in other pages, ergonomically.
I built that system multiple times to do preprocessing with things like including parts, special linking, or referencing images from anywhere.
irobinovitch just corrected me that the library that provides Markdown support for Sphinx supports the features [1] that I care about; I have to dig into the details, but if the feature parity is very good and you strongly prefer Markdown over reST, then I would say... go for the Markdown!
Just want to +1 this, and also add a twist. The Sphinx community also has a great extension called hieroglyph, which lets you use rST directives to build slide presentations which also double as single-page HTML notes documents.
This meant I could first write a blog post on learning Clojure as a Pythonista[1]; then turn some code samples and tables and images into slides I could present at a public talk on my laptop or desktop[2]; and then finally publish a public notes document that talk attendees could use to easily study or copy-paste code examples[3]. (The notes are the exact same contents of the slides, just rendered in a simple single-page HTML format, with each slide transformed into a section heading, with permalinks/ToC auto-generated.) So, this is generated HTML from a single .rst source[4], all the way down! And, of course, I could version control and render the .rst file powering the slides / notes / etc. in GitHub.
Note: the slides in [2] do not play well on mobile. You are meant to use keyboard arrows to advance and tap “t” to switch into tiled mode (aka slide sorter) and “c” to open a presenter console. The slides are powered by a fork of html5slides, which will look familiar if you’ve seen the JS/CSS slide template that Go core developers use in https://go.dev/talks (they generate those with “go present,” a different tool, though).
More recently, I have also used a similar-in-spirit tool called marp (https://marp.app) for generating technical slides from source, but the output and functionality was never quite as good as rST + Sphinx + hieroglyph. The big advantages to marp: Markdown is used as the source, some tooling allows for VSCode preview, and PDF export is fully supported alongside HTML slides.
I have a soft spot for Sphinx, not only because it was responsible for so much great documentation of Python open source libraries (including Python’s own standard library docs at python.org), but also because the first comprehensive technical docs I ever wrote for a successful commercial product were written in Sphinx/rST. And the Sphinx-powered docs stayed that way for a ridiculously long time before being moved to a CMS.
I used to do some professional services work, and magic wormhole was one of the most reliable ways for me to get files to clients whose companies blocked traditional file-sharing hosts like Dropbox and Google Drive.
I moved from Croc to wormhole-william as the author of Croc has dodged a bunch of security-related questions in various issues and I didn't feel comfortable continuing to use it.
Curious if there’s any web based implementation yet? I guess it’s more of a hassle to ask the receiving party to open up the sending session when you can just provide them an asynchronous link to a hosted file (Dropbox, mega, etc)
Asciidoc lightweight markup can be used in place of ANY complex XML-based CCS (component content system), e.g. DocBook, DITA, S1000D, or the 40-50-something MIL-STD "specifications". Asciidoc can do anything they can, and can do it cheaper, faster, and better, with standard tooling that's everywhere you have a computer.
I'm not sure I can type out, with trembling fingers, how many dollars have been flushed down the toilet of CCSs by businesses that either had no business experimenting with componentized content, or businesses that didn't have resources for training up staff, or vendors who literally evaporated like morning dew after they'd gotten their initialization fees. So just one single story: one prime aerospace vendor I worked with had started their road to S1000D publishing in 2009. Today - at the end of 2023, and more than twenty million dollars later, with a garbage truck full of sweat and blood - that system has not released a single publication to the end user. Not one.
But beware row-level security. It's great, but the query planning is a huge mess, and you can end up with serious performance issues real fast if you're not careful.
My understanding, perhaps flawed, is that row security policies add predicates to the SQL, which then encounters the query planner like any other. Is this not the case?
I'm not sure about this specific use case, but a reason for using cgroupv2 over rlimit is that cgroup allows you to limit the resources of a _group_ of processes, which is handy if, say, your Python script uses the `subprocess` module.
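A rough sketch of that difference from Python (this assumes cgroup v2 mounted at /sys/fs/cgroup, sufficient privileges, and a made-up group name):

    import os
    import resource
    import subprocess

    # rlimit: per process. Each child gets its own 512 MB allowance, so a
    # script that spawns subprocesses can still exceed what you intended.
    limit = 512 * 2**20
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    # cgroup v2: the limit covers the whole group, children included.
    cg = "/sys/fs/cgroup/myscript"
    os.makedirs(cg, exist_ok=True)
    with open(os.path.join(cg, "memory.max"), "w") as f:
        f.write(str(limit))
    with open(os.path.join(cg, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))   # move ourselves (and future children) into the group

    subprocess.run(["python3", "-c", "x = bytearray(10**9)"])  # counted against the group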
Since I generally have no clue what technologies are popular (other than the obvious big name projects) I'll just toss out some interesting links I've recently bookmarked in comments here.
I don't know if they're "unpopular", but I think the BEAM family of languages (Erlang, Elixir, LFE, etc.) could be used more. I read about more and more problems people have on here and just think that they'd go away on the BEAM.
My absolute favorite framework I've ever worked with is Akka.NET, and it taught me how to approach concurrency in a different way. Actor-based infrastructures and other Erlang-inspired concepts are really just wonderful and they need a whole lot more attention, yes!
I worked for a long time with Akka and Scala and share the same sentiment. It packed so much power and yet felt so intuitive. Now every time I pick up a new language I instinctively look for an Actor based framework in it.
Lithium Titanate batteries. Nothing else is lightweight, safe, currently available, and lasts 20000 cycles.
ESPHome. It's a framework for declaratively building firmware for microcontrollers, based on rules like "This pin is an input with debouncing, when it changes, toggle this".
Contributing to them has probably been the most fun I've had programming in years.
We just need power management, and a C++ implementation of the Native API client. It's so close to being able to replace most of what I'd normally code by hand in Arduino.
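For a sense of what the declarative rules look like, a "debounced input toggles an output" rule is roughly this in ESPHome YAML (pin numbers and names are examples):

    binary_sensor:
      - platform: gpio
        pin:
          number: GPIO4
          mode: INPUT_PULLUP
          inverted: true
        name: "Button"
        filters:
          - delayed_on: 10ms      # debounce
        on_press:
          - switch.toggle: relay_1

    switch:
      - platform: gpio
        pin: GPIO5
        id: relay_1
        name: "Relay"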
They fix so many issues. Linear patterns can duplicate other linear patterns!
Vorta: It's the best backup technology I've seen. Just an easy guided GUI for Borg, which gives you deduplication. I just wish they let you deduplicate across multiple repositories somehow.
I've been looking for a more convenient way to configure some ESP32-based input devices (similar to macropads). I was interested in QMK, but it doesn't support ESP32. So far I've been using MicroPython / CircuitPython, which I generally like, but on multiple occasions I've thought "I wish I could just put this in a config file."
The matrix keypad and key collector components look similar to what I was looking for. Can the key collector be used with other multiplexing methods like shift registers?
MicroPython was what I used before ESPHome too! I think ESPHome could really benefit from a scripting component, but adding one seems like lots of work.
You can send keys directly to the key collector from wherever you want, but you'd probably have to configure an individual action for each key, unless there's a feature I'm not seeing.
Maybe you could create a new ShiftRegisterKeypad component?
Definitely Forth and Factor. Every programmer should get a little bit familiar with the concatenative style. Three days ago I discovered a channel on YouTube which talks about Forth; even Chuck Moore himself did a talk there about green threads and the like.
Given that WebAssembly is a stack language with no GC, I do expect a comeback of concatenative programming some time in the future.
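For anyone who hasn't seen the style, a tiny Forth taste; words compose by leaving their results on the stack:

    : square  ( n -- n^2 )  dup * ;
    : sum-of-squares  ( a b -- a^2+b^2 )  square swap square + ;
    3 4 sum-of-squares .   \ prints 25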
Agreed, but I don't recommend thinking of wasm that way. Wasm is not written as a proper stack machine; it's just a way to represent computations that can be optimized across architectures.
I literally made this mistake, creating a wasm interpreter, before I realized it was a terrible runtime bytecode.
Came here to say this, so I entirely agree. I found Forth and the concept of concatenative languages after deep study of the fundamentals of computing, specifically studying Lisps and the Lambda Calculus. Eventually found combinators and the Iota combinator. Finally hit the bottom of the rabbit hole!
It really does give the lightbulb moment. “Don’t try to generate code, that is impossible. Only try to realize the truth… There Is No Code (only data)”
I went through a similar path! Concatenative appeared to me like the most economical paradigm one can possibly come up with, an ultimate reduction, for which there's practically no path to further downward abstraction. It feels more like a primitive building block than anything else.

I always admired the design of Unix pipes, and flow-oriented programming in general; then you realize that these things are just natural to stack processing, you need to introduce nothing. It's like you're programming with order itself.

Programming is taught and practiced in a very convoluted way, and it makes you think that complexity must somehow stem from the lower levels of abstraction, until you get a grip on stack virtual machines; they couldn't be simpler in their innate mechanics. I don't know if it's only me, but I used to think Turing completeness was something challenging to achieve in a system, a hallmark of sophisticated complexity, but as I understood stack-based languages I realized it's the opposite: it's a hallmark of simplicity. I wonder what it's like to have had Forth et al. as a first language…
RDF and the semantic web used to be my go-tos for this, as I believe many of the core ideas are still valid, often overlooked, and sometimes even poorly re-implemented. Which says something.
However, lately I've come to like llama.cpp and friends. Yes, it's not a ChatGPT miracle or whatever, but how often do you /actually/ need that? Despite its tremendous popularity, it still seems like something more people should know about. For me, I've had great fun with running LLMs locally and experiencing their different "flavors" from a more "phenomenological" (what is it like to use them) perspective rather than a technological one.
I’m doing a personal project using RDF. Not semantic web. Not OWL. Just “raw” RDF. And I really like it.
It’s perfect (so far) for my purposes of an extensible data model.
I’m sure others have augmented applications with “generic” data types (like properties and such). You always walk this fine line where, if you fall too far, you find you’re writing a database on top of a database.
We’ve also in the past fallen into that hole when building a DB schema: we stumble into what we coined the “absurd normal form” or, also colloquially, the “thing-thing” table that relates everything to everything.
Well, RDF is the thing-thing table, and it just embraces it. And for my project it’s a lot of fun. I have structured types, with specialized forms and screens. But, if desired, the user can jump into adding relations to anything. It’s essentially an RDF authoring environment with templates and custom logic to make entities. And in the end they can always dive into SPARQL to find whatever they want.
It’s not intended to work with zillions of data items; it’s just a desktop tool. I always found it interesting early on that the primary metric for triple stores was how fast they could ingest data; I guess nobody actually queried anything.
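A minimal sketch of the "raw RDF" approach using Python's rdflib (all the names and URIs here are made up):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    # Everything is a (subject, predicate, object) triple: the thing-thing
    # table, embraced.
    EX = Namespace("http://example.org/")
    g = Graph()

    g.add((EX.project1, RDF.type, EX.Project))
    g.add((EX.project1, RDFS.label, Literal("Analog computing notes")))
    g.add((EX.project1, EX.relatedTo, EX.tidePredictor))   # relate anything to anything
    g.add((EX.tidePredictor, RDFS.label, Literal("Kelvin's tide predictor")))

    # ...and SPARQL to find whatever you want later.
    q = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?thing ?label WHERE {
        <http://example.org/project1> <http://example.org/relatedTo> ?thing .
        ?thing rdfs:label ?label .
    }
    """
    for row in g.query(q):
        print(row.thing, row.label)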
* IPv6. A genuinely useful tool, in particular for homelabs: multiple globally routable addresses per machine. One address per service. No need for Host/SNI vhosting. Works well with containers. To get v6 support, either find ISPs/SIMs that do v6, or wireguard to a VM that provides a /56.
* SSH ForcedCommand. Lots of use cases here, for backups, file storage, git, etc. (see the sketch after this list).
* Verilog as a tool for software developers to learn digital electronics. VCS/code/simulation/unit tests are all a lot more familiar and expected for developers.
* Writing tools yourself. There's often decent stable libraries that do 90% of what you want, and the remaining 10% is less effort than dealing with awkward integration with off-the-shelf tools. This relies on having low overhead packaging/deployment, e.g. Nix/Guix/Bazel.
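The ForcedCommand sketch mentioned above: a key in authorized_keys (or a Match block in sshd_config) that can only ever run one command, whatever the client asks for. Paths, keys, and user names are examples:

    # ~/.ssh/authorized_keys on the server: this key can only run borg serve
    command="borg serve --restrict-to-path /srv/backups/alice",restrict ssh-ed25519 AAAA... alice@laptop

    # or globally, per user, in sshd_config
    Match User gituser
        ForceCommand /usr/bin/git-shell -c "$SSH_ORIGINAL_COMMAND"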
In the vein of your last point, I use ChatGPT4 to write all my one-off scripts for odd tasks. Without knowing Python, I worked up a script that can figure out which asset I have selected in UE4, grab the text from it, send it to ElevenLabs to create a text-to-speech conversion, convert the downloaded mp3 to wav, import it into UE4, and then set that as the asset's (dialogue wave) voice-over line…
If you just want to play with IPv6 on a VM, most VM providers will offer a /64, which is enough to have an address per service on your machine. If you wanted to play with IPv6 on multiple subnets, you'll need something larger than a /56, since subnets should be /64.
I rely on my home's v6 /56, so I don't have experience with using VMs for this, but I know of a few providers that offer /56 (and above):
* Mythic Beasts and Linode offer a /56 on request. They're not cheap VM providers though.
WinCompose¹, or more generally, use of a Compose key² to type all sorts of Unicode symbols or really any character (sequence) you like. People are used to thinking that they mostly can’t type what they don’t see on their keyboards, but a Compose key provides a universal method to type a large repertoire of characters through use of mnemonics.
I used to use the compose key a lot; currently I really like Espanso.
It does arbitrary text replacement, has some pretty fancy features, but is also quite useful for turning \alpha or :laughing: into the symbols I want.
- Capability Based Security (NOT the permissions flags on your phone or "app") - Offers the possibility for honestly secure computing
- Data diodes (unidirectional networks) - allow you to monitor a network without allowing external control (or only submit info, never ever exfiltrate it)
- GNU Radio - you can play with your audio ports and learn instinctively how to deal with all the stuff that used to require DSP chips... then apply that knowledge with a $30 RTL-SDR dongle.
- Lazarus - seconding the above... a really good Pascal GUI IDE. The documentation needs work, but it's pretty good otherwise.
Bottle.py: uber-fast and simple python web microframework, about 3x faster, saner, and more memory-efficient than Flask in my experience: https://github.com/bottlepy/bottle
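For scale, a complete Bottle app is about this much (the route and port are arbitrary):

    from bottle import route, run

    @route("/hello/<name>")
    def hello(name):
        return f"Hello, {name}!"

    run(host="localhost", port=8080)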
Fossil: distributed version control and much more in a single executable, from the creators of SQLite: https://fossil-scm.org/
In the same vein, I'd name Tornado (www.tornadoweb.org). Also rather small and comprehensible, but with full async support that's evolved extremely nicely. Generally I love how well-designed and maintained it is.
> Pick was originally implemented as the Generalized Information Retrieval Language System (GIRLS) on an IBM System/360 in 1965 by Don Nelson and Dick Pick [...]
My first job involved working on a Pick system. The system started life on a Prime mainframe and was migrated to UniVerse on Solaris.
I seriously miss it.
Every once in a while I try to get back into it. Usually it takes the form of trying (and failing) to get a demo/personal version of UniVerse, but lately I've been poking at ScarletDME a little bit. I'd even pay money (not much since this is just hobby stuff, but some) for UniVerse, but even the cost of it seems to be a closely guarded secret.
Thanks, Mister_Snuggles, for reminding me I'm not the only one left.
I HAVE to code in PICK.
"Unless it comes out of your soul like a rocket, unless being still would drive you to madness or suicide or murder, don’t do it." - Charles Burkowski
(Funny, they named the current support company "Rocket".)
Here's the link to the current UniVerse trial version (free and good until 04/2025). Get it, install it, and make something with it. Please don't let that part of you die.
Yup, this is exactly where I get to when I try and fail to get UniVerse.
What's the trick to making that form work? It won't accept my @gmail.com address, and I don't really want to use my work email address and potentially mis-represent things. Especially since my work used to use one of Rocket's products.
I have to admit I tried to edit the post multiple times and even checked the formatting options https://news.ycombinator.com/formatdoc (having links between angle brackets didn't work :( ). Sorry for the inconvenience, and thank you for making the effort :)
"vopono is a tool to run applications through VPN tunnels via temporary network namespaces. This allows you to run only a handful of applications through different VPNs simultaneously, whilst keeping your main connection as normal.
vopono includes built-in killswitches for both Wireguard and OpenVPN."
I’ve really enjoyed working with EdgeDB. I totally agree. I’m on a project now that’s using firebase/firestore and edge seems dramatically better suited, but it would be a hard sell.
I do all my <3 mile trips on an ebike these days unless it's raining/snowing or I need to carry something large. It's great. The lifetime cost of ownership is a little more than my annual running costs for my car.
To add, you can convert acoustic bikes to e-bikes with torque sensing using the TSDZ2, which is pretty decent once flashed with the open-source firmware by casainho.
Sorry, but many localities make it illegal for bicycles on the sidewalk (where they should be doing it). That's why many folks do it.
Laws making that illegal are extra stupid since it's relatively hard to kill a pedestrian with a bicycle but downright easy to kill a cyclist with a car.
> Sorry, but many localities make it illegal for bicycle on the sidewalk (where they should be doing it).
No, they shouldn't. The sidewalk is for pedestrian traffic; that's what the "walk" in the name signifies.
> Laws making that illegal are extra stupid since it's relatively hard to kill a pedestrian with a bicycle
Sidewalks can't handle much bike traffic, are suboptimal for it (which is why purpose-built separated bicycle trails are built like roads, not sidewalks), and are in many places less safe for bicyclists, crossing driveways with less visibility for drivers and bicyclists than is the case with the road proper.
Python took 20 years after its introduction to become as popular as it is today, thanks to its more intuitive syntax that was based on ABC.
I really hope that 20 years after its introduction, D will be appreciated and become a de facto language, not unlike Python is now. Perhaps it will be even more popular now that the advent of connected tiny embedded sensors and machines in the form of IoT is upon us.
Matrix has some usability hurdles. I invited people to join our Matrix and a few did, but when I switched to inviting them to Discord, 10x more people came by and are still there. I prefer Matrix for several reasons but you go to where the people are.
For some reason, the best note-taking tool ever made (Verbatim) was built rather quietly by a member of the American Competitive Debate Community. I literally made people in college jealous and upset when they saw how easy it was for me to take well structured notes using Verbatim: https://paperlessdebate.com/verbatim/ (but then I showed them how to install it and they were happy)
A whole lot of innovation from the competitive debate community has quietly existed for decades now. Hopefully one day SV discovers all the cool shit debaters have been building for themselves.
- I'd like Emacs/org-mode knowledge to be common, at least starting from universities, because we need the classic desktop model, and Emacs is the still-developed piece of software implementing it, alongside Pharo; but Pharo is usable only to play and develop, while Emacs is ready for end-user usage with a gazillion ready-made packages;
- feeds, in the broad sense, meaning XML automation on the net/web, so I can get my bills just from a feed reader, all transactions digitally signed so both parties have proof (ah, of course a national PKI is mandatory), news, anything in the same way, making my information mine, in my hands, instead of wasting time in a gazillion crappy services;
- IPv6 with a global address per host, so we can finally profit from our modern fiber-optic connections instead of being tied to someone else's computer, i.e. "the cloud";
- last, but just an aside: R instead of spreadsheets for the business guys, so they no longer produce and share crappy stuff, and LaTeX for similar reasons, to produce nice-looking PDFs...
Black and white film processing. It used to be taught in schools. Many schools still have their darkrooms and no longer use them. It is a practical application of physics, chemistry, and art.
As a kid (11 or 12) I made an enlarger out of an old discarded slide projector, a dimmer switch and a scrap wood frame. I managed to scrounge enough money to buy supplies to make a few prints-- it worked pretty good. But, supply costs were out of reach, so those first prints were all it ever made.
For writing documentation: AsciiDoc [1] as fileformat.
For publishing documentation / to build the web site: Antora [2].
AsciiDoc has a few more features than Markdown, which allows for a richer and more pleasant presentation of the docs.
Antora allows you to keep the project documentation in the actual project repositories. It then pulls the docs from all the different repos together to build the site. This also allows the released product versions to stay in sync with the docs versions. Antora builds each version of the product as part of one site. The reader can explore different product versions or navigate between pages across versions.
- IPv6 - It's 2023 and it's still not deployed correctly and universally. Github, Github Copilot, Chromium-based browsers, and Amazon Alexa give up in the presence of IPv6.
- DNSSEC+DANE - It's half-assed deployed but there's a lack of end-user UX
- wais - search before gopher
- afs - distributed fs
- discard protocol - basically, a network-based /dev/null
- humans.txt - Not around as much as it was
- makeheaders - Auto-generated C/C++ headers
- man page generators - ronn and help2man
- checkinstall - The automatic software package creator
- bashdb and zshdb
- crystal - Compiled Ruby-ish
- forth - Powered the FreeBSD bootloader menu for many years, as well as dedicated word processors (those typewriter-like almost-computers)
- ocaml - The best ML, used by Jane Street and Xen
- pony - A language built around an arguably better GC than Azul's C4 with arguably stronger sharing semantics than Rust
- prolog - Erlang's grandpa
- rpython - PyPy's recompiler-compiler
- pax - POSIX archives
- shar - shell archives - Self-extracting archives that look like scripts at the beginning
- surfraw - Shell Users' Revolutionary Front Rage Against the Web - founded by Julian Assange
- step-ca - A Go-based PKI server
- dmraid - Because it works
- X10 - Before WiFi and IoT, there was the Firecracker: a parasitic power serial port RF outlet controller
- FreeBSD - It's not unknown or obscure per se, but it powers many important things in the civilized world without getting much credit
- :CueCat - A dotcom era barcode reader that was given away
- Xen - If you need to run a VPS but can't ju$tify u$ing VMware
- KataContainers - k8s but with VM isolation
- stow - software management by symlinks
- habitat - similar philosophy as nix but not as clean and functional and almost like Arch PKGBUILD but with more infrastructure around it
yes! so many times I reach for Prolog because it's a perfect fit for a problem. like modeling constraints, or generating a plan, or working out a type system. it's relatively easy to switch between functional and imperative styles, but replacing Prolog requires writing heavy algorithms.
Lightweight often just translates to fewer features. Unless you're rewriting a truly bad piece of software, your "lightweight" alternative will be just as heavyweight once you're done reimplementing everything.
It's a single go executable that's much easier to install and keep up to date. It's been a while since I've used pihole but Adguard Home also had a better GUI when I first started using it.
For me the main reason to switch to AdGuard was that it can easily run on OpenWRT (and PiHole can't). It's really convenient to run DNS adblock on the same device as your router.
Which platform? On Windows there is a straightforward installer, and afterwards `haxelib` command installs e.g. HaxeFlixel, HaxeUI with all of the dependencies without any hiccups.
Big fan of tusker (https://github.com/bikeshedder/tusker) for PostgreSQL migrations. Tusker takes a SQL-first approach: you write your schema in declarative DDL (I have my entire project in one schema.sql file) and when you edit it, tusker generates the SQL code required to migrate. It uses temporary test databases to run both your declarative DDL and your step-by-step migrations to ensure they stay in lockstep. And it can connect to live databases and diff your schema/migrations against reality. I've never seen a better toolkit for schema evolution.
There's a whole category of utopian developer environments and languages with far bigger aspirations than they were able to achieve but that still influenced others. Smalltalk being another example.
One of my favorite things about the old C2 Ward's Wiki is that it's like an archaeological site where time is frozen in that period, and you can browse through preserved arguments about how Smalltalk and Extreme Programming will take over the world.
I keep hearing about it and its influence, but I can't really figure out if it's active or dead, or even if I can use it on a virtual machine or so. 9p.io doesn't seem to even load on my machine.
The Haxe programming language (https://haxe.org/). It's insane how unpopular this is compared to its value.
"Haxe can build cross-platform applications targeting JavaScript, C++, C#, Java, JVM, Python, Lua, PHP, Flash, and allows access to each platform's native capabilities. Haxe has its own VMs (HashLink and NekoVM) but can also run in interpreted mode."
It's mostly popular in game dev circles, and is used by: Northgard, Dead Cells, Papers, Please, ... .
I’ll second this by saying I am currently building a game with Kha, a low level rendering Haxe library that targets all major platforms including consoles. I am utilising the extremely performant 2d immediate mode api which is refreshingly minimalist and exceptionally well designed.
Second time I see Haxe mentioned. I have had a ‘google’ (with Brave search) but I am still wondering why. Is it just the multitude of platforms, or is it something like ergonomics, flexibility, productivity, tooling, ecosystem, production readiness, etc.?
(1) Zulip Chat - https://zulip.com/ - seems to be reasonably popular, but more people should know about it
I’ve been using it for over 5 years now [1], and it’s as good as ever. It’s way faster than any other chat app I’ve used. It has a good UI and conversation model. It has a simple and functional API that lets me curl threads and write blog posts based on them.
(only problem is that I have to Ctrl-+ in my browser to make the font bigger – I think it's too dense for most people)
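To make the API point concrete: here's a minimal sketch of pulling a topic's messages with the official zulip Python bindings. The stream/topic names are made up, and the same thing works with plain curl against the REST endpoints.

    # pip install zulip; credentials live in ~/.zuliprc
    import zulip

    client = zulip.Client(config_file="~/.zuliprc")

    # Fetch the last 100 messages from a hypothetical stream/topic.
    result = client.get_messages({
        "anchor": "newest",
        "num_before": 100,
        "num_after": 0,
        "narrow": [
            {"operator": "stream", "operand": "general"},
            {"operator": "topic", "operand": "analog computing"},
        ],
    })

    for msg in result["messages"]:
        print(msg["sender_full_name"], ":", msg["content"])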
A gem from the 90’s, which people have done a great job maintaining and improving (getting Go and Rust target support in the last few years).
I started using it in 2016, and used it for a new program a few months ago. I came to the conclusion that it should have been built into C, because C has shitty string processing – and Ken Thompson both invented C AND brought regular languages to computing !!
In comparison, treesitter lexers are very low level, fiddly, and error prone. I recently saw dozens of ad hoc fixes to the tree-sitter-bash lexer, which is unsurprising if you look at the structure of the code (manually crawling through backslashes and braces in C).
You mentioned regex, and there used to be an awesome web engine that basically boiled samples down to regex search patterns, but now it's a non-working, creepy SEO page or something.
http://txt2re.com/
I'd love for it to be back online but can't find the author.
Everyone who wants to be a robust developer should get at least a little experience with some non-c-like languages.
Of course someone will reply with a more complete language, but I'll start by throwing out array-based languages, in the form of J: https://www.jsoftware.com/#/
Once you really get your head around composing verbs, really working with arrays, and using exponents on functions, it's mind-expanding.
Gomplate is a super easy templating tool that wraps Go's template library with some useful built-in functions. It's not as extensive as Starlark but it is dead simple to get going with.
Also, my company (VMware) has a really powerful YAML templating engine called ytt. I originally hated it and dunked on it constantly but have grown to love it. It makes creating composable and modular YAML really easy; it's extremely unfortunate that this is a real need, but when you need it, you need it.
Lastly, Cucumber isn't _unknown_ unknown, but I wish it was more widely used. Behavior testing is really useful even if the program has great test coverage underneath. Being able to express tests in pure English that do stuff is powerful and can be a bargaining chip for crucial conversations with product sometimes if done correctly. I mean, yes, we have GPTs that can write tests from prompts written in your language of choice and GPT Vision can manipulate a browser, but Cucumber is an easy stand-in IMO that is cheap and free!
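For anyone who hasn't seen it, the plain-English tests look roughly like this. This is just a sketch using behave, one of the Python flavors of Cucumber; the feature, the steps, and the little stand-in "app" are all made up.

    # features/steps/login_steps.py -- glue that makes the English executable.
    # The Gherkin itself lives in features/login.feature:
    #
    #   Feature: Login
    #     Scenario: Valid user signs in
    #       Given a registered user "alice"
    #       When she signs in with the correct password
    #       Then she sees her dashboard
    from behave import given, when, then

    # Stand-in "app" so the sketch is self-contained; a real suite would
    # drive your actual application or a browser here.
    USERS = {}

    def sign_in(name, password):
        return {"title": "Dashboard"} if USERS.get(name) == password else {"title": "Login"}

    @given('a registered user "{name}"')
    def step_registered_user(context, name):
        USERS[name] = "s3cret"
        context.name = name

    @when("she signs in with the correct password")
    def step_sign_in(context):
        context.page = sign_in(context.name, "s3cret")

    @then("she sees her dashboard")
    def step_sees_dashboard(context):
        assert context.page["title"] == "Dashboard"

Running `behave` then reports each scenario in the same English the product folks read, which is where the bargaining-chip value comes from.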
I had experience with OpenResty; I was so proud to kill it in one of the projects. I didn't know Lua/MoonScript well enough, and the tooling and debugging weren't that great. While the idea is nice, everything around it was too much for me.
When I was at school (a long time ago), they tried to explain computers to us by making an adder using beads and matchboxes. We didn't have classroom computers back then. I'd like to know how that worked.
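I don't know the exact classroom design either, but the usual trick is that each matchbox plays the role of a logic gate and the beads stand in for 1-bits and carries. A rough sketch of the same idea in Python: a full adder built from gate operations, chained into a ripple-carry adder.

    # A guess at the idea behind the matchbox adder: each "box" is a gate,
    # each bead a 1-bit, and the carries ripple from box to box.

    def full_adder(a, b, carry_in):
        """Add three bits; return (sum_bit, carry_out) using only gate ops."""
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def ripple_add(x_bits, y_bits):
        """Add two little-endian bit lists, e.g. 6 -> [0, 1, 1]."""
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        out.append(carry)
        return out

    # 6 + 3 = 9  ->  [1, 0, 0, 1] little-endian
    print(ripple_add([0, 1, 1], [1, 1, 0]))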
I've always wanted to build a digital clock entirely running on fluids. It would use fluid gates, and present a digital display by pushing blobs of coloured immiscible liquids back and forth through glass tubes (perhaps arranged as a seven-segment display). The counter itself would be made using fluid gates (which I don't know how to make). It would be slow; but for a wallclock with minute precision, you hardly need nanosecond gates.
I have to recommend ComPressure for an accurate taste of the challenges in designing those kinds of systems, on top of it being my favorite Zachtronics-style puzzle.
If you have a distributed system, don't want to spend a lot of time wrestling with ELK, or fell out of your chair when you opened the Splunk bill, Loki offers 90% of the features with an OSS model and very simple deployment.
Complete game changer.
Very simple to understand data model, alerts on logs, extracting and grouping data from structured or unstructured logs, combining logs across sources, and it scales to both small and big systems.
It's surprising other tools in the same space have such a hard time hitting the right balance between capability and cost+complexity. Logs are so essential you would think the tooling space around them was better.
A company I used to work for used the Versant OODBMS (object-oriented DBMS).
It was truly interesting. Long story short, you stored your objects in the database, along with other objects. No object-relational mismatch.
Queries meant taking a subset of a graph (of objects). It was fast and performant, and fairly solid.
It's essentially the result of asking "what if database development had taken a different turn at some point?".
If the owning company released it under some kind of open source license (maybe open core, BSL, or one of those new revenue-friendly licenses), it could probably get very, very popular.
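Versant itself is proprietary, but if anyone wants a feel for the "just store the object graph" model, ZODB in Python is an open-source stand-in. A minimal sketch (in-memory storage, made-up Person class; not Versant's API):

    # pip install ZODB -- not Versant, just an open-source taste of the OODBMS idea.
    import transaction
    import ZODB
    from persistent import Persistent

    class Person(Persistent):
        def __init__(self, name):
            self.name = name
            self.friends = []            # plain object references, no join table
            # (for mutations after a commit you'd use persistent.list.PersistentList)

    db = ZODB.DB(None)                   # None -> throwaway in-memory storage
    conn = db.open()
    root = conn.root()

    alice, bob = Person("alice"), Person("bob")
    alice.friends.append(bob)            # the graph is just... objects
    root["people"] = {"alice": alice, "bob": bob}
    transaction.commit()

    # "Querying" is walking the graph -- no object-relational mapping involved.
    print(root["people"]["alice"].friends[0].name)   # -> bob
    db.close()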
Tokenizes Chinese text into "words" for learning purposes and then renders the text in a GUI where you can click on a word to get the definition. It's not perfect, but an LLM fine-tuned for it will eventually result in much better "tokenization".
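To give a feel for the tokenization step described here: a sketch using jieba, a common open-source Chinese segmenter (not necessarily the tool in question); the glossary dict is a made-up placeholder for a real dictionary like CC-CEDICT.

    # pip install jieba -- just to show what the "words" step looks like.
    import jieba

    text = "我喜欢学习中文"
    words = jieba.lcut(text)             # e.g. ['我', '喜欢', '学习', '中文']

    # Hypothetical glossary; a real app would look words up in CC-CEDICT or similar.
    glosses = {"我": "I", "喜欢": "to like", "学习": "to study", "中文": "Chinese"}

    for w in words:
        print(w, "->", glosses.get(w, "?"))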
https://www.edgedb.com/ is pretty amazing: Postgres queried with a modern language, you can treat relational data like graphs without dealing with joins... baked-in migrations management, and more...
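A taste of what "graphs without joins" means in practice: a rough sketch with the edgedb Python client, against a hypothetical Movie/Person schema (the client API may differ between versions).

    # pip install edgedb -- the schema below is hypothetical:
    #   type Person { required property name -> str; }
    #   type Movie  { required property title -> str; multi link actors -> Person; }
    import edgedb

    client = edgedb.create_client()      # reads connection info from the project/env

    movies = client.query("""
        select Movie {
            title,
            actors: { name }             # nested shape instead of an explicit join
        }
        filter .title ilike <str>$t
    """, t="%blade%")

    for m in movies:
        print(m.title, [a.name for a in m.actors])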
I second it. When I need to bring designers' ideas to life, I reach for Tailwind, but when I just need sensible default components to avoid thinking about design, Bulma is the way to go.
I just visited bulma.io and couldn't help but notice the sketchy-looking Patreons/GitHub sponsors that Bulma has listed. Lots of casinos are supporting Bulma.
Are these genuine Bulma customers happy to support a product they use, or am I witnessing some new way of money laundering here? What is the Phone Tracking app doing there? I mean, Bulma needs all the support it can get, but what does a casino want from Bulma?
I can't imagine it's money laundering; they could just be using the sponsorship as advertising. I don't really like it, but there are 56 logos on the homepage, presumably mostly at the $100/month tier, which works out to roughly $5,600/month and is a decent income for the project. The fact that the project needs to advertise casinos to make money is a sign the FOSS model isn't sustainable, but I prefer this to Bulma not existing or becoming paid.
Don't know if https://inertiajs.com can be classified as unpopular, but I think it's one of the best web solutions, combining the best of both worlds: SSR and SPA.
Honestly, yoga. I'm not talking about the stretching contests they do in most gyms, I'm talking about the real deal. When your sexual energy gets drawn up into your body as your nervous system is awakened you get a new definition for the word 'ecstasy'. It makes you want to jump up and down like a child. You laugh out loud spontaneously at discovering the best secret in the world.
A few people on HN are into Buddhist meditation - I read mentions of Culadasa's The Mind Illuminated or Ingram's book. Indeed I've done several Vipassana and Zen retreats, but they just aren't as integrated as yoga's 8 limbs. They may lead to the same place eventually, but I think they take much longer.
If there was a device that made people feel as good as the awakening nervous system, the inventor would be a multi-millionaire, no question (in fact I think I heard a Western monk is involved in a startup to try to create one). It is truly unparalleled and something actually worth experiencing (from what I've seen so far).
For those interested, here are 2 resources I've found helpful in experiencing these changes for myself:
Heh. https://github.com/bottledcode/durable-php is a semi-faithful php port of Orleans, borrowing some ideas from similar things too. I’ve actually been working on some really neat FFI things for this the past few weeks.
Same for every comment. You (not you, other commenters) "wish people knew more about X" but can't be bothered to write more than the acronym? Downvote. Explain what it is, why you care enough to comment it in this thread, why anyone else might be interested, link to a page, something, anything.
The page for this is somewhat awful. What platform is this for? Something for some VMs, I suppose. Linux clients, then? What hypervisors are supported, or is it ambiguous?
Yeah there’s a reason no one’s heard of it. It’s basically the installer & updater for Cloud Foundry but there’s a bunch of other incidental stuff that’s had Bosh Releases made for it over the years, including Kubernetes.
It creates VMs. Mostly Ubuntu Linux but there’s a slightly demented way to deploy Windows boxes too.
Hypervisor support is provided by a plugin system called the Cloud Provider Interface. Last I heard, vSphere, GCP, Azure and AWS are all reasonably well tested and maintained by their respective companies. OpenStack technically is there but it's a nightmare and not well commercially supported. I've heard of stuff being deployed to Alibaba and Oracle but never seen those systems myself.
In practice this is mostly used to manage VMs into vSphere clusters.
I tried to use U++ way back when, and bounced off really hard (and I've used lots of different frameworks and environments). Perhaps things have gotten better documented since?
I haven't used it in years, but when I did, it was an absolute nightmare to work with, especially with respect to version control. There was no good way to version control their "templates", and collaborating was always painful. Not to mention the interface felt like it was from the 90's and was dreadfully slow and painful to maneuver. The underlying functionality it permitted was quite good though. A facelift and a better version control scheme would help a lot.
I'm thinking unpopular could mean the tech is polarizing or frequently dismissed/overlooked.
* APL -- I haven't dedicated the time to learning it, in part because there's little support where I normally work. I'd love for APL to be adopted like a ___domain-specific language, a la Perl-compatible regular expressions, in various languages (April in Common Lisp, APL.jl in Julia).
* regular expressions. https://xkcd.com/1171/
* bugs/issue tracking embedded in git https://github.com/MichaelMure/git-bug/
But I'm more excited for things that fall into the niche/lesser-known side of unpopular. I love finding the little gems that change how I organize or work with the system.
* "type efficiently by saying syllables and literal words" https://sr.ht/~geb/numen/
* I use the fasd[0] 'z' alias for jumping to previous directories in the shell every day.
* Alt+. in the shell (readline, bash) to get the previous command's last argument is another ergonomic time saver that I think is relatively obscure. I have a bash wrapper to combine that with fzf for quick any-previous-command-argument fuzzy search and insert [1]
* zimwiki [2] (and/or a less capable emacs mode[3]) for note taking has served me well for a decade+
* DokuWiki's XML-RPC [4] enables local editor edits to a web wiki. I wish it was picked up by more editor plugin developers. (cf. emacs-dokuwiki [5])
* xterm isn't unpopular per se, but I don't see sixel support and title-setting escape codes talked about often. I depend on a bash debug trap to update the prompt with escape codes that set the terminal title [6]
Greenclip is exactly what I've been looking for! Thanks!
Also how do you use zimwiki? I've been trying it for a month and I don't find it that great compared to something like Obsidian or QOwnNotes or even TiddlyWiki. Do you have a specific workflow?
Yeah! On the actual notetaking side: I think I stumbled into a less deliberate "interstitial journaling" paradigm (a la Roam Research?). I set up the journal plugin to create a file per week, and from there I keep a list of links to project-specific files (hierarchies like :tools:autossh, :studies:R01grant:datashare). I also backlink from the project file to the journal file. So each page looks like a log. I try to aggressively interlink related topics/files.
I have an ugly and now likely outdated plugin for Zim to help with this. There's a small chance the demo screenshots for it help tie together what I'm trying to say. https://github.com/WillForan/zim-plugin-datelinker
On the tech side:
My work notes (and email) have shifted into Emacs, but I'm still editing zimwiki-formatted files with the many years of notes accumulated in them. Though I've lost it moving to Emacs, the Zim GUI has a nice backlink sidebar that's amazing for rediscovery. Zim also facilitates hierarchy (file and folder) renames, which helps take the pressure off creating new files. I didn't make good use of the map plugin, but it's occasionally useful to see the graph of connected pages.
I'm (possibly unreasonably) frustrated with using the browser for editing text. Page loads and latency are noticeable, editor customization is limited, and the shortcuts aren't what I have muscle memory for -- an accidental ctrl-w (vim: swap focus, emacs/readline: delete word) is devastating.
Zim and/or Emacs is super speedy, especially with local files. I use syncthing to keep computers and phone synced. But, if starting fresh, I might look at things that use markdown or org-mode formatting instead. Logseq (https://logseq.com/) looks pretty interesting there.
Thank you for the long answer! You've made some really great points, and regarding markdown and org-mode I've been thinking about switching to something like djot instead (from the author of pandoc) but I can't deny the power of emacs and org-mode when combined.
Also your "interstitial journaling" paradigm seems great, I'll try to apply it because I enjoy grounding what I do into some loose chronology kinda.
Thanks again for taking the time to expound on your approach!
Related to this, Revo Uninstaller. Sometimes programs don't clean up after themselves properly or the uninstaller is broken. Revo doesn't care, it tries to uninstall nicely first then resorts to scanning the drive for any remaining files, then scans the registry for remaining keys.
Not many people seem to know about it and everyone I show it to loves it!
While this sounds very sarcastic, I actually want to give Windows a try. I haven't used it for ~20 years, I don't even have an idea of what it looks like now, but I keep hearing good things.
It's a dumpster fire now. Microsoft shoving their products and choices down your throat, sneaking in "updates" that change your settings back to what MS would prefer them to be... Windows stopped being good after Win2k and has been going downhill ever since. It's basically a big advertising platform now with an OS on the side.