
I'd expect the vast majority of IO requests to be served from the kernel's IO cache (we're talking 30 × 1.44 MB here, so just under 50 MB - trivial for even an old computer to hold in RAM), so I wouldn't be surprised if it were very fast and reliable as long as he sticks to read-only workloads - those would never actually touch the floppies beyond the initial read.
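
(A rough way to sanity-check that, assuming the drives show up as /dev/fd* and you have iostat from the sysstat package: hit the same URL twice and watch per-device reads stay flat. The hostname here is made up.)

  curl -s http://server/index.html > /dev/null
  iostat -d 1 3        # reads on the fd* devices should stay at zero
  curl -s http://server/index.html > /dev/null   # served from the page cache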



I did watch -n 1 'sync; echo 3 > /proc/sys/vm/drop_caches' to try and get around it. I think it's working, because boy is it noisy lol
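
(For reference, the drop_caches values are documented in the kernel's vm sysctl docs:)

  sync                                 # flush dirty pages to disk first
  echo 1 > /proc/sys/vm/drop_caches    # free the page cache only
  echo 2 > /proc/sys/vm/drop_caches    # free reclaimable slab objects (dentries and inodes)
  echo 3 > /proc/sys/vm/drop_caches    # free both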


I'm used to referring to lots of IO as "noisy" or "chatty", but then I imagine sitting next to 30 floppy drives and that brings it to another level.



This is the greatest thing that anyone has done with any computer hardware anywhere, ever. This is more impressive to me than the moon landing. :-)


Make sure you check out his other videos, including other compositions he's done.

Star Wars theme: https://youtu.be/3KS02q0BUnY

I Want to Break Free: https://youtu.be/lbd06i9B2wU


My favourite hardware music is this Radiohead cover from 2008. It blew my mind at the time:

https://www.youtube.com/watch?v=pmfHHLfbjNQ


The scanners!! This is incredible!


Ooh I’d love to see/hear a video of that if you’re willing! :)


Yup, even old computers that booted off floppy drives, like the Atari ST and Amiga, would often just load the entire program and its data into memory, since their memory size was usually larger than the floppy disk. Same for games: a lot of the loading at the beginning was simply reading the entire disk into memory...

Basically, the ratio of system memory to block device size was the opposite of today's... then again, people could and did have many individual floppy disks.


If we take any portable media as the comparison, the largest I know of is 50 GB for a Blu-ray disc; that's still something you can ‘almost’ load into 32 GB of memory (if your OS and Teams app didn't take 16 GB, of course).


I have >100 GB Blu-ray media, and that's writeable, so those exist. My BD-R drive from 2016 reads them correctly and can write them, so it's not some "recent" thing, except maybe the capability to create the media. I think the largest BD-R discs available are around 120 GB.

Luckily, the gen 7 tapes are 10-100 times that large, depending on who you ask. Each disc costs approximately $2-$5 as well, assuming you get some sort of discount. Tapes cost more and hold more, but at a certain point having a disc in a jewel case is better than having some tape. Blu-ray drives are easy to find; tape readers, not so much.


I think thumb drives get up to several TB these days.


That sounded a bit crazy to me, so I checked.

I don’t think those are actually several TB. They’re fakes that are advertised as having several TB.


MicroSD cards - about the size of a fingernail - absolutely come in sizes up to 1 TB; that's not controversial or dodgy brands selling fakes. There are many brands selling cards at 512 GB, ten times your Blu-ray example.

Portable flash drives using M.2 drives internally could reasonably go up to 8 TB, but being at the high end, they're expensive.


Samsung is a reputable brand that makes 256 GB and 512 GB SD cards (EVO Plus) - I have good experience using them in a Raspberry Pi. And "thumb drives" - USB sticks - absolutely go into the TBs: I'm using a 2 TB one, and I've seen colleagues use bigger. And you can put a modern M.2 NVMe SSD into a USB-C casing and it's half the size of an old portable SSD...


OK, maybe not several TB, but 1 TB seems legit: https://www.bhphotovideo.com/c/product/1701503-REG/lexar_ljd...


It’s maybe a bit big to call it a “thumb drive” but I’ve been putting backups on a 2TB usb “portable ssd”. I’ve also got a 256GB thumb drive, which isn’t too far off.


If you take a DVD (DVD-5 or DVD-9, either way) on a Linux machine with ~16 GB of memory and read the entire disc once - say, dd if=/dev/sr0 of=/dev/null - the entire thing will be in memory. You can then use whatever tools you want as if you had the disc in RAM, and it will never touch the physical device again.

If I hadn't seen it myself, I would have wondered why it wasn't possible already.
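
(A quick way to watch it happen; the second pass finishing at RAM speed is the giveaway:)

  free -h                              # note the buff/cache column
  dd if=/dev/sr0 of=/dev/null bs=1M    # first pass reads the physical disc
  free -h                              # buff/cache grows by roughly the disc size
  dd if=/dev/sr0 of=/dev/null bs=1M    # second pass completes at RAM speed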


Is this true? I find it hard to imagine that computers would load the entire diskette into memory. Apparently the Atari ST had up to 4 MB of RAM.

I can imagine many programs doing this though.


Old systems only ran one program at a time (basically). So boot the OS floppy into RAM, then load the program floppy(s). Do the thing, save work to yet another floppy. Close the program, maybe re-insert an OS floppy (a la MS-DOS). Then a new floppy for the next program. Only a king had a 20 MB RLL drive.


That's right, I owned an Atari STFM with 4 MB of RAM.

As another example, you could easily make RAM disks via a desktop menu, to work around the common setup of a single built-in floppy drive. So you could copy files off into RAM via the GEM desktop GUI (literally drag and drop) and then put another disk in, etc.

Random aside: I accidentally found that this machine would automatically transmit audio over FM radio for a short but very useful range... blew my mind as a kid. I could never find anything official about this online later, so I'm not sure if it was an intentional design, poor EM design of the audio chip, or a hack of my particular second-hand franken-tari - either way, it was super useful to have wireless audio in the 90s.


That’s really interesting; a quick search turned up nothing about FM transmission capabilities.

The “FM” officially stands for Floppy & (RF) Modulator, but it’s quite a coincidence that the audio chip emits frequency-modulated radio waves at a listenable frequency.


RAM disks were surprisingly common on the Amiga (and apparently the high end Ataris as well).


One disk, sure. 30? Well, the parent made a very biased interpretation of history, depending on what one means by "old". I remember running Linux on a 4 MB RAM computer, and how good upgrading to 8 MB felt.


You can even use mdadm to pair a ramdisk with a physical disk in a RAID-1 array, with write-through to the physical disk.

Every time you reboot, it will repair the array.
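
(A minimal sketch of the idea - the device names are made up, and marking the physical disk write-mostly is what steers reads to the ramdisk:)

  modprobe brd rd_nr=1 rd_size=4194304          # 4 GiB ramdisk at /dev/ram0 (size in KiB)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/ram0 --write-mostly /dev/sdb1      # reads prefer /dev/ram0
  mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt
  # after a reboot the ramdisk is empty, so the array comes up degraded;
  # re-adding it resyncs from the physical disk:
  mdadm /dev/md0 --add /dev/ram0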


If we hit a lot of random sub-folder URLs (that should all 404), would that trigger the webserver to check the drive to see if the file exists, and skip the cache?


No. The cache is at the block layer, not the file/directory layer. So the filesystem will look up the directory structure, which will be cached.


Actually, no. There is no general block layer cache - the closest thing is the page cache, which is at the file level.


Are you the Kent Overstreet working on bcachefs? If so, I've been watching it and looking forward to it getting into the kernel. Keep up the good work. I'll have a donation coming in a little while.


The one and only :)


I guess the real question there is: does a directory count as a file for the purposes of this discussion?

It seems necessary that the blocks storing the directory entries are cached too. Otherwise every non-existent lookup (that doesn't hit a negative dentry) would hit disk, each separately.

So yeah, the page cache is keyed by file, but the system should still cache the directory structure.


On top of the page cache there's also the namei cache for filename-to-inode lookups. I'm not sure whether that covers the file-not-found case or just the success path, but it may apply here too.


Yep, that's called a "negative dentry".


But if a negative dentry does not yet exist, and a filename is requested for the first time, does it read from the fs?

The DDoS scenario would be doing GET requests of random nonexistent filenames. You could change the name at every step so that the check for a negative dentry is never a cache hit.
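
(You can actually watch the dentry side of this - on kernels new enough to expose nr_negative, it's the fifth field of dentry-state; the docroot path here is made up:)

  cat /proc/sys/fs/dentry-state    # nr_dentry, nr_unused, ..., nr_negative
  for i in $(seq 1 100000); do stat "/srv/www/nope-$i" >/dev/null 2>&1; done
  cat /proc/sys/fs/dentry-state    # nr_negative should have grown by ~100000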


If requesting HTTP paths results in accessing the equivalent fs paths, then yes, you can DoS it. But nowadays that's uncommon for a webserver.


Well, the point under discussion is that I/O from a floppy disk is slow. Most servers have much faster disks, where it'd be absurd to consider this.

Also, the page/buffer cache would probably hold the FAT directory entries, so it's probably not a huge issue here.


Still backed by the block layer cache for a (hopefully) quick response, regardless of outcome.


What block layer cache?


I think I was wrong; I typed too quickly. There's (potentially) a file buffer cache available, but it's not for caching raw disk blocks; it caches filesystem blocks of content.


Would it be more fun to run a webserver off a live CD only? Piece together an old computer with an old live Linux installation, and either run it unpatched, or patched with only the updates it can apply without a reboot?


I (briefly) ran an IRC server on a Sega Dreamcast based on the same ideas.


A Knoppix 4 CTF doesn't sound too challenging, tbf mate.

edit: thinking about it, there's some good educational value, but maybe via a VM rather than plopped on the interwebs :D


Do people do cyberpunk or sci-fi LARPs? An extravagantly unpatched server running on a Raspberry Pi or something like that would be a kind of fun prop.


They do! http://cyberpunk.jackalope-larp.com/ is run both in-person and online at the same time and apparently has a pretty high production value.


It depends on what you mean by 'old'. At the beginning of the 90s, even 16 MB was quite a luxury.



