I'd expect the vast majority of IO requests to be served from the kernel's page cache (we're talking 30 × 1.44 MB here, so under 50 MB, trivial for even an old computer to hold in RAM). So I wouldn't be surprised if it were very fast and reliable as long as he sticks to read-only workloads; those would never actually touch the floppies beyond the initial read.
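A rough way to see this on Linux is the vmtouch utility (a sketch, assuming a hypothetical document root /srv/www and that vmtouch is installed):

```
# Pull the whole site into the page cache once...
vmtouch -t /srv/www

# ...then check residency; near 100% means subsequent reads
# are served from RAM and never touch the floppies.
vmtouch -v /srv/www
```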
Yup, even old computers that booted off floppy drives, like the Atari ST and Amiga, would often just load the entire program and its data into memory, since their memory size was usually larger than the floppy disk. Same for games: a lot of the loading at the beginning was simply loading the entire disk into memory...
Basically, the ratio of system memory size to block device size was the opposite of today's... then again, people could and did have many individual floppy disks.
If we take any portable media as the comparison, the largest I know of is 50 GB for a Blu-ray disc; that’s still something you can ‘almost’ load into 32 GB of memory (if your OS and Teams app didn’t take 16 GB, of course).
I have >100 GB Blu-ray media, and it's writable, so those exist. My BD-R drive from 2016 reads them correctly and can write them, so it's not some "recent" thing, except maybe the capability to manufacture the media. I think the largest BD-R available are around 128 GB (quad-layer BDXL).
Luckily, gen 7 tapes are 10-100 times that large, depending on who you ask. Each disc costs approximately $2-$5 as well, assuming you get some sort of discount. Tapes cost more and hold more, but at a certain point having a disc in a jewel case is better than having a tape: Blu-ray drives are easy to find; tape drives, not so much.
MicroSD cards - about the size of a fingernail - absolutely come in sizes up to 1 TB; that's not controversial or dodgy brands selling fakes. There are many brands selling 512 GB cards, ten times your Blu-ray example.
Portable flash drives using M.2 drives internally could reasonably be up to 8 TB, but being at the high end they're expensive.
Samsung is a reputable brand that makes 256 GB and 512 GB SD cards (EVO Plus); I have good experience using them in a Raspberry Pi. And "thumb drives" (USB sticks) absolutely go into the TBs: I'm using a 2 TB one, and I've seen colleagues use bigger. And you can put a modern M.2 NVMe SSD into a USB-C casing and it's half the size of an old portable SSD...
It’s maybe a bit big to call it a “thumb drive”, but I’ve been putting backups on a 2 TB USB “portable SSD”. I’ve also got a 256 GB thumb drive, which isn’t too far off.
If you take a DVD (DVD-5 or DVD-9, either way) on a Linux machine with ~16 GB of memory and read the entire disc once, say `dd if=/dev/sr0 of=/dev/null`, the entire thing will be in memory. You can then use whatever tools you want as if you had the disc in memory, and it will never touch the physical device again.
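You can watch the cache kick in with a quick timing test (a sketch; assumes the disc is in /dev/sr0 and you have root for the cache drop):

```
# First read pulls everything off the physical disc (slow).
time dd if=/dev/sr0 of=/dev/null bs=1M

# Second read is served entirely from the page cache (fast).
time dd if=/dev/sr0 of=/dev/null bs=1M

# Drop the caches and the next read is slow again.
echo 3 | sudo tee /proc/sys/vm/drop_caches
```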
If I hadn't seen it myself, I would have wondered why it wasn't already possible.
Old systems only ran one program at a time (basically).
So boot the OS floppy into RAM, then load the program floppy (or floppies). Do the thing, save your work to yet another floppy. Close the program, maybe re-insert an OS floppy (à la MS-DOS). Then a new floppy for the next program. Only a king had a 20 MB RLL drive.
That's right, I owned an Atari STFM with 4 MB of RAM.
As another example, you could easily make RAM disks via a desktop menu, to work around the common setup of a single built-in floppy drive. So you could copy files into RAM via the GEM desktop GUI (literally drag and drop), then put another disk in, etc.
Random aside: I accidentally found that this machine would automatically transmit audio over FM radio for a short but very useful range. Blew my mind as a kid. I could never find anything official about this online later, so I'm not sure if it was an intentional design, poor EM design of the audio chip, or a hack of my particular second-hand franken-tari. Either way, it was super useful to have wireless audio in the '90s.
That’s really interesting; a quick search turned up nothing about FM transmission capabilities.
The “FM” officially stands for Floppy & (RF) Modulator, but it’s quite a coincidence that the audio chip emits frequency-modulated radio waves at a listenable frequency.
One disk, sure. 30? Well, the parent made a very biased interpretation of history, depending on what one means by "old". I remember running Linux on a 4 MB RAM computer, and how good upgrading to 8 MB felt.
If we hit a lot of random subfolder URLs (which should all 404), would that make the webserver check the drive to see whether the file exists, skipping the cache?
Are you the Kent Overstreet working on bcachefs? If so, I've been watching it and looking forward to it getting into the kernel. Keep up the good work. I'll have a donation coming in a little while.
I guess the real question there is: does a directory count as a file for the purposes of this discussion?
It seems necessary that the blocks storing the directory entries are cached too. Otherwise every lookup of a non-existent file (that doesn't hit a negative dentry) would hit the disk separately.
So yeah, the page cache is keyed by file, but the system should still cache the directory structure.
On top of the block-layer cache there's also the namei (dentry) cache for filename-to-inode lookups. I'm not sure if that covers the file-not-found case or just the success path, but it may apply here too.
But if a negative dentry does not yet exist, and a filename is requested for the first time, does it read from the fs?
The DDoS scenario would be doing GET requests for random nonexistent filenames. You could change the name on every request so that the check for a negative dentry is never a cache hit.
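A crude way to watch the dentry cache react to that (a sketch; the path is hypothetical, and the negative-dentry counter in /proc/sys/fs/dentry-state only exists on newer kernels):

```
# Look up a stream of never-repeated, nonexistent names...
for i in $(seq 1 100000); do
    stat "/srv/www/nope-$i" 2>/dev/null
done

# ...then inspect the dentry counters; on recent kernels the
# fifth field is the number of negative dentries.
cat /proc/sys/fs/dentry-state
```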
I think I was wrong; I typed too quickly. There's (potentially) a buffer cache available, but it's not for caching raw disk blocks; it caches filesystem blocks of content.
Would it be more fun to run a live-CD webserver only? Piece together an old computer with an old live Linux installation, and either run it unpatched, or keep it patched carrying only the updates it can apply without a reboot?
Do people do cyberpunk or sci-fi LARPs? An extravagantly unpatched server running on a Raspberry Pi or something like that would be a kind of fun prop.
I managed to grab a (hopefully) uncorrupted version somehow. I'm putting it somewhere other than imgur so they don't recompress the image. MD5 d30fcc384a8e2de4fab3056bde42b00b. [EDIT: Removed dead link, use archive.org instead]
> I'd be curious to know what failure mode(s) conjured the 0xf6's into existence.
Today's fun fact: the MS-DOS `format` command fills the disk with 0xf6, not 0x00. Though this is Linux running on Mac hardware, reading a disk that should have actual data, so maybe that isn't the reason.
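If you have an image handy, the 0xf6 fill is easy to spot (a sketch; `floppy.img` is a placeholder name):

```
# Count the 16-byte rows that are pure 0xf6 filler, then the total;
# a freshly FORMATted DOS disk is almost entirely the former.
xxd floppy.img | grep -c 'f6f6 f6f6 f6f6 f6f6 f6f6 f6f6 f6f6 f6f6'
xxd floppy.img | wc -l
```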
Hmm... is it possible to correct those errors? I have some old images with errors, and I've always wondered whether it's possible to fix the individual corrupted bytes to restore at least the remainder of the photos.
Yes, it's possible. The SpaceX subreddit community did that to recover imagery from one of the early rocket landings, which was corrupted due to poor antenna alignment between the transmitter on the landing barge and the remote receiver.
A few years ago I finally bought a USB floppy drive to read an old 3.5" floppy from the late '90s on which I had archived my e-mail messages before moving away to college. I completely forgot about write protection (as well as atime write-backs). I managed to read a surprising amount of data off the disk, but I think less than if I had remembered to write-protect the disk before inserting it into the drive. The files were in mbox format, probably from Eudora, but possibly Pine. As is my habit, I first poked around with ls and less before copying the files over, and I'm pretty sure I ended up with more corruption than what I first saw with less.
Oh well. The irony is that to this day I have a tic of idly running `sync` at the command prompt, which I developed dealing with floppy and hard disk corruption running early versions of Linux. A crash or (IIRC) even a simple reboot sometimes resulted in disk corruption that prevented Linux from booting. Reinstalling Slackware from floppy disks took quite a while on its own, especially if installing the X11 disk sets, but half the time at least one of the disks would be corrupted, requiring me to download a fresh copy (using Windows--I was dual booting) over my 2400 baud modem and then restart the install from scratch. I probably went through this procedure at least a half dozen times, or at least enough to develop the tic. It was the best of times, it was the worst of times... =)
There was a time when the sync CLI exited before the work was done, so you actually didn't want `sync;sync;halt` on one line; typing them as separate commands gave the first sync the right amount of time to complete.
In this case, the theoretical maximum bandwidth is 24 Mbit/s.
The problem is the old, slow USB bottleneck. I'm not sure how much faster it would be, probably hundreds of Mbit/s rather than under 24, but a faster RAID0 rig would be to instead have 30x Mac G4 Digital Audios connected via a gigabit switch, share the internal floppies over the network, and then RAID0 them. It would also have whatever advantage running an XGrid PPC cluster on Tiger might provide. These boxes also ran PPC Ubuntu; no doubt Linux would eke out a dozen or so more bps, plus Beowulf.
I don't know about that bandwidth claim. Back when MP3 was new and computers usually had floppy drives, I did the obvious experiment: MP3 bitrates above 64 kbit/s or so tended to stutter when played off a floppy, and bitrates significantly below 64 kbit/s did not.
Something like voice encoded at 32 kbit/s sounded at least as good as a phone call and played back fine off a 1.44 MB floppy; IIRC that was about the best that could be done.
You'd probably be surprised how long an audio recording can be if it's voice at a low bitrate on one floppy. With variable bitrate and silence detection, I subjectively remember "ten minutes" being quite reasonable on a 1.44 MB disk.
Extrapolating from historical experience, thirty or so in parallel should push over half a meg/sec quite reliably.
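The back-of-envelope math checks out (assumptions: the raw HD floppy transfer rate of 500 kbit/s, and a real-world sustained rate of maybe 30 KB/s per drive once seeks and sector gaps are counted):

```
# Best case: 30 drives at the raw 500 kbit/s rate
echo '30 * 500 / 8' | bc    # 1875 KB/s aggregate

# Realistic: 30 drives at ~30 KB/s sustained
echo '30 * 30' | bc         # 900 KB/s, comfortably over half a meg/sec
```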
If you record speech to a floppy with a cheap mic, you'll capture the sound of the floppy drive in the recording, which is funny to me.
I wish I still had those files. Useless, of course, but would be funny.
- Switch to RAID10 (a stripe of mirrors) and go 2-3 floppies wide so you can have some redundancy in each mirror group (see the mdadm sketch below)
- Get some Pis (or other SBCs) and hook those up and run Ceph... if this keeps going we'll have a SAN soon enough.
- ZIP disks?[0]
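For the RAID10 option above, a minimal mdadm sketch (device names are hypothetical; USB floppy drives usually show up as /dev/sd*):

```
# 30 drives in RAID10 with 3-way mirrors (the n3 layout gives
# 10 stripes of 3, so each mirror group survives two failures).
mdadm --create /dev/md0 --level=10 --layout=n3 \
      --raid-devices=30 /dev/sd[b-z] /dev/sda[a-e]
```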
Also, I don't think I ever want to hear of the "hug of death" for any site ever again -- I don't think this site hosted on 30 floppies was hugged to death.
That's fine, but it being a social platform, I assume everyone is a caricature of themselves. Other creators have discussed here how near-moronic poster/thumbnail images get more clicks than less overly dramatic versions. So at this point, I assume the ultimate goal is to get as close as possible to a program that would fit into Idiocracy's television lineup, all for those precious likes.
Turn off the error reporting so you get what was read rather than an error, and watch the bitrot. I never did that in Linux, but it wouldn't surprise me if it's a driver option.
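On Linux the usual trick isn't a driver option but reading past errors from userspace (a sketch; `floppy.img` is a placeholder):

```
# dd can ignore read errors, padding unreadable sectors with zeros:
dd if=/dev/fd0 of=floppy.img bs=512 conv=noerror,sync

# GNU ddrescue retries bad sectors and keeps a map of what it got:
ddrescue -r3 /dev/fd0 floppy.img floppy.map
```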
I'm surprised this actually still works. As far as I remember those floppies were super unreliable.
Though this may have been caused by me being in school and having to buy crappy white-label ones. Couldn't afford those fancy Imation ones with all the games I copied, lol.
Hahaha, not for long; not on the front page of HN… I can see it’s got the old ‘Hug of Death’, but I’m curious how long it lasted and how loud the result was.
Imagine an old floppy-disk-based server surviving “hugs of death” while the latest React-based static website, hosted on Kubernetes on bare metal for infinite scalability, dies in like 5 seconds.
The blame would fall on the bare-metal server. They'd move to AWS with multi-region EKS clusters and RDS with cross-region disaster recovery. And a multi-cloud strategy would also be in place...