> I am a little bit freaked out by that because the pointer to the buffer is set before the IOCTL call; the code knowingly sets a pointer to a buffer into what looks like its code area. Let's hope they knew they were done with that part of the code, or it's just another interesting bug to dissect.
This is common in code without segmentation protection. CODE and DATA are convention. You can just specify a function, then a small buffer, then another function. .COM files in particular were easier to write with CS and DS pointing to the same region of memory, assuming you could fit both your code and inline buffers in 64kB.
The code explains what they are doing. Even more interesting, they're using their own stack too:
; 1 - This program uses its own internal stack. The stack space provided
; by DOS is used as an input buffer for transferring IBMBIO and IBMDOS.
;
; SYS is linked with the CODE segment followed by the DATA segment. The
; last symbol in DATA is BUF. It marks the end of data and the
; start of the BUFfer. The BUFfer extends from here to SP. The first
; 6.5Kb (13 sectors) in BUFfer are used for up to 12 sectors of the FAT
; or the directory. In Main, the remaining space is set
; as follows:
; cdBuf = SP - ( FAT_BUF + BUF )
;
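The arithmetic in that comment can be sanity-checked with made-up numbers. A sketch, where FAT_BUF matches the "6.5Kb (13 sectors)" figure but BUF and SP are hypothetical placeholders, not the real linked addresses:

```python
# Sanity-check of the cdBuf formula from the SYS comment block.
SECTOR = 512
FAT_BUF = 13 * SECTOR      # 6.5 KB FAT/directory buffer
BUF = 0x1400               # offset of the BUF symbol (made up)
SP = 0xFFFE                # initial stack pointer (made up)

cdBuf = SP - (FAT_BUF + BUF)   # remaining transfer-buffer space
print(hex(cdBuf))
```

Everything between the end of the FAT buffer and the bottom of the stack becomes the transfer buffer, which is why SYS runs on its own internal stack.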
I looked at the call before and after to see what they had set the buffer to, and they clearly set it to point into what is code. The executable is only 5 KB; they had plenty of room elsewhere in the segment without purposely blasting their own code.
While it's common, it was still a terrible practice. If whatever was filling in that buffer changed, they could be blasting more code than they intended. (As indicated in what I wrote, I know it was common if they wanted to reuse the space. Device drivers do something similar when they are done with their init code.)
Here's the code from DOS 3.3. I am reasonably sure they didn't intend to overwrite code -- you're probably just seeing a weird artifact where the failure case is leaving a dangling random value that happens to point into valid code.
My guess is that DS isn't being maintained across the failing call to the IOCTL and ends up pointing to the wrong segment.
DOSOutFH DW ? ; fh of DOS destination
DumpMem:
MOV DX,OFFSET DG:BUF+512 ; get offset of bios start
MOV CX,pDOS ; beginning of next guy
SUB CX,DX ; difference is length
JZ DumpDos ; no bios to move
MOV BX,BIOSOutFH ; where to output
MOV AH,Write
INT 21h ; wham
retc ; error
CMP AX,CX ; Did it work?
JNZ WRERR ; No
DumpDos:
MOV DX,pDOS ; beginning of dos
MOV CX,pDOSEnd ; end of dos
SUB CX,DX ; difference is length
retz ; if zero no write
MOV BX,DOSOutFH ; where to output
MOV AH,Write
INT 21h ; wham
retc ; error
CMP AX,CX ; Did it work?
retz ; Yes, carry clear
To use an LBA HDD, or especially an SSD, as well as DOS can, I've always found the DOS from W98SE, with the 2001 update, to be about the most reliable.
When ongoing testing is underway, the only repeatable approach is to zero the media each time, since DOS often relies heavily on whatever is already there during a Format or SYS. So does every version of Windows, but not consistently.
If using Win10 or 11, you may find that even if a floppy or HDD partition was zeroed when you powered off, the partition will be silently formatted back to whatever Windows last remembers when you reboot, without any notice.
Plus, even with successfully zeroed media, the result of a Format often differs depending on whether the PC was booted from an actual floppy, HDD, or SSD, and on their geometry. It can also come out differently on different motherboards, because their BIOSes recognize the geometry differently and make different judgments about what will be bootable on that particular device.
Other times, some BIOSes seemed unable to format certain media well enough to boot on their own device, yet the same media formatted on a more "universal" motherboard would then boot fine on the problem PC.
These days I want my FAT32 volumes, which often end up as boot volumes under UEFI, to be fully formatted under DOS for best reliability; none of the intentionally lesser stuff since. But I also want the structure aligned to 4096-byte sectors, which really helps with AF HDDs and SSDs, and DOS won't do that on its own. Windows also mostly defaults now to putting the bootsector at 2048 instead of 63 on LBA gear, so I first format a zeroed FAT32 partition using Windows 10 or 11. Then, in a disk editor, everything is re-zeroed except the bootsector and the following 8 sectors. I edit the bootsector and backup bootsector (6 sectors later) to set Hidden Sectors to 2048, and Sectors Per FAT to 2048, 4096, or a multiple of 4096, whichever is closest to the value that was there by default (according to the size of the partition) from the days before there was any awareness of SSDs.
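The goal of those edits can be verified with a little arithmetic. A sketch, where the reserved-sector count and FAT size are illustrative rather than read from a real partition:

```python
BYTES_PER_SECTOR = 512
ALIGN = 4096 // BYTES_PER_SECTOR   # 8 sectors = one 4096-byte block

hidden = 2048            # Hidden Sectors, as edited in
reserved = 32            # a typical FAT32 reserved-sector count
fats = 2
sectors_per_fat = 4096   # edited to a 4096-friendly value

# Absolute LBA where the data area (cluster 2) begins:
data_start = hidden + reserved + fats * sectors_per_fat
print(data_start, data_start % ALIGN == 0)
```

Because all four terms are multiples of 8 sectors, the data area starts on a 4096-byte boundary and every cluster after it stays aligned.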
Then back to DOS for a Format /Q. On a good motherboard it will retain the values you edited in, and you've got a more reliable foundation for your boot files or anything else.
Just use RFORMAT by Rudolph Loew. It can properly align FAT12/16/32 on any block size you want, and it doesn't use reserved sectors to do it: it only enlarges the FAT tables so that the first cluster begins on a block-size boundary. It can also do all sorts of weird formats: extra reserved sectors, any cluster size you can think of, a custom root directory size, a custom number of FATs (having only one FAT reduces wear on flash memory), etc.
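A sketch of that FAT-enlarging idea, as my own reconstruction of the described behavior rather than RFORMAT's actual code (it assumes the inputs can align at all, e.g. an even partition start with two FATs):

```python
def align_fats(start, reserved, fats, sectors_per_fat, align):
    """Grow sectors-per-FAT until cluster 2 lands on an align boundary."""
    while (start + reserved + fats * sectors_per_fat) % align:
        sectors_per_fat += 1
    return sectors_per_fat

# e.g. partition at LBA 2048, 32 reserved sectors, two FATs of 1001
# sectors each, aligning to 8 sectors (4096 bytes at 512 bytes/sector):
print(align_fats(2048, 32, 2, 1001, 8))
```

The cost is at most a few extra FAT sectors, versus burning a whole run of reserved sectors as padding.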
Thanks for that, it does look quite elegant. RIP Mr. Loew.
There is a todo shown:
>? added Partition specific Geometry setting
>? add docs for above
>Add warning about Letter Assigners in DOS and Windows
> Force LogOnly and set size
>Geometry needs to be set to match Partitions settings.
What I've found using recent Windows is that Diskpart will autocorrect the CHS figures based on whatever LBA changes you make with other apps, but you have to use Diskpart to "touch" that particular partition after you make the changes, preferably after a reboot. When I need this done without other modifications, I use Diskpart's SETID command (type "HELP SETID" for the Diskpart syntax doc on it). The command is like "SET ID=de" (with a space there) to change the partition type; DE, for instance, is a "Dell" OEM type. The CHS correction happens silently. Then change it back to type 0C if FAT32, 07 if NTFS, or basically whatever you had before.
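The correction itself is just the standard LBA-to-CHS translation for the drive's assumed geometry. A sketch using the common 255-head, 63-sector geometry reported for LBA disks:

```python
def lba_to_chs(lba, heads=255, spt=63):
    """Translate an absolute LBA to (cylinder, head, sector)."""
    cyl = lba // (heads * spt)
    head = (lba % (heads * spt)) // spt
    sect = lba % spt + 1           # CHS sectors are 1-based
    return cyl, head, sect

print(lba_to_chs(63))     # classic old-style partition start
print(lba_to_chs(2048))   # modern-aligned partition start
```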
If you check the structure I'm using, both FATs are written in the same stroke on the size mostly found in UEFI boot volumes :)
>Add warning about Letter Assigners in DOS and Windows
That's because of my bug report. I used Letter Assigner v1.2.0 by Vadim Burtyansky and tried formatting a partition from Win98. The wrong partition was about to be formatted. Mr. Loew investigated and found the bug was in the Letter Assigner, not RFORMAT.
> Diskpart will autocorrect the CHS figures based on what LBA changes you make when using other apps.
I saw many partitioning programs do that. Most ask for confirmation; some don't. It's great that you know how to edit the partition table and FAT parameters by hand, but why waste time when there are tools that offer (probably) the same flexibility?
For partitioning, I use Ranish Partition Manager. RPM and RFORMAT are the best partitioning/formatting tools by far. Too bad I have to use a boot stick to run them. And too bad nobody wrote something as capable as RPM for Linux. mkfs.vfat comes close to RFORMAT, but none of the partitioning tools are as capable as RPM.
> If you check the structure I'm using, both FATs are written in the same stroke on the size mostly found in UEFI boot volumes :)
> If using Win10 or 11, you may find that even with a zeroed floppy or HDD partition when you power off, the partition will be silently formatted just as Windows last remembers it when you reboot, transparently without notice.
I have a USB floppy drive with a known good disk in it. Not sure if a regular 34-pin floppy drive would be any different, but I blanked the disk, connected the drive to Win 11, and rebooted. The drive seeked on startup, but otherwise the disk is still blank:
dd if=/dev/rsd1c | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
2880+0 records in
2880+0 records out
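For larger dumps the same check is easy to script. A sketch; feed it the raw bytes read from whatever device or image file you dumped:

```python
def is_blank(data: bytes) -> bool:
    """True if every byte of the dump is zero."""
    return data.count(0) == len(data)

# An all-zero 1.44 MB floppy image (2880 sectors x 512 bytes):
print(is_blank(bytes(2880 * 512)))
```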
Actively maintained is a handicap for something that's supposed to be a fixed long-term "standard".
MS-DOS is that immutable standard of the past which reached maximum maturity in 2001.
FreeDOS was originally an alternative, but is for the future now.
I like FreeDOS but when companies started to distribute their device drivers or software on floppies or CDROMs that were formatted using FreeDOS, it was not pretty.
Often the FAT volume couldn't be accessed or booted successfully, but all you needed to do was SYS a writable floppy with MS-DOS, or go through the ordeal of "ripping" the CD so its contents could be properly SYSed.
Whether in the '90s or today, I would recommend building a very strong familiarity with MS-DOS first (there is really nothing new there) before moving forward to include FreeDOS in your toolbox.
This is how it happened organically when FreeDOS was first emerging.
I think of MS-DOS mostly as the ultimate fantasy console. It is sad that the only way to write something once and not have to maintain it to keep up with breaking dependencies all the time is to target a dead platform.
But I never had any compatibility issues with FreeDOS? It seems like a good implementation of DOS. I mostly use DOSBox-X, but I use FreeDOS now and then as well. It is the only DOS I would consider running on hardware.
BTW did anyone else notice that Microsoft included almost a complete 1988 vintage 16-bit DOS toolchain in their MIT-licensed MS-DOS repo? It has Microsoft C 5.1, MASM 5.1, Make, and several other tools, plus libraries and include-files. All of it in less than 3 MB.
I've taken to saying, much to the chagrin of my techy friends, that "the best language is a dead language". Usually speaking about C which isn't even dead but maybe should be, just stick with C89 and pretend that was the end (though for more modern targets C11 with atomics is tempting).
Along the same lines, maybe the best platform is a dead platform. And there is a mature emulator, or many, for pretty much every modern platform. I've definitely been exploring DOS as a way forward (lol) out of the madness of modern computers.
Radically simple, single tasking, and local first with a stable platform that is close to the metal. What is not to love? I mean maybe it doesn't hurt it was on my first computers so a nostalgia factor I'm sure is at play as well.
DOS can also be a nice, predictable runtime for embedded or low-level systems, as long as you're running x86. Some implementations are even 64-bit clean, such as this one: https://github.com/dosemu2/fdpp.
I remember this project (https://jimhall.itch.io/toy-cpu), where the author shipped his emulator as a DOS binary instead of a web-based JavaScript version that would eventually break over time.
DOS can serve as a lightweight, efficient, and predictable runtime with extremely low system requirements and no need for updates. It can run on bare metal or as a WASM binary.
The development tools are small, free, and reproducible (e.g., Turbo C), ensuring that the code will still compile and run just fine even 10 years from now. Oh, and it also has some cool TUI libraries a la QuickBASIC. I’d argue it’s still a worthy option, if it’s not user-facing.
The reason I suggested FreeDOS was because the root comment was essentially a laundry list of bugs in MS-DOS. If there's a readily-available MS-DOS that does the job and doesn't have (material) bugs then sure by all means use that, but then what are we even talking about here?
With plain Windows, one year you would get an app that was supposed to be installed from bootable floppies; they had been formatted using MS-DOS/Windows and everything went as expected. Same basic workflow as installing Windows from floppies at the time.
Then there was a trend toward shipping floppies that had been formatted with and/or contained a non-MS DOS instead, and when a new version came out like that, it sometimes failed to boot on far more machines.
I ran and worked with MS-DOS in production use for years. I am that old. I have deployed hundreds of DOS machines and I was an expert in DOS memory management.
I installed lots of apps from floppies as well.
No, usually installation floppies were _not_ bootable, in my extensive experience. If they were, I can't see how or why that would affect the app so long as the app wasn't tied to one specific version of DOS.
At the time when DOS ruled business PC computing, there were not very many alternative versions. Even DR-DOS came along quite late. Aside from DR's own multitasking TASKMGR.EXE I don't think I ever saw anything else that was specific to DR-DOS. GEM and ViewMax ran fine on MS-DOS.
There was no practical difference between IBM's PC DOS and MS-DOS except for BASICA. Only when MS discontinued MS-DOS did IBM PC DOS start to diverge, dropping BASIC for Rexx, dropping EDIT.EXE for IBM E, and so on.
FreeDOS is good, but it's a different flavor of DOS and not everything directly lines up. It's certainly where I'd start if I needed a DOS, but I'm sure there are things it won't work with.
I tried WfWg 3.11 but on reflection it was a bad version to try, what with 32-bit file access and so on. It didn't start successfully.
But saying that, I ran and supported Windows 2.01, 3.0 and 3.1 in production. If I never seen any of them running again I will not be sorry. I have zero nostalgia for them, or indeed, for CP/M either.
I was not. Some googling shows a DOS based on Win95SR2 / Win98SE's (?) with unknown changes, possibly just the copyright string or setup, possibly more?
I recall investigating a similar issue as a child some 35 years ago. I suspect this bug explains more of what happened way back then. Somehow I'd overwritten our hard disk's boot files and boot sector by copying some DOS 3.1 or 3.3 files from a disk and then running sys.com. I'm guessing we triggered another path to this same bug (but without netdrive), because something splatted the BPB and made the disk not boot properly.
I recall that we got it back by patching the BPB with debug.com, specifically the F8 media descriptor byte and the next couple of bytes indicating the size of the drive.
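Those fields sit at fixed offsets in the boot sector's BPB, so the debug.com patch amounts to poking a few bytes. A minimal reader for the ones mentioned, with offsets per the classic DOS 3.x BPB layout and made-up sample values:

```python
import struct

def read_bpb(sector: bytes):
    """Pull the patched BPB fields out of a 512-byte boot sector."""
    total16 = struct.unpack_from("<H", sector, 0x13)[0]  # total sectors, 16-bit
    media = sector[0x15]                                 # media descriptor
    return media, total16

# Fake boot sector: F8 = fixed disk, 40960 sectors = a 20 MB drive
sec = bytearray(512)
sec[0x15] = 0xF8
struct.pack_into("<H", sec, 0x13, 40960)
print(read_bpb(bytes(sec)))
```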
Off-topic, but the letter "y" of "ye" is in fact the letter thorn (þ) in Old English; it got turned into "y" because the printing press, which originated in Germany, didn't have that English letter. That's also why the English alphabet uses the German "w" (a double v, as you can see from its shape, and pronounced "vee" in German) instead of the English "double u", which used to be represented by a different letter called wynn ('ƿ' or "uu").
Yup. It was never pronounced "yuhee". Sometimes it was written þe, other times it was written with a small "e" above "þ" like a diacritic. Because cursive "þ" looked similar to cursive "y" when English printers imported movable type from the continent they just used "y" for it.
So "Ye" was always pronounced "The" the way we do today.
Also the pronoun "ye" was written "ge" but pronounced similar to how we'd pronounce "ye" today. "You" was the formal pronoun. Saying "you" to family or close friends would be insulting - as if you weren't close to them. At some point it became fashionable to sound more upperclass/aristocratic so the formal "you" took over.
Thus confusion because "ye" was a real word used back then but for entirely different purposes and spelled "ge", while þe/the was always pronounced with a "th" like today but spelled differently before "th" was standardized.
If you said "Ye Olden Days" at best someone of the time might think you were saying "(your) olden days" implying they are very old and you're trying to reference their youth in a very oddly formal way but with the wrong pronoun.
Another fun fact: thy/thine was already archaic at the time the King James Bible was written. They used it deliberately the way the OP used "Ye Olden Days" - to deliberately sound old and thus imply authority/authenticity. In the 1300s/1400s it was used when implying familiarity or contempt - with family it means familiarity/close relationships. Used with a stranger or superior it was like someone saying "Hey pal" to your boss. Again it became fashionable to switch to the second person plural for formality, then being formal all the time became fashionable, and eventually the formal forms became the new informal.
The PC version, funnily enough, was also long-lived because version 4 was considered too bloated, so 3.3 was often offered as an alternative up until version 5 was released in the early '90s. The only major feature it lacked compared to 4 was support for partitions larger than 32 MB, so if you had a large HDD you had to make multiple partitions.
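The 32 MB ceiling falls straight out of the BPB's 16-bit total-sectors field:

```python
max_sectors = 0xFFFF            # 16-bit total-sectors field in the BPB
print(max_sectors * 512)        # 33553920 bytes, just under 32 MiB
```

DOS 4 widened this with a 32-bit sector count, which is exactly the "bloat" trade-off the comment describes.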