Aside from this file, the "fork" concept of Mac file systems caused some wtf moments. "Fork" here isn't fork(), but the two-pronged idea in that file system: every file existed as a pair of a resource component and a data component, one the metadata and one the file contents. In Unix, the metadata lived in the inode and wasn't formally bound to the file in any unique way; it had to be represented by a distinct structure in tar, cpio, or zip. Implementing Mac-compatible file support in Unix meant treating the resource fork as first class, and the obvious way to do that is to put a .file beside each file.
You couldn't map all the properties of the resource fork into a UFS inode of the time. It held stuff like the icon. A more modern fs may have larger directory/inode structures and can handle that data better.
I’d say this is not the right way to describe a resource fork. Instead, think of it as two sets of file contents—one called "data" and one called "rsrc". On-disk, they are both just bytestreams.
The catch is that you usually store a specific structure in the resource fork—smaller chunks of data indexed by 4-byte type codes and 2-byte integer IDs. Applications on the 68K normally stored everything in the resource fork. Code, menus, dialog boxes, pictures, icons, strings, and whatever else. If you copied an old Mac application to a PC or Unix system without translation, what you got was an empty file. This meant that Mac applications had to be encoded into a single stream to be sent over the network… early on, that meant BinHex .hqx or MacBinary .bin, and later on you saw Stuffit .sit archives.
That’s why these structures don’t fit into an inode—it’s like you’re trying to cram a whole goddamn file in there. The resource fork structure had internal limits that capped it at 16 MB, but you could also just treat it as a separate stream of data and make it as big as you want.
> While the data fork allows random access to any offset within it, access to the resource fork works like extracting structured records from a database.
So, whatever the on-disk structure, the motivation here is that from an OS API perspective, software (including the OS itself) can interact with files as one "seekable stream of bytes" (the data fork), and one "random-access key-value store where the values are seekable streams of bytes" (the resource fork).
So not quite metadata vs data, but rather "structured data" (in the sense that it's in a known format that's machine-readable as a data structure to the OS itself) and "unstructured data."
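To make that concrete, here is a purely conceptual sketch in C of the two views described above; none of these type or field names are real Toolbox or on-disk structures, just an illustration of "one stream plus a keyed store of streams":

    #include <stdint.h>
    #include <stddef.h>

    /* a seekable run of bytes (conceptual only) */
    typedef struct { const uint8_t *bytes; size_t len; } stream;

    struct resource_entry {
        uint32_t type;   /* 4-byte type code, e.g. 'MENU' */
        int16_t  id;     /* 16-bit resource ID */
        stream   value;  /* each value is itself a seekable stream */
    };

    struct mac_file_view {                    /* hypothetical, for illustration */
        stream                 data_fork;     /* one seekable stream of bytes */
        struct resource_entry *resources;     /* key-value store of streams */
        size_t                 n_resources;
    };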
The on-disk representation was arbitrary; in theory, some version of HFS could have stored the data and resource forks contiguously in a single extent and just kept an inode property to specify the delimiting offset between the two. Or could have stored each hunk of the resource fork in its own extent, pre-offset-indexed within the inode; and just concatenated those on read / split them on write, if you used the low-level API that allows resource forks to be read/written as bytestreams.
This in mind, it's curious that we never saw an archive file format that sends the hunks within the resource fork as individual files in the archive beside the data-fork file, to allow for random access / single-file extraction of resource-fork hunks. After all, that's what we eventually got with NeXT bundle directories: all the resource-fork stuff "exploded" into a Resources/ dir inside the bundle.
> So, whatever the on-disk structure, the motivation here is that from an OS API perspective,
There are multiple layers to the OS API. There is the Resource Manager, which provides the structured view. Underneath it is the File Manager, which gives you a stream of bytes. You can use either API to access the resource fork, and there are reasons why you would use the lower-level API.
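As a rough sketch of those two layers (classic Toolbox C interfaces quoted from memory, so treat the exact prototypes as approximate):

    #include <Resources.h>
    #include <Files.h>

    void two_views_of_one_fork(void)
    {
        /* Resource Manager: the structured view, keyed by type + ID. */
        short resRef = OpenResFile("\pMyDocument");
        Handle menu  = GetResource('MENU', 128);    /* fetch one resource */
        if (menu != NULL) ReleaseResource(menu);
        CloseResFile(resRef);

        /* File Manager: the same fork as one raw, seekable byte stream. */
        short rfRef;
        if (OpenRF("\pMyDocument", 0, &rfRef) == noErr) {   /* 0 = default volume */
            /* FSRead(rfRef, &count, buffer) would pull raw fork bytes here */
            FSClose(rfRef);
        }
    }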
One example from the documentation was to provide a backup. For various reasons, it was possible that a resource fork could become corrupt—this is back in the day when Mac OS had no protected memory (for shame!), disk was slow, and we didn’t use journaling filesystems. Some programs kept around backup copies of whatever file you were working on. If your data was stored in the resource fork, well, there’s an easy way to get a backup… just open the resource fork as a stream of bytes and copy it to another place on disk. You could copy it to a data fork, and some people even copied it to the data fork of the same file.
The other main reason you would use the lower-level API is because you are writing a program like MacBinary or Stuffit.
> This in mind, it's curious that we never saw an archive file format that sends the hunks within the resource fork as individual files in the archive beside the data-fork file,
Well, there are advantages and disadvantages to that approach. You can already access resources inside a resource fork inside various archive formats, like MacBinary, AppleDouble, and AppleSingle. But you probably do want to preserve the actual byte stream of the resource fork itself. (And there’s also an undocumented compression format for single resources.)
I am not old enough to know how resource forks were implemented on Mac OS but this is definitely not the case today. Resource forks are implemented (or maybe "emulated" is a better word to use? Not sure how much effort is put into them) as random-access. You can use POSIX APIs to interact with them (using _PATH_RSRCFORKSPEC) and these are typically faster than other interfaces.
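For example (a minimal sketch, assuming a reasonably recent macOS where `_PATH_RSRCFORKSPEC` from <sys/paths.h> expands to "/..namedfork/rsrc"):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/paths.h>   /* _PATH_RSRCFORKSPEC */

    /* dump a file's resource fork to stdout using nothing but POSIX calls */
    int dump_rsrc(const char *path)
    {
        char rsrc[1024];
        snprintf(rsrc, sizeof rsrc, "%s%s", path, _PATH_RSRCFORKSPEC);

        int fd = open(rsrc, O_RDONLY);   /* plain open(2); the fork is just bytes */
        if (fd < 0) return -1;

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }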
Back in the day, you used the Resource Manager to open a resource fork. The resource manager provides functions to load individual resources, query which resources exist, and add or modify existing resources.
The Resource Manager made it to Mac OS X as part of Carbon. The main part of Carbon is gone, but a part of it called CarbonCore survives, and that contains the resource manager. If you dig through the docs, you can find it. It was deprecated in 10.8 (which seems really late… the writing was on the wall about resources back when 10.0 hit).
The modern resource manager functions in CarbonCore, I think, just use the POSIX API underneath. Undoubtedly, there’s some test suite at Apple that makes sure it works correctly. Also undoubtedly, there are some application vendors who wrote code using resources in the 1990s and still have some of that shipping today.
In Unix, it's said that "Everything is a file" - i.e. that everything on the system that applications need to manage should either be actual files on disk or present themselves to the application as if they were files.
This adage translated to classic MacOS becomes "Everything is a resource". The Resource Manager started out as developer cope from Bruce Horn for not having access to SmallTalk anymore[0], but turned out to completely overtake the entire Macintosh Toolbox API. Packaging everything as type-coded data with standard-ish formats meant cross-cutting concerns like localization or demand paging were brokered through the Resource Manager.
All of this sounds passe today because you can just use directories and files, and have the shell present the whole application as a single object. In fact, this is what all the ex-Apple staff who moved to NeXT wound up doing, which is why OSX has directories that end in .app with a bunch of separate files instead. The reason why they couldn't do this in 1984 is very simple: the Macintosh File System (MFS) that Apple shipped had only partial folder support.
To be clear, MFS did actually have folders[1], but only one directory[2] for the entire volume. What files went in which folders was stored in a separate special file that only the Finder read. There was no Toolbox support for reading folder contents, just the master directory, so applications couldn't actually put files in folders. Not even using the Toolbox file pickers.
And this meant the "sane approach" NeXT and OSX took was actually impossible in the system they were developing. Resources needed to live somewhere, so they added a second bytestream to every file and used it to store something morally equivalent to another directory that only holds resources. The Resource Manager treats an MFS disk as a single pile of files that each holds a single pile of resources.
One of the most important technical details about resources in early MacOS is that they allowed the system to swap resources by using doubly indirect pointers (aka handles), with the lock bit stuffed into the upper 8 bits of the 32-bit pointer. Stealing the extra flag bits from the upper bits, instead of increasing the alignment to make a few lower bits available, was fine on the 68000 and 68010 with their 24-bit address space, but exploded in your face on an 020/030 with a real 32-bit address space. It was a nightmare to develop and debug. A mix of assembler, Pascal and C without memory protection, but at least you could use ResEdit to put insults into menu entries on school computers.
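A minimal sketch of that kind of trick (not the actual Memory Manager code, just an illustration of why flag bits in the top byte work on a 24-bit bus and break on a 32-bit one):

    #include <stdint.h>

    #define LOCK_BIT  0x80000000u   /* hypothetical flag positions */
    #define PURGE_BIT 0x40000000u
    #define ADDR_MASK 0x00FFFFFFu   /* only the low 24 bits reach a 68000's bus */

    typedef uint32_t MasterPtr;      /* the word a Handle points at */

    static void *master_deref(MasterPtr mp)
    {
        /* On a 68000/68010 you could skip the mask entirely; the hardware
           ignored the top byte. On an 020/030 those bits are real address
           bits, so every piece of "24-bit dirty" code like this broke. */
        return (void *)(uintptr_t)(mp & ADDR_MASK);
    }

    static MasterPtr master_lock(MasterPtr mp)   { return mp | LOCK_BIT; }
    static int       master_locked(MasterPtr mp) { return (mp & LOCK_BIT) != 0; }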
> I’d say this is not the right way to describe a resource fork. Instead, think of it as two sets of file contents—one called "data" and one called "rsrc". On-disk, they are both just bytestreams.
I think it's a perfectly fine way. You're just coming at it from a wildly different level of abstraction.
One could say yours is not the right way either and jump down into quantum fields as another level.
GP is more accurate, because "file contents" could be in either or both. Not all files had a data fork, and not all files had a resource fork. Some metadata, such as icon position, was also stored independently of the file, using the hidden Desktop database.
Resource fork used to contain all the stuff you could edit with ResEdit (good old times!) right? Icons, various gui resources, could be text and translation assets too. For example Escape Velocity plugins used custom resource types and a ResEdit plugin made them easy to edit there.
A lot of Classic Mac apps just used the resource fork to store all their data. It was basically used as a Berkeley DB, except the keys were limited to a 32-bit OSType plus a 16-bit integer, and performance was horrible. But it got the job done when the files were small, had low on-disk overhead, and was ridiculously easy to deploy.
Once you pushed an app beyond the level of usage the developer had performed in their initial tests, it would crawl to a near-halt, thrashing the disk like crazy on any save. Apple's algorithm would shift huge chunks of the file multiple times per set of updates, when usually it would be better to just rewrite the entire file once. IIRC, part of the problem was an implicit commitment to never strictly requiring more than a few KBs of available disk space.
In a sense, the resource fork was just too easy and accessible. In the long run, Mac users ended up suffering from it more than they benefited. When Apple finally got rid of it, the rejoicing was pretty much universal. There was none of the nostalgia that usually accompanies disappearing Apple techs, especially the ones that get removed outright instead of upgraded (though one could argue that's what plists, XML and bundles did.)
The rejoicing was definitely not universal. It really felt like the NeXT folks wanted to throw out pretty much the entire Mac (except keeps its customer base and apps) and any compatibility had to be fought for through customer complaints.
Personally, MacOS X bundles (directories that were opaque in the Finder) seemed like a decent enough replacement for resource forks. The problem was that lots of NeXT-derived utilities munged old Mac files by being ignorant of resource forks and that was not ok.
The 9->X trapeze act was a colossal success, but in retrospect it was brutally risky. I can't think of a successful precedent involving popular tech. The closest parallel is OS/2, which was a flop for the ages.
A large amount of transition code was written in those years. One well-placed design failure could have cratered the whole project. Considering that the Classic environment was a good-enough catch-all solution, I would have also erred on the side of retiring things that were redundant in NeXT-land.
Resource forks were one of the best victims, 1% functionality and 99% technical debt. The one I mourned for was the Code Fragment Manager. It was one of Apple's best OS9 designs and was massively superior to Mach-O (and even more so wrt other unices.) Alas, it didn't bring enough value to justify the porting work, let alone the opportunity cost and risk delta.
MacOS X bundles are actually NeXTStep bundles, and the same idea lies behind Java JAR files with their META-INF directory and .NET resources, due to Objective-C's legacy on all those systems.
> When Apple finally got rid of it, the rejoicing was pretty much universal. There was none of the nostalgia that usually accompanies disappearing Apple techs
> Once you pushed an app beyond the level of usage the developer had performed in their initial tests, it would crawl to a near-halt
With HFS (unsure about HFS+) the first three extents are stored in the extent data record. After that extents get stored in a separate "overflow" file stored at the end of the filesystem. How much data goes in those three extents depends on a lot of things, but it does mean that it's actually pretty easy for things to get fragmented.
A bit more detail: the first three extents of the resource and data forks are stored as part of the entry in the catalog (for a total of up to six extents). On HFS each extent can be 2^16 blocks long (I think HFS+ moved to 32-bit lengths). Anything beyond that (due to size or fragmentation) will have its info stored in an overflow catalog. The overflow catalogs are a.) normal files and b.) keyed by the id (CNID) of the parent directory. If memory serves this means that the catalog file itself can become fragmented, but also the lookups themselves are a bit slow. There are little shortcuts (threads) that are keyed by the CNID of the file/directory itself, but as far as I can tell they're only commonly written for directories, not files.
tl;dr For either of the forks (data or resource), once you get beyond the capacity of three extents, or you start modifying things on a fragmented filesystem, performance will go to shit.
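Roughly what that looks like on disk, going from memory of Inside Macintosh: Files (field names and widths approximate, HFS rather than HFS+):

    #include <stdint.h>

    struct ExtDescriptor {      /* one extent */
        uint16_t xdrStABN;      /* first allocation block */
        uint16_t xdrNumABlks;   /* number of allocation blocks */
    };

    typedef struct ExtDescriptor ExtDataRec[3];   /* the three "free" in-record extents */

    struct ForkInfo {           /* simplified slice of a catalog file record */
        uint32_t   logicalEOF;  /* fork length in bytes */
        uint32_t   physicalEOF; /* allocated space */
        ExtDataRec extents;     /* extents 1-3; anything further lives in the
                                   extents overflow B*-tree and costs extra lookups */
    };

Each file's catalog entry carries one of these per fork, which is why a mildly fragmented fork immediately starts paying for trips to the overflow file.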
Oh, they're not gone -- still very much part of APFS. You can read the contents of the resource fork for a file at path `$FILE` by reading `$FILE/..namedfork/rsrc`
The resource fork is still how custom icons for files and directories are implemented!
(Look for a hidden file called `Icon\r` inside any directory with a custom icon, and you can dump its resource fork to a `.icns` file that Preview can open)
Hehe yep, but if we're doing vestigial nitpicks, I'd like to see an OpenResFile app that was ported to OS X and kept using the resfork to save its data. FAIK such a recalcitrant beast might even exist.
I credit ResEdit hacking partially for steering my path towards becoming a programmer. I had my Classic Mac OS installs thoroughly customized, as well as the other various programs and games that stored their assets in resource forks.
It was a lot of fun and something I’ve missed in modern computing. Not even desktop Linux really fills that void. ResEdit and the way it exposed everything, complete with built-in editors, was really something special.
Same here, but only for joining the industry. Now it's the opposite: the fact that webdev still hasn't reached the level of maturity of classic Mac OS makes me want to quit.
The other big thing in the resource fork was the executable code segments that made up the application. In fact, applications typically had nothing in the data fork at all. It was all in the resource fork.
I always thought the resource fork as a good idea poorly implemented. IMO they should have just given you a library that manipulated a regular file. Then you could choose to use it or not but it would still be a single file. It could have a standard header to identify it and the system could look inside if that header was there.
One of the big problems with resource forks was that no other system supported them so to host a mac file on a non-mac drive or an ftp server, etc, the file had to be converted to something that contained both parts, then converted back when brought to the mac. It was a PITA.
That's done as part of xattr, or extended attributes. It's a very flexible system. For example you can add comments to a file so they are indexed by Spotlight.
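A quick sketch of the macOS xattr calls (from <sys/xattr.h>; note the extra position/options arguments compared to Linux). As I recall, the resource fork itself also shows up here as the com.apple.ResourceFork attribute:

    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    /* list every extended attribute on a path and print its size */
    void list_attrs(const char *path)
    {
        char names[4096];
        ssize_t len = listxattr(path, names, sizeof names, XATTR_NOFOLLOW);
        if (len <= 0) return;

        /* names holds NUL-terminated attribute names packed back to back */
        for (char *p = names; p < names + len; p += strlen(p) + 1) {
            /* passing a NULL buffer asks only for the value's size */
            ssize_t vlen = getxattr(path, p, NULL, 0, 0, XATTR_NOFOLLOW);
            printf("%s (%zd bytes)\n", p, vlen);
        }
    }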
Except NTFS does not have "extended attributes" in Linux/Irix/HPFS sense.
Every FILE object in the database is ultimately (outside of some low level metadata) a map of Type-(optional Name)-Length-Value entries, of which file contents and what people think of as "extended attributes" are just random DATA type entries (an empty DATA name marks the default one to open when you do file I/O).
It's similar to ZFS (in default config) and Solaris UFS where a file is also a directory
> Except NTFS does not have "extended attributes" in Linux/Irix/HPFS sense.
Except actually NTFS does have "extended attributes" in the HPFS sense, which were added to support the OS/2 subsystem in Windows NT. And went on to be used by other stuff as well, including the POSIX subsystem (and its successors Interix/SFU/SUA) and more recently WSL (at least WSL1, not sure about WSL2), for storage of POSIX file metadata.
In NTFS, the streams of a regular file are actually attributes of `$DATA` type; the primary stream is an unnamed `$DATA` type attribute, and any alternate data stream (ADS) is a named `$DATA` type attribute. By contrast, extended attributes are not stored in `$DATA` type attributes, they are stored in the file's `$EA` and `$EA_INFORMATION` attributes. I believe `$EA` contains the actual extended attribute data, whereas `$EA_INFORMATION` is an index to speed up access.
Alternate data streams are accessed using ordinary file APIs, suffixing the file name with `:` then the stream name. Actually, in its fullest form, an NTFS file or directory name includes the attribute type, so the primary stream of a file `foo.txt` is called `foo.txt::$DATA` and an ADS named bar's full name is `foo.txt:bar:$DATA`. For a directory, the default stream is called `$I30` and its type is `$INDEX_ALLOCATION`, so the full name of `C:\Users` is actually `C:\Users:$I30:$INDEX_ALLOCATION`. You will note that in `CMD.EXE`, `dir C:\Users:$I30:$INDEX_ALLOCATION` actually works, and returns identical results to `dir C:\Users`, while other suffixes (e.g. `:$I31` or `:$I30:$DATA`) give you an error instead. Windows will let you create named `:$DATA` streams on a directory, but not an unnamed one.
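For instance (a hedged sketch using the ordinary Win32 calls; "foo.txt" and the stream name "bar" are arbitrary examples):

    #include <windows.h>

    int main(void)
    {
        /* open/create an alternate data stream with the ordinary file API */
        HANDLE h = CreateFileA("foo.txt:bar", GENERIC_WRITE, 0, NULL,
                               OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        DWORD written;
        WriteFile(h, "hidden", 6, &written, NULL);
        CloseHandle(h);

        /* The primary stream of foo.txt is untouched and dir won't show the
           ADS; "more < foo.txt:bar" (or opening "foo.txt:bar:$DATA") reads it back. */
        return 0;
    }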
By contrast, extended attributes are accessed using dedicated Windows NT APIs, namely `NtQueryEaFile` and `NtSetEaFile`.
I'm not sure why Windows POSIX went with EAs instead of ADS; I speculate it is because if you only have a small quantity of data to store, but want to store it on a huge number of files and directories, EAs end up being faster and using less storage than ADS do.
EaData & EaFile remind me of the murky memories of OS/2 APIs.
HPFS had a different approach to handling EAs internally, but OS/2 did create an extra file on FAT16 filesystems to store EAs, which could point to the origin of $EA. (HPFS itself has special EA handling implemented in its FNODE, the equivalent of an inode/FILE entry)
I do not recall the EA actually being used anywhere by new code though, quite shocked by the mention of WSL. Old POSIX subsystem originated before ADSes I think, and might have decided to avoid creating more data types.
My quip about the difference from Linux/Irix xattrs is related to the architectural design of the APIs - the Irix-style xattr API (copied by Linux) is rather explicitly designed for short attributes - I don't know if it's still current, but I recall something about the API itself limiting it to a single page per attribute? Come to think of it, that would match certain aspects of Direct IO that AFAIK were also imported from Irix...
Oh, and BTW - NTFS internal structures being accessible as "normal" files is one of the design decisions inherited from Files-11 on VMS, one I quite like from architecture cleanliness pov at the very least.
uid, gid, mode, and POSIX format timestamps are stored in an EA. It also mentions file capabilities being stored in an ADS. On Linux, capabilities and ACLs are stored in xattrs, so that seems to imply that xattrs are stored in ADS not EA.
> Old POSIX subsystem originated before ADSes I think, and might have decided to avoid creating more data types.
I'm not sure about that, I think support for ADS has been in NTFS from its very beginnings, it was designed to support it from the very start.
Actually, from what I understand, the original design for NTFS – which was never actually implemented, at least not in any version that ever shipped to customers – was to let users define their own attribute types. The reason why their names all start with $, is that was supposed to reserve the attribute type as "system", user attribute types were supposed to start with other characters (likely alphabetic). And that's the reason why they are defined in a file on the filesystem, $AttrDef, and why the records in that file contain some (very basic) metadata on validating them (minimum/maximum sizes, etc). If they were never planning to support user-defined attribute types, they wouldn't have needed $AttrDef, they could have just hardcoded it all in the code.
the dollar sign convention predates NT, it's one of the things inherited from Files-11, where the metadata-files were not hidden from end user, just marked with strict enough permission checks. (A lot of VMS APIs used dollar signs for namespacing, too, and I believe some aspects of the naming scheme come from specific PDP assemblers when referring to some names?)
Looking at NTFS from the on-disk structure side, it always seemed quite obvious to me that a lot of the accolades given to BeFS applied to NTFS - what's missing is actually using those abilities. IIRC a lot of the indexing system is actually used by Windows Search, which in tech spaces I always saw mentioned as a "useless thing I disabled", yet I later found offices where people are very much dependent on that component (it helps that MS Office installed document handlers to index its documents in it)
> Looking at NTFS from the on-disk structure side, it always seemed quite obvious to me that a lot of the accolades given to BeFS applied to NTFS - what's missing is actually using those abilities
Microsoft had some very grand plans in this area... Cairo, OFS, WinFS... but they just kept on getting delayed, cancelled, pulled from the beta for too many issues. I think contemporary Microsoft has lost interest in this (it was something Bill Gates was big on) and moved on to other ideas.
I used to dual boot OS X and Windows on my Mac in the late 2000s. I am pretty certain when I open the HFS+ volume and copy things to the NTFS volume, some stuff became alternate data streams. Windows even had a UI to tell me about it. I didn't understand it then but my guess would be that's the resource fork.
OS/2's HPFS also had alternate data streams, called Extended Attributes. You'd make two calls to DosQueryFileInfo() - the first time to get the size of any EAs so you could allocate a buffer, then call it again to read the contents into the buffer.
It got used occasionally - not a lot. I had a newsgroup reader that would store the date of the last time you downloaded items for a group in an EA (of the file that had the items).
Rarely used because it's invisible and quite awkward to use as a user, basically unusable to most, with no GUI. Also because it will just silently be demolished if you copy to/from a FAT filesystem like a typical flash drive, so it's completely unreliable.
I work on ReFS and a little bit on NTFS. Alternate data streams are simply seekable bags of bytes, just like the traditional main data file stream. Security descriptors, extended attributes, reparse points and other file metadata are represented as a more general concept called an "attribute".
You can't actually open a security descriptor attribute and modify select bytes of it to create an invalid security descriptor, as you would if it were a general purpose stream.
Help me understand the terminology. I thought alternative data streams were just non-resident attributes. Attributes like "$SECURITY_DESCRIPTOR" have reserved names but, conceptually, I thought were stored in the same manner as an alternative data stream. (Admittedly, I've never seen the real NTFS source code-- I've only perused open source tools and re-implementations.)
Essentially, attribute names directly specify the attribute type - so $SECURITY_DESCRIPTOR declared the entry in FILE attribute list to be a security descriptor. DATA attributes have another name field to handle multiple instances
> Essentially, attribute names directly specify the attribute type - so $SECURITY_DESCRIPTOR declared the entry in FILE attribute list to be a security descriptor. DATA attributes have another name field to handle multiple instances
If you look at the Linux kernel source code, `fs/ntfs3/ntfs.h` contains the following:
struct ATTRIB {
enum ATTR_TYPE type; // 0x00: The type of this attribute.
__le32 size; // 0x04: The size of this attribute.
u8 non_res; // 0x08: Is this attribute non-resident?
u8 name_len; // 0x09: This attribute name length.
__le16 name_off; // 0x0A: Offset to the attribute name.
__le16 flags; // 0x0C: See ATTR_FLAG_XXX.
__le16 id; // 0x0E: Unique id (per record).
union {
struct ATTR_RESIDENT res; // 0x10
struct ATTR_NONRESIDENT nres; // 0x10
};
};
So the name field isn't specific to `$DATA` attributes, every attribute has it. However, for most attributes either the name is zero bytes, or it is a hardcoded name (like `$I30` for directories). Is `$DATA` the only one that can have different instances of the attribute with arbitrary names?
Arguably from the point of On Disk Structure (to reuse terminology from NTFS' ancestor in VMS), all attributes can have names as well.
Now, implementation in ntfs.sys is another thing and I have no idea if it's just an unused code path or if something would explode, and from what I heard Microsoft ended up in a situation where people are scared to touch it, not because of code quality but out of fear of breaking something.
> Now, implementation in ntfs.sys is another thing and I have no idea if it's just an unused code path or if something would explode,
ntfs.sys has validation checks in it which prevent you from directly creating anything other than named or unnamed $DATA attributes on a regular file, and named $DATA attributes on a directory, and (indirectly) creating other stuff (directories, file names, standard attributes, EAs) through the appropriate APIs. If you try to do anything funky, you'll get an "Access Denied" error code returned by ntfs.sys
> I was thinking more of "ntfs.sys encounters filesystem structure with names set for normally unnamed attributes".
From reading the source code of the Linux kernel NTFS driver (the ntfs3 one in the latest Linux kernel, not the older one it replaced), its (pretty reasonable) strategy is just to ignore things it doesn't expect. But I don't know what ntfs.sys does in such a scenario, I've never tried.
the 'trusted flag' (my term) == the thing that you touch when you Unblock-File (pwsh) or uncheck in the file properties UI => lives in an alternate data stream.
> the two-pronged idea in that file system, both a resource and a data component existed as pair. One metadata and one the file contents.
Application metadata describing what file types an application could open, and what icons to use for those file types if they matched the application’s creator code, was stored in the resource fork of the application, but file metadata never was stored in the resource fork. File types, creator codes, and the lock, invisible, bozo, etc. bits were always stored in the file system.
It was all of the forked data that made dual format CDs/DVDs "interesting". In the beginning it was a trick. Eventually, the Mac burning software made it a breeze. Making a Mac bootable DVD was also interesting.
I recall seeing CD-ROMs that had both Mac and Windows software on them, and depending on which OS they were mounted on, they would show the Windows EXE or the Mac app... I wonder how that's done. I'm guessing there was a clever trick so files on both filesystems share the same data (e.g. if the program/game had a movie, it would only store the bytes of the movie once but it's addressable as a file on each filesystem), but that sounds like a nightmare.
I can probably look it up and figure it out myself, ah, the joys of learning about obsolete tech!
As it starts about 32k in, the ISO 9660 superblock doesn't inherently conflict with an Apple partition map which starts at the beginning. Apple also had proprietary ISO 9660 extensions that add extra metadata to the directory entries much like the RockRidge extension does. Those would get ignored by non-Apple implementations of ISO 9660.
Microsoft went a different route with its long filename extensions (Joliet) – they simply created a whole different (UCS-2/UTF-16 encoded) directory tree. An ISO 9660 implementation that's compatible with Joliet will prefer the Unicode directory hierarchy and look there for files.
There were also the audio CDs that had data on them. Audio CD players would just play the audio, but a CD-ROM drive could access both. Some had apps that were games, which would play the audio tracks as part of the game.
The Mac version of the original Descent was like this too, with a great redbook audio soundtrack. The game wasn't locked to the original disc though, you could pop out the CD in the middle of the game and replace it with any other audio CD and it'd play that just as well.
IIRC from that time, those CD-ROMs contained two tracks, one formatted with ISO 9660 and another with HFS+. Windows didn't come with HFS+ drivers so it ignored it, and probably MacOS prioritized mounting the HFS+ track.
I've seen some where the combined file size exposed on each track would be larger than a CD could hold, so there had to be something more going on. StarCraft and Brood War come to mind with the large StarDat.mpq / BrooDat.mpq files.
Oh, StarDat.mpq, brings back memories. That was one of the major reasons I'm in this industry now - the file itself is a "virtual file system" (MOPAQ, with MO being IIRC the authors' initials) file with some CRC and obfuscation. As a kid, I was hell-bent on learning how it works, writing code to decode and encode it, and then use it in my own hobby projects. I learned a lot of concepts from that little rabbit hole. Hell, the way StarDat.mpq, BrooDat.mpq, and Patch_something.mpq interacted, was what you'd call "overlay FS" today.
TL;DR ISO9660 provided an area to stuff type-tagged extension information for each directory entry.
In addition, the first 32 kB of ISO 9660 are unused, which allowed tricks like putting another filesystem's metadata there.
By carefully arranging metadata on disk, it was then possible to make essentially overlapping partitions, stuffing each filesystem's metadata into areas unused by the other, with the files themselves reusing the same space.
> Implementing Mac-compatible file support in Unix meant treating the resource fork as first class, and the obvious way to do that is to put a .file beside each file.
Prefixing the file name with a single dot - is this a file system convention? Or just a "good idea"?
It's the Unix convention to hide: .files are hidden from ls unless -a is used, but cd .config/ works fine. It matched the use of . for "this dir" and .. for "parent dir", also hidden by default. It was in v7 on a PDP-11, my first experience of Unix in 1980, and probably pre-dated that.
Oh sure. I started with v6 on a pdp-10 in 1979. And the leading dot is ingrained in my brain.
But what I'm wondering about is the idea of associating (for example) "myfile.xyz" and ".myfile.xyz". I've never heard of this as a convention for associating metadata.
resource and data forks were hfs(+) features that appeared in pre-osx versions of macos. post-osx made use of the bsd fast filesystem and a rather nice unix style convention from nextstep where the on-disk representation of a .app or .pkg (which would appear as a single entity in the gui) was actually a directory tree. this would rather elegantly include ui resources as well as multiple binaries for cross platform support.
No disagreement; both came later IIRC. Melbourne uni's work on AppleTalk and Apple file system support was in the late 80s, and I believe the POSIX xattr spec work was mid-nineties; NTFS was '93 or so. The fork model in the Apple file store was eighties work.