
Actually, many early Unixes included undump, which let you build an executable from a core dump.



And what were the stability guarantees for such things? Could you take such a dump, bring it to a different system (potentially with different hardware, a different minor OS revision, etc.), and expect it to work?


In those days, you could pretty much never take any software to different hardware, and even a different minor OS revision was asking for trouble. What you are asking for isn't something anyone expected back then. Different hardware was _really_ different.


Right, that's kinda what I was expecting. But that also explains why it was a mistake to adopt this model for Emacs, or at least to stick with it for so long (to clarify, I specifically mean full process dumps, not Lisp world dumps) - it's a different world, and it has been different for a very long time, with software and OS/hardware significantly more decoupled.


> it's a different world, and it has been different for a very long time, with software and OS/hardware significantly more decoupled.

This argument makes about as much sense for core files as it does for binary executables, which is to say, not very much.


Why? A binary executable does not rely on the very fine details of the implementation of the OS heap manager, for example, while a dump necessarily would.


> A binary executable does not rely on the very fine details of the implementation of the OS heap manager, for example, while a dump necessarily would.

What are you talking about? What is it about brk(2)/sbrk(2)/mmap(2) that would be a problem?


Correct me if I'm wrong, but if you make a process dump of the entire process, that dump would include the internal structures used to maintain its heap, no? If those structures change between OS versions, for example, then a process dump from an older version would represent a corrupted heap when loaded on a newer version.


None of the kernel's data structures are saved, or even visible to the process - kernel memory (on Linux x86-64, everything mapped in the upper 128 TiB) is not readable by the process.

You can have a problem if the kernel system call interface changes, but that is a problem for every executable. You would need to re-compile everything on the system.
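
To make that concrete, here's a rough sketch of my own (assuming Linux x86-64 with glibc; the 0xffff800000000000 constant is just a known kernel-half address, nothing from any particular program): a userspace read of a kernel address simply faults, so there is nothing on that side for a dump to capture.

    /* Sketch: kernel memory is not part of the process image.  On Linux
     * x86-64 the upper half of the address space belongs to the kernel;
     * touching it from userspace just raises SIGSEGV, and none of it
     * appears in /proc/self/maps, so none of it can end up in a dump. */
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf env;

    static void on_segv(int sig)
    {
        (void)sig;
        siglongjmp(env, 1);                      /* jump back out of the fault */
    }

    int main(void)
    {
        signal(SIGSEGV, on_segv);

        /* An address in the kernel half of the x86-64 address space. */
        volatile unsigned long *kernel_addr =
            (volatile unsigned long *)0xffff800000000000UL;

        if (sigsetjmp(env, 1) == 0) {
            unsigned long v = *kernel_addr;      /* faults: not mapped for us */
            printf("read %lx (unexpected)\n", v);
        } else {
            puts("SIGSEGV: kernel memory is not readable from userspace");
        }
        return 0;
    }

Running it should take the SIGSEGV branch every time.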


Are all heap-related structures stored in kernel memory, though? Is there nothing in userspace for bookkeeping?


The kernel just gives you pointers and sizes via mmap/sbrk; it's up to you to keep track of them.
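
As an illustration, here's a toy sketch of my own (not how any real allocator is implemented): the kernel hands back anonymous pages via mmap(2), and whatever bookkeeping the allocator needs - sizes, free lists, chunk headers - lives in ordinary userspace memory inside the process, which is exactly the memory a process dump captures.

    /* Toy allocator sketch: the kernel's contribution to a "heap" is just
     * anonymous pages from mmap(2); every piece of bookkeeping (here, a
     * size stashed at the front of the block) lives in the process's own
     * address space, not in the kernel. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    static void *toy_alloc(size_t n)
    {
        size_t total = n + sizeof(size_t);       /* room for our own header */
        void *p = mmap(NULL, total, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        *(size_t *)p = total;                    /* userspace bookkeeping */
        return (char *)p + sizeof(size_t);
    }

    static void toy_free(void *q)
    {
        char *p = (char *)q - sizeof(size_t);
        munmap(p, *(size_t *)p);                 /* size read from our header */
    }

    int main(void)
    {
        char *buf = toy_alloc(4096);
        if (!buf)
            return 1;
        snprintf(buf, 4096, "heap metadata lives in userspace");
        puts(buf);
        toy_free(buf);
        return 0;
    }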



