This may seem strange to you, but this is actually the way many lisp systems (and other image-based development systems, like Smalltalk) work.
I build software using the commercial development environment Allegro Common Lisp, and an image dump is precisely the way I deliver software. The ultimate deliverable is an executable that loads the image and launches the main execution thread.
This works really well for us. Loading up our web server takes a while because it has to prime its caches etc., so we tend to do all that once, dump the image and deploy it - cache already primed.
It had an even bigger advantage for me today. Due to some git shenanigans I somehow managed to wipe out a few days of work. No idea how... Just as I was resigning myself to recoding the whole lot, I remembered that I had built an image for testing just before attempting to check in. Clozure CL saves the source code of each function in the image, so I was able to probe the image and pull out all the source code I had just lost, saving myself a few days of tedious recoding!
Not a recommended use of images - but it sure saved my butt today!
I actually have no idea. I had a lot of unchecked work - about a week's worth. (I know, I should not have let it go that far... it was just one of those tasks that seemed to get bigger and bigger.) It was time to check in, so I went through file by file staging it all (using Magit). I then decided to do some more testing and built a lisp image. Then I became distracted, started thinking about home time etc., so my recollection of what I did is a little hazy. I think I saw I had some changes that I didn't want to commit and test, so I think I stashed them. Then I probably surfed Hacker News or something... When I got back to things I suddenly noticed that none of my changes were there any more. Nothing was staged, nothing was stashed. It was all gone. I spent a few hours looking for it, all to no avail.
It is the second time I have lost work by misusing git. I think the other time I was messing with rebasing and branching and all my work disappeared.
I do really need to learn how to use git properly. But in the meantime I am not taking any more chances. I have set up a zone on our internal SmartOS server, which uses ZFS. An hourly cron job on my machine rsyncs my source directory to the zone and then creates a ZFS snapshot. So, hopefully, no matter how badly I use git now I will never be able to lose more than about an hour's worth of work.
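For the curious, the hourly job boils down to roughly the following - the hostname, paths and ZFS dataset name here are just placeholders, not the real setup:

rsync -az --delete ~/src/ backup-zone:/backup/src/   # mirror the source tree into the zone
ssh backup-zone zfs snapshot zones/backup@src-$(date +%Y%m%d-%H%M)   # then freeze it in a snapshot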
When you stage a file in git, what it actually does is create a blob in its object database with the contents of the staged file, and then point the corresponding entry in the index (the staging area) to that blob's hash. The blob object is only removed during periodic garbage collection, and only after a configurable period of time since its creation has passed.
Therefore, once you stage a change, git will keep a copy of it for at least two weeks unless configured otherwise. Even if nothing points to it anymore (due to, for instance, an errant git reset --hard), there are ways to find its object hash (something like git fsck --unreachable, or even an ls -lR in the object database directory), and once you have the hash, you can use git cat-file to retrieve its contents.
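In practice a recovery session looks roughly like this (the hash is made up):

git fsck --unreachable                      # list unreachable objects still in the object database
git cat-file -t 1a2b3c4                     # check the object type - should say "blob"
git cat-file -p 1a2b3c4 > recovered.lisp    # dump its contents into a file

Blobs carry no filename, so you have to identify each one by looking at its contents.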
Recovering from a lisp image is pretty darned cool. :D
> I had a lot of unchecked work - about a week's worth
> ... It is the second time I have lost work by misusing git
I realize this is _off topic_, but one thing that helps me a lot is making regular small commits (each describing some small idea of the change), and then rebasing them later once I get all the linting / tests fixed. (Obviously not on master ;)) It's easy to go too granular here, but it's very handy to be able to reorder the commits, or squash "fixup" commits, to make things easier to review.
I spend more time rebasing than I probably need to, but on the other hand I've never lost work. ;)
For example (adapted from a PR I was working on this morning ;))
git commit -m "Add Clone Foo feature"
git commit -m "Reorganize tests"
git commit -m "Fix indentation"
git commit -m "Add generator for Baz items"
git commit -m "Add test of Clone Foo feature"
git commit -m "fixup linting in application"
git commit -m "fixup test linting"
git commit -m "fix Foo test to use correct selector"
git rebase --interactive master
# reorder fixup commits, squash them together with the things they fix
You can always rebase and squash all of these down into one commit later before you make your pull request (or leave them as-is if your team is OK with un-squashed PRs).
> # reorder fixup commits, squash them together with the things they fix
The --autosquash option of git rebase can save you some work:
--autosquash, --no-autosquash
When the commit log message begins with "squash! ..." (or "fixup! ..."), and there is a commit whose title begins with the same ..., automatically modify the todo list of rebase -i so that the commit marked for squashing comes right after the commit to be modified, and change the action of the moved commit from pick to squash (or fixup). Ignores subsequent "fixup! " or "squash! " after the first, in case you referred to an earlier fixup/squash with git commit --fixup/--squash.
This option is only valid when the --interactive option is used.
If the --autosquash option is enabled by default using the configuration variable rebase.autoSquash, this option can be used to override and disable this setting.
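So in the example above, instead of writing "fixup ..." messages by hand you could create the fixup commits with git commit --fixup against the commits they amend, something like:

git commit --fixup=<hash of "Add Clone Foo feature">          # message becomes "fixup! Add Clone Foo feature"
git commit --fixup=<hash of "Add test of Clone Foo feature">
git rebase --interactive --autosquash master
# the todo list opens with the fixup! commits already moved next to their targets and marked as fixup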
(Not off-topic for me. :-)) I will have a play with all this. Now that I have my source being backed up to ZFS I can be a bit more comfortable experimenting with new workflows.
To help learn to use Git properly, start by making frequent commits from the command line throughout the day. I would say an appropriate pace is about once every 30 minutes.
When you're done with the feature, then depending on your preference you can either merge your branch into the master branch, or rebase your changes onto the master branch (my recommendation). With either approach you may wish to squash your commits, so that you ship just one commit, or at least fewer than your full set of working commits. Then you prepare the changes for code review and for pushing upstream.
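One way that flow can look on the command line (branch name invented):

git checkout my-feature
git rebase --interactive master                  # replay the branch onto master, squashing commits as desired
git push --force-with-lease origin my-feature    # update the branch that goes up for review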
The best way to learn about and become comfortable with a tool is to use it regularly. As you approach mastery it will become a Swiss army knife. Frequent commits give you a lot of utility for the same reason an editor's undo/redo buffer does, except that the history is persistent.
If you are worried about screwing things up by running the wrong git commands, then check out git's reflog feature, and how you can use `git reset --hard` to roll back changes via the reflog. Virtually every change you make to your local repository is versioned, and the reflog shows you that history and allows you to roll back.
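For example, recovering from an over-enthusiastic git reset --hard usually looks something like:

git reflog                   # every position HEAD has been at, newest first
# pick the entry from just before the bad command, e.g. HEAD@{1}
git reset --hard HEAD@{1}    # move the current branch (and working tree) back to that state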
My first step with git is always creating a new branch; then I commit multiple times a day with "wip" or minor notes as the commit message. It is purely to capture the stream of consciousness of the coding. When it comes time to create a sensible patch I immediately branch again. The old WIP branch is not touched until I've completely merged the feature. On the new branch I usually reset, then commit hunks in some way that makes sense, or, if the changes are small, squash everything into a single commit.
You can also do the same thing with tags if you like.
There is no penalty to branches in git... use them. Frequently.
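A rough sketch of that flow, with branch and commit names invented:

git checkout -b wip-clone-foo          # stream-of-consciousness branch
git commit -am "wip"                   # ...repeated throughout the day
git checkout -b clone-foo              # new branch for the real patch; the wip branch stays untouched
git reset master                       # clone-foo now points at master, with all the wip changes left unstaged
git add -p                             # stage hunks in an order that makes sense
git commit -m "Add Clone Foo feature"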
I got into trouble at one stage mixing branching with rebasing. This really put me off branching. Creating a new branch to perform the merge is a stroke of genius - thanks! I will be doing that right away.
Thanks for the explanation! I am not a heavy user of git stash, so I'm not sure whether git reflog could have helped you, but I am grateful that somebody steered me toward git reflog early on when I was learning git. With reflog and frequent commits, it's very hard to lose work, short of deleting the entire project or its .git/ directory. rsync+ZFS also sounds like a nice combo for working around git so that future mistakes can't destroy work.
> An hourly cron job on my machine rsyncs my source directory to the zone and then creates a ZFS snapshot. So, hopefully, no matter how badly I use git now I will never be able to lose more than about an hour's worth of work.
This is a thread about Emacs - no need for ZFS, Emacs can already do versioned file backups for you!
That is very interesting. I do have Emacs making some sort of backup, but have never managed to get it to work reliably. I'll follow those instructions.
What was confusing for me was that by default Emacs does not make a backup on every save. It backs up a file only the first time you save it after opening it, so what gets backed up is the version of the file from before you started editing.