fprotthetarball's comments | Hacker News

There are still branches, but they aren't named by default. You give them names with "bookmarks", which you can push to remote git repositories as branches.

This lets you work on things without having to name them up front. That turns out to be pretty helpful when you're experimenting: just "jj new <revision>" and start editing. If it turns out to be something you want to share, "jj bookmark create <name>" and then you can push it. (You can also push without giving it a name, in which case you'll get a git branch with a name based on the change id.)

A change's ID stays constant as the change evolves, so you can use change IDs as a kind of branch name when switching between the features you're working on.
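
The workflow above can be sketched as a shell session (a sketch, not a transcript; the revision and bookmark names are placeholders):

```sh
# Start a new anonymous change on top of an existing revision
jj new <revision>

# ...edit files; jj snapshots the working copy automatically...

# Decide the work is worth sharing: name it, then push it
jj bookmark create my-feature      # bookmark points at the current change
jj git push --bookmark my-feature  # shows up on the remote as branch "my-feature"

# Or push without naming it; jj derives a branch name from the change id
jj git push --change @
```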


Adding onto this, there's also an experimental feature that moves a bookmark along as you create new revisions (similar to how a git branch behaves).


Oh, that would be nice. I get the reasoning for not making it the default, but it would be nice to have the option.


What changes the change ID? What constitutes a change? Is a change made up of many commits, or the other way around?


A change ID is stable over time as you tweak the change's message or the files it edits. Each of those edits becomes a new immutable git commit under the hood.

The fact that the change ID is stable is very convenient for humans: it means you have something explicit to hold on to while everything else may change over time.
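
Concretely, something like this (a sketch; the change and commit ids shown in the comments are illustrative, not real output):

```sh
jj log -r @                        # shows e.g. change id "qpvuntsm", commit id "abc123"
jj describe -m "better message"    # rewrite the description of the current change
jj log -r @                        # same change id "qpvuntsm", new commit id "def456"
```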


My backend for a simple web application I'm working on is entirely in Rust. Highlights:

- axum: web application framework - https://github.com/tokio-rs/axum

- axum-htmx: axum extractors, responders, guards for htmx - https://github.com/robertwayne/axum-htmx

- rusqlite: SQLite bindings - https://github.com/rusqlite/rusqlite

- maud: HTML templating as a macro - https://maud.lambda.xyz

The way maud lets you compose markup works very nicely with htmx. The HX-Request header tells you whether the request came from htmx or is a regular request. For a regular request you call the top-level function to render the entire page; for an htmx request you call a subset of functions to render the appropriate partial.

It's also nice to be able to test the rendered pages easily. My unit tests verify the rendered HTML, too.
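
A minimal sketch of that branching logic in plain Rust, with hand-rolled string rendering standing in for maud templates (the real setup would use maud and axum-htmx's header handling; the function names here are made up for illustration):

```rust
// Render just the fragment that htmx will swap into the page.
fn render_partial(name: &str) -> String {
    format!("<section id=\"greeting\">Hello, {name}!</section>")
}

// Render the full page by composing the same partial into a layout.
fn render_page(name: &str) -> String {
    format!(
        "<!DOCTYPE html><html><body>{}</body></html>",
        render_partial(name)
    )
}

// Dispatch on the HX-Request header: partial for htmx, full page otherwise.
fn respond(name: &str, hx_request: bool) -> String {
    if hx_request {
        render_partial(name)
    } else {
        render_page(name)
    }
}

fn main() {
    // Unit-test style checks against the rendered HTML.
    assert!(respond("world", true).starts_with("<section"));
    assert!(respond("world", false).starts_with("<!DOCTYPE html>"));
}
```

Because the partial and the full page share one rendering function, the same assertions cover both paths.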


Do you use something like cargo-watch for "hot reload"?


No, I just stop and restart when I feel like I'm in a good spot. Nothing fancy.


I have a similar setup, using snare to handle the webhook endpoint: https://github.com/softdevteam/snare

GitHub calls the webhook after a push to main and a successful test-suite run. Snare then runs a shell script on my server that does a git pull, builds, deploys, and calls a cronitor.io hook to report that the deploy succeeded.

I've been pretty happy with how relatively simple it is and how well it works.
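
The script snare runs might look roughly like this (a sketch only; the paths, service name, and cronitor URL are placeholders, not my actual setup):

```sh
#!/bin/sh
set -e                        # abort the deploy on the first failure

cd /srv/myapp                 # placeholder checkout location
git pull --ff-only            # update to the commit that passed CI
cargo build --release         # rebuild the backend
sudo systemctl restart myapp  # swap in the new binary

# Ping cronitor.io so a missed deploy raises an alert
curl -fsS https://cronitor.link/p/<key>/deploy
```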


> I have a pet theory about the "AI revolution" or AGI that keeps getting relentlessly confirmed as events unfold: Microsoft sees a massive financial upside to this technology that no one else sees and this is being kept under wraps.

If AGI is "a highly autonomous system that outperforms humans at most economically valuable work", and I am Microsoft and have AGI while other businesses do not, I am putting it in charge of a Windows VM in Azure and offering it to companies to run aspects of their businesses. Why stop at "GPTs" if I can offer you specialized Clippys?

Put all your data in Microsoft 365, let Clippy do its thing, and you're saving a lot of money by not paying people. Microsoft gets its cut, and you get to fire your employees. Win-win.


Every time I post something about this on HN, someone points this out. It’s a fairly obvious idea. Therefore, it does not fit my theory.


Kagi allows you to provide custom CSS if you want to tweak it to your liking. I've used it to remove some of the widgets I don't personally find useful, and to replace some icons.

https://help.kagi.com/kagi/features/custom-css.html
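
For example, a rule along these lines hides a widget (the selector is a guess for illustration; the actual class names come from inspecting the results page):

```css
/* Hide a search-page widget I don't use (selector found via the inspector) */
.inline-widget {
    display: none;
}
```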


I bought a Flowbee in 2009 for $83 total. I cut my hair about once every 3 weeks, so I have paid roughly 34 cents per cut so far, ignoring the electricity cost of running a vacuum and the Flowbee for a few minutes.


This is also my experience. I regularly fill up my 8 qt as high as I'm able to, as long as I think I can carry the pot without spilling.

It takes 30-45 minutes to cool down enough to depressurize, but the lid is always clean so the food is likely staying in the pot.


Ollama is the best I have seen if you want to play around with LLMs.

After installing and launching the application, you run "ollama run <model name>" and it handles everything, dropping you into a chat with the LLM. There are no dependencies for you to manage -- just one application.

Check out the README: https://github.com/jmorganca/ollama#readme


I have heard that shared conversations would also share the custom instructions. This is probably not ideal if people are putting personal information in them. Guessing they're plugging that hole.


Wait, are shared conversations re-run by the receiver? Otherwise, whatever the secret prompt was would be irrelevant.


Joplin does, and it works rather well. I have been planning on migrating my notes since the acquisition, but wanted to see what their next move was.

Joplin seems like the best at handling my mix of text notes, PDF files, and images at the moment. If it doesn't work out, though, at least I have my data in a more open form.

