Who are comparable providers in your opinion? Hetzner has KYC and needs ID for you to use their services, I’d rather go with a service that didn’t. Any recommendations?
There are some Hetzner resellers which accept crypto payments instead.
OVH (and subsidiaries like server4you, Kimsufi) is priced a bit higher, but comparable (in some regions).
But the last time I used OVH, Hetzner also didn't require ID verification; maybe they've changed since then.
Ionos is also similarly priced and didn't need ID the last time I used them.
OVH wants ID as well in some cases. If you're in the US, you aren't getting an OVH server overseas anymore, to my knowledge. Although you can get 2 Gbps unmetered on your servers, which is awesome.
I've just been lazy, buying a ___domain from Namecheap and getting the VPS Pulsar (6 GB RAM, 4 cores, 250 Mbps up/down) for when I do a project. One server usually does fine for multiple projects.
Wish I could say it was more sophisticated than slow trial and error. I tried changing many different aspects: MTU, forcing different routes/peering through different VPSs, various reverse proxy configurations.
I guess what started leading me down the right path was a more methodical approach to benchmarking different legs of the route with iperf: client <-> reverse proxy, reverse proxy <-> Jellyfin server. I started testing those legs separately, with and without WireGuard, over both TCP and UDP. The results showed that the problem manifested at the host level (nothing to do with Jellyfin or the reverse proxy), and only for high-latency TCP. The discrepancies between TCP and UDP were weird enough that I started researching Linux sysctl networking tunables.
There might be something smart to say about the general challenges of achieving stable high throughput over high-latency TCP connections, but I don't have the knowledge to articulate it.
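For anyone retracing this, here is a sketch of what that per-leg benchmarking and sysctl tuning can look like. The hostnames and values are illustrative, not the commenter's actual setup or fix:

```ini
# Per-leg benchmarking (hostnames are placeholders for the real boxes):
#   on each server:       iperf3 -s
#   client -> proxy:      iperf3 -c proxy.example.com      # TCP; add -u for UDP
#   proxy  -> jellyfin:   iperf3 -c jellyfin.example.com
#
# /etc/sysctl.d/90-tcp-tuning.conf -- the usual suspects for poor
# high-latency TCP throughput: raise the socket buffer ceilings so the
# TCP window can cover the bandwidth-delay product, and try BBR.
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.ipv4.tcp_congestion_control = bbr   # requires the tcp_bbr module
```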
In Python, I think of it as: DuckDB is for getting the data you want, in the form you want it, from a file or a remote source. Polars is for when you want to do something with that data, like visualize it.
`duckdb.sql("SELECT foo, bar * 2 as bar_times_2 FROM ...").pl()` (now in polars format) -> other stuff
In Rust, it's a bit fuzzier to me, as DuckDB is a pretty heavy dependency. I'm looking more and more fondly at DataFusion.
Yeah, it is, but there's also a lot more to being an SRE than this book. This book more or less tells you how to stand up a reliability program; what it doesn't really indicate is what SREs do. A lot of people I meet think SRE is just the new title for "operator", which couldn't be further from the truth. Whether you're using an embedded model, as referenced in the book, or you have a central org, both are made up of software and systems software engineers who are focused on performance and reliability. They build software, do analysis, and write policy that improves the bottom-line reliability of the organization.
Not an SRE, but I think the main contribution from this book was to popularize terminology of operations (eg SLAs) and to give an opinionated perspective on how to handle operations at scale.
More practically, I don’t think the book is as useful, as it generally only makes sense when you reach a certain scale that few organizations ever do (imo).
However, we are heading into a future where computing will be everywhere and sensors will be in everything, so in maybe a decade even the "smallest" of organizations may be responsible for large-scale distributed systems, and operating those would require the concepts provided in the book.
As a non Googler myself, it still is if you want to know how to set up an SRE team and introduce SRE (ie good sysadmin, for lack of a better word) best practices. The focus on actual indicators such as SLI and SLO, the importance of reducing "toil" (boring repetitive tasks) and automating,... these are all valid concerns.
yes, but not as a checklist of things you have to do, instead it's a valuable discussion of lots of problems and how they were solved in specific circumstances.
The front half is for introducing ideas. The back chapters were never that great, IMHO. They get too far into the weeds while at the same time missing actionable advice.
Seconded. I am very young, never bothered to learn bash, went straight for fish. ChatGPT is there to help if I ever encounter unintelligible “$@“&2>?!¿…
Gonna try nushell soon as that seems even more productive.
Nushell is great fun, but be prepared to encounter and fix errors when you paste code designed for sh/bash/zsh/fish. It's a much bigger step away from convention than the others.
I think the one that catches me off guard the most is:
export FOO=bar
BAZ=qux
Is now
$env.FOO = "bar"
let BAZ = "qux"
It makes more sense the nu way, but old habits die hard.
as such, doing that yields different results based on whether i'm on a mac, in debian, in a container, on a zebra-on-the-moon...
it's all a bunch of loops and string manipulation in the end.
in fact, awk can handle a whole lot of it too!
and octave... and others, lol; so long as it does not turn into the python2 to python3 fun we've had in the past and we have some stability between versions; this is why i still choose bash (and the whole gnu binaries)
I'll admit that I haven't tried this on other platforms, but I was under the impression that the opposite should happen: `/bin/sh` loads a basic POSIX-compliant shell that should work on other platforms, with Bash, Zsh, and whatever else, which are extensions of a POSIX-compliant shell.
sh will load whatever the OS decides to map it to, LOL; there is no rule. again, that's why i just go with gnu bash, as it is pretty feature-rich and exec's other cli apps really well. the loops it can do, and arrays, are just a bonus; if i need a more complex data structure, then there's another app for the job, one that is not an interactive shell. just beware of the dragons when dealing with macos's old-ass bash; check the version, and go with a newer one if you can
I prefer sticking with bash where necessary (where a script is the only thing that will reasonably work), and elsewhere using a programming language with testing, type checking, modularity, and compilation into something with zero or minimal runtime dependencies.
I use zsh on my work and personal computers. I'm not ssh'ing into boxen these days. But when I do, I'm not doing anything more than reading logs to figure out why the userdata on an EC2 didn't work as expected.
I try to use posix standards when convenient, but I'll switch to bash at the first sign of posix complexity.
Xonsh seems like I'd have to type a lot more than I do with zsh. I would also be concerned about not being able to give my team members the same command I used without forcing them into a non-standard shell.
I don't use fish because I've only met one other person IRL who used it. Everyone I've worked with has used bash, zsh, or ksh (I'm glad I left the ksh company before they had to rewrite all those ksh scripts).
Also, Bash is staying for now. posix will most likely always work for the foreseeable future. Zsh seems to be the new Bash, but I have yet to see anyone put zsh in a shebang at work.
I recently (i.e., yesterday) migrated my 15+ year old bash config to zsh. zsh has some great quality of life improvements compared to bash and is basically 1-1 compatible. I had to spend about an hour migrating my prompt, but other than that it was a smooth transition.
zsh is now the default shell in macOS, so I'd say it's a safe bet if that's what you work with.
I switched a few years ago and while there's a lot to like about (and power in) zsh, there's a lot i really dislike about it. For starters, it adds so much additional functionality and compatibility that the documentation (the man pages) is terrible. Also, the additional history & variable expansion capabilities are messy/ugly in shell syntax (imho). I think ultimately the problem is that, for shell scripting, bash has clearly won, but other shells show that there's room for an alternative specifically for a user's interactive interface...but zsh didn't get the memo, so it's trying to be all things to all people. One of these days, i'll probably swing over to fish...if I can get the energy to change my environment yet again.
careful, there are footguns in those words. It may seem like it's 1-1, but it's not. There are subtle differences, especially in escapes: Bash prompts use backslash escapes (e.g. \u), while Zsh prompts use percent escapes (e.g. %n). Zsh's wildcard expansion also behaves differently (an unmatched glob is an error by default rather than being passed through literally). There are other differences as well, but you can use the `emulate` command (e.g. `emulate sh`) to get much closer to 1-1.
Also, once you've made the switch to zsh - checkout oh-my-zsh (https://ohmyz.sh/)
Learn the new tools. I'm working on my own machine 99% of the time, and if I'm on a remote machine, there's a 90% chance I'm running something automated there. I'm not going to handcuff myself to the baseline for the sake of that 0.1%.
There’s no way I’d go back from Fish to Zsh or Bash on my daily driver. It’s just too pleasant to give up just because of “what if?”.
The big problem is that bash is more or less portable (almost everything has bash in the box). They need to start convincing distros to include and/or default xonsh, to really make it worthwhile.
In my view, yes. It would allow you to extend your shell easily: for example, do something with the history, like setting a blacklist of commands that should not appear when you press the up key, while still retaining them in the history file for poor man's auditing.
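As a rough illustration of the idea in plain Python (this is not xonsh's actual API, just the shape of the hook you could write; the blacklist prefixes are made up):

```python
# Hypothetical history filter: commands matching the blacklist are hidden
# from up-arrow recall but still written to the full audit log.
BLACKLIST = ("curl ", "wget ")  # assumed prefixes to hide from recall

audit_log = []       # everything, for poor man's auditing
recall_history = []  # what the up key would cycle through

def record(cmd: str) -> None:
    audit_log.append(cmd)
    if not cmd.startswith(BLACKLIST):
        recall_history.append(cmd)

for c in ["ls", "curl http://example.com/secret", "make"]:
    record(c)

print(recall_history)  # -> ['ls', 'make']
print(len(audit_log))  # -> 3
```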
If you're new and still trying to get into the industry, try all the new tools. Drive them hard and try to break them, so that you can find bug fixes you can contribute. Just go nuts, and let yourself steadily build up a backlog of unique, public, referenceable commits you can show employers.
Once you're already established and comfortable, it's up to you if you want to keep trying the new flavor of the week. People gravitate towards novelty at different parts of the stack: Some people love running FreeBSD or Alpine, but stick to Bash on top of those; others, like me, try to stick with Ubuntu whenever possible, and mess around with things like shells and tiling window managers. Others even return to Windowsland and instead focus all of their efforts on innovating at the highest levels of what they can do with C# and actually making money with an innovative business model.
But you'll never learn where you do and don't enjoy the thrill of seeing something new break on you if you don't have that initial "question everything" phase.
Personally I'd prefer to have to learn no shells at all, but since that's not possible I'll stick with the one that's most commonly installed, which is bash. Similar feeling with an editor - vanilla vi instead of learning and depending on a slew of fragile extensions that only work on a personal laptop. If the challengers supplant the defaults on servers in these domains at some point, I'll learn them then.
Quickemu gives me the ability to instantly spin up a full blown VM without fiddling with QEMU configurations, just by telling it what OS I want.
This might be less useful for those who are quite familiar with QEMU, but it’s great for someone like me who isn’t. So this saves me a whole lot more than 2 minutes. And that’s generally what I want from a wrapper: improved UX.
I find NNN and Ranger a lot more ergonomic as a UX, but MC's virtual file system is so good: the ability to browse local, remote, zip, tar, and jar folders seamlessly in the same interface is really useful.
What are you getting by checking that the file exists that you don't get from the 'file not found' error that the `openFile()` routine returns?
Because it doesn't matter if `doesFileExist()` returns true, you still have to handle the `file not found` error[1] when calling `openFile()` on the next line anyway.
[1] Just because the file exists when the program checked, that doesn't mean that the program can open it (permissions), that the file still exists (could have been removed between the call to check and the call to open), that the file is, in fact, a file that can be opened (and not a directory, for example), that the file is not exclusively locked (Windows) by some other process that opened it, etc. `doesFileExist()` tells you nothing that would change how the subsequent code is written.
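A minimal sketch of that style in Python, where `read_config` is a hypothetical caller: attempt the open and handle the errors, rather than pre-checking existence:

```python
def read_config(path):
    # Checking for existence first adds nothing: the file can vanish, be a
    # directory, or be unreadable between the check and the open. Just open
    # it and handle the failures you actually care about.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None  # absence is expected under normal circumstances
    except (PermissionError, IsADirectoryError) as e:
        raise RuntimeError(f"config at {path} exists but can't be read") from e

print(read_config("/no/such/file"))  # -> None
```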
I guess we’re accustomed to thinking that absence of a file is not “exception-worthy” because it is expected under normal circumstances. But the cases you raised make sense.
For CRUD apps, SvelteKit's progressive enhancement and form actions make it quick to add simple functionality to a page. You can store the PocketBase instance, pb, in locals and reference it all over the application.
For more multiplayer-type things, attaching a client-side subscription to a collection allows live updates of elements that can be worked with/added/moved around, etc.