Nothing is preventing you from adding an IP whitelist and/or basic auth to the same configuration. That is what I do with all my nginx configurations, to be extra careful so nothing slips through by accident.
I have something similar running with nginx myself, with the purpose of getting access to my internal services from outside. The main idea here is that the internal services are not on the same machine this nginx is running on, so it passes requests along to the right server on the internal network. It goes like this:
Basically, any regex-matched subdomain is extracted, resolved as $service.internal and proxy-passed to it (see the sketch below). For this to work, any new service of course has to be registered in the internal DNS. Adding whitelisted IPs and basic auth is also a good idea (which I have, just removed from the example).
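A minimal sketch of that kind of config, with example.com, the .internal suffix and the resolver address as placeholders for whatever your setup uses:

    server {
        listen 443 ssl;
        # named capture: foo.example.com -> $service = foo
        server_name ~^(?<service>[a-z0-9-]+)\.example\.com$;

        # internal DNS server; nginx needs an explicit resolver when
        # proxy_pass contains a variable
        resolver 10.0.0.53 valid=30s;

        location / {
            # IP whitelist / basic auth would go here
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://$service.internal;
        }
    }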
I haven't run a Samba instance that has changed those options from their defaults in like twenty years.
# grep socket /etc/samba/smb.conf
#
I don't have any performance-tweaking options set... just auth, share definitions, and server identity and protocol information. I learned long ago that for SOHO (and probably even medium-sized-office) use, the performance defaults for well-tested software like this are just fine.
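For reference, a minimal sketch of what such a config looks like (share name, path and user are placeholders, and everything performance-related is left at the defaults):

    [global]
        workgroup = WORKGROUP
        server string = home file server
        server min protocol = SMB3
        security = user

    [media]
        path = /srv/media
        valid users = alice
        read only = no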
Minor nitpick - shouldn't you first define the service and only then the timer for it? Otherwise, since you've enabled the timer while still figuring out how to write the service, systemd won't have anything to run when the timer triggers. Maybe I am wrong, but that just feels like the logical order. Anyway, after years of hating on systemd I have also started to embrace it and am porting my cron jobs to systemd timers, and I must admit it's really nice: the overview you get with list-timers, the next execution timestamp, the total execution time, the ordering of services so one can run after another has completed, and of course the logging in the journal so I can filter output and keep track of everything. It's just a wonderful experience.
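For anyone who hasn't made the jump yet, the pattern is roughly this (unit names and the script path are placeholders): a oneshot service plus a timer that schedules it.

    # /etc/systemd/system/backup.service
    [Unit]
    Description=Nightly backup job

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    # /etc/systemd/system/backup.timer
    [Unit]
    Description=Schedule backup.service

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with "systemctl enable --now backup.timer" and check upcoming runs with "systemctl list-timers".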
EDIT: yeah, the email reporting is certainly missing, but it was hard to control anyway, since the whole STDOUT was shipped, which is not what I wanted most of the time. It would be good to come up with some way to still have small overview emails sent about important jobs, maybe a dependency service which starts when an important job finishes and just sends an email about it.
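One way to sketch that dependency-service idea (hypothetical unit names; OnSuccess= needs systemd 249 or newer, OnFailure= also works on older versions): point the job at a templated mail unit.

    # added to backup.service
    [Unit]
    OnSuccess=report-mail@%n.service
    OnFailure=report-mail@%n.service

    # /etc/systemd/system/report-mail@.service
    [Unit]
    Description=Mail a short report about %i

    [Service]
    Type=oneshot
    # 'mail' and the recipient are placeholders for whatever MTA you use;
    # only the last few journal lines go out, not the whole STDOUT
    ExecStart=/bin/sh -c 'journalctl -u %i -n 20 --no-pager | mail -s "job %i finished" admin@example.com'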
Kubernetes admin here with ~2 years of experience. Since a lot of you have a misconception about what this guy is doing, I will try to explain. The author wrote a piece of code which talks to the network gateway to get the IPv4/IPv6 network address and then updates the Kubernetes configuration accordingly, from within a container running on said cluster. That seems to be needed because the MetalLB component in use exposes Kubernetes deployments in the cluster via a predefined IPv6 address pool which comes from the ISP, so if that changes, the cluster configuration has to change too. This is one of the most bizarre things I have read about Kubernetes this year and probably shouldn't exist outside a home testing environment, but hey, props to the author for coming up with such an idea.
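For context, the object that ends up being rewritten is roughly a MetalLB IPAddressPool like this (in current MetalLB versions; pool name, namespace and the 2001:db8 documentation prefix are placeholders). When the ISP delegates a new prefix, spec.addresses has to follow:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default-pool
      namespace: metallb-system
    spec:
      addresses:
        - 2001:db8:0:10::100-2001:db8:0:10::1ff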
Thanks. If I were a company, I would probably be in control of when my IPv6 range changes. And if my ISP is any good (I only recently switched to it), my IPv6 network should stay the same.
The network range in a home setting is always given by your ISP, most likely with DHCPv6 prefix delegation; very rarely do you, in a home setting, pay for a permanent IPv6 network range. Granted, most decent ISPs try to keep it stable, since there's no good reason not to, and it's a strong recommendation from the standardization bodies etc. But it's still just best effort: accidents happen, state gets lost, and suddenly you have a different network.
Sure, it would probably take me less than an hour to just change everything, but we are hackers here, so what's the fun in that? At least I gravitate towards perfecting things even beyond pure need, just because I can. At work I have to call it a day when there's no more significant gain; at home I am free to think "this is fine, but can I actually do it better?". If the answer is yes, and you have the time, I'd say go for it. Some people like to watch cat videos on YouTube; I prefer to tinker with getting stuff to work. Sometimes it's useful, sometimes it's just for the fun of it.
I'm on my way to improving this, by the way. I plan to create a UniFi Networking Operator that can help me not only with this, but also with configuring my UniFi gateway and firewall rules through Kubernetes properties. It will be more logical to let my "dynamic IP" setup just change Kubernetes properties, and let the Operator handle the UniFi configuration that follows from them.
Overkill? Hell, yes! Fun? For me, at least. Will I learn something? Yes, I will learn how to create a Kubernetes Operator!
Yeah, I'm a beginner in Kubernetes, but not in IT and sysadmining in general; I've got 30 years of experience there. For now, Kubernetes is a just-for-fun project at home, but it runs my day-to-day home services, which makes it even more fun to improve. We use Kubernetes where I work, though not in my area, so it's not inconceivable that my home tinkering will be of benefit at work some day.
And yes, I run a personal blog (in my Kubernetes cluster). I try to make it a bit educational, with more or less repeatable experiments for people to pick and choose from.
Some will be good, some will probably be a bad idea. But as long as there are learnings to be had, it's worth doing.
Yeah, Latvia is actually doing quite alright culturally for its small size. We have multiple NBA players, an Oscar-winning producer (the movie Flow) and a bronze at the 2023 ice hockey world championship. All that from a population a bit below 2 million is not bad; sadly, none of it will matter if we get invaded.
Does anyone have an interesting project to build with a Kinect? I have a 360 Kinect lying around, since nobody throws Just Dance parties anymore. Also, was the XBone version of the Kinect much better than the 360 one?
There was something about the V2 system that V1 users found unsatisfactory. Depth was jittery, or whatever it was.
V1 was based on an Israeli structured-light technology that later became Apple's TrueDepth: a static dot pattern captured by a camera, whose deviations become depth. V2 was technically an early, alternative implementation of time-of-flight LIDAR based on a light-phase-delaying device, outputting deviations from a known set distance as images, or something along those lines.
There wouldn't have been a lack of app support if the V2 had worked well. There was something about it that left users going "yeah... by the way".
The problem with the v2 (speaking as someone whose boss loved the Kinect and used it experimentally in retail environments) wasn't the new tech. The new tech was, and is, amazing. The level of detail you can get from the v2 dwarfs the v1 on every axis.
The problem was that it ONLY had a Windows SDK, and most of the people who did amazing work with the Kinect v1 outside of games and the Xbox were using it on Macs and Linux, in tools like openFrameworks and Processing. The v1 had SDKs developed outside Microsoft and PrimeSense that were cross-platform compatible. Tons of artists dove right in.
The Kinect v2 only offered a Windows SDK, and that's what killed it in the "secondary market".
The problem with the original Kinect (v1) is that good tracking software for it was never really written. Most applications that support it just use the original Microsoft SDK for the motion tracking, and it's just not very good: the main issue is that it always assumes the tracked person is directly facing the camera, and it deals very badly with occlusion. The good thing about it is that it ran in real time on a potato.
In order to get a good result, someone would need to train a good HPE model that could use the point-cloud data directly, but it seems nobody cares about depth sensors anymore; most efforts are going into HPE from regular 2D video (like the MediaPipe Holistic model). And given the results you can get with MediaPipe, OpenPose and the like, it's understandable that nobody is bothering to work with low-resolution point clouds for 3D HPE anymore.
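For a sense of how low the barrier is on the 2D-video side, here is a minimal sketch using the (legacy) MediaPipe Pose solution in Python; the webcam index and the printed landmark are arbitrary choices:

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose
    cap = cv2.VideoCapture(0)  # plain webcam, no depth sensor needed

    with mp_pose.Pose(model_complexity=1) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_world_landmarks:
                # 33 landmarks with metric x/y/z, hip-centred
                nose = results.pose_world_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
                print(nose.x, nose.y, nose.z)

    cap.release()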
The only use case I can think of for a Kinect v1 in 2025 would be robotics, if you want a low-latency, low-resolution point cloud for your robot control, but even there I think we are moving towards big vision models capable of making sense of regular video feeds.
Was that really so bad in terms of performance? Surely .htaccess didn't exist there most of the time, and even if it did, it would have been cached by the kernel, so each lookup by the Apache process wouldn't be hitting the disk directly to check for the file's existence on every HTTP request it processes. Or maybe I am mistaken about that.
a) If you didn't use it (the less bad case you are considering), then why pay for the stat syscalls on every request?
b) If you did use it, Apache was reparsing/reprocessing the (at least one) .htaccess file on every request. You can see how the real impact here was significantly worse than a cached stat syscall.
Most people were using it, hence the bad rep. Also, this was at a time when it was more common to have webservers reading from NFS or another networked filesystem. Stat calls then involve the network, and you can see how even the "mild" case could wreak havoc in some setups.
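For reference, the usual way to avoid both costs is to disable .htaccess lookups entirely and keep the rules in the main config (the path here is a placeholder):

    <Directory /var/www/html>
        # Apache stops stat()ing and parsing .htaccess under this tree
        AllowOverride None
        Require all granted
    </Directory>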
What kind of intense read/write behaviour are you talking about in a video game console? It just reads the game ROM from storage and executes it; there is nothing to write back, since the game is not modified in any way while playing.
All this talk about wearing out SD cards in game consoles or Raspberry Pi devices is, in my personal opinion, partly down to people encountering poor-quality cards - counterfeits. There is an SD card in my car camcorder which must have seen thousands of full write cycles and still functions without issue, despite all the operating temperature swings it endures through the seasons.
Writes should be minimal, yeah. But reads could be intense. My car has worn out two map SD cards. One of them had a questionable chain of custody, but I went back to the almost certainly original card that came with the car, and it's starting to misbehave in the same ways, so I think it's only a matter of time. These cards are unwritable after factory initialization, so writes are definitely not a factor.
I understand that reads can technically cause read disturb, but isn't this normally handled by the controller? My intuition says that the writes caused by block rewrites should not significantly accelerate wear. I'd suspect more mundane issues such as bad solder, but would love to hear an expert take.