merpkz's comments

Nothing is preventing you from adding an IP whitelist and/or basic auth to the same configuration. That is what I do in all of my nginx configurations to be extra careful, so nothing slips through by accident.
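For reference, a minimal sketch of what that can look like inside a ___location block ( the address range, htpasswd path and upstream name are placeholders, not anything from the original setup ):

  ___location / {
        # IP whitelist: only the listed range gets through
        allow 203.0.113.0/24;
        deny  all;
        # basic auth on top of the whitelist
        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://backend.internal;
  }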


I got something similar running with nginx myself, with the purpose of getting access to my internal services from outside. The main idea here is that the internal services are not on the same machine this nginx is running on, so it passes requests on to the needed server in the internal network. It goes like this:

  server_name ~^(?<service>(?:lubelogger|wiki|kibana|zabbix|mail|grafana|git|books|zm))\.___domain\.example$;
  ___location / {
        resolver 127.0.0.1;
        include proxy.conf;
        proxy_set_header Authorization "";
        proxy_set_header Host $service.internal;
        proxy_set_header Origin http://$service.internal;
        proxy_redirect http://$proxy_host/ /;
        proxy_pass http://$service.internal;
  }
Basically any regex-matched subdomain is extracted, resolved as $service.internal and proxy-passed to it. For this to work, of course, any new service has to be registered in the internal DNS ( a sketch of what that can look like is below ). Adding whitelisted IPs and basic auth is also a good idea ( which I have, just removed from the example ).
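If the local resolver behind "resolver 127.0.0.1" happens to be dnsmasq ( an assumption on my part, any internal DNS works the same way ), registering a new service is roughly one line per name:

  # /etc/dnsmasq.conf -- hypothetical internal records for the proxy above
  address=/wiki.internal/10.0.10.21
  address=/grafana.internal/10.0.10.22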


I would guess it depends on your exact smb configuration; as I recall, there were multiple configuration parameters for transfer speed, like

> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

I never benchmarked these though.


I haven't run a Samba instance that has changed those options from their defaults in like twenty years.

  # grep socket /etc/samba/smb.conf
  #
I don't have any performance-tweaking options set... just auth, share definitions and server identity and protocol information. I learned long ago that for SOHO (and probably even medium-size-office) use, the performance-tweaking defaults for well-tested software like this are just fine.


Minor nitpick - shouldn't you first define the service and only then a timer for it? Otherwise, since you have enabled the timer but are still trying to figure out how to write the service, systemd won't have anything to run when the timer triggers. Maybe I am wrong, but that just feels like the logical order. Anyway, after years of hating on systemd I have also started to embrace it and am porting my cron jobs to systemd timers, and I must admit it's really nice: the overall overview with list-timers, the next execution timestamp, the total execution time, the ordering of services so one can run after another has completed, and of course the logging in the journal so I can filter output and keep track of everything. It's just a wonderful experience.
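For anyone porting their first cron job, a minimal service/timer pair looks roughly like this ( the backup.* names and the script path are just placeholders ):

  # /etc/systemd/system/backup.service
  [Unit]
  Description=Nightly backup job

  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/backup.sh

  # /etc/systemd/system/backup.timer
  [Unit]
  Description=Run backup.service every night at 02:00

  [Timer]
  OnCalendar=*-*-* 02:00:00
  Persistent=true

  [Install]
  WantedBy=timers.target
Then systemctl enable --now backup.timer activates it, and systemctl list-timers shows the next and last run.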

EDIT: yea, the email reporting is certainly missing, but it was hard to control it since whole STDOUT was shipped, which is not what I wanted most of the time anyways. It would be good to come up with some way to still have small overview emails sent about important jobs done, maybe a dependency service which starts when important job finished and just sends an email about that
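One way to do that ( a sketch, assuming systemd >= 249 for OnSuccess= and a working local mail command ) is to hook a small oneshot unit onto the job:

  # added to the [Unit] section of backup.service
  OnSuccess=backup-mail.service

  # /etc/systemd/system/backup-mail.service (hypothetical)
  [Service]
  Type=oneshot
  ExecStart=/bin/sh -c 'echo "backup finished OK" | mail -s "backup OK" admin@example.com'
OnFailure= works the same way for the runs you really want to hear about.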


Kubernetes admin here with ~2y experience. Since a lot of you have a misconception about what this guy is doing, I will try to explain. The author wrote a piece of code which interacts with the network gateway to get the IPv4/IPv6 network address and then updates the Kubernetes configuration accordingly, from within a container running on said cluster. That seems to be needed because the MetalLB component in use exposes Kubernetes deployments in the cluster via a predefined IPv6 address pool which is handed out by the ISP, so if that changes, the cluster configuration has to change too. This is one of the most bizarre things I have read about Kubernetes this year and probably shouldn't exist outside a home testing environment, but hey, props to the author for coming up with such an idea.
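For context, the piece of state that has to follow the ISP is a MetalLB address pool, which looks roughly like this ( the name and the documentation prefix are placeholders ):

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: isp-delegated-pool
    namespace: metallb-system
  spec:
    addresses:
      - 2001:db8:1234:abcd::/64   # the ISP-delegated prefix
When the delegated prefix changes, this is the kind of object that has to be updated.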


OP here.

Thanks. If I were a company, I would probably be in control of when my IPv6 range changes. And if my ISP is any good (I just recently switched to it), my IPv6 network should stay the same.

The network range in a home setting is always given by your ISP, most likely via DHCPv6 prefix delegation; very rarely do you, in a home setting, dish out for a permanent IPv6 network range. Granted, most decent ISPs try to persist it, since there's no good reason not to, and it's a strong recommendation from standardization bodies etc. But it's still just best effort: accidents happen, state gets lost, and suddenly you have a different network.

Sure, it would probably take me less than an hour to just change everything, but we are hackers here, so what's the fun in that? At least I gravitate towards perfecting things even beyond pure need, just because I can. At work, I have to call it a day when it gives no more significant gain; at home I am free to think "this is fine, but can I actually do it better?". If the answer is yes, and you have the time, I'd say go for it. Some people like to watch cat videos on YouTube, I prefer to tinker with getting stuff to work. Sometimes it's useful, sometimes it's just for the fun of it.

I'm on my way to improving this, by the way. I plan to create a Unifi Networking Operator that can help me not only with this, but also with configuring my Unifi Gateway and firewall rules through Kubernetes properties. It will be more logical to let my "dynamic IP" setup just change Kubernetes properties, and let the Operator handle the Unifi configuration of it.

Overkill? Hell, yes! Fun? For me, at least. Will I learn something? Yes, I will learn how to create a Kubernetes Operator!

Yeah, I'm a beginner in Kubernetes, but not in IT and sysadmining in general; I've got 30 years of experience there. For now, Kubernetes is a just-for-fun project at home, but it's used to run my day-to-day home services, which makes it even more fun to improve. We use Kubernetes where I work, but not in my area, so it's not inconceivable that my home tinkering will be of benefit at work some day.

And yes, I run a personal blog (in my Kubernetes cluster). I try to make it a bit educational, with more or less repeatable experiments for people to pick and choose from.

Some will be good, some will probably be a bad idea. But as long as there's learnings to be had, it's worth doing.


Yeah, Latvia is actually doing quite alright culturally for its small size. We have multiple NBA players, an Oscar-winning producer ( the movie Flow ) and a bronze at the hockey world championship in 2023. All that for a population a bit below 2 million is not bad; sadly, none of it will matter if we get invaded.


Does anyone have an interesting project to build with a Kinect? I have a 360 Kinect lying around, since nobody does Just Dance parties anymore. Also, was the XBone version of the Kinect much better than the 360's?


In academia it's often used as a cheap environment scanner, for accessibility studies with people with limited mobility, etc.

I worked on a few projects where the Kinect was mounted on a self-driving robot for 3D mapping of areas inaccessible to humans.

I also worked on using it in applications for people with Alzheimer's.

The issue is very often a lack of SDKs, or outdated SDKs, for the target platform. The hardware could be better, but it already gets many projects to 90%.


If you would like to venture into generative/interactive visuals, there are libraries for TouchDesigner which use the Kinect.

An example: https://www.youtube.com/watch?v=GDZoOnzLYGo


The 360 one is better for 3D scanning. The Xbox One version is better for object tracking.


Isn't this just because there was nowhere near as much 3D scanning software made for the v2?


There was something unsatisfactory to V1 users in the V2 system. Depth was jittery, or whatever it was.

V1 was based on an Israeli structured-light technology that later became Apple TrueDepth: a static dot pattern captured with a camera, whose deviations become depth. V2 was technically an early and alternative implementation of time-of-flight LIDAR based on a light-phase-delaying device, which output deviations from a set known distance as pictures, or something along those lines.

There wouldn't have been app-support issues if V2 had worked. There was something that made users go "yeah... by the way" about it.


The problem with the v2, from someone who had a boss that loved the Kinect and used it experimentally in retail environments, wasn't the new tech. The new tech was/is amazing. The level of detail you can get from the v2 dwarfs the v1 on every axis.

The problem was that it ONLY had a Windows SDK, and most of the people who did amazing work with the Kinect v1 outside of games and the Xbox were using it with Macs and Linux in tools like openFrameworks and Processing. The v1 was developed outside Microsoft, by PrimeSense, and there were SDKs that were compatible cross-platform. Tons of artists dove right in.

The Kinect v2 only offered a Windows SDK, and that's what killed it in the 'secondary market'.

Luckily, now we have libfreenect2 (https://github.com/OpenKinect/libfreenect2), but it came too late for most of us.
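For anyone picking one up today, grabbing frames via libfreenect2 is fairly compact; a rough sketch based on its Protonect example, with error handling trimmed:

  #include <libfreenect2/libfreenect2.hpp>
  #include <libfreenect2/frame_listener_impl.h>

  int main() {
    libfreenect2::Freenect2 freenect2;
    if (freenect2.enumerateDevices() == 0) return 1;   // no Kinect v2 plugged in

    libfreenect2::Freenect2Device *dev =
        freenect2.openDevice(freenect2.getDefaultDeviceSerialNumber());

    // one listener handles both the color and the depth stream
    libfreenect2::SyncMultiFrameListener listener(
        libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
    dev->setColorFrameListener(&listener);
    dev->setIrAndDepthFrameListener(&listener);
    dev->start();

    libfreenect2::FrameMap frames;
    listener.waitForNewFrame(frames, 10 * 1000);       // wait up to 10 s
    libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
    // depth->data is a 512x424 buffer of floats, in millimeters
    listener.release(frames);

    dev->stop();
    dev->close();
    return 0;
  }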


Interesting stuff, thanks for the clarification!

Did MS provide SDKs for other OSes for v1 or was it just easier for people to make an open source one?


PrimeSense made them - the guys who sold the sensor to Microsoft.


The Kinect One is better in a bunch of ways (field of view, resolution), but a big one for certain use-cases is that it can fully track 6 skeletons.

The 360 Kinect can only track two skeletons (but differentiate 6).


Source on that tracking 6 skeletons? That's cool.


The problem with the original Kinect (v1) is that good tracking software for it was never really written. Most applications that support it just use the original Microsoft SDK to do the motion tracking, and it's just not very good: the main issue is that it always assumes the tracked person is directly facing the camera, and it is very bad at dealing with occlusion. The good thing about it is that it ran in real time on a potato.

In order to get a good result, someone would need to train a good model for HPE (human pose estimation) that could use the point cloud data directly, but it seems nobody cares about depth sensors anymore; most effort is going into HPE from regular 2D video (like the MediaPipe holistic model). And given the results you can get with MediaPipe, OpenPose and the like, it's understandable that nobody is bothering to work with low-resolution point clouds anymore for 3D HPE.

The only use case I can think of for a Kinect v1 in 2025 would be robotics, if you want a low-latency, low-resolution point cloud for your robot control, but even there I think we are moving to big vision models capable of making sense of regular video feeds.


There might be some work at https://k2vr.tech regarding this, it seems.


Was that really so bad in terms of performance? Surely the .htaccess file didn't exist there most of the time, and even if it did, it would have been cached by the kernel, so each lookup by an Apache process wouldn't be hitting the disk directly to check for the file's existence on every HTTP request it processes. Or maybe I am mistaken about that.


The recommendation was to disable it because:

a) If you didn't use it (the less bad case you are considering), then why pay for the stat syscalls on every request?

b) If you did use it, Apache was reparsing/reprocessing the (at least one) .htaccess file on every request. You can see how the real impact here was significantly worse than a cached stat syscall.

Most people were using it, hence the bad rep. Also, this was at a time when it was more common to have webservers reading from NFS or another networked filesystem. Stat calls then involve the network, and you can see how even the "mild" case could wreak havoc in some setups.
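For reference, turning .htaccess processing off is a one-directive change in the vhost or directory config ( the path here is a placeholder ):

  <Directory "/var/www/example">
      # AllowOverride None stops Apache from looking for .htaccess at all,
      # so the per-request stat() and reparse overhead disappears
      AllowOverride None
      Require all granted
  </Directory>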


What kind of intense read/write nature are you talking about in a video game console? It just reads the game ROM from storage and executes it; there is nothing to write back, and the game is not modified in any way while playing. All this talk about wearing out SD cards in game consoles or Raspberry Pi devices is, in my personal opinion, partly down to people encountering poor-quality cards - counterfeits. There is an SD card in my car camcorder which must have seen thousands of full write cycles and still functions without issues, despite all the operating temperature differences it endures through the seasons.


Writes should be minimal, yeah. But reads could be intense. My car has worn out two map SD cards. One of them had a questionable chain of custody, but I went back to the almost certainly original card that came with the car, and it's starting to misbehave in the same ways, so I think it's only a matter of time. These cards are unwritable after factory initialization, so writes are definitely not a factor.


I understand that reads can technically cause read disturb, but isn't that normally handled by the controller? My intuition says that the writes caused by block rewrites should not significantly accelerate wear. I'd suspect more mundane issues such as bad solder, but I would love to hear an expert's take.


Knowing how inflated security researcher egos usually are, I wouldn't hold my breath waiting to find out the truth here.

