
I think it's called 'direct binding.' All of my lights are directly bound to my switches, so they work even without the server or Zigbee coordinator running. With some switches, you can even bind light scenes directly to the buttons. For example, you can bind the four buttons of a Philips Hue Tap Dial to different light scenes using Zigbee2MQTT.
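In Zigbee2MQTT, creating a binding boils down to one MQTT message to the bridge. A sketch (the bridge topic is from the Zigbee2MQTT docs; the friendly names are placeholders for your own devices):

    mosquitto_pub -t zigbee2mqtt/bridge/request/device/bind \
      -m '{"from": "hue_dimmer_livingroom", "to": "light_livingroom"}'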

Some switches don't allow direct binding, though. All of the Hue switches I tried support it, but some Tuya switches don't. On the lights side, every light I tried, across manufacturers, was able to bind directly.

During the first year I had to fiddle a lot with the tooling, and the uptime of HA/Zigbee2MQTT wasn't great. It was good to have direct binding as a fallback.


Do you have to choose between direct binding and controlling them through HA, or can you still control them through HA even when they are directly bound? I once paired a switch directly with a light (in Philips Hue), but then couldn't control it through the app anymore. Since switching to HA and Zigbee2MQTT, I have just used triggers with switches; I wasn't aware that direct binding is a possibility. Gonna have to look into that now :)


I configure bindings through Zigbee2MQTT. You can view the bindings in the switch configuration, where you can also update or remove them. I use Zigbee2MQTT for low-level Zigbee tasks such as pairing, binding, creating groups, setting up scenes, and configuring devices. Most of these settings are stored on the devices themselves and work without the coordinator/HA server.
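Groups go through the same bridge topics; a sketch of creating one and adding a bulb (topic names as I remember them from the Zigbee2MQTT docs, device names are placeholders):

    mosquitto_pub -t zigbee2mqtt/bridge/request/group/add \
      -m '{"friendly_name": "lights_livingroom"}'
    mosquitto_pub -t zigbee2mqtt/bridge/request/group/members/add \
      -m '{"group": "lights_livingroom", "device": "bulb_livingroom_1"}'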

The configurations you set up in Zigbee2MQTT are synced with HA and can be used there as well. I use HA for higher-level tasks such as automations, custom sensors, statistics, and more. Nothing I configure there works when the server/coordinator goes down.


They can still be controlled through HA when directly bound.


Same. I used IntelliJ, then VS Code, for the ten years before I switched to Doom. I tried a lot of other editors and IDEs, but Emacs was the only one with good Vim integration. Of all the editors I tried, it was also the only one with good plugin interoperability: in Emacs, plugins are often built on top of the interfaces of other plugins, while in VS Code plugins tend to be encapsulated, competing units that often don't play well together. I also find it much easier to customize Emacs to my needs, because the interfaces of both the system and the plugins are mostly well documented.

Emacs has a high barrier to entry, though. I think it would be difficult for a novice to get a good IDE experience out of Emacs, even with Doom.


> Meanwhile we have distros lagging behind for years to provide a new package because they can't break all the things depending on the old version.

I'm glad I left this category of problems behind me 5 years ago when I switched both my personal and my work laptops to Arch Linux/i3wm. These two machines have been running almost daily for 5 years, with almost no issues, on the latest software packages. If the hardware lasts, I will go on like this for another 3 to 5 years, then upgrade the hardware and (maybe) switch to Wayland. I don't see anything on the horizon that would make me switch away from this setup.


My impression is that Arch is not technically unusual among distributions, but is simply well polished, well documented, and very active (and has an easy way to install unvetted community packages). If this impression is correct, you still run the risk of unnoticed outdated software if the number of volunteers drops, or if a particularly critical one no longer has the time.

Which part of Arch's design prevents the issue described in the grandparent post? The issue is "distros lagging behind for years to provide a new package because they can't break all the things depending on the old version", which is solvable either with enough manpower or by sandboxing a la NixOS, where you can keep old versions around indefinitely for the things that need them. Does Arch use such sandboxing now?


> Which part of Arch's design prevents the issue described in the grandparent post?

As a former long-time Arch user: you're correct. It's "just" a distro, not unlike the biggest ones. The reason the Arch repos are large and fairly up to date is the relatively easy-to-understand PKGBUILD format and the tooling around it, which reduces the friction of package maintenance.


The impact of frictionless package building cannot be overstated. I publish Arch Linux packages for all my applications because it takes just a few minutes to write a PKGBUILD. I once tried providing a Debian package as well, but gave up after several hours of trying to get through all the bureaucracy of the tooling.
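For comparison, a bare-bones PKGBUILD for a hypothetical tool is about this long (name, URL, and build steps are made up; run makepkg -si next to it to build and install):

    # Maintainer: Your Name <you@example.com>
    pkgname=mytool
    pkgver=1.2.0
    pkgrel=1
    pkgdesc="A hypothetical command line tool"
    arch=('x86_64')
    url="https://example.com/mytool"
    license=('MIT')
    source=("$url/releases/mytool-$pkgver.tar.gz")
    sha256sums=('SKIP')  # put the real checksum here for published packages

    build() {
        cd "mytool-$pkgver"
        make
    }

    package() {
        cd "mytool-$pkgver"
        make DESTDIR="$pkgdir" install
    }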


The difference is that Arch doesn't maintain downstream forks of upstream projects. It just packages the upstream versions, which makes the dev cycle much tighter: maintainers can update more frequently and simply move to the next major version without worrying about breaking their special-sauce forks. The downside is that people on old distros are stuck with old major versions and get e.g. Python 2 by default.


Regardless of the explanations, as an Arch user, I think you have to at least acknowledge that Arch does not have this problem in practice. I am sure there are counter-examples but in almost all cases Arch packages are extremely up to date. If they are not, it is probably a package you are not even going to find on another distro.


My only complaint about Arch/i3 so far is that updating Firefox forces a restart of Firefox, and I've got FF windows scattered over a few workspaces, so I need to shuffle them back into their places.

Come to think of it I can probably prune one of the windows...

ETA: and maybe one of the workspaces...


I'm almost sure this is an excerpt straight from https://youtu.be/7Nj9ZjwOdFQ


I'm flattered you think I achieve perfect window placement.


Would it be possible to create a script to save the desktop and screen position of each Firefox window on close, and restore them to the recorded desktops and positions on open? I haven't used i3 much (or Arch at all), but I'm fairly certain I did something similar on Xubuntu some years back.


I did that on Xubuntu too, using xdotool, wmctrl, and something else I can't recall.
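The wmctrl half was roughly like this (untested sketch from memory; window ids change across restarts, so a real script would match on window class or title instead):

    # save desktop + geometry of every Firefox window: id desk x y w h
    wmctrl -l -G -x | awk '$7 ~ /[Ff]irefox/ {print $1, $2, $3, $4, $5, $6}' > ff.txt

    # restore each window to its desktop and position
    while read -r id desk x y w h; do
        wmctrl -i -r "$id" -t "$desk"          # move to desktop
        wmctrl -i -r "$id" -e "0,$x,$y,$w,$h"  # gravity,x,y,w,h
    done < ff.txt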


I'm vaguely aware of tools that'll do it for i3 but thus far I've just avoided restarting as much as I can. The call-out to James Mickens is not altogether inaccurate (:


Keep in mind that you can downgrade Firefox after the update with pacman -U and the old package from pacman's cache, then just keep using the open windows with no issue.
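Roughly (the exact file name depends on the version you were on before the update):

    ls /var/cache/pacman/pkg/ | grep '^firefox-'
    sudo pacman -U /var/cache/pacman/pkg/firefox-<old-version>-x86_64.pkg.tar.zst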


Aren't downgrades heavily discouraged in Arch?


Been running this same setup for eight or nine years now, extremely pleased with it


This is awesome, thanks for sharing! I think this should be added to the PyO3 examples list :)

https://github.com/PyO3/pyo3#examples


Python supports type annotations and static typing via mypy and co. I find statically typed Python absolutely comparable to other statically typed languages. At least I don't feel much of a difference working with it compared to Go or TypeScript.
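A trivial example of what I mean; running mypy over this file flags the bad call before the code ever runs (the function is made up for illustration):

    def total(prices: list[float], discount: float = 0.0) -> float:
        """Sum prices and apply a fractional discount."""
        return sum(prices) * (1.0 - discount)

    print(total([9.99, 4.50], discount=0.1))  # fine
    # total("9.99")  # mypy error: incompatible type "str"; expected "list[float]"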


> And docker on Linux doesn’t support all the features that docker on Mac does. Specifically kubernetes.

This is nonsense. We invested multiple man-months this year to create a local k8s dev environment for our company. Our devs use macOS and Linux. My colleague and I evaluated various solutions for running k8s locally, all of which worked fine on Linux out of the box, while setting them up on macOS was riddled with issues (mostly around performance).
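For example, with kind (which runs the cluster inside Docker containers), a throwaway local cluster is a two-liner on Linux:

    kind create cluster --name dev
    kubectl cluster-info --context kind-dev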


I am talking about out-of-the-box features. Docker for Mac has Kubernetes built in: check a box and you have a k8s cluster. On Linux you need minikube, kind, or, in my case, a custom k3s solution I built.


You can compile your Python code with Nuitka; the resulting binary has a much better startup time. I do this for a couple of command-line tools.
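The basic invocation is a one-liner (the tool name is a placeholder; on Linux the output lands as a .bin next to the source):

    # compile mytool.py plus all modules it imports into a native binary
    python -m nuitka --follow-imports mytool.py
    ./mytool.bin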


This looks interesting!

I think the approach where a typed subset of Python is compiled into a fast extension module is the way forward for Python. This would leave us with a slow but dynamic high-level variant (CPython) and a typed lower-level variant (EPython, mypyc & co.) for compiling performant extension modules, which you can easily import into your CPython code.

The most prominent such project I know of is mypyc [0], which is already used to improve the performance of mypy itself and of the black [1] code formatter. It would be interesting to see how EPython compares to mypyc.

[0] https://github.com/python/mypy/tree/master/mypyc

[1] https://github.com/psf/black/pull/1009
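To illustrate: mypyc compiles ordinary type-annotated Python, so a module like this (made-up example) can be built into a C extension with "mypyc fib.py" and imported as usual afterwards:

    # fib.py -- plain annotated Python, still runs unchanged on CPython
    def fib(n: int) -> int:
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a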


Nuitka is great! I use it to compile some of the command-line tools I've written; compiled with Nuitka, they have a much better startup time.

Nuitka also has a single-file binary target in the works.

Compiled single-file binaries for Python. Woohoo!


Single file binaries would be great for my use case! It all works fine now, but the instructions would go from "copy all this directory with everything inside, MAKE SURE EVERY FILE IS COPIED, then make sure again" to "copy the file".


> This is also very easy to achieve in Python by using Cython to selectively optimize code

There is also mypyc [0] on the horizon.

https://github.com/python/mypy/tree/master/mypyc

