Hacker News | Steltek's comments

Self-hosted Matrix with all the bridges is awesome and brings back that Pidgin/Adium life of one chat app for all of my friends. Too bad Apple has an uncanny ability to avoid consequences with iMessage.


It's wonderful that it seems to work well for you, but my experience bridging group chats with XMPP or IRC was terrible: lost messages, bridge crashes, puppet accounts getting randomly broken/duplicated with discarded messages.

Of the bridges I've run, only the Telegram bridge has been somewhat stable for me, but it also has its warts.

Might be different if you run a strictly personal server for 1:1 conversations, but from a UX perspective I'd say the bridges idea largely failed.

I don't think it's the fault of Element/Matrix; it's a difficult problem, and with limited resources they made a lot of progress and made things possible that weren't before. But it's not plug and play, at least it wasn't for me.

In general I've found it's also difficult to communicate in group chats when there are two worlds with slightly different views: missing reactions, unsupported messenger features like captions, polls, and so on.


While I generally agree, the slidge bridge for XMPP has been working quite well for me, especially for WhatsApp, but it is really new.


> slidge bridge

Didn't know about this one. Thanks, I'm looking into it!


Tarpit instead? Trickle out a dead-end response (no links) at bytes-per-second speeds until the bot times out.

https://en.wikipedia.org/wiki/Tarpit_(networking)
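For illustration, here is a minimal tarpit sketch in Python (the port, status line, and dead-end body are my own made-up placeholders, not anything from the article): it drips a link-free response out one byte per second, so an impatient crawler times out while the connection stays tied up.

```python
import socket
import time

# A link-free dead end; the body here is purely illustrative.
RESPONSE = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/html\r\n\r\n"
            b"<html><body>Nothing to see here.</body></html>")

def tarpit(conn, delay=1.0):
    """Trickle RESPONSE out one byte at a time, then close."""
    try:
        for i in range(len(RESPONSE)):
            conn.sendall(RESPONSE[i:i + 1])
            time.sleep(delay)  # bytes-per-second throughput
    except OSError:
        pass  # client gave up early, which is the point
    finally:
        conn.close()

def serve(port=8080):
    """Accept connections and tarpit each one in turn."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            tarpit(conn)
```

A real deployment would handle clients concurrently (ideally asynchronously, so thousands of stuck bots cost almost nothing), but the drip-feed loop is the whole trick.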


The original blog post for the revived Pebble was very clear about the design goals: this is not going to deliver a laundry list of features or support all possible lifestyles. It will be focused on doing a few things well, because there's a need for a modern Pebble not met by existing watches.

I have a Bangle2 and while it's super fun, I think it perfectly illustrates the point that simply having features isn't enough. I would not say my Bangle2 is the same as my OG Pebble.


As someone who only ever cared about a handful of features (HR, sleep, steps, notifications), the BangleJS is definitely the superior offering imo.

It does everything my Pebble did, it's cheaper, and it's been open source since day one rather than first requiring an acquisition and resurrection.

Obviously different strokes for different folks. Eric is great, and I wish him and the team over at Rebble (hi ishot!) all the best, but the smartwatch landscape is very different from what it was in 2014.


What's the software like? One of the biggest reasons I'm still using my Pebbles is Notification Center, which has the most control over watch notifications that I've ever seen, like being able to set regex filters and get very fine-grained control over which notifications get sent to my watch, their vibration patterns, etc.


The more interesting Apple trademark case was "iPhone", which was in active use by Cisco for their VoIP desk phones.


The politics are so bad that I think people should stay away from Lemmy period. It's like saying it's fine to create a Xitter account these days as long as you stay away from politics.


I have a Garmin Instinct 2. They definitely did not nail it. It's horrible in all respects. It's HUGE, physically painful to use, and the UI must have been written by 10 different teams who weren't talking to each other.


Compared to _Garmin_? The Pebble UI and OS will run circles around it. Intuitive, fast, and amazing.


Do you think Garmin watches are displaying the seconds late or something?

The aesthetics are different enough that a comparison doesn't seem that interesting.


I use Linux daily; the Garmin interface is painless by comparison, dohohoho.

More seriously, yeah it could be more intuitive and consistent. But I sat down, read through the manual end to end, and now I'm fine with it.


My Pebble Time Steel finally bit the dust so I turned to a Garmin Instinct. I can't stand it. The button placement is totally random and legitimately painful to use. The Garmin software focuses on fitness activities to the exclusion of everything else.

I recoil at having been tempted by the more expensive Garmin watches. What a waste of money that would have been!


You could always sell it. <hint hint>


Moving PCs to a more mobile-like environment, where apps are always sandboxed and all permissions are explicit, sounds so tempting. But then you realize that manufacturers will always cross the line: MS and Apple apps get special privileges, the bootloader will be permanently locked, and you'll eat your DRM because you have no other choice.

I prefer to curate my own apps, and if they misbehave, they're uninstalled immediately. The downside is that this philosophy feels a lot like, "I don't need a virus scanner because I'm too good to get stung by a virus". Still, it seems better than the alternative hell.


> Moving PCs to a more mobile-like environment, where apps are always sandboxed and all permissions are explicit, sounds so tempting. But then you realize that manufacturers will always cross the line: MS and Apple apps get special privileges, the bootloader will be permanently locked, and you'll eat your DRM because you have no other choice.

It's always worth emphasizing imo that technological prophylactics like mobile-style sandboxing are redundant on operating system distributions whose social processes of development take responsibility for things like this (auto-updating, startup behavior, default app associations, file type associations, etc.) away from apps and centralize it under the control of the user via the OS' built-in config tools.

To be more explicit: free software distros generally lack software that behaves this way, and often actively patch such behavior out of ill-behaved cross-platform applications (which sometimes omit it themselves in builds targeting free operating systems). The problem is, as you note in your second paragraph, just as much that we're getting our software from untrustworthy distributors as it is that our desktop operating systems do too little to protect us from untrustworthy software. In some cases, the problem is rather that our software distributors have too little capacity to intervene, both for technical reasons (e.g., they don't have the damn source code) and social/economic ones (e.g., the model is to accept as much software as possible rather than to include a piece of software only after it is clear that there is capacity to scrutinize it, sanitize it, and maintain a downstream package of it).

You can avoid 99.999% of this kind of crap just by running a free software base system and avoiding proprietary software. Better app sandboxing is great! We should demand it and pursue it in whatever ways we can. But installing software directly from unscrupulous sources and counting on sandboxing to prevent abuse is also an intervention at the least effective stage. Relying on a safer social process of distributing software goes way further, as does keeping shitty software off our systems in the first place! These should be the approaches of first resort.


The problem is, reliably catching programs misbehaving is non-trivial even for the technically capable, let alone everybody else, which easily results in a lot of damage having been done by the time they're caught, if they're caught at all.

I don’t think there’s a practical way forward for desktop OSes that don’t embrace sandboxing and tight permission controls, at least for the masses. Even for myself, I’d be more comfortable using an OS like that — if trust of the corporations making the OSes becomes an issue, the solution that’s apparent to me is to run a Linux distribution with deeply integrated permissions and sandboxing, not to run without either.


I think that the way forward is to design an entirely new operating system, and I have some ideas about how to do it (but not a name, yet). It will be FOSS, and the user will be able to easily reprogram everything in the entire system. It will use capability-based security with proxy capabilities; no I/O will be possible (including reading the current date/time) without using a capability. Better interaction of data between the command line and GUI will also be possible.

Linux and other systems tend to be overly complicated; they need to add extra functions to the kernel because of problems with the initial design. I think it can be done better, in a simpler way.

Webapps are also too complicated, and don't really solve the permissions issue properly (the set of permissions doesn't, and cannot, include everything; overriding by the user is difficult and inefficient; etc.), and they aren't very well designed either.

There are sandboxing systems available on Linux, but they have their own problems; e.g. many have no way to specify the use of popen with user-specified programs, or what needs to be accessed according to user configuration files, command-line arguments, or user-defined plugins, or they cannot properly support character encoding in file names (due to issues with D-Bus). (Using D-Bus for this is a mistake, I think; the non-D-Bus parts don't handle it very well either.) There is also the issue that it is unknown what permissions will be needed before the program is installed, especially when using libraries that can be set up to use multiple kinds of I/O.


Are you aware of genode ( https://genode.org/ )? It's a full-blown capabilities OS that is FOSS and already exists and AFAIK basically works today.


Yes. My ideas have some similarities with Genode but also many significant differences. For example:

- The design is separate from the implementation. The kernel of the system will be specified and simple enough that multiple implementations are possible; the other parts of the system can also be specified like that and be made, and the implementations from different sources can be used together. (Components can also be replaced.)

- The ABI will be defined for each instruction set, and will be the same for any implementation that uses that instruction set; the system calls will be the same, etc.

- The design is not intended for use with C++ (although C++ can be used). It is intended for use with C, and with its own programming language called "Command, Automation, and Query Language". Assembly language, Ada, etc. are also possible. The core portable design supports C, but the abstract system interfaces are defined in a way that is not specific to any programming language.

- All I/O (including reading the current date/time) must be done using capabilities. A process that has no capabilities is automatically terminated, since it cannot perform any I/O (now or in the future, since it can only receive new capabilities via a message from an existing capability). (An uninterruptible wait on an empty set of capabilities also terminates a process, and is the usual way to do so.)

- It does not use or resemble POSIX. (However, a POSIX compatibility library can be made, in order to compile and run POSIX-based programs.)

- It does not use XML, JSON, etc. It has its own "common data format" (which is a binary format), used for most files, and for command-line interface, and some GUI components, etc.

- The character set is Extended TRON Code. The common data format, keyboard manager, etc all support the Extended TRON character set; specialized 8-bit sets are also possible for specialized uses. (This is not a feature at the kernel level though; the kernel level doesn't care about character encoding at all.)

- Objects don't have "methods" at the kernel level, and messages do not have types. A message consists of a sequence of bytes and/or capabilities, and has a target capability to send the message to.

- Similar to the "actor model", programs can create new objects, send their addresses in messages through other capabilities, and can wait for capabilities and receive messages from them. (A "capability" is effectively an opaque address of an object, similar to a file descriptor, but with fewer operations available than POSIX file descriptors allow.) It works somewhat like the socketpair function in POSIX to create new objects, with SCM_RIGHTS to send access to other objects.

- A new process will receive an "initial message", which contains bytes and/or capabilities; it should include at least one capability, since otherwise the process cannot perform any I/O.

- There is no "component tree".

- Emulation is possible (this can be done by programs separate from the kernel). In this way, programs designed for this operating system but for x86 computers can also run on RISC-V computers and vice versa, and other combinations (including supporting instructions that are only available in some versions of instruction sets; e.g. programs using BMI2 instructions work even on a computer that doesn't support those instructions). Of course, this will make the program less efficient, so native code is preferable, although emulation makes it possible to run programs that aren't native.

- Due to emulation, network transparency, etc, a common set of conventions for message formats will be made so that they can use the same endianness, integer size, etc on all computer types. This will allow programs on different types of computers (or on the same computer but emulated) to communicate with each other.

- A process can wait for one or more capabilities, as well as send/receive messages through them. You can wait for any objects that you have access to.

- The file system does not use directory structures, file names, etc. It uses a hypertext file system. A file can have multiple streams (identified by 32-bit numbers), and each stream can contain bytes as well as links to other files.

- Transactions/locks that involve multiple objects at once should be possible. In this way, a process reading one or more files can avoid interference from writers.

- Better interaction of objects between the command line and the GUI than in most existing systems.

- Proxy capabilities (which you can implement in C or other programming languages, including the interpreted "Command, Automation, and Query Language") can be defined. This is useful for many purposes, including network transparency, fault simulation, etc. If a program requires permission to access something, you can program the proxy to modify the data being given, to log accesses, to make the capability revocable, etc. (For example, if a program expects audio input, the user can provide a capability for a microphone, for an existing audio file, etc.)

- There are "window indicators", which can be used for e.g. audio volume, network, permissions, data transfer between programs, etc.

- The default user interface is not designed to use fancy graphics (a visual style like Microsoft Windows 1.0, or like X Athena widgets, is good enough).

- USB is no good. (This does not mean that you cannot add drivers to support USB (and other hardware), but the system is not designed to depend on USB, so avoiding USB is possible if the computer hardware supports it, without any loss of software functionality.)

- System resources are not the same as in Sculpt; they are set up differently, because I think that many things would be better done differently.

- As much as possible, everything in the system is operable by keyboard. A mouse is also useful for many things, but the keyboard is usable for all functions; the mouse is optional (but recommended).

- There are many other significant differences, too.
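The socketpair/SCM_RIGHTS analogy mentioned in the list above can be sketched concretely. This is my own toy model, not the poster's design (who intends C, not Python; names like make_object are made up; requires Python 3.9+ for socket.send_fds/recv_fds): an "object" is a connected AF_UNIX pair, each end acts as a capability, and capabilities travel between holders as file descriptors via SCM_RIGHTS.

```python
import socket

def make_object():
    """Create a new object; returns two capabilities referring to it."""
    return socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def send_message(channel, payload, cap):
    """Send bytes plus a capability through an existing capability.
    SCM_RIGHTS installs a duplicate of cap's fd at the receiver."""
    socket.send_fds(channel, [payload], [cap.fileno()])

def recv_message(channel, bufsize=1024):
    """Receive bytes and any attached capabilities."""
    data, fds, _, _ = socket.recv_fds(channel, bufsize, maxfds=8)
    return data, [socket.socket(fileno=fd) for fd in fds]
```

This also illustrates the termination rule above: a process holding no capabilities can never regain I/O, because new capabilities only ever arrive inside messages on capabilities it already holds.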


> - Due to emulation, network transparency, etc, a common set of conventions for message formats will be made so that they can use the same endianness, integer size, etc on all computer types. This will allow programs on different types of computers (or on the same computer but emulated) to communicate with each other.

If you're going there, you could consider just going to wasm as the binary format on all architectures.

> - There are "window indicators", which can be used for e.g. audio volume, network, permissions, data transfer between programs, etc.

Kind of like qubes? More so, obviously, but it reminds me of that.

> - USB is no good.

What? USB is extremely high utility; just this makes me think you'll never get traction. By all means lock down what can talk to devices, do something like https://usbguard.github.io/ or whatever, but not supporting USB is going to outweigh almost any benefits you might offer to most users.

(Also on the note of things that will impede uptake, throwing out POSIX and a conventional filesystem are understandable but that's going to make it a lot harder to get software and users.)


> If you're going there, you could consider just going to wasm as the binary format on all architectures.

There are several reasons why I do not want to use wasm as the binary format on all architectures, although the possibility of emulation means that it is nevertheless possible to add such a thing if you wanted it.

> Kind of like qubes?

Similar in some ways.

> What? USB is extremely high utility; just this makes me think you'll never get traction. By all means lock down what can talk to devices

I had clarified my message, since "USB is no good" does not mean that it cannot be used by adding suitable device drivers. Rather, the rest of the system does not use or care about USB; it cares about "keyboard", "mouse", etc., whether they are provided by PS/2, USB, IMIDI, or something else. However, USB has problems with the security of such devices, especially if the hardware cannot identify which physical port they are connected to, which makes things more complicated. Identifying devices by the physical ports they are connected to is much superior to USB's approach, for security, for user device selection, and for other purposes; so, if that is not available, it must be emulated.

For distributions that do have a USB driver, something like USBGuard could be used to configure it, perhaps. However, USBGuard seems to only allow or disallow a device, not to specify how it is to be accessible to the rest of the system (although that will be system-dependent anyway). (For example, if a device is connected to physical port 1, and a program has permission to access physical port 1, then it accesses whatever device is connected there, translated to the protocol expected by the virtual port type associated with that physical port.)

Even so, the system will have to support non-USB devices just as easily (and to prefer non-USB devices).

> Also on the note of things that will impede uptake, throwing out POSIX and a conventional filesystem are understandable but that's going to make it a lot harder to get software and users.

As I mentioned, a POSIX compatibility library in C would be possible, and this can also be used to emulate POSIX-like file systems (e.g. by storing a key/value list in a file, with file names as the keys and links to files as the values). Emulation of DOS, Uxn/Varvara, NES/Famicom, etc. is also possible, of course.

However, making it new does make it possible to design better software specifically for this system. Since the C programming language is still usable, porting existing software (if FOSS) should be possible, too.
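The key/value directory-emulation idea above can be sketched in a few lines. This is my own illustration (all class and method names are invented, and Python stands in for the intended C): on a file system with no names, only numbered files and links, a POSIX-style directory is just a file whose content is a name-to-link table.

```python
class HyperFS:
    """Toy nameless file store: files are numbered and hold
    bytes plus links to other files."""
    def __init__(self):
        self.files = {}   # file id -> {"data": bytes, "links": dict}
        self.next_id = 0

    def create(self, data=b""):
        fid = self.next_id
        self.next_id += 1
        self.files[fid] = {"data": data, "links": {}}
        return fid

class PosixView:
    """Emulate directories as name -> link tables stored in files."""
    def __init__(self, fs):
        self.fs = fs
        self.root = fs.create()

    def _dir(self, fid):
        return self.fs.files[fid]["links"]

    def open_path(self, path):
        """Resolve /a/b/c by walking name->link tables from the root."""
        fid = self.root
        for name in path.strip("/").split("/"):
            fid = self._dir(fid)[name]
        return fid

    def create_path(self, path, data=b""):
        """Create intermediate directories, then the leaf file."""
        parts = path.strip("/").split("/")
        fid = self.root
        for name in parts[:-1]:
            fid = self._dir(fid).setdefault(name, self.fs.create())
        new = self.fs.create(data)
        self._dir(fid)[parts[-1]] = new
        return new
```

The compatibility layer sees paths; the underlying store only ever sees numbered files and links, as in the hypertext file system described earlier.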


I guess I wasn't thinking of my primary line of defense: webapps! Naturally sandboxed and least-privileged. And many of them are locally self-hosted in containers, too.

Native apps for me tend to fall into a few narrow categories:

* Professional software engineering tools

* Videogames (on a Steam Deck these days)

* CAD for 3D printing (FreeCAD)

* A/V editing with Audacity, Gimp, etc


You may be interested in "immutable" distros like OpenSUSE's Aeon, Fedora's Silverblue or the kind-of-Debian Vanilla OS. If you go and try Vanilla, by all means try the beta.


I feel the same way about e-bikes: expensive, proprietary parts and form factors everywhere. Oh, your battery is worn out? You need one that's custom molded to your downtube? That's too bad.

Thankfully they're easier to DIY than an EV car.


I think that at least some of this comes about because it's still relatively early days for the form factor. As the industry matures and becomes more cutthroat, everything will become more commodified and therefore standardised.

Look at some of the cars of (say) the late 19th century, where not even the steering wheel was standard. So while e-bikes are probably not quite at that early a stage right now, they've not advanced terribly far from the plain vanilla bicycle yet.


There are thousands of e-bike manufacturers. Many use the so-called "dolphin" battery pack which is fairly standard and always removable. The dolphin doesn't look as sleek as an in-frame battery but it's replaceable and it will usually provide longer range.

https://bafangusadirect.com/products/52v-11-6-ah-dolphin-ebi...


I've been expecting an e-bike that could take the tool-ecosystem batteries: DeWalt 60v, Eco 58v, Milwaukee 18v. It would probably need to dock several of them, with the exception of the Eco.


There's the Makita BBY 18V foldable bike.


The battery case might be molded plastic but inside it’s probably just a wad of 18650s that can be replaced.


Unless you find out it's potted, the BMS needs to be reprogrammed, and there's a custom mesh or holder that doesn't work with some standard cells because of tolerances...


Can confirm.

